1. Field of the Invention
This disclosure relates to methods and systems for displaying an input image on a display device using vector error diffusion.
2. Description of the Related Art
Electromechanical systems (EMS) include devices having electrical and mechanical elements, actuators, transducers, sensors, optical components such as mirrors and optical films, and electronics. EMS devices or elements can be manufactured at a variety of scales including, but not limited to, microscales and nanoscales. For example, microelectromechanical systems (MEMS) devices can include structures having sizes ranging from about a micron to hundreds of microns or more. Nanoelectromechanical systems (NEMS) devices can include structures having sizes smaller than a micron including, for example, sizes smaller than several hundred nanometers. Electromechanical elements may be created using deposition, etching, lithography, and/or other micromachining processes that etch away parts of substrates and/or deposited material layers, or that add layers to form electrical and electromechanical devices.
One type of EMS device is called an interferometric modulator (IMOD). The term IMOD or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using the principles of optical interference. In some implementations, an IMOD display element may include a pair of conductive plates, one or both of which may be transparent and/or reflective, wholly or in part, and capable of relative motion upon application of an appropriate electrical signal. For example, one plate may include a stationary layer deposited over, on or supported by a substrate and the other plate may include a reflective membrane separated from the stationary layer by an air gap. The position of one plate in relation to another can change the optical interference of light incident on the IMOD display element. IMOD-based display devices have a wide range of applications, and are anticipated to be used in improving existing products and creating new products, especially those with display capabilities.
Some display devices, such as EMS based display devices, can produce an input color by utilizing more than three primary colors. Each of the primary colors can have reflectance or transmittance characteristics that are independent of each other. Such devices can be referred to as multi-primary display devices. In multi-primary display devices there may be more than one combination of the multiple primary colors to produce the same color having input color values, such as red (R), green (G), and blue (B) values.
Vector error diffusion is often used in color image dithering to preserve the quality of the original color image. A vector error diffusion process may use a dithering algorithm that propagates the residual vector quantization error of a pixel to its neighboring pixels according to a predetermined weight (coefficient) for each propagation. Due to the heavy computations needed to determine the vector quantization error of a pixel, the large amount of data for high resolution displays, and fast input frame rates, current systems for vector error diffusion often use a dual-port or multi-port random access memory (RAM) and an extra internal phase-locked loop (PLL), and consume substantial power.
The technique of this disclosure may be generally related to using a pipeline to implement vector error diffusion for display devices.
In one implementation, a method of diffusing vector error comprises, in a first time interval, determining a first pixel accumulated vector error by loading a plurality of previously stored diffused pixel quantization vector errors, determining a first pixel pre-quantization vector by adding the first pixel accumulated vector error to a first pixel input vector, and determining a plurality of first pixel quantization differences between the first pixel pre-quantization vector and a plurality of reference vectors. The method further comprises, in a second time interval, determining a plurality of first pixel quantization distances based on the first pixel quantization differences, determining a second pixel accumulated vector error by loading another plurality of previously stored diffused pixel quantization vector errors, determining a second pixel pre-quantization vector by adding the second pixel accumulated vector error to a second pixel input vector, and determining a plurality of second pixel quantization differences between the second pixel pre-quantization vector and the plurality of reference vectors. The method further comprises, in a third time interval, determining a first pixel quantization vector error based on one of the plurality of first pixel quantization distances, and determining a plurality of second pixel quantization distances based on the second pixel quantization differences. The method further comprises, in a fourth time interval, applying an error diffusion filter to one or more pixel quantization vector errors, including the first pixel quantization vector error, to generate and store a diffused first pixel quantization vector error, and determining a second pixel quantization vector error based on one of the plurality of second pixel quantization distances.
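For illustration only, the staggered four-stage timing described above can be summarized with a short Python sketch; the stage names and the two-pixel example are assumptions for exposition, not the disclosed hardware:

    # Hypothetical sketch: pixel p enters stage 1 in time interval p, so each
    # stage holds at most one pixel per interval and stages overlap across pixels.
    STAGES = ["accumulate/pre-quantize/differences",  # stage 1
              "quantization distances",               # stage 2
              "min-select/quantization error",        # stage 3
              "diffuse and store"]                    # stage 4

    def schedule(num_pixels, num_intervals):
        """Return {interval: [(pixel, stage name), ...]} for the 4-stage pipeline."""
        table = {}
        for t in range(num_intervals):
            table[t] = [(t - s, STAGES[s]) for s in range(len(STAGES))
                        if 0 <= t - s < num_pixels]
        return table

    for t, work in schedule(num_pixels=2, num_intervals=5).items():
        print("interval", t + 1, ":", work)

Running the sketch shows, for example, that the fourth interval holds the diffuse-and-store work of the first pixel alongside the quantization-error work of the second pixel, matching the schedule recited above.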
In another implementation, a method of diffusing vector error by processing error data of each of a plurality of pixels comprises determining an accumulated vector error based on a plurality of stored diffused quantization vector errors, determining a pre-quantization vector by adding the accumulated vector error to an input vector, determining a plurality of quantization differences based on the pre-quantization vector and a plurality of reference vectors, determining a plurality of quantization distances based on the plurality of quantization differences, determining a quantization vector error based on one of the plurality of quantization distances, applying an error diffusion filter at least to the quantization vector error to generate at least another diffused quantization vector error, and storing the another diffused quantization vector error, the processing of the error data of the plurality of pixels being in a virtual pipeline with a plurality of stages.
In another implementation, an apparatus for diffusing vector error comprises a processor configured to process error data of each of a plurality of pixels by determining an accumulated vector error based on a plurality of stored diffused quantization vector errors, determining a pre-quantization vector by adding the accumulated vector error to an input vector, determining a plurality of quantization differences based on the pre-quantization vector and a plurality of reference vectors, determining a plurality of quantization distances based on the plurality of quantization differences, determining a quantization vector error based on one of the plurality of quantization distances, and applying an error diffusion filter at least to the quantization vector error to generate at least another diffused quantization vector error, the processor further configured to process the error data of the plurality of pixels in a virtual pipeline with a plurality of stages; and a memory configured to save the another diffused quantization vector error for at least one of the plurality of pixels.
In another implementation, an apparatus for diffusing vector error by processing error data of each of a plurality of pixels comprises a means for determining an accumulated vector error based on a plurality of stored diffused quantization vector errors, a means for determining a pre-quantization vector by adding the accumulated vector error to an input vector, a means for determining a plurality of quantization differences based on the pre-quantization vector and a plurality of reference vectors, a means for determining a plurality of quantization distances based on the plurality of quantization differences, a means for determining a quantization vector error based on one of the plurality of quantization distances, a means for applying an error diffusion filter at least to the quantization vector error to generate at least another diffused quantization vector error, and a means for storing the another diffused quantization vector error, the apparatus further configured to process the error data of the plurality of pixels in a virtual pipeline with a plurality of stages.
In another implementation, a system for diffusing vector error by processing error data of each of a plurality of pixels comprises a processor configured to determine an accumulated vector error based on a plurality of stored diffused quantization vector errors, determine a pre-quantization vector by adding the accumulated vector error to an input vector, determine a plurality of quantization differences based on the pre-quantization vector and a plurality of reference vectors, determine a plurality of quantization distances based on the plurality of quantization differences, determine a quantization vector error based on one of the plurality of quantization distances, apply an error diffusion filter at least to the quantization vector error to generate at least another diffused quantization vector error, and store the another diffused quantization vector error, the processor further configured to process the error data of the plurality of pixels in a virtual pipeline with a plurality of stages.
In another implementation, a non-transitory computer-readable medium storing instructions that, when executed, cause at least one physical computer processor to perform a method of diffusing vector error by processing error data of each of a plurality of pixels, the method comprising determining an accumulated vector error based on a plurality of stored diffused quantization vector errors, determining a pre-quantization vector by adding the accumulated vector error to an input vector, determining a plurality of quantization differences based on the pre-quantization vector and a plurality of reference vectors, determining a plurality of quantization distances based on the plurality of quantization differences, determining a quantization vector error based on one of the plurality of quantization distances, applying an error diffusion filter at least to the quantization vector error to generate at least another diffused quantization vector error, and storing the another diffused quantization vector error, the processing of the error data of the plurality of pixels being in a virtual pipeline with a plurality of stages.
The disclosed aspects will hereinafter be described in conjunction with the appended drawings and appendices, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements.
Like reference numbers and designations in the various drawings indicate like elements.
The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein can be applied in a multitude of different ways. The described implementations may be implemented in any device, apparatus, or system that can be configured to display an image, whether in motion (such as video) or stationary (such as still images), and whether textual, graphical, or pictorial. More particularly, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, Bluetooth® devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, global positioning system (GPS) receivers/navigators, cameras, digital media players (such as MP3 players), camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (e.g., e-readers), computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as non-EMS applications), aesthetic structures (such as display of images on a piece of jewelry or clothing), and a variety of EMS devices. The teachings herein also can be used in non-display applications such as, but not limited to: electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes, and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.
The systems and methods described herein may be used to optimize vector error diffusion for displaying color images on a display device. Vector error diffusion passes a residual vector error from one pixel on to its neighboring pixels with varying weights. In turn, the accumulated vector error for one pixel is the sum of the weighted residual vector errors received from its neighboring pixels. The direction and weight of error diffusion can be defined by a vector error diffusion filter. Using a vector error diffusion filter with a limited number of taps (that is, a limited number of neighboring pixels that receive weighted residual errors from a pixel) allows an a priori determination of which pixel vector is dependent on which pixel vector error at what times. Based on such a dependency determination, the vector error diffusion calculation for multiple pixels can be scheduled to optimize computation time while preserving the dependency. The scheduled vector error diffusion calculation can further be implemented in a virtual pipeline with multiple stages to balance computation load and utilize hardware resources efficiently.
Systems and methods described herein can be used for rendering static images as well as video (e.g., video with fast moving objects). In various implementations, the display device can include an output buffer to store the indices for the primary colors and/or the last input image. A look-up table (LUT) can be used to store a correspondence between the display color and a set of primary colors. In various implementations, the output buffer can be configured to store the last input image and display the last input image when the display device is operated in the always-on mode.
Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential non-limiting advantages. It is possible to increase hardware speed and achieve a high performance gain by optimizing the vector error diffusion calculation. It is possible to adapt the vector error diffusion process rate to a high input pixel clock frequency to support the requirements of a high resolution display with a fast input frame rate in a display driver design. It is also possible to reduce the number of internal PLL circuits to reduce power consumption, and to use a single-port RAM as opposed to a dual-port RAM, for example, to reduce the cost of hardware. Furthermore, the flexible architecture disclosed herein is easy to modify according to different design requirements and tradeoffs. For instance, the architecture disclosed herein is scalable.
An example of a suitable EMS or MEMS device or apparatus, to which the described implementations may apply, is a reflective display device. Reflective display devices can incorporate interferometric modulator (IMOD) display elements that can be implemented to selectively absorb and/or reflect light incident thereon using principles of optical interference. IMOD display elements can include a partial optical absorber, a reflector that is movable with respect to the absorber, and an optical resonant cavity defined between the absorber and the reflector. In some implementations, the reflector can be moved to two or more different positions, which can change the size of the optical resonant cavity and thereby affect the reflectance of the IMOD. The reflectance spectra of IMOD display elements can create fairly broad spectral bands that can be shifted across the visible wavelengths to generate different colors. The position of the spectral band can be adjusted by changing the thickness of the optical resonant cavity. One way of changing the optical resonant cavity is by changing the position of the reflector with respect to the absorber.
The IMOD display device can include an array of IMOD display elements which may be arranged in rows and columns. Each display element in the array can include at least a pair of reflective and semi-reflective layers, such as a movable reflective layer (e.g., a movable layer, also referred to as a mechanical layer) and a fixed partially reflective layer (e.g., a stationary layer), positioned at a variable and controllable distance from each other to form an air gap (also referred to as an optical gap, cavity or optical resonant cavity). The movable reflective layer may be moved between at least two positions. For example, in a first position, e.g., a relaxed position, the movable reflective layer can be positioned at a distance from the fixed partially reflective layer. In a second position, e.g., an actuated position, the movable reflective layer can be positioned more closely to the partially reflective layer. Incident light that reflects from the two layers can interfere constructively and/or destructively depending on the position of the movable reflective layer and the wavelength(s) of the incident light, producing either an overall reflective or non-reflective state for each display element. In some implementations, the display element may be in a reflective state when unactuated, reflecting light within the visible spectrum, and may be in a dark state when actuated, absorbing and/or destructively interfering light within the visible range. In some other implementations, however, an IMOD display element may be in a dark state when unactuated, and in a reflective state when actuated. In some implementations, the introduction of an applied voltage can drive the display elements to change states. In some other implementations, an applied charge can drive the display elements to change states.
The depicted portion of the array in
In
The optical stack 16 can include a single layer or several layers. The layer(s) can include one or more of an electrode layer, a partially reflective and partially transmissive layer, and a transparent dielectric layer. In some implementations, the optical stack 16 is electrically conductive, partially transparent and partially reflective, and may be fabricated, for example, by depositing one or more of the above layers onto a transparent substrate 20. The electrode layer can be formed from a variety of materials, such as various metals, for example indium tin oxide (ITO). The partially reflective layer can be formed from a variety of materials that are partially reflective, such as various metals (e.g., chromium and/or molybdenum), semiconductors, and dielectrics. The partially reflective layer can be formed of one or more layers of materials, and each of the layers can be formed of a single material or a combination of materials. In some implementations, certain portions of the optical stack 16 can include a single semi-transparent thickness of metal or semiconductor which serves as both a partial optical absorber and electrical conductor, while different, electrically more conductive layers or portions (e.g., of the optical stack 16 or of other structures of the display element) can serve to bus signals between IMOD display elements. The optical stack 16 also can include one or more insulating or dielectric layers covering one or more conductive layers or an electrically conductive/partially absorptive layer.
In some implementations, at least some of the layer(s) of the optical stack 16 can be patterned into parallel strips, and may form row electrodes in a display device as described further below. As will be understood by one having ordinary skill in the art, the term “patterned” is used herein to refer to masking as well as etching processes. In some implementations, a highly conductive and reflective material, such as aluminum (Al), may be used for the movable reflective layer 14, and these strips may form column electrodes in a display device. The movable reflective layer 14 may be formed as a series of parallel strips of a deposited metal layer or layers (orthogonal to the row electrodes of the optical stack 16) to form columns deposited on top of supports, such as the illustrated posts 18, and an intervening sacrificial material located between the posts 18. When the sacrificial material is etched away, a defined gap 19, or optical cavity, can be formed between the movable reflective layer 14 and the optical stack 16. In some implementations, the spacing between posts 18 may be approximately 1-1000 μm, while the gap 19 may be approximately less than 10,000 Angstroms (Å).
In some implementations, each IMOD display element, whether in the actuated or relaxed state, can be considered as a capacitor formed by the fixed and moving reflective layers. When no voltage is applied, the movable reflective layer 14 remains in a mechanically relaxed state, as illustrated by the display element 12 on the left in
The processor 21 can be configured to communicate with an array driver 22. The array driver 22 can include a row driver circuit 24 and a column driver circuit 26 that provide signals to, for example a display array or panel 30. The cross section of the IMOD display device illustrated in
In some implementations, a frame of an image may be created by applying data signals in the form of “segment” voltages along the set of column electrodes, in accordance with the desired change (if any) to the state of the display elements in a given row. Each row of the array can be addressed in turn, such that the frame is written one row at a time. To write the desired data to the display elements in a first row, segment voltages corresponding to the desired state of the display elements in the first row can be applied on the column electrodes, and a first row pulse in the form of a specific “common” voltage or signal can be applied to the first row electrode. The set of segment voltages can then be changed to correspond to the desired change (if any) to the state of the display elements in the second row, and a second common voltage can be applied to the second row electrode. In some implementations, the display elements in the first row are unaffected by the change in the segment voltages applied along the column electrodes, and remain in the state they were set to during the first common voltage row pulse. This process may be repeated for the entire series of rows, or alternatively, columns, in a sequential fashion to produce the image frame. The frames can be refreshed and/or updated with new image data by continually repeating this process at some desired number of frames per second.
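As a purely illustrative sketch of the row-at-a-time write sequence just described, the driving loop can be expressed as follows; the function names and data layout are assumptions, and the electrical details of the segment and common signals are abstracted away:

    # Hypothetical sketch: write one frame row by row. For each row, segment
    # voltages are set on the columns, then a common-voltage pulse latches
    # that row's display elements without disturbing previously written rows.
    def write_frame(frame, set_segment_voltages, pulse_common_line):
        # frame: list of rows, each row a list of per-column segment values.
        for row_index, row_data in enumerate(frame):
            set_segment_voltages(row_data)   # desired states for this row
            pulse_common_line(row_index)     # common pulse latches this row only

    # Demo with print stand-ins for the driver callbacks:
    write_frame([[1, 0], [0, 1]],
                set_segment_voltages=lambda row: print("segments:", row),
                pulse_common_line=lambda r: print("common pulse on row", r))

Repeating such a pass at the desired number of frames per second corresponds to refreshing the frame as described above.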
The combination of segment and common signals applied across each display element (that is, the potential difference across each display element or pixel) determines the resulting state of each display element.
As illustrated in
When a hold voltage is applied on a common line, such as a high hold voltage VCHOLD
When an addressing, or actuation, voltage is applied on a common line, such as a high addressing voltage VCADD
In some implementations, hold voltages, address voltages, and segment voltages may be used which produce the same polarity potential difference across the modulators. In some other implementations, signals can be used which alternate the polarity of the potential difference of the modulators from time to time. Alternation of the polarity across the modulators (that is, alternation of the polarity of write procedures) may reduce or inhibit charge accumulation that could occur after repeated write operations of a single polarity.
In
The process 80 continues at block 84 with the formation of a sacrificial layer 25 over the optical stack 16. Because the sacrificial layer 25 is later removed (see block 90) to form the cavity 19, the sacrificial layer 25 is not shown in the resulting IMOD display elements.
The process 80 continues at block 86 with the formation of a support structure such as a support post 18. The formation of the support post 18 may include patterning the sacrificial layer 25 to form a support structure aperture, then depositing a material (such as a polymer or an inorganic material, like silicon oxide) into the aperture to form the support post 18, using a deposition method such as PVD, PECVD, thermal CVD, or spin-coating. In some implementations, the support structure aperture formed in the sacrificial layer can extend through both the sacrificial layer 25 and the optical stack 16 to the underlying substrate 20, so that the lower end of the support post 18 contacts the substrate 20. Alternatively, as depicted in
The process 80 continues at block 88 with the formation of a movable reflective layer or membrane such as the movable reflective layer 14 illustrated in
The process 80 continues at block 90 with the formation of a cavity 19. The cavity 19 may be formed by exposing the sacrificial material 25 (deposited at block 84) to an etchant. For example, an etchable sacrificial material such as Mo or amorphous Si may be removed by dry chemical etching by exposing the sacrificial layer 25 to a gaseous or vaporous etchant, such as vapors derived from solid XeF2 for a period of time that is effective to remove the desired amount of material. The sacrificial material is typically selectively removed relative to the structures surrounding the cavity 19. Other etching methods, such as wet etching and/or plasma etching, also may be used. Since the sacrificial layer 25 is removed during block 90, the movable reflective layer 14 is typically movable after this stage. After removal of the sacrificial material 25, the resulting fully or partially fabricated IMOD display element may be referred to herein as a “released” IMOD.
In some implementations, the packaging of an EMS component or device, such as an IMOD-based display, can include a backplate (alternatively referred to as a backplane, back glass or recessed glass) which can be configured to protect the EMS components from damage (such as from mechanical interference or potentially damaging substances). The backplate also can provide structural support for a wide range of components, including but not limited to driver circuitry, processors, memory, interconnect arrays, vapor barriers, product housing, and the like. In some implementations, the use of a backplate can facilitate integration of components and thereby reduce the volume, weight, and/or manufacturing costs of a portable electronic device.
The backplate 92 can be essentially planar or can have at least one contoured surface (e.g., the backplate 92 can be formed with recesses and/or protrusions). The backplate 92 may be made of any suitable material, whether transparent or opaque, conductive or insulating. Suitable materials for the backplate 92 include, but are not limited to, glass, plastic, ceramics, polymers, laminates, metals, metal foils, Kovar and plated Kovar.
As shown in
The backplate components 94a and/or 94b can include one or more active or passive electrical components, such as transistors, capacitors, inductors, resistors, diodes, switches, and/or integrated circuits (ICs) such as a packaged, standard or discrete IC. Other examples of backplate components that can be used in various implementations include antennas, batteries, and sensors such as electrical, touch, optical, or chemical sensors, or thin-film deposited devices.
In some implementations, the backplate components 94a and/or 94b can be in electrical communication with portions of the EMS array 36. Conductive structures such as traces, bumps, posts, or vias may be formed on one or both of the backplate 92 or the substrate 20 and may contact one another or other conductive components to form electrical connections between the EMS array 36 and the backplate components 94a and/or 94b. For example,
The backplate components 94a and 94b can include one or more desiccants which act to absorb any moisture that may enter the EMS package 91. In some implementations, a desiccant (or other moisture absorbing materials, such as a getter) may be provided separately from any other backplate components, for example as a sheet that is mounted to the backplate 92 (or in a recess formed therein) with adhesive. Alternatively, the desiccant may be integrated into the backplate 92. In some other implementations, the desiccant may be applied directly or indirectly over other backplate components, for example by spray-coating, screen printing, or any other suitable method.
In some implementations, the EMS array 36 and/or the backplate 92 can include mechanical standoffs 97 to maintain a distance between the backplate components and the display elements and thereby prevent mechanical interference between those components. In the implementation illustrated in
Although not illustrated in
In alternate implementations, a seal ring may include an extension of either one or both of the backplate 92 or the substrate 20. For example, the seal ring may include a mechanical extension (not shown) of the backplate 92. In some implementations, the seal ring may include a separate member, such as an O-ring or other annular member.
In some implementations, the EMS array 36 and the backplate 92 are separately formed before being attached or coupled together. For example, the edge of the substrate 20 can be attached and sealed to the edge of the backplate 92 as discussed above. Alternatively, the EMS array 36 and the backplate 92 can be formed and joined together as the EMS package 91. In some other implementations, the EMS package 91 can be fabricated in any other suitable manner, such as by forming components of the backplate 92 over the EMS array 36 by deposition.
Various implementations of a multi-primary display device can include the EMS array 36. The EMS elements in the array can include one or more IMODs. In some implementations, the IMOD can include an analog IMOD (AIMOD). The AIMOD may be configured to selectively reflect multiple primary colors and provide 1 bit per color.
The reflective layer 906 can be actuated toward either the first electrode 910 or the second electrode 902 when a voltage is applied between the first and second electrodes 910 and 902. In this manner, the reflective layer 906 can be driven through a range of positions between the two electrodes 902 and 910, including above and below a relaxed (unactuated) state. For example,
The AIMOD 900 in
The AIMOD 900 can be configured to selectively reflect certain wavelengths of light depending on the configuration of the AIMOD. The distance between the first electrode 910 (which in this implementation acts as an absorbing layer) and the reflective layer 906 changes the reflective properties of the AIMOD 900. Any particular wavelength is maximally reflected from the AIMOD 900 when the distance between the reflective layer 906 and the absorbing layer (first electrode 910) is such that the absorbing layer (first electrode 910) is located at the minimum light intensity of standing waves resulting from interference between incident light and light reflected from the reflective layer 906. For example, as illustrated, the AIMOD 900 is designed to be viewed from the substrate 912 side of the AIMOD (through the substrate 912); that is, light enters the AIMOD 900 through the substrate 912. Depending on the position of the reflective layer 906, different wavelengths of light are reflected back through the substrate 912, which gives the appearance of different colors. These different colors are also referred to as native or primary colors. The number of primary colors produced by the AIMOD 900 can be greater than 4. For example, the number of primary colors produced by the AIMOD 900 can be 5, 6, 8, 10, 16, 18, 33, etc.
A position of the movable layer 906 at a location such that it reflects a certain wavelength or wavelengths can be referred to as a display state of the AIMOD 900. For example, when the reflective layer 906 is in position 930, red wavelengths of light are reflected in greater proportion than other wavelengths and the other wavelengths of light are absorbed in greater proportion than red. Accordingly, the AIMOD 900 appears red and is said to be in a red display state, or simply a red state. Similarly, the AIMOD 900 is in a green display state (or green state) when the reflective layer 906 moves to position 932, where green wavelengths of light are reflected in greater proportion than other wavelengths and the other wavelengths of light are absorbed in greater proportion than green. When the reflective layer 906 moves to position 934, the AIMOD 900 is in a blue display state (or blue state) and blue wavelengths of light are reflected in greater proportion than other wavelengths and the other wavelengths of light are absorbed in greater proportion than blue. When the reflective layer 906 moves to a position 936, the AIMOD 900 is in a white display state (or white state) and a broad range of wavelengths of light in the visible spectrum are substantially reflected, such that the AIMOD 900 appears “gray” or, in some cases, “silver,” with low total reflection (or luminance) when a bare metal reflector is used. In some cases increased total reflection (or luminance) can be achieved with the addition of dielectric layers disposed on the metal reflector, but the reflected color may be tinted with blue, green or yellow, depending on the exact position of 936. In some implementations, in position 936, configured to produce a white state, the distance between the reflective layer 906 and the first electrode 910 is between about 0 and 20 nm. In other implementations, the AIMOD 900 can take on different states and selectively reflect other wavelengths of light based on the position of the reflective layer 906, and also based on materials that are used in construction of the AIMOD 900, particularly various layers in the optical stack 904.
The multiple primary colors displayed by a display element (for example, AIMOD 900) and the possible color combinations of the multiple primary colors displayed by a display element can represent a color space associated with the display element. A color in the color space associated with the display device can be identified by a color level that represents tone, grayscale, hue, chroma, saturation, brightness, lightness, luminance, correlated color temperature, dominant wavelength, or a coordinate in the color space associated with the display element.
There are many methods for spatial and temporal color blending. One method to render images and/or videos on a display device includes error diffusion. Without subscribing to any particular theory, error diffusion includes halftoning methods in which a color difference (or an error) between the color of an incoming image pixel and the color of the corresponding display pixel to which the incoming image pixel is mapped is distributed to neighboring pixels. Without subscribing to any particular theory, error diffusion based approaches can render static images better than video images.
The functional block 1005 is an optional color gamut mapping unit that is configured to receive an input image in a first color space and map it to a second color space. The second color space can be a color space associated with the display device. In various implementations, the first color space can be an sRGB color space and the second color space can be a linear RGB color space. In various implementations, the input image can be a 24-bit sRGB image and the image output from the color gamut mapping unit 1005 can be a 30-bit linear RGB image. The method may further include loading a look-up table (LUT) block 1010 that can be accessed by the color gamut mapping unit 1005 to map the input image from the first color space to the second color space. The LUT can include colors in the second color space that correspond to the colors in the first color space. In various implementations, the LUT can be a 3D LUT interpolation unit with M×M×M vertices. In implementations where the first color space is an sRGB color space and the second color space is a linear RGB color space, M can have a value of 9. The LUT can be re-loadable for different illumination environments. In various implementations, the LUT can be generated using an interpolation method such as, for example, a tetrahedral interpolation method.
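A minimal sketch of such a 3D LUT lookup using tetrahedral interpolation is given below; the array shape, function name, and the identity-LUT demo are assumptions for illustration (for the sRGB-to-linear-RGB case above, M would be 9):

    import numpy as np

    def tetrahedral_lookup(lut, rgb):
        """lut: (M, M, M, 3) array mapping the first color space to the second.
        rgb: input triple with components in [0, 1]."""
        m = lut.shape[0] - 1
        scaled = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * m
        base = np.minimum(scaled.astype(int), m - 1)   # lower corner of the cell
        frac = scaled - base
        # Visit cube corners in order of decreasing fractional coordinate; the
        # weights are the barycentric weights of the enclosing tetrahedron.
        order = np.argsort(frac)[::-1]
        corner = base.copy()
        result = (1.0 - frac[order[0]]) * lut[tuple(corner)]
        weights = [frac[order[0]] - frac[order[1]],
                   frac[order[1]] - frac[order[2]],
                   frac[order[2]]]
        for axis, w in zip(order, weights):
            corner[axis] += 1
            result = result + w * lut[tuple(corner)]
        return result

    # Demo: an identity LUT with M = 9 reproduces its input (up to float error).
    M = 9
    grid = np.linspace(0.0, 1.0, M)
    lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
    print(tetrahedral_lookup(lut, (0.25, 0.5, 0.75)))   # ~[0.25, 0.5, 0.75]

The four weights sum to one, so the interpolation is exact for any mapping that is linear within a cell.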
The vector error diffusion unit 1020 may provide a power-saving advantage. Vector error diffusion based halftoning can provide overall higher quality than the screening dithering method for static images. Vector error diffusion based halftoning can also be used for generating a quantized output image that can be saved in the output frame buffer 1035 for the always-on display (e.g., when the display module stops receiving video input from the host). In various implementations, the output of the vector error diffusion unit 1020 may be primary color indices or quantized RGB values. An implementation of an image processing method employed by the vector error diffusion unit 1020 is discussed below with reference to
The output frame buffer 1035 is configured to store output from the vector error diffusion unit 1020 described above. In various implementations, only one frame is used for the output generated by the vector error diffusion unit 1020. Besides being used for the situation when the frame rate of the display device is higher than the frame rate of the input signal, the output frame buffer 1035 can also provide the input for the input image retrieval unit 1025 as described below. In various implementations having primary color indices as the quantized output, the size of the required output frame buffer can be 400×400×2×4 bits, where 4 bits are used for storing the primary index for each pixel in each frame, and 2 frames are used for a display device operating at 60 Hz. If the display device is operating at a higher frame rate, the output buffer 1035 may contain more frames. For example, in various implementations, 3 frames may be used for a display device operating at a 90 Hz frame rate and 4 frames may be used for a display device operating at a 120 Hz frame rate.
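As a worked check of the buffer sizing above (the byte conversion is added here for convenience and is not part of the disclosure):

    # 400 x 400 pixels, 4-bit primary index per pixel, N buffered frames
    # (2 at 60 Hz, 3 at 90 Hz, 4 at 120 Hz per the figures given above).
    def buffer_bits(width=400, height=400, frames=2, bits_per_index=4):
        return width * height * frames * bits_per_index

    for hz, frames in [(60, 2), (90, 3), (120, 4)]:
        bits = buffer_bits(frames=frames)
        print(hz, "Hz:", bits, "bits =", bits // 8, "bytes")
    # 60 Hz: 1,280,000 bits = 160,000 bytes, and so on.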
The input image retrieval unit 1025 can be configured to translate the primary indices to RGB values in implementations having primary color indices as the quantized output, and to combine the two output frames to retrieve the original RGB input. The retrieved RGB input can be sent to the vector error diffusion unit 1020 to obtain a one-frame quantized output.
As discussed above, the optional color gamut mapping unit functional block 1005 can receive an input image in a first color space and map it to a second color space. The color gamut mapping unit 1005 can use a variety of methods to map the colors of an input image from a first color space to a second color space. For example, as discussed above, the colors of an input image can be mapped from a first color space to a second color space using a tetrahedral interpolation method. The tetrahedral interpolation method can employ a three dimensional look-up table (LUT). In various implementations of the method 1000 illustrated in
The input RGB values for each pixel may be modified by adding diffused errors from the feed-back loop 1205 that includes the diffusion filter 1210. The primary selector 1215 compares the desired color with N primaries to choose the output primary index 1225 corresponding to the primary color closest to the desired color. In various implementations, the closest color can be measured with respect to a distance in the color space. The selected primary index 1225 may be sent to the primary RGB LUT 1220 to generate a quantized RGB value, which may be the primary RGB values corresponding to the primary index 1225. The error, or the difference between the selected primary RGB and the desired RGB color, is calculated and sent to the feed-back loop 1205 with the diffusion filter 1210. The diffused errors are added to pixels at future processing locations, such as neighboring pixels of the current pixel, which may or may not be immediately adjacent to the current pixel. One example of an error diffusion filter is described in detail in connection with
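The primary-selection step above can be sketched as follows; the nearest-primary criterion uses a distance in RGB space as stated, while the four-entry palette in the demo is an assumption for illustration:

    import numpy as np

    def select_primary(desired_rgb, primaries):
        """Return (index, quantized RGB, residual vector error) for the
        primary color nearest to desired_rgb in RGB space."""
        diffs = primaries - desired_rgb              # vector differences
        dists = np.einsum("ij,ij->i", diffs, diffs)  # squared Euclidean distances
        idx = int(np.argmin(dists))                  # index of the closest primary
        return idx, primaries[idx], desired_rgb - primaries[idx]

    # Demo with an assumed 4-primary palette:
    palette = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
    idx, quantized, error = select_primary(np.array([0.9, 0.1, 0.2]), palette)
    print(idx, quantized, error)   # index 1 (red); error feeds the diffusion filter

Squared distances are used in the sketch because the square root does not change which distance is smallest.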
In this example, an accumulated vector error for a pixel 1312 can be calculated by summing the diffused quantization vector errors of pixels 1302, 1304, 1306, 1308, and 1310. Diffused quantization vector errors can be determined by applying the error diffusion filter to quantization vector errors, which involves multiplying the quantization vector errors by the coefficients of the error diffusion filter. For example, a quantization vector error e (i−1, j) of the pixel 1302 is multiplied by a coefficient of the error diffusion filter, 4/16, as illustrated in
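The accumulation just described can be sketched as the forward half of the same loop; the five tap offsets and the 1/16-scaled weights below are illustrative assumptions, not necessarily the disclosed filter coefficients:

    import numpy as np

    # Assumed 5-tap filter: (row offset, column offset) -> weight; weights sum to 1.
    TAPS = (((0, 1), 4 / 16), ((0, 2), 4 / 16),
            ((1, -1), 2 / 16), ((1, 0), 4 / 16), ((1, 1), 2 / 16))

    def diffuse_error(acc, i, j, error):
        """Add weighted copies of pixel (i, j)'s quantization vector error into
        the accumulator at its future (not-yet-processed) neighbors."""
        rows, cols = acc.shape[:2]
        for (di, dj), w in TAPS:
            r, c = i + di, j + dj
            if 0 <= r < rows and 0 <= c < cols:
                acc[r, c] += w * error   # stored diffused quantization vector error

    # acc[i, j] then holds the accumulated vector error for pixel (i, j), e.g.:
    acc = np.zeros((4, 4, 3))
    diffuse_error(acc, 0, 0, np.array([-0.1, 0.1, 0.2]))

A full raster-order pass would, for each pixel, add acc[i, j] to the input RGB vector, select the nearest primary as sketched above, and then call diffuse_error with the resulting residual.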
Vector error diffusion illustrated in
According to the accumulated error dependency discussed above, multiple pixels of a frame may be scheduled and assigned timestamps. In some implementations, the error diffusion filter may be a tap-limited (for example, with a predetermined number of taps), causal, linear time-invariant finite impulse response (FIR) filter. Depending on the characteristics of the error diffusion filter, the accumulated vector error computation dependency may be determined a priori, and multiple pixel error processes can be scheduled to reduce the wait time, for example. In the example of
During the first stage 1506, an accumulated vector error (e.g., Σe (i, j) in
During the second stage 1510, quantization distances can be calculated from the quantization differences determined at the first stage 1506. For example, the quantization distances for pixel (i, j) can be 16 Euclidean distances between the pre-quantization vector (xp, yp, zp) for pixel (i, j) and 16 primary color vectors (x1, y1, z1), . . . , (x16, y16, z16), namely √((x1−xp)² + (y1−yp)² + (z1−zp)²), √((x2−xp)² + (y2−yp)² + (z2−zp)²), . . . , √((x16−xp)² + (y16−yp)² + (z16−zp)²). In synchronous circuit implementations, the quantization distances calculated in the second stage 1510 may be saved in pipeline registers 1512. The pre-quantization vector from the first stage 1506 may also be saved in the pipeline registers 1512.
During the third stage 1514, a quantization vector error can be calculated based on one of the quantization distances from the second stage 1510. The quantization vector error may be determined by calculating the difference between the pre-quantization vector determined in the first stage 1506 and the quantization vector, that is, the reference vector corresponding to the minimum distance among the quantization distances from the second stage 1510. The minimum distance among the quantization distances may be determined in a variety of ways. For example, the quantization distances may be sorted based on their magnitudes through a sorting function, and the minimum distance may be selected after the sorting operation. In some implementations, the quantization distances may be referred to by their index numbers, and the index number for the minimum distance may be determined and selected instead of the minimum distance value itself. Referencing the quantization distances by their index numbers may have the non-limiting advantage of making it easy to retrieve the corresponding reference vector (e.g., one of the 16 primary color vectors) based on the index number of the minimum distance. Based on the minimum distance, the corresponding reference vector (e.g., one of the 16 primary color vectors) may be retrieved and selected as the quantization vector. The quantization vector error may be determined by calculating the difference between the quantization vector and the pre-quantization vector. In synchronous circuit implementations, the quantization vector error calculated in the third stage 1514 may be saved in pipeline registers 1516.
During the fourth stage 1518, a diffused quantization vector error can be calculated by applying an error diffusion filter to the quantization vector error from the third stage 1514. The error diffusion filter may be substantially similar to the filters described in connection with
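To illustrate how the four stages partition the computation, each stage can be written as a small function over 3-component vectors; pipeline registers, memory ports, and the actual filter coefficients are omitted, and all names and weights below are assumptions:

    import numpy as np

    def stage1(input_vec, stored_diffused_errors, primaries):
        """Stage 1: sum the stored diffused errors into an accumulated vector
        error, add it to the input vector to form the pre-quantization vector,
        and compute vector differences against the reference (primary) vectors."""
        pre_q = input_vec + sum(stored_diffused_errors)
        return pre_q, pre_q - primaries

    def stage2(differences):
        """Stage 2: reduce each difference to a squared Euclidean distance
        (omitting the square root does not change which distance is smallest)."""
        return np.einsum("ij,ij->i", differences, differences)

    def stage3(pre_q, distances, primaries):
        """Stage 3: select the index of the minimum distance (the index, not
        the value, suffices to retrieve the reference vector) and form the
        quantization vector error."""
        idx = int(np.argmin(distances))
        return pre_q - primaries[idx]

    def stage4(q_error, weights=(4 / 16, 4 / 16, 2 / 16, 4 / 16, 2 / 16)):
        """Stage 4: apply the error diffusion filter (weights assumed) to
        produce the diffused quantization vector errors to be stored."""
        return [w * q_error for w in weights]

Pipelined as described above, stage1 for one pixel runs in the same time interval as stage2 for the preceding pixel, and so on down the chain.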
The stages 1506, 1510, 1514, and 1518 described above can be pipelined as illustrated in
The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48 and a microphone 46. The housing 41 can be formed from any of a variety of manufacturing processes, including injection molding, and vacuum forming. In addition, the housing 41 may be made from any of a variety of materials, including, but not limited to: plastic, metal, glass, rubber and ceramic, or a combination thereof. The housing 41 can include removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.
The display 30 may be any of a variety of displays, including a bi-stable or analog display, as described herein. The display 30 also can be configured to include a flat-panel display, such as plasma, EL, OLED, STN LCD, or TFT LCD, or a non-flat-panel display, such as a CRT or other tube device. In addition, the display 30 can include an IMOD-based display, as described herein.
The components of the display device 40 are schematically illustrated in
The network interface 27 includes the antenna 43 and the transceiver 47 so that the display device 40 can communicate with one or more devices over a network. The network interface 27 also may have some processing capabilities to relieve, for example, data processing requirements of the processor 21. The antenna 43 can transmit and receive signals. In some implementations, the antenna 43 transmits and receives RF signals according to the IEEE 16.11 standard, including IEEE 16.11(a), (b), or (g), or the IEEE 802.11 standard, including IEEE 802.11a, b, g, n, and further implementations thereof. In some other implementations, the antenna 43 transmits and receives RF signals according to the Bluetooth® standard. In the case of a cellular telephone, the antenna 43 can be designed to receive code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), Global System for Mobile communications (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Terrestrial Trunked Radio (TETRA), Wideband-CDMA (W-CDMA), Evolution Data Optimized (EV-DO), 1xEV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, or other known signals that are used to communicate within a wireless network, such as a system utilizing 3G, 4G or 5G technology. The transceiver 47 can pre-process the signals received from the antenna 43 so that they may be received by and further manipulated by the processor 21. The transceiver 47 also can process signals received from the processor 21 so that they may be transmitted from the display device 40 via the antenna 43.
In some implementations, the transceiver 47 can be replaced by a receiver. In addition, in some implementations, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the processor 21. The processor 21 can control the overall operation of the display device 40. The processor 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that can be readily processed into raw image data. The processor 21 can send the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw data typically refers to the information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation and gray-scale level. The processor 21 (or other computing hardware in the device 40) can be programmed to perform implementations of the methods described herein. The processor 21 (or other computing hardware in the device 40) can be in communication with a computer-readable medium that includes instructions, that when executed by the processor 21, cause the processor 21 to perform implementations of the methods described herein.
The processor 21 can include a microcontroller, CPU, or logic unit to control operation of the display device 40. The conditioning hardware 52 may include amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. The conditioning hardware 52 may be discrete components within the display device 40, or may be incorporated within the processor 21 or other components.
The driver controller 29 can take the raw image data generated by the processor 21 either directly from the processor 21 or from the frame buffer 28 and can re-format the raw image data appropriately for high speed transmission to the array driver 22. In some implementations, the driver controller 29 can re-format the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the array driver 22. Although a driver controller 29, such as an LCD controller, is often associated with the system processor 21 as a stand-alone Integrated Circuit (IC), such controllers may be implemented in many ways. For example, controllers may be embedded in the processor 21 as hardware, embedded in the processor 21 as software, or fully integrated in hardware with the array driver 22.
The array driver 22 can receive the formatted information from the driver controller 29 and can re-format the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands (or more), of leads coming from the display's x-y matrix of display elements.
In some implementations, the driver controller 29, the array driver 22, and the display array 30 are appropriate for any of the types of displays described herein. For example, the driver controller 29 can be a conventional display controller or a bi-stable display controller (such as an IMOD display element controller). Additionally, the array driver 22 can be a conventional driver or a bi-stable display driver (such as an IMOD display element driver). Moreover, the display array 30 can be a conventional display array or a bi-stable display array (such as a display including an array of IMOD display elements). The driver controller 29 and/or the array driver 22 can be an AIMOD controller or driver. In some implementations, the driver controller 29 can be integrated with the array driver 22. Such an implementation can be useful in highly integrated systems, for example, mobile phones, portable-electronic devices, watches, or small-area displays.
In some implementations, the input device 48 can be configured to allow, for example, a user to control the operation of the display device 40. The input device 48 can include a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a rocker, a touch-sensitive screen, a touch-sensitive screen integrated with the display array 30, or a pressure- or heat-sensitive membrane. The microphone 46 can be configured as an input device for the display device 40. In some implementations, voice commands through the microphone 46 can be used for controlling operations of the display device 40.
The power supply 50 can include a variety of energy storage devices. For example, the power supply 50 can be a rechargeable battery, such as a nickel-cadmium battery or a lithium-ion battery. In implementations using a rechargeable battery, the rechargeable battery may be chargeable using power coming from, for example, a wall socket or a photovoltaic device or array. Alternatively, the rechargeable battery can be wirelessly chargeable. The power supply 50 also can be a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell or solar-cell paint. The power supply 50 also can be configured to receive power from a wall outlet.
In some implementations, control programmability resides in the driver controller 29 which can be located in several places in the electronic display system. In some other implementations, control programmability resides in the array driver 22. The above-described methods for generating a constrained color palette may be implemented in any number of hardware and/or software components and in various configurations.
As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
The various illustrative logics, logical blocks, modules, circuits and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and steps described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.
The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or, any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular steps and methods may be performed by circuitry that is specific to a given function.
In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware (including the structures disclosed in this specification and their structural equivalents), or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, e.g., one or more modules of computer program instructions, encoded on computer storage media for execution by, or to control the operation of, data processing apparatus.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that can be enabled to transfer a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above also may be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine readable medium and computer-readable medium, which may be incorporated into a computer program product.
Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein. Additionally, a person having ordinary skill in the art will readily appreciate that the terms “upper” and “lower” are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of, e.g., an IMOD display element as implemented.
Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, a person having ordinary skill in the art will readily recognize that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted can be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.