1. Field of the Invention
The present invention generally relates to the field of electronic paper displays. More particularly, the invention relates to displaying video on electronic paper displays.
2. Description of the Background Art
Several technologies have been introduced recently that provide some of the properties of paper in a display that can be updated electronically. Some of the desirable properties of paper that this type of display tries to achieve include: low power consumption, flexibility, wide viewing angle, low cost, light weight, high resolution, high contrast and readability indoors and outdoors. Because these displays attempt to mimic the characteristics of paper, they are referred to as electronic paper displays (EPDs) in this application. Other names for this type of display include: paper-like displays, zero power displays, e-paper, bi-stable displays and electrophoretic displays.
A comparison of EPDs to Cathode Ray Tube (CRT) displays or Liquid Crystal Displays (LCDs) reveals that, in general, EPDs require much less power and have higher spatial resolution, but have the disadvantages of slower update rates, less accurate gray level control, and lower color resolution. Many electronic paper displays are currently only grayscale devices. Color devices are becoming available, often through the addition of a color filter, which tends to reduce the spatial resolution and the contrast.
Electronic Paper Displays are typically reflective rather than transmissive. Thus they are able to use ambient light rather than requiring a lighting source in the device. This allows EPDs to maintain an image without using power. They are sometimes referred to as “bi-stable” because black or white pixels can be displayed continuously, and power is only needed when changing from one state to another. However, many EPD devices are stable at multiple states and thus support multiple gray levels without power consumption.
One type of EPD called a microencapsulated electrophoretic (MEP) display moves hundreds of particles through a viscous fluid to update a single pixel. The viscous fluid limits the movement of the particles when no electric field is applied and gives the EPD its property of being able to retain an image without power. This fluid also restricts the particle movement when an electric field is applied and causes the display to be very slow to update compared to other types of displays.
While electronic paper displays have many benefits there are a number of problems when displaying video: (1) slow update speed (also called update latency); (2) accumulated error; and (3) visibility of previously displayed images (e.g., ghosting).
The first problem is that most EPD technologies require a relatively long time to update the image as compared with conventional CRT or LCD displays. A typical LCD takes approximately 5 milliseconds to change to the correct value, supporting frame rates of up to 200 frames per second (the achievable frame rate is typically limited by the ability of the display driver electronics to modify all the pixels in the display). In contrast, many electronic paper displays, e.g. the E Ink displays, take on the order of 300-1000 milliseconds to change a pixel value from white to black. While this update time is generally sufficient for the page turning needed by electronic books, it is a significant problem for interactive applications with user interfaces and the display of video.
When displaying a video or animation, each pixel should ideally be at the desired reflectance for the duration of the video frame, i.e. until the next requested reflectance is received. However, every display exhibits some latency between the request for a particular reflectance and the time when that reflectance is achieved. If a video is running at 10 frames per second (which is already reduced since typical video frame rates for movies are 30 frames a second) and the time required to change a pixel is 10 milliseconds, the pixel will display the correct reflectance for 90 milliseconds and the effect will be as desired. If it takes 100 milliseconds to change the pixel, it will be time to change the pixel to another reflectance just as the pixel achieves the correct reflectance of the prior frame. Finally, if it takes 200 milliseconds for the pixel to change, the pixel will never have the correct reflectance except in the circumstance where the pixel was very near the correct reflectance already, i.e. slowly changing imagery. Thus, EPDs have not been used to display video.
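The arithmetic above can be made concrete with a short calculation. The following sketch is illustrative only and is not part of the original disclosure; it computes how long a pixel displays the requested reflectance per frame for the three latencies discussed:

```python
# Illustrative arithmetic for the latency discussion above: the portion of
# each video frame during which a pixel shows the requested reflectance,
# given the display's update latency.

def on_target_ms(frame_rate_fps: float, update_latency_ms: float) -> float:
    """Milliseconds per frame during which the pixel is at the requested value."""
    frame_duration_ms = 1000.0 / frame_rate_fps
    return max(0.0, frame_duration_ms - update_latency_ms)

for latency_ms in (10, 100, 200):  # the three cases discussed above
    print(f"{latency_ms:3d} ms latency -> on target "
          f"{on_target_ms(10, latency_ms):5.1f} ms of a 100 ms frame")
```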
The second problem is accumulated error. As different values are applied to drive different pixels to different optical output levels, errors are introduced depending on the particular signals or waveforms applied to a pixel to move it from one optical state to another. This error tends to accumulate over time. A typical prior art solution is to drive all the pixels to black, then to white, then back to black. However, with video this cannot be done because there is not enough time at 10 or more frames per second, and since there are many more transitions in optical state for video, this error accumulates to the point where it is visible in the video images produced by the EPD.
The third problem is related to update latency: often there are not enough voltage frames to set some pixels to their desired gray levels. This produces visible video artifacts during playback, particularly in high-motion video segments. Similarly, there is not enough contrast in the optical image produced by the EPD, because there is not enough time between frames to drive the pixels to optical states that contrast with one another. This is compounded by a characteristic of EPDs: near the ends of the pixel value range, black and white, the displays require more time to transition between optical states, e.g., different gray levels.
The present invention overcomes the deficiencies and limitations of the prior art by providing a system and method for displaying video on electronic paper displays. In particular, the system and method of the present invention reduce video playback artifacts on electronic paper displays. A system for displaying video on electronic paper displays to reduce video playback artifacts comprises an electronic paper display, a video display driver, a video transcoder, a display controller, a memory buffer and a waveforms module. The video display driver receives a re-formatted video stream, which has been processed by the video transcoder, from the memory buffer. The video display driver directs the video transcoder to process the video stream and generate pixel data. The video display driver also directs the loading of waveforms into the frame buffer and the repeated updating of display commands to activate the display controller until the end of the video playback process. The video transcoder receives a video stream for presentation on the electronic paper display and processes the video stream generating pixel data that is provided to the display controller. The present invention also includes a method for displaying video on an electronic paper display.
The invention is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals are used to refer to similar elements.
A system and method for displaying video on electronic paper displays is described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed. For example, the present invention is described below in the context of gray scale and electrophoretic displays, however, those skilled in the art will recognize that the principles of the present invention are applicable to any bi-stable display or color sequences.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, the terms “a” and “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. It should be understood that these terms are not intended as synonyms for each other. For example, some embodiments may be described using the term “connected” to indicate that two or more elements are in direct physical or electrical contact with each other. In another example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. The embodiments are not limited in this context.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
Finally, the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Device Overview
Directly beneath the transparent electrode 102 is the microcapsule layer 120. In one embodiment, the microcapsule layer 120 includes closely packed microcapsules 118 having a clear liquid 108 and some black particles 112 and white particles 110. In some embodiments, the microcapsule 118 includes positively charged white particles 110 and negatively charged black particles 112. In other embodiments, the microcapsule 118 includes positively charged black particles 112 and negatively charged white particles 110. In yet other embodiments, the microcapsule 118 may include colored particles of one polarity and different colored particles of the opposite polarity. In some embodiments, the top transparent electrode 102 includes a transparent conductive material such as indium tin oxide.
Disposed below the microcapsule layer 120 is a lower electrode layer 114. The lower electrode layer 114 is a network of electrodes used to drive the microcapsules 118 to a desired optical state. The network of electrodes is connected to display circuitry, which turns the electronic paper display “on” and “off” at specific pixels by applying a voltage to specific electrodes. Applying a negative charge to the electrode repels the negatively charged black particles 112 to the top of the microcapsule 118, forcing the positively charged white particles 110 to the bottom and giving the pixel a black appearance. Reversing the voltage has the opposite effect: the positively charged white particles 110 are forced to the surface, giving the pixel a white appearance. The reflectance (brightness) of a pixel in an EPD 100 changes as voltage is applied. The amount the pixel's reflectance changes may depend on both the amount of voltage and the length of time for which it is applied, with zero voltage leaving the pixel's reflectance unchanged.
The electrophoretic microcapsules of the layer 120 may be individually activated to a desired optical state, such as black, white or gray. In some embodiments, the desired optical state may be any other prescribed color. Each pixel in layer 114 may be associated with one or more microcapsules 118 contained within the microcapsule layer 120. Each microcapsule 118 includes a plurality of tiny particles 110 and 112 that are suspended in a clear liquid 108. In some embodiments, the plurality of tiny particles 110 and 112 are suspended in a clear liquid polymer.
The lower electrode layer 114 is disposed on top of a backplane 116. In one embodiment, the electrode layer 114 is integral with the backplane layer 116. The backplane 116 is a plastic or ceramic backing layer. In other embodiments, the backplane 116 is a metal or glass backing layer. The electrode layer 114 includes an array of addressable pixel electrodes and supporting electronics.
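For illustration, the voltage-and-duration dependence described above can be captured in a minimal first-order pixel model. This sketch is an assumption for exposition, not measured device behavior; the linear rate and the 0-15 gray range are chosen so that one 20 ms frame at ±15 V moves a pixel by one gray level, matching the worked examples later in this description.

```python
# Minimal first-order model of the reflectance behavior described above:
# the change in gray level depends on both the applied voltage and its
# duration, zero voltage leaves the level unchanged, and the particles
# saturate at the black/white rails. Illustrative assumption, not device data.

GRAY_MIN, GRAY_MAX = 0, 15

def apply_voltage(level: float, volts: float, duration_ms: float,
                  levels_per_volt_ms: float = 1.0 / (15 * 20)) -> float:
    """Return the new gray level after driving one pixel for duration_ms."""
    new_level = level + volts * duration_ms * levels_per_volt_ms
    return min(GRAY_MAX, max(GRAY_MIN, new_level))

assert apply_voltage(4, 15, 20) == 5   # one +15 V, 20 ms frame = +1 level
assert apply_voltage(4, 0, 20) == 4    # zero voltage: state is maintained
```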
Electronic paper displays have physical media capable of maintaining a state. In the physical media 220 of electrophoretic displays, the state is the position of a particle or particles 206 in a fluid, e.g. a white particle in a dark fluid. In other embodiments that use other types of displays, the state might be determined by the relative position of two fluids, by the rotation of a particle or by the orientation of some structure.
Regardless of the exact device, for zero power consumption it is necessary that this state can be maintained without any power. Thus, the control signal 230 can be removed once the desired state is reached, and the physical media 220 maintains that state.
System Overview
The video transcoder 304 receives a video stream 302 on signal line 312 for presentation on the display 100. The video transcoder 304 processes the video stream 302 and generates pixel data on signal line 314 that is provided to the display controller 308. The video transcoder 304 adapts and re-encodes the video stream for better display on the EPD 100. For example, the video transcoder 304 includes one or more of the following processes: encoding the video using the control signals instead of the desired image, encoding the video using simulation data, scaling and translating the video for contrast enhancement, and reducing errors by using simulation feedback, past pixels and future pixels. More information regarding the functionality of the video transcoder 304 is provided below.
The display controller 308 includes a host interface for receiving information such as pixel data. The display controller 308 also includes a processing unit, a data storage database, a power supply and a driver interface (not shown). In some embodiments, the display controller 308 includes a temperature sensor and a temperature conversion module. In some embodiments, a suitable controller used in some electronic paper displays is one manufactured by E Ink Corporation. In one embodiment, the display controller 308 is coupled to signal line 314 to transfer the data for the video frame. The signal line 314 may also be used to transfer a notification to the display controller 308 that the video frame is updated, or a notification of the video frame rate, so that the display controller 308 updates the screen accordingly. The display controller 308 is also coupled by a signal line 316 to the video transcoder 304. This channel updates the lookup tables 404 (as will be described below).
The waveforms module 310 stores the waveforms to be used during video display on the electronic paper display 100. In some embodiments, each waveform includes five frames, in which each frame takes a twenty millisecond (ms) time slice and the voltage amplitude is constant for all frames. The voltage amplitude is either 15 volts (V), 0V or −15V. In some embodiments, 256 frames is the maximum number of frames that can be stored for a particular display controller.
The video display driver 301 receives a video stream 302 on signal line 312 for presentation on the display 100. In another embodiment, the video display driver 301 receives a re-formatted video stream, which has been processed by the video transcoder 304, from the memory buffer 320. As previously mentioned, more information regarding the processing performed by the video transcoder 304 is provided below.
As explained above, the video transcoder 304 processes the video stream 302 as directed by the video display driver 301 and generates pixel data that is provided to the display controller 308. The video transcoder 304 adapts and re-encodes the video stream for better display on the EPD 100. For example, the video transcoder 304 includes one or more of the following processes: encoding the video using the control signals instead of the desired image, encoding the video using simulation data, scaling and translating the video for contrast enhancement, and reducing errors by using simulation feedback, past pixels and future pixels. More information regarding the functionality of the video transcoder 304 is provided below.
The display controller 308 includes a host interface for receiving information such as pixel data. The display controller 308 also includes a processing unit, a data storage database, a power supply and a driver interface (not shown). In some embodiments, a suitable controller used in some electronic paper displays is one manufactured by E Ink Corporation. The display controller 308 otherwise operates similarly to the display controller 308 described above.
The waveforms module 310 stores the waveforms to be used during video display on the electronic paper display 100. In some embodiments, each waveform includes five frames, in which each frame takes a twenty millisecond (ms) time slice and the voltage amplitude is constant for all frames. The voltage amplitude is either 15 volts (V), 0V or −15V. In some embodiments, 256 frames is the maximum number of frames that can be stored for a particular display controller.
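One way the stored waveforms might be represented is sketched below. The container layout and validation function are assumptions for illustration, while the five 20 ms frames, the −15 V/0 V/+15 V amplitudes and the 256-frame controller limit come from the description above.

```python
# Hedged sketch of waveform storage for the waveforms module 310: each
# waveform holds five constant-amplitude 20 ms voltage frames, and a
# controller stores at most 256 frames in total.

from typing import List, Tuple

FRAME_MS = 20
ALLOWED_VOLTS = (-15, 0, 15)
MAX_FRAMES = 256

Waveform = Tuple[int, int, int, int, int]  # five voltage frames per waveform

def validate_waveforms(waveforms: List[Waveform]) -> None:
    total = sum(len(w) for w in waveforms)
    if total > MAX_FRAMES:
        raise ValueError(f"{total} frames exceeds the controller limit of {MAX_FRAMES}")
    for w in waveforms:
        if len(w) != 5 or any(v not in ALLOWED_VOLTS for v in w):
            raise ValueError(f"invalid waveform: {w!r}")

validate_waveforms([(15, 15, 0, -15, -15), (0, 0, 0, 0, 0)])  # passes
```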
Video Transcoder 304
The video transcoder 304 can be implemented in many ways to provide the functionality described below.
Those skilled in the art will recognize that in one embodiment the video transcoder 304 and its components process the input video stream 302 in real time so that data can be output to the display controller 308 for generation of an output on the display 100. However, in an alternate embodiment, the output of the video transcoder 304 may be stored in a storage device or memory 320 for later use. In such an embodiment, the video transcoder 304 pre-processes the video stream 302. This has the advantage of using computational resources other than those used for generation of the display, which in turn allows higher-quality processing prior to display.
In one embodiment, the video transcoder 304 comprises a video converter 402, a lookup table 404, a simulation module 406, a shifting module 408, a scaling module 410 and a data buffer 412.
The video converter 402 has inputs and outputs and is adapted to receive the video stream 302 on signal line 312 from any video source (not shown). The video converter 402 adapts and re-encodes the video stream 302 to take into account the difference in display speed and characteristics of the electronic paper display 100. The video converter 402 is also coupled for communication with the lookup table 404 and the simulation module 406 to reduce video playback artifacts as will be described in more detail below. The video converter 402 is able to generate video images on the electronic paper display 100 by using pulses instead of long waveforms, by re-encoding the video to reduce or eliminate visible video artifacts, and by using feedback error based on a model of the display characteristics. These functions performed by the video converter 402 are discussed in turn below. The video converter 402 advantageously uses shorter durations of voltage in order to achieve high video frame rate.
The lookup table 404 is coupled to the video converter 402 to receive the video stream 302, store it and provide voltage levels to be applied to pixels. In one embodiment, the lookup table 404 comprises a volatile storage device such as dynamic random access memory (DRAM), static random access memory (SRAM) or another suitable memory device. In another embodiment, the lookup table 404 comprises a non-volatile storage device, such as a hard disk drive, a flash memory device or other persistent storage device. In yet another embodiment, the lookup table 404 comprises a combination of a non-volatile storage device and a volatile storage device. The interaction of the lookup table 404 and the video converter 402 is described below.
The simulation module 406 is also coupled to the video converter 402 to provide simulation data. In one embodiment, the simulation module 406 can be a volatile storage device, a non-volatile storage device or a combination of both. The simulation module 406 provides data about the display characteristics of the display 100. In one embodiment, the simulation module 406 provides simulated data representing the display characteristics of the display 100. For example, the simulated data includes reconstructed or simulated values for individual pixels. Depending on the frame rate, there may not be enough time to apply a voltage level to move a pixel from its current state to the desired state. Thus, the pixel value ends up at an inaccurate level of gray. This inaccurate level of gray is referred to here as a simulated or reconstructed value or frame. The simulation module 406 provides such simulated or reconstructed values, which are used by the video converter 402 to improve the overall quality of the output generated by the display 100. The simulation module 406 also provides the estimated error introduced in transitioning a pixel from one state to another. Thus, the simulated information can be used to encode the video to maximize the quality of the video, as well as to reduce or eliminate error.
A significant challenge with displaying video sequences on the display 100 is the time required to modify the value of a pixel. This time is a function of the desired gray level and the previous gray levels of the pixel. The video converter 402 of the present invention sets a desired video frame rate, R, and only allows M voltage frames to be applied to a pixel to change its value, where M = 1000 / (R × VT) and VT is the duration of one voltage frame in milliseconds. In one embodiment, VT = 20 ms for the display 100; thus, in order to obtain a video frame rate of 12.5 fps, the number of voltage frames that can be applied to change the value of a pixel is M = 4. If a video clip has N video frames {f0, f1, . . . , fN}, the transition from frame fn−1 to frame fn is performed by applying different voltage levels in M voltage frames. With an example electrophoretic display, only one of three voltage levels {0, −15, +15} can be applied in a voltage frame. The lookup table 404 is used to determine what voltage levels to apply in the M voltage frames for a pixel to go from value pn−1(x, y) to pn(x, y), where pn(x, y) is an element of the frame fn, x and y are the coordinates of the pixel in the frame, and fn is the current video frame. The output of the lookup table is a voltage vector Vn = {V1, V2, . . . , VM}.
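The budget M and a simple lookup can be sketched as follows. The table contents below use the illustrative one-gray-level-per-voltage-frame model from the device overview sketch; a real lookup table 404 would be characterized from the display itself.

```python
# Voltage-frame budget: M = 1000 / (R * VT). For R = 12.5 fps and
# VT = 20 ms this gives M = 4 voltage frames per video frame.

VT_MS = 20

def voltage_frames(frame_rate_fps: float, vt_ms: float = VT_MS) -> int:
    return int(1000 / (frame_rate_fps * vt_ms))

assert voltage_frames(12.5) == 4

def make_lookup(M: int):
    """lookup[(prev_level, desired_level)] -> voltage vector of length M,
    under the toy model in which one +/-15 V frame moves one gray level."""
    lookup = {}
    for prev in range(16):
        for target in range(16):
            step = target - prev
            vec = [15] * min(step, M) if step > 0 else [-15] * min(-step, M)
            vec += [0] * (M - len(vec))   # idle once the target is reached
            lookup[(prev, target)] = tuple(vec)
    return lookup

assert make_lookup(4)[(3, 5)] == (15, 15, 0, 0)
```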
Limiting the number of voltage frames to M results in less accurate gray levels for individual pixels, simply because sometimes there is not enough time to apply voltage long enough to set a pixel to its desired gray level pn(x, y). Therefore, the desired pixel values pn(x, y) of the frames {f1, . . . , fn, . . . , fN} are inaccurately constructed as p*n(x, y) in reconstructed frames {f*1, . . . , f*n, . . . , f*N}. The video converter 402 advantageously computes the required voltage levels to set the display 100 to a new frame based on the pixels of the reconstructed video frames, f*n−i, instead of the pixels of the previous video frames fn−i.
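The feedback idea can be shown in a few lines: the next voltage vector is keyed on the reconstructed value p*n−1 actually on the display rather than on the value that was merely requested. The function below mirrors the toy lookup sketched above and is an illustration, not the disclosed table.

```python
def next_vector(reconstructed_prev: int, desired: int, M: int = 4):
    """Voltage vector keyed on the value actually on the display (p*_{n-1})."""
    step = desired - reconstructed_prev
    vec = [15] * min(step, M) if step > 0 else [-15] * min(-step, M)
    return tuple(vec + [0] * (M - len(vec)))

# A pixel was asked to go to 0 but only reached 3 (its reconstructed value).
# Keying on the reconstructed 3 stops driving once the new target 5 is hit:
assert next_vector(3, 5) == (15, 15, 0, 0)
# Keying on the requested 0 would yield (15, 15, 15, 15) and overshoot to 7.
```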
The lookup table 404 can be arbitrarily complex.
The data buffer 412 is coupled to the video converter 402 to receive the video data, store it and provide video data. In one embodiment, the data buffer 412 comprises a volatile storage device such as dynamic random access memory (DRAM), static random access memory (SRAM) or another suitable memory device. In another embodiment, the data buffer 412 comprises a non-volatile storage device, such as a hard disk drive, a flash memory device or other persistent storage device. In yet another embodiment, the data buffer 412 comprises a combination of a non-volatile storage device and a volatile storage device. The data buffer 412 is used to store previously constructed frames and future frames. The interaction of the data buffer 412 with the other components is described below.
If we look ahead and also consider the future values of pn(x, y) when deciding on the voltage level, the overall error between pn(x, y) and the achieved values p*n(x, y) may be smaller. For example, when n = 2, if we consider that in the next video frame the desired value is p3(0,0) = 9, then instead of V2 = {−15, −15, −15}, the vector V2 = {−15, −15, +15} can be applied, bringing the value of p*2(0,0) down to 2 and then back up to 3. After V3 = {+15, +15, +15} is applied, p*3(0,0) = 6 is achieved, which is much closer to the target value of p3(0,0) = 9. The method of the present invention can be seen as fitting a polynomial curve to the desired gray levels for each pixel. Those skilled in the art will recognize that curve fitting can be done using many techniques from the literature, such as cubic splines, Bezier curves, etc. The new target values for the pixels can be determined from the polynomial fit. When performing curve fitting, there are range limitations on the first derivative at each point such that the points on the curve are achievable given the number of voltage frames M. In other words, the polynomial should not be too steep at any point. If the polynomial is too steep, low-pass filtering can be done for global or local smoothing.
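Under the one-level-per-frame toy model, the look-ahead example above can be verified directly. The starting level p*1(0,0) = 4 is inferred from the arithmetic in the text, and M = 3 voltage frames per video frame is assumed.

```python
def apply_vec(level: int, vec) -> int:
    """Toy model: each +/-15 V frame moves the pixel one gray level."""
    for v in vec:
        level = min(15, max(0, level + (v > 0) - (v < 0)))
    return level

start = 4                                            # inferred p*_1(0,0)
# Greedy: drive hard toward p_2 = 0, then toward p_3 = 9.
greedy = apply_vec(apply_vec(start, (-15, -15, -15)), (15, 15, 15))
# Look-ahead: back off on the last frame because p_3 = 9 is known to be coming.
ahead = apply_vec(apply_vec(start, (-15, -15, 15)), (15, 15, 15))
assert (greedy, ahead) == (4, 6)   # 6 is much closer to the target of 9
```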
In another embodiment, the voltage vector is determined based on the previously constructed pixel values, p*n−1(x, y), . . . , p*n−i(x, y); current pixel values, pn(x, y); and future pixel values, pn+1(x, y), . . . , pn+m(x, y).
In one embodiment, an achievable new target path is set that minimizes the error in pixel values (p*n − pn), minimizes the rise and fall times (an − bn−1), and keeps the first derivative of the path from exceeding the achievable level (|pn − p*n−1| ≤ M). This can be described mathematically as:

Minimize |p*n − pn|  (1)

Minimize an − bn−1  (2)

with achievability condition |pn − p*n−1| ≤ M  (3)

and boundary conditions bn ≥ an, an ≥ n − 0.5, bn ≤ n + 0.5  (4)
If it is desired that the achieved value p*n always be reached at n, then instead of (4) the boundary conditions can be set as n ≥ an ≥ n − 0.5 and n ≤ bn ≤ n + 0.5.
Combining (1) and (2) and optimizing over all N video frames, we obtain the following optimization problem:

Minimize Σn ( α|p*n − pn| + β(an − bn−1) )  (5)

with achievability condition (3) and boundary conditions (4).
The values of the weights α and β determine the trade-off between fast rise/fall times and the accuracy of the constructed pixel values. A relatively large α value guarantees that the pixel levels are achieved first, i.e. p*n − pn = 0, before fall and rise times are optimized.
The optimization of equation (5) assumes that a pixel changing from one value to another can be computed from a derivative and a single threshold value. In reality, the amount of change achievable in pixel values depends on many other parameters. For example, the achievable change is greater in the middle ranges of gray values than near the limits of the gray values, as described in more detail below. The achievability condition (3) can therefore be generalized to a function of the desired value, the previous reconstructed value and the voltage frame budget, and the optimization of equation (5) is solved

with condition Achievable[pn, p*n−1, M] = true

and boundary conditions bn ≥ an, an ≥ n − 0.5, bn ≤ n + 0.5.
Since it may be computationally intensive to solve this optimization problem for all the video frames together, from 0 to N, in one embodiment the optimization is done on a few video frames at a time or with pre-processing.
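A windowed version of the optimization can be sketched as a brute-force search. The settle-index term below is a stand-in for the rise/fall term an − bn−1 of equations (2) and (5), and the two-frame window, the weights and the toy pixel model are illustrative assumptions rather than the disclosed formulation.

```python
from itertools import product

VOLTS, M = (-15, 0, 15), 3

def apply_vec(level, vec):
    """Toy one-gray-level-per-frame pixel model, saturating at the rails."""
    for v in vec:
        level = min(15, max(0, level + (v > 0) - (v < 0)))
    return level

def settle_index(vec):
    """Index after which the pixel stops moving (a proxy for b_n)."""
    nonzero = [i for i, v in enumerate(vec) if v != 0]
    return nonzero[-1] + 1 if nonzero else 0

def optimize_window(start_level, targets, alpha=10.0, beta=1.0):
    """Exhaustively search all 3^(M * window) voltage plans for a short
    window, scoring alpha * |p*_n - p_n| + beta * settle proxy per frame."""
    best_cost, best_plan = float("inf"), None
    for plan in product(product(VOLTS, repeat=M), repeat=len(targets)):
        level, cost = start_level, 0.0
        for vec, target in zip(plan, targets):
            level = apply_vec(level, vec)
            cost += alpha * abs(level - target) + beta * settle_index(vec)
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan

plan = optimize_window(start_level=4, targets=[0, 9])  # two-frame window
```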
In yet another embodiment, the relative values of neighboring pixels can also be taken into consideration. For example, suppose two neighboring pixels pn(x, y) and pn(x, y+1) have the same desired values at video frames n−1 and n: pn−1(x, y) = 0 and pn(x, y) = 5; and pn−1(x, y+1) = 0 and pn(x, y+1) = 5. If after optimization the new target values are p*n(x, y) = 3 and p*n(x, y+1) = 5, this may not be desirable, since the neighboring pixels p*n(x, y) and p*n(x, y+1) end up at different gray levels. This problem can be addressed by adding spatial constraints to the optimization problem that force neighboring pixels to have similar errors, weighted by a parameter δ,

with condition Achievable[pn, p*n−1, M] = true

and boundary conditions bn ≥ an, an ≥ n − 0.5, bn ≤ n + 0.5.

When δ equals 1, all the neighboring pixels are forced to have the same amount of error.
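Since the precise spatial term of the original formulation is not reproduced here, one plausible soft-penalty form is sketched below: it charges the optimization for differences in residual error between neighboring pixels, scaled by δ.

```python
def spatial_penalty(achieved_row, desired_row, delta=1.0):
    """Penalize unequal residual errors between horizontal neighbors.

    A soft-penalty stand-in for the spatial constraint described above;
    delta = 1 weights neighbor-error agreement most strongly in this sketch.
    """
    errors = [a - d for a, d in zip(achieved_row, desired_row)]
    return delta * sum(abs(e1 - e2) for e1, e2 in zip(errors, errors[1:]))

# The example above: neighbors both want 5, but only one reaches it.
assert spatial_penalty([3, 5], [5, 5]) == 2.0
```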
Thus, the video converter 402 in one embodiment processes the input video sequence by re-encoding it to reduce or eliminate visible video artifacts based on (1) the desired value, (2) a previous pixel value, (3) a reconstructed value of the pixel (simulation data) or achievable pixel value, (4) future values of pixels, (5) spatial constraints, and (6) minimizing error and rise and fall times.
In one embodiment, the present invention also includes a method for eliminating accumulated errors. Changing the value of a pixel only incrementally results in an accumulation of errors on paper-like displays. The video transcoder 304 eliminates these errors by occasionally driving pixels to the limits of the gray level values, e.g., 0 and 15. If the value of a pixel is already at one of these levels, extra voltage can be applied to further force the pixel to that limit. For example, if a pixel has pn−1 = 0 and pn = 0, normally one would apply Vn = {0, 0, 0} to go from n−1 to n. However, there is a benefit in applying Vn = {−15, −15, −15} to reduce the errors. In other words, the video transcoder 304 occasionally over-drives to the pixel limits to ensure that the pixel value is at zero without any error. It can be harmful to the display 100 if such voltage levels are continuously applied, so the transcoder 304 includes a counter for each pixel that records the last frame update at which the pixel was driven to a limit. As long as the time since that update exceeds a predefined threshold, the extra voltage can be applied.
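The per-pixel rate limiting can be sketched as a small gate object. The 60-frame refresh interval and the interfaces below are assumptions for illustration, not values from the disclosure.

```python
OVERDRIVE_INTERVAL = 60   # video frames between overdrives (assumed value)

class OverdriveGate:
    """Occasionally over-drive pixels already at a rail (0 or 15) to purge
    accumulated error, rate-limited per pixel so the extra voltage is not
    applied continuously (which can harm the display)."""

    def __init__(self):
        self.last_overdrive = {}   # (x, y) -> frame index of last overdrive

    def maybe_overdrive(self, xy, frame_idx, prev_level, target_level):
        at_black = prev_level == 0 and target_level == 0
        at_white = prev_level == 15 and target_level == 15
        if not (at_black or at_white):
            return None   # only rail-to-rail holds are eligible
        last = self.last_overdrive.get(xy, -OVERDRIVE_INTERVAL)
        if frame_idx - last < OVERDRIVE_INTERVAL:
            return None   # too soon since this pixel was last over-driven
        self.last_overdrive[xy] = frame_idx
        return (-15, -15, -15) if at_black else (15, 15, 15)
```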
The shifting module 408 and the scaling module 410 also include a candidate module for detecting which portions of a video sequence are candidates for shifting and/or scaling. A good candidate video clip for such dynamic range shifting and/or reduction is one where most of its motion-intense regions are close to the dynamic range borders. In particular, this candidate module determines if and how much dynamic range shifting/reduction is necessary. The candidate module first computes, for each gray level h, how many pixels, Sh, require transitions to that gray level and the average amount of change, Dh (in number of gray levels). For example, if a pixel is set from 14 to 15 and another pixel is set from 13 to 15, S15 = 2 transitions are done to gray level 15 with an average change of D15 = (1 + 2)/2 = 3/2 gray levels. More specifically, Sh counts the pixels whose new value is h, and Dh is the mean of |h − pn−1(x, y)| over those pixels.
The examples and formulations given here are for an entire video sequence of N frames and the entire X-by-Y region of each frame. These formulations can easily be altered to apply to subsets of the video frames and to sub-regions of each frame. When doing so, the transitions of dynamic ranges, either between frames or within a frame, need to be taken into account as well.
Once the candidate module computes Sh and Dh for each gray level, each offers different information. For example, if Sh has a small value for gray level h and Dh has a large value (note that the dynamic ranges of Sh and Dh are different, and their values should be considered within their own dynamic ranges, not relative to each other), this means not many pixels have gray level h, but when a pixel is set to h, the displacement of gray values is high. In contrast, if Sh has a large value and Dh has a small value, many pixels are set to h but the displacements of gray values are small and thus more quickly displayable on the display 100.
The candidate module processes the values of Sh and Dh individually or collectively (Sh·Dh, Sh+Dh, etc.) to identify the h value around which the most motion-intensive pixels cluster. The pixel values pn in the whole video sequence can then be shifted by ρ and/or multiplied by σ. The shift amount ρ and multiplication amount σ can be determined in such a way that the shifting and scaling guarantee a minimum dynamic range Rmin while moving the most motion-intense gray levels to the mid-gray region.
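The statistics Sh and Dh and the shift/scale step can be sketched as follows. The clustering heuristic is simplified, and the function names are illustrative rather than from the disclosure.

```python
from collections import defaultdict

def transition_stats(prev_frame, cur_frame):
    """S_h: number of pixels transitioning to gray level h;
    D_h: average magnitude of those transitions (in gray levels)."""
    counts, totals = defaultdict(int), defaultdict(int)
    for p_prev, p_cur in zip(prev_frame, cur_frame):   # flattened pixels
        if p_prev != p_cur:
            counts[p_cur] += 1
            totals[p_cur] += abs(p_cur - p_prev)
    return dict(counts), {h: totals[h] / counts[h] for h in counts}

S, D = transition_stats([14, 13], [15, 15])
assert S[15] == 2 and D[15] == 1.5    # matches the worked example above

def shift_and_scale(level, rho, sigma):
    """Apply the shift rho and scale sigma toward the mid-gray region."""
    return min(15, max(0, round((level + rho) * sigma)))
```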
Video Display Driver
The video display driver 301 receives a video stream 302 on signal line 312 for presentation on the display 100. In one embodiment, the video display driver 301 receives a re-formatted video stream, which has been processed by the video transcoder 304, from the memory buffer 320. The main routine control module 1102 of the video display driver 301 directs the video transcoder 304 to process the video stream 302 and generate pixel data. The main routine control module 1102 of the video display driver 301 also directs the loading of waveforms into the frame buffer 1104 and the repeated updating of display commands to activate the display controller 308 until the end of the video playback process.
The main routine control module 1102 of the video display driver 301 initiates the process performed by the video transcoder 304. The main routine control module 1102 includes a processor 1101. The processor 1101 can be any general-purpose processor for implementing a number of processing tasks. Generally, the processor 1101 is coupled to the display controller 308 and processes data received by the main routine control module 1102. The main routine control module 1102 also loads waveforms into the frame buffer 1104 and updates display commands repeatedly to activate the display controller 308 until the end of the video playback. More details describing the steps performed by the main routine control module 1102 are provided below.
The frame buffer 1104 receives data from the video frame update module 1106 and stores information to be used by the display controller 308. The frame buffer 1104 contains the pixel data that is used by the display controller 308 to update the display.
The video frame update module 1106 of the video display driver is initiated by the main routine control module 1102 and controls the process of copying video frames one by one from the memory buffer 320 to the frame buffer 1104 in real time during video playback. Details describing the steps performed in this process by the video frame update module 1106 are provided below.
In one embodiment, the main routine control module 1102, frame buffer 1104 and video frame update module 1106 are three separate modules containing software routines and are adapted for communication with the display controller 308. In another embodiment, the main routine control module 1102, frame buffer 1104 and video frame update module 1106 are hardware devices operating on the EPD 100.
Methods
In the method for displaying video, the main routine control module 1102 first loads the waveforms and configures the display controller 308. The frame buffer 1104 is then initialized 1208 by resetting the frame buffer 1104 to a blank image. The video frame update module 1106 is then initiated 1210.
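The overall driver flow of this section can be summarized in sketch form. Every interface below (load_waveforms, fill, read_frame, update_display, stop) is an assumed name for illustration, not the disclosed API.

```python
def run_video_playback(display_controller, waveforms, frame_buffer,
                       memory_buffer, frame_count):
    """Hedged sketch of the playback loop: load waveforms, blank the frame
    buffer (step 1208), then copy frames from the memory buffer 320 and
    re-issue display updates until the end of playback (step 1210)."""
    display_controller.load_waveforms(waveforms)         # from module 310
    frame_buffer.fill(blank=True)                        # reset to blank image
    for n in range(frame_count):                         # update module 1106
        frame_buffer.write(memory_buffer.read_frame(n))  # one frame at a time
        display_controller.update_display()              # activate controller 308
    display_controller.stop()
```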
The foregoing description of the embodiments of the present invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the present invention be limited not by this detailed description, but rather by the claims of this application. As will be understood by those familiar with the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, routines, features, attributes, methodologies and other aspects are not mandatory or significant, and the mechanisms that implement the present invention or its features may have different names, divisions and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, routines, features, attributes, methodologies and other aspects of the present invention can be implemented as software, hardware, firmware or any combination of the three. Also, wherever a component, an example of which is a module, of the present invention is implemented as software, the component can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of ordinary skill in the art of computer programming. Additionally, the present invention is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the present invention, which is set forth in the following claims.
This application is a continuation-in-part of U.S. patent application Ser. No. 12/059,118, filed Mar. 31, 2008, entitled “Video Playback on Electronic Paper Displays”, which claims priority under 35 U.S.C. §119(e) from U.S. Provisional Patent Application No. 60/944,415, filed Jun. 15, 2007, entitled “Systems and Methods for Improving the Display Characteristics of Electronic Paper Displays,” the entire contents of which are hereby incorporated by reference in their entireties.