The present invention relates to a method and apparatus for simulating the appearance of an image on a physical display device.
Fabricating a display prototype is a complex and time-consuming process. Even for the simplest case of a passive matrix display, this fabrication involves at least the following steps: patterning the row and column substrates; laminating the active material between the substrates followed by edge sealing; developing drive electronics and software; and connecting the display to appropriate drive electronics. The fabrication of an active matrix display presents an added challenge due to the need to include one or more transistors for each pixel, integrated into the substrate. While interfacing software (for example, the LabVIEW program (National Instruments Corp.)) and sources for low volume printed circuit boards and electronics have made the task easier, fabricating a prototype that is sufficiently portable and polished for customer validation is much more daunting. As a result, prototyping can take anywhere from a few weeks to several months, depending on the particular technology involved and the display specifications, for example size and pixels per inch. Obtaining adequate customer feedback requires screening numerous display formats, including form factor, pixel density, fill factor, and color gamut. Screening many sample display formats is crucial in the display industry because of the significant capital investment required to establish a manufacturing line to make the displays.
A method for generating and providing a simulated image, consistent with the present invention, includes the steps of receiving a source image and first parameters for a first display device, and generating and displaying a simulated image on a second display device having second parameters. The first parameters are different from the second parameters, and the simulated image displayed on the second display device provides a visual indication of how the source image would appear when displayed on the first display device.
An apparatus for generating and providing a simulated image, consistent with the present invention, includes an image module for receiving a source image, a parameters module for receiving first parameters for a first display device, and a generate module for generating and displaying a simulated image on a second display device having second parameters. The first parameters are different from the second parameters, and the simulated image displayed on the second display device provides a visual indication of how the source image would appear when displayed on the first display device.
The method and apparatus can also be used to provide a visual indication of how the source image would appear when displayed on the first display device under varying lighting conditions and under varying viewing angles.
The invention may be more completely understood from the following detailed description of various embodiments of the invention, considered in connection with the accompanying drawings.
Introduction
An accurate display simulation is a viable alternative to an actual device for gathering reliable customer feedback and input. Simulations offer numerous advantages over fabricating actual prototypes including significantly lower cost and turn around time, ease of varying virtually all display parameters (e.g., form factor, pixel density, fill factor, color scheme, and content), and portability since they can be demonstrated to customers electronically or in print form.
Display Simulation System
Display simulation software, executed by machine 10, receives a digital image and simulates its appearance on a display. Two key attributes of a display are the pixel density, measured in pixels per inch (ppi), and the fill factor (also referred to as the aperture ratio). Images created by graphics software, such as the Adobe Photoshop program (Adobe Systems Inc.), are typically seamless, meaning the pixels are in intimate contact with each other. In a real physical display device, however, the manufacturing process limits the proximity of adjacent pixels. In addition, conductive traces and active components such as thin film transistors (TFTs) can mask portions of the display. This leads to an inactive area between a pixel and its nearest neighbors. This region cannot be switched on or off like the active area within the pixels, and it thus influences the appearance of text or graphics shown on the display. The ratio of the active area to the total area of a display defines its fill factor. Within each frame, each pixel has a defined color and brightness (Red, Green, Blue (RGB) value) while the inactive area has a background color.
To simulate an image as it would appear on a real display, the system generates an n×n array of pixels (a “super pixel”) for each source pixel in the source image. A fraction of the pixels within the super pixel array, defined by the desired fill factor of the display, is then assigned the RGB value of the source pixel, while the remaining pixels are filled in with the background color. This process is repeated for each pixel in the source image. The super pixels are then tiled to construct the simulated image. The simulated image may be resized to the original source image size by increasing its pixel density. This resizing preserves the new information encoded in the image while maintaining the dimensions of the source image. The source pixel and/or super pixel is not limited to a square shape and could be any desired shape, for example a triangle, circle, polygon, or other shape. For example, the source pixel could be rectangular or another shape, and the super pixel could be an array of n×n′ pixels in which m×m′ pixels are assigned the source pixel RGB value, where n≠n′ and m≠m′.
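This expansion step can be illustrated with a short sketch. The following is a minimal Python fragment, assuming the source image is a numpy RGB array with square pixels and an active m×m block placed in the upper left corner of each n×n super pixel; the function and parameter names are hypothetical and not part of the described system.

```python
# Minimal sketch of the super pixel expansion described above.
import numpy as np

def simulate_fill_factor(source, n, m, background):
    """Expand each source pixel into an n x n super pixel whose upper-left
    m x m block carries the source RGB value; the rest is background."""
    h, w, _ = source.shape
    out = np.empty((h * n, w * n, 3), dtype=source.dtype)
    out[...] = background                         # start with the inactive color
    for y in range(h):
        for x in range(w):
            out[y*n : y*n + m, x*n : x*n + m] = source[y, x]
    return out                                    # fill factor = (m*m)/(n*n)
```

The effective fill factor of the result is m²/n², so the desired fill factor dictates the choice of m for a given n.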
The display simulation process is illustrated in the accompanying drawings.
To simulate the appearance of the source image 40 on a display with a 64% fill factor, machine 10, executing the software, generates a 5×5 super pixel from each source pixel. It then assigns the RGB value of the source pixel (light gray) to the upper left 4×4 block (16 pixels) within each super pixel, and the remaining pixels (9 total) within the array are assigned the background color (dark gray). For 24 bit color (approximately 16.7 million colors), each R, G, B color channel is assigned 8 bits (values 0-255) and each pixel is assigned an RGB value in the range (0-255 R, 0-255 G, 0-255 B). The fill factor is the ratio of the number of pixels assigned the source pixel's RGB value to the total number of pixels within the super pixel: 16/25 = 64%. Since the number of pixels in both the x and y dimensions has increased by a factor of 5, the dimensions of the image 42 have also increased by the same factor. To scale the simulated image to the dimensions of the source image 40, its pixel density is increased by a factor of five. This conserves the number of pixels in the simulation 44 and ensures that no details are lost after resizing.
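The arithmetic of this example can be checked directly; in the brief sketch below, the 40 ppi source density is assumed purely for illustration.

```python
# Check of the 5x5 super pixel, 4x4 filled-block example (illustrative values).
n, m = 5, 4
fill_factor = (m * m) / (n * n)     # 16/25 = 0.64, i.e. a 64% fill factor
source_ppi = 40                     # assumed source pixel density
simulated_ppi = source_ppi * n      # 200 ppi restores the original physical size
print(fill_factor, simulated_ppi)
```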
Comparison of the source and simulated images (40 and 44) reveals two main visual effects: the text in the simulated image appears more pixelated since each pixel is outlined by an inactive border area; and the overall brightness of the image is lower since a significant fraction (36%) of the image is occupied by the dark gray background. In addition to the fill factor, the colors in the simulated image need to be accurately matched to those in the real display. The RGB values of the pixels and the inactive background region in the real display can be determined using color corrected digital cameras, scanners, or imaging colorimeters. The source and simulated images are then created using the color palette of the real display.
In this manner, a high resolution display can be used to simulate the appearance of an image on a display having a lower resolution. In other words, a display having first parameters is used to simulate the appearance of an image on a display having second parameters different from the first parameters. These parameters relate to the actual construction of a display device and can include, for example, size (form factor), ppi, and fill factor.
Display Simulator Screen
The features of an exemplary interface 50 for the system are shown in the accompanying drawings.
Interface 50 has the following sections.
Section 52: The source image raw data is received and displayed. The raw data can be a bitmap file or a file in a compressed format such as JPEG (Joint Photographic Experts Group), GIF (Graphics Interchange Format), or PNG (Portable Network Graphics).
Section 54: The fill factor is input in this section. If the simulated image is to be printed, then the ppi of the simulated image must be matched to the printer resolution to ensure an accurate print. In this case the size of the super pixel is constrained by the ratio of the printer resolution in dots per inch (dpi) to the ppi of the source image. For instance, for a 600 dpi printer and a 40 ppi source image, the simulation uses a 15×15 (600/40) super pixel array. The number of pixels filled in with the source pixel color is determined by the required fill factor and is entered in the “Orig. Pix. Mult.” section. For non-print applications the user enters the fill factor and the tolerance. The software then determines the size of the n×n super pixel array and the number of pixels, m×m, to be filled with the source pixel RGB value to attain the desired fill factor within the tolerance value (one way to make this choice is sketched after this list of sections). Alternatively, the user can manually enter values for m and n. To accurately display the simulated image on a monitor, the number of pixels in the x and y directions must not exceed those of the monitor along the same axes, meaning there should be a one-to-one correspondence between the pixels in the simulation and those on the monitor.
Section 56: The fill color for the background is set in this section. The user has several options as follows: set the fill color to black (R,G,B=0,0,0); choose a color from a palette (section 58); enter specific R, G, B values; or select a color from the source image in section 52 by clicking anywhere within the image.
Section 60: Depending on the size of the source image and the super pixel used, the simulated image file can be quite large. For example, the file for a simulated image of a VGA (Video Graphics Array) resolution source image having 640×480 pixels, using a 20×20 super pixel array with 24 bit color, would occupy approximately 370 megabytes. This can exceed the available random access memory (RAM) on many computers, especially if other applications are running simultaneously, and can lead to memory issues. To overcome this, the software can optionally process the source image in sections (a sketch of this sectioned approach follows this list of sections). After the super pixels are created and tiled for each section, the current section of the simulated image is written to a file. The input in this field determines the size of this section and can be entered either as a fraction of the total available memory or as a specific value. Subsequent simulated sections are appended to the pre-existing simulated file. Only a fraction of the simulated image is held in RAM at any given time. In this scheme, the size of the simulated image is limited only by the available hard drive space. In addition, creating the bitmap (.BMP) file directly speeds up the simulation process. The source image can also be read and processed in sections so that it, too, is not limited by the available RAM.
Section 62: This section displays the current simulation settings including the source and simulated image pixel densities, number of pixels in the source and simulation, fill color, memory allocation, and optionally other settings. In addition, the actual dimensions of the pixels and the inactive area between them in the simulated display are also shown in this section.
Section 64: The simulated image is displayed in this section.
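As noted in the description of section 54, the software chooses n and m to reach the requested fill factor within a tolerance. One simple way to make that choice is sketched below; the search strategy is an assumption for illustration rather than the method actually used by the software.

```python
# Pick the smallest n x n super pixel whose m x m filled block reaches the
# target fill factor within the tolerance (illustrative search strategy).
def choose_super_pixel(fill_factor, tolerance, n_max=50):
    for n in range(2, n_max + 1):
        m = round(n * fill_factor ** 0.5)          # so that m*m/(n*n) ~ fill_factor
        if 0 < m <= n and abs((m * m) / (n * n) - fill_factor) <= tolerance:
            return n, m
    return None

# For printing, n is instead fixed by printer resolution over source density,
# e.g. 600 dpi / 40 ppi = 15, and only m is derived from the fill factor.
print(choose_super_pixel(0.64, 0.01))              # -> (5, 4)
```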
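The sectioned processing described for section 60 can likewise be sketched as follows, reusing the expansion function from the earlier sketch. The raw byte dump is a simplification; writing a real .BMP file also requires a header and bottom-up row ordering.

```python
# Expand the source image a band of rows at a time and append each band to the
# output file, so only a fraction of the simulated image is held in RAM.
def simulate_in_sections(source, n, m, background, rows_per_section, path):
    h = source.shape[0]
    with open(path, "wb") as f:
        for start in range(0, h, rows_per_section):
            band = source[start : start + rows_per_section]
            expanded = simulate_fill_factor(band, n, m, background)  # earlier sketch
            f.write(expanded.tobytes())            # append this section's pixels

# Memory estimate for the VGA example: (640*20) x (480*20) pixels, 3 bytes each.
print(640 * 20 * 480 * 20 * 3 / 1e6, "MB")         # ~369 MB
```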
Display Simulator Methodology
There are two steps involved in incorporating the angle dependence of the displays into the simulator. The first is physically changing the perspective of the image by skewing its dimensions. Assuming a rotation about a vertical axis, the width of the image becomes narrower. Vertically, one edge expands and appears closer to the viewer, while the opposite edge shrinks and appears farther away, and the image portion between the edges can be scaled linearly. The result provides the appearance of a rotated image.
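A minimal sketch of this skewing step is given below, assuming nearest-neighbor sampling and illustrative edge scale factors; it is not the exact transform used by the simulator.

```python
# Keystone-style warp suggesting rotation about a vertical axis: the width is
# foreshortened and column heights vary linearly from the near (expanded) edge
# to the far (shrunk) edge. Scale factors are illustrative assumptions.
import numpy as np

def skew_for_rotation(img, angle_deg, near_scale=1.1, far_scale=0.8):
    h, w, _ = img.shape
    new_w = max(1, int(round(w * np.cos(np.radians(angle_deg)))))  # narrower width
    out_h = int(np.ceil(h * max(near_scale, far_scale)))
    out = np.zeros((out_h, new_w, 3), dtype=img.dtype)
    for x in range(new_w):
        t = x / max(1, new_w - 1)                  # 0 = near edge, 1 = far edge
        col_h = max(1, int(round(h * ((1 - t) * near_scale + t * far_scale))))
        src_x = min(w - 1, int(x / new_w * w))     # nearest source column
        col = img[np.linspace(0, h - 1, col_h).astype(int), src_x]
        y0 = (out_h - col_h) // 2                  # keep columns vertically centered
        out[y0 : y0 + col_h, x] = col
    return out
```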
The second step is to transform the original colors to new colors based upon the viewing angle. The spectrum of intensity versus wavelength for a color at normal viewing can be measured to characterize the original image colors. Sample data can be obtained from known measurements of peak reflection wavelength versus viewing angle and of reflectance versus viewing angle.
The data points are fit to a second-degree polynomial to produce a model. This model is applied to the spectrum at normal viewing, which results in a reduced and shifted spectrum. The amount of reduction and shift is directly proportional to the viewing angle. Once the new spectrum is calculated, it is transformed to RGB values. These RGB values fill the pixels of the skewed image, and the rotated image with angle-dependent colors is complete.
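The fitting and spectrum adjustment can be sketched as follows; the angle, wavelength shift, and reflectance values below are illustrative placeholders rather than measured data.

```python
# Fit second-degree polynomials of peak-wavelength shift and relative
# reflectance versus viewing angle, then apply them to the normal-view spectrum.
import numpy as np

angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0])                 # degrees
peak_shift_nm = np.array([0.0, -3.0, -12.0, -27.0, -48.0])       # illustrative
relative_reflectance = np.array([1.0, 0.97, 0.88, 0.74, 0.55])   # illustrative

shift_model = np.polyfit(angles, peak_shift_nm, 2)
gain_model = np.polyfit(angles, relative_reflectance, 2)

def spectrum_at_angle(wavelengths_nm, spectrum, angle_deg):
    """Return the reduced and shifted spectrum at the given viewing angle."""
    shift = np.polyval(shift_model, angle_deg)                   # peak shift in nm
    gain = np.polyval(gain_model, angle_deg)                     # amplitude reduction
    shifted = np.interp(wavelengths_nm - shift, wavelengths_nm, spectrum,
                        left=0.0, right=0.0)
    return gain * shifted
```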
The transformation process from spectrum to RGB values will vary under different lighting conditions. As long as the original spectrum is not dependent upon the lighting conditions (it must be measured with lighting cancellation techniques), the new angle-dependent spectrum is not lighting-dependent either. The International Commission on Illumination (CIE) has developed the idea of color spaces, which are ways to associate colors that the human eye perceives with numeric values. These color spaces are used to transform a spectrum to values that the software can process for display of the appropriate color for each pixel on the monitor (display device). The color spaces are shifted based on the input values for the color white. The CIE has also conveniently developed these white values for many lighting conditions. Depending upon which lighting is present, the values for white can be easily modified when the color space is used during the transformation from final angle-dependent spectrum to new angle-dependent RGB values.
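A simplified sketch of the transformation from the angle-dependent spectrum to RGB values is given below. The CIE color matching function table is assumed to be supplied by the caller, the XYZ-to-sRGB matrix is the standard one, and a complete implementation would also apply chromatic adaptation when the illuminant's white point differs from D65.

```python
# Integrate the spectrum against the CIE color matching functions to obtain
# XYZ, normalize by the chosen illuminant's white, and convert to linear sRGB.
import numpy as np

XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def spectrum_to_rgb(wavelengths_nm, spectrum, cmf, white_xyz):
    """cmf: (N, 3) CIE x-bar, y-bar, z-bar sampled at wavelengths_nm (assumed
    available); white_xyz: XYZ of the illuminant's white, e.g. D65 or D50."""
    xyz = np.trapz(spectrum[:, None] * cmf, wavelengths_nm, axis=0)
    xyz = xyz / white_xyz[1]                  # scale so the white has Y = 1
    rgb = XYZ_TO_SRGB @ xyz
    return np.clip(rgb, 0.0, 1.0)             # linear RGB; gamma not applied
```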
Therefore, the simulator takes the image at normal viewing, physically changes the dimensions to give an appearance of rotation, reduces and shifts the color spectrum depending upon the new viewing angle, and applies the correct color space model for the lighting conditions requested during the RGB value calculation from the angle-dependent spectrum.
Simulation Factors and Examples
Electronic shelf labels are potential replacements for the printed price tags currently being used. They offer significant advantages, including the following: lower labor and material costs over the long run, since the labels can be updated remotely and do not need to be replaced when the content changes; improved pricing accuracy; and ease of updating. Various two-color combinations (yellow/black, black/white, and blue/white) can be achieved using cholesteric liquid crystal, electrophoretic, and electrochromic display technologies, respectively.
While the present invention has been described in connection with an exemplary embodiment, it will be understood that many modifications will be readily apparent to those skilled in the art, and this application is intended to cover any adaptations or variations thereof. For example, different interface sections and machines may be used without departing from the scope of the invention. This invention should be limited only by the claims and equivalents thereof.