Color vision impairment is a condition that affects a significant portion of the population. Approximately one of every twenty-five people suffers from red-green color-vision impairment, and six to eight percent of the male population is red-green color-vision impaired. A red-green color-vision impaired observer is generally unable to distinguish between green and red, as well as between yellow and any of the shades of orange, which are formed from combinations of red and green.
For these people, visual discrimination of color-coded data is difficult, if not practically impossible, when green, red, or yellow data are adjacent in a scene or image. In the color space of such persons, the red-green hue dimension is missing, and red and green are both seen as yellow; they have primarily a yellow-blue dimension.
Even people with normal color vision can, at times, have difficulty distinguishing between colors. Lenses of the eyes tend to cloud with aging, due to a host of causes, such as cataracts. The elderly often experience changes in their ability to sense colors, and many see objects as if they have been viewed through yellowish filters. Additionally, over time, ultraviolet rays degrade proteins in the eyes, light having short wavelengths is absorbed, and blue-cone sensitivity is thereby reduced. As a result, the appearance of most, if not all, colors changes, with yellow tending to predominate, or a blue or bluish-violet color tending to become darker. Specifically, “white and yellow,” “blue and black,” and “green and blue” gradually become more difficult to distinguish. Similarly, even a healthy individual with “normal” vision can perceive colors differently when he or she is at a higher altitude than he or she is accustomed to, or when under certain medications.
Software programs that assist color-vision impaired or other observers in distinguishing between colors do exist, but they have been limited primarily to configuring computers so that the observer can move a pointer over various positions on the computer's display monitor and be cued with information indicative of the color content of the object pointed to by the pointer. However, such prior art systems and methods, although helpful, have utility only for images viewed on a computer and fail to provide solutions for most activities of daily living.
There is therefore a need for systems and methods to identify one or more colors for a user, for example a color-vision impaired observer, while at the same time enabling the user to choose, in real time or otherwise, an image of a scene of interest, from which the colors are identified. In one aspect, the systems and methods described herein integrate with a commercial portable electronic device to allow the user to capture an image of a scene of interest; display the captured image on a display screen associated with the portable device; and identify for the user one or more colors of one or more positions or regions, selected by the user, in the image, and to do so in a form and manner perceptible to the user.
In one embodiment, the systems and methods described herein operatively cooperate or integrate with a commercial cellular telephone, equipped with a digital camera, that would allow a color-vision impaired or other user to differentiate colors in an image captured by the digital camera. Once the user has taken a picture of a scene having an object or group of objects, the software program, in one embodiment, provides the user with a visual or auditory cue indicative of the color of the object that a cursor, movable by the user, is over at any given time, thus allowing the user to distinguish between colors in the image.
In another embodiment, the systems and methods described herein can be used on real-time images that the camera device captures as the user aims the camera at various scenes of interest, perhaps panning the camera, zooming in or out of particular objects in a scene, etc. Additionally, software according to an optional embodiment of the systems and methods described herein assigns different texture patterns to different colors. For example, red can be converted to stripes on the image and green can be converted to dots, thereby enabling the user to easily differentiate one color from another in the digital image.
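The color-to-texture substitution described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the pixel classifier and the pattern assignments (stripes for red, dots for green) are simplified assumptions; a real implementation would render the pattern as an overlay on the image region.

```python
# Hypothetical, simplified classifier: label a pixel by its dominant channel.
def classify(rgb):
    r, g, b = rgb
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "other"

# Assumed mapping of reference colors to texture patterns, per the text.
PATTERNS = {"red": "stripes", "green": "dots"}

def texture_for_pixel(rgb):
    # None means the pixel is left unaltered in the displayed image.
    return PATTERNS.get(classify(rgb))

print(texture_for_pixel((200, 30, 30)))   # stripes
print(texture_for_pixel((20, 180, 40)))   # dots
```

In a full implementation the same lookup would drive a rendering pass that redraws each matching region with its assigned pattern.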
Furthermore, the software can display by flashing, highlighting, and/or altering a color or texture pattern, other objects in the image that are identified to map to the same color as the position or region selected by the user. In a further embodiment of this feature, the user can designate one or more specific colors and prompt the software integrated with the cellular phone to configure the phone to flash, highlight, alter the color and/or texture pattern of, or otherwise identify for the user other positions or regions in the image associated with the same color.
As cellular phones are small and convenient to carry, the fact that the software according to the systems and methods described herein can be installed on or otherwise cooperatively operate with the cellular phone enables a color-vision impaired person or other observer to take a digital picture of a scene of interest and ascertain the color of various objects at any time in an unobtrusive manner and without embarrassment.
In one aspect, the invention includes a method of identifying at least one color for a user. The method includes the steps of: allowing the user to capture an image with a camera; displaying the captured image on a display screen; in response to the user selecting a position or region in the displayed image, identifying a set of at least one color parameter associated with the selected position or region; mapping the set of one or more color parameters to one or more reference colors; and identifying for the user, in a form and manner perceptible to the user, the one or more reference colors to which the color parameters of the selected position or region are mapped.
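The mapping step can be sketched as a nearest-neighbor search over a small reference palette. The palette below and the use of squared Euclidean distance in RGB are illustrative assumptions; the specification leaves the exact parameter set and distance measure open.

```python
# Assumed reference palette for illustration; a deployed system could use
# any a-priori chosen set of named reference colors.
REFERENCE_COLORS = {
    "Red":    (255, 0, 0),
    "Green":  (0, 255, 0),
    "Blue":   (0, 0, 255),
    "Yellow": (255, 255, 0),
    "Black":  (0, 0, 0),
    "White":  (255, 255, 255),
}

def nearest_reference(rgb):
    """Map a pixel's color parameters to the closest named reference color."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLORS, key=lambda name: dist2(rgb, REFERENCE_COLORS[name]))

print(nearest_reference((200, 40, 30)))  # Red
```

The returned name is what would be presented to the user, for example in a floating caption or as an audible cue.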
According to one practice, the method includes indicating to the user an additional position or region having corresponding color parameters that map to the same reference colors as the user-selected position or region. According to one embodiment, the additional position or region is indicated by displaying on the screen at least one visual icon, perceptible to the user, identifying the additional position or region as being associated with the reference colors. The displayed visual icon may include one or more of a displayed intensity level, a displayed texture pattern, and a displayed color corresponding to the at least one additional position or region; each of these may be time-varying, for example, flashing or otherwise changing with time.
According to another aspect, the method of identifying at least one color for a user includes allowing the user to capture an image with a camera and to also choose a designated color of interest, for example, a color with respect to which the user is color-vision impaired. The method further includes the steps of displaying the captured image on a display screen; determining an additional position or region in the displayed image having an associated set of one or more color parameters that map to the selected color; and indicating, in a form perceptible to the user, the additional position or region in the displayed image. The method by which the additional position or region is indicated to the user in this aspect is similar to the one described above, for example, by flashing, altering the color and/or texture of, highlighting, etc. the additional position or region.
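The search for additional positions matching a designated color can be sketched as a scan producing a set of coordinates to flash or highlight. The classification rule and the toy two-by-two image below are illustrative assumptions.

```python
# Hypothetical, simplified rule for "maps to Red"; real systems would reuse
# the reference-color mapping applied to the user's selection.
def maps_to_red(rgb):
    r, g, b = rgb
    return r > 150 and g < 100 and b < 100

# Toy image: a list of rows of (R, G, B) pixels.
image = [
    [(200, 30, 30), (20, 180, 40)],
    [(10, 10, 10),  (220, 50, 60)],
]

# Collect (row, column) coordinates of every pixel mapping to the designated color.
matches = [(y, x) for y, row in enumerate(image)
           for x, rgb in enumerate(row) if maps_to_red(rgb)]
print(matches)  # [(0, 0), (1, 1)]
```

The resulting coordinate list is what the display logic would then flash, recolor, or texture to indicate the matching regions.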
Embodiments employing other portable devices, such as a personal digital assistant (PDA), a Pocket PC, and a digital camera having a display screen are within the scope of the systems and methods described herein. Further features and advantages of the invention will be apparent from the following description of illustrative embodiments, and from the claims.
The following figures depict certain illustrative embodiments of the invention in which like reference numerals refer to like elements. These depicted embodiments are to be understood as illustrative of the invention and not as limiting in any way.
To provide an overall understanding of the invention, certain illustrative practices and embodiments will now be described, including a system and method for identifying one or more colors for a user, in particular, a color-vision impaired observer. The systems and methods described herein can be adapted, modified, and applied to other contexts; such other additions, modifications, and uses will not depart from the scope hereof.
INDEXED color uses a palette of up to 256 colors. By limiting the palette of colors, indexed color can reduce file size while maintaining visual quality.
LAB COLOR (a.k.a. L*a*b and CIELAB) has a lightness component (L) that ranges from 0 to 100, a green to red range from +120 to −120, and a blue to yellow range from +120 to −120. LAB is used by such software as Photoshop as an intermediary step when converting from one color space to another. LAB is based on the discovery that somewhere between the optical nerve and the brain, retinal color stimuli are translated into distinctions between light and dark, red and green, and blue and yellow.
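A conversion from sRGB to L*a*b* can be sketched from the standard published formulas (sRGB linearization, the sRGB-to-XYZ matrix, then the CIELAB functions relative to the D65 white point). The constants below are the usual published values, not parameters taken from this document.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65), using the standard formulas."""
    # 1. sRGB 0..255 -> linear light
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2. linear RGB -> CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # 3. XYZ -> L*a*b* relative to the D65 white point
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_out = 200 * (fy - fz)
    return L, a, b_out

print(srgb_to_lab(255, 255, 255))  # approximately (100, 0, 0)
```

Note how white lands at L ≈ 100 with a and b near zero, matching the ranges described above.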
HSL is a spherical color space in which L is the axis of lightness, H is the hue (the angle of a vector in a circular hue plane through the sphere), and S is the saturation (purity of the color, represented by the distance from the center along the hue vector).
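The hue/saturation/lightness decomposition can be demonstrated with Python's standard-library `colorsys` module, which implements the same components under the HLS component ordering:

```python
import colorsys  # standard library

# Pure red decomposes into hue 0 degrees, full saturation, mid lightness.
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)
print(h * 360, s, l)  # 0.0 1.0 0.5
```

The hue angle, saturation, and lightness returned here correspond directly to the H, S, and L components described above.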
MULTICHANNEL uses 256 levels of gray in each channel. A single Multichannel image can contain multiple color modes—e.g., CMYK colors and several spot colors—at the same time.
MONITOR RGB is the color space that reflects the current color profile of a computer monitor.
sRGB is an RGB color space developed by Microsoft and Hewlett-Packard that attempts to create a single, international RGB color space standard for television, print, and digital technologies.
ADOBE RGB contains an extended gamut to make conversion to CMYK more accurate.
YUV (aka Y′CbCr) is the standard for color television and video, where the image is split into luminance (i.e., brightness, represented by Y) and two color difference channels (i.e., blue and red, represented by U and V). The color spaces for televisions and computer monitors are inherently different, which often causes problems with color calibration.
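The luma/chroma split described above can be sketched with the standard ITU-R BT.601 coefficients for a full-range 8-bit Y′CbCr conversion (the coefficients are the published BT.601 values, not parameters from this document):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range 8-bit RGB -> Y'CbCr using ITU-R BT.601 coefficients."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b                    # luma
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128          # blue difference
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128          # red difference
    return y, cb, cr

print(rgb_to_ycbcr(128, 128, 128))  # mid-gray: both chroma channels sit at 128
```

A neutral gray carries no color-difference information, so Cb and Cr both rest at the 128 midpoint, with all the signal in the luma channel.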
PANTONE is a color matching system maintained by Pantone, Inc.
When discussing color theory in general, particularly as it applies to digital technologies, there are several other important concepts:
HUE—The color reflected from, or transmitted through, an object. In common use, hue refers to the name of the color such as red, orange, or green. Hue is independent of saturation and lightness.
SATURATION (referred to as CHROMINANCE when discussing video)—The strength or purity of a color. Saturation represents the amount of gray in proportion to the hue, measured as a percentage from 0% (gray) to 100% (fully saturated).
LIGHTNESS—Lightness represents the brightness of a color from black to white, measured on a scale of 0 to 100.
LOOK-UP TABLE—A look-up table is a mathematical formula or a store of data that controls the adjustment of lightness, saturation, and hue in a color image or images, or the conversion factor for converting between color spaces.
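A look-up table of this kind can be sketched as a precomputed array indexed by channel value: the adjustment formula is evaluated once per possible input rather than once per pixel. The gamma curve below is an illustrative choice of adjustment, not one specified in this document.

```python
# Precompute a 256-entry lightness-adjustment (gamma) curve once.
GAMMA = 2.2
lut = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]

def brighten(pixel_rgb):
    # Per-pixel work is now just three table lookups, no exponentiation.
    return tuple(lut[c] for c in pixel_rgb)

print(brighten((64, 128, 255)))  # (136, 186, 255)
```

The same pattern applies to saturation or hue adjustments and to color-space conversion factors: the expensive formula is folded into the table ahead of time.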
Turning back to
Referring to
The systems and methods disclosed herein include, in one embodiment, software stored in memory/software area 314. The software can be used on images captured with the digital camera 324, resulting in the display of the image on the display screen 312A. The display screen typically is a liquid crystal display (LCD) screen, but other embodiments, including plasma or cathode ray tube (CRT) displays, are within the scope of the disclosure herein.
The camera 324 can generate a file in any format, such as the GIF, JPEG, TIFF, PBM, PGM, PPM, EPSF, X11 bitmap, Utah Raster Toolkit RLE, PDS/VICAR, Sun Rasterfile, BMP, PCX, PNG, IRIS RGB, XPM, Targa, XWD, possibly PostScript, and PM formats on workstations and terminals running the X11 Window System, or any other suitable image file format.
The camera 324 may be used to view a scene in real time. For example, the camera 324 may employ small sensors used in a typical cell phone or consumer digital camera, which can read image data in real time using a scheme called “interline transfer.” In this embodiment, charge coupled device (CCD) electronics control exposure rather than a mechanical shutter. The user can then use the cell phone's camera to look at his or her surroundings by panning the cell phone and looking at the cell phone's display to view objects. In this real-time setting, too, the systems and methods described herein can be employed by the user to identify colors of objects shown on the display screen.
The cursor can be moved over various parts of the image using the position selector 322 and the cursor will continuously or intermittently display the color of the position in the image that it is over. In an alternative embodiment, the floating caption or other visual icon can appear with an active selection by the user of a particular position or region in the image, without having to pause the cursor over the image.
Alternatively, or additionally, the embodiment of
Although
Moreover, although
The cellular phone 300 can be a Motorola V300 or any suitable, and preferably commercially-available off-the-shelf cellular phone that is equipped with a digital camera. A Nokia 6630 SmartPhone, having high-resolution and fast imaging and video capability (including zoom and sequence mode and mobile broadband access for multimedia content, live video streaming and video conferencing), MP3 audio output, and sufficient memory, is a particular example of a commercially-available cellular phone suitable for integrating with or implementing the systems and methods described herein.
These cellular phones can be programmed using well-known development kits for platforms such as the Symbian OS (operating system). Additionally, there are companies that offer product design and development services to those seeking professional assistance in creating new software products for use in cellular phones.
In another embodiment, any digital camera device, including digital cameras that do not have cellular phone capability, can be used with this software. The digital camera can be a Canon Powershot S400 or any commercially-available off-the-shelf digital camera. In a further optional embodiment, the camera device may be a web camera of the kind commonly employed to capture images for display on, and transfer over, a computer network. In an additional, optional embodiment, the camera device may be a personal digital assistant (PDA) device that is equipped with a digital camera, including the ViewSonic V36. The systems and methods described herein may also be implemented on, or integrated with, Pocket PCs or other handheld devices.
In the embodiment where pausing over the object prompts a callout of the color, the digital camera may be enabled with motion estimation software, known in the art of image and video processing, to detect whether there is camera motion. If motion is determined to be below a predetermined threshold (where the threshold is related to the sensitivity of the motion detection algorithm being employed), then the user is assumed to have paused over the object, indicating that he or she wishes to know the color of that object.
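The pause-detection idea can be sketched with mean absolute difference between consecutive grayscale frames as a crude motion score. Both the scoring scheme and the threshold value are illustrative assumptions; as the text notes, the threshold depends on the sensitivity of whatever motion-detection algorithm is actually employed.

```python
# Assumed tuning constant; in practice this would be calibrated per device.
MOTION_THRESHOLD = 5.0

def motion_score(prev, curr):
    """Mean absolute difference between two grayscale frames (lists of rows)."""
    n = sum(len(row) for row in prev)
    total = sum(abs(a - b) for rp, rc in zip(prev, curr)
                for a, b in zip(rp, rc))
    return total / n

def user_paused(prev, curr):
    # Below-threshold motion is taken to mean the user is holding the
    # camera still over an object whose color he or she wants identified.
    return motion_score(prev, curr) < MOTION_THRESHOLD

f1 = [[10, 10], [10, 10]]
f2 = [[11, 10], [10, 12]]
print(user_paused(f1, f2))  # True: score 0.75 is below the threshold
```

When `user_paused` fires, the system would proceed to identify and announce the color at the cursor position.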
Alternatively, or additionally, the camera motion may be determined using techniques known in the electromechanical art of camera motion detection, employing, for example, gyroscopic or other techniques.
Turning to
To the right of the pie chart is a key table that equates different colors on the graph to different kinds of information. In
In
Alternatively, when colored data in an image is known to have certain color names, for example, when a map of highway congestion is known to mark congested zones as red and uncongested zones as green, the color-vision impaired person or other user will be able to select a desired color name from an on-screen list of color names, and colors in the image corresponding to that name will flash or be otherwise identified.
Although,
In one aspect, the systems and methods described herein discretize a continuous, or practically continuous, range of colors that can appear in an image into a set of reference colors. For example, various shades of red are mapped to “Red.” In one embodiment, when the user selects a position having any of those shades that map to “Red,” the floating bubble would indicate “Red.” Similarly, when the user is interested in some or all positions or regions in the image having the color red, the systems and methods described herein highlight, or otherwise expose in a form perceptible to the user, any of the shades of red (or whatever other range of colors is determined a priori to map to “Red”).
This is essentially a form of “quantization” of the color space, associating with each continuous pocket of the color space one color representative of that pocket. Alternatively, referring to
As discussed above, the imaging system can be realized as a software component operating on a cell phone or other device having an image capture device and a data processing system. In that embodiment, the imaging software can be implemented as a C language computer program, or a computer program written in any high level language including C++, Fortran, Java or Basic. Additionally, in an embodiment where microcontrollers or DSPs are employed, the imaging software can be realized as a computer program written in microcode or written in a high level language and compiled down to microcode that can be executed on the platform employed. The development of such image processing systems is known to those of skill in the art. Additionally, general techniques for high level programming are known, and set forth in, for example, Stephen G. Kochan, Programming in C, Hayden Publishing (1983).
The contents of all references, patents, and published patent applications cited throughout this specification are hereby incorporated by reference in their entirety.
Many equivalents to the specific embodiments of the invention and the specific methods and practices associated with the systems and methods described herein exist. For example, the systems and methods described herein can work with video images of the type captured by digital camcorders and video devices and are not limited to still images. Accordingly, the invention is not to be limited to the embodiments, methods, and practices disclosed herein, but is to be understood from the following claims, which are to be interpreted as broadly as allowed under the law.
This application incorporates by reference in its entirety, and claims priority to and benefit of, U.S. Provisional Patent Application No. 60/526,782, filed on 3 Dec. 2003. This application also incorporates by reference in its entirety, and claims priority to and benefit of, U.S. patent application Ser. No. 10/388,803, filed on 13 Mar. 2003.
Number | Date | Country
--- | --- | ---
60422960 | Nov 2002 | US
60526782 | Dec 2003 | US
Number | Date | Country
--- | --- | ---
Parent 10388803 | Mar 2003 | US
Child 11003865 | Dec 2004 | US