Conventional film cameras and, more recently, digital cameras are widely available commercially, ranging in both price and operation from sophisticated single lens reflex (SLR) cameras used by professional photographers to inexpensive “point-and-shoot” cameras that nearly anyone can use with relative ease. Digital cameras are available with user interfaces that enable a user to select various camera features (e.g., ISO speed and red-eye removal).
Little is commercially available that allows users to create images on their cameras, from their own photographs, that highlight either the subject or the background of the scene where the subject is being photographed. Software packages are available that allow users to edit their photographs. For example, the user may choose to “cut” a person out of a photograph of the person standing in a kitchen and “paste” the person into another photograph of a forest scene. Other algorithms are available for generating composite images where the subject from one image is overlaid onto another image. However, these images may appear to have been altered. For example, it may be readily apparent that the person is not really standing in the forest.
Systems and methods are disclosed for highlighting a subject in a digital photograph (referred to herein as a “cutout effect”). In an exemplary embodiment, the camera user takes two digital images of the same scene, e.g., the first one having a subject and the second one without the subject. The first image is then “subtracted” from the second image to generate a mask. Optionally, various algorithms (e.g., connected component labeling, a median filter, etc.) may be applied for “cleaning” the mask (e.g., removing noise or other imperfections). A photo effect can then be applied to either the background or the subject using the mask and the first image. For example, if the user wants the image to have a color subject on a black/white background, the first image may be converted to black/white, the pixels for the subject are identified using the mask, and then only those pixels for the subject are converted back to color. Alternatively, the pixels for the subject and/or the pixels for the background may be identified using the mask, and then only those pixels that are to be changed are converted to apply the effect (e.g., various types of coloring such as real, cartoon, watercolor, psychedelic, black-and-white, grayscale, etc.).
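By way of a non-limiting illustration only, the following minimal sketch shows one way such a pipeline could be expressed, assuming Python with NumPy and Pillow; the file names and the difference threshold are assumptions made for this illustration and are not part of the disclosure.

```python
import numpy as np
from PIL import Image

# Hypothetical illustration of the cutout-effect pipeline described above;
# file names and the threshold value are assumptions, not from the disclosure.
with_subject = np.asarray(Image.open("scene_with_subject.jpg").convert("RGB"), dtype=np.int16)
background = np.asarray(Image.open("scene_background.jpg").convert("RGB"), dtype=np.int16)

# "Subtract" one image from the other and threshold the per-pixel difference
# to obtain a binary mask identifying the subject.
diff = np.abs(with_subject - background).sum(axis=2)
mask = diff > 40                                  # assumed threshold value

# Apply the photo effect (here grayscale) to the image containing the subject...
result = np.asarray(Image.open("scene_with_subject.jpg").convert("L").convert("RGB")).copy()

# ...then restore the original color only where the mask marks the subject.
result[mask] = with_subject[mask].astype(np.uint8)
Image.fromarray(result).save("cutout_effect.jpg")
```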
Exemplary systems may be implemented as an easy-to-use user interface displayed on the digital camera and navigated by the user with conventional camera controls (e.g., arrow buttons and zoom levers already provided on the camera). The user needs little, if any, knowledge about photo-editing, and does not need special software for their PC to create a cutout effect for their digital images.
Exemplary image sensor 130 may be implemented as a plurality of photosensitive cells, each of which builds up or accumulates an electrical charge in response to exposure to light. The accumulated electrical charge for any given pixel is proportional to the intensity and duration of the light exposure. Exemplary image sensor 130 may include, but is not limited to, a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor.
Camera system 100 may also include image processing logic 140. In digital cameras, the image processing logic 140 receives electrical signals from the image sensor 130 representative of the light 120 captured by the image sensor 130 during exposure to generate a digital image of the scene 125. The digital image may be stored in the camera's memory 150 (e.g., a removable memory card).
Shutters, image sensors, memory, and image processing logic, such as those illustrated in
Digital camera 100 may also include a photo-editing subsystem 160. In an exemplary embodiment, photo-editing subsystem 160 is implemented in program code (e.g., firmware and/or software) residing in memory on the digital camera 100 and executable by a processor in the digital camera 100, such as the memory and processor typically provided with commercially available digital cameras. The photo-editing subsystem 160 may include user interface engine 162 and image rendering logic 164 for producing digital photographs with a cutout effect.
The image rendering logic 164 may be operatively associated with the memory 150 for accessing digital images (e.g., reading the images stored in memory 150 by image processing logic 140 or writing the images generated by the image rendering logic 164). Image rendering logic 164 may include program code for applying a cutout effect to the digital images stored on the camera 100, as will be explained in more detail below. The image rendering logic 164 may also be operatively associated with the user interface engine 162.
User interface engine 162 may be operatively associated with a display 170 and one or more camera controls 175 already provided on many commercially available digital cameras. Such an embodiment reduces manufacturing costs (e.g., by not having to provide additional hardware for implementing the photo-editing subsystem 160), and enhances usability by not overwhelming the user with additional camera buttons.
During operation, the user interface engine 162 displays a menu on the digital camera (e.g., on display 170). In an exemplary embodiment, the menu may be accessed by a user selecting the design gallery menu option. The menu may then be navigated by a user making selections from any of a variety of menu options. For example, the user interface engine 162 may receive input (e.g., via one or more of the camera controls 175) identifying user selection(s) from the menu for generating an image having the desired cutout effect. The image rendering logic 164 may then be implemented to apply a cutout effect to a digital image stored in the digital camera 100 (e.g., in memory 150) based on the user selection(s) from the menu.
A preview image may be displayed on display 170 so that the user can see the cutout effect. Optionally, instructive text may also be displayed on display 170 for modifying, or accepting/rejecting, the cutout effect. The instructive text may be displayed until the user operates a camera control 175 (e.g., presses a button on the digital camera 100). After the user operates a camera control 175, the text may be removed so that the user can better see the preview image and cutout effect on display 170.
Also optionally, the user may operate camera controls 175 (e.g., as indicated by the instructive text) to modify the cutout effect. For example, the user may press the left/right arrow buttons on the digital camera 100 to change between the photo effect being applied to the subject or to the background.
In an exemplary embodiment, a copy of the original digital photograph is used for adding a cutout effect to an image stored on the digital camera 100. For example, the new image may be viewed by the user on display 170 directly after the original image so that the user can readily see both the original image and the modified image.
Before continuing, it is noted that the digital camera 100 shown and described above with reference to
In an exemplary embodiment, the camera user takes a first digital photograph 201 of a background scene 220 having background objects 221-224. The camera user then takes a second digital photograph 202 of the same scene 220 with a subject 230. The second digital photograph 202 is “subtracted” from the first digital photograph 201 on a pixel-by-pixel (or group-of-pixels by group-of-pixels) basis to generate the mask 210.
Various embodiments are contemplated for maintaining a constant background 220 between the images 201 and 202. For example, the camera user may take the digital photographs 201 and 202 using a tripod or other stabilizing device for the camera. In another example, the images 201 and 202 may be registered with one another by aligning one or more objects in the background to accommodate camera movement (e.g., where a tripod is not used). In still another example, image stabilizing systems may be implemented in the camera to accommodate movement of the camera. Image stabilizing systems are well known in the camera arts and may be readily implemented by those having ordinary skill in the art after becoming familiar with the teachings herein. Image recognition techniques may also be employed to identify the subject and accommodate changes in the scene itself (e.g., changing light, natural movement of grass, tree leaves, or other scenery, etc.).
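By way of illustration of the registration example only, the following sketch estimates a simple translation between the two photographs using phase correlation. Phase correlation is an assumed technique offered here for illustration; the embodiments are not limited to any particular registration method. The sketch assumes Python with NumPy and grayscale input arrays.

```python
import numpy as np

def estimate_translation(ref, moved):
    """Estimate the (rows, cols) shift to apply to `moved` (e.g., with np.roll)
    so that it lines up with `ref`, using phase correlation on grayscale arrays."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
    cross /= np.abs(cross) + 1e-9                 # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                    # wrap large shifts to negative offsets
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Example usage (hypothetical variable names): align the photograph containing
# the subject to the background-only photograph before the subtraction step.
# dy, dx = estimate_translation(background_gray, with_subject_gray)
# aligned = np.roll(with_subject_rgb, (dy, dx), axis=(0, 1))
```

The circular shift used here is only for brevity; a production implementation would typically crop or ignore the wrapped border region.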
In any event, the mask 210 may be coded, e.g., as a white on black image (or black on white, or other suitable coding scheme), wherein the pixels corresponding to the subject are assigned a white value and the pixels corresponding to the background are assigned a black value. The mask 210 may then be used to produce an image with the cutout effect, as explained in more detail below with reference to
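By way of illustration only, such a white-on-black coding could be expressed as follows, assuming Python with NumPy and Pillow and a boolean mask such as the one produced in the earlier sketch:

```python
import numpy as np
from PIL import Image

def code_mask_as_image(mask: np.ndarray) -> Image.Image:
    """Code a boolean subject mask as a white-on-black image
    (255 = subject, 0 = background), one of the coding schemes noted above."""
    return Image.fromarray(np.where(mask, 255, 0).astype(np.uint8))
```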
Before continuing, however, it is observed that the mask 210 includes a subject area 235 as well as other lines or “noise” (generally observed in area 237). In an exemplary embodiment, a median filter may be implemented to reduce noise in the mask 210. In another exemplary embodiment, connected component labeling techniques may be applied to remove lines which do not satisfy a count threshold, thereby reducing noise in the mask 210. Although these and other embodiments for reducing noise appearing in digital images are well known in the camera arts, for purposes of illustration, an exemplary embodiment for applying connected component labeling to a mask is described below with reference to
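As an illustration of the median-filter option mentioned above, a minimal sketch assuming Python with SciPy's ndimage module follows; the 3x3 window size is an assumed value.

```python
import numpy as np
from scipy import ndimage

def clean_mask_median(mask: np.ndarray, size: int = 3) -> np.ndarray:
    """Suppress isolated noise pixels in a binary mask with a median filter;
    the 3x3 window size is an assumed value, not taken from the disclosure."""
    return ndimage.median_filter(mask.astype(np.uint8), size=size).astype(bool)
```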
During connected component labeling, the image is analyzed by scanning the pixels (illustrated by the pixels 320 in image 302), or groups of pixels. The pixels may be scanned right to left and top to bottom on a first pass, then left to right and bottom to top on a second pass, or any other suitable approach for scanning the pixels.
In an exemplary embodiment, pixels are either assigned a “0” (e.g., pixels 320) or a “1” (e.g., pixels 330) based on a threshold value. Only the groups or clusters of pixels which satisfy this threshold value are assigned a “1”. Groups or clusters of pixels which do not satisfy this threshold value are assigned a “0”. In this example, the pixels corresponding to the noise element 312 do not satisfy the threshold value, and therefore these pixels are assigned “0”, the same value assigned to the background pixels. All of the pixels comprising the subject 310 satisfy the threshold value and therefore are all assigned “1”. Accordingly, the noise elements 312, 314 are eliminated when the mask 303 is rendered.
Various embodiments for establishing a threshold value are contemplated. Typically, however, the threshold value is selected to remove undesirable “noise” from the mask without slowing camera operations.
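By way of illustration of the connected component labeling option, a minimal sketch assuming Python with SciPy follows; the count threshold of 200 pixels is an assumed value only.

```python
import numpy as np
from scipy import ndimage

def clean_mask_ccl(mask: np.ndarray, min_pixels: int = 200) -> np.ndarray:
    """Keep only connected components of '1' pixels whose pixel count satisfies
    the count threshold; smaller components are treated as noise and set to '0'.
    The threshold of 200 pixels is an assumed value for illustration."""
    labels, _ = ndimage.label(mask)            # label each connected '1' region
    counts = np.bincount(labels.ravel())       # pixel count per label (label 0 is background)
    keep = counts >= min_pixels
    keep[0] = False                            # never keep the background label
    return keep[labels]
```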
The photo effect may be applied by filtering the original digital photograph containing the subject (e.g., image 202 in
The pixels corresponding to the subject 420 may then be identified in the image 401 using the mask. Only those pixels corresponding to the subject 420 are converted back to their original format (e.g., color) to produce image 402 having a color subject 420 (indicated by cross-hatching extending from the top-right hand corner toward the bottom-left hand corner) on a grayscale background 410 (indicated by cross-hatching extending from the top-left hand corner toward the bottom-right hand corner). Alternatively, the pixels for the subject may be identified and/or the pixels for the background may be identified using the mask, and then only those pixels that are to be changed are converted to apply the effect.
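By way of illustration of the alternative noted above, in which only the pixels to be changed are converted, a minimal sketch assuming Python with NumPy and Pillow follows; the function name is hypothetical.

```python
import numpy as np
from PIL import Image

def apply_effect_to_background(original: Image.Image, mask: np.ndarray) -> Image.Image:
    """Convert only the background pixels (where the mask is False) to grayscale,
    leaving the subject pixels untouched, so that only the pixels to be changed
    are converted."""
    color = np.asarray(original.convert("RGB")).copy()
    gray = np.asarray(original.convert("L"))
    gray_rgb = np.stack([gray, gray, gray], axis=2)
    color[~mask] = gray_rgb[~mask]
    return Image.fromarray(color)
```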
It is noted that the example described above with reference to
Exemplary operations which may be used to implement a cutout effect for digital photographs may be embodied as logic instructions on one or more computer-readable media. When executed on a processor (e.g., in the camera), the logic instructions implement the described operations. In an exemplary embodiment, the described operations may be implemented using the components and connections depicted in the figures.
In operation 520, the subtraction operation is used to generate a mask. Optionally, generating a mask may also include the operations of cleaning the mask to remove noise. For example, connected component labeling or a median filter may be applied to remove noise from the mask.
In operation 530, a photo effect is applied to all of the pixels in the first image. For example, the photo effect may be the application of “grayscale” tones. In operation 540, pixels corresponding to only the background or only the subject are converted to their original format based on the mask. For example, pixels corresponding to the subject may be converted to color if it is desired to highlight the subject in color on a grayscale background. Alternatively, pixels corresponding to the background may be converted to color if it is desired to highlight the subject in grayscale on a color background. In operation 550, an image is rendered with the photo effect applied to only the subject or only the background.
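By way of illustration only, the following sketch ties operations 520 through 550 described above together, assuming Python with NumPy and Pillow; the threshold value and the function name are assumptions made for this illustration.

```python
import numpy as np
from PIL import Image

def cutout_effect(with_subject: Image.Image, background_only: Image.Image,
                  highlight: str = "subject", threshold: int = 40) -> Image.Image:
    """Sketch of operations 520-550: generate the mask by subtraction, apply a
    grayscale effect to every pixel of the image containing the subject, then
    convert only the subject (or only the background) back to its original color."""
    a = np.asarray(with_subject.convert("RGB"), dtype=np.int16)
    b = np.asarray(background_only.convert("RGB"), dtype=np.int16)
    mask = np.abs(a - b).sum(axis=2) > threshold                        # operation 520
    gray = np.asarray(with_subject.convert("L").convert("RGB")).copy()  # operation 530
    restore = mask if highlight == "subject" else ~mask                 # operation 540
    gray[restore] = a[restore].astype(np.uint8)
    return Image.fromarray(gray)                                        # operation 550
```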
Other operations, not shown, are also contemplated and will be readily apparent to those having ordinary skill in the art after becoming familiar with the teachings herein. For example, a separate copy of the digital image may be stored in memory before applying the cutout effect to the selected digital image. Accordingly, if the user decides that they do not like the chosen cutout effect, the user can revert to the original digital image without having to undo all of the changes.
It is noted that the exemplary embodiments shown and described are provided for purposes of illustration and are not intended to be limiting. Still other embodiments for implementing a cutout effect for digital photographs are also contemplated.