The disclosure of Japanese Patent Application No. 2007-80565 is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus. More specifically, the present invention relates to an image processing apparatus which is applied to an electronic camera, and performs blurring processing on a background image.
2. Description of the Related Art
In one example of the related art for this kind of apparatus, image data representing a background area excluding a person, out of image data representing an object scene captured by an imaging device, is subjected to processing for suppressing a high spatial frequency component. The image data of the background area thus processed is combined with the image data of the person to produce a portrait image.
However, detection accuracy of a torso image of a person generally tends to be lower than that of a head image of the person. In other words, detecting the torso image of the person requires a higher capability than detecting the head image of the person. Thus, in the related art, it may take much time to produce a portrait image.
Therefore, it is a primary object of the present invention to provide a novel image processing apparatus.
An image processing apparatus in one aspect of the present invention comprises: a recognizer for recognizing a facial image of a person from an object scene image; a specifier for specifying a partial background image, being a background of a head of the person, out of a background image of the person from the object scene image on the basis of a recognition result of the recognizer; and a blurring processor for performing blurring processing on the partial background image specified by the specifier.
A facial image of a person is recognized from an object scene image by a recognizer. A specifier specifies a partial background image being a background of a head of the person out of a background image of the person from the object scene image on the basis of a recognition result of the recognizer. A blurring processor performs blurring processing on the partial background image specified by the specifier.
Detection accuracy of an outline of a head of a person generally tends to be higher than that of an outline of a torso of the person. Furthermore, a background surrounding a head of a person generally tends to be farther than a background surrounding a torso of the person. Hence, in the present invention, a partial background image being a background of a head of a person is specified from an object scene image, and blurring processing is performed on the specified partial background image. Thus, it is possible to easily produce an image in which the background is blurred.
The above described objects and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
Referring to
When a camera mode is selected by a key input device 36, a CPU 34 instructs a TG 16 to repetitively perform a pre-exposure and a thinning-out reading in order to execute through-image processing, and instructs an LCD driver 26 to execute displaying processing. The TG 16 performs a pre-exposure on the imaging surface every 1/30 second, and reads a part of the electric charges thus generated from the imaging surface in a raster scanning manner. Thus, a low-resolution raw image signal representing the object scene is output from the imaging device 14 at a frame rate of 30 fps.
A camera processing circuit 18 performs a series of processing including CDS, AGC, A/D conversion, color separation, white balance adjustment and YUV conversion on a raw image signal of each frame output from the imaging device 14 to generate image data in a YUV format. The generated image data is written to an SDRAM 24 by a memory control circuit 22. An LCD driver 26 reads the image data stored in the SDRAM 24 through the memory control circuit 22 every 1/30 second, and drives an LCD monitor 28 on the basis of the read image data. Thus, a through-image of the object scene is displayed on the monitor screen.
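The final YUV conversion step above can be illustrated with a minimal sketch. The patent does not specify the conversion coefficients used by the camera processing circuit 18; the sketch below assumes the common BT.601 full-range matrix, and the function name is purely illustrative.

```python
def rgb_to_yuv(r, g, b):
    """One plausible form of the YUV conversion step: an 8-bit RGB pixel
    converted with BT.601 full-range coefficients (an assumption; the
    actual circuit's coefficients are not disclosed)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance component
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128   # blue-difference chroma
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128    # red-difference chroma
    return (round(y), round(u), round(v))
```

For example, a white pixel maps to maximum luminance with neutral chroma, which is why the AE processing described next can operate on the Y component alone.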
The CPU 34 takes a Y component of the image data generated by the camera processing circuit 18, and performs AE processing for the through-image on the basis of the taken Y component. Thus, a pre-exposure for an appropriate time is executed, and brightness of the through-image displayed on the LCD monitor 28 is moderately adjusted.
When a shutter button 36s on the key input device 36 is half-depressed, AE processing for the recording image is executed by the CPU 34 in order to strictly adjust the pre-exposure time. The pre-exposure time is thereby set to an optimum time.
When the shutter button 36s on the key input device 36 is fully-depressed, the CPU 34 instructs the TG 16 to execute a single primary exposure and a single all-pixel reading, and instructs an I/F circuit 30 to execute recording processing. The TG 16 performs a primary exposure on the imaging surface for an optimum time, and reads all the electric charges thus generated from the imaging surface in a raster scanning manner. Thus, a high-resolution raw image signal representing the object scene at the time the recording operation is performed is output from the imaging device 14. The output raw image signal is subjected to the processing described above by the camera processing circuit 18, and image data in a YUV format thus generated is written to the SDRAM 24 by the memory control circuit 22. The I/F circuit 30 reads the high-resolution image data thus retained in the SDRAM 24 through the memory control circuit 22, and records the read image data in a recording medium 32 in a file format.
When a reproduction mode is selected by the key input device 36, the CPU 34 instructs the I/F circuit 30 to reproduce a desired file, and instructs the LCD driver 26 to execute displaying processing. The I/F circuit 30 accesses the recording medium 32 to reproduce image data stored in the desired file. The reproduced image data is written to the SDRAM 24 by the memory control circuit 22. The LCD driver 26 reads the image data stored in the SDRAM 24 through the memory control circuit 22, and drives the LCD monitor 28 on the basis of the read image data. Thus, a reproduction image is displayed on the monitor screen.
When a background blurring mode is selected by the key input device 36 in a camera mode or a reproduction mode, blurring processing as described below is performed on the high-resolution image data retained in the SDRAM 24 as a processing objective image.
In a case that the processing objective image is an object scene image shown in
Referring to
A partial image belonging to the head surrounding image frame Fhead, that is, a head surrounding image is copied in a work area 24w on the SDRAM 24 as shown in
After completion of the assignment of the reference points P1 and P2, lines L1 and L2 horizontally and outwardly extending from the reference points P1 and P2 respectively, and a head's outline E1, are drawn on the processing objective image. The blurred area is the image area above the drawn lines L1 and L2 and the head's outline E1. In other words, the image area below the lines L1 and L2 and the head's outline E1 is a non-blurred area.
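The boundary just described — lines L1 and L2 extending outward from the reference points, joined by the head's outline E1 between them — can be sketched as a mask construction in image coordinates (y increasing downward). The function and parameter names are hypothetical stand-ins, not elements disclosed in the embodiment.

```python
import numpy as np

def blurred_area_mask(h, w, p1, p2, outline_y):
    """Boolean mask of the blurred area for an h-by-w image.
    p1, p2: (x, y) reference points; outline_y[i] gives the y coordinate
    of the head's outline E1 at x = p1_x + i (illustrative representation)."""
    mask = np.zeros((h, w), dtype=bool)
    (x1, y1), (x2, y2) = p1, p2
    for x in range(w):
        if x < x1:
            boundary = y1                    # line L1, extending left from P1
        elif x > x2:
            boundary = y2                    # line L2, extending right from P2
        else:
            boundary = outline_y[x - x1]     # head's outline E1 between P1 and P2
        mask[:boundary, x] = True            # the area above the boundary is blurred
    return mask
```

Everything below the boundary, including the person's head and torso, is left as the non-blurred area.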
The blurred area is then divided into an upper area Rupr and a lower area Rlwr with reference to a top T1 of the head's outline, and blurring processing in a different manner is performed on each of the upper area Rupr and the lower area Rlwr. That is, referring to
The CPU 34 executes a background blurring task shown in
First, it is determined whether or not a processing objective image is specified in a step S1. In the camera mode, the high-resolution image data retained in the SDRAM 24 in response to the shutter button 36s being fully depressed corresponds to the processing objective image. In the reproduction mode, the high-resolution image data reproduced from the desired file corresponds to the processing objective image. If “YES” in the step S1, face recognizing processing is executed in a step S3. More specifically, dictionary data corresponding to eyes, a nose and a mouth of a person is checked against the processing objective image to recognize a facial image of the person. When the recognition of the facial image is unsuccessful, the process proceeds directly to a step S9, whereas when the recognition of the facial image is successful, background blurring processing is executed in a step S7, and then the process proceeds to the step S9. In the step S9, it is determined whether or not the processing objective image is updated, and when the determination result changes from “NO” to “YES”, the process returns to the step S3.
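The control flow of the steps above can be sketched as a small loop. The three callables are hypothetical stand-ins for the recognition (S3) and blurring (S7) processing; the actual dictionary-based matching against eyes, a nose and a mouth is not detailed here.

```python
def background_blurring_task(images, recognize_face, blur_background):
    """Sketch of steps S1-S9: for each processing objective image, attempt
    face recognition; blur the background only when recognition succeeds."""
    results = []
    for image in images:                 # S1/S9: each newly specified objective image
        face = recognize_face(image)     # S3: check dictionary data against the image
        if face is None:                 # S5: recognition unsuccessful
            results.append(image)        # proceed as is
        else:
            results.append(blur_background(image, face))  # S7: background blurring
    return results
```

A failed recognition thus leaves the image untouched rather than stalling the task, which is why the process can simply wait for the next objective image in the step S9.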
The background blurring processing in the step S7 complies with a subroutine shown in
In a step S21, with reference to the top T1 on the head's outline, the blurred area is divided into an upper area Rupr and a lower area Rlwr. In a step S23, step-by-step blurring processing (a degree of blur: 0-1) is executed on the lower area Rlwr, and in a step S25, uniform blurring processing (a degree of blur: 1) is executed on the upper area Rupr. After completion of the processing in the step S25, the process returns to the routine at the upper hierarchical level.
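One way to realize the two blurring modes of steps S23 and S25 is to blend a fully blurred copy of the image into the masked background with a per-row degree of blur: uniform 1 above the top T1, and ramping from 1 at T1 down to 0 at the lower edge of the blurred area. This is a sketch under that assumption; the embodiment does not specify the blending formula, and all names are illustrative.

```python
import numpy as np

def apply_graded_blur(image, blurred, mask, top_y, bottom_y):
    """Blend `blurred` into `image` wherever `mask` is True.
    Rows at or above top_y (the top T1) get degree 1 (upper area Rupr);
    rows between top_y and bottom_y ramp from 1 down to 0 (lower area Rlwr)."""
    out = image.astype(float)
    for y in range(image.shape[0]):
        if y <= top_y:
            degree = 1.0                                   # Rupr: uniform blur
        elif y < bottom_y:
            degree = (bottom_y - y) / (bottom_y - top_y)   # Rlwr: step-by-step blur
        else:
            degree = 0.0                                   # non-blurred area
        sel = mask[y]
        out[y, sel] = (1 - degree) * image[y, sel] + degree * blurred[y, sel]
    return out
```

The ramp avoids a visible seam where the blurred background meets the sharp area around the person's shoulders.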
As understood from the above description, a facial image of the person is recognized from the object scene image by the CPU 34 (S3). The CPU 34 specifies a partial background image being a background of the head out of the background image of the person from the object scene image on the basis of the recognition result of the facial image (S17, S19). The blurring processing is performed on the specified partial background image (S23, S25).
Detection accuracy of an outline of a head of a person generally tends to be higher than detection accuracy of an outline of a torso of the person. Furthermore, a background surrounding a head of a person generally tends to be farther than a background surrounding a torso of the person. Hence, in the present invention, a partial background image being a background of a head of a person is specified from an object scene image, and blurring processing is performed on the specified partial background image. Thus, it is possible to easily produce an image in which the background is blurred.
Additionally, in this embodiment, a lower edge of the blurred area is defined by the lines L1 and L2 outwardly and horizontally extending from the reference points P1 and P2, but in place of this, the lower edge of the blurred area may be defined by lines L1′ and L2′ extending obliquely downwardly from the reference points P1 and P2 (see
Furthermore, in this embodiment, each of the reference points P1 and P2 is assigned to a position downwardly apart from the lower side defining the facial image frame Fface by “0.1L”. However, an ellipse C1 (major diameter: X) circumscribing the head's outline E1 is defined, and each of the reference points P1 and P2 is assigned to a position upwardly apart from the lower edge of the ellipse C1 by 0.1X (see
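The two alternative assignments of the reference points can be written out as simple coordinate computations (y increasing downward, as is usual in image coordinates). The function names are illustrative only.

```python
def reference_point_y_from_face_frame(face_frame_bottom_y, frame_height_l):
    """Embodiment's assignment: 0.1L below the lower side of the
    facial image frame Fface (L being the frame height)."""
    return face_frame_bottom_y + 0.1 * frame_height_l

def reference_point_y_from_ellipse(ellipse_bottom_y, major_diameter_x):
    """Alternative assignment: 0.1X above the lower edge of the ellipse C1
    circumscribing the head's outline E1 (X being the major diameter)."""
    return ellipse_bottom_y - 0.1 * major_diameter_x
```

Both place the reference points slightly below the head, so the lines L1 and L2 drawn from them clear the head's outline.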
In addition, in this embodiment, background blurring processing is performed on image data to be recorded in the recording medium 32 or image data reproduced from the recording medium 32. However, the background blurring processing may be performed on low-resolution image data forming a through-image. Specifically, if background blurring processing is executed in response to the shutter button 36s being half-depressed, it is easy to determine what kind of recorded image will be obtained for the object scene currently captured, which improves operability. Furthermore, in this embodiment, only the image data on which background blurring processing is performed is recorded in the recording medium 32, but in addition thereto, normal image data on which background blurring processing is not performed may be recorded in the recording medium 32.
The processing of performing background blurring processing in response to the shutter button 36s being half-depressed, and the processing of performing recording processing of normal image data and image data on which the background blurring processing is performed in response to the shutter button 36s being fully-depressed are executed according to a flowchart shown in
Referring to
In a step S41, it is determined whether or not the shutter button 36s is fully-depressed. In a step S43, it is determined whether or not an operation of the shutter button 36s is cancelled. If “YES” in the step S41, the TG 16 is instructed to execute a single primary exposure and a single all-pixel reading in a step S45. In a step S47, the I/F circuit 30 is instructed to perform recording processing of the normal image data, and in a step S49, the I/F circuit 30 is instructed to perform recording processing of the image data on which the background blurring processing is performed. After completion of the step S49, in a step S51, the blurring mode task is ended, and then the process returns to the step S33. If “YES” in the step S43, the process returns to the step S33 through the processing in the step S51.
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Number | Date | Country | Kind |
---|---|---|---
2007-080565 | Mar 2007 | JP | national |