1. Field of the Invention
This invention relates to an image compositing apparatus and to a method of controlling this apparatus.
2. Description of the Related Art
In an apparatus that displays moving pictures and game video, there are instances where a displayed face is replaced with another face. For example, there is a system in which an arcade game machine is provided with a video camera so that the user's face can be substituted for the face of a person that appears in a game (see the specification of Registered Japanese Utility Model 3048628). Further, there is a system that automatically tracks the motion of a person's face in a moving picture and makes it possible to compose an image in which the image of the face is transformed into a desired shape (see the specification of Japanese Patent Application Laid-Open No. 2002-269546).
In a case where the face of the user has been imaged, however, a problem can arise when the imaged face of the user is displayed. Specifically, it has been contemplated to replace the imaged face of the user with another face. With such a simple substitution, however, one often cannot tell how the user's face appeared before the substitution.
Accordingly, an object of the present invention is to so arrange it that, even if the face image of a user is replaced with another image, one can tell what the condition of the user's face was.
According to the present invention, the foregoing object is attained by providing an image compositing apparatus comprising: an image sensing device for sensing the image of a subject and outputting image data representing the image of the subject; a face image detecting device (face image detecting means) for detecting a face image from the image of the subject represented by the image data that has been output from the image sensing device; a face-condition detecting device (face-condition detecting means) for detecting a face condition which is at least one of face orientation and facial expression of emotion indicated by the face image detected by the face image detecting device; a replacing device (replacing means) for replacing the face image, which has been detected by the face image detecting device, with a compositing face image that conforms to the face condition detected by the face-condition detecting device; and a display control device (display control means) for controlling a display unit so as to display the image of the subject in which the face image has been replaced with the compositing face image by the replacing device.
The present invention also provides a control method suited to the above-described image compositing apparatus. Specifically, the invention provides a method of controlling an image compositing apparatus, comprising the steps of: sensing the image of a subject and outputting image data representing the image of the subject; detecting a face image from the image of the subject represented by the image data that has been obtained by image sensing; detecting a face condition which is at least one of face orientation and facial expression of emotion indicated by the face image detected by the face image detection processing; replacing the face image, which has been detected by the face image detection processing, with a compositing face image that conforms to the face condition detected by the face-condition detection processing; and controlling a display unit so as to display the image of the subject in which the face image has been replaced with the compositing face image by the replacement processing.
In accordance with the present invention, the image of a subject is sensed and a face image is detected from the image of the subject obtained by image sensing. The condition of the detected face, which is one or both of the orientation of the face and a facial expression of emotion, is detected. The detected face image is replaced with a compositing face image that conforms to the detected condition of the face. The image of the subject in which the face image has been replaced with the compositing face image is displayed.
Since the face image in the image of the subject obtained by image sensing is replaced with another face image that is a compositing face image, the entire sensed image of the subject can be displayed even in a case where the face of the subject cannot be displayed. In particular, the compositing face image that has been substituted exhibits an orientation and a facial expression of emotion that are the same as those of the detected face image, examples of expression being joy, anger, sadness and amusement, etc. Accordingly, even though the face image in the image of the subject is not displayed, one can ascertain what the face orientation and facial expression of the subject, i.e., the person, were.
The replacing device (a) replaces the face image, which has been detected by the face image detecting device, with a compositing face image that conforms to the face condition detected by the face-condition detecting device, this compositing face image being represented by compositing face image data that has been stored, for every face condition, in a compositing face image data storage device; or (b) transforms a prescribed face image into a compositing face image that conforms to the face condition detected by the face-condition detecting device and replaces the face image, which has been detected by the face image detecting device, with the compositing face image obtained by the transformation.
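Purely by way of illustration, and not as part of the disclosure itself, the two replacement strategies (a) and (b) can be sketched in Python roughly as follows. The names used here (FaceCondition, StoredImageReplacer, TransformingReplacer, warp_to_condition, and so on) are hypothetical and merely stand in for the compositing face image data storage device and for the transformation performed by the replacing device.

```python
# Minimal sketch of replacement strategies (a) and (b) described above.
# All names are illustrative only; they are not taken from the disclosure.
from dataclasses import dataclass


@dataclass(frozen=True)
class FaceCondition:
    orientation: str   # e.g. "left", "front", "right"
    expression: str    # e.g. "neutral", "smile", "anger"


class StoredImageReplacer:
    """Strategy (a): one compositing face image stored per face condition."""

    def __init__(self, images_by_condition):
        # dict mapping FaceCondition -> compositing face image
        self.images_by_condition = images_by_condition

    def replace(self, condition):
        # Look up the compositing face image prepared for this condition.
        return self.images_by_condition[condition]


class TransformingReplacer:
    """Strategy (b): a single prescribed face image transformed to match the condition."""

    def __init__(self, prescribed_image, warp_to_condition):
        self.prescribed_image = prescribed_image
        # callable(image, condition) -> transformed image; assumed, not disclosed
        self.warp_to_condition = warp_to_condition

    def replace(self, condition):
        return self.warp_to_condition(self.prescribed_image, condition)
```

Read this way, strategy (a) trades storage for speed (one stored image per condition, looked up directly), whereas strategy (b) trades computation for storage (one prescribed image warped on demand).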
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
The image compositing apparatus 20 according to this embodiment senses the image of a subject 15 and displays the image of the subject in which a compositing face image 1 has been substituted for the face image contained in the image obtained by image sensing. To achieve this, the image compositing apparatus 20 includes a compositing face image input unit 9 for inputting compositing face image data representing the compositing face image 1. The compositing face image data that has been input from the compositing face image input unit 9 is applied to and stored temporarily in a data storage unit 7.
The image compositing apparatus 20 further includes a video camera 11 for sensing the image of the subject 15. When the image of the subject 15 is sensed by the video camera 11, image data representing the image of the subject is input to a face image detecting unit 4 via an image input unit 10. The face image detecting unit 4 detects the position of the face image from the image of the subject 15. When the position of a face image is detected, detection processing can be executed at higher speed and with higher accuracy by utilizing the position, face orientation, etc., of the face image detected in the preceding frame and placing emphasis on face images close to the condition of the face detected in that preceding frame. The data representing the detected position of the face image and the data representing the image of the subject are input to a face-condition discriminating unit 3. The condition of the face (the orientation of the face and a facial expression indicative of a human emotion) represented by the detected face image is discriminated by the face-condition discriminating unit 3. Data representing the condition of the face is input to a compositing image generating unit 2.
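As a purely illustrative sketch of how the preceding frame's detection result might be used to narrow the search, consider the following Python outline. The detector `detect_faces`, the box format and the margin value are assumptions made only to keep the example self-contained; they are not specified by this description.

```python
def detect_with_prior(frame, detect_faces, prev_box=None, margin=0.5):
    """Illustrative sketch: search near the face box found in the preceding frame
    first, and fall back to a full-frame search if nothing is found there.

    frame        : H x W (x C) image array
    detect_faces : assumed detector, returns a list of (x, y, w, h) boxes
    prev_box     : (x, y, w, h) detected in the preceding frame, or None
    """
    if prev_box is not None:
        x, y, w, h = prev_box
        dx, dy = int(w * margin), int(h * margin)
        # Region of interest centred on the previous detection, clipped to the frame.
        x0, y0 = max(0, x - dx), max(0, y - dy)
        x1 = min(frame.shape[1], x + w + dx)
        y1 = min(frame.shape[0], y + h + dy)
        boxes = detect_faces(frame[y0:y1, x0:x1])
        if boxes:
            # Map the box found in the region of interest back to frame coordinates.
            bx, by, bw, bh = boxes[0]
            return (bx + x0, by + y0, bw, bh)
    boxes = detect_faces(frame)  # full-frame search as a fallback
    return boxes[0] if boxes else None
```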
The compositing face image data that has been stored in the data storage unit 7 is also input to the compositing image generating unit 2. The compositing image generating unit 2 generates a composite image in which the face image contained in the sensed image of the subject has been replaced with a compositing face image that conforms to the face orientation and facial expression of this face image. For example, if the face image in the image of the subject is oriented sideways, the face image in the image of the subject will be replaced with a sideways-oriented compositing face image. Further, if the facial expression represented by the face image in the image of the subject is an expression of anger, then the face image in the image of the subject will be replaced with a compositing face image having an angry expression. A compositing face image conforming to each face orientation and facial expression can be generated and stored in advance, and the compositing face image that conforms to the face orientation and facial expression of the detected face portion can be read out and combined with the image of the subject. Alternatively, a compositing face image having a prescribed face orientation and facial expression can be stored in advance, a compositing face image having the face orientation and facial expression represented by the detected face image can be generated from the stored compositing face image, and the generated compositing face image can be combined with the image of the subject.
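The act of combining the selected compositing face image with the image of the subject at the detected face position can be pictured, under the assumption that images are held as NumPy arrays, by the following minimal sketch. The function names and the nearest-neighbour resizing helper are illustrative only and do not describe the actual compositing image generating unit 2.

```python
import numpy as np


def composite_face(subject_image, compositing_face, face_box):
    """Illustrative sketch only: overwrite the detected face region of the subject
    image with the compositing face image, resized to fit the detected region."""
    x, y, w, h = face_box
    resized = _resize_nearest(compositing_face, (h, w))
    out = subject_image.copy()
    out[y:y + h, x:x + w] = resized
    return out


def _resize_nearest(image, size):
    """Nearest-neighbour resize, used only to keep this sketch dependency-free."""
    h, w = size
    rows = np.arange(h) * image.shape[0] // h
    cols = np.arange(w) * image.shape[1] // w
    return image[rows][:, cols]
```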
The image data representing the image of the subject with which the compositing face image has been combined is applied to a display unit 6 from an image output unit 5. As a result, the image of the subject in which the face image has been replaced with the compositing face image is displayed on the display screen of the display unit 6.
For example, on occasions where video is broadcast from a street camera, there are instances where a passerby is captured in the video and, in view of the passerby's right of likeness, it is best not to broadcast the face of the passerby as is. At such times the face of the passerby is not broadcast as is; rather, what can be broadcast instead is video in which the face of the passerby has been replaced with a compositing face image that takes into consideration the facial expression and face orientation of the passerby. The compositing face image may be an illustration such as a “smiling face” mark or a character representing a celebrity or animated personage. Further, if conveying only the face orientation will suffice, then it will suffice to remove the face image and display a border in such a manner that the orientation of the face can be discerned.
Further, a right-facing wire-frame transformation method or a smiling-face wire-frame transformation method, etc., can be stored beforehand as a table, and the compositing face image can be transformed in accordance with the transformation method.
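A table of transformation methods of the kind mentioned above might, for illustration only, be held as a mapping from a face condition to a warp function. The two placeholder functions below merely stand in for the right-facing and smiling-face wire-frame transformations; they do not reproduce any wire-frame technique.

```python
import numpy as np


def flip_horizontally(face):
    """Placeholder standing in for a right-facing wire-frame transformation."""
    return face[:, ::-1]


def brighten_mouth_region(face):
    """Placeholder standing in for a smiling-face wire-frame transformation."""
    out = face.copy()
    h = face.shape[0]
    lower = out[2 * h // 3:, :].astype(int) + 30
    out[2 * h // 3:, :] = np.clip(lower, 0, 255).astype(face.dtype)
    return out


# Table of transformation methods, looked up by the detected face condition.
TRANSFORM_TABLE = {
    ("right", "neutral"): flip_horizontally,
    ("front", "smile"): brighten_mouth_region,
}


def transform_for_condition(prescribed_face, condition):
    # Fall back to the untransformed face if no method is registered.
    transform = TRANSFORM_TABLE.get(condition, lambda f: f)
    return transform(prescribed_face)
```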
The processing executed by the image compositing apparatus 20 will now be described.
First, compositing face image data representing a compositing face image having a prescribed expression is input to the image compositing apparatus 20 (step 31). The compositing face image data thus input is stored in the data storage unit 7. The image of a subject is sensed continuously at a fixed period of, e.g., 1/60 of a second (step 32).
A moving image is obtained by such fixed-period imaging and one frame of the image of the subject is extracted from the moving image obtained (step 33). A face image is detected from the extracted frame of the image of the subject (step 34).
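In outline, and only as an illustrative reading of steps 31 through 34 and the processing that follows them, the flow can be written as a loop such as the one below. Every callable passed in is a placeholder for the corresponding unit of the apparatus; none of these function names comes from the description itself.

```python
import time


def run_compositing_loop(load_compositing_face, capture_frame, detect_face,
                         discriminate_condition, replace_face, display,
                         frame_period=1.0 / 60.0):
    """Illustrative outline of the processing flow (steps 31-34 and onward).
    Every callable passed in stands in for the corresponding unit."""
    compositing_face = load_compositing_face()   # step 31: input compositing face image data
    while True:
        frame = capture_frame()                  # step 32: image sensed at a fixed period
        if frame is None:
            break
        face = detect_face(frame)                # steps 33-34: extract a frame, detect the face image
        if face is not None:
            condition = discriminate_condition(frame, face)
            frame = replace_face(frame, face, compositing_face, condition)
        display(frame)
        time.sleep(frame_period)                 # roughly 1/60 of a second per frame
```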
The orientation of the detected face image is then discriminated, and the face image is replaced with a pattern-by-pattern compositing face image conforming to that orientation, whereupon the resulting image of the subject is displayed.
The subject image 53 includes a person image 54. The face image of the person image 54 has been replaced with a pattern-by-pattern compositing face image 55. The pattern-by-pattern compositing face image 55 that has been substituted for the face image 52 is facing rightward, which is the same face orientation as that represented by the detected face image 52.
The above-described processing is repeated for each frame extracted from the moving image.
In the foregoing embodiment, a face image contained in the image of a subject is replaced with a compositing face image having an orientation identical with that of the face. However, the face image contained in the image of the subject may just as well be replaced with a compositing face image having an expression rather than an orientation identical with that of the face.
The subject image 50A includes a person image 51A. A face image 52A is detected from the person image 51A in the manner described above. The expression of the detected face image 52A is detected and a compositing face image having the detected expression is substituted for the face image 52A.
The subject image 56 includes a person image 57, and the face image of the person image 57 has been replaced with a compositing face image 58. The compositing face image 58 that has been substituted exhibits the same facial expression as that detected from the face image 52A of the subject image.
In the foregoing embodiment, face orientation or facial expression is discriminated and a face image is replaced with a compositing face image. However, it may be so arranged that both face orientation and facial expression are discriminated and a compositing face image conforming to both face orientation and facial expression is substituted.
The differently oriented compositing face images include compositing face images 71, 72, 73, 74 and 75 having a leftward-facing orientation, a leftward-slanted orientation, a frontal orientation, a rightward-slanted orientation and a rightward-facing orientation, respectively. These differently oriented compositing face images 71, 72, 73, 74 and 75 have been generated and stored in advance. In the manner described above, a compositing face image having an orientation conforming to the orientation of a face image detected from the image of a subject that has been obtained by image sensing is selected and the selected compositing face image is then substituted for the face image in the image of the subject.
The compositing face images having different expressions include compositing face images 81, 82, 83, 84 and 85 exhibiting an ordinary expression, an expression of surprise, a smiling-face expression, a weeping expression and an expression of anger, respectively. These compositing face images 81, 82, 83, 84 and 85 having different expressions have been generated and stored in advance. In the manner described above, a compositing face image having an expression conforming to the expression of a face image detected from the image of a subject that has been obtained by image sensing is selected and the selected compositing face image is then substituted for the face image in the image of the subject.
In the examples described above, the differently oriented compositing face images 71, 72, 73, 74 and 75 and the compositing face images 81, 82, 83, 84 and 85 having different expressions have each been generated and stored. However, it may be so arranged that compositing face images having different expressions are generated and stored for every orientation. In this case, since compositing face images having the five different expressions would be stored for each one of the five orientations, 5 × 5 = 25 face images would be stored.
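Storing a compositing face image for every combination of orientation and expression amounts to a table keyed on both values. The following minimal sketch, using dummy image data and a placeholder generator, merely shows how the 5 × 5 = 25 entries come about.

```python
from itertools import product

ORIENTATIONS = ["left", "left-slanted", "front", "right-slanted", "right"]
EXPRESSIONS = ["ordinary", "surprise", "smile", "weeping", "anger"]


def build_face_table(generate_face):
    """Store one compositing face image per (orientation, expression) pair:
    5 orientations x 5 expressions = 25 images.  generate_face is a placeholder."""
    return {(o, e): generate_face(o, e) for o, e in product(ORIENTATIONS, EXPRESSIONS)}


# Example with dummy image data; in the apparatus these would be prepared in advance.
table = build_face_table(lambda o, e: f"face[{o}/{e}]")
assert len(table) == 25
```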
Compositing face image data representing a compositing face image having a prescribed expression and orientation is input to the image compositing apparatus 20 (step 31). When this is done, compositing face images (pattern-by-pattern compositing face images) conforming to the respective face orientations are generated.
Thereafter, in a manner similar to that of the processing described above, the face image detected from the image of the subject is replaced with the pattern-by-pattern compositing face image that conforms to the discriminated face orientation and facial expression, and the resulting image of the subject is displayed.
This embodiment adds a decoration to a compositing face image.
Furthermore, an arrangement may be adopted in which a compositing face image in which the orientation of the decoration has been changed in accordance with the orientation of the face in the image of the subject is substituted for the face image in the image of the subject.
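As one possible, purely illustrative way of picturing this, the decoration could be chosen from orientation-specific variants in the same manner as the compositing face image itself; the names below are hypothetical.

```python
def decorate(compositing_face, orientation, decorations_by_orientation, attach):
    """Illustrative only: pick the decoration variant whose orientation matches the
    detected face orientation and attach it to the compositing face image.
    decorations_by_orientation and attach are placeholders for stored data/logic."""
    decoration = decorations_by_orientation.get(orientation)
    if decoration is None:
        return compositing_face
    return attach(compositing_face, decoration)
```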
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.
Number | Date | Country | Kind
---|---|---|---
2008-252873 | Sep 2008 | JP | national