1. Field of the Invention
The present invention relates to video processing, and more particularly, to a method of modifying a feature region of an image according to an audio signal.
2. Description of the Prior Art
Web cameras are devices that typically include an image capturing device supporting a frame rate adequate for live video, and optionally a microphone for recording sound in the form of voice or ambient noise. The web camera is usually connected to a computing device, such as a personal computer or notebook computer, through a data interface, such as USB, or is integrated with the computing device, e.g. within the housing of a notebook computer. The web camera may be utilized as a video device by software for transmitting streaming video and audio through a data network to provide video conferencing and chat functions between two or more users in a chat session.
As advanced video conferencing and chat technologies are developed, and as video chat grows in user base, users of video chat clients will demand greater ability to customize the video stream sent to their peers. For example, as face detection technologies are refined, facial features, such as hair, eyes, or skin may be modified, and the modifications may be made to track the location of the facial features. However, such modification is usually performed manually by the user, which can be cumbersome and inconvenient.
Summary of the Invention

According to a first embodiment of the present invention, a method of facial image reproduction comprises retrieving an audio characteristic of an audio bitstream, receiving a video bitstream, extracting an image from the video bitstream, extracting a feature region from the image, modifying the feature region according to the audio characteristic to generate a modified image, and outputting the modified image.
According to the above embodiment of the present invention, an electronic device for performing facial image reproduction comprises an audio segmenting module for dividing the audio bitstream into a plurality of audio segments, a video segmenting module for dividing the video bitstream into a plurality of video segments, an audio processing module for retrieving an audio characteristic of the audio segments, an image extraction module for extracting an image from the video segments, a feature region detection module for extracting a feature region from the image, and an image modifying module for modifying the feature region according to the audio characteristic to generate a modified image.
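By way of illustration only, the cooperation of these modules might be sketched as follows; the class and method names are hypothetical stand-ins for the listed modules, not names used by the embodiment, and each helper method is left unimplemented:

```python
class FacialImageReproducer:
    """Sketch of how the listed modules might cooperate; each helper
    method stands in for one module and is left unimplemented here."""

    def process(self, audio_bitstream, video_bitstream):
        audio_segments = self.segment_audio(audio_bitstream)     # audio segmenting module
        video_segments = self.segment_video(video_bitstream)     # video segmenting module
        characteristic = self.analyze_audio(audio_segments)      # audio processing module
        image = self.extract_image(video_segments)               # image extraction module
        region = self.detect_feature_region(image)               # feature region detection module
        return self.modify_image(image, region, characteristic)  # image modifying module
```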
According to a second embodiment of the present invention, a method of modifying an image based on an audio signal comprises capturing the image, recording a sound, performing image analysis on the image, retrieving an audio characteristic from the recorded sound, and modifying the image according to the audio characteristic to form a modified image.
According to the second embodiment of the present invention, a communication system comprises a transmitting computing device and a receiving computing device. The transmitting computing device comprises a system I/O interface for receiving an audio signal and a video signal, a processor for determining an audio characteristic of the audio signal, modifying an image of the video signal according to the audio characteristic to generate a modified image, and encoding the modified image into an encoded signal, and a network interface for sending the encoded signal. The receiving computing device comprises a network interface for receiving the encoded signal from the transmitting computing device, a processor for decoding the encoded signal to retrieve the modified image, and a display interface for outputting the modified image.
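A minimal sketch of this transmit/receive path, with an in-memory queue standing in for the network interfaces and a toy JSON "codec" and pixel-list "image" used purely for illustration (none of these choices come from the embodiment):

```python
import json
import queue

network = queue.Queue()  # stand-in for the network between the two devices

def transmit(frame, volume):
    """Transmitting device: modify the image according to the audio
    characteristic (here, a normalized volume), encode it, and send it."""
    modified = [min(255, int(p * (0.5 + volume))) for p in frame]  # toy modification
    network.put(json.dumps(modified).encode())                     # toy encoder

def receive():
    """Receiving device: receive the encoded signal, decode it, and
    return the modified image for display."""
    return json.loads(network.get().decode())                      # toy decoder

transmit(frame=[10, 200, 30], volume=0.9)  # a three-pixel "image", loud audio
print(receive())                           # decoded modified image
```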
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Detailed Description

Please refer to FIG. 1, which is a flowchart of a process of facial image reproduction according to the first embodiment of the present invention.
First, a video bitstream containing a facial image is received (Step 100). A bitstream is a time series of bits used for transmitting digital data; the transmission may take place over a cable connection, a data network, a telecommunications link, etc. The video bitstream may be provided by an image capturing device.
An audio characteristic is retrieved from an audio bitstream (Step 102). This may be accomplished by receiving the audio bitstream and generating the audio characteristic by analyzing the audio bitstream. For example, an average volume may be calculated within a time period, and it may then be determined whether the average volume exceeds a threshold; a corresponding signal may be generated depending on the result. The audio characteristic retrieved in Step 102 may also be retrieved by frequency analysis, rhythm detection, and/or tempo analysis. The audio bitstream itself may be encoded from music or speech. For speech, the audio characteristic may be retrieved by analyzing the tone of the speech or by speech recognition techniques.
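The average-volume analysis might be sketched as follows; the window length, sample rate, and threshold are illustrative assumptions rather than values taken from the embodiment:

```python
import numpy as np

def volume_exceeds(samples, threshold=0.1):
    """Return True if the average (RMS) volume of an audio window,
    with samples normalized to [-1, 1], exceeds the threshold."""
    rms = np.sqrt(np.mean(np.square(samples.astype(np.float64))))
    return rms > threshold

# Half a second of 16 kHz audio taken from the decoded bitstream;
# uniform noise stands in for real samples here.
window = np.random.uniform(-0.2, 0.2, 8000)
print("high volume" if volume_exceeds(window) else "low volume")
```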
A feature region is extracted from the video bitstream (Step 104). The feature region may be extracted from the video bitstream by extracting an image from the video bitstream, detecting a head region of the image, and extracting the feature region from the head region. The feature region may be extracted from the head region according to color information of the image, texture information of the image, and/or edge information of the image. Typical edge detection can be achieved by applying a Sobel filter both horizontally and vertically. For texture recognition, the difference between each pixel within a certain region and its neighboring pixels may be computed, and the differences summarized in a histogram; the texture information is then reflected in the pattern of the histogram.
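These edge and texture measures can be sketched as follows; the SciPy-based Sobel magnitude and a right/down neighbor-difference histogram are standard techniques chosen for illustration, and the function names are ours:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(gray):
    """Gradient magnitude from horizontal and vertical Sobel filters."""
    g = gray.astype(np.float64)
    return np.hypot(sobel(g, axis=1), sobel(g, axis=0))

def texture_histogram(gray, bins=32):
    """Histogram of absolute differences between each pixel and its
    right and down neighbors; the histogram's pattern reflects texture."""
    g = gray.astype(np.float64)
    dx = np.abs(g[:, 1:] - g[:, :-1]).ravel()
    dy = np.abs(g[1:, :] - g[:-1, :]).ravel()
    hist, _ = np.histogram(np.concatenate([dx, dy]), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)  # normalized pattern
```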
The feature region may be modified according to the audio characteristic to generate a modified image (Step 106). Modification of the feature region may be accomplished in a number of different ways, including modifying coloration of the feature region according to the audio characteristic, and modifying texture of the feature region according to the audio characteristic. For example, if the audio characteristic is a very high volume, one type of modification may be made, whereas a very low volume may cause another type of modification.
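One hypothetical mapping from the audio characteristic to a coloration change (the red-channel tint and the scaling factor are illustrative choices only; the mask is assumed to be a boolean array selecting the feature region of an RGB image):

```python
import numpy as np

def tint_region(image, mask, volume):
    """Scale the red channel of the masked feature region in proportion
    to the measured volume (0.0 = very quiet, 1.0 = very loud)."""
    out = image.astype(np.float64).copy()
    out[mask, 0] = np.clip(out[mask, 0] * (0.5 + volume), 0, 255)
    return out.astype(np.uint8)

# e.g. modified = tint_region(frame, hair_mask, volume=0.9)
```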
The modified image may then be outputted through an output of the electronic device (Step 108). In addition to the modified image, the embodiment of the present invention may also output the audio bitstream. For example, the video bitstream may be divided into a plurality of video segments for efficiency in storage or in further processing, and the modified image may be embedded in at least one of the video segments. Similarly, the audio bitstream may be divided into a plurality of audio segments. The audio segments may then be synchronized with the modified video segments, and the synchronized modified video segments and audio segments may be outputted.
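A simplified sketch of the segmenting and synchronization described above; fixed one-second segments and index-based pairing are assumptions made for brevity:

```python
def split_segments(items, rate, seconds=1.0):
    """Divide a sequence of frames (or audio samples) into segments."""
    n = max(1, int(rate * seconds))
    return [items[i:i + n] for i in range(0, len(items), n)]

def synchronize(video_segments, audio_segments):
    """Pair each modified video segment with the audio segment covering
    the same time span (by index, assuming equal segment durations)."""
    return list(zip(video_segments, audio_segments))

# 90 frames at 30 fps and 48000 samples at 16 kHz both span three
# seconds, yielding three synchronized segment pairs.
pairs = synchronize(split_segments(list(range(90)), 30),
                    split_segments(list(range(48000)), 16000))
print(len(pairs))  # 3
```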
Please refer to FIG. 2a, which illustrates the program code 200 according to an embodiment of the present invention.
FIG. 2b shows cooperation of the modules of the program code 200 shown in FIG. 2a.
FIG. 2c is a diagram of the audio characteristic extractor 280 of FIG. 2b.
For the modifying module 293 to determine how to modify the image, the image processing module 290 may further comprise the database 295, which stores the behaviors corresponding to the audio characteristic.
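Such a database might be pictured as a lookup from audio characteristics to modification behaviors; the keys and behaviors below are hypothetical examples, not contents of the database 295:

```python
import numpy as np

# Hypothetical entries: each audio characteristic maps to a behavior
# applied to the pixels of the feature region.
BEHAVIOR_DB = {
    "volume_high": lambda px: np.clip(px * 1.4, 0, 255),  # brighten region
    "volume_low":  lambda px: np.clip(px * 0.7, 0, 255),  # darken region
}

def apply_behavior(image, mask, characteristic):
    """Apply the stored behavior for the given characteristic to the
    masked feature region; unknown characteristics leave the image as-is."""
    out = image.astype(np.float64).copy()
    behavior = BEHAVIOR_DB.get(characteristic)
    if behavior is not None:
        out[mask] = behavior(out[mask])
    return out.astype(np.uint8)
```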
The communication system shown in FIG. 3 comprises the transmitting computing device and the receiving computing device of the second embodiment.
Referring to FIG. 3, the transmitting computing device captures the image, records the sound, retrieves the audio characteristic from the recorded sound, modifies the image according to the audio characteristic, and sends the encoded modified image to the receiving computing device, which decodes and displays it.
Thus, the method, electronic device, and communication system according to embodiments of the present invention allow the user to enhance his/her video stream conveniently based on sounds being processed. This enhances the video chat experience, allows for greater interaction between users, and also provides entertainment for all users in the chat session.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention.