Certain embodiments of the invention relate to communication systems. More specifically, certain embodiments of the invention relate to a method and system for composing an image based on multiple captured images.
Image and video capabilities may be incorporated into a wide range of devices such as, for example, mobile phones, digital televisions, digital direct broadcast systems, digital recording devices, gaming consoles and the like. Mobile phones with built-in cameras, or camera phones, have become prevalent in the mobile phone market, due to the low cost of CMOS image sensors and the ever-increasing customer demand for more advanced mobile phones with image and video capabilities. As camera phones have become more widespread, their usefulness has been demonstrated in many applications, such as casual photography, but they have also been utilized in more serious applications such as crime prevention, recording crimes as they occur, and news reporting.
Historically, the resolution of camera phones has been limited in comparison to typical digital cameras because they must be integrated into the small package of a mobile handset, which limits both the image sensor and lens size. In addition, because of the stringent power requirements of mobile handsets, large image sensors with advanced processing have been difficult to incorporate. However, due to advancements in image sensors, multimedia processors, and lens technology, the resolution of camera phones has steadily improved, rivaling that of many digital cameras.
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
A system and/or method for composing an image based on multiple captured images, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
Certain embodiments of the invention can be found in a method and system for composing an image based on multiple captured images. In various embodiments of the invention, a mobile multimedia device may be operable to capture consecutive image samples of a scene, where the scene may comprise one or more objects that may be identifiable by the mobile multimedia device. An image of the scene may be created by the mobile multimedia device utilizing a plurality of the captured consecutive image samples based on the identifiable objects. In an exemplary embodiment of the invention, the identifiable objects may comprise one or more faces in the scene. The mobile multimedia device may be operable to identify the faces for each of the captured consecutive image samples utilizing face detection. In an exemplary embodiment of the invention, one or more smiling faces among the identified faces for each of the captured consecutive image samples may then be identified by the mobile multimedia device utilizing smile detection. At least a portion of the captured consecutive image samples may be selected by the mobile multimedia device based on the identified one or more smiling faces. The image of the scene may be composed utilizing the selected at least a portion of the captured consecutive image samples. In this instance, for example, the image of the scene may be composed in such a way that it comprises each of the identified smiling faces which may occur in the scene during a period of capturing the consecutive image samples.
In another exemplary embodiment of the invention, the identifiable object may comprise a moving object in the scene. The mobile multimedia device may be operable to identify the moving object for each of the captured consecutive image samples utilizing a motion detection circuit in the mobile multimedia device. The image of the scene may be composed by selecting at least a portion of the captured consecutive image samples based on the identified moving object. In this instance, for example, the image of the scene may be composed in such a way that the identified moving object, which may occur in the scene during a period of capturing the consecutive image samples, may be eliminated from the composed image of the scene.
The mobile multimedia device 105 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to communicate radio signals across a wireless communication network. The mobile multimedia device 105 may be operable to process image, video and/or multimedia data. The mobile multimedia device 105 may comprise a mobile multimedia processor (MMP) 105a, a memory 105t, a processor 105f, an antenna 105d, an audio block 105s, a radio frequency (RF) block 105e, an LCD display 105b, a keypad 105c and a camera 105g.
The mobile multimedia processor (MMP) 105a may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to perform image, video and/or multimedia processing for the mobile multimedia device 105. For example, the MMP 105a may be designed and optimized for video record/playback, mobile TV and 3D mobile gaming. The MMP 105a may perform a plurality of image processing techniques such as, for example, filtering, demosaic, lens shading correction, defective pixel correction, white balance, image compensation, Bayer interpolation, color transformation and post filtering. The MMP 105a may also comprise integrated interfaces, which may be utilized to support one or more external devices coupled to the mobile multimedia device 105. For example, the MMP 105a may support connections to a TV 105h, an external camera 105m, and an external LCD display 105p. The MMP 105a may be communicatively coupled to the memory 105t and/or the external memory 105n. In an exemplary embodiment of the invention, the MMP 105a may be operable to create or compose an image of the scene 110 utilizing a plurality of consecutive image samples of the scene 110 based on one or more identifiable objects in the scene 110. The identifiable objects may comprise, for example, the faces 110a and/or the moving objects 110e. The MMP 105a may comprise a motion detection circuit 105u.
The motion detection circuit 105u may comprise suitable logic, circuitry, interfaces and/or code that may be operable to detect a moving object such as, for example, the moving object 110e in the scene 110. The motion detection may be achieved by comparing the current image with a reference image and counting the number of different pixels.
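As one possible software realization of the comparison described above, the following Python sketch uses OpenCV to difference the current image against a reference image and count the pixels that changed; the function names, thresholds and decision rule are illustrative assumptions and do not describe the actual circuitry of the motion detection circuit 105u.

```python
# Minimal frame-differencing sketch: compare the current image with a reference
# image and count the pixels that differ by more than a threshold.
import cv2
import numpy as np

def count_changed_pixels(current_bgr, reference_bgr, diff_threshold=25):
    """Return the number of noticeably different pixels and the binary change mask."""
    current_gray = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    reference_gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    # Absolute per-pixel difference, thresholded into a binary change mask.
    diff = cv2.absdiff(current_gray, reference_gray)
    _, change_mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return int(np.count_nonzero(change_mask)), change_mask

def motion_detected(current_bgr, reference_bgr, min_changed_pixels=500):
    """Decide that motion is present if enough pixels changed between the frames."""
    changed, _ = count_changed_pixels(current_bgr, reference_bgr)
    return changed >= min_changed_pixels
```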
The processor 105f may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to control operations and processes in the mobile multimedia device 105. The processor 105f may be operable to process signals from the RF block 105e and/or the MMP 105a.
The memory 105t may comprise suitable logic, circuitry, interfaces and/or code that may be operable to store information such as executable instructions, data and/or database that may be utilized by the processor 105f and the multimedia processor 105a. The memory 105t may comprise RAM, ROM, low latency nonvolatile memory such as flash memory and/or other suitable electronic data storage.
In operation, the mobile multimedia device 105 may receive RF signals via the antenna 105d. Received RF signals may be processed by the RF block 105e and the RF signals may be further processed by the processor 105f. Audio and/or video data may be received from the external camera 105m, and image data may be received via the integrated camera 105g. During processing, the MMP 105a may utilize the external memory 105n for storing of processed data. Processed audio data may be communicated to the audio block 105s and processed video data may be communicated to the LCD 105b, the external LCD 105p and/or the TV 105h, for example. The keypad 105c may be utilized for communicating processing commands and/or other data, which may be required for image, audio or video data processing by the MMP 105a.
In an exemplary embodiment of the invention, the camera 105g may be operable to capture a plurality of consecutive image samples of the scene 110 from a viewing position, where the scene 110 may comprise one or more objects such as, for example, the faces 110a and/or the moving object 110e that may be identifiable by the MMP 105a. The captured consecutive image samples may be processed by the MMP 105a. An image of the scene 110 may be created or composed by the MMP 105a utilizing at least a portion of the image samples from a plurality of the captured consecutive image samples based on the identifiable objects such as the faces 110a and/or the moving object 110e. In instances when the identifiable objects may comprise one or more faces 110a in the scene 110, the MMP 105a may be operable to identify the faces 110a for each of the captured consecutive image samples employing face detection. The face detection may determine the locations and sizes of the faces 110a, such as human faces, in arbitrary images. The face detection may detect facial features and ignore other items and/or features, such as buildings, trees and bodies. One or more smiling faces 110b-110d among the identified faces 110a in a plurality of the captured consecutive image samples may then be identified by the MMP 105a employing smile detection. The smile detection may detect open eyes and an upturned mouth associated with a smiling face, such as the smiling face 110b, in the scene 110. The image of the scene 110 may be composed by selecting at least a portion of one or more of the plurality of the captured consecutive image samples based on the identified one or more smiling faces 110b-110d. In this instance, for example, the image of the scene 110 may be composed in such a way that it comprises each of the identified smiling faces 110b-110d which may occur in the scene 110 during the period when the consecutive image samples are captured.
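The face detection and smile detection described above could, for example, be realized in software with OpenCV's stock Haar cascades, as in the hedged sketch below; the cascade files and tuning parameters are illustrative choices, and the source does not specify which particular detector the MMP 105a uses.

```python
# Two-stage detection: locate faces first, then test each face region for a smile.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def find_smiling_faces(image_bgr):
    """Return bounding boxes (x, y, w, h) of faces judged to be smiling."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    smiling = []
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        # A face counts as smiling if the smile cascade fires inside its region.
        smiles = smile_cascade.detectMultiScale(face_roi, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) > 0:
            smiling.append((x, y, w, h))
    return smiling
```

Applying the smile cascade only within each detected face region mirrors the two-stage approach described above, in which faces are located first and each face is then tested for a smile.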
In instances when the identifiable object may comprise a moving object 110e in the scene 110, for example, the MMP 105a may be operable to identify the moving object 110e on at least a portion of the plurality of the captured consecutive image samples utilizing, for example, the motion detection circuit 105u in the MMP 105a. The image of the scene 110 may be composed by selecting at least a portion of the plurality of the captured consecutive image samples based on the identified moving object 110e. In this instance, for example, the image of the scene 110 may be composed in such a way that the identified moving object 110e, which may occur in the scene 110 during the period when the consecutive image samples are captured, may be eliminated from the composed image of the scene 110.
The consecutive image samples 201, 202, 203 may be captured by the camera 105g at a viewing position. During the period when the consecutive image samples 201, 202, 203 are captured, the smiling face 201a is captured in the image sample 201, the smiling face 202b is captured in the image sample 202 and the smiling face 203c is captured in the image sample 203, for example. In an exemplary embodiment of the invention, the MMP 105a may be operable to identify the faces 201a-201c in the image sample 201, the faces 202a-202c in the image sample 202 and the faces 203a-203c in the image sample 203, respectively, employing the face detection. The smiling face 201a among the faces 201a-201c in the image sample 201, the smiling face 202b among the faces 202a-202c in the image sample 202 and the smiling face 203c among the faces 203a-203c in the image sample 203 may then be identified, respectively, by the MMP 105a employing the smile detection. The image 204 of the scene 210 may be composed by selecting at least a portion of the plurality of the captured consecutive image samples 201, 202, 203 based on the identified smiling faces 201a, 202b, 203c. For example, the image 204 of the scene 210 may be composed in such a way that it may comprise two or more of the smiling faces 204a, 204b, 204c. The smiling face 204a may be extracted from the smiling face 201a in the image sample 201, the smiling face 204b may be extracted from the smiling face 202b in the image sample 202 and the smiling face 204c may be extracted from the smiling face 203c in the image sample 203. In some embodiments of the invention, it may be determined that one or more of the captured image samples should not be used. In this regard, those captured image samples that should not be utilized may be discarded and the remaining captured image samples may be utilized to create the image 204. For example, the image sample 202 containing the smiling face 202b may be discarded and the image samples 201 and 203 may be utilized to generate or compose the image 204.
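A minimal sketch of this composition step is shown below, assuming the consecutive samples are captured from the same viewing position and are roughly aligned; the helper compose_with_smiles and the simple patch-copy approach are illustrative assumptions rather than the actual blending logic of the MMP 105a.

```python
# Paste smiling-face regions found in the other samples into one base sample.
import numpy as np

def compose_with_smiles(samples, smiling_regions_per_sample, base_index=0):
    """Compose an image containing the smiling-face regions from all samples.

    samples: list of HxWx3 uint8 arrays captured from the same viewing position.
    smiling_regions_per_sample: one list of (x, y, w, h) boxes per sample.
    """
    composed = samples[base_index].copy()
    for idx, regions in enumerate(smiling_regions_per_sample):
        if idx == base_index:
            continue
        for (x, y, w, h) in regions:
            # Copy the smiling-face patch from sample idx into the composed image.
            composed[y:y + h, x:x + w] = samples[idx][y:y + h, x:x + w]
    return composed
```

The per-sample smiling-face boxes could, for instance, be produced by the find_smiling_faces sketch given earlier.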
The consecutive image samples 301, 302, 303 may be captured by the camera 105g at a position or particular viewing angle. During the period when the consecutive image samples 301, 302, 303 are captured, the moving object 301a is captured in the image sample 301, the moving object 302a is captured in the image sample 302 and the moving object 303a is captured in the image sample 303, for example. In an exemplary embodiment of the invention, the MMP 105a may be operable to identify the moving object 301a in the image sample 301, the moving object 302a in the image sample 302 and the moving object 303a in the image sample 303, respectively, utilizing the motion detection circuit 105u in the MMP 105a. The image 304 of the scene 310 may be composed by selecting at least a portion of the image samples from a plurality of the captured consecutive image samples 301, 302, 303 based on the identified moving objects 301a, 302a, and 303a. For example, the image 304 of the scene 310 may be composed in such a way that it does not comprise the identified moving objects 301a, 302a, 303a which may occur in the scene 310 during the period when the consecutive image samples 301, 302, 303 are captured.
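One common way to approximate the elimination described above, assuming the samples are captured from a fixed position, is a per-pixel temporal median over the burst: any location covered by the moving object in only a minority of the samples is filled from the samples in which the static background is visible. The sketch below is an illustrative approximation, not necessarily the exact per-region selection performed with the motion detection circuit 105u.

```python
# Remove transient moving objects by taking the per-pixel median over the burst.
import numpy as np

def remove_moving_objects(samples):
    """samples: list of HxWx3 uint8 arrays captured from the same position."""
    stack = np.stack(samples, axis=0)        # shape (N, H, W, 3)
    background = np.median(stack, axis=0)    # per-pixel temporal median
    return background.astype(np.uint8)
```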
In step 505, the MMP 105a in the mobile multimedia device 105 may be operable to discard one or more of the plurality of the captured consecutive image samples 201, 202, 203 based on the determination. For example, the captured image sample 202 may be discarded. In step 506, the remaining captured consecutive image samples 201, 203 may be utilized by the MMP 105a to create the image 204 based on the identifiable objects. In some embodiments of the invention, in instances where the captured image sample 202 is discarded, the discarded image sample may be replaced by an interpolated picture or a repeated picture. In step 507, the LCD 105b in the mobile multimedia device 105 may be operable to display the created or composed image 204 of the scene 210. The exemplary steps may proceed to the end step 508.
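The replacement of a discarded sample by a repeated or interpolated picture could be sketched as follows; the neighbor-averaging interpolation and the function name are assumptions for illustration, as the source does not prescribe a particular interpolation method.

```python
# Replace a discarded sample with a repeated neighbor or a simple interpolation.
import numpy as np

def replace_discarded(samples, discard_index, mode="interpolate"):
    """Return a copy of the sample list with the discarded sample replaced."""
    prev_sample = samples[max(discard_index - 1, 0)]
    next_sample = samples[min(discard_index + 1, len(samples) - 1)]
    if mode == "repeat":
        substitute = prev_sample.copy()      # repeated picture
    else:
        # Interpolated picture: a plain average of the neighboring samples.
        substitute = ((prev_sample.astype(np.uint16) +
                       next_sample.astype(np.uint16)) // 2).astype(np.uint8)
    replaced = list(samples)
    replaced[discard_index] = substitute
    return replaced
```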
In various embodiments of the invention, a camera 105g in a mobile multimedia device 105 may be operable to capture consecutive image samples such as image samples 201, 202, 203 of a scene 210, where the scene 210 may comprise one or more identifiable objects, which may be identified by the MMP 105a in the mobile multimedia device 105. An image such as the image 204 of the scene 210 may be created by the MMP 105a in the mobile multimedia device 105 utilizing a plurality of the captured consecutive image samples 201, 202, 203 based on the identifiable objects. In instances when the identifiable objects may comprise one or more faces 210a-210c in the scene 210, the MMP 105a in the mobile multimedia device 105 may be operable to identify the faces such as the faces 201a-201c for a captured image sample such as the image sample 201 utilizing face detection. One or more smiling faces such as the smiling face 201a among the identified faces such as the faces 201a-201c for a captured image sample such as the image sample 201 may then be identified by the MMP 105a in the mobile multimedia device 105 utilizing smile detection. At least a portion of the captured consecutive image samples 201, 202, 203 may be selected by the MMP 105a based on the identified one or more smiling faces 201a, 202b, 203c. The image 204 of the scene 210 may be composed utilizing the selected at least a portion of the captured consecutive image samples 201, 202, 203 based on the identified one or more smiling faces 201a, 202b, 203c. In this instance, for example, the image 204 of the scene 210 may be composed in such a way that it comprises each of the identified smiling faces 210a, 210b, 210c which may occur in the scene 210 during a period of capturing the consecutive image samples 201, 202, 203.
In instances when the identifiable object may comprise a moving object 310a in the scene 310, for example, the MMP 105a in the mobile multimedia device 105 may be operable to identify the moving object such as the moving object 301a for a captured consecutive image sample such as the image sample 301 utilizing a motion detection circuit 105u in the MMP 105a. The image 304 of the scene 310 may be composed by selecting at least a portion of the captured consecutive image samples 301, 302, 303 based on the identified moving objects 301a, 302a, 303a. In this instance, for example, the image 304 of the scene 310 may be composed in such a way that the identified moving object 310a, which may occur in the scene 310 during a period of capturing the consecutive image samples 301, 302, 303, may be eliminated from the composed image 304 of the scene 310.
Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for composing an image based on multiple captured images.
Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
This patent application makes reference to, claims priority to, and claims benefit from U.S. Provisional Application Ser. No. 61/316,865, which was filed on Mar. 24, 2010. The above stated application is hereby incorporated herein by reference in its entirety.