PHOTO SYNTHESIZING METHOD, DEVICE, AND MEDIUM

Abstract
A photo synthesizing method, device and medium are provided. The method includes: starting an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; when acquiring a current photo, calculating an expression score value for each of first-type faces in the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determining whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, controlling the image acquisition component to stop acquiring the photos, and generating the synthesized photo by stitching second-type faces in the acquired photos, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims priority to Chinese Patent Application No. 201611078279.0, filed Nov. 29, 2016, the entire contents of which are incorporated herein by reference.


TECHNICAL FIELD

The present disclosure generally relates to the technical field of smart photographing, and more particularly, to a photo synthesizing method, device and medium.


BACKGROUND

With the development of photographing technology, more and more users like to record their daily travels or gatherings with friends in photos. However, it is relatively difficult to capture a single photo in which every person has a good facial expression. Typically, when photographing multiple people, an image capturing apparatus may perform a group-photo preference operation that grades the faces in each generated photo and extracts the best-scoring face of each person from the photos for synthesis.


SUMMARY

Embodiments of the present disclosure provide a photo synthesizing method, device and medium.


According to a first aspect of embodiments of the present disclosure, there is provided a photo synthesizing method, which may include: starting an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; when acquiring a current photo, calculating an expression score value for each of first-type faces in the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determining whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, controlling the image acquisition component to stop acquiring the photos, and generating the synthesized photo by stitching second-type faces in the acquired photos, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.


According to a second aspect of embodiments of the present disclosure, there is provided a photo synthesizing device, which may include: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to: start an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; when acquiring a current photo, calculate an expression score value for each of first-type faces in the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determine whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, control the image acquisition component to stop acquiring the photos, and generate the synthesized photo by stitching second-type faces in the acquired photos, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.


According to a third aspect of the embodiments of the present disclosure, there is provided a non-transitory readable storage medium including instructions, executable by a processor in a camera or an electronic device including an image capturing device, for performing a photo synthesizing method, the method including: starting an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; when acquiring a current photo, calculating an expression score value for each of first-type faces in the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determining whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, controlling the image acquisition component to stop acquiring the photos, and generating the synthesized photo by stitching second-type faces in the acquired photos, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.


It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.



FIG. 1 is a flow chart of a photo synthesizing method according to an exemplary embodiment.



FIG. 2 is a flow chart of a photo synthesizing method according to a first exemplary embodiment.



FIG. 3 is a flow chart of a method for calculating an expression score value of a face according to a second exemplary embodiment.



FIG. 4 is a block diagram of a photo synthesizing device according to an exemplary embodiment.



FIG. 5 is a block diagram of another photo synthesizing device according to an exemplary embodiment.



FIG. 6 is a block diagram of still another photo synthesizing device according to an exemplary embodiment.



FIG. 7 is a block diagram of a device suitable for photo synthesizing according to an exemplary embodiment.





DETAILED DESCRIPTION

Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims.



FIG. 1 is a flow chart of a photo synthesizing method according to an exemplary embodiment. The photo synthesizing method may be applied to a camera or an electronic device (such as a smartphone or a tablet computer) including an image capturing device. As shown in FIG. 1, the photo synthesizing method includes the following steps.


In step 101, upon receiving an instruction of generating a synthesized photo, an image acquisition component is started to acquire photos.


In an embodiment, the instruction of generating a synthesized photo may be triggered by a touch screen or a physical button.


In an embodiment, the number of photos acquired by the image acquisition component cannot exceed a preset number, such as 4. The actual number of acquired photos depends on the expression score values of the faces in the first captured photo. For example, if the expression score values of all the faces in the first photo are greater than a preset score threshold value, capturing only one photo is sufficient.


In an embodiment, the preset number may be set by the user, or may be set in advance and stored in a memory by a provider of the image capturing device.


In step 102, when acquiring a current photo, the expression score value for each of first-type faces in the current photo is calculated.


In an embodiment, each of the first-type faces is used to represent a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value. For example, suppose there are four faces in the photo: Face A, Face B, Face C, and Face D. When acquiring the first photo, the first-type faces include Face A, Face B, Face C, and Face D, and the expression score values of all four faces need to be calculated. If, in the first photo, the expression score value for each of Face A, Face B, and Face C is greater than the preset score threshold value, then for the second photo the first-type faces include only Face D, and when generating the second photo, only the expression score value of Face D needs to be calculated.
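For illustration only, the bookkeeping described above might be implemented as a simple set update, as in the following sketch; the face identifiers, the score_face function, and the threshold constant are assumptions introduced here and not part of the disclosure.

```python
# Illustrative sketch only: track which faces still need rescoring in the next
# photo. Face IDs, score_face(), and SCORE_THRESHOLD are assumed for this example.
SCORE_THRESHOLD = 80  # preset score threshold value (e.g., 80 points)

def update_first_type_faces(photo, first_type_faces, best_faces, score_face):
    """Rescore only the first-type faces in the newly acquired photo.

    first_type_faces: set of face IDs whose scores were not greater than the
                      threshold in previously acquired photos.
    best_faces:       dict mapping face ID -> (photo, score) for faces already
                      good enough for stitching (second-type faces).
    score_face:       callable(photo, face_id) -> expression score value.
    """
    still_low = set()
    for face_id in first_type_faces:
        score = score_face(photo, face_id)
        if score > SCORE_THRESHOLD:
            best_faces[face_id] = (photo, score)  # becomes a second-type face
        else:
            still_low.add(face_id)                # rescored in the next photo
    return still_low
```

With the four-face example above, a first call might return {"D"}, so only Face D would be rescored when the second photo is acquired.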


In an embodiment, the expression score value of each face may be measured based on features such as the eyes, the mouth, the face orientation, and the face image quality.


In an embodiment, each time a photo is generated, the expression score value of a face may be calculated by using a preset image processing algorithm.


In an embodiment, for the process of calculating the expression score value of a face, reference may be made to the embodiment shown in FIG. 3, which will not be elaborated herein.


In step 103, in the current photo, it is determined whether the expression score value for each of the first-type faces is greater than the preset score threshold value, and if the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, step 104 is performed.


In an embodiment, the preset score threshold value may be a reasonable score, such as 80 points, and an expression that reaches the preset score threshold value is considered good enough for generating the synthesized photo.


In step 104, the image acquisition component is controlled to stop the acquisition of the photos, and second-type faces in the acquired photos are stitched to generate the synthesized photo.


In an embodiment, each of the second-type faces is used to represent a face whose expression score value is greater than the preset score threshold value.


In an embodiment, a synthesized photo may be generated by stitching faces whose expression score values are greater than the preset score threshold value in the acquired photos.
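Purely as an illustration of one way such stitching could be performed (assuming the acquired photos are already mutually aligned, which the disclosure itself does not require), the best-scoring face regions could be copied into a base frame; the array shapes and helper below are assumptions for this sketch.

```python
# Illustrative stitching sketch (not the claimed implementation): paste each
# best-scoring face region into a base photo. Photos are assumed to be aligned
# numpy arrays of identical shape; a real system would typically blend edges.
import numpy as np

def stitch_faces(base_photo: np.ndarray, best_faces) -> np.ndarray:
    """best_faces: iterable of (source_photo, (x, y, w, h)) for second-type faces."""
    result = base_photo.copy()
    for source_photo, (x, y, w, h) in best_faces:
        result[y:y + h, x:x + w] = source_photo[y:y + h, x:x + w]
    return result
```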


In the present embodiment, upon receiving the instruction of generating the synthesized photo, the image acquisition component is started to acquire the photos, and every time one photo is acquired, the expression score value for each of the first-type faces in the current photo is calculated. It is then determined whether the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, and when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, the image acquisition component is controlled to stop acquiring the photos, and the synthesized photo is generated from the acquired photos in a stitching manner. In the present disclosure, every time one photo is generated, it is possible to calculate only the expression score values of the faces having relatively low expression score values in the previously generated photos, whereby the number of generated photos can be effectively reduced while still ensuring that a synthesized photo with a good effect is generated. Meanwhile, the amount of calculation for the expression score values of the faces in the photos is reduced, so that the time for generating the synthesized photo is effectively shortened and the power consumption of generating the synthesized photo is reduced.


In an embodiment, the method further includes: if not all the expression score values of the first-type faces in the current photo are greater than the preset score threshold value, determining whether the number of the acquired photos is less than a preset number; and if the number of the acquired photos is less than the preset number, determining, based on the expression score values of the first-type faces in the current photo, the first-type faces whose expression score values are to be calculated in a later acquired photo, and starting the image acquisition component to continue acquiring photos.


In an embodiment, the method further includes: if the number of the acquired photos is not less than the preset number, determining faces for generating the synthesized photo from the acquired photos and generating the synthesized photo by a stitching manner.


In an embodiment, the determining the faces for generating the synthesized photo from the acquired photos includes: selecting the second-type faces from the acquired photos and a face having the highest expression score value among the first-type faces as the faces for generating the synthesized photo.


In an embodiment, calculating the expression score value for each of the first-type faces in the current photo includes: identifying each of the first-type faces from the current photo; calculating a local score value for a local feature corresponding to each of the first-type faces; and weighting the local score value for each of the first-type faces to obtain the expression score value for each of the first-type faces.


For details on how to generate the synthesized photo, the following embodiments may be referred to.



FIG. 2 is a flow chart of a photo synthesizing method according to a first exemplary embodiment. The present embodiment utilizes the above-described method provided by the embodiments of the present disclosure, and is illustrated by taking the generation of a synthesized photo as an example. As shown in FIG. 2, the method includes the following steps.


In step 201, upon receiving an instruction of generating a synthesized photo, an image acquisition component is started to acquire photos.


In step 202, when acquiring a current photo, the expression score value for each of first-type faces in the current photo is calculated.


In an embodiment, each of the first-type faces represents a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value.


In an embodiment, for step 201 and step 202, reference may be made to the description of step 101 and step 102 in the embodiment shown in FIG. 1, which will not be elaborated herein.


In step 203, in the current photo, it is determined whether the expression score value for each of the first-type faces is greater than the preset score threshold value; if the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, step 204 is performed, and if not every expression score value of the first-type faces in the current photo is greater than the preset score threshold value, step 205 is performed.


In an embodiment, the preset score threshold value may be a reasonable score, such as 80 points. For example, if the current photo is the second captured photo and only the expression score values of Face A and Face B in the first photo are not greater than the preset score threshold value, then the expression score values of Face A and Face B in the second photo can be calculated, and it is determined whether the expression score values of Face A and Face B in the second photo are greater than the preset score threshold value.


In step 204, the image acquisition component is controlled to stop the acquisition of the photos, and the second-type faces in the acquired photos are stitched to generate the synthesized photo.


In an embodiment, each of the second-type faces represents a face whose expression score value is greater than the preset score threshold value.


In step 205, it is determined whether the number of the acquired photos is less than a preset number, if the number of the acquired photos is less than the preset number, step 206 is performed, and if the number of the acquired photos is not less than the preset number, step 207 is performed.


In step 206, based on the expression score values of the first-type faces in the current photo, the first-type faces whose expression score values are to be calculated in a later acquired photo are determined, and step 201 is performed again.


For example, continuing the example of step 203, if in the second photo only the expression score value of Face A is greater than the preset score threshold value, then it can be determined that in the third photo, only the expression score value of Face B needs to be calculated.


In step 207, faces for generating the synthesized photo are determined from the acquired photos, and the synthesized photo is generated by a stitching manner.


In an embodiment, it is possible to select the second-type faces from the acquired photos and a face having the highest expression score value among the first-type faces as the faces for generating the synthesized photo. For example, if the expression score values of Face A, Face C, and Face D in the first photo are greater than the preset score threshold value, then the second-type faces include Face A, Face C, and Face D in the first photo, i.e., Face A, Face C, and Face D in the first photo are faces for generating the synthesized photo. For Face B, the expression score value in the first photo is 70 points, the expression score value in the second photo is 72 points, the expression score value in the third photo is 75 points, and the expression score value in the fourth photo is 79 points. If the preset number is 4, Face B in the fourth photo may be selected as the face for generating the synthesized photo.
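For orientation only, steps 201 through 207 can be read as the single control loop sketched below. The helpers capture_photo, detect_faces, score_face, and stitch, as well as the constants, are assumptions introduced for this sketch rather than elements of the disclosed device.

```python
# Illustrative driver loop for steps 201-207; all helpers are hypothetical.
PRESET_NUMBER = 4      # maximum number of photos to acquire
SCORE_THRESHOLD = 80   # preset score threshold value

def synthesize_group_photo(capture_photo, detect_faces, score_face, stitch):
    photo = capture_photo()                         # step 201
    first_type = set(detect_faces(photo))           # initially, every face
    best = {}                                       # face_id -> (photo, score)
    fallback = {}                                   # best attempt of low scorers
    photos_taken = 0
    while True:
        photos_taken += 1
        still_low = set()
        for face_id in first_type:                  # step 202
            score = score_face(photo, face_id)
            if score > SCORE_THRESHOLD:
                best[face_id] = (photo, score)      # second-type face
            else:
                still_low.add(face_id)
                if score > fallback.get(face_id, (None, -1))[1]:
                    fallback[face_id] = (photo, score)
        if not still_low:                           # step 203 -> step 204
            return stitch(best)
        if photos_taken >= PRESET_NUMBER:           # step 205 -> step 207
            for face_id in still_low:
                best[face_id] = fallback[face_id]   # highest-scoring attempt
            return stitch(best)
        first_type = still_low                      # step 206
        photo = capture_photo()                     # acquire the next photo
```

Under the numerical example above, Face B would be taken from the fourth photo with its score of 79 points once the preset number of four photos is reached.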


In this embodiment, by limiting the number of generated photos, the number of photos used for generating the synthesized photo can be effectively reduced while ensuring the quality of the synthesized photo. Furthermore, when the expression score value of a face is not greater than the preset score threshold value in any of the acquired photos, the instance of that face having the highest expression score value is determined as the face for generating the synthesized photo, which further reduces the number of target photos and improves the speed of generating the synthesized photo.



FIG. 3 is a flow chart of a method for calculating an expression score value of a face according to a second exemplary embodiment. The present embodiment utilizes the above-described method provided by the embodiments of the present disclosure, and is illustrated by taking the calculation of the expression score value of a face as an example. As shown in FIG. 3, the method includes the following steps.


In step 301, each of the first-type faces is identified from the current photo.


In an embodiment, each face in each photo may be identified by an image recognition model, such as a convolutional neural network.


In an embodiment, each face region may also be identified by other image processing techniques.
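As one concrete example of such other techniques (an illustration only; the disclosure leaves the recognition model open), face regions could be located with OpenCV's classical Haar cascade detector:

```python
# Illustrative only: locating face regions with OpenCV's Haar cascade detector.
import cv2

def detect_face_regions(image_path: str):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Returns a sequence of (x, y, w, h) rectangles, one per detected face.
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```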


In step 302, a local score value for a local feature corresponding to each of the first-type faces is calculated.


In an embodiment, when calculating the expression score value of a face, the local score value corresponding to each local feature of the face, such as the mouth corner, the human eyes, the facial clarity, and the face tilt angle, may be calculated first.


In an embodiment, the local score value of each local feature of the face may be calculated by a pre-trained model. In a further embodiment, the local score value of each local feature of the face may also be calculated by a preset algorithm.


In step 303, the local score value for each of the first-type faces is weighted to obtain the expression score value for each of the first-type faces.


In an embodiment, the weighting coefficient corresponding to each local score value may be set by the user or may be preset by an algorithm. For example, the weighting coefficients of the local score values corresponding to the human eye, the mouth corner, and the face tilt angle are 0.3, 0.3, and 0.4 respectively; if the corresponding local score values are 8.0, 8.3, and 8.4 respectively, then the obtained final expression score value is 8.0×0.3+8.3×0.3+8.4×0.4=8.25.
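The numerical example above is simply a weighted sum of the local score values; the short sketch below reproduces that arithmetic with hypothetical feature names.

```python
# Reproducing the worked example: local score values combined with weighting
# coefficients that sum to 1.0 (the feature names are illustrative only).
local_scores = {"eye": 8.0, "mouth_corner": 8.3, "tilt_angle": 8.4}
weights      = {"eye": 0.3, "mouth_corner": 0.3, "tilt_angle": 0.4}

expression_score = sum(local_scores[k] * weights[k] for k in local_scores)
print(round(expression_score, 2))  # 8.25
```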


In the present embodiment, by calculating the local score values of each face, such as an eye score value, a mouth score value, and a face orientation score value, and then weighting them to obtain the expression score value, the facial expression can be evaluated from multiple aspects, so that the scoring of the facial expression is more comprehensive.



FIG. 4 is a block diagram of a photo synthesizing device according to an exemplary embodiment. As shown in FIG. 4, the photo synthesizing device includes: an acquisition module 410, a calculation module 420, a first determination module 430, and a generation module 440.


The acquisition module 410 is configured to start an image acquisition component to acquire a photo after receiving an instruction of generating a synthesized photo.


The calculation module 420 is configured to, when acquiring a current photo, calculate an expression score value for each of first-type faces in the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value.


The first determination module 430 is configured to determine whether the expression score value for each of the first-type faces calculated by the calculation module 420 is greater than the preset score threshold value in the current photo.


The generation module 440 is configured to, if the first determination module 430 determines that the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, control the image acquisition component to stop acquiring the photos, and generate the synthesized photo by stitching second-type faces in the acquired photos, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.



FIG. 5 is a block diagram of another photo synthesizing device according to an exemplary embodiment. As shown in FIG. 5, on the basis of the above embodiment shown in FIG. 4, in an embodiment, the device further includes: a second determination module 450, and a performance module 460.


The second determination module 450 is configured to, if the first determination module 430 determines that not every expression score value of the first-type faces in the current photo is greater than the preset score threshold value, determine whether the number of the acquired photos is less than a preset number.


The performance module 460 is configured to, if the second determination module 450 determines that the number of the acquired photos is less than the preset number, determine the first-type faces whose expression score values are to be calculated in a later acquired photo based on the expression score values of the first-type faces in the current photo, and start the image acquisition component to acquire the photo.


In an embodiment, the device further includes: a third determination module 470. The third determination module 470 is configured to, if the second determination module 450 determines that the number of the acquired photos is not less than a preset number, determine faces for generating the synthesized photo from the acquired photos and generate the synthesized photo by a stitching manner.


In an embodiment, the third determination module 470 includes: a selection submodule 471. The selection submodule 471 is configured to select the second-type faces from the acquired photos and a face having the highest expression score value among the first-type faces as the faces for generating the synthesized photo.



FIG. 6 is a block diagram of still another photo synthesizing device according to an exemplary embodiment. As shown in FIG. 6, on the basis of the above embodiment shown in FIG. 4 or FIG. 5, in an embodiment, the calculation module 420 includes: an identification submodule 421, a calculation submodule 422, and a weighting submodule 423.


The identification submodule 421 is configured to identify each of the first-type faces from the current photo.


The calculation submodule 422 is configured to calculate a local score value for a local feature corresponding to each of the first-type faces.


The weighting submodule 423 is configured to weight the local score value for each of the first-type faces to obtain the expression score value for each of the first-type faces.


For the specific implementation of the functions and actions of the individual modules in the above device, reference may be made to the implementation of the corresponding steps in the above methods, which will not be elaborated herein.


For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant descriptions in the method embodiments. The device embodiments described above are merely illustrative, wherein the units illustrated as separate components may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, i.e., it may be located at one location, or may be distributed over multiple network units. A part or all of the modules may be selected according to actual requirements to achieve the purpose of the solution in the present disclosure. Persons skilled in the art can understand and implement the present disclosure without creative efforts.



FIG. 7 is a block diagram of a device suitable for photo synthesizing according to an exemplary embodiment. For example, the device 700 may be a camera or an electronic apparatus including an image capturing device.


Referring to FIG. 7, the device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.


The processing component 702 typically controls overall operations of the device 700, such as the operations associated with display, voice playing, data communications, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps in the above described methods. Moreover, the processing component 702 may include one or more modules which facilitate the interaction between the processing component 702 and other components. For instance, the processing component 702 may include a multimedia module to facilitate the interaction between the multimedia component 708 and the processing component 702.


The memory 704 is configured to store various types of data to support the operation of the device 700. Examples of such data include instructions for any applications or methods operated on the device 700, messages, photos, etc. The memory 704 may be implemented using any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic or optical disk.


The power component 706 provides power to various components of the device 700. The power component 706 may include a power management system, one or more power sources, and any other components associated with the generation, management, and distribution of power in the device 700.


The multimedia component 708 includes a screen providing an output interface between the device 700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense a boundary of a touch or swipe action, but also sense a period of time and a pressure associated with the touch or swipe action.


The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a microphone (“MIC”) configured to receive an external audio signal when the device 700 is in an operation mode, such as a call mode, a recording mode, and a voice identification mode. The received audio signal may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker to output audio signals.


The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, such as a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to, a home button, a volume button, a starting button, and a locking button.


The sensor component 714 includes one or more sensors to provide status assessments of various aspects of the device 700. For instance, the sensor component 714 may detect an open/closed status of the device 700, relative positioning of components, e.g., the display and the keypad, of the device 700, a change in position of the device 700 or a component of the device 700, a presence or absence of user contact with the device 700, an orientation or an acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor component 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 714 may also include an accelerometer sensor, a gyroscope sensor, a magnetic sensor, a distance sensor, a pressure sensor, or a temperature sensor.


The communication component 716 is configured to facilitate communication, wired or wirelessly, between the device 700 and other devices. The device 700 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on a radio frequency identification (RFID) technology, an infrared data association (IrDA) technology, an ultra-wideband (UWB) technology, a Bluetooth (BT) technology, and other technologies.


In exemplary embodiments, the device 700 may be implemented with one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the following method: starting an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; when acquiring a current photo, calculating an expression score value for each of first-type faces in the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determining whether the expression score value of each and every first-type face is greater than the preset score threshold value in the current photo; and when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, controlling the image acquisition component to stop acquiring the photos, and generating the synthesized photo by stitching second-type faces in the acquired photos, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.


In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, the above instructions are executable by the processor 720 in the device 700, for performing the above-described methods. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device, and the like.


Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the invention following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.


It will be appreciated that the present disclosure is not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. It is intended that the scope of the invention only be limited by the appended claims.

Claims
  • 1. A photo synthesizing method, comprising: starting an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; calculating an expression score value for each of first-type faces in a current photo when acquiring the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determining whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and controlling the image acquisition component to stop acquiring the photos and generating the synthesized photo by stitching second-type faces in the acquired photos when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.
  • 2. The method of claim 1, further comprising: determining whether the number of the acquired photos is less than a preset number when not every expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value; and determining the first-type faces whose expression score values are to be calculated in a later acquired photo based on the expression score values of the first-type faces in the current photo and starting the image acquisition component to acquire the photos when the number of the acquired photos is less than the preset number.
  • 3. The method of claim 2, further comprising: determining faces for generating the synthesized photo from the acquired photos and generating the synthesized photo by a stitching manner when the number of the acquired photos is not less than a preset number.
  • 4. The method of claim 3, wherein the determining the faces for generating the synthesized photo from the acquired photos comprises: selecting the second-type faces from the acquired photos and a face having the highest expression score value among the first-type faces as the faces for generating the synthesized photo.
  • 5. The method of claim 1, wherein the calculating the expression score value for each of the first-type faces in the current photo comprises: identifying each of the first-type faces from the current photo; calculating a local score value for a local feature corresponding to each of the first-type faces; and weighting the local score value for each of the first-type faces to obtain the expression score value for each of the first-type faces.
  • 6. A photo synthesizing device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to: start an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; calculate an expression score value for each of first-type faces in a current photo when acquiring the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired prior to the current photo is not greater than a preset score threshold value; determine whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and control the image acquisition component to stop acquiring the photos and generate the synthesized photo by stitching second-type faces in the acquired photos when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.
  • 7. The device of claim 6, wherein the processor is further configured to: determine whether the number of the acquired photos is less than a preset number when not every expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value; and determine the first-type faces whose expression score values are to be calculated in a later acquired photo based on the expression score values of the first-type faces in the current photo and start the image acquisition component to acquire the photos when the number of the acquired photos is less than the preset number.
  • 8. The device of claim 7, wherein the processor is further configured to: determine faces for generating the synthesized photo from the acquired photos and generate the synthesized photo by a stitching manner when the number of the acquired photos is not less than the preset number.
  • 9. The device of claim 8, wherein the processor configured to determine the faces for generating the synthesized photo from the acquired photos is further configured to: select the second-type faces from the acquired photos and a face having the highest expression score value among the first-type faces as the faces for generating the synthesized photo.
  • 10. The device of claim 6, wherein the processor configured to calculate the expression score value for each of the first-type faces in the current photo is further configured to: identify each of the first-type faces from the current photo; calculate a local score value for a local feature corresponding to each of the first-type faces; and weight the local score value for each of the first-type faces to obtain the expression score value for each of the first-type faces.
  • 11. A non-transitory readable storage medium comprising instructions, executable by a processor in a camera or an electronic device including an image capturing device, for performing a photo synthesizing method, the method comprising: starting an image acquisition component to acquire photos after receiving an instruction of generating a synthesized photo; calculating an expression score value for each of first-type faces in a current photo when acquiring the current photo, each of the first-type faces indicating a face whose expression score value obtained by calculating a photo acquired before the current photo is not greater than a preset score threshold value; determining whether the expression score value for each of the first-type faces is greater than the preset score threshold value in the current photo; and controlling the image acquisition component to stop acquiring the photos and generating the synthesized photo by stitching second-type faces in the acquired photos when the expression score value for each of the first-type faces in the current photo is greater than the preset score threshold value, each of the second-type faces indicating a face whose expression score value is greater than the preset score threshold value.