1. Field
This application relates generally to a facial capture analysis and training system, and more specifically to evaluating a facial performance using facial capture.
2. Description of the Related Art
Facial capture refers to the process of capturing a digital representation of a person's face using, for example, a video camera, laser scanner, or sensors. The resulting digital representation can then be analyzed to identify and track facial landmarks (such as the location of the lips and eyes) and determine facial movements or expressions, for example. Facial capture is sometimes used for generating CGI avatars for movies or games.
In some cases, it may be useful to compare a user's facial performance to a reference facial performance, such as when one person is attempting to reproduce the facial movements of another person. There is a need for systems and methods that use facial capture to compare the facial performance of a user to a reference facial performance, analyze how well they match, and provide automated feedback on the match quality.
In accordance with some embodiments, a method for evaluating a facial performance using facial capture of two users includes: obtaining a reference set of facial performance data representing a first user's facial capture; obtaining a facial capture of a second user; extracting a second set of facial performance data based on the second user's facial capture; calculating at least one matching metric based on a comparison of the reference set of facial performance data to the second set of facial performance data; and displaying an indication of the at least one matching metric on a display.
In accordance with some embodiments, a system for evaluating a facial performance using facial capture of two users includes: a display; a camera/scanner; one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: obtaining a reference set of facial performance data representing a first facial capture of a first user's performance; obtaining, from the camera/scanner, a second facial capture of a second user's performance; extracting a second set of facial performance data based on the second facial capture; calculating at least one matching metric based on a comparison of the reference set of facial performance data to the second set of facial performance data; and enabling display, on the display, of an indication of the at least one matching metric.
In accordance with some embodiments, a non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by a processor in a system with a camera/scanner and a display, cause the processor to: obtain a reference set of facial performance data representing a first facial capture of a first user's performance; obtain, from the camera/scanner, a second facial capture of a second user's performance; extract a second set of facial performance data based on the second facial capture; calculate at least one matching metric based on a comparison of the reference set of facial performance data to the second set of facial performance data; and enable display, on the display, of an indication of the at least one matching metric.
The embodiments depicted in the figures are only exemplary. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein can be employed without departing from the principles described herein.
The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the present technology. Thus, the disclosed technology is not intended to be limited to the examples described herein and shown, but is to be accorded the scope consistent with the claims.
As discussed above, facial capture can be used for a variety of purposes, including generation of CGI avatars. There is a need, however, for facial capture analysis and training systems that use facial capture to compare a user's facial performance to a reference facial performance and determine how well they match. Such a system may provide feedback to the user (or other people) on the quality of the match, thereby providing a training mechanism to help the user improve their ability to match a reference facial performance.
There are multiple scenarios in which a user may want to accurately reproduce the facial movements of a reference facial performance. For example, a user may be learning a foreign language and may wish to emulate a native speaker's lip and tongue movements in order to produce certain sounds. A user may be suffering from a medical impairment that affects their ability to control their facial movements and may wish to emulate reference facial movements in order to re-train their facial muscles. A user may wish to accurately lip-synch to a reference vocal performance.
Thus, computer-enabled systems and methods that analyze how well a user is reproducing a reference facial performance and provide feedback on the quality of the match can be useful for a variety of purposes.
Reference Performance: Capture and Analysis
Optionally, system 100 includes microphone 124 to obtain audio data associated with the reference performance, such as an audio recording of the reference performance.
While reference facial performance data storage 110 is shown as a database that is separate from computing system 104, it should be appreciated that reference facial performance data storage 110 may instead be implemented using a single storage device that is local or remote to computing system 104.
As discussed in greater detail below, facial capture and analysis system 100 can be used to extract a reference set of facial performance data that can subsequently be used for comparison with a user's facial performance.
At block 202, a reference facial capture of a first user's performance is obtained. In some embodiments, the reference facial capture is obtained using camera/scanner 102, as depicted in
In some embodiments, camera/scanner 102 includes facial sensors and/or facial electrodes, and the reference facial capture includes data from the sensors and/or electrodes.
A person of skill in the art will appreciate that there are many types of apparatus (and corresponding outputs) that can be used to obtain a facial capture.
Optionally, at block 204, an audio recording is obtained. In some embodiments, the audio recording is obtained using microphone 124 depicted in
Optionally, at block 206, text is obtained. In some embodiments, the text is obtained from a keyboard or from computer storage media, for example. In some embodiments, the text is associated with the reference facial performance. For example, the text may be the lyrics to a song the reference performer is singing or a word the reference performer is speaking.
At block 208, the reference facial capture is analyzed. In some embodiments, the reference facial capture is analyzed using one or more of the objects 112, 114, 116, 118, 120, 122 depicted in memory 108 of
In some embodiments, the reference facial capture is analyzed using the landmark detection object. The landmark detection object extracts landmark data representing the locations of facial landmarks such as the locations of the eyes, lips, nose, etc. Such landmark data may subsequently be used as an input to identify facial expressions, facial movements, head motion, and/or head motion tempo as described with respect to the face shapes object, head motion object, and tempo object below, for example.
In some embodiments, the reference facial capture is analyzed using the face shapes object. The face shapes object extracts a shapes representation of the reference facial capture using face shapes analysis techniques, such as FACS (facial action coding system). Shapes analysis identifies predefined face shapes such as “mouth open,” “lip pucker,” “lip funnel,” “eyebrows up,” “cheek squint,” etc. An exemplary set of face shapes is depicted in
Conceptually, the face shapes object decomposes a facial expression into a combination of predefined face shapes, with each shape having a corresponding coefficient that captures how strongly the face shape is represented. The face shapes object is used to extract a shapes representation of the reference facial capture, which may take the form of the following equation:
fO(n)=s1*shape1(n)+s2*shape2(n)+ . . . +sN*shapeN(n);
where shapex(n) is a predefined facial shape (such as one of the face shapes listed in
For example,
fO(n)=0.13*LipsFunnel+0.12*JawOpen.
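By way of a non-limiting illustration only, the following Python sketch shows one way a shapes representation of this form could be held in memory, as a mapping from predefined face-shape names to coefficients; the shape names, threshold, and values are hypothetical and not part of any particular embodiment.

```python
# Hypothetical sketch of a per-frame shapes representation: a mapping from
# predefined face-shape names (FACS-style) to coefficients indicating how
# strongly each shape is present. Names and values are illustrative only.
from typing import Dict

ShapesRepresentation = Dict[str, float]

def significant_shapes(coefficients: ShapesRepresentation,
                       threshold: float = 1e-6) -> ShapesRepresentation:
    """Return the shapes whose coefficients are non-negligible at this frame."""
    return {shape: c for shape, c in coefficients.items() if abs(c) > threshold}

# Corresponds to fO(n) = 0.13*LipsFunnel + 0.12*JawOpen
frame = significant_shapes({"LipsFunnel": 0.13, "JawOpen": 0.12, "BrowsUp": 0.0})
print(frame)  # {'LipsFunnel': 0.13, 'JawOpen': 0.12}
```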
In some embodiments, the face shapes object uses face shapes analysis (e.g., FACS) to determine how a facial expression is changing over time; e.g., to identify facial movements.
In some embodiments, the reference facial capture is analyzed using the emotion object. The emotion object extracts an emotion representation of the reference facial capture. In some embodiments, the emotion algorithm extracts the emotion representation based on the results of the face shapes object, by mapping face shapes to emotions. For example, as depicted in
Happy(n)=0.65*BrowsUp+0.25*JawOpen
The emotion algorithm may map the face shapes representation extracted by the shapes object to emotions using predefined relationships between face shapes and emotions. In some embodiments, the emotion object can determine whether the face shapes in the shapes representation are mapped to more than one emotion; for example, the shapes representation may be associated with a combination of happiness and surprise. Thus, in some embodiments, the emotion object extracts an emotion representation of the reference facial capture in the form of the following equation:
E(n)=e1*emotion1(n)+e2*emotion2(n)+ . . . +eN*emotionN(n);
where emotionX is an emotion such as happy, sad, or scared, for example, such as depicted in
As shown above, in some embodiments, the emotion representation includes coefficients ex for each constituent emotion that are based on how strongly that emotion is represented. For example, in the emotion representation:
E(n)=0.3*Happy(n)+0.0*Sad(n)+0.1*Surprised(n)
the emotion algorithm has determined that, based on the output from the face shapes object, the performer's expression is moderately correlated with being happy (e.g., the shapes representation is moderately well correlated to predefined “happy” face shapes, such as the equation for Happy(n) depicted above), slightly correlated with “surprised” face shapes, and not correlated at all with “sad” face shapes.
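As a non-limiting sketch of the shape-to-emotion mapping described above, the following Python fragment projects a shapes representation onto predefined emotion templates (e.g., Happy(n)=0.65*BrowsUp+0.25*JawOpen); the templates and numbers are illustrative assumptions rather than a definitive mapping.

```python
# Hypothetical sketch: derive emotion coefficients from a shapes representation
# using predefined shape-to-emotion templates. Templates and values are
# illustrative assumptions only.
from typing import Dict

EMOTION_TEMPLATES: Dict[str, Dict[str, float]] = {
    "Happy":     {"BrowsUp": 0.65, "JawOpen": 0.25},
    "Surprised": {"BrowsUp": 0.50, "JawOpen": 0.50},
    "Sad":       {"BrowsDown": 0.70, "LipCornerDepressor": 0.30},
}

def emotion_representation(shapes: Dict[str, float]) -> Dict[str, float]:
    """Project the shapes representation onto each emotion template, yielding
    the coefficients of E(n) = e1*emotion1(n) + e2*emotion2(n) + ..."""
    return {emotion: sum(weight * shapes.get(shape, 0.0)
                         for shape, weight in template.items())
            for emotion, template in EMOTION_TEMPLATES.items()}

print(emotion_representation({"BrowsUp": 0.4, "JawOpen": 0.2}))
# approximately {'Happy': 0.31, 'Surprised': 0.30, 'Sad': 0.0}
```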
In some embodiments, the emotion object can be configured to compensate for differences in how different people facially express emotions, by creating user-specific mappings between face shapes and emotions. For example, some people's “happy” expression may have more “cheeks raised” and less “jaw open” than other people's “happy” expression. Some people may have a “happy” expression that includes a different set of facial shapes entirely. Such differences may make it difficult to extract an emotion representation of a facial capture. To address this challenge, in some examples, the emotion algorithm can be configured to define a specific user's face shape representation as representing a particular emotion, such as “happy.” For example, system 100 can capture a specific user's “happy” facial expression (e.g., from a video or still image of the user), decompose the expression using, e.g., FACS shape analysis, and identify the resulting combination of facial shapes as a predefined “happy” expression. This mapping can then be used by the emotion object to extract the emotion representation.
In some embodiments, the emotion object can extract an emotion representation based on an analysis of facial landmark information (such as the facial landmark data extracted by the landmark detection object), facial meshes, or other data included in the reference facial capture.
In some embodiments, the reference facial capture is analyzed using the head position object. The head position object extracts a head position representation of the reference facial capture. In some embodiments, the head position object determines the head position and head rotation of the reference performer based on facial landmark data (such as the facial landmark data extracted by the landmark detection object). The head position object extracts a head position representation of the reference facial capture such as the following:
hO(n)=xO(n)+yO(n)+zO(n)+rxO(n)+ryO(n)+rzO(n);
where x, y, z capture translational head position information and rx, ry, rz capture the rotational head position information. In some embodiments, the head position object analyzes the reference facial capture to determine how the head position of the performer changes over time; e.g., to identify head motion.
In some embodiments, the head position object determines head motion by tracking the movements of facial landmarks, based on extracted facial landmark data. As depicted in
In some embodiments, the reference facial capture is analyzed using the tempo object. The tempo object extracts a tempo representation of the reference facial capture. The tempo object can determine the tempo of the head movements of the reference performer, based on the performer's head motion (e.g., based on the head position representation extracted by the head position object).
In some embodiments, the tempo object executes a frequency analysis (e.g., a Fourier transform) on the head motion representation to determine a tempo (or combination of tempos) at which the performer is moving their head. The tempo object can generate a tempo representation of the reference facial capture as a combination of one or more tempos as follows, with tx representing a coefficient for each tempo:
t(n)=t1*tempo1(n)+t2*tempo2(n)+ . . . +tN*tempoN(n).
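As a non-limiting illustration of the frequency analysis described above, the following Python sketch applies a real FFT (via NumPy) to a vertical head-position signal to estimate a dominant head-motion tempo; the frame rate and synthetic signal are assumptions made for the example.

```python
# Hypothetical sketch: estimate the dominant head-bob tempo from the vertical
# head-position signal using a real FFT. Frame rate and signal are illustrative.
import numpy as np

def dominant_tempo_bpm(head_y: np.ndarray, fps: float) -> float:
    """Return the dominant head-motion tempo in beats per minute."""
    centered = head_y - head_y.mean()          # remove the static head position
    spectrum = np.abs(np.fft.rfft(centered))   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    spectrum[0] = 0.0                          # ignore any residual DC component
    return 60.0 * float(freqs[int(np.argmax(spectrum))])

# Synthetic example: a 2 Hz head bob captured at 30 frames per second
t = np.arange(0, 10, 1 / 30)
head_y = 0.02 * np.sin(2 * np.pi * 2.0 * t)
print(dominant_tempo_bpm(head_y, fps=30.0))    # approximately 120 BPM
```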
In some embodiments, the tempo object analyzes the head position representation to identify specific head movements, such as head bobbing motions (e.g., vertical arcs) and head shaking motions (e.g., lateral arcs).
In some embodiments, the tempo object extracts the tempo representation based at least in part on a frequency analysis of an audio recording, such as the audio recording obtained in block 204.
In some embodiments, the reference facial capture is analyzed using the viseme object. The viseme object extracts a viseme representation of the reference facial capture. A viseme is analogous to a phoneme but captures mouth shapes associated with sounds rather than the sounds themselves.
In some embodiments, the viseme object can extract a viseme representation of facial data based on a shapes analysis of the facial data; e.g., based on the output of the face shapes object. In some embodiments, the viseme object uses predefined relationships between face shapes, sounds, and visemes to extract a viseme representation.
For example,
f(n)=0.23*LipFunnel+0.25*JawOpen
may correspond to a viseme for the sound “DO.” That is, when a person says “DO,” they typically make the face shapes represented by the above shapes representation.
A viseme representation of the reference facial capture can take the form of the following equation:
viz(n)=v1*viz1(n)+v2*viz2(n)+ . . . +vN*vizN(n);
which expresses a sequence of visemes corresponding to a specific word, phrase, or sound, and vx represents a coefficient associated with each viseme.
In some embodiments, the viseme object can determine a viseme representation of the reference facial capture based at least in part on an audio recording of the reference performance (e.g., the audio recording obtained at block 204), using a pre-determined mapping of sounds to visemes.
In some embodiments, the viseme object can determine a viseme representation of the reference facial capture based at least in part on text (e.g., text obtained at block 206) that is provided to the viseme object, using a predetermined mapping of text to corresponding visemes, or using a predetermined mapping of text to sounds that are then mapped to visemes.
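By way of a non-limiting sketch, the following Python fragment illustrates predefined mappings of the kind described above, from text to a viseme sequence and from visemes to expected face shapes; the viseme labels, word entries, and coefficients are hypothetical.

```python
# Hypothetical sketch: look up the viseme sequence for a word, and the mouth
# shapes expected for each viseme. All labels and values are illustrative.
from typing import Dict, List

TEXT_TO_VISEMES: Dict[str, List[str]] = {
    "do":    ["DD", "OW"],
    "hello": ["HH", "EH", "LL", "OW"],
}

VISEME_TO_SHAPES: Dict[str, Dict[str, float]] = {
    "DD": {"JawOpen": 0.20, "TongueUp": 0.40},
    "OW": {"LipsFunnel": 0.23, "JawOpen": 0.25},
}

def viseme_sequence(word: str) -> List[str]:
    """Return viz(n): the expected sequence of visemes for a spoken word."""
    return TEXT_TO_VISEMES.get(word.lower(), [])

print(viseme_sequence("do"))      # ['DD', 'OW']
print(VISEME_TO_SHAPES["OW"])     # expected face shapes for the 'OW' viseme
```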
In some embodiments, one or more of the objects described above can determine the amplitude, frequency, timing, and/or duration associated with each of the above-described data representations. For example, in some embodiments, the face shapes object determines how long the reference performer maintains a specific expression or face shape. In some embodiments, the head position object determines how deeply or how frequently the reference performer has bobbed their head. In some embodiments, the emotion object determines how happy a reference performer appears, or for how long the reference performer appears happy. Thus, the corresponding representations of the reference facial capture may include data representing amplitude, frequency, timing, and/or duration information.
Optionally, at block 210, one or more of the representations of the reference facial capture are weighted and/or normalized. Normalizing the representations may subsequently enable more accurate comparisons of facial performances between different users. Weighting the representation may enable subsequent comparisons that weight certain aspects of the reference performance more heavily than others; e.g., that weight the importance of facial shapes more heavily than the importance of head motion when evaluating how well two facial performances match each other.
A person skilled in the art of facial capture will recognize that, in some embodiments, system calibration and training may be performed on a user prior to facial tracking, facial capture, or collection of facial data. Thus, in some embodiments, normalization includes system calibration and training prior to obtaining the facial capture. For example, the system can be calibrated and trained by capturing a user(s) performing a facial range of motion (ROM) routine that exercises the facial muscles to their extremities, thus allowing the system to calibrate the maximum extent of facial deformation and normalize facial shapes or movements among users. For example, the system may normalize users' “brows up” from 0-1. In this manner, one user's “brows up” can be compared to another user's “brows up” even though each user might have significantly differing forehead sizes. Such initial calibration and training data may be used to normalize the representations of a user's facial performance.
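As a non-limiting illustration of this normalization step, the following Python sketch scales raw shape measurements into a 0-1 range using the per-user extremes recorded during a range-of-motion calibration; the data structures and values are assumptions made for the example.

```python
# Hypothetical sketch: normalize raw shape measurements to 0-1 using the
# extremes recorded during a range-of-motion (ROM) calibration routine, so
# that, e.g., one user's "BrowsUp" can be compared to another user's.
from typing import Dict

def calibrate(rom_frames: list) -> Dict[str, tuple]:
    """Derive per-shape (min, max) limits from ROM capture frames."""
    limits: Dict[str, tuple] = {}
    for frame in rom_frames:
        for shape, value in frame.items():
            lo, hi = limits.get(shape, (value, value))
            limits[shape] = (min(lo, value), max(hi, value))
    return limits

def normalize(raw: Dict[str, float], limits: Dict[str, tuple]) -> Dict[str, float]:
    """Scale each raw shape value into [0, 1] relative to its calibrated range."""
    out = {}
    for shape, value in raw.items():
        lo, hi = limits.get(shape, (0.0, 1.0))
        out[shape] = 0.0 if hi == lo else min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return out

limits = calibrate([{"BrowsUp": 0.0}, {"BrowsUp": 2.4}])   # ROM extremes for this user
print(normalize({"BrowsUp": 1.2}, limits))                 # {'BrowsUp': 0.5}
```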
At block 212, a reference set of facial performance data is stored. In some embodiments, the reference set of facial performance data includes facial landmark data. In some embodiments, the reference set of facial performance data includes the shapes representation, emotion representation, head position representation, tempo representation and/or viseme representation of the facial capture.
In some embodiments, the reference set of performance data includes normalized and/or weighted versions of one or more of these representations, as described with respect to block 210.
In some embodiments, the reference set of facial performance data includes the facial capture of the reference performance (e.g., a video recording of the reference performance, or a still image of the reference performance, or a scan of the reference performance) and/or the audio capture of the reference performance (e.g., an audio recording of the reference performance).
In some embodiments, the reference set of performance data is stored in reference performance data storage, such as depicted in
In some embodiments, the objects depicted in
In some embodiments, the objects depicted in
The second set of facial performance data can be compared to a reference set of facial performance data retrieved from reference facial performance data storage 310 to generate one or more matching metrics using, e.g., metric object 324 in memory 308. The one or more matching metrics can be displayed on display 326.
While reference facial performance data storage 310 is shown as being a database that is separate from computing system 304, it should be appreciated that reference facial performance data storage 310 may instead be implemented using a single storage device that is local or remote to computing system 304.
As discussed in greater detail below, facial capture and analysis system 300 can be used to compare a user's facial performance to a reference facial performance and display an indication(s) of how well they match, thereby potentially training a user to match a reference performance or providing an objective evaluation of the quality of the match between the user's performance and the reference performance.
At block 402, a user's facial capture is obtained. In some embodiments, the user's facial capture is obtained using a camera or scanner, such as a video camera or laser scanner. In some embodiments, the user's facial capture is a video recording of a user's facial performance obtained using a video camera. In some embodiments, the user's facial capture is a digital scan of the user's facial performance obtained using a laser scanner, mesh scanner, marker-based scanner, or markerless scanner. In some embodiments, the facial capture is obtained using facial sensors and/or electrodes.
At block 404, the user's facial capture is analyzed. In some embodiments, the user's facial capture is analyzed as described earlier with respect to block 208 of process 200. For example, the user's facial capture can be analyzed using a landmark detection object, a face shapes object, an emotion object, a head position object, a tempo object, and/or a viseme object to extract, e.g., landmark data, a shapes representation, an emotion representation, a head position representation, a tempo representation, and/or a viseme representation of the user's facial capture.
Optionally, at block 406, one or more of the representations of the user's facial capture can be weighted or normalized, as described earlier with respect to block 210 of process 200.
At block 408, a reference set of facial performance data is obtained, such as the reference set of facial performance data described earlier with respect to
At block 410, one or more matching metrics are calculated, using, e.g., metric object 324. In some embodiments, calculating the one or more matching metrics includes correlating a reference shapes representation from the reference set of facial performance data to the user's shapes representation extracted at block 404. In some embodiments, correlating the user's shapes representation fU(n) to the reference shapes representation fR(n) includes calculating fr(n)=avgT{fU(n)−fR(n)}, where T is the time duration over which the two representations are correlated and fr(n) is a shapes metric that captures how well the two representations are correlated. That is, in some embodiments, calculating the one or more matching metrics includes computing the average difference between the user's shapes representation and the reference shapes representation over a given time duration T and outputting the result of the calculation as fr(n), where fr(n) represents a sequence of numeric coefficient differences, one for each constituent face shape.
More generally, there are many ways to correlate a user's shapes representation with a reference shapes representation. For example, if
fU(n)=0.23*JawOpen+0.25*LipsFunnel; and
fR(n)=0.1*JawOpen+0.6*LipsFunnel+0.5*EyeSquint,
fr(n) can be calculated as the sum of the differences between the corresponding coefficients: (0.23−0.1)+(0.25−0.6)+(0−0.5). In some examples, the absolute value of the differences between coefficients may be calculated when determining fr(n), as depicted below.
Err(n)=k1*Err{shape1(n)}+k2*Err{shape2(n)}+ . . .
where kx are weighting coefficients and Err{shapex(n)}=abs(fR(n)−fU(n)) is the absolute difference between the corresponding reference and user shape coefficients
In this approach, each element of the shapes representation for the user is compared to the corresponding element of the shapes representation for the reference performer to determine how well the two representations match; e.g., to determine the error (Err) between the two representations, which can be used to calculate a matching metric (or can be used as a matching metric itself). As mentioned below, a similar error-based approach can be used for comparing the other representations and calculating additional matching metrics.
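The following Python sketch is a non-limiting illustration of this element-wise, weighted error comparison; the shape names, coefficients, and weights are hypothetical.

```python
# Hypothetical sketch of the error-style comparison described above:
# Err(n) = k1*|fR,1(n) - fU,1(n)| + k2*|fR,2(n) - fU,2(n)| + ...
# Shape names, weights, and coefficients are illustrative assumptions.
from typing import Dict

def shapes_error(user: Dict[str, float],
                 reference: Dict[str, float],
                 weights: Dict[str, float] = None) -> float:
    """Weighted sum of absolute per-shape coefficient differences at one frame."""
    weights = weights or {}
    shapes = set(user) | set(reference)
    return sum(weights.get(s, 1.0) * abs(reference.get(s, 0.0) - user.get(s, 0.0))
               for s in shapes)

fU = {"JawOpen": 0.23, "LipsFunnel": 0.25}
fR = {"JawOpen": 0.10, "LipsFunnel": 0.60, "EyeSquint": 0.50}
print(shapes_error(fU, fR))                              # 0.13 + 0.35 + 0.50 = 0.98
print(shapes_error(fU, fR, weights={"EyeSquint": 0.2}))  # de-emphasize eye shapes
```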
Similarly, in some embodiments, calculating the one or more matching metrics includes correlating the reference emotion representation to the user's emotion representation. In some embodiments, correlating the reference emotion representation eR(n) to the user's emotion representation eU(n) includes calculating er(n)=avgT{eU(n)−eR(n)}, where T is the time duration over which the difference is averaged and er(n) is an emotion metric that captures how well the two representations are correlated.
In some embodiments, calculating the one or more matching metrics includes correlating the reference head position representation with the user's head position representation in a manner similar to that described above.
In some embodiments, calculating the one or more matching metrics includes correlating the reference tempo representation to the user's tempo representation to generate a tempo metric tr(n), in a manner similar to that described above.
In some embodiments, calculating the one or more matching metrics includes correlating the reference viseme representation to the user's viseme representation to generate a viseme metric vr(n).
In some embodiments, calculating the one or more matching metrics includes correlating the user's representation(s) to the reference representation(s) either instantaneously (i.e., correlating the two representations at a specific point in time) or as an average over a time period. For example, the one or more matching metrics can reflect how well the user's face shapes match the reference face shapes (based on the correlation of their respective shapes representations) at a single point in time during the user's performance or reference performance, or how well the user's and reference performer's face shapes matched, on average, over a longer time duration, such as over the full duration of the reference facial performance or over a running ten-second average, for example.
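As a non-limiting sketch of the running-average option, the following Python fragment keeps a sliding window of per-frame metric values; the window length, frame rate, and values are assumptions made for the example.

```python
# Hypothetical sketch: report a matching metric as a running average over a
# sliding window (e.g., roughly ten seconds at 30 frames per second) instead
# of, or in addition to, the instantaneous per-frame value.
from collections import deque

class RunningAverage:
    """Maintain a running average of per-frame matching metric values."""
    def __init__(self, window_frames: int):
        self.values = deque(maxlen=window_frames)

    def update(self, frame_metric: float) -> float:
        self.values.append(frame_metric)
        return sum(self.values) / len(self.values)

avg = RunningAverage(window_frames=300)   # about 10 s at 30 frames per second
for m in (0.9, 0.8, 0.7):
    print(avg.update(m))                  # 0.9, 0.85, then approximately 0.8
```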
As discussed above, in some embodiments, a separate matching metric can be calculated based on the correlation of each of the types of representations. For example, a shapes metric fr(n) can be calculated based on the correlation between the user's shape representation and the reference shapes representation; an emotion metric er(n) can be calculated based on the correlation between the user's emotion representation and the reference emotion representation, a viseme metric vr(n) can be calculated based on the correlation between the user's viseme representation and the reference viseme representation, and so on.
In some embodiments, calculating the one or more metrics includes calculating an overall matching metric based on the correlations between two or more types of representations. For example, the overall matching metric can be calculated based on the correlation between the user's shapes representation and the reference shapes representation, and on the correlation between the user's emotion representation and the reference emotion representation, and so on.
In some embodiments, calculating the one or more matching metrics includes calculating a final performance metric P(n) as a weighted sum of each of the matching metrics over the duration of the reference facial performance or user facial performance, such as:
P(n)=Pf1*fr(n)+Pf2*er(n)+Pf3*tr(n)+ . . . +PfN*vizr(n);
where Pf is a weighting factor for each metric.
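By way of a non-limiting illustration, the final performance metric can be computed as a weighted sum along the lines of the following Python sketch; the metric names, values, and weighting factors are hypothetical.

```python
# Hypothetical sketch: combine per-representation matching metrics into a
# final performance score, mirroring P(n) = Pf1*fr(n) + Pf2*er(n) + ...
# The metric values and weights below are illustrative assumptions.
from typing import Dict

def performance_score(metrics: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of the individual matching metrics."""
    return sum(weights.get(name, 0.0) * value for name, value in metrics.items())

metrics = {"shapes": 0.82, "emotion": 0.64, "tempo": 0.91, "viseme": 0.70}
weights = {"shapes": 0.4, "emotion": 0.2, "tempo": 0.1, "viseme": 0.3}
print(performance_score(metrics, weights))   # 0.328 + 0.128 + 0.091 + 0.21 = 0.757
```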
In some embodiments, calculating the one or more matching metrics includes weighting certain metrics more heavily than others, or weighting certain portions of a correlation more heavily than other portions of the same correlation. For example, calculating a shapes metric based on the correlation of the user's shapes representation to the reference shapes representation may include weighting the result of the correlations between certain facial shapes more heavily than the result of the correlations between other facial shapes. Depending on the purpose for which the facial capture analysis and training system is used, certain facial shapes or facial movements may be more important for evaluating a match than other facial shapes or movements. For example, if a user is using system 300 for the purpose of learning to speak a foreign language or lip-synching, matching the lip and tongue movements of the reference performance may be more important than matching the eye movements, and the one or more matching metrics may be calculated in a manner that reflects their relative importance (e.g., by weighting the correlation of the lip and tongue movements more heavily than the correlation of the eye movements). On the other hand, if the person is using the system to reproduce a theatrical performance, matching the eye movements or other facial movements may be of similar importance to matching the lip and tongue movements.
Similarly, in some examples, calculating an overall matching metric based on the correlations of multiple types of representations may include weighting the correlations differently depending on how important each correlation is to the matching metric(s). For example, in some scenarios, the correlation of the shapes representations may be more important than the correlation of the head position representations in determining how well a user's performance matches the reference performance, and therefore the correlation of the shapes representation may be weighted more heavily than the correlation of the head position representations when calculating the overall matching metric.
In some embodiments, calculating the one or more matching metrics includes calculating a time difference between one or more of the user's representations to the analogous reference representations to determine whether the user's facial performance leads or lags the reference facial performance.
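As a non-limiting illustration of such a lead/lag determination, the following Python sketch estimates the time offset between a user time series (e.g., a shape coefficient tracked over time) and the analogous reference series via cross-correlation using NumPy; cross-correlation is one possible technique, and the signals and frame rate are assumptions made for the example.

```python
# Hypothetical sketch: estimate whether the user's performance leads or lags
# the reference by finding the lag that best aligns two time series.
import numpy as np

def lead_lag_seconds(user_signal: np.ndarray, ref_signal: np.ndarray, fps: float) -> float:
    """Positive result: user lags the reference; negative: user leads."""
    u = user_signal - user_signal.mean()
    r = ref_signal - ref_signal.mean()
    corr = np.correlate(u, r, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(r) - 1)
    return lag_frames / fps

t = np.arange(0, 5, 1 / 30)
reference = np.sin(2 * np.pi * 1.0 * t)
user = np.sin(2 * np.pi * 1.0 * (t - 0.2))     # user trails the reference by 0.2 s
print(lead_lag_seconds(user, reference, 30.0)) # approximately 0.2
```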
A person of skill in the art will appreciate that there are many ways to calculate one or more matching metrics based on the correlation of one or more representations of the reference facial capture and user's facial capture.
Returning to
In some embodiments, the indication(s) of the value of the matching metric(s) is a number or score, a word (e.g., “Good,” “Poor,” etc.), or some other alphanumeric indication of the current or cumulative value of the one or more matching metric(s).
User interface 1502 includes graphical element 1510 that provides an indication of the current value of a matching metric; e.g., an indication of how well the user's facial performance is matching the reference facial performance at a given point in time. In exemplary user interface 1502, graphical element 1510 is a needle that swings between reference avatar 1506 and user avatar 1508 depending on the value of the matching metric.
User interface 1502 may be used for training a user to match a reference facial performance, for example, or for allowing an audience to view how well a user's facial performance is matching a reference facial performance (such as in a lip-synching competition, for example).
In some embodiments, a facial capture analysis and training system can provide an indication of how well a user's facial performance is matching a reference facial performance in real-time, while the user is performing. For example, in user interface 1502, user avatar 1508 and graphical element 1510 can be generated and displayed in real-time, while the user is performing.
In some embodiments, a facial capture analysis and training system, such as system 300, can provide the user or an audience with real-time feedback on how well the user is matching the reference performance. Such immediate feedback can be useful for, e.g., a user who is learning a foreign language and is trying to reproduce a particular sound or word. In this case, system 300 and user interface 1502 can provide feedback to the user on whether they are getting closer to the reference facial movement or mouth shape associated with the sound or word, thereby helping to train the user. In some cases, such feedback can help a user iteratively improve their facial performance by providing quantitative feedback on each attempt, potentially in real time.
In the user interface depicted in
In some embodiments, a facial capture analysis and training system (e.g., system 300) can play back an audio recording associated with the reference facial performance while displaying user interface 1502, thereby allowing a user or an audience to hear the audible component of the reference performance while the user is attempting to replicate the reference performance.
While the discussion above focuses on comparing a dynamic reference facial performance (e.g., a reference performer who is singing or speaking) to a dynamic user's facial performance, in some examples, the reference facial capture is a still image of a person's face, without any facial movements. In this case, the facial capture analysis and training system can provide feedback to a user on how well the user is matching the facial expression of the reference image; i.e., whether they are getting closer to matching the image as they perform facial movements. This variation may be useful for training stroke victims to activate certain facial muscles by attempting to emulate a reference facial expression, for example, or for helping users to learn a foreign language by attempting to make a sound by emulating a specific mouth shape.
While the discussion above focuses on comparing facial captures of two different users, in some cases, the reference performer and the user may be the same person; that is, a user may try to reproduce their own reference facial performance.
Turning now to
In computing system 1600, the main system 1602 may include a motherboard 1604 with a bus that connects an input/output (I/O) section 1606, one or more microprocessors 1608, and a memory section 1610, which may have a flash memory card 1612 related to it. Memory section 1610 may contain computer-executable instructions and/or data for carrying out process 200 and/or process 400. The I/O section 1606 may be connected to display 1624 (e.g., to display user interface 1502), a keyboard 1614 (e.g., to provide text to the viseme object), a camera/scanner 1626 (e.g., to obtain a facial capture), a microphone 1628 (e.g., to obtain an audio recording), a speaker 1630 (e.g., to play back the audio recording), a disk storage unit 1616, and a media drive unit 1618. The media drive unit 1618 can read/write a non-transitory computer-readable storage medium 1620, which can contain programs 1622 and/or data used to implement process 200 and/or process 400.
Additionally, a non-transitory computer-readable storage medium can be used to store (e.g., tangibly embody) one or more computer programs for performing any one of the above-described processes by means of a computer. The computer program may be written, for example, in a general-purpose programming language (e.g., Pascal, C, C++, Java, or the like) or some specialized application-specific language.
Various exemplary embodiments are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the disclosed technology. Various changes can be made and equivalents can be substituted without departing from the true spirit and scope of the various embodiments. In addition, many modifications can be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the various embodiments. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which can be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the various embodiments.
This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 62/286,219, titled “Facial Capture Analysis and Training System,” filed Jan. 22, 2016, the content of which is incorporated herein by reference in its entirety for all purposes.