Online data communications are quite prevalent and pervasive in modern society, and are becoming more so all the time. Moreover, developments in software, communication protocols, and peripheral devices (e.g., video cameras), along with developments in other computing disciplines, have collectively enabled and facilitated the inclusion of multimedia experiences as part of such communications. Indeed, the multimedia nature and aspects of a given communication session are often the focus and even essence of such communications. These multimedia experiences take forms such as audio chats, video chats (that are usually also audio chats), online meetings (e.g., web meetings), and the like.
Using the context of online meetings as an illustrative example, it is often the case that one of the participants is the designated presenter, and often this designated presenter opts to include some visual materials as part of the offered presentation. Such visual materials may take the form of or at least include visual aids such as shared desktops, multiple-slide presentations, and the like. In some instances, from the perspective of another attendee at the online meeting, only such visual materials are presented on the display of the online meeting, while the presenter participates only as an audio voiceover. In other instances, the presenter may be shown in one region of the display while the visual materials are shown in another. And other similar examples exist as well.
Conventional videoconferencing techniques typically employ a camera mounted at one location and directed at a user. The camera acquires an image of the user and the background of the user that is then rendered on the video display of another user. The rendered image typically depicts the user, miscellaneous objects, and background that are within the field-of-view of the acquiring camera. For example, the camera may be mounted on the top edge of a video display within a conference room with the user positioned to view the video display. The camera field-of-view may encompass the user and, in addition, a conference table, chairs, and artwork on the wall behind the user (i.e., anything else within the field-of-view). Typically, the image of the entire field-of-view is transmitted to the video display of a second user. Thus, much of the video display of the second user is filled with irrelevant, distracting, unappealing, or otherwise undesired information. Such information may diminish the efficiency, efficacy, or simply the esthetic of the videoconference. This reduces the quality of the user experience.
Improvements over the above-described options are described herein. Among other capabilities and features, this technology extracts what is known as a “persona,” which is the image of a person contained within a video feed from a video camera that is capturing video of the person. The extracted persona, which in some examples appears as a depiction of the person from the torso up (i.e., upper torso, shoulders, arms, hands, neck, and head), and in other examples may be a depiction of the entire person from head to foot, is then visually combined by this technology with various other video content. In some embodiments, one person may have the role of a presenter, or multiple people may participate in a panel type discussion, a meeting, or even a simple chat session, where each person may be at a separate location. In some embodiments the persona(s) may be combined with content such as a multiple-slide presentation, such that the presenter appears to the attendees at the online meeting to be superimposed over the content, thus personalizing and otherwise enhancing the attendees' experiences.
As mentioned, this persona extraction is carried out with respect to video data that is being received from a camera that is capturing video of a scene in which the presenter is positioned. The persona-extraction technology substantially continuously (e.g., with respect to each frame) identifies which pixels represent the presenter and which pixels do not.
One embodiment takes the form of a method that includes obtaining at least one frame of pixel data; processing the at least one frame of pixel data to generate a hair-identification probability map; and generating a persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map.
In at least one embodiment, processing the at least one frame of pixel data to generate a hair-identification probability map includes identifying a plurality of pixel columns that cross an identified head contour; and, for each pixel column in the plurality of pixel columns: performing a color-based segmentation of the pixels in the pixel column into a foreground segment, a hair segment, and a background segment; and assigning the pixels in the hair segment an increased foreground-probability value in the hair-identification probability map.
In at least one embodiment, the method also includes converting the head contour into a multi-segment polygon that approximates the head contour, the multi-segment polygon being formed of multiple head-contour segments, and identifying the plurality of pixel columns that cross the identified head contour includes identifying pixel columns that cross one of the head-contour segments.
In at least one embodiment, performing a color-based segmentation includes performing a color-based segmentation using a clustering algorithm. In at least one such embodiment, the clustering algorithm is a k-means algorithm with k=3.
In at least one embodiment, performing the color-based segmentation of the pixels in a given pixel column into the foreground segment, the hair segment, and the background segment of the given pixel column includes identifying an average foreground-pixel color, an average hair-pixel color, and an average background-pixel color for the given pixel column; and identifying the foreground segment, the hair segment, and the background segment of the given pixel column using a clustering algorithm to cluster the pixels in the given pixel column around the identified average foreground-pixel color, the identified average hair-pixel color, and the identified average background-pixel color for the given pixel column, respectively.
In at least one embodiment, identifying the average foreground-pixel color for the given pixel column includes identifying the average foreground-pixel color for the given pixel column based on a first set of pixels at an innermost end of the given pixel column; identifying the average hair-pixel color for the given pixel column includes identifying the average hair-pixel color for the given pixel column based on a second set of pixels that includes a point where the given pixel column crosses the identified head contour; and identifying the average background-pixel color for the given pixel column includes identifying the average background-pixel color for the given pixel column based on a third set of pixels at an outermost end of the given pixel column.
In at least one embodiment, the method also includes, for each pixel column in the plurality of pixel columns, assigning the pixels in the foreground and background segments an equal probability of being in the foreground and being in the background in the hair-identification probability map.
In at least one embodiment, assigning the pixels in the hair segment an increased foreground-probability value in the hair-identification probability map includes assigning a first value to the pixels in the hair segment in the hair-identification probability map; and assigning a second value to the pixels in the foreground and background segments in the hair-identification probability map, wherein the first value corresponds to a higher probability of being a foreground pixel than does the second value.
In at least one embodiment, the method also includes processing the at least one frame of pixel data to generate at least one additional probability map, and generating the persona image by extracting pixels from the at least one frame of pixel data is further based on the generated at least one additional probability map.
In at least one embodiment, obtaining the at least one frame of pixel data includes obtaining the at least one frame of pixel data and corresponding image depth data; and processing the at least one frame of pixel data to generate the at least one additional probability map includes processing the at least one frame of pixel data and the corresponding image depth data to generate at least one of the at least one additional probability map.
In at least one embodiment, the method also includes combining the hair-identification probability map and the at least one additional probability map to obtain an aggregate persona probability map, and generating the persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map and at least in part on the generated at least one additional probability map includes generating the persona image by extracting pixels from the at least one frame of pixel data based on the aggregate persona probability map.
One embodiment takes the form of an apparatus that includes a hair-identification module that is configured to generate a hair-identification probability map based on at least one frame of pixel data at least in part by: identifying a plurality of pixel columns that cross an identified head contour; and for each pixel column in the plurality of pixel columns, performing a color-based segmentation of the pixels in the pixel column into a foreground segment, a hair segment, and a background segment; and assigning the pixels in the hair segment an increased foreground-probability value in the hair-identification probability map. The apparatus also includes a persona extraction module configured to generate a persona image by extracting pixels from at least one frame of pixel data based at least in part on the generated hair-identification probability map.
In at least one embodiment, the apparatus also includes a foreground-background module configured to generate a foreground-background map based on image depth data corresponding to the at least one frame of pixel data, and the persona extraction module is configured to generate the persona image by extracting pixels from the at least one frame of pixel data based also on the generated foreground-background map.
In at least one embodiment, the apparatus also includes a plurality of additional persona identification modules configured to generate a corresponding plurality of additional persona probability maps based on the at least one frame of pixel data; and a combiner module configured to generate an aggregate persona probability map based on the hair-identification probability map and the plurality of additional persona probability maps. In at least one such embodiment, the persona extraction module being configured to generate the persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map includes the persona extraction module being configured to generate the persona image by extracting pixels from the at least one frame of pixel data based on the aggregate persona probability map. In at least one such embodiment, the apparatus also includes a foreground-background module configured to generate a foreground-background map based on image depth data corresponding to the at least one frame of pixel data, and one or more of the additional persona identification modules are configured to generate their respective corresponding additional persona probability maps based further on the generated foreground-background map.
In some embodiments, a method includes obtaining at least one frame of pixel data and corresponding image depth data; processing the at least one frame of pixel data and the image depth data with a plurality of persona identification modules to generate a corresponding plurality of persona probability maps; combining the plurality of persona probability maps to obtain an aggregate persona probability map; and generating a persona image by extracting pixels from the at least one frame of pixel data based on the aggregate persona probability map. The method may include scenarios wherein the at least one frame of pixel data comprises two frames of stereo pixel data and the image depth map is obtained from disparity data generated by a stereo disparity module. The method may also include processing the at least one frame of pixel data and the image depth data by generating a foreground-background map from the disparity data by designating pixels having a disparity value above a threshold as foreground pixels.
In further embodiments, the method may include scenarios where the disparity data comprises a plurality of disparity values for each pixel, each of the plurality of disparity values having an associated confidence value, and wherein processing the at least one frame of pixel data and the image depth data comprises generating a foreground-background map from the disparity data by identifying pixels having a cumulative confidence measure above a threshold as foreground pixels. The image depth map may be converted to a foreground-background map using a thresholding operation. The image depth data may be simple depth values, or may be in the form of a cost volume, or a cost volume that is filtered such as by using a semi-global matching module.
The method may also include scenarios where the foreground-background map is distance-transformed to obtain a persona probability map. In yet other embodiments, the method may include processing the foreground-background map to obtain a persona head contour, and pixels of the at least one frame of pixel data in a band around the persona head contour are selectively categorized as persona pixels based on a color segmentation.
The aggregate persona probability map may be formed by combining the plurality of persona probability maps using predetermined weights. And the predetermined weights may be preset or may be selected according to an image capture environment, or according to user feedback.
The method may include extracting pixels using a graph-cut-based persona extraction module, an active-shape-based persona shape recognition module, or an active-contour-based persona extraction module.
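The overall flow outlined in the preceding paragraphs can be illustrated with a minimal sketch. The sketch below assumes NumPy arrays for the frame and depth data, placeholder persona-ID modules passed in as callables, and illustrative weights and an illustrative extractor; it is a rough outline under those assumptions rather than the claimed implementation.

```python
import numpy as np

# Minimal sketch of the flow described above, assuming each persona-ID module is a
# callable that returns a per-pixel probability map. Module behavior, weights, and
# the extractor are illustrative placeholders, not fixed elements of this disclosure.
def extract_persona(frame, depth, id_modules, weights, extractor):
    prob_maps = [module(frame, depth) for module in id_modules]   # one map per module
    aggregate = sum(w * p for w, p in zip(weights, prob_maps))    # weighted, pixel-wise combination
    alpha_mask = extractor(frame, aggregate)                      # e.g., graph cut, active contour, or ASM
    persona = frame * alpha_mask[..., np.newaxis]                 # keep only the persona pixels
    return persona, alpha_mask
```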
Related technology is also described in the following patent documents, each of which is incorporated in its respective entirety into this disclosure: (i) U.S. patent application Ser. No. 13/083,470, entitled “Systems and Methods for Accurate User Foreground Video Extraction,” filed Apr. 8, 2011 and issued Aug. 26, 2014 as U.S. Pat. No. 8,818,028 and (ii) U.S. patent application Ser. No. 13/076,264, entitled “Systems and Methods for Embedding a Foreground Video into a Background Feed based on a Control Input,” filed Mar. 30, 2011 and published Oct. 6, 2011 as U.S. Patent Application Pub. No. 2011/0242277.
The above overview is provided by way of example and not limitation, as those having ordinary skill in the relevant art may well implement the disclosed systems and methods using one or more equivalent components, structures, devices, and the like, and may combine and/or distribute certain functions in equivalent though different ways, without departing from the scope and spirit of this disclosure.
A more detailed understanding may be had from the following description, which is presented by way of example in conjunction with the following drawings, in which like reference numerals are used across the drawings in connection with like elements.
The computing device 104 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, or the like. In the embodiment shown in
The preceding paragraph is an example of the fact that, in the present disclosure, various elements of one or more of the described embodiments are referred to as modules that carry out (i.e., perform, execute, and the like) various functions described herein. As the term “module” is used herein, each described module includes hardware (e.g., one or more processors, microprocessors, microcontrollers, microchips, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), memory devices, and/or one or more of any other type or types of devices and/or components deemed suitable by those of skill in the relevant art in a given context and/or for a given implementation). Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the particular module, where those instructions could take the form of or at least include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, stored in any non-transitory computer-readable medium deemed suitable by those of skill in the relevant art.
Again with respect to
In some embodiments, video module 112 may be configured to receive stereo images from the camera and responsively generate image depth data 116. The image depth data may be generated by forming a disparity map, where each pixel location is associated with a disparity value indicative of the distance of that pixel from the camera. In some embodiments, the depth data is a single depth value corresponding to each pixel location, while in other embodiments, the depth data is in the form of a cost volume (e.g., a disparity data volume), where at each pixel location, each possible depth is assigned a value representing a measure of confidence that the pixel corresponds to the respective depth. The depth data 116, either as a depth map or a cost volume, is provided to the foreground-background module 114.
The foreground-background module 114 is configured to generate a foreground-background map from the depth data. In some embodiments, the foreground-background module 114 separates the pixel locations into a foreground and a background by designating each pixel as belonging to either a “foreground” image or a “background” image. In some embodiments, the foreground-background module 114 includes a third value of “uncertain” to indicate uncertainty regarding a pixel’s foreground or background status. In one embodiment, the foreground-background module 114 operates on a depth map (e.g., where each pixel location has a single depth value) by designating every pixel location having a depth less than a threshold as a foreground pixel. The particular threshold may be predetermined according to the camera location and environment, such as whether the camera is built into a laptop or tablet computer.
In a further embodiment, the foreground-background module 114 operates on a cost volume (e.g., where each pixel location has a set of cost values, one for each possible depth value) to determine the foreground-background map. In one such embodiment, the costs may be accumulated over one or more ranges of possible depth values to determine whether the pixel location is foreground or background. For example, the costs for depth values between 0 and 1 meter may be accumulated, the costs for depth values greater than 1 meter may be accumulated, and the pixel may be designated as foreground or background depending on which accumulated cost is lesser. Alternatively, the costs for depth values of a first range (e.g., between 0 and 1 meter) may be accumulated and compared to a threshold to determine whether the pixel is to be designated a foreground pixel. In yet a further embodiment, the foreground-background module 114 may filter depth data in the form of a cost volume by performing a semi-global matching operation, wherein possible paths through the cost volume are evaluated along a plurality of directions. The resulting filtered cost volume may then be evaluated by selecting the most likely depth value, followed by a thresholding operation, or by a range-accumulation operation as described above.
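The two foreground-background strategies just described can be sketched as follows. The sketch assumes a depth map in meters and a cost volume indexed as (depth bin, row, column); the 1-meter split and the function and variable names are illustrative assumptions.

```python
import numpy as np

def fgbg_from_depth(depth_m, threshold_m=1.0):
    """Depth-map case: pixels nearer than the threshold are foreground (1), others background (0)."""
    return (depth_m < threshold_m).astype(np.uint8)

def fgbg_from_cost_volume(cost, depth_bins_m, split_m=1.0):
    """Cost-volume case: accumulate costs over near and far depth ranges and pick the cheaper one."""
    near = depth_bins_m < split_m                     # depth bins between 0 and split_m
    near_cost = cost[near].sum(axis=0)                # accumulated cost of the "foreground" depth range
    far_cost = cost[~near].sum(axis=0)                # accumulated cost of the "background" depth range
    return (near_cost < far_cost).astype(np.uint8)    # lesser accumulated cost wins
```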
The persona ID modules 124 operate on the depth data as shown by arrow 116, on the foreground-background map as shown by connection 118, on the image pixel data as shown by connection 120, or on both the foreground-background map and the image pixel data. Each of the persona ID modules 124 generates a probability map indicating a likelihood that the respective pixels are part of a foreground image as compared to a background image. The persona ID modules, as described more fully below, are configured to operate on certain characteristics of the image and/or depth data to identify characteristics of the data indicative of a person's presence in the scene 102. The respective probability maps are then combined by combiner module 122 to provide an aggregate probability map. In some embodiments, the individual probability maps are in the form of a log-likelihood ratio:

L(x) = log10( P(x = f) / P(x = b) ),

which represents the logarithm of the ratio of the probability that the pixel "x" is a foreground ("f") pixel versus a background ("b") pixel. Thus, a value of 1 indicates that the pixel is ten times more likely to be in the foreground than in the background, a value of −1 indicates that the pixel is ten times more likely to be in the background than in the foreground, and a value of 0 indicates that the pixel is equally likely to be in the foreground or the background (that is, a likelihood ratio of 1 has a log-likelihood of 0). In such an embodiment, the combiner module 122 may combine the probability maps by forming a weighted sum of the plurality of maps on a pixel-by-pixel basis. Note that the probability maps need not be rigorously derived from probability theory, but may also be based on heuristic algorithms that provide approximations of relative likelihoods of a pixel being either a foreground or a background pixel.
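As an illustration of the weighted, pixel-wise combination performed by combiner module 122, the following sketch assumes that each module's map is already expressed as a base-10 log-likelihood ratio; the weights shown in the usage comment are placeholders.

```python
import numpy as np

def combine_maps(log_likelihood_maps, weights):
    """Weighted pixel-wise sum of per-module log-likelihood ratio maps."""
    aggregate = np.zeros_like(log_likelihood_maps[0], dtype=np.float32)
    for weight, llr_map in zip(weights, log_likelihood_maps):
        aggregate += weight * llr_map
    return aggregate

# Example usage (illustrative weights): positive aggregate values favor foreground,
# negative values favor background.
# aggregate = combine_maps([hair_map, depth_map, histogram_map], [0.5, 1.0, 0.8])
```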
In one embodiment of a persona extraction module,
In a further embodiment of persona identification module 124, an algorithm is utilized to better identify pixels associated with a person's hair. This persona identification module operates on a combination of the foreground-background map and the image pixel data. In particular, depth information is often fairly reliable with respect to a person's face: facial features provide good texture for computing disparity data in embodiments using stereo depth data, and they reflect infrared illumination well for time-of-flight depth technologies. However, human hair tends to scatter IR light and is relatively featureless with respect to disparity information. Thus, to improve foreground and background separation in a hair region, the following processing may be performed in accordance with a hair detection algorithm: identify head contour points; determine a plurality of image pixel columns; segment pixels according to pixel value centroids; and assign probability measures according to the determined segments.
Initially, as shown in
The color information of each column may then be evaluated to determine boundaries between facial colors, hair colors, and background colors. Such segmentation may be performed using a k-means algorithm. The k-means algorithm operates by declaring a number of desired centroids, which in some embodiments is k=3. The algorithm then divides the data of each column into three segments as shown in
In some embodiments, additional verification steps may be performed, such as ensuring three regions exist. Further, a verification step may be performed to ensure the resulting hair color is not too close to background color, which might indicate that no hair is in fact present in the image column.
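The column-wise segmentation described above can be sketched as follows. Seeding the three cluster centroids from the innermost pixels (face), the pixels around the contour crossing (hair), and the outermost pixels (background) follows the approach described earlier; the column orientation, seed sizes, iteration count, and the probability value assigned to hair pixels are illustrative assumptions.

```python
import numpy as np

def segment_hair_column(column_rgb, crossing_idx, seed=5, iters=10):
    """k-means (k=3) over one pixel column running from inside the head (index 0) outward
    across the head contour; returns one label per pixel (0=face, 1=hair, 2=background).
    Assumes the contour crossing is not at the very ends of the column."""
    pixels = column_rgb.astype(np.float32)  # shape (N, 3)
    centroids = np.stack([
        pixels[:seed].mean(axis=0),                                                  # avg face color (innermost end)
        pixels[crossing_idx - seed // 2: crossing_idx + seed // 2 + 1].mean(axis=0),  # avg hair color (around contour crossing)
        pixels[-seed:].mean(axis=0),                                                  # avg background color (outermost end)
    ])
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(3):
            if np.any(labels == k):
                centroids[k] = pixels[labels == k].mean(axis=0)
    return labels

def hair_probability_for_column(labels, hair_value=1.0, neutral_value=0.0):
    """Assign an increased foreground value to hair pixels; face and background pixels
    receive a neutral (equal-probability) value, as described above."""
    return np.where(labels == 1, hair_value, neutral_value)
```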
In a further embodiment, a persona ID module 124 may operate on image pixel data only. The operation of one such persona ID module is depicted in
In a further embodiment, another persona ID module 124 may operate on image pixel data only by using color histograms as shown in
In these histogram-based embodiments of the persona ID module 124, each pixel of the image may be evaluated in terms of the occurrence of that color in the foreground histogram versus the occurrence of that color in the background histogram, and a respective ratio is formed. The map of the foreground/background histogram ratios thus forms a persona probability map. Note that normalized color histograms (histograms that sum to one) provide direct estimates of the probability of observing the given color in the foreground (or background, as the case may be). In some embodiments, an epsilon value may be added to each histogram value to prevent a divide-by-zero error.
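A minimal sketch of the histogram-ratio map described above, assuming 8-bit RGB pixels quantized into coarse color bins and pre-computed, normalized foreground and background histograms; the bin count and epsilon value are illustrative assumptions.

```python
import numpy as np

def histogram_ratio_map(frame_rgb, fg_hist, bg_hist, bins=16, eps=1e-6):
    """Per-pixel log ratio of normalized foreground vs. background color-histogram values.
    fg_hist and bg_hist are (bins, bins, bins) arrays that each sum to one."""
    quantized = (frame_rgb // (256 // bins)).astype(np.int32)   # coarse color bin per channel
    idx = tuple(quantized[..., c] for c in range(3))            # index into the 3-D histograms
    fg_p = fg_hist[idx]                                         # estimate of P(color | foreground)
    bg_p = bg_hist[idx]                                         # estimate of P(color | background)
    return np.log10((fg_p + eps) / (bg_p + eps))                # persona probability map (log-likelihood form)
```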
In a further embodiment of persona identification module 124, depth data in the form of a cost volume may be converted directly to a probability map. In this embodiment, the relative foreground and background probabilities may be determined from the cost volume, and a likelihood ratio may be generated therefrom. It will be recognized by those of skill in the art that there exist a number of equivalent formulas that may be used to calculate the desired quantities. For example, when working with log likelihoods, the log of a ratio is the log of the numerator minus the log of the denominator, such that log(fg_score+ε)−log(bg_score+ε)==log((fg_score+ε)/(bg_score+ε)), where fg_score is the inverse of the foreground cost, bg_score is the inverse of the background cost, and ε is a small value to prevent division by zero or infinite log values.
In some embodiments, the relative foreground and background probabilities may be determined by using the smallest cost value in a range of depths likely to be foreground and the smallest cost in the range of depths likely to be background. Again, the respective ranges may be determined by the particular camera configuration in use, or by other means. An alternative method of determining the relative foreground and background probabilities is to aggregate the costs (or 1/cost) for disparity values greater than or equal to two and divide by the aggregated 1/cost for low disparity values.
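The score-based conversion just described can be sketched as follows, assuming a cost volume indexed as (depth bin, row, column) and caller-supplied masks selecting the depth ranges treated as likely foreground and likely background; those masks and the epsilon value are assumptions for illustration.

```python
import numpy as np

def cost_volume_to_llr(cost, fg_bins, bg_bins, eps=1e-6):
    """Convert a cost volume to a log-likelihood map using the smallest cost in each range.
    fg_bins / bg_bins are boolean masks over the depth-bin axis selecting the ranges of
    depths likely to be foreground and background, respectively."""
    fg_score = 1.0 / (cost[fg_bins].min(axis=0) + eps)   # smaller cost -> larger score
    bg_score = 1.0 / (cost[bg_bins].min(axis=0) + eps)
    # log(fg_score + eps) - log(bg_score + eps) == log((fg_score + eps) / (bg_score + eps))
    return np.log10(fg_score + eps) - np.log10(bg_score + eps)
```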
In further embodiments, more sophisticated approaches may be used. In one such example, a data-driven approach is used whereby a regression is run against training data (which may have ground-truth disparity labels) to determine a model. In yet other embodiments, the conversion of a cost (where large values indicate undesirability) to a score (where large values indicate high confidence) may be modified. That is, instead of using a monotonically decreasing function such as 1/x as described above, the regression may be used to produce a probability or score directly.
In some embodiments, the persona ID modules may be combined dynamically based on one or more factors, including: (i) image-capture conditions such as lighting and persona distance; (ii) available processing power, such as a desktop or laptop having a given amount of processing power versus a smart-phone device having relatively less processing power; (iii) power-source availability (battery level or wired); (iv) communication bandwidth available to transmit video-encoded persona data; and/or (v) user feedback indicating which weight set provides a desired result as determined by the user. The factors may be used to determine which persona ID modules to use, or which combination of persona ID modules to use. The modules may be ranked according to performance under certain lighting conditions or by required processing power, such that for a given lighting condition or given processing resources, the best combination of modules may be utilized.
In further embodiments, weights may be used by the combiner module 122 to combine the persona probability maps. In some embodiments, a set of weights may be applied to the maps of the respective modules that have been determined to perform well in order to compute the aggregate persona probability map. In other embodiments, a plurality of sets of weights may be available, where each set of weights performs best under given conditions (lighting, processing power, etc.). The set of weights may be selected dynamically based on current conditions detected by the computing device.
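One way to realize this condition-dependent selection is a simple lookup keyed by the detected conditions, as sketched below; the condition names, module ordering, and weight values are purely illustrative assumptions.

```python
# Illustrative weight sets, one per operating condition; each entry holds one weight
# per persona-ID module (here: depth-based, hair, color-histogram) in a fixed order.
WEIGHT_SETS = {
    ("good_lighting", "high_power"): (1.0, 0.8, 0.6),
    ("low_lighting", "high_power"):  (1.2, 0.5, 0.9),
    ("good_lighting", "low_power"):  (1.0, 0.0, 0.7),   # drop the costlier hair module
}

def select_weights(lighting, processing):
    """Pick the weight set matching the detected capture/compute conditions."""
    return WEIGHT_SETS.get((lighting, processing), (1.0, 1.0, 1.0))
```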
The persona extraction module 126 of computing device 104 then operates on the aggregate persona probability map, as indicated by line 128 from combiner module 122. In one embodiment, a graph-cut utility (such as what is available from within the OpenCV library) is used. In such an embodiment, the segmentation of the persona extraction may be formulated as a mincut/maxflow problem. In this case, the image is mapped into a graph, and each pixel is mapped to a node. There are also two special nodes called the source and the sink. The node for each image pixel is connected to both the source and the sink. If the aggregate persona probability map indicates that a pixel is likely to be foreground, a weight is applied to the edge linking that pixel to the source. If the aggregate persona probability map indicates that a pixel is likely to be background, a weight is applied to the edge linking that pixel to the sink. The magnitude of the weight increases as the probability becomes more certain. In addition, edges are included that link the node for a pixel to the nodes of neighboring pixels. The weights of these edges are inversely proportional to the likelihood of a boundary appearing there. One possible technique is to set these weights to be large if the two pixels are similar in color and to set them to be small if they are not. Thus, transitioning from foreground to background is favored in areas where the color is also changing. The mincut problem is then solved by configuring the algorithm to remove edges from the graph until the source is no longer connected to the sink. (The algorithm will minimize the total weight of the edges it removes.) Since the node for each pixel is connected to both the source and the sink, one of those edges must be removed by the cut. If the node remains connected to the source (the edge to the sink was removed), that pixel is marked as foreground. Otherwise, the node remains connected to the sink (the edge to the source was removed), and that pixel is marked as background. The formulation described may be solved efficiently through a variety of techniques.
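As one concrete example of a graph-cut-based extraction, OpenCV's grabCut routine can be seeded from the aggregate persona probability map. The thresholds used below to map probabilities onto grabCut's four mask labels, and the iteration count, are illustrative assumptions rather than part of this disclosure; grabCut is used here only as a readily available stand-in for a graph-cut utility.

```python
import cv2
import numpy as np

def graph_cut_extract(frame_bgr, aggregate_llr, hi=1.0, lo=-1.0):
    """Seed OpenCV grabCut from an aggregate log-likelihood map and return a binary persona mask.
    frame_bgr must be an 8-bit, 3-channel image with the same height/width as aggregate_llr."""
    mask = np.full(aggregate_llr.shape, cv2.GC_PR_BGD, dtype=np.uint8)  # default: probably background
    mask[aggregate_llr > 0] = cv2.GC_PR_FGD        # mildly foreground-leaning pixels
    mask[aggregate_llr > hi] = cv2.GC_FGD          # confidently foreground
    mask[aggregate_llr < lo] = cv2.GC_BGD          # confidently background
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
```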
In an alternative embodiment, the persona extraction module 126 may utilize an active contour model to operate on the aggregate persona probability map. The active contour model (also known as “snake”) may be used for segmentation and tracking. It does this by minimizing the combination of an external energy (to cause the contour to snap to image boundaries) and an internal energy (to keep the contour from becoming too convoluted). In one embodiment, one or more closed contours are used such that each contour will have an “inside” portion. The aggregate probability map is processed by an external energy function that favors including regions of high foreground probability and disfavors including regions of high background probability. In a further embodiment, more traditional terms for external energy may be used that favor high gradient regions in the image. In further embodiments, the internal energy term may include the commonly accepted terms.
In some embodiments, the active contour model is initialized using an initial contour produced from another module, such as a foreground-background map as described above, or a thresholded version of the aggregate persona probability map, or by a graph cut module determination of foreground/background, or the like. As the user moves in the video, the active contour model may be updated frame-by-frame. Some embodiments may periodically check to determine whether a contour needs to be reinitialized, such as if the enclosed area grows too small, if the aggregate foreground probability of the enclosed region drops too low, or if the combined set of contours fails to explain all the high aggregate foreground probability regions.
In some embodiments, the active contour module may directly generate the persona alpha mask, but in alternative embodiments, the active contour model may instead modify the aggregate persona probability map that is then processed by a different persona extraction module.
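As a rough sketch of the active-contour variant, scikit-image's active_contour function can be driven by the aggregate probability map so that the snake is drawn toward high-probability regions and their boundaries. The circular initialization, parameter values, and (row, col) point convention (recent scikit-image versions) are assumptions for illustration, not the specific energy formulation described above.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_persona_contour(aggregate_prob, center=(240, 320), radius=200, n_points=400):
    """Fit a closed snake on a smoothed aggregate persona probability map."""
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])   # initial (row, col) contour
    smoothed = gaussian(aggregate_prob, sigma=3, preserve_range=True)
    return active_contour(smoothed, init,
                          alpha=0.015,   # internal energy: resists stretching
                          beta=10.0,     # internal energy: resists bending (keeps contour smooth)
                          w_line=1.0,    # external energy: attraction toward high-probability pixels
                          w_edge=1.0,    # external energy: attraction toward probability-map edges
                          gamma=0.001)
```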
In yet a further alternative embodiment, the persona extraction module 126 may utilize an active shape model to operate on the aggregate persona probability map. The active shape model (ASM) is a technique to identify a deformable object in a scene. The model itself comprises a basic shape and the ways that this shape can vary in individual instances. The module models a single torso/neck/head in the scene with an ASM, the parameters of which may be learned from a training corpus.
In some embodiments, the ASM module is configured to operate on the aggregated persona probability map. That is, the ASM fitting algorithm favors enclosing regions of high foreground probability and disfavors enclosing regions of high background probability. Some embodiments may also favor placing the occluding contour of the person along edges in the image.
The ASM fitting process may also be initialized using the foreground-background map or a thresholded version of the aggregate persona probability map. Alternatively, a graph cut may be used to produce an initial shape. As the user moves in the video, the model parameters are updated frame-by-frame. Possible conditions for reinitialization include the enclosed area growing too small or the aggregate probability of the enclosed region dropping too low.
Some embodiments utilize a hybrid approach where the head and torso of the persona are extracted using ASM, while arms and fingers are modeled as articulated objects rather than deformable objects. Once the ASM is used to fit the persona, the shoulders are identified and an arm segmentation model is used. In a further embodiment, an active contour is initialized from the ASM. In a further embodiment, the persona probability map is updated or modified instead of directly generating the alpha mask. This updated map is then provided to a different persona extraction module.
In some embodiments, the persona extraction module may identify the pixel locations belonging to the desired persona by generating an "alpha mask" (e.g., an alpha mask for each frame), where a given alpha mask may take the form of or at least include an array with a respective stored data element corresponding to each pixel in the corresponding frame, where such stored data elements are individually and respectively set equal to 1 (one) for each presenter pixel and to 0 (zero) for every other pixel (i.e., for each non-presenter (a.k.a. background) pixel).
The described alpha masks correspond in name with the definition of the “A” in the “RGBA” pixel-data format known to those of skill in the art, where “R” is a red-color value, “G” is a green-color value, “B” is a blue-color value, and “A” is an alpha value ranging from 0 (complete transparency) to 1 (complete opacity). When merging an extracted persona with content, the above-referenced Personify technology creates the above-mentioned merged display in a manner consistent with these conventions; in particular, on a pixel-by-pixel (i.e., pixel-wise) basis, the merging is carried out using pixels from the captured video frame for which the corresponding alpha-mask values equal 1, and otherwise using pixels from the content.
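The pixel-wise merge described above reduces to standard alpha compositing. A minimal sketch, assuming a binary (or fractional) alpha mask and same-sized frame and content images; the function name is illustrative.

```python
import numpy as np

def merge_persona(frame_rgb, alpha_mask, content_rgb):
    """Use camera-frame pixels where alpha is 1 and content pixels where alpha is 0;
    fractional alpha values blend the two."""
    alpha = alpha_mask.astype(np.float32)[..., np.newaxis]
    merged = alpha * frame_rgb.astype(np.float32) + (1.0 - alpha) * content_rgb.astype(np.float32)
    return merged.astype(np.uint8)
```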
In embodiments shown in
The persona identification modules are configured to operate on the image depth data, on the image pixel data, or on both the image depth data and the image pixel data, as described above. The apparatus may also include a video module configured to generate the depth data from a plurality of frames of image pixel data. The persona extraction module may be configured to perform a graph-cut operation, an active-shape-based algorithm, or an active-contour-based algorithm.
With respect to
In further embodiments, the method may include scenarios where the disparity data comprises a plurality of disparity values for each pixel, each of the plurality of disparity values having an associated confidence value, and wherein processing the at least one frame of pixel data and the image depth data comprises generating a foreground-background map from the disparity data by identifying pixels having a cumulative confidence measure above a threshold as foreground pixels. The image depth map may be converted to a foreground-background map using a thresholding operation. The method may also include scenarios where the foreground-background map is distance-transformed to obtain a persona probability map.
In yet other embodiments, the method may include processing the foreground-background map to obtain a persona head contour, and pixels of the at least one frame of pixel data in a band around the persona head contour are selectively categorized as persona pixels based on a color segmentation. The image depth data may be simple depth values, or may be in the form of a cost volume, or a cost volume that is filtered such as by using a semi-global matching module.
The method may include extracting pixels using a graph-cut-based persona extraction module, an active-shape-based persona shape recognition module, or an active-contour-based persona extraction module.
The aggregate persona probability map may be formed by combining the plurality of persona probability maps using predetermined weights. And the predetermined weights may be preset or may be selected according to an image capture environment, or according to user feedback.
With respect to
Although features and elements are described above in particular combinations, those having ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements without departing from the scope and spirit of the present disclosure.
This application is a continuation of U.S. patent application Ser. No. 14/145,874, filed Dec. 31, 2013, entitled “System and Methods for Persona Identification Using Combined Probability Maps,” and published Jul. 2, 2015 as U.S. Patent Application Pub. No. 2015/0187076, the contents of which are fully incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
8355 | Hurlbut | Sep 1851 | A |
5001558 | Burley | Mar 1991 | A |
5022085 | Cok | Jun 1991 | A |
5117283 | Kroos | May 1992 | A |
5227985 | DeMenthon | Jul 1993 | A |
5343311 | Morag | Aug 1994 | A |
5506946 | Bar | Apr 1996 | A |
5517334 | Morag | May 1996 | A |
5534917 | MacDougall | Jul 1996 | A |
5581276 | Cipolla | Dec 1996 | A |
5631697 | Nishimura | May 1997 | A |
5687306 | Blank | Nov 1997 | A |
5875040 | Matraszek | Feb 1999 | A |
6119147 | Toomey | Sep 2000 | A |
6150930 | Cooper | Nov 2000 | A |
6411744 | Edwards | Jun 2002 | B1 |
6618444 | Haskell | Sep 2003 | B1 |
6661918 | Gordon | Dec 2003 | B1 |
6664973 | Iwamoto | Dec 2003 | B1 |
6760749 | Dunlap | Jul 2004 | B1 |
6798407 | Benman | Sep 2004 | B1 |
6937744 | Toyama | Aug 2005 | B1 |
7050070 | Ida | May 2006 | B2 |
7124164 | Chemtob | Oct 2006 | B1 |
7317830 | Gordon | Jan 2008 | B1 |
7386799 | Clanton | Jun 2008 | B1 |
7420490 | Gupta | Sep 2008 | B2 |
7420590 | Matusik | Sep 2008 | B2 |
7463296 | Sun | Dec 2008 | B2 |
7512262 | Criminisi | Mar 2009 | B2 |
7518051 | Redmann | Apr 2009 | B2 |
7574043 | Porikli | Aug 2009 | B2 |
7599555 | McGuire | Oct 2009 | B2 |
7602990 | Matusik | Oct 2009 | B2 |
7631151 | Prahlad | Dec 2009 | B2 |
7633511 | Shum | Dec 2009 | B2 |
7634533 | Rudolph | Dec 2009 | B2 |
7668371 | Dorai | Feb 2010 | B2 |
7676081 | Blake | Mar 2010 | B2 |
7692664 | Weiss | Apr 2010 | B2 |
7720283 | Sun | May 2010 | B2 |
7742650 | Xu | Jun 2010 | B2 |
7755016 | Toda | Jul 2010 | B2 |
7773136 | Ohyama | Aug 2010 | B2 |
7821552 | Suzuki | Oct 2010 | B2 |
7831087 | Harville | Nov 2010 | B2 |
7965885 | Iwai | Jun 2011 | B2 |
8073196 | Yuan | Dec 2011 | B2 |
8094928 | Graepel | Jan 2012 | B2 |
8146005 | Jones | Mar 2012 | B2 |
8175379 | Wang | May 2012 | B2 |
8175384 | Wang | May 2012 | B1 |
8204316 | Panahpour | Jun 2012 | B2 |
8225208 | Sprang | Jul 2012 | B2 |
8238605 | Chien | Aug 2012 | B2 |
8249333 | Agarwal | Aug 2012 | B2 |
8264544 | Chang | Sep 2012 | B1 |
8300890 | Gaikwad | Oct 2012 | B1 |
8300938 | Can | Oct 2012 | B2 |
8320666 | Gong | Nov 2012 | B2 |
8331619 | Ikenoue | Dec 2012 | B2 |
8331685 | Pettigrew | Dec 2012 | B2 |
8335379 | Malik | Dec 2012 | B2 |
8345082 | Tysso | Jan 2013 | B2 |
8355379 | Thomas | Jan 2013 | B2 |
8363908 | Steinberg | Jan 2013 | B2 |
8379101 | Mathe | Feb 2013 | B2 |
8396328 | Sandrew | Mar 2013 | B2 |
8406494 | Zhan | Mar 2013 | B2 |
8411149 | Maison | Apr 2013 | B2 |
8411948 | Rother | Apr 2013 | B2 |
8422769 | Rother | Apr 2013 | B2 |
8437570 | Criminisi | May 2013 | B2 |
8446459 | Fang | May 2013 | B2 |
8503720 | Shotton | Aug 2013 | B2 |
8533593 | Grossman | Sep 2013 | B2 |
8533594 | Grossman | Sep 2013 | B2 |
8533595 | Grossman | Sep 2013 | B2 |
8565485 | Craig | Oct 2013 | B2 |
8588515 | Bang | Nov 2013 | B2 |
8625897 | Criminisi | Jan 2014 | B2 |
8643701 | Nguyen | Feb 2014 | B2 |
8649592 | Nguyen | Feb 2014 | B2 |
8649932 | Mian | Feb 2014 | B2 |
8655069 | Rother | Feb 2014 | B2 |
8659658 | Vassigh | Feb 2014 | B2 |
8666153 | Hung | Mar 2014 | B2 |
8682072 | Sengamedu | Mar 2014 | B2 |
8701002 | Grossman | Apr 2014 | B2 |
8723914 | Mackie | May 2014 | B2 |
8818028 | Nguyen | Aug 2014 | B2 |
8831285 | Kang | Sep 2014 | B2 |
8854412 | Tian | Oct 2014 | B2 |
8874525 | Grossman | Oct 2014 | B2 |
8890923 | Tian | Nov 2014 | B2 |
8890929 | Paithankar | Nov 2014 | B2 |
8897562 | Bai | Nov 2014 | B2 |
8913847 | Tang | Dec 2014 | B2 |
8994778 | Weiser | Mar 2015 | B2 |
9008457 | Dikmen | Apr 2015 | B2 |
9053573 | Lin | Jun 2015 | B2 |
9065973 | Graham | Jun 2015 | B2 |
9084928 | Klang | Jul 2015 | B2 |
9087229 | Nguyen | Jul 2015 | B2 |
9088692 | Carter | Jul 2015 | B2 |
9117310 | Coene | Aug 2015 | B2 |
9269153 | Gandolph | Feb 2016 | B2 |
9285951 | Makofsky | Mar 2016 | B2 |
9336610 | Ohashi | May 2016 | B2 |
20020051491 | Challapali | May 2002 | A1 |
20020158873 | Williamson | Oct 2002 | A1 |
20040004626 | Ida | Jan 2004 | A1 |
20040153671 | Schuyler | Aug 2004 | A1 |
20050094879 | Harville | May 2005 | A1 |
20050219264 | Shum | Oct 2005 | A1 |
20050219391 | Sun | Oct 2005 | A1 |
20050262201 | Rudolph | Nov 2005 | A1 |
20060072022 | Iwai | Apr 2006 | A1 |
20060193509 | Criminisi | Aug 2006 | A1 |
20060221248 | McGuire | Oct 2006 | A1 |
20060259552 | Mock | Nov 2006 | A1 |
20070036432 | Xu | Feb 2007 | A1 |
20070070200 | Matusik | Mar 2007 | A1 |
20070110298 | Graepel | May 2007 | A1 |
20070133880 | Sun | Jun 2007 | A1 |
20070146512 | Suzuki | Jun 2007 | A1 |
20070201738 | Toda | Aug 2007 | A1 |
20070269108 | Steinberg | Nov 2007 | A1 |
20080109724 | Gallmeier | May 2008 | A1 |
20080181507 | Gope | Jul 2008 | A1 |
20080219554 | Dorai | Sep 2008 | A1 |
20080273751 | Yuan | Nov 2008 | A1 |
20090003687 | Agarwal | Jan 2009 | A1 |
20090044113 | Jones | Feb 2009 | A1 |
20090110299 | Panahpour | Apr 2009 | A1 |
20090144651 | Sprang | Jun 2009 | A1 |
20090199111 | Emori | Aug 2009 | A1 |
20090244309 | Maison | Oct 2009 | A1 |
20090245571 | Chien | Oct 2009 | A1 |
20090249863 | Kim | Oct 2009 | A1 |
20090278859 | Weiss | Nov 2009 | A1 |
20090284627 | Bando | Nov 2009 | A1 |
20090290795 | Criminisi | Nov 2009 | A1 |
20090300553 | Pettigrew | Dec 2009 | A1 |
20100027961 | Gentile | Feb 2010 | A1 |
20100046830 | Wang | Feb 2010 | A1 |
20100053212 | Kang | Mar 2010 | A1 |
20100128927 | Ikenoue | May 2010 | A1 |
20100166325 | Sengamedu | Jul 2010 | A1 |
20100171807 | Tysso | Jul 2010 | A1 |
20100195898 | Bang | Aug 2010 | A1 |
20100278384 | Shotton | Nov 2010 | A1 |
20100302376 | Boulanger | Dec 2010 | A1 |
20100302395 | Mathe | Dec 2010 | A1 |
20110038536 | Gong | Feb 2011 | A1 |
20110090311 | Fang | Apr 2011 | A1 |
20110115886 | Nguyen | May 2011 | A1 |
20110158529 | Malik | Jun 2011 | A1 |
20110193939 | Vassigh | Aug 2011 | A1 |
20110216965 | Rother | Sep 2011 | A1 |
20110216975 | Rother | Sep 2011 | A1 |
20110216976 | Rother | Sep 2011 | A1 |
20110242277 | Do | Oct 2011 | A1 |
20110243430 | Hung | Oct 2011 | A1 |
20110249190 | Nguyen | Oct 2011 | A1 |
20110249863 | Ohashi | Oct 2011 | A1 |
20110249883 | Can | Oct 2011 | A1 |
20110267348 | Lin | Nov 2011 | A1 |
20110293179 | Dikmen | Dec 2011 | A1 |
20110293180 | Criminisi | Dec 2011 | A1 |
20120051631 | Nguyen | Mar 2012 | A1 |
20120127259 | Mackie | May 2012 | A1 |
20130016097 | Coene | Jan 2013 | A1 |
20130028476 | Craig | Jan 2013 | A1 |
20130094780 | Tang | Apr 2013 | A1 |
20130110565 | Means, Jr. | May 2013 | A1 |
20130142452 | Shionozaki | Jun 2013 | A1 |
20130147900 | Weiser | Jun 2013 | A1 |
20130243313 | Civit | Sep 2013 | A1 |
20130335506 | Carter | Dec 2013 | A1 |
20140003719 | Bai | Jan 2014 | A1 |
20140029788 | Kang | Jan 2014 | A1 |
20140063177 | Tian | Mar 2014 | A1 |
20140085398 | Tian | Mar 2014 | A1 |
20140112547 | Peeper | Apr 2014 | A1 |
20140119642 | Lee | May 2014 | A1 |
20140153784 | Gandolph | Jun 2014 | A1 |
20140229850 | Makofsky | Aug 2014 | A1 |
20140300630 | Flider | Oct 2014 | A1 |
20140307056 | Collet Romea | Oct 2014 | A1 |
Number | Date | Country |
---|---|---|
2013019259 | Feb 2013 | WO |
Entry |
---|
Akbarzadeh, A., et al., “Towards Urban 3D Reconstruction From Video,” Third International Symposium on 3D Data Processing, Visualization, and Transmission, pp. 1-8 (Jun. 14-16, 2006). |
Barnat, Jiří, et al., “CUDA accelerated LTL Model Checking,” FI MU Report Series, FIMU-RS-2009-05, 20 pages (Jun. 2009). |
Canesta™, “See How Canesta's Solution Gesture Control Will Change the Living Room,” retrieved Oct. 21, 2010, from http://canesta.com, 2 pages. |
Chan, S.C., et al., “Image-Based Rendering and Synthesis,” IEEE Signal Processing Magazine, pp. 22-31 (Nov. 2007). |
Chan, Shing-Chow, et al. “The Plenoptic Video,” 15(12) IEEE Transactions on Circuits and Systems for Video Technology 1650-1659 (Dec. 2005). |
Chen, Wan-Yu, et al., “Efficient Depth Image Based Rendering with Edge Dependent Depth Filter and Interpolation,” IEEE International Conference on Multimedia and Expo, pp. 1314-1317 (Jul. 6, 2005). |
Debevec, Paul, et al., “Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping,” In 9th Eurographics Workshop on Rendering, pp. 105-116 (Jun. 1998). |
Fehn, Christoph, et al., “Interactive 3-DTV-Concepts and Key Technologies,” 94(3) Proceedings of the IEEE 524-538 (Mar. 2006). |
GPGPU (General-purpose computing on graphics processing units)—Wikipedia, retrieved Nov. 17, 2009, from http://en.wikipedia.org/wiki/GPGPU, 9 pages. |
H. Y. Shum and S. B. Kang, “A Review of Image-based Rendering Techniques,” Proc. IEEE/SPIE Visual Communications and Image (VCIP) 2000, pp. 2-13, Perth, Jun. 2000. |
Ho, Yo-Sung, et al., “Three-dimensional Video Generation for Realistic Broadcasting Services,” ITC-CSCC, pp. TR-1 through TR4 (2008). |
Jung, Kwang Hee, et al., “Depth Image Based Rendering for 3D Data Service Over T-DMB,” IEEE, 3DTV-CON'08, Istanbul, Turkey, pp. 237-240 (May 28-30, 2008). |
Kanade, Takeo, et al., “Virtualized Reality: Constructing Virtual Worlds from Real Scenes,” IEEE MultiMedia, pp. 34-46 (Jan.-Mar. 1997). |
Kao, Wen-Chung, et al., “Multistage Bilateral Noise Filtering and Edge Detection for Color Image Enhancement,” 51 (4) IEEE Transactions on Consumer Electronics 1346-1351 (Nov. 2005). |
Kipfer, Peter, “GPU Gems 3—Chapter 33. LCP Algorithms for Collision Detection Using CUDA,” retrieved Nov. 17, 2009, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch33.html, 11 pages (2007). |
Kitagawa et al., “Background Separation Encoding for Surveillance Purpose by using Stable Foreground Separation”, APSIPA, Oct. 4-7, 2009, pp. 849-852. |
Kubota, Akira, et al., “Multiview Imaging and 3DTV,” IEEE Signal Processing Magazine, pp. 10-21 (Nov. 2007). |
Lee, Eun-Kyung, et al., “High-Resolution Depth Map Generation by Applying Stereo Matching Based on Initial Depth Information,” 3DTV-CON'08, Istanbul, Turkey, pp. 201-204 (May 28-30, 2008). |
Mark, William R., et al., “Post-Rendering 3D Warping,” In Proceedings of 1997 Symposium on Interactive 3D Graphics, Providence, RI, pp. 7-16 (Apr. 27-30, 1997). |
McMillan, Jr., Leonard, “An Image-Based Approach to Three-Dimensional Computer Graphics,” University of North Carolina at Chapel Hill, Chapel Hill, NC, 206 pages. (1997). |
Nguyen, Ha T., et al., “Image-Based Rendering with Depth Information Using the Propagation Algorithm,” Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 4 pages (Mar. 2005). |
Nguyen, Quang H., et al., “Depth image-based rendering from multiple cameras with 3D propagation algorithm,” Proceedings of the 2nd International Conference on Immersive Telecommunications, 6 pages (2009). |
Nguyen, Quang H., et al., “Depth Image-Based Rendering with Low Resolution Depth,” 16th IEEE International conference on Image Processing (ICIP), pp. 553-556 (2009). |
PrimeSense, Home Page, retrieved Oct. 21, 2010, from http://www.primesense.com, 1 page. |
Saxena, Ashutosh, et al., “3-D Depth Reconstruction from a Single Still Image,” 76(1) International Journal of Computer Vision 53-69 (2007). |
Shade, Jonathan, et al., “Layered Depth Images,” Computer Graphics Proceedings, Annual Conference Series, pp. 231-242 (Jul. 19-24, 1998). |
Tomasi, C., et al., “Bilateral Filtering for Gray and Color Images,” Sixth International Conference on Computer Vision, pp. 839-846 (1998). |
Um, Gi-Mun, et al., “Three-dimensional Scene Reconstruction Using Multi-View Images and Depth Camera”, pp. 271-280, SPIE-IS&t, vol. 5664, 2005. |
Vazquez, C., et al., “3D-TV: Coding of Disocclusions for 2D+Depth Representation of Multi-View Images,” Proceedings of the Tenth IASTED Int'l Conference: Computer Graphics and Imaging, pp. 26-33 (Feb. 13-15, 2008). |
Working screenshot of Snagit manufactured by Techsmith, released Apr. 18, 2014. |
Yang, Qingxiong, et al., “Spatial-Depth Super Resolution for Range Images,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8 (2007). |
Zhang, Buyue, et al., “Adaptive Bilateral Filter for Sharpness Enhancement and Noise Removal,” IEEE ICIP, pp. IV-417-IV-420 (2007). |
Zitnick, C. Lawrence, et al., “High-quality video view interpolation using a layered representation,” 23(3) Journal ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2004, pp. 600-608 (Aug. 2004). |
Arbelaez, P., et al., “Contour detection and hierarchical image segmentation”, Pattern Analysis and Machine Intelligence, IEEE Transactions on 33.4 (2011): 898-916. |
Benezeth et al., “Review and Evaluation of Commonly-Implemented Background Subtraction Algorithms”, 2008. |
Rother, C., et al., “Grabcut: Interactive foreground extraction using iterated graph cuts”, ACM Transactions on Graphics (TOG) 23.3 (2004), pp. 309-314. |
Crabb et al., “Real-Time Foreground Segmentation via Range and Color Imaging”, 2008. |
Gvili et al., “Depth Keying”, 2003. |
Kolmogorov, et al., “Bi-Layer Segmentation of Binocular Stereo Vision”, IEEE, 2005. |
Lee, D.S., “Effective Gaussian Mixture Learning for Video Background Subtraction”, IEEE, May 2005. |
Izquierdo, M. Ebroul, “Disparity/segmentation analysis: matching with an adaptive window and depth-driven segmentation.” Circuits and Systems for Video Technology, IEEE Transactions on 9.4 (1999): 589-607. |
Piccardi, M., “Background Subtraction Techniques: A Review”, IEEE, 2004. |
Wang, L., et al., “Tofcut: Towards robust real-time foreground extraction using a time-of-flight camera.”, Proc. of 3DPVT, 2010. |
Xu, F., et al., “Human detection using depth and gray images”, Advanced Video and Signal Based Surveillance, 2003., Proceedings, IEEE Conference on IEEE, 2003. |
Zhang, Q., et al., “Segmentation and tracking multiple objects under occlusion from multiview video.”, Image Processing, IEEE Transactions on 20.11 (2011), pp. 3308-3313. |
Cheung et al., “Robust Techniques for Background Subtraction in Urban Traffic Video”, 2004. |
Number | Date | Country | |
---|---|---|---|
20160350585 A1 | Dec 2016 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 14145874 | Dec 2013 | US |
Child | 15231296 | | US |