Embodiments of the present invention relate to the field of video processing, and more specifically to systems and methods for integrating user personas with other display content during chat sessions.
Conventional video conferencing techniques typically employ a camera mounted at one location and directed at a user. The camera acquires an image of the user and of the user's background that is then rendered on the video display of another user. The rendered image typically depicts the user, miscellaneous objects, and background that are within the field-of-view of the acquiring camera. For example, the camera may be mounted on the top edge of a video display within a conference room with the user positioned to view the video display. The camera field-of-view may encompass the user and, in addition, a conference table, chairs, and artwork on the wall behind the user (i.e., anything else within the field-of-view). Typically, the image of the entire field-of-view is transmitted to the video display of a second user. Thus, much of the video display of the second user is filled with irrelevant, distracting, unappealing, or otherwise undesired information. Such information may diminish the efficiency, efficacy, or simply the aesthetics of the video conference, reducing the quality of the user experience.
Conventional chat sessions involve the exchange of text messages. Mere text messages lack the ability to convey certain types of information, such as the facial expressions, gestures, or general body language expressed by the participants. Conventional video conferencing techniques may convey images of the participants, but, as discussed above, the video conferencing medium has several shortcomings.
Furthermore, typical video conferencing and chat techniques do not incorporate the user with the virtual content (e.g., text) being presented, and the traditional capture of the user and surrounding environment is usually unnatural and unattractive when juxtaposed against virtual content. Such a display further detracts from any impression that the exchange is face-to-face.
The systems and computer-implemented methods disclosed herein associate extracted images of users with content during a chat session. In one of many embodiments, a method includes creating a first scene by receiving content, such as text, from a user. A persona is created for that user and is associated with the content. The persona is created by extracting the image of the user from a video frame; thus, in this embodiment, the persona is motionless (e.g., a still image). The persona is preferably extracted from a video frame that captures the facial expression, gestures, or other body language of the user while the user was creating the content. Such a persona generally adds to the exchange by conveying the user's attitude or emotion along with the content. The associated content and persona are then displayed in the chat session.
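By way of illustration only, the following Python sketch shows one way the scene-creation flow just described might be modeled in software. The class and function names (Persona, ChatMessage, extract_persona, create_scene) are hypothetical, and the segmentation step is stubbed out rather than implementing any actual extraction method.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Persona:
    """A still image of the user with the surrounding background removed."""
    image: bytes                     # encoded RGBA frame; alpha masks out the background
    captured_at: datetime

@dataclass
class ChatMessage:
    """A content balloon paired with the persona captured while it was authored."""
    author: str
    text: str
    persona: Optional[Persona] = None
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def extract_persona(video_frame: bytes) -> Persona:
    """Stand-in for the extraction step: segment the user out of one video frame.

    The real segmentation is outside this sketch; here the frame is passed
    through untouched as a placeholder."""
    return Persona(image=video_frame, captured_at=datetime.now(timezone.utc))

def create_scene(author: str, text: str, current_frame: bytes) -> ChatMessage:
    """Associate the freshly extracted persona with the user's content."""
    return ChatMessage(author=author, text=text, persona=extract_persona(current_frame))

# Example: the message and its persona travel together to the other displays.
message = create_scene("alice", "See you at 3?", current_frame=b"<raw frame bytes>")
```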
In the following description, numerous details and alternatives are set forth for the purpose of explanation. However, one of ordinary skill in the art will realize that embodiments may be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the embodiments with unnecessary detail.
According to an embodiment, a user persona connotes an image of the user without the surrounding background. The persona may be derived from, e.g., a single motionless video frame, a series of still frames (e.g., a stop-motion animation), or a video. Integrating a still frame (e.g., a “snapshot”), or a series of frames or short video (e.g., a “clip”), of a user persona with the text may improve the effect of a chat session (e.g., “texting”) by conveying the expressions, gestures, and general body language of the user near the time the session content was sent, effectively forming a custom emoticon. Grouping multiple personas on a display simulates face-to-face interactions and creates a more immediate, natural, and even visceral experience. Accordingly, it is highly desirable to integrate user personas with content during chat sessions. The systems and methods disclosed herein may extract the persona of the user from the field-of-view of the acquiring camera and incorporate that persona into the chat session on the displays of the users. Methods for extracting a persona from a video are described in application Ser. No. 13/076,264 (filed Mar. 20, 2011, by Minh N. Do, et al.) and Ser. No. 13/083,470 (filed Apr. 8, 2011, by Quang H. Nguyen, et al.), each of which is incorporated herein by reference in its entirety.
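By way of illustration only, the following sketch shows how an already-extracted persona (foreground pixels plus an alpha mask) might be composited over rendered chat content. The function and array names are hypothetical, and the mask itself is assumed to come from an extraction method such as those incorporated by reference above.

```python
import numpy as np

def composite_persona(canvas: np.ndarray, persona_rgb: np.ndarray,
                      alpha: np.ndarray, top: int, left: int) -> np.ndarray:
    """Alpha-blend an extracted persona onto a chat-session canvas.

    canvas:      H x W x 3 uint8 image of the rendered chat content
    persona_rgb: h x w x 3 uint8 persona pixels (background already removed)
    alpha:       h x w float mask in [0, 1]; 0 where the original background was
    """
    h, w = alpha.shape
    region = canvas[top:top + h, left:left + w].astype(float)
    blended = alpha[..., None] * persona_rgb + (1.0 - alpha[..., None]) * region
    out = canvas.copy()
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out

# Example with dummy data: a 40x30 persona placed at (10, 10) on a 200x320 canvas.
canvas = np.full((200, 320, 3), 255, dtype=np.uint8)
persona = np.zeros((40, 30, 3), dtype=np.uint8)
mask = np.ones((40, 30))                      # fully opaque persona pixels
framed = composite_persona(canvas, persona, mask, top=10, left=10)
```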
In the embodiment, persona 104 is a snapshot of the respective user created at the initiation of chat session 100. Personas 102 and 104 may initially be snapshots or clips representing the respective users at the time they were invited to the session. Alternatively, personas 102 and 104 may be pre-existing snapshots, clips, or other images chosen to represent the users at the initiation of a chat session. In
In an embodiment, before the creation of content balloon 110, the user is given the option of approving the persona that is to be depicted along with content balloon 110. In the event that the user is dissatisfied with the prospective persona, the user is given the option to create a different persona. In this case, persona 102 could be a pose assumed by the user for a snapshot, or a series of facial expressions, gestures, or other motions captured in a clip. Such a persona would no longer be as contemporaneous with the content, but would perhaps convey an expression or body language preferred by the user. The ability of the user to edit the persona may also reduce user anxiety about being “caught on camera” and potentially creating a faux pas. Personas 102 and 104 are updated throughout the chat session upon the sending of new content, adding the information conveyed by facial expressions and body language with each new content balloon and, as a result, adding to the impression that the chat session is held face-to-face.
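A minimal sketch of the approval loop described above follows; the capture and approve callbacks are hypothetical placeholders for the camera capture and user-interface prompt of a given embodiment.

```python
from typing import Callable

def choose_persona(capture: Callable[[], bytes],
                   approve: Callable[[bytes], bool],
                   max_retakes: int = 3) -> bytes:
    """Let the user approve the candidate persona or pose for a new one.

    capture:  returns a freshly extracted persona (snapshot or clip bytes)
    approve:  asks the user whether the candidate may accompany the balloon
    Falls back to the last candidate if the user never approves one.
    """
    candidate = capture()
    for _ in range(max_retakes):
        if approve(candidate):
            break
        candidate = capture()        # user strikes a new pose and is recaptured
    return candidate

# Example wiring with trivial callbacks (a real UI would prompt the user).
persona_bytes = choose_persona(capture=lambda: b"<frame>", approve=lambda p: True)
```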
In addition to snapshots, persona 102 may be a video clip of an arbitrary length, i.e., a moving persona. The moving persona (not shown) may capture the user for an arbitrary period before the creation of a content balloon. In addition, the user may be given the option to approve the moving persona or create a different moving persona as discussed above.
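One plausible way to keep the frames needed for such a moving persona is a fixed-length buffer that always holds the most recently extracted frames; the sketch below assumes hypothetical frame-rate and duration parameters.

```python
from collections import deque

class PersonaClipBuffer:
    """Keeps the most recent extracted frames so a clip can be cut
    covering the moments just before a content balloon is created."""

    def __init__(self, fps: int = 15, seconds: float = 3.0):
        self.frames = deque(maxlen=int(fps * seconds))

    def push(self, extracted_frame: bytes) -> None:
        """Called for every extracted frame while the camera runs."""
        self.frames.append(extracted_frame)

    def cut_clip(self) -> list:
        """Snapshot of the buffered frames at the instant content is sent."""
        return list(self.frames)

# Example: the last ~3 seconds of persona frames become the moving persona.
buffer = PersonaClipBuffer()
for i in range(100):
    buffer.push(b"frame-%d" % i)
moving_persona = buffer.cut_clip()      # at most fps * seconds frames
```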
In
Regarding
In an embodiment, a persona is created to accompany both the creation of a content balloon and the receipt of a content balloon. The manner of creating a persona upon receipt of a content balloon may be as discussed with respect to personas created upon the creation of a content balloon (e.g., based on keyboard entry, facial expressions, or other physical gestures). Thus, reactions from both the sender and receiver of a particular content balloon may be evidenced and displayed by their personas. In addition, the persona feature of a chat session may be disabled, should the nature of the chat session warrant it.
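The following sketch illustrates, under assumed names (PersonaPolicy, on_send, on_receive), how persona capture might be hooked to both the sending and the receipt of a content balloon, and how the feature could be disabled outright.

```python
from typing import Callable, Optional

class PersonaPolicy:
    """Decides when a fresh persona is captured during a chat session."""

    def __init__(self, enabled: bool = True):
        self.enabled = enabled       # the persona feature may be switched off entirely

    def on_send(self, capture: Callable[[], bytes]) -> Optional[bytes]:
        """Capture the sender's persona as the content balloon is created."""
        return capture() if self.enabled else None

    def on_receive(self, capture: Callable[[], bytes]) -> Optional[bytes]:
        """Capture the recipient's reaction when the balloon arrives."""
        return capture() if self.enabled else None

# Example: both ends of the exchange contribute a persona when enabled.
policy = PersonaPolicy(enabled=True)
sender_persona = policy.on_send(lambda: b"<sender frame>")
receiver_persona = policy.on_receive(lambda: b"<receiver frame>")
```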
At the displayed instant of chat session 400, every user, whether represented by a persona or an identifier, has accepted the invitation to the chat session. Thus, the combined list of personas 402-408 and identifiers 412-418 forms a deck 422 of users who may be chatted with, or “summoned,” by selecting their persona or identifier. Personas 402-408 and identifiers 412-418 may be removed from deck 422, and each individual user may remove themselves, in this case persona 408, as indicated by 424. The removal of a persona from the deck of one user is limited to that user's deck and does not cause the same persona to be removed from the other decks involved in chat session 400. In an embodiment, should a user remove a contact from deck 422, that user is symmetrically removed from the deck of the removed contact. For example, should the user represented by persona 408 remove persona 406 from deck 422, then persona 408 would be removed from the deck (not shown) on the display of the user represented by persona 406.
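A minimal sketch of the deck bookkeeping described above follows; the Deck class and the symmetric flag are hypothetical, and the example mirrors the removal of persona 406 from the deck of persona 408.

```python
class Deck:
    """One user's deck: the set of contacts who can be summoned into the session."""

    def __init__(self, owner: str, contacts: set):
        self.owner = owner
        self.contacts = set(contacts)

    def remove(self, contact: str, decks_by_owner: dict, symmetric: bool = False) -> None:
        """Remove a contact from this deck only; with symmetric=True,
        also remove the owner from the removed contact's deck."""
        self.contacts.discard(contact)
        if symmetric and contact in decks_by_owner:
            decks_by_owner[contact].contacts.discard(self.owner)

# Example mirroring the description: 408 removes 406 with symmetric removal enabled.
decks = {
    "408": Deck("408", {"402", "404", "406"}),
    "406": Deck("406", {"402", "404", "408"}),
}
decks["408"].remove("406", decks, symmetric=True)
assert "408" not in decks["406"].contacts      # 408 disappears from 406's deck as well
```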
At the displayed instant, content has been entered by each of the users represented by personas 402-408 and none of the users represented by identifiers 412-418. Should no user make an entry for an arbitrary time, the chat session may enter a “stand-by mode.” In stand-by mode, any user may activate chat session 400 by selecting a persona or identifier. Upon selecting a persona or identifier, a new content window 106 (not shown) will appear and chat session 400 could continue as described with reference to
Also regarding indicators of a user's current availability, a placeholder indicating a user's absence could be used. Such a placeholder is a positive indication of the user's absence, which may be preferable to the mere absence of a persona. The placeholder may, for example, be graphic, textual, or a combination of the two. A graphic placeholder may include a silhouette with a slash through it, or an empty line drawing of a previous persona of the user; a textual placeholder may include “unavailable” or “away.”
In an embodiment, only the users involved in an active chat session will leave stand-by mode. That is, a persona remains in stand-by mode until the respective user creates or receives a content balloon.
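One way to track which users are in stand-by mode is a per-user activity timestamp, as in the hypothetical sketch below; the stand-by threshold is an assumed parameter.

```python
import time

class SessionActivity:
    """Tracks per-user activity and flags who is in stand-by mode."""

    def __init__(self, standby_after: float = 300.0):
        self.standby_after = standby_after          # seconds of silence before stand-by
        self.last_activity = {}

    def touch(self, user: str) -> None:
        """Record that the user created or received a content balloon."""
        self.last_activity[user] = time.monotonic()

    def in_standby(self, user: str) -> bool:
        last = self.last_activity.get(user)
        return last is None or (time.monotonic() - last) > self.standby_after

# Example: a user leaves stand-by only when a balloon is created or received.
activity = SessionActivity(standby_after=300.0)
activity.touch("persona_402")                      # 402 sends a balloon
print(activity.in_standby("persona_402"))          # False: recently active
print(activity.in_standby("identifier_412"))       # True: never active in this session
```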
Creating a persona by extracting a user image from a video will now be described regarding
As seen in
In some embodiments, the camera 910 may further comprise a synchronization module 914 to temporally synchronize the information from the RGB sensor 911, infrared sensor 912, and infrared illuminator 913. The synchronization module 914 may be hardware and/or software embedded into the camera 910. In some embodiments, the camera 910 may further comprise a 3D application programming interface (API) providing an input-output (IO) structure and interface to communicate the color and depth information to a computer system 920. The computer system 920 may process the received color and depth information and may comprise the systems, and perform the methods, disclosed herein. In some embodiments, the computer system 920 may display the foreground video embedded into the background feed on a display screen 930.
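By way of illustration only, the sketch below pairs color and depth frames by timestamp, in the spirit of the synchronization module 914, and derives a rough foreground by thresholding depth. The depth thresholding is a simplification for illustration, not the extraction method of the incorporated applications, and all names and parameters are assumptions.

```python
import numpy as np

def pair_by_timestamp(color_frames, depth_frames, tolerance_ms: float = 10.0):
    """Greedy temporal pairing of (timestamp, frame) streams, mimicking the
    role of the camera's synchronization module."""
    pairs = []
    for t_c, color in color_frames:
        t_d, depth = min(depth_frames, key=lambda item: abs(item[0] - t_c))
        if abs(t_d - t_c) <= tolerance_ms:
            pairs.append((color, depth))
    return pairs

def foreground_from_depth(color: np.ndarray, depth: np.ndarray,
                          near_mm: float = 300, far_mm: float = 1500) -> np.ndarray:
    """Keep only pixels whose depth falls in the expected user range;
    everything else is treated as background and zeroed out."""
    mask = (depth >= near_mm) & (depth <= far_mm)
    return color * mask[..., None].astype(color.dtype)

# Example with synthetic data: one 4x4 color frame and a matching depth map.
color = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.full((4, 4), 800.0)
depth[0, 0] = 5000.0                                 # a far-away background pixel
pairs = pair_by_timestamp([(0.0, color)], [(1.0, depth)])
fg = foreground_from_depth(*pairs[0])                # background pixel is now black
```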
Any node of the network 1000 may comprise a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof capable of performing the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g. a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration, etc.).
In some embodiments, a node may comprise a machine in the form of a virtual machine (VM), a virtual server, a virtual client, a virtual desktop, a virtual volume, a network router, a network switch, a network bridge, a personal digital assistant (PDA), a cellular telephone, a web appliance, or any machine capable of executing a sequence of instructions that specify actions to be taken by that machine. Any node of the network may communicate cooperatively with another node on the network. In some embodiments, any node of the network may communicate cooperatively with every other node of the network. Further, any node or group of nodes on the network may comprise one or more computer systems (e.g. a client computer system, a server computer system) and/or may comprise one or more embedded computer systems, a massively parallel computer system, and/or a cloud computer system.
The computer system 1050 includes a processor 1008 (e.g. a processor core, a microprocessor, a computing device, etc.), a main memory 1010 and a static memory 1012, which communicate with each other via a bus 1014. The machine 1050 may further include a display unit 1016 that may comprise a touch-screen, or a liquid crystal display (LCD), or a light emitting diode (LED) display, or a cathode ray tube (CRT). As shown, the computer system 1050 also includes a human input/output (I/O) device 1018 (e.g. a keyboard, an alphanumeric keypad, etc.), a pointing device 1020 (e.g. a mouse, a touch screen, etc.), a drive unit 1022 (e.g. a disk drive unit, a CD/DVD drive, a tangible computer readable removable media drive, an SSD storage device, etc.), a signal generation device 1028 (e.g. a speaker, an audio output, etc.), and a network interface device 1030 (e.g. an Ethernet interface, a wired network interface, a wireless network interface, a propagated signal interface, etc.).
The drive unit 1022 includes a machine-readable medium 1024 on which is stored a set of instructions (i.e. software, firmware, middleware, etc.) 1026 embodying any one, or all, of the methodologies described above. The set of instructions 1026 is also shown to reside, completely or at least partially, within the main memory 1010 and/or within the processor 1008. The set of instructions 1026 may further be transmitted or received via the network interface device 1030 over the network bus 1014.
It is to be understood that embodiments may be used as, or to support, a set of instructions executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a machine- or computer-readable medium. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g. a computer). For example, a machine-readable medium includes read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and electrical, optical, acoustical, or any other type of media suitable for storing information.
Although the present invention has been described in terms of specific exemplary embodiments, it will be appreciated that various modifications and alterations might be made by those skilled in the art without departing from the spirit and scope of the invention. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.