Embodiments of the present invention relate generally to communications technology and, more particularly, relate to apparatuses, methods and computer program products for enabling the provision of gaze information to a participant in a communication session.
Communication devices are becoming increasingly ubiquitous in the modern world. In particular, mobile communication devices seem to be popular with people of all ages, socio-economic backgrounds and sophistication levels. Accordingly, users of such devices are becoming increasingly attached to their respective mobile communication devices. Whether such devices are used for calling, emailing, sharing or consuming media content, gaming, navigation or various other activities, people are more connected to their devices and consequently more connected to each other and to the world at large.
Due to advances in processing power, memory management, application development, power management and other areas, communication devices, such as computers, mobile telephones, cameras, personal digital assistants (PDAs), media players and many others are becoming more capable. Furthermore, many such devices are becoming capable of performing tasks associated with more than one of the above listed devices and other tasks as well. Numerous networks and communication protocols have also been developed to support communication between these devices. As a result, whether for business, entertainment, daily routine or other pursuits, communication devices are becoming increasingly reliable and capable mechanisms for sharing information.
With the rise in popularity of communication devices, communication is no longer limited to face-to-face communication and text or speech based communication. Instead, computer-mediated communication (CMC) is becoming more common. CMC may provide the ability for aspects of any or all of face-to-face communication and text or speech based communication to be realized.
A method, apparatus and computer program product are therefore provided that may enable the provision of gaze information, for example, in the context of CMC. Thus, for example, a user may be enabled to receive information regarding where either the user's own or another user's gaze is directed. Embodiments of the present invention may further enable modification of gaze information and/or synthesis of the gaze information based on various different factors, thereby enhancing raw gaze information. Embodiments may also provide for visualization of the modified and/or synthesized gaze information for a participant in a communication session.
In one example embodiment, a method of providing gaze information is provided. The method may include receiving content, determining gaze information of an individual relative to the content, modifying the gaze information based on modification criteria, modifying the content based on the modified gaze information, and providing for visualization of the modified content.
In another example embodiment, a computer program product for providing gaze information is provided. The computer program product may include at least one computer-readable storage medium having computer-executable program code portions stored therein. The computer-executable program code portions may include first program code instructions, second program code instructions, third program code instructions, fourth program code instructions and fifth program code instructions. The first program code instructions may be for receiving content. The second program code instructions may be for determining gaze information of an individual relative to the content. The third program code instructions may be for modifying the gaze information based on modification criteria. The fourth program code instructions may be for modifying the content based on the modified gaze information. The fifth program code instructions may be for providing for visualization of the modified content.
In another example embodiment, an apparatus for providing gaze information is provided. The apparatus may include a processor that may be configured to receive content, determine gaze information of an individual relative to the content, modify the gaze information based on modification criteria, modify the content based on the modified gaze information, and provide for visualization of the modified content.
In yet another example embodiment an apparatus for providing gaze information is provided. The apparatus may include means for receiving content, means for determining gaze information of an individual relative to the content, means for modifying the gaze information based on modification criteria, means for modifying the content based on the modified gaze information, and means for providing for visualization of the modified content.
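By way of illustration only, the following minimal sketch (in Python, chosen here purely for readability) shows one way the five recited operations could be chained together. All function names and data shapes are hypothetical assumptions introduced for this example and are not drawn from the embodiments themselves.

```python
"""Illustrative sketch only: a minimal pipeline mirroring the recited
operations. All names and data shapes are hypothetical assumptions."""

def receive_content():
    # Stand-in for received content (e.g., video of a shared workspace).
    return {"frames": ["frame-0", "frame-1"], "gaze": []}

def determine_gaze(content):
    # Stand-in for a gaze tracker: one normalized (x, y) point per frame.
    return [(0.40, 0.52), (0.43, 0.50)]

def modify_gaze(gaze_info, criteria):
    # Apply modification criteria; here, a trivial horizontal offset.
    dx = criteria.get("x_offset", 0.0)
    return [(x + dx, y) for (x, y) in gaze_info]

def modify_content(content, modified_gaze):
    # Attach the modified gaze to the content for later visualization.
    content["gaze"] = modified_gaze
    return content

def provide_visualization(content):
    # Stand-in for driving a display with the modified content.
    for frame, gaze in zip(content["frames"], content["gaze"]):
        print(f"{frame}: gaze at {gaze}")

content = receive_content()
modified_gaze = modify_gaze(determine_gaze(content), {"x_offset": 0.02})
provide_visualization(modify_content(content, modified_gaze))
```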
Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
Some embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms “data,” “content,” “content item,” “information” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.
CMC may change communication in three general ways. For example, CMC may remove or distort gaze information, may collapse perspective or create a new reference context, and may allow for transformation and/or synthesis of gaze information. As such, active transformation of gaze information may be utilized to influence how communication is conducted for the benefit of one, some or all of the participants. Embodiments of the present invention may enable active transformations of gaze information based on various different criteria and may also enable visualization of transformed (or modified) gaze information based on certain criteria. Accordingly, communication such as CMC may be enhanced.
Referring now to
The network 130 may include a collection of various different nodes, devices or functions that may be in communication with each other via corresponding wired and/or wireless interfaces. As such, the illustration of
One or more communication terminals such as the first and second communication devices 110 and 120 may be in communication with each other via the network 130 and each may include an antenna or antennas for transmitting signals to and for receiving signals from a base site, which could be, for example, a base station that is a part of one or more cellular or mobile networks or an access point that may be coupled to a data network, such as a local area network (LAN), a metropolitan area network (MAN), and/or a wide area network (WAN), such as the Internet. In turn, other devices such as processing elements (e.g., personal computers, server computers or the like) may be coupled to the first and second communication devices 110 and 120 via the network 130. By directly or indirectly connecting the first and second communication devices 110 and 120 and other devices to the network 130, the first and second communication devices 110 and 120 may be enabled to communicate with the other devices or each other, for example, according to numerous communication protocols including Hypertext Transfer Protocol (HTTP) and/or the like, to thereby carry out various communication or other functions of the first and second communication devices 110 and 120, respectively.
Furthermore, although not shown in
In example embodiments, each of the first communication device 110 and the second communication device 120 may be a mobile or fixed communication device. Thus, for example, the first and second communication devices 110 and 120 could be any of personal computers (PCs), PDAs, wireless telephones, desktop computers, laptop computers, mobile computers, cameras, video recorders, audio/video players, positioning devices, game devices, television devices, radio devices, or various other like devices or combinations thereof.
In an example embodiment, the service platform 140 may be a device or node such as a server or other processing element. The service platform 140 may have any number of functions or associations with various services. As such, for example, the service platform 140 may be a platform such as a dedicated server (or server bank) associated with CMC functionality, or the service platform 140 may be a backend server associated with one or more other functions or services having additional capability for supporting CMC as described herein. The functionality of the service platform 140 may be provided by hardware and/or software components configured to operate in accordance with embodiments of the present invention.
In an example embodiment, gaze information may be collected at one or both of the first and second communication devices 110 and 120. The gaze information may be modified based on certain criteria either at the device at which the gaze information is collected or at the device to which the gaze information is communicated (e.g., the service platform 140 or the other device). The modified gaze information may then be visualized at either the device at which the gaze information is collected or the other device. In some cases, the gaze information may be determined from or associated with video content and the visualization of the modified gaze information may include playing the video content after the video content has been modified according to the modified gaze information. Examples of apparatuses that could be included in or embodied as either one of the first and second communication devices or the service platform 140 and configured in accordance with embodiments of the present invention will be explained below in reference to
An example embodiment of the invention will now be described with reference to
Referring now to
The processor 210 may be embodied in a number of different ways. For example, the processor 210 may be embodied as various processing means such as a processing element, a coprocessor, a controller or various other processing devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a hardware accelerator, and/or the like. In an example embodiment, the processor 210 may be configured to execute instructions stored in the memory device 216 or otherwise accessible to the processor 210.
Meanwhile, the communication interface 214 may be embodied as any device or means embodied in either hardware, software, or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device or module in communication with the apparatus 200. In this regard, the communication interface 214 may include, for example, an antenna and supporting hardware and/or software for enabling communications with a wireless communication network. In fixed environments, the communication interface 214 may alternatively or also support wired communication. As such, the communication interface 214 may include a communication modem and/or other hardware/software for supporting communication via cable, DSL, universal serial bus (USB) or other mechanisms.
The user interface 212 may be in communication with the processor 210 to receive an indication of a user input at the user interface 212 and/or to provide an audible, visual, mechanical or other output to the user. As such, the user interface 212 may include, for example, a keyboard, a mouse, a joystick, a touch screen, a display, a microphone, a speaker, or other input/output mechanisms. In an example embodiment in which the apparatus 200 is embodied as a server or some other network device, the user interface 212 may be limited or eliminated.
In an example embodiment, the processor 210 may be in communication with or be embodied as, include or otherwise control a gaze tracker 218, a gaze modifier 220, a goal manager 222 and a visualization driver 224. The gaze tracker 218, the gaze modifier 220, the goal manager 222 and the visualization driver 224 may each be any means such as a device or circuitry embodied in hardware, software or a combination of hardware and software that is configured to perform the corresponding functions of the gaze tracker 218, the gaze modifier 220, the goal manager 222 and the visualization driver 224, respectively, as described below.
The gaze tracker 218 may be a device or module configured to determine and/or track the gaze of a user of the corresponding communication device. Thus, for example, the gaze tracker 218 may be configured to determine where the user is looking (e.g., where the user's gaze is focused) either instantaneously or over time. The gaze tracker 218 may use any suitable mechanism for gaze tracking including, but not limited to, eye tracking, head tracking, face detection, face tracking, mechanisms for detecting lips, smiles or other facial features, and/or the like. An output of the gaze tracker 218 may be gaze information indicative of a continuous or periodic record of the location of the user's gaze. The gaze tracker 218 may operate in real-time or substantially real-time (e.g., with little delay between determining the gaze information and communicating such information to another device or element) or may buffer or otherwise record the gaze information (e.g., using the memory device 216) for future use or processing. The gaze information could be a stream of data, a plurality of temporal and spatial segments of data, and/or an information file including data descriptive of the user's gaze.
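As an informal illustration of the kinds of gaze information described above, the following sketch defines hypothetical record types for individual gaze samples and for temporal segments of samples; the field names and the normalized coordinate convention are assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class GazeSample:
    """One hypothetical gaze observation: where the user looked, and when."""
    t: float                 # timestamp in seconds
    x: float                 # normalized horizontal gaze coordinate (0..1)
    y: float                 # normalized vertical gaze coordinate (0..1)
    confidence: float = 1.0  # tracker confidence in this sample

@dataclass
class GazeSegment:
    """A temporal run of samples, optionally carrying descriptive metadata."""
    samples: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

# Buffering samples for later processing, as an alternative to streaming.
segment = GazeSegment(metadata={"source": "gaze_tracker_218"})
segment.samples.append(GazeSample(t=0.00, x=0.41, y=0.55))
segment.samples.append(GazeSample(t=0.04, x=0.42, y=0.54, confidence=0.9))
print(len(segment.samples), "samples buffered")
```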
In some situations, a camera or other device used for gaze tracking may also be utilized for gathering context related information or information descriptive of a workspace environment. As an example, a face detection and tracking algorithm may provide a measure of apparent interest and/or engagement of a particular participant in a communication session. Relative attractiveness of participants could also be determined. In other embodiments, a camera associated with the gaze tracker 218 may also provide video content of a workspace (e.g., the surroundings of a user of the apparatus 200), of one or more objects, and/or of a virtual environment. However, another camera or source for the video content may alternatively be provided.
In an example embodiment, the gaze information determined by the gaze tracker 218 may be provided to the gaze modifier 220 for modification and/or synthesis. The gaze modifier 220 may be configured to make modifications or transformations to the gaze information in order to provide an enhanced or synthesized output (e.g., modified gaze information) based on the gaze information provided. The modifications provided by the gaze modifier 220 may be made on the basis of goals or other criteria specified, for example, via the goal manager 222 as described in greater detail below. In various examples, the modifications may generally include, but are not limited to, smoothing of discontinuities in the gaze information, the provision of additional information appended to or inserted within the gaze information (e.g., markup information or metadata), the addition of another participant's gaze information, the addition of generic gaze information (e.g., stored in a database), the addition of video or voice data from other participants, the inclusion of cursor and/or selection information, zoom information, the transformation or synthesis of gaze information based on provided goals, and/or the like.
In an example embodiment, some specific examples of modifications that the gaze modifier 220 may be configured to perform may include snapping the gaze to a particular object within a workspace (e.g., under certain trigger conditions that may be specified by the goal manager 222), using a moving average or other technique to smooth out gaze information over time, altering an uncertainty area of the gaze information based on various factors, repeating gaze information, removing or replacing gaze information, and/or the like. Other examples of modifications may include synthesis of the gaze information with other data and/or synthesis of gaze information of one or more participants in a CMC session. Thus, for example, instead of sharing segments of actual sensed gaze information or filling in for a lack of gaze information from one or more participants, the gaze modifier 220 may take data corresponding to gaze information from multiple sources (e.g., including current participants and/or generic gaze information) and merge the data together to provide composite gaze information. In some cases, generic gaze information could be combined with a speaker's gaze information delayed by a random window. In another embodiment, gaze information may be synthesized from cursor position and generic gaze information. Combinations of any or all of the above examples may also be performed.
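The following sketch illustrates, under assumed data shapes, two of the specific modifications mentioned above: moving-average smoothing of gaze points over time and snapping a gaze point to a nearby workspace object. The window size and distance threshold are arbitrary assumptions made for this example.

```python
def smooth_gaze(points, window=5):
    """Moving-average smoothing of (x, y) gaze points over time."""
    smoothed = []
    for i in range(len(points)):
        recent = points[max(0, i - window + 1): i + 1]
        avg_x = sum(p[0] for p in recent) / len(recent)
        avg_y = sum(p[1] for p in recent) / len(recent)
        smoothed.append((avg_x, avg_y))
    return smoothed

def snap_to_object(point, object_centers, max_dist=0.05):
    """Snap a gaze point to the nearest workspace object center within
    max_dist; otherwise return the point unchanged."""
    best, best_d = point, max_dist
    for cx, cy in object_centers:
        d = ((point[0] - cx) ** 2 + (point[1] - cy) ** 2) ** 0.5
        if d < best_d:
            best, best_d = (cx, cy), d
    return best

raw = [(0.40, 0.50), (0.44, 0.49), (0.90, 0.10), (0.43, 0.51)]
icons = [(0.42, 0.50)]  # hypothetical object center in the workspace
print([snap_to_object(p, icons) for p in smooth_gaze(raw, window=3)])
```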
In an example embodiment, the gaze modifier 220 may also be configured to provide an indication to a user of the apparatus 200 of the modifications being provided (or to be provided) with respect to the user's first person gaze information and/or with respect to gaze information of another party. In certain circumstances (e.g., for security or privacy reasons), one or more parties may not be provided with such information or may not be enabled to perform one or more different types of modification. Accordingly, distributed computations and transformations may be enabled. As an example, a client device (e.g., one of the first and second communication devices 110 and 120) may have access to context and/or task information that is not made available to the service platform 140 and/or another device involved in communication with the client device due to security requirements associated with the client device. This may enable a device to indicate modifications that should be applied to some (potentially already modified) gaze information by other devices or servers.
As an example of additional information that may be provided to describe modifications, a segment of gaze information may be identified as co-occurring with some other event, a representation of which may or may not be available elsewhere. As another alternative, a segment may be associated with arbitrary information, such as a spatial segment being associated with task-specific information designating the stage in a canonical or otherwise expected workflow corresponding to an area covered by the segment. As yet another example, a segment may be associated with other segments of gaze information. In this regard, semantic relationships that may be encoded include, for example, indicating that one segment is a modified version of another segment (and the converse) or indicating that two segments share a common origin (e.g., that the two segments are separate modifications of the same original gaze information).
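Purely as a sketch of how such descriptive information might be attached, the following hypothetical structures associate gaze segments with a co-occurring event, a workflow stage, and semantic relationships to other segments; the key names are invented for this example.

```python
# Hypothetical markup (key names invented for this sketch) associating gaze
# segments with a co-occurring event, a workflow stage, and one another.
segment_a = {
    "id": "seg-a",
    "span_s": (12.0, 14.5),            # start/end time within the session
    "co_occurs_with": "utterance-7",   # some other recorded event
    "task_stage": "review",            # stage in an expected workflow
}
segment_b = {
    "id": "seg-b",
    "span_s": (12.0, 14.5),
    # seg-b is a modified version of seg-a (the converse could also be noted).
    "relations": [{"type": "modified_version_of", "target": "seg-a"}],
}
segment_c = {
    "id": "seg-c",
    # seg-c and seg-b share a common origin: both are separate
    # modifications of the same original gaze information (seg-a).
    "relations": [{"type": "common_origin", "target": "seg-b"}],
}
print(segment_b["relations"][0]["type"], "->", segment_b["relations"][0]["target"])
```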
In some embodiments, a listing of the modifications being applied to either incoming or outgoing gaze information may be provided at any of various levels of abstraction. For example, since some low level modifications may be switched between being on and off with some regularity based on context or other events or information, it may be useful to enable the provision of a relatively more stable list of modifications at a higher level of abstraction. In some cases, the user may (e.g., via the user interface 212) specify that certain modifications be enabled or disabled via a visualization of the listing of modifications (e.g., by an on/off toggle mechanism relative to the items listed).
The goal manager 222 may be configured to provide criteria such as goals, instructions, rules, preferences, and/or the like, with respect to modifications to be made by the gaze modifier 220. In some cases, the criteria may be provided by the user of the apparatus 200 (e.g., via the user interface 212). However, the criteria may alternatively be provided by other parties (e.g., other participants in a CMC session), or may be predefined criteria. As such, in some situations, the goal manager 222 may serve as an interface between the user and the gaze modifier 220 with respect to directing modifications to be made to gaze information.
In an example embodiment, the goal manager 222 may store (e.g., via the memory device 216), access or apply criteria for gaze information modification including, for example, general goals, relationship-specific goals, task or role specific goals, user specified goals, information about shared or private workspaces, contextual and/or dynamic personal information, information used for a particular modification, and/or the like. In some cases, the goal manager 222 may develop goals based on information accessible to the goal manager 222 (e.g., relationship information, task based roles, past behavior, context information, etc.).
General goals may include criteria that apply universally to all communications. In some cases, general goals may only be applied when they do not conflict with other goals. However, the goal manager 222 may also include rules for de-conflicting the application of modification goals (e.g., on the basis of a hierarchy amongst the criteria). As an example, a general goal may include a preference for directing the listener's gaze to be co-located with the speaker's gaze at the end of an utterance if the listener's gaze dwelled in the same location for some time.
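One way to de-conflict goals on the basis of a hierarchy, as mentioned above, is sketched below; the priority ordering and the (kind, aspect, setting) representation are assumptions made for illustration only.

```python
# Hypothetical de-confliction: each goal carries a kind whose priority
# decides which goal wins when two goals target the same aspect of the
# gaze information; general goals apply only when nothing outranks them.
PRIORITY = {"user_specified": 3, "task_or_role": 2, "relationship": 1, "general": 0}

def resolve_goals(goals):
    """goals: iterable of (kind, aspect, setting); one setting per aspect wins."""
    chosen = {}
    for kind, aspect, setting in goals:
        rank = PRIORITY[kind]
        if aspect not in chosen or rank > chosen[aspect][0]:
            chosen[aspect] = (rank, setting)
    return {aspect: setting for aspect, (rank, setting) in chosen.items()}

goals = [
    ("general", "end_of_utterance", "co_locate_listener_gaze"),
    ("task_or_role", "end_of_utterance", "hold_listener_gaze"),
]
print(resolve_goals(goals))  # the task/role goal outranks the general one
```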
Relationship-specific goals may include preferences that are dependent upon the relationships of the participants (e.g., senior/subordinate, peer/peer, family, and/or the like). As such, for various different relationships between participants, corresponding different modifications may be made based on the goals corresponding to the relationship of the participants in each respective case. The relationship between the participants may be determined based on explicit social network information (e.g., from a social network service (SNS) or organizational database), information specified via a user entry defining the relationship between the participants, and/or statistical comparisons of behaviors attributable to participants during past and current interactions (some of which may be interactions with other individuals with known relationships).
Task or role specific goals may include preferences with respect to a particular type or class of task and/or the roles that specific participants have in a given task (which could be independent of the relationship between the participants). Task or role specific goals may be determined from information provided by an application or applications supporting a task. For example, structured descriptions of a current task and/or supporting tasks or information about an object in a shared workspace may be utilized for determining task or role specific goals. Task or role specific goals may also be determined and refined based on supervised or semi-supervised machine learning where known values of past outcomes (e.g., from questionnaires, post-task gaze and sensor data) may be used. In some cases, information provided by various applications may be used in learning supervision.
Specified goals may include specific rules provided by the user at various levels of abstraction. For example, a first user may specify a preference with respect to a relationship between the first user's gaze and the gaze of another user. Meanwhile, a novice user may simply provide an adjective having corresponding specified rules for gaze modification based on the adjective. For example, designations such as “formal” or “informal” may include corresponding rules that direct the provided gaze information to be modified in a corresponding specific way. The rules may be specified by the user for each respective adjective, or the rules may be predefined for a set of pre-selected adjectives. In some instances, a node-based and/or patch-based visual programming user interface may be employed to support the implementation of specified goal provision.
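A minimal sketch of adjective-driven specified goals is shown below; the rule names and values are hypothetical and stand in for whatever predefined rules a deployment might attach to each adjective.

```python
# Hypothetical mapping from a user-supplied adjective to predefined
# modification rules; the rule names and values are invented here.
STYLE_RULES = {
    "formal": {"smoothing_window": 9, "snap_to_objects": True, "share_raw": False},
    "informal": {"smoothing_window": 3, "snap_to_objects": False, "share_raw": True},
}

def rules_for(adjective, default="formal"):
    return STYLE_RULES.get(adjective.lower(), STYLE_RULES[default])

print(rules_for("Informal"))
```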
Information regarding shared and private workspaces may include goals that are “filled in” or instantiated by information about the workspace. As an example, such a goal may include a preference for snapping the gaze of one or more participants to the same object in the workspace under certain conditions and/or for snapping participant gazes to an object to which the speaker's gaze otherwise snaps. Information about the shared workspace, including information about the general environment (e.g., via analysis of a windowing system) and/or information about running applications, may be used to identify objects and the visual area consumed by the objects.
Contextual and/or dynamic personal information may include information about the current situation of one or more of the participants. Such information may also include information related to a participant's personal state or mood. Context and personal information may be gathered from sensors, activity records, time/date and other temporal criteria. In some situations context and/or personal information may also be gathered based on applications open and/or activity related to such applications. As an example of a modification that may be made on the basis of such information, if a particular user is determined to be drowsy, a mask may be applied (e.g., to an avatar or other likeness of the respective user) in order to indicate that the user is drowsy. Alternatively, a mask could be applied to indicate that the person is not drowsy if such a modification is desired. The processor 210 and/or the goal manager 222 may be configured to make context determinations in various example embodiments.
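As an illustrative sketch of a context-driven modification of this kind, the following hypothetical rule selects a mask based on an assumed drowsiness signal; both the blink-duration signal and the threshold are invented for the example.

```python
# Hypothetical context rule: an assumed blink-duration signal stands in for
# drowsiness detection; the 0.4 s threshold is an arbitrary illustration.
def select_avatar_mask(mean_blink_s, hide_drowsiness=False):
    drowsy = mean_blink_s > 0.4
    if drowsy:
        return "alert_mask" if hide_drowsiness else "drowsy_mask"
    return None

print(select_avatar_mask(0.55))                        # -> drowsy_mask
print(select_avatar_mask(0.55, hide_drowsiness=True))  # -> alert_mask
```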
Upon receipt of any of the above described goals and/or other preference or goal related information for influencing modification of gaze information from the goal manager 222, the gaze modifier 220 may make corresponding modifications. As indicated above, the gaze modifier 220 may make modifications to gaze information for the user of the apparatus 200 or may provide additional information with the gaze information so that the additional information may be used by an instance of the gaze modifier either at the service platform 140 or at another apparatus (e.g., another communication device involved in a CMC session). The gaze modifier 220 may also extract additional information provided along with gaze information provided from the service platform 140 or another device and generate modifications based on the extracted additional information. A special markup or shared language may be used for defining modifications to be shared between devices in this manner. In this regard, the special markup or shared language may include, depend on or otherwise account for shared workspaces or virtual environments.
The visualization driver 224 may be configured to drive a display device to display gaze information and/or modified gaze information in accordance with embodiments of the present invention. In some embodiments, the visualization driver 224 may not be included when the apparatus 200 is embodied at the service platform 140. However, when included at a respective device, the visualization driver 224 may provide for a display of gaze information as indicated from the gaze modifier 220. Thus, for example, the gaze modifier 220 may provide information indicative of the modified gaze information for display by the visualization driver 224, in which the information provided may be indicative of first person gaze information for the user of the apparatus 200 and/or gaze information for one or more other communication session participants. The visualization driver 224 may, in some cases, provide the gaze information (or modified gaze information) relative to video content showing a common workspace, object(s) or virtual environment. Thus, for example, the video content itself, once altered to reflect the gaze information, may be considered modified video content. In some situations, modification of the video content may include modifying face orientation, eye orientation or the orientation of other features.
In some cases, the visualization driver 224 may provide for visualization of data provided from a gaze modifier 220 instantiated in another device. For example, raw gaze information (or modified gaze information) may be provided from the first communication device 110 to an instance of the gaze modifier 220 at the service platform 140. The gaze modifier 220 may make modifications to the raw gaze information and provide the modified gaze information to one or more instances of the visualization driver 224 at respective ones of the first communication device 110 and the second communication device 120. Alternatively, either or both of the first and second communication devices 110 and 120 may have instances of the gaze modifier 220 and/or the visualization driver 224 and modified gaze information may be exchanged therebetween with or without involvement from any service platform 140. Thus, visualization may be symmetrical or asymmetrical.
In an example embodiment, the users of different communication devices may be enabled to turn their gaze information and/or modifications thereto on or off. Thus, for example, a user may decide to suspend the sharing of gaze information and/or suspend the sharing of information regarding the modification of gaze information with other users. Moreover, since each user may be given control over gaze information modification at their own respective devices, one user's visualized gaze information may be different from another user's visualized gaze information even though both users have shared the same information with each other.
Some visualization methods may be more useful for gaze information modified or synthesized in particular ways. For example, a visualization that aggregates gaze information over time may be appropriate for gaze information that has been filtered to remove gaze directed toward objects for which disclosing that they have been a significant object of gaze during the conversation would be undesirable. For instance, removing gaze information directed at an advertisement unrelated to the task, or at an open window displaying a participant's personal email, may be desirable before applying such a visualization, which would otherwise make ignorable gaze salient. Visualization methods may also have explicitly specified relationships with corresponding synthesis and modification methods. For example, an indication may be provided for a particular visualization method as to whether the particular visualization method may be meaningfully applied to a particular modification or is a preferred mechanism for visualizing the particular modification. In some cases, the user (e.g., via the user interface 212) may be enabled to choose or assign relationships among various visualization methods.
Some example visualization methods that may be employed by the visualization driver 224 for modified (e.g., transformed, enhanced, and/or synthesized) gaze information may include visualizing gaze information aggregated for multiple participants over time (e.g., over a moving window of time), and visualizing gaze information in a non-explicit way (e.g., gaze information may be visualized as a zoom level on the workspace area around the cursor so that, for example, if two people gaze at the same object or area the visualization is zoomed in and if they look in different areas or at different objects the visualization is zoomed out). Another visualization method may include visualizing real-time (or near real-time) gaze information for a participant as a highlighted area on a shared workspace. However, according to embodiments of the present invention, the gaze information may be modified gaze information as described above. A visualization of real-time gaze information may, rather than merely showing a single point, use a visualization (e.g., an indication of a location of the gaze of one or more users) over a larger visual area occupied by some object in the shared workspace (e.g., highlighting the entire icon or table row indicated by the gaze information). This type of visualization may be considered appropriate for such modifications to gaze information as temporal smoothing, gaze area scale modification, and object snapping, as described above. In some instances, gaze information may be selectively and differentially visualized according to verbal and/or non-verbal communication of the participants. In this regard, gaze information may be shown for only the currently speaking participant, for example, to focus attention on the speaker, especially in larger groups. Additionally, other participants' gaze may be shown with a different visualization. As an example, gaze information may be visualized as an aggregated gaze for all other participants. Alternatively, gaze information may be shown only when the speaker and several other participants gaze at the same object.
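The zoom-level visualization described parenthetically above can be sketched as follows, assuming normalized workspace coordinates; the linear mapping from gaze spread to zoom factor is a simple choice made for illustration.

```python
# Hypothetical non-explicit visualization: zoom in as participants' gaze
# points converge and zoom out as they diverge (normalized coordinates).
def zoom_level(gaze_points, min_zoom=1.0, max_zoom=4.0):
    """gaze_points: one (x, y) per participant; returns a zoom factor."""
    xs = [p[0] for p in gaze_points]
    ys = [p[1] for p in gaze_points]
    spread = max(max(xs) - min(xs), max(ys) - min(ys))
    spread = min(max(spread, 0.0), 1.0)  # clamp to the unit workspace
    return max_zoom - spread * (max_zoom - min_zoom)

print(zoom_level([(0.50, 0.50), (0.52, 0.49)]))  # co-located -> zoomed in
print(zoom_level([(0.10, 0.10), (0.90, 0.90)]))  # far apart -> zoomed out
```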
In an example embodiment, the visualization driver 224 may be configured to provide more than one visualization of the gaze information for a single person. In this regard, for example, multiple different visualization methods may be applied to the same information or the same (or different) visualization method may be applied to different versions of gaze information for the person. As an example, a first user's client device may visualize both: (1) the user's smoothed, but otherwise unmodified, gaze as a dot with a temporal “tail” and (2) a modified version of the user's gaze exactly as visualized on another user's client device.
The visualization driver 224 may also be configured to manifest gaze information in non-visual ways. In this regard, for example, the visualization driver 224 may be configured to manifest gaze information with sound. As such, gaze information may be used to modify the voices of participants, as heard by themselves and/or others. Spatialization of voices alone may provide a rich output. Beyond a basic association of voice position with position in a virtual environment, which may not apply to many shared workspaces, spatialization may be applied based on role. In some instances, spatialization may be used for strategic benefit in negotiations.
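As a rough sketch of voice spatialization, the following maps a horizontal position (which could be assigned by role or derived from gaze) to stereo channel gains using a constant-power pan law; the position convention is an assumption made for this example.

```python
import math

def stereo_gains(x, width=1.0):
    """Map a horizontal position x in [0, width] to (left, right) gains
    using a constant-power pan law."""
    pos = min(max(x / width, 0.0), 1.0)
    angle = pos * (math.pi / 2)
    return (math.cos(angle), math.sin(angle))

# Example: a "presenter" pinned to the center, a "reviewer" placed right.
print(stereo_gains(0.5))   # roughly equal left/right gains
print(stereo_gains(0.9))   # mostly right channel
```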
As indicated above,
Specifically, P1 may record video content (e.g., using a camera or video recorder that may be a portion of an instance of the gaze tracker 218 embodied at the first communication device 110) at operation 300. The video content may then be communicated to the service platform 140 at operation 302. Context information may be gathered or determined at the first communication device 110 and also communicated to the service platform 140 at operation 304. Context information for P2 may likewise be gathered or determined at the second communication device 120 and communicated to the service platform 140 at operation 306. Although not necessary, the service platform 140 may store (or buffer) the context information for P1 and P2 at operation 308. At operation 310, the service platform 140 may determine gaze information for P1 from the video content provided for P1. The determination of gaze information may be made, for example, by an instance of the gaze tracker 218 (or the processor 210) embodied at the service platform 140. The video content from P1 may then be modified based on the context information from P1 and P2 and/or based on the gaze information for P1 at operation 312. The modification may be performed by an instance of the gaze modifier 220, and may be made based on rules or criteria applied by an instance of the goal manager 222. At operations 314 and 316, the modified video content may be communicated to the first and second communication devices 110 and 120, respectively. At operations 318 and 320, the modified video content may be displayed at the first and second communication devices 110 and 120, respectively. The display of the modified video content may be accomplished via instances of the visualization driver 224 at each of the first and second communication devices 110 and 120.
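The server-side portion of this flow is sketched below with stand-in data; the helper names are hypothetical, and the comments map each step to the operations recited above.

```python
# Hypothetical server-side orchestration of the flow described above: the
# service platform derives P1's gaze from P1's video, modifies the video
# using both parties' context, and returns the result for both devices.
def service_platform_flow(video_p1, context_p1, context_p2):
    gaze_p1 = derive_gaze(video_p1)                     # operation 310
    modified = apply_context(video_p1, gaze_p1,
                             [context_p1, context_p2])  # operation 312
    return {"P1": modified, "P2": modified}             # operations 314/316

def derive_gaze(video):
    return [(0.5, 0.5)] * len(video)  # stand-in gaze point per frame

def apply_context(video, gaze, contexts):
    note = "+".join(c["mood"] for c in contexts)
    return [f"{frame}|gaze={g}|{note}" for frame, g in zip(video, gaze)]

result = service_platform_flow(["f0", "f1"],
                               {"mood": "focused"}, {"mood": "curious"})
print(result["P2"])
```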
In this regard, video for P1 may be recorded at operation 400. A determination may then be made at the first communication device 110 regarding gaze information for P1 (e.g., via an instance of the gaze tracker 218) at operation 402. The gaze information may be recorded or saved at the first communication device 110 as well. At operation 404, the first communication device 110 may communicate the video content for P1 to the service platform 140. Context information for P1 may also be determined and communicated at operation 406 and gaze information may be communicated at operation 408. Context information may also be provided for P2 from the second communication device 120 at operation 410. The service platform 140 may then modify the gaze information based on the context information from P1 and/or P2 at operation 412. The modified gaze information may then be used to modify the video content at operation 414.
Following modification of the gaze information and/or video content as described above, in one alternative distribution scenario shown by region A in
In another alternative distribution scenario shown in region B of
At the second communication device 120, the gaze information for P1 may be modified based on context information for P2 at operation 514. At operation 516, the modified video content may then be further modified based on the gaze information modified from operation 514. The modified video content may then be displayed at the second communication device at operation 518.
In an example embodiment, the modified video content may then be communicated back to the first communication device 110 at operation 520. In the example shown in region A, the first communication device 110 may further modify the video content based on the modified gaze information from operation 514 (e.g., applying the criteria set by its own instance of the goal manager 222) at operation 522 and then display the modified video content at operation 524. However, as an alternative, the modified video content from operation 520 may be displayed at the first communication device 110 at operation 526 without further modification, as shown in region B.
Embodiments of the present invention may provide for focus on the objects of discussion and may avoid any necessity for participants to resolve third-person gaze to the object-of-gaze. Embodiments may also increase empathy by fusing participants' fields of view. Embodiments may provide for a flexible division of labor for transmission, modification, and synthesis, which may enable effective employment with heterogeneous devices, networks, and trust relationships. Embodiments may also support tracking the provenance of non-original gaze information through a language for gaze modification and synthesis, and may support reference in modification and synthesis instructions to resources, objects, and areas in the shared workspace or environment.
Accordingly, blocks or steps of the flowchart support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that one or more blocks or steps of the flowchart, and combinations of blocks or steps in the flowcharts, may be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
In this regard, one embodiment of a method for providing gaze information as provided in
In an example embodiment, determining the gaze information may include utilizing analysis of a portion of the individual's face to determine the location of the gaze. Modifying the gaze information may include modifying the gaze information based on context information associated with the individual or a different individual. In some cases, modifying the gaze information may include modifying the gaze information based on modification criteria including a role of a participant in a communication session, a task assigned to the participant, an environment of the participant, an object in view of the participant, relationships between participants, general rules, participant specified rules, personal information associated with a participant, and/or the like. In an example embodiment, modifying the gaze information may include synthesizing gaze information associated with the determined gaze information and other gaze information or synthesizing gaze information associated with gaze information from multiple different individuals. In some situations modifying the gaze information may include applying a transformation to the gaze information in which instructions for the transformation are received as a portion of the gaze information.
In an example embodiment, modifying the content may include providing data for visual display indicating the modified gaze information relative to the content. In some situations, providing for visualization of the modified content may include delivering the modified content to a terminal associated with the individual or another individual.
In an example embodiment, an apparatus for performing the method above may include a processor (e.g., the processor 210) configured to perform each of the operations (600-640) described above. The processor may, for example, be configured to perform the operations by executing stored instructions or an algorithm for performing each of the operations. Alternatively, the apparatus may include means for performing each of the operations described above. In this regard, according to an example embodiment, examples of means for performing operations 600 to 640 may include, for example, the gaze tracker 218, the gaze modifier 220, the goal manager 222, the visualization driver 224, and/or the processor 210.
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.