The subject matter described herein relates to systems, methods, and graphical user interfaces for defining and negotiating agreements using collaborative communications such as messaging, e-mail, and web-conferencing and by using contextual information relating to participating users.
Collaboration technologies are typically designed and engineered in a generic way, without the context or the desired outcome of a collaboration in mind. For example, a chat service, while enabling communication between two or more people, has no context as to why certain users have connected and exchanged information. As a result, any outcomes of communications over such services must be manually transferred to a separate application or module. Moreover, because such a system does not understand the framing context of these communications, it is unable to provide contextual information facilitating the connection.
In one aspect, an agreement object comprising a plurality of terms for negotiation among two or more users is concurrently presented with an unstructured conversation among at least two users. Thereafter, data characterizing terms for which an agreement has been reached is received, resulting in the agreement object being updated to reflect those terms. An agreement based on the agreement object can then be finalized after an agreement has been reached for each of the plurality of terms.
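By way of a non-limiting sketch only, such an agreement object might be represented as follows; the TypeScript type and function names (Term, AgreementObject, markAgreed, canFinalize) are illustrative assumptions rather than required elements of the current subject matter.

interface Term {
  name: string;            // e.g., "delivery date"
  category?: string;       // optional grouping of terms into categories
  value?: string;          // currently proposed value, if any
  agreed: boolean;         // true once the parties have accepted this term
}

interface AgreementObject {
  id: string;
  participants: string[];  // identifiers of the negotiating users
  terms: Term[];
}

// Update the agreement object to reflect a term for which agreement was reached.
function markAgreed(obj: AgreementObject, termName: string, value: string): AgreementObject {
  return {
    ...obj,
    terms: obj.terms.map(t => (t.name === termName ? { ...t, value, agreed: true } : t)),
  };
}

// The agreement can be finalized once every term has been agreed.
function canFinalize(obj: AgreementObject): boolean {
  return obj.terms.every(t => t.agreed);
}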
The presented agreement object can comprise a plurality of graphical user interface elements which, when activated, generate the received data characterizing terms for which an agreement has been reached.
At least a portion of the unstructured conversation can be parsed to associate the unstructured conversation with the agreement object, wherein the agreement object is presented in response to this association. The parsing can use a variety of technologies including, for example, Speech Act Theory, to associate the conversation with the agreement object. In addition or in the alternative, at least a portion of the unstructured conversation can be parsed to characterize terms for which an agreement has been reached. This parsing can use a variety of technologies including, for example, Speech Act Theory, to characterize the terms for which an agreement has been reached.
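The following non-limiting sketch illustrates one simple way an utterance might be tagged in the spirit of Speech Act Theory; the classifySpeechAct function and its keyword rules are assumptions offered for explanation only and do not represent a required implementation of the parsing.

type SpeechAct = 'propose' | 'accept' | 'reject' | 'other';

// Naive keyword-based tagging of an utterance; a production system could use
// a trained classifier or a fuller Speech Act Theory model instead.
function classifySpeechAct(utterance: string): SpeechAct {
  const text = utterance.toLowerCase();
  if (/\b(i propose|how about|could we|can we)\b/.test(text)) return 'propose';
  if (/\b(agreed|deal|accept|sounds good)\b/.test(text)) return 'accept';
  if (/\b(reject|cannot|too late|no)\b/.test(text)) return 'reject';
  return 'other';
}

Utterances tagged as acceptances can then be associated with the pending term to which they respond, allowing the agreement object to be updated accordingly.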
The agreement object can be one of a plurality of agreement templates made available to a user via a graphical user interface for selection by the user. The plurality of agreement templates made available to the user can be based on contextual information, such as agreement templates historically used by the user. The user can have a pre-defined role such that the agreement templates made available to the user comprise agreement templates associated with that role. Similarly, the user can have a pre-defined access level such that the agreement templates made available to the user comprise agreement templates associated with that access level.
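As one hedged illustration of such template filtering, the availableTemplates function below restricts templates by role and access level and ranks historically used templates first; its name, the AgreementTemplate and TemplateUser shapes, and the ranking heuristic are all assumptions for purposes of the sketch.

interface AgreementTemplate {
  name: string;
  roles: string[];             // pre-defined roles permitted to use the template
  minAccessLevel: number;      // minimum pre-defined access level required
}

interface TemplateUser {
  id: string;
  role: string;
  accessLevel: number;
  historicallyUsed: string[];  // contextual information: templates used before
}

// Offer only templates matching the user's role and access level, ranked so
// that historically used templates appear first.
function availableTemplates(user: TemplateUser, all: AgreementTemplate[]): AgreementTemplate[] {
  return all
    .filter(t => t.roles.includes(user.role) && user.accessLevel >= t.minAccessLevel)
    .sort((a, b) =>
      Number(user.historicallyUsed.includes(b.name)) -
      Number(user.historicallyUsed.includes(a.name)));
}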
The unstructured conversation can comprise one or more of: messaging, e-mail communications, videoconferencing, and web conferencing.
Finalizing the agreement can comprise storing data characterizing values for each of the terms in a repository, displaying data characterizing values for each of the terms in a repository, and/or transmitting the agreement to at least one entity for approval.
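One possible finalization routine, reusing the earlier agreement object sketch and using placeholder repository and approver interfaces that are assumptions rather than prescribed components, is as follows.

// Persist the agreed values and, where required, transmit the agreement to an
// approving entity.
async function finalizeAgreement(
  obj: AgreementObject,
  repository: { save(a: AgreementObject): Promise<void> },
  approver?: { submit(a: AgreementObject): Promise<void> },
): Promise<void> {
  if (!canFinalize(obj)) throw new Error('Not every term has been agreed.');
  await repository.save(obj);                 // store values for each term
  if (approver) await approver.submit(obj);   // transmit for approval
}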
One or more additional users can be added to the conversation to seek approval of at least one of the terms or to obtain input regarding at least one of the terms. The graphical user interface can comprise at least one contact graphical user interface element which, when activated, adds at least one additional user to the conversation. The graphical user interface can also comprise at least one information graphical user interface element which, when activated, concurrently displays additional information associated with one or more of the users and/or the agreement object. In addition, there can be a plurality of categories of terms, with each category having a corresponding category graphical user interface element which, when activated, causes the associated terms to be displayed in the agreement object.
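Purely for illustration, handlers for the contact and category elements described above might operate on the earlier agreement object sketch as follows; the handler names are assumptions.

// Adding a further participant when a contact element is activated.
function onContactElementActivated(obj: AgreementObject, newUserId: string): AgreementObject {
  return { ...obj, participants: [...obj.participants, newUserId] };
}

// Showing only the terms of an activated category element in the agreement object.
function termsForCategory(obj: AgreementObject, category: string): Term[] {
  return obj.terms.filter(t => t.category === category);
}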
In another aspect, an unstructured electronic conversation between two or more users is parsed, using a speech recognition algorithm, to identify an agreement object. The agreement object comprises a plurality of terms for negotiation among two or more of the users. Thereafter, user-generated input is received via a graphical user interface from at least one of the users defining a value for at least one of the plurality of terms. An agreement is then generated based on the user-generated input and the agreement object, and the agreement is persisted.
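A minimal, assumed sketch of such identification is shown below: a transcript (presumed to have been produced by a speech recognition component) is matched against keyword cues for candidate agreement templates; both the cue lists and the template names are illustrative only.

// Keyword cues mapping a transcript to a candidate agreement object/template.
const TEMPLATE_CUES: Record<string, string[]> = {
  'leave-request': ['vacation', 'time off', 'leave'],
  'trip-request': ['travel', 'flight', 'trip'],
  'staffing-commitment': ['allocate', 'assign', 'resource'],
};

function identifyTemplate(transcript: string): string | undefined {
  const text = transcript.toLowerCase();
  return Object.keys(TEMPLATE_CUES)
    .find(name => TEMPLATE_CUES[name].some(cue => text.includes(cue)));
}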
In a further aspect, a graphical user interface is rendered that concurrently displays a conversations panel and an agreement object. The conversations panel displays communications between two or more users. The agreement object specifies a plurality of terms forming part of an agreement and comprises a plurality of graphical user interface elements associated with the plurality of terms which, when activated, cause values associated with the terms to change. User-generated input is received via the graphical user interface from at least one of the users activating at least one of the graphical user interface elements and changing at least one value. An agreement is then generated based on this input and the agreement object.
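The following sketch, which reuses the earlier agreement object types, models this concurrent display and the effect of activating a term element; the NegotiationView shape and the handler name are assumptions.

// Side-by-side model of the rendered interface: a conversations panel and the
// agreement object with its interactive term elements.
interface NegotiationView {
  conversationPanel: { messages: { from: string; text: string }[] };
  agreementObject: AgreementObject;
}

// Activating a term's interface element changes the associated value; once all
// terms are agreed, an agreement can be generated from the object.
function onTermElementActivated(view: NegotiationView, termName: string, newValue: string): NegotiationView {
  const updated = markAgreed(view.agreementObject, termName, newValue);
  return { ...view, agreementObject: updated };
}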
In still a further aspect, an agreement object is instantiated that comprises a plurality of terms for negotiation among two or more users. The agreement object can be instantiated and/or initial values for terms can be populated based on contextual information associated with at least one user. Thereafter, data characterizing terms for which an agreement has been reached can be received (by, for example, parsing an unstructured conversation among the two or more users, etc.). This data results in the agreement object being updated to reflect the terms for which an agreement has been reached. Subsequently, an agreement based on the agreement object can be finalized when agreement has been reached for each of the plurality of terms.
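One assumed way to instantiate and pre-populate such an agreement object from contextual information, reusing the earlier types, is sketched below; the UserContext shape, the default values, and the identifier scheme are illustrative.

interface UserContext {
  userId: string;
  counterpartId: string;
  defaults: Record<string, string>;   // e.g., cost center, project, manager
}

function instantiateAgreementObject(templateName: string, termNames: string[], ctx: UserContext): AgreementObject {
  return {
    id: `${templateName}-${Date.now()}`,
    participants: [ctx.userId, ctx.counterpartId],
    terms: termNames.map(name => ({
      name,
      value: ctx.defaults[name],      // initial value populated from context, if any
      agreed: false,
    })),
  };
}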
Articles of manufacture are also described that comprise computer executable instructions permanently stored on computer readable media, which, when executed by a computer, cause the computer to perform the operations described herein. Similarly, computer systems are also described that may include a processor and a memory coupled to the processor. The memory may temporarily or permanently store one or more programs that cause the processor to perform one or more of the operations described herein. Methods described herein can be implemented by one or more data processors forming part of a single computing system or distributed among two or more computing systems.
The subject matter described herein provides many advantages. For example, the current subject matter allows for goal- and/or result-oriented communications among individuals. In particular, the current subject matter allows agreements to be defined and confirmed based on unstructured conversations between individuals/entities. In addition, by embedding structured tools/forms into an unstructured conversation, the agreement between two people can be captured informally or formally. Such a tool can serve to record an agreement or service level agreement, an accepted offer, a formal approval, or virtually any consensus about certain conditions. Moreover, by combining both qualities (unstructured ad-hoc conversation and a shared semi-synchronous tool), the tool can help to set the context for a conversation and to capture the outcome of that conversation. Conversely, the conversation capability allows the design of the tool to be reduced to capturing the agreed facts rather than sending different proposals back and forth.
The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims.
Subsequently, in the negotiate stage 220, the initiator 212 interacts with at least one second user 222 via a collaborative communications protocol such as messaging, e-mail, web conferencing, or the like. Through such collaborative communications, the terms of the agreement object 214 are negotiated in an effort to reach a completed agreement 224. In some cases, the negotiation process can involve one or more optional stakeholders 226. Such optional stakeholders can be requested, for example, to give an opinion regarding one or more of the terms, to approve one or more of the terms, and the like. One or more views 232 of the completed agreement can be provided to the various participants (initiator 212, second user 222, optional stakeholders 226, etc.) in the implement stage 230. These views 232 can provide a graphical representation of one or more of the terms of the agreement 224 and can be updated when tasks associated with such terms are completed and/or when terms are subsequently modified. In addition, the agreement 224 can be stored in a data repository 234 and/or transmitted to various stakeholders 236.
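By way of a non-limiting sketch that reuses the earlier agreement object types, the define/negotiate/implement progression can be modeled as a simple state machine; the transition logic shown is an assumption beyond the stages described above.

type Stage = 'define' | 'negotiate' | 'implement';

interface NegotiationWorkflow {
  stage: Stage;
  agreement: AgreementObject;
  stakeholders: string[];   // optional stakeholders consulted during negotiation
}

function advance(wf: NegotiationWorkflow): NegotiationWorkflow {
  switch (wf.stage) {
    case 'define':
      return { ...wf, stage: 'negotiate' };
    case 'negotiate':
      // Move on only once every term has been agreed by the participants.
      return canFinalize(wf.agreement) ? { ...wf, stage: 'implement' } : wf;
    case 'implement':
      return wf;   // terminal: views are rendered and the agreement is stored
  }
}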
As referenced in
Categories of agreement objects include: resource agreements, such as service level agreements, project assignments, staffing commitments, leave requests, and task completion; cost-based agreements, such as trip requests, discounts on goods or services, purchase requests, head count allocation, budget transfers, and sponsorships; goal agreements, such as performance goals, development goals, project goals, and customer engagement goals; authorization agreements, such as requests to work from home, visit customers, attend an event, or post regarding specified topics; and choice/decision agreements, such as whether to hire a candidate, which vendor to select, which course of action to take, or which event date/location to choose.
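For illustration only, these categories can be captured in an enumerated type such as the following sketch.

type AgreementCategory =
  | 'resource'         // service level agreements, staffing, leave requests, ...
  | 'cost'             // trip requests, discounts, purchase requests, ...
  | 'goal'             // performance, development, project, and customer goals
  | 'authorization'    // work from home, customer visits, event attendance, ...
  | 'choice';          // hiring decisions, vendor selection, dates/locations, ...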
The workflow 200 of
Various types of contextual information can be used to identify/select the appropriate agreement object and/or to populate the agreement object 214. More specifically, agreement objects 214 can be derived and/or updated from an unstructured (informal) conversation between two or more parties and/or from any other existing contextual information. In addition, agreement objects 214 can themselves be part of a 1:1 work relationship context comprising, for example, all past and pending agreements between two people and/or the respective roles of the two people (employee/manager, etc.), and such relationship information can form part of the contextual information. Other contextual information, such as the names of the users, contact information, and their relationship, can also be added to the agreement object 214. Similarly, data relating to ongoing tasks and/or topics for a particular user can be used to identify/populate the agreement object 214.
In some implementations, data captured by the tool as the result of the conversations can be linked to an application context such as business logic, analytics, and personal project or task management. Such application context can be used to identify the agreement object (or a plurality of agreement templates) and/or term values for the agreement object 214.
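A hedged sketch of such a 1:1 work relationship context, reusing the earlier agreement object types, is shown below; the record's fields (including the application-context links) and the value-suggestion heuristic are assumptions rather than required elements.

interface RelationshipContext {
  users: [string, string];               // e.g., employee and manager
  roles: Record<string, string>;         // user identifier -> role
  pastAgreements: AgreementObject[];     // all past and pending agreements
  applicationLinks: string[];            // e.g., linked tasks, analytics views
}

// Suggest term values for a new agreement object from previously agreed values.
function suggestValues(ctx: RelationshipContext, termNames: string[]): Record<string, string> {
  const suggestions: Record<string, string> = {};
  for (const name of termNames) {
    const prior = ctx.pastAgreements
      .flatMap(a => a.terms)
      .find(t => t.name === name && t.agreed && t.value !== undefined);
    if (prior?.value) suggestions[name] = prior.value;
  }
  return suggestions;
}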
By having an understanding of the desired outcome of the discussion, the tool can also propose stakeholders relevant to the discussion. This proposal can be based on one or more of the initiator 212, the second user 222, the stakeholders 226, and the agreement object 214. For example, when negotiating the allocation of a people resource, the tool can suggest that the direct manager and the second-level manager of that resource be included in the discussion and sign off on the conditions described in the tool. The content in the tool becomes the agreement 224 between these stakeholders and can be tracked with respect to fulfillment or revised if conditions change.
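A minimal sketch of such a stakeholder proposal for the people-resource example is shown below; the OrgChart lookup interface is an assumed placeholder rather than a prescribed component.

interface OrgChart {
  managerOf(userId: string): string | undefined;
}

// Propose the resource's direct and second-level managers for inclusion and sign-off.
function proposeStakeholders(resourceUserId: string, org: OrgChart): string[] {
  const proposed: string[] = [];
  const manager = org.managerOf(resourceUserId);
  if (manager) {
    proposed.push(manager);                          // direct manager
    const secondLevel = org.managerOf(manager);
    if (secondLevel) proposed.push(secondLevel);     // second-level manager
  }
  return proposed;
}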
Various implementations of the subject matter described herein may be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the subject matter described herein may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user may provide input to the computer. Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The subject matter described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user may interact with an implementation of the subject matter described herein), or any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Although a few variations have been described in detail above, other modifications are possible. For example, the logic flows depicted in the accompanying figures and described herein do not require the particular order shown, or sequential order, to achieve desirable results. In addition, while many aspects of the current disclosure are directed to the use of a graphical user interface, it will be appreciated that many of the features described herein have utility separate from a graphical user interface. Other embodiments may be within the scope of the following claims.