The exchange of ideas is an important, if not essential, part of any collaborative activity, and the productivity of a team often depends upon it. Many large global corporations regard managing multi-disciplinary collaboration as one of their main challenges. In any large enterprise, collaborators from diverse disciplines generally examine, discuss, and revise hundreds of documents in different formats as part of their daily job functions. This process regularly produces large amounts of data in various forms, including digital annotation and markup, hand-written comments and annotations, emails, memos, chats, voice mail, and images. The data is often fragmented across the enterprise and stored in a wide variety of repositories, and currently no single effective solution for retrieving or managing such data exists.
Many current solutions support only a limited number of formats, and often only a single format. An enterprise collaboration process produces documents in many different formats, such as 2D Computer-Aided Design (CAD), 3D CAD, Electronic Computer-Aided Design (ECAD), textual documents, emails, and images. Collaborators and reviewers use different software solutions that normally do not interoperate with one another.
In a typical large enterprise, documents are routinely stored in different repositories and enterprise back-end systems such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Product Lifecycle Management (PLM), and Supply Chain Management (SCM) systems, as well as in email applications and on individual users' local PCs. However, existing products cannot easily integrate with such systems, and none of them supports multiple back-end systems.
Current software applications that support annotation and markup do not model and visualize annotations in a way that matches a user's mental model of an annotation, which makes annotations difficult to understand, especially since collaborators often come from different educational and/or cultural backgrounds.
In addition, current annotation solutions are document-centric, meaning that annotations are bound to individual documents. In the real world, however, a single annotation may involve multiple documents of different formats.
These and other limitations make it very difficult to retrieve and manage user annotations, map them to the pertinent workflow process, and integrate them into back-end systems. These and other disadvantages of previous systems make collaboration take considerably longer and frequently result in misunderstandings among different disciplines as well as inconsistency in the end product.
In all collaborative activities, team members should work within a clearly defined context. The context of collaboration generally includes goals, the scope of the project, products, team members' discussions, and the like, which usually exist in a number of forms such as face-to-face discussions, documents of different formats, graphs, emails, voicemail, pages on the Internet, and memos. The context is typically the core part of any collaboration and team discussion, and the quality of the end product usually depends on each team member's ability to grasp it. Therefore, one of the most important challenges is to provide all team members with an understandable representation of the context.
In order to understand a context, team members from multiple disciplines should be able to view multiple documents of different formats that are usually stored in multiple back-end systems. Moreover, understanding a context typically depends on understanding the connections and relationships between its elements. If a collaborator cannot clearly view these connections, the context can easily be misinterpreted, which often leads to misunderstanding and conflict between team members. In software technology and “contextual collaboration,” there have been several attempts to represent the context, but none of them has been complete or successful.
One way to make the context more understandable is to add digital annotations to the documents. However, current annotations are document-centric, meaning that they are bound to individual documents. Since the elements of a context can exist in different forms stored in several documents, current annotations cannot visualize the context correctly.
Another approach is to create portals, which involves embedding all of the relevant applications, such as word processors, enterprise instant messaging (EIM), shared calendars, and groupware, into a unified user interface. Using portals may make retrieving information faster and easier, but it does not visualize the relationships between all of the elements, meaning that team members must still find their own way through the represented information in order to form a correct mental model of a context. The portal acts more as visual glue around fragmented and heterogeneous information than as a real way to represent a complete collaboration context.
The disclosed technology, unlike previous attempts, provides various new and advantageous techniques to visualize not only the data related to the context of a collaboration but also the relationships between its elements. In various implementations, multiple team members can create connections between multiple elements of different forms that are stored in multiple repositories. Because these annotations are typically external, collaborators can view them independently, from any repository or collaborative environment. Thus, implementations can greatly facilitate collaboration and communication in virtually any enterprise. An example of a software application that may embody the advantageous techniques discussed herein is referred to herein as AutoVue Annotation.
An annotation generally represents a thread of comments in a context and is typically stored as a standalone, separate entity. The annotation can desirably include threaded comments and context identifiers. A threaded comment generally includes a thread of one or several comments, in the form of text and/or images, added to one or more documents and representing the exchange of ideas between one or more participants. Context identifiers generally refer to graphical elements that allow participants to identify the subject of the comments based on visual common sense or workplace convention. For example, an identifier could highlight a region of a document, or an empty identifier could mean that the annotation applies to the whole document or simply to the current page. The context identifier can target a part of a file, a whole file, parts of multiple files, or multiple whole files of virtually any format.
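Purely as an illustration of this structure, the entities described above could be sketched in Python roughly as follows; the class names, field names, and file names are hypothetical and are not drawn from the described embodiments:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Comment:
        author: str
        body: str                      # comment text; could also reference an attached image

    @dataclass
    class ContextIdentifier:
        document_id: str               # the file this identifier points into
        region: Optional[dict] = None  # e.g. {"shape": "rect", "x": 10, "y": 20, "w": 100, "h": 50};
                                       # None can mean the whole document or the current page

    @dataclass
    class Annotation:
        comments: List[Comment] = field(default_factory=list)
        context_identifiers: List[ContextIdentifier] = field(default_factory=list)

    # A single annotation can point into several files of different formats:
    note = Annotation(
        comments=[Comment("reviewer", "This chip will not fit within the provided case")],
        context_identifiers=[
            ContextIdentifier("board_layout.brd"),   # an ECAD file
            ContextIdentifier("case_model.stp"),     # a 3D model
            ContextIdentifier("requirements.doc"),   # a requirements document
        ],
    )

Because each context identifier carries its own document reference, a single annotation of this form can point into several files at once rather than being bound to one document.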
Using implementations of AutoVue Annotation, users can advantageously add annotations to documents of various different formats, including but not limited to 2D and 3D drawings, ECAD files, and office documents. Moreover, users can create an annotation that covers multiple files of different formats. For example, a user can create an annotation with the text “this chip will not fit within the provided case” and point it to a dimension of a printed circuit board (PCB) design, a dimension of a 3D model, and a certain paragraph in a requirements document indicating the correct size to be used.
The generated annotations can be advantageously leveraged throughout the enterprise processes and systems as the entities representing the context of a specific collaboration activity.
Developers usually view annotations as metadata, as they give additional information about an existing piece of data. Annotations may be stored on one or more annotation servers or in local storage. When a user browses a collaborative project, for example, the browser typically sends a query or group of queries to the annotation servers to request all of the annotations related to a document or a project.
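As a rough illustration of this retrieval step, a browser-side component might issue a request along the following lines; the endpoint path, query parameters, and server URL are hypothetical, since the description does not define a concrete API:

    import json
    import urllib.request

    def fetch_annotations(server_url, project_id, document_id=None):
        """Ask an annotation server for all annotations related to a project or document."""
        query = f"{server_url}/annotations?project={project_id}"
        if document_id is not None:
            query += f"&document={document_id}"
        with urllib.request.urlopen(query) as response:
            return json.loads(response.read())

    # e.g. annotations = fetch_annotations("https://annotations.example.com",
    #                                      project_id="case-redesign")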
As discussed above, annotations are typically external and can be stored independent of the documents to which they apply.
In certain embodiments, annotations can be created as dynamic objects for use as part of a collaborative process. Collaborators can view annotations of multiple documents from a central repository, search them, and filter them. The collaborators can add tags to annotations, apply at least one status to one or more of the annotations, or reply to the annotations. The collaborators can keep track of some or all of the pertinent annotations at any given time in a workflow process. For example, the users can check on the number of critical annotations and quickly and easily determine how many of the critical annotations have been resolved.
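As a simple sketch of these dynamic-object operations, assuming annotations are represented as plain dictionaries with hypothetical keys such as "tags", "status", and "comments", the following Python functions illustrate tagging, status changes, replies, and the kind of count mentioned above:

    def add_tag(annotation, tag):
        annotation.setdefault("tags", []).append(tag)

    def set_status(annotation, status):
        annotation["status"] = status                  # e.g. "open", "resolved"

    def reply(annotation, author, text):
        annotation.setdefault("comments", []).append({"author": author, "body": text})

    def resolved_critical_counts(annotations):
        """How many annotations tagged 'critical' have been resolved?"""
        critical = [a for a in annotations if "critical" in a.get("tags", [])]
        resolved = [a for a in critical if a.get("status") == "resolved"]
        return len(resolved), len(critical)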
As noted above, an annotation can represent a thread of comments in context and can include threaded comments and context identifiers. The context identifier can target a part of a file, a whole file, parts of multiple files, or multiple whole files of virtually any format that AutoVue supports.
In this example, the context identifiers 212 consist of graphical elements that allow participants to identify the subject of the comments based on visual common sense or workplace convention. For example, one of the context identifiers 212 could highlight a region of a document using a rectangle. The annotation 202 has several context identifiers 212 added to different documents. Each of the context identifiers has an associated view 214 that specifies the visual representation of the context including the zoom level, the camera angle and so on.
The annotation 202 may also have at least one status 218 and one or more tags 220 that are created by collaborators. The annotation 202 can be published or unpublished, and it can also be locked or unlocked. These and other attributes are described in greater detail below.
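As an illustrative extension of the earlier sketch, the view 214 associated with each context identifier and the collaborator-managed attributes of the annotation (such as the status 218 and tags 220) could be modeled roughly as follows; the field names and defaults are hypothetical:

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class View:
        """Visual representation of a context, e.g. how the document was being displayed (214)."""
        zoom_level: float = 1.0
        camera_angle: Optional[Tuple[float, float, float]] = None  # relevant for 3D content
        page: Optional[int] = None

    @dataclass
    class AnnotationAttributes:
        """Collaborator-managed attributes of an annotation (218, 220)."""
        status: str = "open"                 # statuses can be defined by an administrator
        tags: List[str] = field(default_factory=list)
        published: bool = False              # unpublished annotations are visible only to the author
        locked: bool = False                 # locked annotations can no longer be modified or unpublished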
In certain embodiments, a user can create an annotation by initiating a threaded comment and pointing it to the context using one or more context identifiers. While creating an annotation, an exemplary system can automatically group annotation entities, for example the threaded comment and its context identifiers. This grouping can be based on user interactions while creating or modifying annotations, providing users with the capability to map review information properly into the pertinent workflow process. The annotations become dynamic objects that can advantageously reflect the current status (and/or a past status) of the review process at any given time. Also, a participant (e.g., a collaborator) can add and/or modify a context identifier manually.
If the user decides at 302 to add a context entity, the system determines at 310 whether a connection exists to an annotation. If the system determines that no such connection exists, a new annotation can advantageously be created; the user can also decide to create a new annotation at 312. After the user creates the new annotation, the system returns the user to 302.
If the system determines at 310 that at least one connection to one or more existing annotations exists, the system then determines at 314 whether the connection is a direct connection to an annotation or an arrow that connects an empty annotation to a comment. If the system determines that the connection is a direct connection to an annotation, then the system allows the user at 316 to add the context to the annotation and then generally returns the user to 302. Otherwise, the system typically combines annotations at 318 and returns the user to 302. The combination at 318 can be performed automatically or in response to user input.
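As an informal sketch of this portion of the flow, the routing decision could look like the following Python function; the dictionary keys ("id", "kind", "connected_to", "contexts", "comments") are hypothetical and simply stand in for whatever internal representation an implementation uses:

    def handle_new_context_entity(entity, annotations):
        """Route a newly drawn context entity (cf. steps 310-318 described above)."""
        connected = [a for a in annotations if a["id"] in entity.get("connected_to", [])]

        if not connected:
            # Steps 310/312: no connection to an existing annotation, so create a new one.
            new = {"id": entity["id"], "contexts": [entity], "comments": []}
            annotations.append(new)
            return new

        if entity.get("kind") != "arrow":
            # Steps 314/316: a direct connection, so add the context to that annotation.
            connected[0]["contexts"].append(entity)
            return connected[0]

        # Step 318: an arrow joining annotation entities, so combine them into one.
        merged = {"id": connected[0]["id"],
                  "contexts": sum((a["contexts"] for a in connected), []),
                  "comments": sum((a["comments"] for a in connected), [])}
        for a in connected:
            annotations.remove(a)
        annotations.append(merged)
        return merged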
In this scenario, a user first draws (or otherwise places or causes to be placed) a circle 402 in a workspace area 404, as illustrated in the screenshot 400 of FIG. 4. In response, the system creates a first annotation 406, which is displayed in an annotations area 408.
The circle 402 that was drawn by the user in the workspace area 404 is represented by a smaller circle 410 displayed in connection with the first annotation 406. One of skill in the art will recognize, however, that representations displayed in the annotations area 408 (e.g., the smaller circle 410) of corresponding items in the workspace area 404 (e.g., the circle 402) do not necessarily differ in size, shape, or appearance.
The user then adds (or otherwise places or causes to be placed) a text box 412 in the workspace area 404, as illustrated in the screenshot 500 of FIG. 5. In response, the system creates a second annotation 414.
In the example, the system displays the second annotation 414 adjacent to and immediately below the first annotation 406 in the annotations area 408. It will be recognized by one of skill in the art that multiple annotations can be displayed in the annotations area 408 in a variety of different ways and that such arrangements are not in any way limited by the particular arrangement shown in the screenshot 500.
The user now adds (or otherwise places or causes to be placed) an arrow 418 in the workspace area 404, as illustrated in the screenshot 600 of FIG. 6. Because the arrow 418 connects the circle 402 to the text box 412, the system combines the first annotation 406 and the second annotation 414 into a single annotation in the annotations area 408.
In certain embodiments, the system can apply an annotation to multiple documents. In previous systems, all annotations were associated with a single file, increasing redundancy and complexity in managing the review process while making the collaboration harder for users to understand. In certain implementations of the techniques described herein, however, the context of one or more annotations can be associated with multiple files, making it easier for a user (e.g., a collaborator) to understand and manage the annotations.
A given context of a corresponding annotation does not necessarily connect to the threaded comments. In fact, a context of an annotation may span multiple pages or multiple documents.
In the example, a user first selects a particular annotation at 702 (e.g., using a “Select Annotation” option via a user interface). If the system approves the selection (e.g., “annotation.lock” has a null value), the system can then present a dialog box to the user.
The user then chooses a particular document from the dialog box at 704. For example, the user can select the document from a list of documents that may or may not be available for selection by the user. If the system approves the selection, the system can then open the selected document in a separate window.
Finally, the user adds the context at 706. In certain embodiments, the user will create a new context entity (e.g., in response to a prompt by the system) to be added to the annotation. In other embodiments, the user will select an existing context entity to be added to the annotation. Once the context has been added at 706, the system returns the user to the initial step 702 of the method 700.
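A rough procedural sketch of this method, with the document-selection and context-creation steps passed in as placeholder callables, could look like the following; the function names and dictionary keys are hypothetical:

    def add_context_to_annotation(annotation, documents, choose_document, create_context):
        """Sketch of method 700: select an annotation (702), choose a document (704), add context (706)."""
        if annotation.get("locked"):                 # e.g. "annotation.lock" is not null
            return annotation                        # selection not approved

        document = choose_document(documents)        # step 704: pick a document from a dialog
        if document is None:
            return annotation

        context = create_context(document)           # step 706: a new or existing context entity
        annotation.setdefault("contexts", []).append(context)
        return annotation

    # e.g. add_context_to_annotation(annotation, docs,
    #          choose_document=lambda docs: docs[0],
    #          create_context=lambda doc: {"document": doc["id"], "region": None})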
In the example, a user first performs a right-click operation on a text box 802, as illustrated in the screenshot 800 of FIG. 8. In response, the system presents a menu that includes an “Add Context” option 808.
In response to the user selecting the “Add Context” option 808, the system opens and presents to the user a dialog box 810, as illustrated in the screenshot 900 of FIG. 9. The user then selects a document 812 from the dialog box 810.
The system opens the selected document 812 in a separate window, as illustrated in the screenshot 1000 of FIG. 10, and the user creates a new context 814 within this window.
After the user closes the second window displaying the document 812, the system shows that the new context 814 has been added to the annotation 804 by displaying the context 814 in connection with the annotation 804, as illustrated in the screenshot 1100 of FIG. 11.
Publishing and locking are two concepts that can be used in implementations of the disclosed technology to control collaboration over annotations. For example, annotations can be automatically saved as they are created. Initially, however, annotations are generally unpublished, which means that only their author is able to see them. In this state, the author is also allowed to modify or delete any or all of the annotations.
Once a user (typically the author) publishes an annotation, other collaborators can view the published annotation. The published annotation is still unlocked, however, which means that the author continues to retain the ability to modify or unpublish the annotation. However, as soon as another collaborator replies to an annotation, changes the annotation's status, or uses the annotation as a context of another annotation, the annotation becomes locked and neither the designated user (e.g., author) nor any other user can modify or unpublish the annotation.
The user then decides to publish the annotation and does so at 1210. For example, the user can select a “Publish Annotation” option from a drop-down menu or click on a “publish annotation” desktop icon or toolbar button. The system publishes the annotation responsive to the user's action at 1210. A status of the annotation is then changed to “published,” as indicated at 1212. While the author of an annotation is typically the only user permitted to publish the annotation, other users could be granted such permission as well.
Once the annotation is indicated as being published at 1212, the user has several different options from which to choose. For example, the user (e.g., any of the collaborators) can change a status of the annotation at 1220, use the annotation in another context at 1222, or reply to the annotation at 1224. After any of these options is selected, the process moves along to 1226, at which point the annotation can be locked (e.g., automatically or responsive to user or other input). Once the annotation is locked at 1226, a status for the annotation is changed to “locked,” as indicated at 1228.
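The publishing and locking rules described above can be summarized as a small state machine. The following is an informal Python sketch under the assumption that disallowed operations simply raise exceptions; the class and method names are hypothetical:

    class AnnotationState:
        """Sketch of the publish/lock rules for an annotation."""

        def __init__(self, author):
            self.author = author
            self.published = False   # unpublished: only the author can see the annotation
            self.locked = False

        def visible_to(self, user):
            return self.published or user == self.author

        def publish(self, user):
            # Typically only the author publishes, though others could be granted permission.
            if user != self.author:
                raise PermissionError("only the author may publish in this sketch")
            self.published = True    # still unlocked: the author may modify or unpublish it

        def modify_or_unpublish(self, user):
            if self.locked or user != self.author:
                raise PermissionError("locked annotations cannot be modified or unpublished")
            # ... apply the requested change here ...

        def interact(self, user):
            """A collaborator replies, changes the status, or reuses the annotation as context."""
            if not self.visible_to(user):
                raise PermissionError("annotation is not visible to this user")
            if user != self.author:
                self.locked = True   # collaborator interaction locks the annotation (1226, 1228)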
One of the various advantages of the techniques described here is that annotations can be centralized in a backend system. Such an arrangement allows collaborators to see a list of any or all annotations related to a particular project from virtually any backend system.
At 1312, another determination is made regarding access rights, but this time the system checks whether the user has any access rights to a particular context for the annotation (e.g., the earliest context, the most recent context, a certain user-identified context, a context based on certain other criteria, etc.). If the system determines that the user does not have such access rights, the pertinent context is ignored (as indicated at 1316) and the process continues to 1320. If, however, the system determines that the user does have context access rights, the context is made visible to the user (e.g., displayed on the user's screen), as indicated at 1318.
In the example, the system makes a final determination at 1322: it checks whether any other context remains. If the system determines that no other context exists, or it is unable to locate any remaining contexts, processing terminates at 1324. If there is at least one more unchecked context, however, processing returns to 1312, where the system checks whether the user has any access rights to the newly-identified context.
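As a compact sketch of this per-context access check, assuming the actual permission test is supplied by the back-end systems as a callable, the loop could be written roughly as:

    def visible_contexts(annotation, user, can_access):
        """Keep only the contexts of an annotation that the user may see (cf. 1312-1324).

        `can_access(user, context)` stands in for whatever access-control check the
        pertinent back-end systems actually perform.
        """
        shown = []
        for context in annotation.get("contexts", []):   # return to 1312 for each context
            if can_access(user, context):
                shown.append(context)                    # 1318: make the context visible
            # otherwise, 1316: ignore this context and continue
        return shown                                     # 1322/1324: no contexts remain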
In implementations of an annotation and workflow process in accordance with the disclosed technology, users can add different tags to annotations and also filter annotations based on a particular tag or tags. Moreover, each annotation typically has a status that can be defined by an administrator and changed by collaborators. For example, a designer (e.g., a collaborator) can open an annotation explaining a problem in a design and, once the problem has been resolved, the designer can close the annotation. Other users can thus monitor the status of a collaborative activity by looking at a dashboard that reflects these statuses at any given time. For example, a supervisory user (e.g., a manager) can easily and readily see how many issues are still open and need to be resolved, and whether there is any critical problem.
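The kind of dashboard summary described here could be computed along the following lines; the tag and status names are illustrative only, since statuses are defined by an administrator:

    from collections import Counter

    def dashboard_summary(annotations):
        """Count annotations by status so a manager can see what is still open."""
        return Counter(a.get("status", "open") for a in annotations)

    def filter_by_tag(annotations, tag):
        return [a for a in annotations if tag in a.get("tags", [])]

    # e.g. dashboard_summary(annotations) might yield Counter({"open": 3, "closed": 7}),
    #      and filter_by_tag(annotations, "critical") lists the critical annotations.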
A threaded comment generally consists of a thread of one or more comments, in the form of text and/or images, that are added to one or more documents and represent the exchange of ideas between multiple participants. An example of threaded comments 1402-1408 is illustrated in a simulated screenshot 1400 shown in FIG. 14.
A context identifier generally includes one or more graphical elements that allow participants to identify a pertinent subject of the comment or comments in a thread based at least in part on visual common sense or workplace convention. An example of a context identifier 1502 is illustrated in a simulated screenshot 1500 shown in FIG. 15.
Previous annotation applications are undesirably document-centric, meaning that a single annotation cannot involve multiple documents regardless of type. In implementations of the disclosed technology, however, a context identifier can target multiple files of virtually any format (e.g., any format that AutoVue supports). An example of a context identifier 1608 targeting multiple files 1604 and 1606 (of different file types) is illustrated in a simulated screenshot 1600 shown in FIG. 16.
Annotations as described here are typically dynamic objects that can readily reflect a status of a review process at any given time. The screenshot 1700 of FIG. 17 illustrates an example of such dynamic annotations.
In embodiments of the disclosed technology, a user can take action with respect to an annotation directly from the web center. For example, the screenshot 1800 of FIG. 18 illustrates a user taking such an action.
The computer-implemented techniques described here provide various advantages, such as facilitating communication and improving the productivity of virtually any collaborative activity in a large enterprise. Using the disclosed techniques, participants in different workflow processes can effectively and efficiently collaborate on multiple documents of virtually any format. These techniques provide an effective solution for centralizing and managing collaboration data that is often fragmented as a result of being produced by multiple teams during different phases. This collaboration data can then be mapped, as desired, to different workflow processes.
The following discussion is intended to provide a brief, general description of a suitable machine in which certain aspects of the disclosed technology can be implemented. Typically, the machine includes a system bus to which are attached processors, memory (e.g., random access memory (RAM), read-only memory (ROM), or other state-preserving medium), storage devices, a video interface, and input/output interface ports. The machine can be controlled, at least in part, by input from conventional input devices such as keyboards, mice, etc., as well as by directives received from another machine, interaction with a virtual reality (VR) environment, biometric feedback, or other input signal. As used here, the term “machine” is intended to broadly encompass a single machine, or a system of communicatively coupled machines or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices, telephones, tablets, etc.
The various advantageous techniques described here may be implemented as computer-implemented methods. Additionally, they may be implemented as instructions stored on a tangible computer-readable medium that, when executed, cause a computer to perform the associated methods. Examples of tangible computer-readable media include, but are not limited to, disks (e.g., floppy disks, rigid magnetic disks, and optical disks), drives (e.g., hard disk drives), semiconductor or solid state memory (e.g., RAM and ROM), and various other types of tangible recordable media such as CD-ROM, DVD-ROM, and magnetic tape devices.
Having described and illustrated the principles of the invention with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and detail without departing from such principles, and may be combined in any desired manner. And although the foregoing discussion has focused on particular embodiments, other configurations are contemplated. In particular, even though expressions such as “according to an embodiment of the invention” or the like are used here, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular embodiment configurations. As used here, these terms may reference the same or different embodiments that are combinable into other embodiments.
In view of the wide variety of permutations to the described embodiments, this detailed description is intended to be illustrative only and should not be taken as limiting the scope of the claims. What is claimed as the invention is all such modifications as may come within the scope and spirit of the following claims and equivalents thereto.
This application claims the benefit of U.S. Provisional Application No. 61/085,778, filed Aug. 1, 2008, which is hereby incorporated by reference in its entirety.