Computing and network technologies have transformed many aspects of everyday life. Computers have become household staples rather than luxuries, educational tools and/or entertainment centers, and provide individuals and corporations with tools to manage and forecast finances, control operations such as heating, cooling, lighting and security, and store records and images in a permanent and reliable medium. Networking technologies like the Internet provide individuals virtually unlimited access to remote systems, information and associated applications.
In light of such advances in computer technology (e.g., devices, systems, memory, wireless connectivity, bandwidth of networks, etc.), mobility for individuals has greatly increased. For example, with the advent of wireless technology, emails and other data can be communicated and received with a wireless communications device such as a cellular phone, smartphone, portable digital assistant (PDA), and the like. As a result, the need for physical presence in particular situations has been drastically reduced. In an example, a business meeting between two or more individuals can be conducted virtually in which the two or more participants interact with one another remotely. Such a virtual meeting conducted with remote participants can be referred to as a telepresence session.
Traditional virtual meetings include teleconferences, web-conferencing, or desktop/computer sharing. Yet, each virtual meeting may not sufficiently replicate or simulate a physical meeting. Moreover, virtual meetings require numerous settings and configurations that must be defined or provided manually. For example, a teleconference requires a notification to the attendees with pass codes, meeting identifications, and the like. To attend the teleconference, the participant must manually input data such as a dial-in number, a meeting identification, a password, a spoken description for participant identification, etc. Furthermore, during such virtual meetings, data sharing is limited and restricted to data previously delivered or local data accessible via desktop/computer sharing.
The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects described herein. This summary is not an extensive overview of the claimed subject matter. It is intended to neither identify key or critical elements of the claimed subject matter nor delineate the scope of the subject innovation. Its sole purpose is to present some concepts of the claimed subject matter in a simplified form as a prelude to the more detailed description that is presented later.
The subject innovation relates to systems and/or methods that facilitate automatically initiating and setting up a telepresence session leveraging a smart meeting room. An automatic telepresence engine can generate a smart meeting room or smart room that can seamlessly automate various features of the telepresence session. The smart room can employ various automatic settings for a telepresence session in which local and remote users can participate. The room or telepresence session can automatically identify the participants, information about the participants, documents needed for the meeting, etc. The smart room can further identify the right mode of communication to use for the documents (e.g., upload, hard copy, email address, server upload, website delivery, etc.). In general, the smart room can take care of all the telepresence session needs revolving around the users, data, documents, and the like. In another aspect, the room can provide archiving, event summaries, rosters, follow ups, and even access to related meetings.
As one example, the smart meeting room can detect people with a face scan to identify participants, user preferences, and documents that are useful for collaboration. The data can be automatically uploaded to an accessible file share in real time. The room can provide emails that include summaries of meetings to participants. For example, in a second meeting related to a first meeting, one can access the archive to allow for accurate referencing of the first meeting. In addition, the smart room can provide the use of a previous meeting to identify deadlines, facts, meeting minutes, etc. In other aspects of the claimed subject matter, methods are provided that facilitate automatically initiating a telepresence session for participants and related data.
The following description and the annexed drawings set forth in detail certain illustrative aspects of the claimed subject matter. These aspects are indicative, however, of but a few of the various ways in which the principles of the innovation may be employed and the claimed subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features of the claimed subject matter will become apparent from the following detailed description of the innovation when considered in conjunction with the drawings.
The claimed subject matter is described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject innovation.
As utilized herein, terms “component,” “system,” “data store,” “session,” “engine,” “organizer,” “collector,” “device,” “module,” “manager,” “application,” and the like are intended to refer to a computer-related entity, either hardware, software (e.g., in execution), and/or firmware. For example, a component can be a process running on a processor, a processor, an object, an executable, a program, a function, a library, a subroutine, and/or a computer or a combination of software and hardware. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and a component can be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
Now turning to the figures,
By leveraging the automatic telepresence engine 102, various settings and configurations can be performed and implemented without user intervention or manual configuration. For example, typical virtual meetings require manual input or intervention such as selecting meeting attendees, identifying data required for the meeting, initiating meeting recordation (e.g., recording audio, recording video, etc.), and activating data sharing (e.g., desktop/computer sharing, data files, etc.). However, the automatic telepresence engine 102 can automatically identify data, attendees, and recordation data in order to eliminate manual intervention or input. In other words, the automatic telepresence engine 102 can evaluate data in order to automatically initiate the telepresence session 104 with attendees (e.g., virtually represented users), data utilized for the session, and/or any other necessary data to conduct the telepresence session 104.
In particular, the automatic telepresence engine 102 can evaluate data associated with at least one of a virtually represented user, a schedule for a virtually represented user, a portion of an electronic communication for a virtually represented user, and/or any other suitable data identified to relate to at least one of the virtually represented user or the telepresence session 104. The automatic telepresence engine 102 can further identify at least one of the following for a telepresence session based upon the evaluated data: a participant to include for the telepresence session, a portion of data related to a presentation within the telepresence session, a portion of data related to a meeting topic within the telepresence session, and/or a device utilized by a virtually represented user to communicate within the telepresence session. With such evaluation and identification of data, the telepresence session 104 can be initiated, conducted, and recorded (e.g., tracked, monitored, archived, etc.) without active manual user intervention or input.
The telepresence session 104 (discussed in more detail in
In addition, the system 100 can include any suitable and/or necessary interface component 106 (herein referred to as “the interface 106”), which provides various adapters, connectors, channels, communication paths, etc. to integrate the automatic telepresence engine 102 into virtually any operating and/or database system(s) and/or with one another. In addition, the interface 106 can provide various adapters, connectors, channels, communication paths, etc., that provide for interaction with the automatic telepresence engine 102, the telepresence session 104, and any other device and/or component associated with the system 100.
The system 200 can include a data collector 202 that can gather data in real time in order to automatically generate the telepresence session 104. The data collector 202 can evaluate any suitable data utilized with the telepresence session 104. For example, the data collector 202 can evaluate data associated with at least one of a virtually represented user (e.g., personal information, employment information, profile data, biographical information, etc.), a schedule for a virtually represented user (e.g., calendar, online calendar, physical calendar, scheduling data on a device, electronic mail application, etc.), or a portion of an electronic communication for a virtually represented user (e.g., phone calls, emails, online communications, text messages, short message service (SMS) messages, chat program communications, physical mail, pages, messaging applications, voicemails, etc.). Based at least in part upon such evaluation of data, the data collector 202 can identify information that can be utilized with the telepresence session 104. For example, based upon evaluating an email application and included emails, the data collector 202 can identify a need for a telepresence session between two users and that the two users can meet at a particular time (e.g., availability based upon evaluating calendar/schedule data) to discuss specific data or documents (e.g., data or documents can be identified and made accessible for the telepresence session). For example, the user can be identified utilizing face recognition, voice recognition, a biometric reading, etc. Even though the meeting schedule has a list of attendees, not all of them may show up for the meeting. Moreover, the meeting can be updated to include an invitee not included on the original list of attendees (e.g., a last-minute participant addition, etc.).
Such recognition can help ascertain who is actually present in the meeting. Moreover, such information can be used to display a name tag or identification within that person's virtual representation so that others tuned into the telepresence session can obtain the information without interrupting.
In other words, the data collector 202 can gather data such as who is attending the telepresence session, what is to be discussed or presented (e.g., data, documents, etc.), when the telepresence session can take place (e.g., evaluating schedules/calendars to identify potential times or dates for the session), and the like. For instance, the data collector 202 can ascertain whether or not a telepresence session is to be initiated or scheduled between particular individuals in order to discuss particular topics, data, documents, etc. Such a determination can be identified based at least in part upon evaluating communications, interactions, assignments (e.g., projects, workload, etc.), scheduling data, calendar data (e.g., deadlines, timelines for action items, etc.), and the like. Thus, based upon a project action-item deadline for which aspects need to be handled by a group of users, the data collector 202 can identify the need for a scheduled telepresence session with the appropriate attendees (e.g., the group of users, managers, advisors, etc.) and the necessary data.
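By way of a non-limiting illustration, the scheduling determination described above can be sketched as follows. All names, data shapes, and the availability-intersection heuristic are hypothetical assumptions for illustration only, not part of the disclosure:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EmailThread:
    """Hypothetical record of an evaluated email exchange."""
    participants: set
    mentions_meeting: bool
    attached_documents: list

def propose_session(threads, availability):
    """Return (participants, slot, documents) for the first thread that
    calls for a meeting and has a time slot free for every participant."""
    for thread in threads:
        if not thread.mentions_meeting:
            continue
        # Intersect each participant's free slots from calendar data.
        common = set.intersection(
            *(set(availability.get(p, [])) for p in thread.participants))
        if common:
            return thread.participants, min(common), thread.attached_documents
    return None

threads = [EmailThread({"alice", "bob"}, True, ["budget.xlsx"])]
availability = {
    "alice": [datetime(2024, 1, 8, 10), datetime(2024, 1, 8, 14)],
    "bob":   [datetime(2024, 1, 8, 14)],
}
print(propose_session(threads, availability))
```

Under these assumptions, the earliest slot common to both calendars is selected and the mentioned documents are carried along so they can be made accessible for the session.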
In another example, the data collector 202 can identify whether a user is in the meeting room or remote. If a user is remote, there is a need to initiate the telepresence session. If all users are local, there may still be a need for a telepresence session depending on the needs of such a meeting. For instance, even if all users are local, the users may need to show a presentation on a large-screen display or may need to record a summary of the meeting. It is to be appreciated that some of the components of the subject innovation can exist outside of a telepresence session (e.g., a meeting recorder, summarizer, organizer, etc.).
The system 200 can further include a communication module 204 that can evaluate invited or potential attendees for the telepresence session 104 in order to ascertain available devices for communication within the telepresence session 104. In other words, the communication module 204 can manage devices for each virtually represented user in order to optimize the features of such devices within the telepresence session 104. The devices can be, but are not limited to, a laptop, a smartphone, a desktop, a microphone, a live video feed, a web camera, a mobile device, a cellular device, a wireless device, a gaming device, a portable digital assistant (PDA), a headset, an audio device, a telephone, a tablet, a messaging device, a monitor, etc.
For example, a first user may have access to a laptop with an email account, a cellular device, a webcam, and a wireless headset. Based on such identification of the available devices, the communication module 204 can enable interaction with the telepresence session 104 utilizing such devices. Moreover, the communication module 204 can leverage such available devices in order to optimize delivery or communication of data to such user. For instance, by ascertaining the available devices for a user, data can be optimally communicated to such user. Such criteria for identifying the optimal mode of data delivery can be, but is not limited to, bandwidth, device features (e.g., screen size, performance, processor, memory, peripherals, resolution, Internet access, security, input capabilities, output capabilities, etc.), geographic location, service plans (e.g., cost, security, peak-hours, etc.), user-preference, data to be delivered (e.g., size, sensitivity, urgency, etc.), and the like. Additionally, the input or output capabilities for each device can be optimally selected or adjusted. For example, audio input (e.g., microphones) on various devices can be adjusted or utilized as well as audio output (e.g., speakers) on various devices.
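A minimal sketch of such device selection follows, assuming a simple additive scoring over a few of the criteria named above (screen size, bandwidth, security). The weights and device attributes are illustrative assumptions, not requirements of the disclosure:

```python
def score_device(device, payload):
    """Score one candidate device for delivering a given payload."""
    # Sensitive data requires a device on a secured connection.
    if payload["sensitive"] and not device["secure"]:
        return float("-inf")
    score = 0.0
    # Larger screens are favored for visual payloads.
    score += device["screen_inches"] * payload["visual_weight"]
    # Bonus if the payload (in MB) can transfer within ~10 seconds.
    if device["bandwidth_mbps"] * 10 / 8 >= payload["size_mb"]:
        score += 5.0
    return score

def pick_device(devices, payload):
    """Select the highest-scoring available device."""
    return max(devices, key=lambda d: score_device(d, payload))

devices = [
    {"name": "smartphone", "screen_inches": 6, "bandwidth_mbps": 4, "secure": False},
    {"name": "laptop", "screen_inches": 15, "bandwidth_mbps": 50, "secure": True},
]
payload = {"size_mb": 20, "visual_weight": 0.5, "sensitive": True}
print(pick_device(devices, payload)["name"])  # laptop
```

In this sketch the smartphone is excluded outright because the payload is sensitive and the device is not secured, illustrating how a hard constraint can dominate the soft scoring terms.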
The communication module 204 can further seamlessly bridge remote and local users virtually represented within the telepresence session 104. In particular, a telepresence session can include participants on a first network as well as participants on a second network, wherein such interaction between various networks can be managed in order to allow data access, data sharing, security, authentication, and the like. The communication module 204 can enable authentication between various participants on disparate networks and provide secure data communications therewith independent of the network.
The system 200 can further include an organizer 206 that can track, monitor, and/or record the telepresence session 104 and included communications. The organizer 206 can manage recordation of data such as, but not limited to, communications (e.g., audio, video, graphics, data presented, data accessed, data reviewed, transcriptions, portions of text, etc.), attendees, participation (e.g., which user communicated which data, etc.), notes taken by individual participants, a stroke to a whiteboard, an input to a whiteboard, an input to a chalkboard, an input to a touch screen, an input to a tablet display, and the like. In general, the organizer 206 can handle archiving, tracking, and storing any suitable data related to the telepresence session 104. It is to be appreciated that the organizer 206 can provide metadata, tags, and/or any other suitable archiving techniques. Such tags or labeling of data can be based upon events, wherein the events can be, but are not limited to, topics presented, data presented, who is presenting, what is being presented, time lapse, date, movement within the telepresence session, changing between devices for interaction within the telepresence session, arrival within the session of virtually represented users, departure from the session from virtually represented users, etc. Moreover, the organizer 206 can enable sharing and/or linking the recorded data. For instance, the recorded data for a first telepresence session can be linked to a second meeting based upon an automatic determination or a request (e.g., user request, etc.). The link can be based upon a related topic, related attendees, etc. in which a portion of the first telepresence session can correspond to the second telepresence session. 
Additionally, a portion of the recorded data or stored data can be shared with any other suitable entity (e.g., a group, an enterprise, a web site, the Internet, a server, a network, a telepresence session, a machine, a device, a computer, a virtually represented user within a telepresence session, or a portable device, etc.) or user. Furthermore, it is to be appreciated that the organizer 206 can enable such stored or recorded data to be searched with a query. For example, a search on a telepresence session can include a query such as “presenter=name and words said,” or “topic=[insert topic to query] and presenter=name and meeting date=[insert meeting date to query].”
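One way such a query could be evaluated is sketched below, assuming each archived segment carries metadata tags (presenter, topic, date) and the query is a conjunction of field=value constraints; the field names are illustrative assumptions:

```python
def search_archive(segments, **query):
    """Return archived segments whose metadata satisfies every
    field=value constraint in the query (a conjunction)."""
    return [s for s in segments
            if all(s.get(field) == value for field, value in query.items())]

archive = [
    {"presenter": "alice", "topic": "budget", "date": "2009-03-02"},
    {"presenter": "bob", "topic": "budget", "date": "2009-03-02"},
    {"presenter": "alice", "topic": "hiring", "date": "2009-04-10"},
]

# A query such as "presenter=alice and topic=budget":
hits = search_archive(archive, presenter="alice", topic="budget")
print(len(hits))  # 1
```

The same helper covers the longer query form (topic, presenter, and meeting date together) by passing three keyword constraints instead of two.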
The organizer 206 can further generate a summarization or a “highlight” of the telepresence session 104 that can include any suitable portion of the recorded data or stored data. In other words, the organizer 206 can allow a participant to be informed in a scenario of the participant stepping out (e.g., leaving the meeting or session, etc.) or being tardy (e.g., late to the session or meeting, etc.). For example, the organizer 206 can be configured to automatically deliver (e.g., email, stored locally, stored remotely, stored on a local drive/network, stored on a remote drive/network, etc.) such summary to identified users (e.g., identified automatically such as attendees, identified by designation, etc.). The summary can be, for instance, a transcription, an outline, an audio file, a video file, a word processing document, a meeting minutes document, a portion of data with participant identified data (e.g., user-tagging, etc.), pictures, photos, presented material, etc. Moreover, it is to be appreciated that the summarization of the telepresence session 104 can be created in real time during the telepresence session and distributed to designated entities. In addition, the system 200 can provide a quick way for latecomers to the meeting to come up to speed without interrupting others. Summarization and quick playback of salient events on such a user's device can help the user quickly understand what went on before he or she joined the meeting.
For example, the organizer 206 can handle a scenario where a participant has to step out of the telepresence session (e.g., the smart meeting room, etc.) for a time period during the telepresence session. For instance, the participant can see a high-level, very crisp summary update appearing on his/her device (e.g., PDA, mobile device, device utilized to communicate with the telepresence session, etc.) as the telepresence session continues, with a picture/video/etc. of the current speaker. The participant may temporarily leave or not be in range/contact with a device to communicate with the telepresence session. In particular, the user can utilize an alarm (e.g., an “on participant speaking” alarm, etc.) that can inform him/her when a specific participant is talking. Similarly, the participant temporarily out of contact or communication with the telepresence session can set an “on subject changing” alarm that can inform him/her when the subject is changing. It is to be appreciated that any suitable alarm or event can be utilized to trigger the designated notification for the participant that is out of communication with the telepresence session.
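The alarm mechanism described above can be sketched as registered triggers checked against each session event. The event shapes and trigger names here are hypothetical illustrations:

```python
def make_speaker_alarm(speaker):
    """Trigger that fires when a specific participant starts speaking."""
    return lambda event: event["type"] == "speaker" and event["who"] == speaker

def make_topic_change_alarm():
    """Trigger that fires when the session subject changes."""
    return lambda event: event["type"] == "topic_change"

def notify(alarms, event):
    """Return the names of the alarms fired by one session event."""
    return [name for name, trigger in alarms.items() if trigger(event)]

# A participant who steps out registers the alarms of interest.
alarms = {
    "carol is speaking": make_speaker_alarm("carol"),
    "subject changed": make_topic_change_alarm(),
}
print(notify(alarms, {"type": "speaker", "who": "carol"}))
# ['carol is speaking']
```

Any other suitable trigger (e.g., a named document being opened) could be registered in the same way without changing the notification loop.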
In another instance, when a participant steps out of the automatically initiated telepresence session and comes back, he/she can be automatically updated with pertinent information to quickly catch up with the current state of the meeting/session. For example, the telepresence session can detect topics and changes in such topics during the telepresence session (e.g., using the meeting agenda content, a context change in the discussion, etc.). When a participant steps out of the session during “Topic 1” and comes back during “Topic 2,” the telepresence session can directly provide a quick summary of where the meeting stands on “Topic 2” so far, so the participant can efficiently jump back into the current discussion and get an update on “Topic 1” later on. In yet another instance, the degree of summarization can vary within the same topic. For example, if the participant comes back into the room after “Topic 2” has been discussed for a while, he/she would get a very crisp summary of the beginning of “Topic 2” with outcomes, a less summarized middle part, and the last three sentences in full. Moreover, the above concepts can be applied to participants that join the telepresence session after the start time of the session.
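The graded summarization just described can be sketched as follows. The "compression" here is a deliberately crude stand-in (keeping every n-th sentence) for whatever summarizer is actually used; the split points are illustrative assumptions:

```python
def graded_summary(sentences, tail=3):
    """Compress the beginning of a topic hardest, the middle less,
    and keep the last `tail` sentences verbatim."""
    if len(sentences) <= tail:
        return list(sentences)
    body, end = sentences[:-tail], sentences[-tail:]
    cut = len(body) // 2
    beginning = body[:cut][::3]      # very crisp: every 3rd sentence
    middle = body[cut:][::2]         # less summarized: every 2nd sentence
    return beginning + middle + end  # last `tail` sentences in full

transcript = [f"sentence {i}" for i in range(12)]
print(graded_summary(transcript))
```

A real implementation would presumably substitute an extractive or abstractive summarizer for the sampling step, but the three-band structure (crisp beginning, lighter middle, verbatim tail) is the point being illustrated.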
The authentication component 302 can provide security and authentication for at least one of a virtually represented participant (e.g., a participant communicating with the telepresence session 104 that maps to a real, actual person or entity), data access, network access, server access, connectivity with the telepresence session 104, or data files. The authentication component 302 can verify participants within the telepresence session 104. For example, human interactive proofs (HIPs), voice recognition, face recognition, personal security questions, and the like can be utilized to verify the identity of a virtually represented user within the telepresence session 104. Moreover, the authentication component 302 can ensure virtually represented users within the telepresence session 104 have permission to access data automatically identified for the telepresence session 104. For instance, a document can be automatically identified as relevant for a telepresence session yet particular attendees may not be cleared or approved for viewing such document (e.g., non-disclosure agreement, employment level, clearance level, security settings from author of the document, etc.). It is to be appreciated that the authentication component 302 can notify virtually represented users within the telepresence session 104 of such security issues or data access permissions.
The profile manager 304 can employ a telepresence profile for a virtually represented user that participates within the telepresence session 104. The telepresence profile can include settings, configurations, preferences, and/or any other suitable data related to a user in order to participate within the telepresence session 104. For example, the telepresence profile can include biographical information (e.g., age, location, employment details, education details, project information, assignment specifications, contact information, etc.), geographic location, devices used for telepresence (e.g., inputs preferred, outputs preferred, data delivery preferences, etc.), authentication information, security details, privacy settings, archiving preferences (e.g., stored location, delivery preferences, medium/format, etc.), information related to initiating/conducting telepresence sessions based on preferences (e.g., scheduling data, historic data related to past attendees for sessions, historic data related to past sessions, etc.), and the like. Additionally, the profile manager 304 can enable a telepresence profile to be created, deleted, and/or edited. For example, a new user to telepresence sessions can create a telepresence profile based on his or her preferences, whereas a user with a previously created telepresence profile can update or edit particular details of such profile. Furthermore, a user can delete his or her telepresence profile.
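A minimal sketch of such a profile record and its create/edit/delete operations follows; the field names are illustrative assumptions drawn loosely from the preference categories listed above:

```python
from dataclasses import dataclass, field

@dataclass
class TelepresenceProfile:
    """Hypothetical per-user telepresence profile record."""
    user: str
    preferred_devices: list = field(default_factory=list)
    archive_delivery: str = "email"
    privacy: str = "attendees-only"

class ProfileManager:
    """Create, edit, and delete telepresence profiles keyed by user."""
    def __init__(self):
        self._profiles = {}

    def create(self, user, **prefs):
        self._profiles[user] = TelepresenceProfile(user, **prefs)
        return self._profiles[user]

    def edit(self, user, **changes):
        for key, value in changes.items():
            setattr(self._profiles[user], key, value)

    def delete(self, user):
        self._profiles.pop(user, None)

manager = ProfileManager()
manager.create("alice", preferred_devices=["laptop", "headset"])
manager.edit("alice", archive_delivery="server upload")
print(manager._profiles["alice"].archive_delivery)  # server upload
```

The same three operations cover the new-user, returning-user, and deletion scenarios described in the paragraph above.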
The system 300 can further include a data store 306 that can include any suitable data related to the automatic telepresence engine 102, the telepresence session 104, the authentication component 302, the profile manager 304, the data collector (not shown), the communication module (not shown), the organizer (not shown), etc. For example, the data store 306 can include, but is not limited to including, data associated with at least one of a virtually represented user (e.g., personal information, employment information, profile data, biographical information, etc.), a schedule for a virtually represented user (e.g., calendar, online calendar, physical calendar, scheduling data on a device, electronic mail application, etc.), or a portion of an electronic communication for a virtually represented user (e.g., phone calls, emails, online communications, text messages, short message service (SMS) messages, chat program communications, physical mail, pages, messaging applications, voicemails, etc.), available devices for communicating within a telepresence session, settings/preferences for a user, telepresence profiles, device capabilities, device selection criteria, authentication data, archived data, telepresence session attendees, presented materials, summarization of telepresence sessions, any other suitable data related to the system 300, etc.
It is to be appreciated that the data store 306 can be, for example, either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). The data store 306 of the subject systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory. In addition, it is to be appreciated that the data store 306 can be a server, a database, a hard drive, a pen drive, an external hard drive, a portable hard drive, and the like.
The system 400 can further include a plug-in component 404 that can expand the features and capabilities of the automatically initiated telepresence session 104 and/or the smart meeting room. The plug-in component 404 can allow seamless and universal incorporation of applications, hardware, software, communications, devices, and the like. In general, the plug-in component 404 can receive or transmit information related to the telepresence session 104 in which such data can be utilized with disparate applications, hardware, software, communications, devices, and the like. It is to be appreciated that the plug-in component 404 can allow for expansion in connection to any suitable feature of the telepresence session 104 and/or the automatic telepresence engine 102, wherein such expansion can relate to data collection, communications, organization of data, authentication, profiles, etc.
The system 500 can enable a physical user 502 to be virtually represented within the telepresence session 506 for remote communications between two or more users or entities. The system 500 further illustrates a second physical user 508 that employs a device 510 to communicate within the telepresence session 506. As discussed, it is to be appreciated that the telepresence session 506 can enable any suitable number of physical users to communicate within the session. The telepresence session 506 can be a virtual environment on the communication framework in which the virtually represented users can communicate. For example, the telepresence session 506 can allow data to be communicated, such as voice, audio, video, camera feeds, data sharing, data files, etc. It is to be appreciated that the subject innovation can be implemented for a meeting/session in which the participants are physically located within the same location, room, or meeting place (e.g., automatic initiation, automatic creation of summary, etc.).
Overall, the telepresence session 506 can simulate a real world or physical meeting place substantially similar to a business environment. Yet, the telepresence session 506 does not require participants to be physically present at a location. In order to simulate the physical real world business meeting, a physical user (e.g., the physical user 502, the physical user 508) can be virtually represented by a virtual presence (e.g., the physical user 502 can be virtually represented by a virtual presence 512, the physical user 508 can be represented by a virtual presence 514). It is to be appreciated that the virtual presence can be, but is not limited to being, an avatar, a video feed, an audio feed, a portion of a graphic, a portion of text, etc.
For instance, a first user can be represented by an avatar, wherein the avatar can imitate the actions and gestures of the physical user within the telepresence session. The telepresence session can include a second user that is represented by a video feed, wherein the real-world actions and gestures of the user are communicated to the telepresence session. Thus, the first user can interact with the live video feed and the second user can interact with the avatar, wherein the interaction can be talking, typing, file transfers, sharing computer screens, hand-gestures, application/data sharing, etc.
For example, the intelligent component 602 can infer data associated with at least one of a virtually represented user (e.g., personal information, employment information, profile data, biographical information, etc.), a schedule for a virtually represented user (e.g., calendar, online calendar, physical calendar, scheduling data on a device, electronic mail application, etc.), a portion of an electronic communication for a virtually represented user (e.g., phone calls, emails, online communications, text messages, short message service (SMS) messages, chat program communications, physical mail, pages, messaging applications, voicemails, etc.), a participant to include for the telepresence session, a portion of data related to a presentation within the telepresence session, a portion of data related to a meeting topic within the telepresence session, a device utilized by a virtually represented user to communicate within the telepresence session, data to archive, tags/metadata for archived data, summarization of telepresence sessions, authentication, verification, telepresence profiles, private conversations between virtually represented users, etc.
The intelligent component 602 can employ value of information (VOI) computation in order to identify which telepresence sessions to schedule and when (e.g., a first telepresence session regarding a high priority matter can be scheduled prior to a second telepresence session having a lower priority). For instance, by utilizing VOI computation, the ideal and/or appropriate dates and priorities for telepresence sessions can be determined. Moreover, it is to be understood that the intelligent component 602 can provide for reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines . . . ) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter.
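The VOI-based scheduling described above can be sketched as follows. This is an illustrative sketch only, not the specification's implementation: the session fields (`utility`, `p_resolve`, `cost`) and the scoring formula are hypothetical stand-ins for whatever utility model the intelligent component employs.

```python
# Hypothetical sketch: rank candidate telepresence sessions by a
# value-of-information (VOI) style score so that the higher-priority
# matter is scheduled before lower-priority sessions.

def voi_score(session):
    """Expected benefit of holding the session minus its cost.

    `session` is an assumed dict with `utility` (benefit if the meeting
    resolves the matter), `p_resolve` (probability that it does), and
    `cost` (participant time and resources consumed).
    """
    return session["p_resolve"] * session["utility"] - session["cost"]

def schedule_order(sessions):
    """Order sessions so the highest-VOI session is scheduled first."""
    return sorted(sessions, key=voi_score, reverse=True)

candidates = [
    {"name": "budget review", "utility": 5.0, "p_resolve": 0.6, "cost": 1.0},
    {"name": "contract signing", "utility": 9.0, "p_resolve": 0.8, "cost": 1.5},
    {"name": "status sync", "utility": 2.0, "p_resolve": 0.9, "cost": 0.5},
]

order = [s["name"] for s in schedule_order(candidates)]
# The high-priority contract signing lands ahead of the routine sessions.
```

Any monotone utility function could replace `voi_score`; the point is only that scheduling reduces to sorting candidates by an inferred priority.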
A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class). Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed. A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, which hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. Other directed and undirected model classification approaches, including, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence, can be employed. Classification as used herein is also inclusive of statistical regression that is utilized to develop models of priority.
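A minimal sketch of the mapping f(x)=confidence(class) follows. The linear weights and the 0.5 threshold are hypothetical placeholders for an explicitly or implicitly trained model; a real SVM or Bayesian network would supply the decision surface instead.

```python
import math

# Illustrative classifier sketch (assumed weights, not a trained model):
# a linear score over the input attribute vector, squashed by a logistic
# function into a confidence in (0, 1).

WEIGHTS = [0.8, -0.4, 1.2]   # one hypothetical weight per input attribute
BIAS = -0.1

def confidence(x):
    """Map an input attribute vector x = (x1, ..., xn) to a confidence
    that x belongs to the triggering class."""
    score = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-score))   # logistic squashing

def classify(x, threshold=0.5):
    """Trigger the automatic action only when confidence clears the threshold."""
    return confidence(x) >= threshold
```

Here `classify([1.0, 0.0, 1.0])` triggers (positive score) while `classify([0.0, 2.0, 0.0])` does not; an SVM would replace the fixed linear score with a learned maximum-margin hypersurface.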
The automatic telepresence engine 102 can further utilize a presentation component 604 that provides various types of user interfaces to facilitate interaction between a user and any component coupled to the automatic telepresence engine 102. As depicted, the presentation component 604 is a separate entity that can be utilized with the automatic telepresence engine 102. However, it is to be appreciated that the presentation component 604 and/or similar view components can be incorporated into the automatic telepresence engine 102 and/or a stand-alone unit. The presentation component 604 can provide one or more graphical user interfaces (GUIs), command line interfaces, and the like. For example, a GUI can be rendered that provides a user with a region or means to load, import, read, etc., data, and can include a region to present the results of such. These regions can comprise known text and/or graphic regions comprising dialogue boxes, static controls, drop-down-menus, list boxes, pop-up menus, edit controls, combo boxes, radio buttons, check boxes, push buttons, and graphic boxes. In addition, utilities to facilitate the presentation, such as vertical and/or horizontal scroll bars for navigation and toolbar buttons to determine whether a region will be viewable, can be employed. For example, the user can interact with one or more of the components coupled and/or incorporated into the automatic telepresence engine 102.
The user can also interact with the regions to select and provide information via various devices such as a mouse, a roller ball, a touchpad, a keypad, a keyboard, a touch screen, a pen, voice activation, and/or body motion detection, for example. Typically, a mechanism such as a push button or the enter key on the keyboard can be employed subsequent to entering the information in order to initiate the search. However, it is to be appreciated that the claimed subject matter is not so limited. For example, merely highlighting a check box can initiate information conveyance. In another example, a command line interface can be employed. For example, the command line interface can prompt the user for information by providing a text message on a display and/or an audio tone. The user can then provide suitable information, such as alpha-numeric input corresponding to an option provided in the interface prompt or an answer to a question posed in the prompt. It is to be appreciated that the command line interface can be employed in connection with a GUI and/or API. In addition, the command line interface can be employed in connection with hardware (e.g., video cards) and/or displays (e.g., black and white, EGA, VGA, SVGA, etc.) with limited graphic support, and/or low bandwidth communication channels.
At reference numeral 704, an attendee, a portion of data to present, a date, and a time can be identified based upon the evaluated data. In other words, the evaluation of data can identify who is attending a telepresence session, what is presented at a telepresence session, and when the telepresence session is to be conducted. At reference numeral 706, a device for at least one attendee to communicate within a telepresence session can be ascertained. For example, the device can be any suitable electronic device that can receive inputs or communicate outputs corresponding to a telepresence session. At reference numeral 708, a telepresence session can be automatically initiated with the identified attendee using the identified device.
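The acts at reference numerals 704 through 708 can be sketched as the following flow. The data shapes, helper names, and the "desktop client" default device are illustrative assumptions, not part of the specification.

```python
# Hypothetical sketch of acts 704-708: evaluated data yields the who,
# what, and when of a telepresence session; a device is ascertained for
# each attendee; and the session is initiated automatically.

def identify_meeting(evaluated):
    """Act 704: identify attendees, presented data, date, and time."""
    return {
        "attendees": evaluated["participants"],
        "content": evaluated["documents"],
        "when": (evaluated["date"], evaluated["time"]),
    }

def ascertain_device(attendee, registry):
    """Act 706: pick a device capable of telepresence I/O for an attendee."""
    return registry.get(attendee, "desktop client")  # assumed fallback device

def initiate_session(meeting, registry):
    """Act 708: automatically initiate the session on each identified device."""
    return {a: ascertain_device(a, registry) for a in meeting["attendees"]}

evaluated = {
    "participants": ["alice", "bob"],
    "documents": ["q3-forecast.pptx"],
    "date": "2024-05-01",
    "time": "10:00",
}
registry = {"alice": "smartphone"}  # bob has no registered device

meeting = identify_meeting(evaluated)
connections = initiate_session(meeting, registry)
```

The key point of act 708 is that no attendee performs manual dial-in or pass-code entry; the engine resolves each attendee to a device and opens the connection itself.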
At reference numeral 806, an isolated communication can be employed between two users, wherein the isolated communication is kept private from the remainder of the telepresence session and/or from disparate users outside the communication. For example, the private conversation can be substantially similar to a whisper or note-passing in which a communication can be discretely presented. At reference numeral 808, a summary of the telepresence session can be created that includes the event detection. Moreover, such a summary can be delivered to users for reference. The summary can be, for instance, a transcription, an outline, an audio file, a video file, a word processing document, a meeting minutes document, a portion of data with participant identified data (e.g., user-tagging, etc.), pictures, photos, presented material, etc.
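The isolation property of act 806 can be sketched as follows: a whisper is delivered only to its recipient and never enters the public transcript from which the act-808 summary is drawn. The class and method names are hypothetical illustrations, not the specification's components.

```python
# Illustrative sketch (assumed names): an isolated "whisper" channel is
# private to its two participants, while the session summary is built
# only from the public transcript.

class TelepresenceSession:
    def __init__(self, users):
        self.users = set(users)
        self.transcript = []                    # public record for the summary
        self.inboxes = {u: [] for u in users}   # per-user delivered messages

    def broadcast(self, sender, text):
        """Public communication: visible to all users and to the summary."""
        self.transcript.append((sender, text))
        for u in self.users:
            self.inboxes[u].append((sender, text))

    def whisper(self, sender, recipient, text):
        """Act 806: isolated communication, bypassing transcript and others."""
        self.inboxes[recipient].append((sender, text))

    def summary(self):
        """Act 808: summary drawn solely from the public transcript."""
        return [f"{s}: {t}" for s, t in self.transcript]

session = TelepresenceSession(["alice", "bob", "carol"])
session.broadcast("alice", "Welcome to the review.")
session.whisper("alice", "bob", "Let's discuss the budget offline.")
```

After these two messages, the whisper appears only in bob's inbox; carol's inbox and the session summary contain the broadcast alone, mirroring the note-passing analogy above.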
In order to provide additional context for implementing various aspects of the claimed subject matter,
Moreover, those skilled in the art will appreciate that the inventive methods may be practiced with other computer system configurations, including single-processor or multi-processor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based and/or programmable consumer electronics, and the like, each of which may operatively communicate with one or more associated devices. The illustrated aspects of the claimed subject matter may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all, aspects of the subject innovation may be practiced on stand-alone computers. In a distributed computing environment, program modules may be located in local and/or remote memory storage devices.
One possible communication between a client 910 and a server 920 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The system 900 includes a communication framework 940 that can be employed to facilitate communications between the client(s) 910 and the server(s) 920. The client(s) 910 are operably connected to one or more client data store(s) 950 that can be employed to store information local to the client(s) 910. Similarly, the server(s) 920 are operably connected to one or more server data store(s) 930 that can be employed to store information local to the servers 920.
With reference to
The system bus 1018 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI).
The system memory 1016 includes volatile memory 1020 and nonvolatile memory 1022. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1012, such as during start-up, is stored in nonvolatile memory 1022. By way of illustration, and not limitation, nonvolatile memory 1022 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory 1020 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Computer 1012 also includes removable/non-removable, volatile/non-volatile computer storage media.
It is to be appreciated that
A user enters commands or information into the computer 1012 through input device(s) 1036. Input devices 1036 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1014 through the system bus 1018 via interface port(s) 1038. Interface port(s) 1038 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1040 use some of the same type of ports as input device(s) 1036. Thus, for example, a USB port may be used to provide input to computer 1012, and to output information from computer 1012 to an output device 1040. Output adapter 1042 is provided to illustrate that there are some output devices 1040 like monitors, speakers, and printers, among other output devices 1040, which require special adapters. The output adapters 1042 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1040 and the system bus 1018. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1044.
Computer 1012 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1044. The remote computer(s) 1044 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1012. For purposes of brevity, only a memory storage device 1046 is illustrated with remote computer(s) 1044. Remote computer(s) 1044 is logically connected to computer 1012 through a network interface 1048 and then physically connected via communication connection 1050. Network interface 1048 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
Communication connection(s) 1050 refers to the hardware/software employed to connect the network interface 1048 to the bus 1018. While communication connection 1050 is shown for illustrative clarity inside computer 1012, it can also be external to computer 1012. The hardware/software necessary for connection to the network interface 1048 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
What has been described above includes examples of the subject innovation. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art may recognize that many further combinations and permutations of the subject innovation are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the innovation includes a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
There are multiple ways of implementing the present innovation, e.g., an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc., which enables applications and services to use the techniques of the invention. The claimed subject matter contemplates the use from the standpoint of an API (or other software object), as well as from a software or hardware object that operates according to the techniques in accordance with the invention. Thus, various implementations of the innovation described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software.
The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
In addition, while a particular feature of the subject innovation may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements. What is claimed is: