Smart phones and other mobile devices provide nearly limitless options to users, such as texting, talking on the phone, surfing the web, etc. One downside of these devices is the tendency to isolate the user from their surroundings and what is going on around them. The present concepts can leverage features of these devices to re-engage users with those around them.
The described implementations relate to interactive presentations. One example of the present concepts can associate multiple mobile devices, such as smart phones with an interactive presentation. This example can receive feedback relating to the presentation from at least some of the mobile devices and aggregate the feedback into a visualization that is configured to be presented in parallel with the interactive presentation. The example can also generate another visualization for an individual mobile device that generated individual feedback.
Another example can obtain a unique registration for an interactive participation session. This example can receive a request to establish the interactive participation session and allow mobile devices, such as smart phones or pad-type computers, to join the interactive participation session utilizing the unique registration. This example can also correlate feedback from the mobile devices to content from the interactive participation session.
The above listed examples are intended to provide a quick reference to aid the reader and are not intended to define the scope of the concepts described herein.
The accompanying drawings illustrate implementations of the concepts conveyed in the present application. Features of the illustrated implementations can be more readily understood by reference to the following description taken in conjunction with the accompanying drawings. Like reference numbers in the various drawings are used wherever feasible to indicate like elements. Further, the left-most numeral of each reference number conveys the Figure and associated discussion where the reference number is first introduced.
This patent relates to mobile devices, such as smart phones and/or pad-type computers, and to reconnecting users with the activities around them in a face-to-face manner. The present concepts allow mobile devices to facilitate user engagement with the users' current surroundings or context rather than taking users out of that context. The present concepts can leverage these devices to help people participate more fully in what is going on around them and build stronger ties with their companions. These concepts can also offer the ability to share data between ad-hoc, location-based groups of mobile devices and, as such, can foster rich face-to-face social interactions.
The inventive concepts can provide a real-time interactive participation system designed for use during presentations. For instance, during a meeting, audience members can submit feedback on what has been (or is being) presented using their smart phones. As an example, the users may use a “like” or “dislike” button to rate the presented content. This feedback can then be aggregated and displayed for the audience members and the presenter (e.g., a shared visualization of the feedback). The visualization can be integrated with the presented content or displayed independent of the presented content. The visualization may be presented in multiple ways. For instance, the visualization may be presented to both the presenter and the audience and/or a customized visualization may be generated for individual audience members and/or the presenter.
For purposes of explanation, consider introductory
Presenter 112 can utilize notebook computing device 104 to make a presentation that includes visual material represented on a first portion 114 of display 106. A second portion 116 of the display 106 can relate to real-time interactive participation. In this case, the first portion 114 relating to the presentation is separate and distinct from the second portion 116 relating to the real-time interactive feedback, but both portions are presented on display 106. In other cases, the portions 114 and 116 can be intermingled. For instance, comments about a particular aspect of a slide may be visualized proximate to or with that particular aspect. As mentioned above, in this case the two portions 114 and 116 co-occur on the same display 106. Such need not be the case. An alternative example is shown relative to
In the present example of
In this implementation, the second portion 116 also includes a feature 122 for allowing audience members to join the presentation. In this case, this feature is represented as a QR code. Other implementations can utilize other types of codes, uniform resource identifiers (URIs), links, etc. For example, feature 122 could include a URI that the audience member manually enters into his/her smart phone to become a participant.
For purposes of explanation, assume that audience member 110(3) has just entered the room to view the presentation. At this point, audience members 110(1) and 110(2) are represented on feature 118 as darkened circles 120(1) and 120(2), respectively. Audience member 110(3) can become a participant by taking a picture of the QR code with her smart phone 102(3). This act can automatically log the audience member into the presentation (e.g., register the audience member) without any other effort on the part of the user (e.g., audience member). Note that while not shown, personal information concerns of the audience members can be addressed when implementing the present concepts. For instance, the audience members can be allowed to opt out, opt in, and/or otherwise define and/or limit how their personal information is used and/or shared. Any known (or yet to be developed) safeguards can be implemented to protect the privacy of participating audience members.
In this example, assume that audience member 110(3) selected the ‘like’ option 304 as indicated at 308. This selection is also identified on feature 118 as indicated at 310. Further, audience member 110(2)'s selection is evidenced at 312. Of course, the use of an ‘up arrow’ is only one way that the user input can be represented. For instance, color can be utilized. For example, green could be utilized to represent a ‘like’ or favorable response and red could be used to represent a ‘dislike’ or unfavorable response. Thus, when an individual audience member provides feedback, their character (in this case circle) on the feature 118 could be turned either green or red. Further, the time since voting can be represented on the feature 118. For instance, as time lapses after the audience member votes, the character (e.g., circle) could fade back to its original color, such as yellow. Similarly, in the illustrated configuration, the ‘up arrow’ or ‘down arrow’ could fade from view as the vote becomes stale. In an alternative implementation, the vote could be removed after a predefined duration. For instance, the vote (e.g., the up or down arrow) could be removed after 10 seconds.
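By way of illustration only, the fade-out behavior described above might be sketched as follows. The function and constant names here are hypothetical and are not part of the described implementations; the 10-second lifetime simply echoes the example above.

```python
VOTE_LIFETIME_SECONDS = 10.0  # duration after which a vote is removed (per the example above)

def vote_opacity(vote_time, now, lifetime=VOTE_LIFETIME_SECONDS):
    """Return a marker opacity in [0.0, 1.0]; 0.0 means the vote has expired."""
    elapsed = now - vote_time
    if elapsed >= lifetime:
        return 0.0  # stale vote: the up/down arrow fades from view entirely
    return 1.0 - (elapsed / lifetime)  # linear fade back toward the neutral color
```

Under this sketch, a vote cast five seconds ago would render at half strength, and a vote older than the lifetime would not render at all.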
Note that while a GUI 302 enables voting via the smart phone's touch screen, other implementations do not rely on the touch screen. For instance, a user 'like' vote could be recorded if the user raises the smart phone, tips it upward, or places it face up, among others. Similarly, a dislike could be registered when the user lowers the smart phone, tips it downward, or places it face down, among others.
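One way the tip-up/tip-down voting could be realized is by thresholding a pitch angle derived from the device's positional circuitry. The sketch below is illustrative only; the threshold value and function name are assumptions, not details from the described implementations.

```python
def classify_feedback_gesture(pitch_degrees):
    """Map a phone's pitch angle (e.g., derived from accelerometer/gyroscope
    readings) to a feedback vote: tipped up => 'like', tipped down => 'dislike'."""
    TILT_THRESHOLD = 30.0  # degrees; assumed sensitivity, not specified in the text
    if pitch_degrees > TILT_THRESHOLD:
        return "like"       # phone raised or tipped upward
    if pitch_degrees < -TILT_THRESHOLD:
        return "dislike"    # phone lowered or tipped downward
    return None             # phone roughly level: no vote registered
```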
In an alternative scenario illustrated in
In other implementations, the audience member can raise his/her hand while holding the smart phone to ask a question. This hand-raising gesture can be detected by the smart phone, which can then provide notice to the presenter 112 (e.g., to the presenter's smart phone 102(4)) that an audience member has a question. The notice can be generic or specific. For instance, the notice can appear on the presenter's smart phone 102(4) and/or notebook computing device 104. The notice may include identifying the character (e.g., circle) associated with the audience member asking the question. The notice may also provide a stimulus to let the presenter know that a question has been received. For instance, the presenter's smart phone may vibrate and/or beep to get the presenter's attention.
Badges can also apply to the entire group, and not just an individual. For example, when many audience members provide feedback, an ‘active audience’ badge may trigger. Group badges may represent presentation events like the amount of feedback activity, the quality of the activity, the number of participants, or the length of the presentation. These group badges may be displayed on audience members' smart phones, or elsewhere (e.g., as part of a shared visualization of the feedback). One such example is shown at 606 in second portion 116 of display 106. In this example, a ‘happy face’ is used to indicate an active positive audience.
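A group badge such as the 'active audience' badge might be triggered by a simple participation-ratio test over a recent window of feedback. The sketch below is one possible realization; the ratio threshold and names are assumptions for illustration.

```python
def active_audience_badge(votes_in_window, participant_count, min_ratio=0.5):
    """Trigger an 'active audience' group badge when the share of participants
    who voted within a recent time window meets an assumed threshold ratio."""
    if participant_count == 0:
        return False  # no registered participants, so no group badge
    return votes_in_window / participant_count >= min_ratio
```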
In summary, one goal of the present concepts is to create a sense of community among meeting attendees, engage audience members in the presentation, and help the presenter (e.g., speaker) understand the audience reaction. The above description explains an implementation for accomplishing this goal.
Audience members 710(1) and 710(n) can participate utilizing techniques described above relative to
In this implementation, display device 708 can provide a running record of audience feedback at 712. The running record can be displayed in a way that correlates it to the movie content as represented by the time(s) in minutes indicated generally at 714. For instance, when feedback is received at a particular point in the movie (e.g., at a particular temporal instance) the feedback can be time stamped with that particular temporal instance to provide easy correlation between the feedback and the movie.
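The time-stamping described above might be sketched as follows: each feedback event is stamped with its offset into the content, and a minute bucket is derived for display in the running record. The function and field names are illustrative assumptions.

```python
def timestamp_feedback(feedback, presentation_start, received_at):
    """Stamp a feedback event with its offset into the content so it can be
    correlated with a particular temporal instance in the movie or meeting."""
    offset = received_at - presentation_start
    return {"feedback": feedback,
            "offset_seconds": offset,
            "minute": int(offset // 60)}  # minute bucket, as in the running record
```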
At particular instances, display device 708 can provide additional information relating to the audience feedback. One such example is shown in
In summary, the feedback collected during presentation of content, such as a meeting or a movie, can also be used after the meeting to retrieve or summarize meeting content (e.g., individual slides from a larger slide deck, portions of a transcript, segments of a video, etc.). Meetings typically last for 30 minutes to many hours. There are a variety of reasons why a person would like to review the important content of a meeting without replaying the entire meeting. For example, the person might not have been able to attend or may want to prepare a written summary. Existing approaches include analyzing audio and video recordings of meetings via signal processing to determine key points in time, synchronizing with slide decks, etc. However, these methods use either inferred sentiment or sentiment-agnostic techniques that may generate many false positive "important" moments. In contrast, the present implementations can obtain and aggregate attendee feedback and correlate that feedback to the content so that a subsequent user can utilize the comments as a guide to points of interest in the content.
Stated another way, the above discussion can provide the ability to view feedback over time, to associate or correlate feedback events with meeting artifacts such as slides, transcripts, or video recordings, and to use the feedback to summarize meeting artifacts.
In this case, display 106 can be a monitor, TV, or projector that is coupled to notebook computing device 104 and is not described further. However, in some implementations the display could be a smart device with some or all of the capabilities described below.
In the present configuration each of the smart phones 102(1)-102(4) can include a processor 1002, storage/memory 1004, an interactive participation component 1008, wireless circuitry 1006, cell circuitry 1010, and positional circuitry 1012. Further, notebook computing device 104 also includes a processor 1002, storage/memory 1004, an interactive participation component 1008, and wireless circuitry 1006. Suffixes (e.g., (1), (2), (3), (4), or (5)) are used to reference a specific instance of these elements on specific respective smart phones or the notebook computing device. Use of these designators without a suffix is intended to be generic. The discussed elements are introduced relative to particular implementations and are not intended to be essential. Of course, individual devices can include alternative or additional components that are not described here for sake of brevity. For instance, devices can include input/output elements, buses, graphics cards, power supplies, optical readers, and/or USB ports, among a myriad of potential configurations.
Smart phones 102(1)-102(4) and notebook computing device 104 can be thought of as computers or computing devices. Examples of computing devices can alternatively or additionally include traditional computing devices, such as personal computers, cell phones, mobile devices, personal digital assistants, pad-type computers, cameras, or any of a myriad of ever-evolving or yet to be developed types of computing devices.
Computing devices can be defined as any type of device that has some amount of processing capability and/or storage capability. Processing capability can be provided by processor 1002 that can execute data in the form of computer-readable instructions to provide a functionality. Data, such as computer-readable instructions, can be stored on storage/memory 1004. The storage/memory can be internal and/or external to the computer.
The storage/memory 1004 can include any one or more of volatile or non-volatile memory, hard drives, flash storage devices, and/or optical storage devices (e.g., CDs, DVDs, etc.), among others. As used herein, the term “computer-readable media” can include signals. In contrast, the term “computer-readable storage media” excludes signals. Computer-readable storage media can include “computer-readable storage devices.” Examples of computer-readable storage devices include volatile storage media, such as RAM, and non-volatile storage media, such as hard drives, optical discs, and flash memory, among others.
In the illustrated implementation, computing devices are configured with general purpose processors and storage/memory. In some configurations, such devices can include a system on a chip (SOC) type design. In such a case, functionalities can be integrated on a single SOC or multiple coupled SOCs. In one such example, the computing devices can include shared resources and dedicated resources. An interface(s) can facilitate communication between the shared resources and the dedicated resources. As the name implies, dedicated resources can be thought of as including individual portions that are dedicated to achieving specific functionalities. For instance, in this example, the dedicated resources can include any of the wireless circuitry 1006 and/or the interactive participation component 1008.
Shared resources can be storage, processing units, etc. that can be used by multiple functionalities. In this example, the shared resources can include the processor and/or storage/memory. In one case, interactive participation component 1008 can be implemented as dedicated resources. In other configurations, this component can be implemented on the shared resources and/or the processor can be implemented on the dedicated resources.
Wireless circuitry 1006 can include a transmitter and/or a receiver that can function cooperatively to transmit and receive data at various frequencies in the RF spectrum. The wireless circuitry can also operate according to various wireless protocols, such as Bluetooth, Wi-Fi, etc. to facilitate communication between devices.
In one case, the notebook computing device's wireless circuitry 1006(5) can function as a Wi-Fi group leader relative to the smart phone devices 102(1)-102(4) to facilitate the interactive feedback. In other cases, the notebook computing device may work in cooperation with the presenter's smart phone 102(4) which can facilitate communications among the various devices to facilitate the interactive feedback.
Cell circuitry 1010 can be thought of as a subset of wireless circuitry 1006. The cell circuitry can allow the smart phones 102 to access cellular networks. The cellular networks may be utilized for communication between devices and/or the cloud as described above.
Positional circuitry 1012 can be any type of mechanism that can detect or determine relative position, orientation, movement, and/or acceleration of the smart phone device 102. For instance, positional circuitry can be implemented as one or more gyroscopes, accelerometers, and/or magnetometers. In one example, these devices can be manifest as microelectromechanical systems (MEMS). Examples of techniques that utilize the positional circuitry are described above relative to
Interactive participation component 1008 can allow audience members and/or a presenter to share ideas and thoughts in real-time. The interactive participation component 1008 can operate cooperatively with the wireless circuitry 1006 to facilitate communication between the various devices.
Briefly, in some implementations the interactive participation component 1008 can be configured to receive audience feedback during a presentation and to aggregate the feedback. In some cases the interactive participation component can send a summary of the aggregated feedback to a first device for display concurrently with the presentation and send the summary to a presenter's smart phone during the presentation.
In some cases, the interactive participation components 1008 employed in a system can each be fully functioning, robust components. In other configurations, an instance of the interactive participation component 1008 associated with the presenter may be robust, while those associated with the audience members may offer a more limited functionality. For example, in the illustrated configuration, an instance of the interactive participation component 1008(1) or 1008(2) on the presenter's notebook computing device 104 and/or smart phone 102(4), respectively, may function in a ‘lead’ role that registers audience members' smart phones 102(1)-102(3). This lead interactive participation component can transmit questions to the audience members' smart phones. The lead interactive participation component can receive feedback from the audience members' smart phones and aggregate and/or otherwise process the feedback.
The lead interactive participation component 1008(1) or 1008(2) can present the aggregated feedback adjacent to the presenter's content via a second portion of the display (e.g., sidebar), within the content or on a separate device from the content. The lead interactive participation component can employ algorithms to generate badges when there are interesting feedback events. The lead interactive participation component can then send the badge to the corresponding smart phone. The lead interactive participation component may cause the smart phone to vibrate or otherwise notify the user of the badge. An alternative configuration is described below relative to
One technique for accomplishing an interactive participation session can entail a user (e.g., presenter) engaging a graphical user interface (GUI) generated on notebook computer 104 by interactive participation component 1008(5). The user can request an interactive participation session on the GUI. The interactive participation component 1008(5) can cause the interactive participation session request to be sent to interactive participation component 1008(6) on the cloud. Interactive participation component 1008(6) can generate an interactive participation session and a mechanism to log into (e.g., register with) the session. For example, the mechanism can be a URI or a code such as a QR code (this aspect is described in more detail above relative to
Interactive participation component 1008(6) can send the log-in mechanism back to notebook computer 104. The notebook computer's interactive participation component 1008(5) can cause the log-in mechanism to be displayed on display 106 (and/or otherwise made available to attendees). Any attendees can utilize the log-in mechanism to join the interactive participation session via their smart phone (e.g., smart phones 102(1), 102(2), and 102(3)). Notebook computer 104 may also provide another log-in mechanism or a derivation thereof to the presenter so that the presenter's smart phone 102(4) is distinguished by interactive participation component 1008(6) as the presenter's smart phone as opposed to the audience members' smart phones. Once the session begins, interactive participation component 1008(6) can obtain feedback from audience members' smart phones, aggregate the feedback and/or otherwise process the feedback as participation data to generate the features described relative to second portion 116 of the display described relative to
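The session creation and log-in flow described above might be sketched as follows. The class, the join-URI format, and the placeholder host are hypothetical; a real deployment could encode the returned URI in a QR code as described earlier.

```python
import uuid

class SessionRegistry:
    """Sketch of a cloud-side component that mints a unique session and a
    log-in mechanism, and that distinguishes the presenter's device from
    audience members' devices (all names here are illustrative)."""

    def __init__(self):
        self.sessions = {}

    def create_session(self):
        session_id = uuid.uuid4().hex  # unique registration for this session
        self.sessions[session_id] = {"participants": set(), "presenter": None}
        # Placeholder host below; not a real service endpoint.
        return session_id, f"https://example.com/join/{session_id}"

    def join(self, session_id, device_id, as_presenter=False):
        session = self.sessions[session_id]
        if as_presenter:
            session["presenter"] = device_id  # distinguished log-in mechanism
        else:
            session["participants"].add(device_id)
```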
Similarly, the implementation described relative to
In summary, at least some of the implementations described above can provide an end-to-end, real-time interactive presentation feedback system. Some implementations can include a shared visualization of audience feedback, projected alongside the (presenter's or presented) content. This can be accomplished on the same display device or a different display device. This visualization can allow the audience and the speaker to take the collective temperature of the audience at any given time during a presentation of the content. The displayed feedback can be ambient and complementary to, rather than in competition with, the presentation content.
The present concepts can leverage the detection of interesting feedback events. In light of the description above relative to
Some versions can include several components: a mobile client for providing feedback, a server component that collects the feedback, a shared visualization of the feedback, badges designed to include the speaker in the feedback, and a post-meeting summary of the feedback. One implementation of each of these components is discussed in greater detail below.
Feedback Mobile Client
Meeting attendees provide feedback by visiting a webpage or by installing a feedback mobile phone application. For the webpage, the attendee is uniquely identified with a cookie. For the application, the attendee is uniquely identified with a user ID. (The application may also gather additional information about the participant such as gender, job role, or other recorded signals including geographic location, mobile operator, IP address, etc.). The webpage can exist to encourage early adoption, while the application provides a richer user experience. All experiences can be optimized for the mobile phone, pad-type device, etc. Audience members can provide positive feedback using a green thumbs up button, and negative feedback using a red thumbs down button. Other types of feedback could be provided, including go faster, go slower, "identify me in the shared visualization," or specific speaker-identified responses intended to elicit specific audience responses (e.g., polling, voting, or survey questions). In addition to button presses, gestures could be used to provide feedback.
Feedback Server
A server component can collect feedback from participants and display the feedback to the group. The server component may also record the audio or video from the meeting. Feedback and associated signals can be stored in a retrieval system, such as a database.
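A minimal sketch of such a server-side store appears below. It stands in for a real database; the class and method names are assumptions for illustration only.

```python
from collections import defaultdict

class FeedbackStore:
    """Sketch of a server component: collects (participant, vote, timestamp)
    records and exposes an aggregate tally for the shared visualization."""

    def __init__(self):
        self.records = []  # a real system would persist these in a database

    def record(self, participant_id, vote, timestamp):
        self.records.append({"participant": participant_id,
                             "vote": vote,
                             "timestamp": timestamp})

    def tally(self):
        counts = defaultdict(int)
        for r in self.records:
            counts[r["vote"]] += 1
        return dict(counts)
```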
Feedback Sidebar
Feedback can be displayed to the audience members in a shared sidebar representation. Each "vote" on the client can correspond to a "light" on the sidebar, which changes to a color representing the feedback provided. Other visual features, such as shape, could be used to represent different types of feedback. The feedback can fade back to neutral over time.
The sidebar can be a stand-alone executable. When a slide presentation uses a specially designed template, the active sidebar can be positioned to float above a blank region on the template so that it appears immediately adjacent to the slide content. The sidebar could also be shown on its own, separately from a slide deck, either projected individually or shown on specialized hardware. It could also be built directly into a slide projecting application like PowerPoint® or other presentation software.
Badges and Speaker Notification
Badges can be triggered by certain individual behaviors, group behaviors or participation milestones, including those related to the type, quantity, quality, and timing of the feedback provided (e.g., participation data). Particular badges can be cued to appear by the speaker (e.g., in a "voting" scenario). The speaker's phone can buzz (e.g., vibrate) when a badge is triggered. Audience member phones may also vibrate. Badges could alternatively or additionally be represented in an auditory manner (e.g., as an audio message).
Post-Meeting Analysis of Feedback
After a meeting, users are able to view a summary of the participant feedback over time. Users can analyze feedback and signals recorded to determine “interesting moments,” or have such moments automatically identified for them. Interesting moments are synchronized in time (e.g., correlated) with the audio and video. A user can then replay only the time regions surrounding moments of interest. Feedback provided by subsets of participants (e.g., by demographics or job role) can also be viewed. Other methods of summarization such as transcription can be used to summarize interesting moments. Alternative and/or additional implementations are described above and below.
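One simple way the "interesting moments" could be identified automatically is by bucketing feedback timestamps and surfacing the buckets with the most activity. The sketch below is illustrative only; the bucket size and vote threshold are assumptions, not values from the described implementations.

```python
from collections import Counter

def interesting_moments(offsets_seconds, bucket_seconds=60, min_votes=3):
    """Bucket feedback timestamps (seconds into the recording) and return the
    start offsets of buckets whose vote count meets an assumed threshold,
    i.e., candidate moments to synchronize with the audio/video."""
    buckets = Counter(int(t // bucket_seconds) for t in offsets_seconds)
    return sorted(b * bucket_seconds for b, n in buckets.items() if n >= min_votes)
```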
At block 1202, the method can associate multiple mobile devices with a presentation.
At block 1204, the method can receive feedback relating to the presentation from at least some of the mobile devices.
At block 1206, the method can aggregate the feedback into a visualization that is configured to be presented in parallel with the presentation. In one example, this visualization can be visible to all of the audience members and the presenter.
At block 1208, the method can generate another visualization for an individual mobile device that generated individual feedback. In one implementation, this other visualization is a badge that is displayed only on an individual mobile device of a recipient. The recipient may be an individual audience member or the presenter. Thus, this implementation can provide a summary of the feedback to everyone and individualized feedback for certain participants.
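The blocks above might be sketched end to end as follows. The callables passed in are hypothetical placeholders standing in for device registration, feedback collection, and per-device notification; none of the names come from the described implementations.

```python
def run_interactive_presentation(devices, collect_feedback, notify):
    """Sketch of blocks 1202-1208: associate devices with a presentation,
    receive their feedback, aggregate it into a shared visualization, and
    send individualized visualizations (e.g., badges) back to devices."""
    session = {"devices": set(devices)}                  # block 1202: associate
    feedback = collect_feedback(session["devices"])      # block 1204: receive
    shared = {"likes": sum(1 for _, v in feedback if v == "like"),
              "dislikes": sum(1 for _, v in feedback if v == "dislike")}  # block 1206
    for device, vote in feedback:                        # block 1208: individualized
        notify(device, {"badge": "participant", "your_vote": vote})
    return shared
```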
At block 1302, the method can receive a request to establish an interactive participation session.
At block 1304, the method can obtain a unique registration for the interactive participation session. Various examples are described above, such as QR codes and URLs, among others. In another example, the users could go to a web page that supports interactive participation sessions generally and then utilize a unique ID or registration that is specific to an individual interactive participation session.
At block 1306, the method can allow computing devices to join the interactive participation session utilizing the unique registration.
At block 1308, the method can correlate feedback from the computing devices to content from the interactive participation session. In this case, correlating feedback can be thought of as identifying a relationship between the feedback and the session; the relationship can be temporally based and/or content based, among others.
The methods can be performed by any of the computing devices described above and/or by other computing devices. The order in which the above methods are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order to implement the method, or an alternate method. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof, such that a computing device can implement the method (e.g., computer-implemented method). In one case, the method is stored on computer-readable storage media as a set of instructions such that execution by a computing device causes the computing device to perform the method.
Although techniques, methods, devices, systems, etc., pertaining to real-time interactive participation implementations are described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed methods, devices, systems, etc.