A presenter who is giving a live presentation to an audience oftentimes performs and narrates a demonstration during the presentation. For example, a presenter who is giving a live presentation about a new software application oftentimes gives the audience a demonstration of the application and its features during the presentation. By demonstrating real-world user interactions with the software application (e.g., by guiding the audience through exemplary user inputs and showing the audience the results thereof), the presenter can showcase various features of the application to the audience. As such, performing and narrating a demonstration during a live presentation can be an effective way to communicate with the audience and keep them engaged.
However, performing and narrating a live demonstration can also be very risky since the presenter can encounter many different and unexpected issues during the demonstration, issues that can significantly decrease the demonstration's effectiveness and its ability to keep the audience engaged. Such issues include the presenter forgetting the sequence of steps to be performed during the live demonstration, forgetting the narration content that is to be spoken in conjunction with each of these steps, or forgetting to demonstrate certain features. In the case where the presenter is performing and narrating a live demonstration of a software application on a real working computer system, such issues also include software crashes and hangs resulting from the software application being buggy and thus unstable, computer system failures, mismatched display screen resolutions, and unreliable network connectivity, to name a few. Furthermore, even if none of these issues occurs during a live demonstration, the presenter often worries about the possible occurrence of one or more of them and can thus become distracted from the live demonstration. As a result, during a live demonstration the audience may need to wait for system problems to be resolved and may also see the presenter rambling, thus making the live demonstration ineffective and causing the audience to become disengaged. The just-described live demonstration issues can be addressed by the presenter playing back a previously recorded screencast video of the demonstration to the audience.
This Summary is provided to introduce a selection of concepts, in a simplified form, that are further described hereafter in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Demonstration re-performing technique embodiments described herein are generally applicable to allowing presenters to re-perform demonstrations during live presentations. In one exemplary embodiment user input events in a screencast video of a demonstration are identified as follows. The screencast video is input, and data associated with a sequence of low-level user input events that takes place during the demonstration is also input. The sequence of low-level user input events is then converted into a sequence of high-level user input events. Portions of the screencast video are then identified as either event portions or inactive portions, where this identification includes mapping each of the high-level user input events to a different event portion, and mapping each gap in time that exists between two consecutive high-level user input events to a different inactive portion.
In another exemplary embodiment a presenter is allowed to rehearse and edit a demonstration. A screencast video of the demonstration is input. Metadata that is associated with a sequence of high-level user input events that takes place during the demonstration is also input, where this metadata identifies portions of the screencast video as either event portions or inactive portions, each of the high-level user input events is mapped to a different event portion, and each gap in time that exists between two consecutive high-level user input events is mapped to a different inactive portion. Metadata that is associated with each of the portions is also input. Upon receiving a request from the presenter to play back the screencast video, an augmented version of the screencast video is played back to the presenter on a display device. This augmented version is generated on-the-fly as the screencast video is being played back, and during at least one point in time during this playback this augmented version includes a visualization of the current high-level user input event that is automatically overlaid on top of the screencast video at the screen location where the current high-level user input event takes place, and a visualization of the next high-level user input event that is automatically overlaid on top of the screencast video at the screen location where the next high-level user input event takes place.
In yet another exemplary embodiment the presenter is allowed to re-perform the demonstration during a live presentation to an audience. A screencast video of the demonstration is input. Metadata that is associated with the sequence of high-level user input events that takes place during the demonstration is also input, where this metadata identifies portions of the screencast video as either event portions or inactive portions, each of the high-level user input events is mapped to a different event portion, and each gap in time that exists between two consecutive high-level user input events is mapped to a different inactive portion. A video playback graphical user interface (GUI) is then displayed on a private display device that just the presenter is able to see, where this GUI includes a video sector and a timeline sector. Upon receiving a request from the presenter to play back the screencast video, the following actions occur. An augmented version of the screencast video is played back within the video sector, where during at least one point in time during this playback this augmented version includes a visualization of the current high-level user input event and a visualization of the next high-level user input event. An event timeline is displayed within the timeline sector, where the event timeline includes an overall timeline that shows each of the event portions and each of the inactive portions of the screencast video.
The specific features, aspects, and advantages of the demonstration re-performing technique embodiments described herein will become better understood with regard to the following description, appended claims, and accompanying drawings where:
In the following description of demonstration re-performing technique embodiments reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the demonstration re-performing technique can be practiced. It is understood that other embodiments can be utilized and structural changes can be made without departing from the scope of the demonstration re-performing technique embodiments.
It is also noted that for the sake of clarity specific terminology will be resorted to in describing the demonstration re-performing technique embodiments described herein and it is not intended for these embodiments to be limited to the specific terms so chosen. Furthermore, it is to be understood that each specific term includes all its technical equivalents that operate in a broadly similar manner to achieve a similar purpose. Reference herein to “one embodiment”, or “another embodiment”, or an “exemplary embodiment”, or an “alternate embodiment”, or “one implementation”, or “another implementation”, or an “exemplary implementation”, or an “alternate implementation” means that a particular feature, a particular structure, or particular characteristics described in connection with the embodiment or implementation can be included in at least one embodiment of the demonstration re-performing technique. The appearances of the phrases “in one embodiment”, “in another embodiment”, “in an exemplary embodiment”, “in an alternate embodiment”, “in one implementation”, “in another implementation”, “in an exemplary implementation”, and “in an alternate implementation” in various places in the specification are not necessarily all referring to the same embodiment or implementation, nor are separate or alternative embodiments/implementations mutually exclusive of other embodiments/implementations. Yet furthermore, the order of process flow representing one or more embodiments or implementations of the demonstration re-performing technique does not inherently indicate any particular order nor imply any limitations of the demonstration re-performing technique.
The term “presenter” is used herein to refer to one or more people who are using a computer (herein also referred to as a computing device) to give a live presentation that includes the performance of a demonstration. Generally speaking and as is appreciated in the art of computers, a screencast (also known as a video screen capture) is a digital recording of what is displayed on the display screen of a computer display device (hereafter simply referred to as the display screen of a computer) over time. A screencast can optionally include audio content such as a narration, or the like. Accordingly, the term “screencast video” is used herein to refer to a video recording that captures what is displayed within a prescribed region of the display screen of a computer over time. A screencast video thus records the pixels within this prescribed region over time.
The term “visualization” is used herein to refer to either a graphical digital object, or a textual digital object, or a combination thereof that is overlaid on top of a video which is being played back on the display screen of a computer and can be quickly perceived and interpreted by a presenter. As will be described in more detail hereafter, in the demonstration re-performing technique embodiments described herein various types of visualizations can be overlaid on top of a screencast video, where these visualizations are visible to just a presenter (e.g., the visualizations cannot be seen by an audience). The term “sector” is used herein to refer to a segmented region of the display screen of a computer in which a particular type of graphical user interface (GUI) and/or information (such as a screencast video, and one or more visualizations, among other things) can be displayed, or a particular type of action can be performed by a presenter or another user, where the GUI/information/action are generally associated with a particular application program that is running on the computer. As is appreciated in the art of computer operating environments, a given display screen can include a plurality of different sectors which may be layered or overlapped one on top of another. A given display screen can also be touch-sensitive.
Generally speaking and as will be described in more detail hereafter, the demonstration re-performing technique embodiments described herein provide presenters with an alternative to performing and narrating a live demonstration during a live presentation. More particularly, the demonstration re-performing technique embodiments allow a presenter to re-perform a demonstration during a live presentation by playing back a screencast video of the demonstration to an audience in a controlled manner and giving a live narration of the video as it is being played back. The demonstration re-performing technique embodiments also allow the presenter to rehearse their presentation (e.g., their live narration) of the video as it is being played back, and quickly edit the playback of the video based on their rehearsal experiences.
The demonstration re-performing technique embodiments described herein generate two different versions of the screencast video during its playback, namely an augmented version and an audience version. In an exemplary embodiment of the demonstration re-performing technique the augmented version of the screencast video is played back to just the presenter (it cannot be seen by the audience) and the audience version of the screencast video is played back to the audience. The augmented version includes various types of information that is automatically timed to the screencast video on-the-fly as it is being played back. This information includes various types of visualizations that are overlaid on top of the video on-the-fly as it is being played back, and an event timeline that is displayed adjacent to the video, among other things. In one embodiment of the demonstration re-performing technique the audience version is a non-augmented version of the screencast video (e.g., the audience version does not include any of the just-described information that is included in the augmented version). In another embodiment of the demonstration re-performing technique the audience version includes a user-configurable subset of this information. In yet another embodiment of the demonstration re-performing technique the audience version is the same as the augmented version.
As will be appreciated from the more detailed description that follows, the just-described visualizations and event timeline generally serve as visual cues that make the presenter who is playing back and narrating the screencast video aware of various aspects of the upcoming content and events in the demonstration that is recorded in the video. More particularly and by way of example but not limitation, the event timeline that is displayed adjacent to the video makes the presenter aware of the different topics that are covered in the demonstration, the sequence and timing of these different topics, and the sequence and timing of user input events and the results thereof that take place during each of the topics, among other things. The presenter can use the event timeline to navigate the video playback in various ways such as jumping to a specific topic in the video, or jumping to a specific event in the video, or jumping to a specific point in time in the video. The visualizations that are overlaid on top of the video make the presenter aware of when upcoming user input events will happen and where on the screen they will happen. The visualizations can also remind the presenter of one or more talking points that are to be spoken as the video is being played back, and when to speak each of the talking points.
The visualizations and event timeline are advantageous for various reasons including, but not limited to, the following. Generally speaking and as will be appreciated from the more detailed description that follows, the visualizations and event timeline provide the presenter with on-the-fly information assistance that enables them to anticipate, rather than react to, the content, the user input events, and the results thereof that are coming up in the screencast video as it is being played back. The visualizations and event timeline thus help the presenter guide the audience's attention to the right place at the right time. The visualizations and event timeline are also glanceable (e.g., the presenter can quickly perceive and interpret each of the visualizations and the event timeline at a glance with a minimal amount of attention).
The demonstration re-performing technique embodiments described herein are advantageous for various reasons including, but not limited to, the following. Generally speaking and as will be appreciated from the more detailed description that follows, the demonstration re-performing technique embodiments support the entire demonstration authoring process including, but not limited to, preparing a given demonstration, rehearsing and fine tuning (e.g., editing) the demonstration, and giving the demonstration during a live presentation. The demonstration re-performing technique embodiments also minimize the cognitive load and stress on presenters and allow them to give more effective, more understandable, and more engaging demonstrations in a natural, at ease, and time-efficient manner. As such, the demonstration re-performing technique embodiments enhance a presenter's experience in giving a live presentation that includes the performance and live narration of a demonstration. The demonstration re-performing technique embodiments also maximize the audience's attention to and level of engagement with the demonstration.
More particularly and by way of example but not limitation, the demonstration re-performing technique embodiments described herein eliminate the aforementioned issues that can occur during a live demonstration. The demonstration re-performing technique embodiments also allow a presenter to show an audience just the most significant parts of the demonstration in the lowest risk, most understandable, and most time-efficient manner. The demonstration re-performing technique embodiments also minimize the need for a presenter to have to remember the aforementioned various aspects of the demonstration. Accordingly, the demonstration re-performing technique embodiments make the presenter seem more at ease and in control while they are playing back and narrating the screencast video of the demonstration during the live presentation, thus further enhancing the effectiveness of the demonstration. The demonstration re-performing technique embodiments also help the presenter optimally guide the audience through the video as it is being played back. The demonstration re-performing technique embodiments also help the presenter precisely match the content and timing of their live narration to the video as it is being played back.
Referring again to
As is appreciated in the art of software applications, in the case where the demonstration that is being performed is a software application demonstration, various types of low-level user input events can take place during the demonstration including, but not limited to, the following. A mouse-button-down event is herein defined to take place whenever the user manually depresses a given button on the mouse. A mouse-button-up event is herein defined to take place whenever the user releases this button. A mouse-wheel-down event is herein defined to take place whenever the user rotates a scroll wheel on the mouse downward. A mouse-wheel-up event is herein defined to take place whenever the user rotates the scroll wheel upward. A key-press event is herein defined to take place whenever the user manually depresses a given key on the keyboard.
Various types of data are stored for each of the low-level user input events that take place during the demonstration, examples of which include, but are not limited to, the following. A time-stamp identifying when the low-level user input event takes place in the demonstration is stored. An event-type identifying the type of low-level user input event that takes place is also stored. The screen location (e.g., the x-y coordinates on the screen) where the low-level user input event takes place is also stored. In an exemplary embodiment of the demonstration re-performing technique described herein the time-stamp for the low-level user input event is obtained from the operating system of the computer and has an accuracy of 0.1 seconds. It is noted that alternate embodiments of the demonstration re-performing technique are also possible where the time-stamp can be obtained from a source other than the operating system of the computer, and can have an accuracy that is either greater than 0.1 seconds or less than 0.1 seconds.
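The per-event data just described can be sketched as a simple record. This is a minimal Python illustration, not taken from the source; the field names (`timestamp`, `event_type`, `x`, `y`) and the rounding helper are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LowLevelEvent:
    timestamp: float   # seconds into the demonstration, stored at 0.1 s accuracy
    event_type: str    # e.g., "mouse-button-down", "key-press"
    x: int             # screen x-coordinate where the event takes place
    y: int             # screen y-coordinate where the event takes place

def store_event(event_type, timestamp, x, y):
    # Round the operating-system time-stamp to the stated 0.1-second
    # accuracy before storing it with the event.
    return LowLevelEvent(round(timestamp, 1), event_type, x, y)
```

A mouse-button-down event at 3.14159 seconds would thus be stored with a time-stamp of 3.1 seconds.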
Referring again to
Referring again to
A mouse-button-down mouse-button-up event sequence that takes place at substantially the same screen location is converted into a single-click event. As is appreciated in the art of computer user interface design, the user can perform a single-click event to select a particular item (such as either a file, or a menu item, or a toolbar icon, or a text field, or a checkbox, or a graphical button, or the like) that is displayed at a specific screen location. The start-time of the single-click event is the time-stamp of the mouse-button-down event in this sequence. The end-time of the single-click event is the time-stamp of the mouse-button-up event in this sequence.
A mouse-button-down mouse-button-up mouse-button-down mouse-button-up event sequence that takes place at substantially the same screen location within a prescribed double-click period of time is converted into a double-click event. It will be appreciated that this double-click period of time can have various values. By way of example but not limitation, the double-click period of time can be set to either a user-prescribed value or a default value that is employed in the computer on which the demonstration is captured. As is also appreciated in the art of computer user interface design, the user can perform a double-click event to open a particular file that is displayed at a specific screen location. The start-time of the double-click event is the time-stamp of the first mouse-button-down event in this sequence. The end-time of the double-click event is the time-stamp of the last mouse-button-up event in this sequence.
A mouse-button-down mouse-button-up event sequence that takes place at two different screen locations is converted into a drag event. As is also appreciated in the art of computer user interface design, the user can perform a drag event to move a selected item that is displayed on the display screen. The start-time of the drag event is the time-stamp of the mouse-button-down event in this sequence. The start-location of the drag event is the screen location where the mouse-button-down event takes place. The end-time of the drag event is the time-stamp of the mouse-button-up event in this sequence. The end-location of the drag event is the screen location where the mouse-button-up event takes place.
One or more consecutive mouse-wheel-down events that take place within a prescribed scrolling period of time is converted into a scroll-down event. The start-time of the scroll-down event is the time-stamp of the first of these mouse-wheel-down events. The end-time of the scroll-down event is the time-stamp of the last of these mouse-wheel-down events. One or more consecutive mouse-wheel-up events that take place within the scrolling period of time is converted into a scroll-up event. The start-time of the scroll-up event is the time-stamp of the first of these mouse-wheel-up events. The end-time of the scroll-up event is the time-stamp of the last of these mouse-wheel-up events. As is also appreciated in the art of computer user interface design, the user can perform a scroll-down/up event within a given sector on the display screen to scroll down/up the information that is displayed within the sector.
One or more consecutive key-press events that take place within a prescribed text-entry period of time is converted into a keystroke event. As is also appreciated in the art of computer user interface design, the user can perform a keystroke event to enter any character string (such as either a word, or a word phrase, or a number, or a word-number combination, or the like) into the computer. The start-time of the keystroke event is the time-stamp of the first of these key-press events. The end-time of the keystroke event is the time-stamp of the last of these key-press events. In an exemplary embodiment of the demonstration re-performing technique described herein the scrolling period of time is two seconds, and the text-entry period of time is equal to the scrolling period of time. It is noted that alternate embodiments are also possible where the scrolling period of time is either greater than two seconds or less than two seconds, and where the text-entry period of time is either greater than the scrolling period of time or less than the scrolling period of time.
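The conversion rules above can be sketched as a single pass over the time-ordered low-level event sequence. This is a hedged Python sketch, not the source's implementation: the 5-pixel "substantially the same screen location" tolerance and the 0.5-second double-click window are assumed values (the text leaves both configurable), and the 2-second grouping window follows the stated scrolling/text-entry period.

```python
DOUBLE_CLICK_WINDOW = 0.5   # assumed value; user-configurable per the text
GROUP_WINDOW = 2.0          # scrolling and text-entry period of time

def near(a, b, tol=5):
    """Substantially the same screen location (tolerance in pixels, assumed)."""
    return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol

def convert(events):
    """events: list of (timestamp, type, x, y) tuples sorted by timestamp.
    Returns high-level events as tuples whose first three fields are
    (name, start_time, end_time)."""
    high, i = [], 0
    while i < len(events):
        t, kind, x, y = events[i]
        if kind == "mouse-button-down" and i + 1 < len(events) \
                and events[i + 1][1] == "mouse-button-up":
            up = events[i + 1]
            if not near((x, y), (up[2], up[3])):
                # down and up at different screen locations -> drag event
                high.append(("drag", t, up[0], (x, y), (up[2], up[3])))
                i += 2
                continue
            # look ahead for a second down/up pair at the same location
            # within the double-click window -> double-click event
            if i + 3 < len(events) \
                    and events[i + 2][1] == "mouse-button-down" \
                    and events[i + 3][1] == "mouse-button-up" \
                    and near((x, y), (events[i + 2][2], events[i + 2][3])) \
                    and events[i + 3][0] - t <= DOUBLE_CLICK_WINDOW:
                high.append(("double-click", t, events[i + 3][0], (x, y)))
                i += 4
            else:
                high.append(("single-click", t, up[0], (x, y)))
                i += 2
        elif kind in ("mouse-wheel-down", "mouse-wheel-up", "key-press"):
            # group consecutive events of the same type whose inter-event
            # gaps fall within the prescribed period of time
            j = i
            while j + 1 < len(events) and events[j + 1][1] == kind \
                    and events[j + 1][0] - events[j][0] <= GROUP_WINDOW:
                j += 1
            name = {"mouse-wheel-down": "scroll-down",
                    "mouse-wheel-up": "scroll-up",
                    "key-press": "keystroke"}[kind]
            high.append((name, t, events[j][0], (x, y)))
            i = j + 1
        else:
            i += 1
    return high
```

For example, a down/up pair, a second down/up pair at the same location 0.3 seconds later, a down/up pair at two different locations, and two key-presses 0.5 seconds apart would convert to a single-click, a double-click, a drag, and a keystroke event, respectively.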
After the sequence of low-level user input events has been analyzed and converted into a sequence of high-level user input events as just described, the sequence of high-level user input events is used to identify portions of the screencast video as either event portions or inactive portions, where this identification includes storing metadata for each of the portions of the screencast video. More particularly, each of the high-level user input events is mapped to a different event portion. Whenever a gap in time exists between two consecutive high-level user input events (e.g., whenever the end-time of a particular high-level user input event is not the same as the start-time of the high-level user input event that immediately succeeds the particular high-level user input event), this gap in time is mapped to an inactive portion. Examples of such a gap in time can include, but are not limited to, a period of time during which the user is simply moving the mouse, or another period of time during which a user interface transition is taking place, or yet another period of time during which the same video frame is repeated in the screencast video (e.g., there are no visible changes in the video).
In an exemplary embodiment of the demonstration re-performing technique described herein the metadata that is stored for a given portion of the screencast video includes, but is not limited to, the duration of the portion and an indicator specifying whether the portion is an event portion or an inactive portion. In the case where the portion is an event portion, the metadata that is stored for the portion also includes another indicator specifying the particular high-level user input event that is mapped to the portion. In the case where the portion is an event portion, the duration of the portion is initially computed to be the end-time of the high-level user input event that is mapped to the portion minus the start-time of this high-level user input event. In the case where the portion is an inactive portion, the duration of the portion is initially computed to be the duration of the gap in time that is mapped to the portion.
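The mapping of high-level events and inter-event gaps to portions, together with the per-portion metadata just described, can be sketched as follows. This is an illustrative Python sketch under assumed field names (`kind`, `event`, `duration`, `start`, `end`); high-level events are assumed to be (name, start-time, end-time) tuples.

```python
def identify_portions(high_events, video_duration):
    """Map each high-level event to an event portion and each gap in time
    between consecutive events to an inactive portion."""
    portions, cursor = [], 0.0
    for name, start, end in high_events:
        if start > cursor:
            # gap before this event -> inactive portion
            portions.append({"kind": "inactive", "duration": start - cursor,
                             "start": cursor, "end": start})
        # the event itself -> event portion, with its mapped event recorded
        portions.append({"kind": "event", "event": name,
                         "duration": end - start, "start": start, "end": end})
        cursor = end
    if video_duration > cursor:
        # trailing inactivity after the last event
        portions.append({"kind": "inactive", "duration": video_duration - cursor,
                         "start": cursor, "end": video_duration})
    return portions
```

A single-click at 1.0-1.2 seconds and a keystroke at 3.0-4.0 seconds in a 5-second video would thus yield five alternating inactive and event portions.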
After portions of the screencast video have been identified as either event portions or inactive portions as just described, the duration of each of the event portions of the screencast video can optionally be adjusted as necessary in order to ensure that the event portion can be easily observed by the audience. In an exemplary embodiment of the demonstration re-performing technique described herein this adjustment is performed in the following manner. Whenever the duration of the event portion is less than a prescribed minimum duration, the following actions will occur. Whenever an inactive portion immediately precedes the event portion the duration of the event portion is expanded by subtracting a prescribed amount of time from the start-time of the event portion (thus shortening the duration of the immediately preceding inactive portion by the prescribed amount of time). Whenever an inactive portion also immediately succeeds the event portion, the duration of the event portion is further expanded by adding the prescribed amount of time to the end-time of the event portion (thus shortening the duration of the immediately succeeding inactive portion by the prescribed amount of time). It will be appreciated that this expansion of the duration of the event portion results in the playback speed of the event portion being appropriately decreased, thus slowing down the event portion so that it can be easily observed by the audience. Whenever the duration of the shortened immediately preceding inactive portion is less than the prescribed minimum duration, the shortened immediately preceding inactive portion is merged into the expanded event portion. Similarly, whenever the duration of the shortened immediately succeeding inactive portion is less than the prescribed minimum duration, the shortened immediately succeeding inactive portion is also merged into the expanded event portion.
In an exemplary embodiment of the demonstration re-performing technique described herein the prescribed minimum duration is one second and the prescribed amount of time is half a second. It is noted that other embodiments are also possible where the prescribed minimum duration is either greater than one second or less than one second, and where the prescribed amount of time is either greater than half a second or less than half a second.
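The expand-and-merge adjustment can be sketched as two passes over the portion list, using the exemplary one-second minimum duration and half-second amount of time. This is an illustrative Python sketch, not the source's implementation; portions are assumed to be dictionaries with `kind` and `duration` fields.

```python
MIN_DURATION = 1.0   # prescribed minimum duration, in seconds
DELTA = 0.5          # prescribed amount of time borrowed per side

def adjust(portions):
    """Expand short event portions by borrowing time from neighboring
    inactive portions, then merge any inactive neighbor that has fallen
    below the minimum duration into the expanded event portion."""
    out = [dict(p) for p in portions]
    # pass 1: expand each event portion shorter than the minimum
    for i, p in enumerate(out):
        if p["kind"] == "event" and p["duration"] < MIN_DURATION:
            if i > 0 and out[i - 1]["kind"] == "inactive":
                out[i - 1]["duration"] -= DELTA
                p["duration"] += DELTA
                if i + 1 < len(out) and out[i + 1]["kind"] == "inactive":
                    out[i + 1]["duration"] -= DELTA
                    p["duration"] += DELTA
    # pass 2: merge sub-minimum inactive neighbors into the event portion
    i = 0
    while i < len(out):
        if out[i]["kind"] == "event":
            if i + 1 < len(out) and out[i + 1]["kind"] == "inactive" \
                    and out[i + 1]["duration"] < MIN_DURATION:
                out[i]["duration"] += out.pop(i + 1)["duration"]
            if i > 0 and out[i - 1]["kind"] == "inactive" \
                    and out[i - 1]["duration"] < MIN_DURATION:
                out[i]["duration"] += out.pop(i - 1)["duration"]
                i -= 1
        i += 1
    return out
```

For example, a 0.4-second event portion between a 2.0-second and a 0.8-second inactive portion is expanded to 1.4 seconds, and the succeeding inactive portion (shortened to 0.3 seconds) is then merged into it; the total duration of the video is unchanged.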
Referring again to
As stated heretofore, the augmented version of the screencast video includes various types of information that is automatically timed to the video as it is being played back and provides the presenter with visual cues that make the presenter aware of various aspects of the upcoming content and events in the video. More particularly and as will be described in more detail hereafter, the augmented version of the screencast video includes an event timeline that is displayed adjacent to the video. Generally speaking, the augmented version of the screencast video also includes a visualization (e.g., a visible representation) of each of the high-level user input events that is automatically overlaid on top of the screencast video as the high-level user input event takes place and at the particular screen location where it takes place. The augmented version of the screencast video also includes a visualization of any text notes that the presenter inserts into the video while they are rehearsing it. The visualizations of both the high-level user input events and the text notes have a prescribed degree of transparency so that the presenter is able to see any video content that exists underneath the visualizations. It will be appreciated that this degree of transparency can have various values. In an exemplary embodiment of the demonstration re-performing technique described herein the degree of transparency is user configurable, which is advantageous since this transparency can be adapted to the particular characteristics of the screencast video.
In an exemplary embodiment of the demonstration re-performing technique described herein the visualization of a given high-level user input event is a simple but distinct glyph that uniquely represents the event and can be accurately interpreted by the presenter even if the screencast video is visually complex.
Generally speaking, the different glyphs 200, 206, 220, 222, 228 and 234 exemplified in
At any point in time during the playback of the screencast video, the augmented version of the screencast video includes a visualization of a prescribed number of consecutive high-level user input events, namely the high-level user input event that is currently taking place (hereafter simply referred to as the current high-level user input event) and one or more high-level user input events that immediately succeed the current high-level user input event. In an exemplary embodiment of the demonstration re-performing technique described herein this prescribed number is two so that at any point in time during the playback of the screencast video, the augmented version of the screencast video includes a visualization of the current high-level user input event and a visualization of the high-level user input event that immediately succeeds the current high-level user input event (hereafter simply referred to as the next high-level user input event).
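Selecting which events to visualize at a given playback time can be sketched as a simple lookup over the time-ordered high-level event list. This is an illustrative Python sketch under the assumption that the "current" event is the one taking place at, or most recently before, the playhead; events are assumed to be (name, start-time, end-time, location) tuples.

```python
def events_to_visualize(events, t, count=2):
    """Return up to `count` consecutive high-level events, beginning with
    the event taking place at (or most recently before) playback time t.
    With count=2 this yields the current event and the next event."""
    # index of the first event that starts after the playhead
    nxt = next((i for i, e in enumerate(events) if e[1] > t), len(events))
    cur = max(nxt - 1, 0)   # the current high-level user input event
    return events[cur:cur + count]
```

Each returned event carries its screen location, which is where its visualization is overlaid on top of the screencast video.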
Generally speaking and referring again to
Since the distance between two consecutive high-level user input events can vary, the demonstration re-performing technique embodiments described herein employ three different types of motion-arrow glyphs in order to ensure that a given motion-arrow glyph will always be visible to the presenter.
Generally speaking, various methods can be optionally employed to provide the presenter with a sense of timing for the next high-level user input event, and thus enable the presenter to optimally time their narration of the screencast video playback. By way of example but not limitation, in one embodiment of the demonstration re-performing technique described herein a progress bar can be embedded within each of the just-described different types of motion-arrow glyphs. In another embodiment of the demonstration re-performing technique each of the aforementioned different glyphs that represent the different types of high-level user input events can be implemented using a countdown version of the glyph that visually conveys the specific timing of when an impending high-level user input event will start. These progress bar and countdown version embodiments will now be described in more detail.
It will be appreciated that alternate embodiments (not shown) of the countdown version of the single-click glyph are also possible where the interval of time between successive changes in the glyph is either less than one second or greater than one second, and where the number of concentric circles is either less than three or greater than three. It will also be appreciated that similar countdown versions of the aforementioned double-click glyph, drag glyph, scroll-down glyph, scroll-up glyph, and keystroke glyph are possible.
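The timing of such a countdown glyph can be sketched as follows, assuming the exemplary values of three concentric circles and a one-second interval between successive changes in the glyph; the function name is an assumption.

```python
import math

# Illustrative sketch (name is an assumption): number of concentric
# circles still shown for a countdown glyph. One ring disappears per
# `interval` seconds as the impending event approaches, reaching zero
# when the event starts.
def countdown_rings(time_until_event, total_rings=3, interval=1.0):
    if time_until_event <= 0:
        return 0
    return min(total_rings, math.ceil(time_until_event / interval))
```

The alternate embodiments mentioned above correspond to calling this sketch with a different `total_rings` or `interval` value.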
As will be appreciated from the more detailed description of the video playback GUI that is provided hereafter, the presenter can use the GUI to quickly edit the playback of the screencast video in various ways while they are rehearsing their live narration of the video, where this editing results in revisions being made to the metadata that is stored for one or more of the portions of the video. By way of example but not limitation, the presenter can use the GUI to input a request to group a specific sequence of portions of the screencast video into a topic, and to input a text label for the topic. Upon receiving this request and text label, the metadata for each of the portions in this specific sequence will be revised to indicate that the portion is part of the topic having the text label. The presenter can define the topics in any manner they desire. By way of example but not limitation, in the case where a software application is being demonstrated, a given sequence of portions of the screencast video may be associated with a particular high-level feature of the application.
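The topic-grouping edit just described can be sketched as a revision of per-portion metadata. The function name and the metadata field names in this Python fragment are assumptions.

```python
# Illustrative sketch (names are assumptions): grouping a specific sequence
# of portions into a labeled topic revises each portion's metadata rather
# than the screencast video itself.
def group_into_topic(portions, start, end, label):
    """Tag portions[start..end] (inclusive) as belonging to one topic."""
    for p in portions[start:end + 1]:
        p["topic"] = label
```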
Generally speaking, the presenter can also use the video playback GUI to modify the content and timing of the screencast video in various ways as it is being played back. This is advantageous since it allows the presenter to fine-tune the playback of the screencast video to match the needs of a particular live presentation they will be giving (e.g., different presentations to different audiences may call for either more or less extensive narrations during certain parts of the video, the overall length of the video as it was originally recorded may exceed the amount of time the presenter has to give the presentation, and the presenter may determine that certain parts of the video progress either too slowly or too quickly). More particularly and by way of example but not limitation, the presenter can use the GUI to input a request to adjust (e.g., either increase or decrease) the playback speed of a specific portion of the screencast video. Upon receiving this request, the metadata for the specific portion will be revised to indicate that the specific portion is to be played back at this adjusted speed. An exemplary situation where the presenter may choose to increase the playback speed of a specific portion is where the portion is an event portion to which a keystroke event is mapped, and the presenter feels that the keystroke event progresses too slowly and thus may bore the audience. An exemplary situation where the presenter may choose to decrease the playback speed of a specific portion is where the portion is an event portion to which a drag event is mapped, and the presenter feels that the drag event progresses too quickly (e.g., the mouse was moved very quickly during the drag) and thus may not be understandable by the audience. The presenter may also choose to adjust the playback speed of a specific event portion in order to match the playback duration of the event portion to their narration thereof.
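The playback-speed edit just described can be sketched as follows; the function name and metadata field names are assumptions.

```python
# Illustrative sketch (names are assumptions): adjusting a portion's
# playback speed revises that portion's stored metadata, not the video.
def set_playback_speed(portion_metadata, speed):
    """Record an adjusted playback speed (e.g., 2.0 doubles the speed,
    0.5 halves it) in the portion's metadata."""
    if speed <= 0:
        raise ValueError("playback speed must be positive")
    portion_metadata["playback_speed"] = speed
    return portion_metadata

portion = {"start": 12.0, "end": 18.5, "playback_speed": 1.0}
set_playback_speed(portion, 2.0)   # e.g., speed up a slow keystroke event
```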
The presenter can also use the video playback GUI to input a request to insert a pause segment having a specified length of time into a specific portion of the screencast video. Upon receiving this request, the metadata for the specific portion will be revised to indicate that the specific portion includes this pause segment. The pause segment will cause the playback of the portion to automatically pause at the last frame thereof for the specified length of time, after which the playback of the screencast video will automatically resume. The presenter can also use the GUI to input a request to remove a previously inserted pause segment.
The presenter can also use the video playback GUI to input a request to insert a stop segment into a specific portion of the screencast video. Upon receiving this request, the metadata for the specific portion will be revised to indicate that the specific portion includes the stop segment. The stop segment will cause the playback of the portion to automatically stop at the last frame thereof. The playback of the screencast video will not resume until the presenter provides an explicit input to do so (such as the presenter pressing any key on the keyboard, or the presenter using the mouse to single-click a pause/play icon that is displayed in the GUI, among other types of input). The presenter can also use the GUI to input a request to remove a previously inserted stop segment.
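The pause-segment and stop-segment edits just described can be sketched together as metadata revisions; all function and field names in this Python fragment are assumptions.

```python
# Illustrative sketch (names are assumptions): pause and stop segments are
# recorded in a portion's metadata. Both hold playback on the portion's
# last frame; a pause resumes automatically after its specified length,
# while a stop waits for explicit presenter input.
def insert_pause(meta, seconds):
    meta["segment"] = {"kind": "pause", "length": seconds}

def insert_stop(meta):
    meta["segment"] = {"kind": "stop"}

def remove_segment(meta):
    meta.pop("segment", None)

def hold_time(meta):
    """Seconds to hold on the last frame; None means wait for input."""
    seg = meta.get("segment")
    if seg is None:
        return 0.0
    return seg["length"] if seg["kind"] == "pause" else None
```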
The presenter can also use the video playback GUI to input a request to insert a text note into a specific portion of the screencast video, where the text note has been sized and positioned by the presenter at a desired screen location (e.g., the presenter can adjust the size of the text note and move the text note to a screen location that ensures the text note won't block video content that the presenter feels the audience needs to see). Upon receiving this request, the metadata for the specific portion will be revised to indicate that the specific portion includes the text note, and that the text note is to be automatically overlaid on top of the screencast video at the desired screen location. During the playback of the screencast video a visualization of the text note will be automatically overlaid on top of the video a prescribed note display period of time before the portion starts, and the text note will be automatically removed from the display screen when the portion starts. In an exemplary embodiment of the demonstration re-performing technique described herein the prescribed note display period of time is three seconds. However, alternate embodiments of the demonstration re-performing technique are also possible where the prescribed note display period of time is either less than or greater than three seconds. This text note feature is advantageous for various reasons including, but not limited to, the following. Text notes can be used by the presenter to remind them of a particular talking point that is to be spoken, or a particular feature that will be shown, during the playback of the portion. The presenter can also use the GUI to input a request to remove a previously inserted text note.
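The note display timing just described can be sketched as follows, assuming the exemplary three-second note display period; the function and constant names are assumptions.

```python
# Illustrative sketch (names are assumptions): a text note is overlaid on
# the video NOTE_DISPLAY_PERIOD seconds before its portion starts and is
# removed from the display screen when the portion starts.
NOTE_DISPLAY_PERIOD = 3.0  # exemplary value; user-selectable in alternates

def note_visible(portion_start, t):
    return portion_start - NOTE_DISPLAY_PERIOD <= t < portion_start
```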
The presenter can also use the video playback GUI to input a request to hide a specific portion of the screencast video. Upon receiving this request, the metadata for the specific portion will be revised to indicate that the specific portion is to be hidden during the playback of the screencast video. The hiding of a portion will cause the portion to be skipped during the playback of the screencast video, thus shortening the video playback time accordingly. The presenter can also use the video playback GUI to input a request to unhide a previously hidden portion.
The presenter can also use the video playback GUI to input a request to hide a specific topic in the screencast video. Upon receiving this request, the metadata for the sequence of portions of the screencast video that makes up the specific topic will be revised to indicate that this sequence of portions is to be hidden during the playback of the screencast video. The hiding of a topic will cause the entire topic (e.g., the entire sequence of portions that makes up the topic) to be skipped during the playback of the screencast video, thus shortening the video playback time accordingly. The presenter can also use the video playback GUI to input a request to unhide a previously hidden topic.
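The effect of the hiding edits described in the two preceding paragraphs can be sketched as a filter over the sequence of portions; all names and metadata fields in this Python fragment are assumptions.

```python
# Illustrative sketch (names are assumptions): portions marked hidden, and
# portions belonging to a hidden topic, are skipped during playback, thus
# shortening the video playback time accordingly.
def playback_sequence(portions, hidden_topics=()):
    """Return the ids of the portions that will actually play, in order."""
    return [p["id"] for p in portions
            if not p.get("hidden") and p.get("topic") not in hidden_topics]
```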
In an exemplary embodiment of the demonstration re-performing technique described herein the various types of edits that the presenter can make to the playback of the screencast video do not modify the screencast video itself. Rather, as just described, the edits revise the metadata that is stored for one or more of the portions of the video. Accordingly, the presenter can make one set of edits to the playback of the screencast video for one presentation, and can store one version of revised video portions metadata that includes this one set of edits. The presenter can then make another set of edits to the playback of the screencast video for another presentation, and can store another version of revised video portions metadata that includes this other set of edits.
Generally speaking and referring again to
As described heretofore, in conjunction with the augmented version of the screencast video being played back to the presenter, an audience version of the screencast video is played back to the audience. This audience version is a full-screen video that is played back on a different display screen than the augmented version. The playback of the audience version is synchronized to the playback of the augmented version so that each of the portions of the audience version is played back synchronously with (e.g., at the same playback speed and with the same timing as) the corresponding portion of the augmented version. As also described heretofore, the audience version of the screencast video may be a non-augmented version of the screencast video (e.g., it may not include any of the aforementioned overlaid visualizations or the aforementioned event timeline that are included in the augmented version), or it may include a user-configurable subset of these augmentations, or it may be the same as the augmented version.
Generally speaking and as will be appreciated from the more detailed description of the video playback GUI that follows, the presenter can use the GUI to control the playback of the screencast video in various ways. By way of example but not limitation, the presenter can use the GUI to initiate the video playback, pause the video playback at any point in time, and resume the video playback at any subsequent point in time. Additionally, whenever the video playback is paused, the presenter can visually point out a specific region of the currently displayed video frame to the audience as follows. The presenter can use the mouse to hover a cursor over the specific region on the augmented version of the currently displayed video frame that the presenter sees in the GUI, and the demonstration re-performing technique embodiments described herein will then overlay another cursor onto the same region of the audience version of the currently displayed full-screen video frame that the audience sees, thus drawing the audience's attention to this region. The presenter's cursor and the cursor that the audience sees are synchronized so that the cursor that the audience sees will move synchronously with any movement of the presenter's cursor.
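The cursor-mirroring behavior just described can be reduced, at its simplest, to scaling a position between the two display resolutions. This Python fragment is an illustrative sketch only; the function name and the resolution model are assumptions.

```python
# Illustrative sketch (names are assumptions): map the presenter's cursor
# position within the GUI's video frame to the corresponding position on
# the audience's full-screen frame, so the overlaid cursor the audience
# sees moves synchronously with the presenter's cursor.
def mirror_cursor(presenter_pos, presenter_size, audience_size):
    sx = audience_size[0] / presenter_size[0]
    sy = audience_size[1] / presenter_size[1]
    return (round(presenter_pos[0] * sx), round(presenter_pos[1] * sy))
```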
Referring again to
Referring again to
Referring again to
As also exemplified in
Generally speaking and referring again to
Referring again to
Referring again to
Referring again to
While the demonstration re-performing technique has been described by specific reference to embodiments thereof, it is understood that variations and modifications thereof can be made without departing from the true spirit and scope of the demonstration re-performing technique. By way of example but not limitation, a finer granularity of event portions of the screencast video can be identified by analyzing the inactive portions of the screencast video using either conventional computer vision techniques, or conventional video analysis techniques, or a combination thereof. Although the demonstration re-performing technique embodiments have been described in the context of the demonstration that is recorded in the screencast video being a software application demonstration that was originally performed on the display screen of a computer, the demonstration re-performing technique embodiments described herein can also support any other type of demonstration that can be performed on the display screen of a computer. Although the aforementioned various types of edits that the presenter can make to the playback of the screencast video during the rehearsal and editing phase of the workflow do not modify the screencast video itself, an alternate embodiment of the demonstration re-performing technique is possible where the screencast video itself can be edited using conventional video editing methods.
Furthermore, although the demonstration re-performing technique embodiments have been described in the context of the low-level user input events during the demonstration taking place via a conventional mouse and a conventional physical keyboard, it is noted that alternate embodiments of the demonstration re-performing technique are also possible where the low-level user input events can also take place via one or more natural user interface modalities. By way of example but not limitation, in the case where the display screen of the computer is touch-sensitive, the low-level user input events can also take place via various types of physical contact (e.g., taps, drags, and the like) on the display screen. In the case where the computer has a voice recognition capability, the low-level user input events can also take place via spoken commands. In the case where the computer has a gesture recognition capability, the low-level user input events can also take place via hand gestures (among other types of gestures).
Yet furthermore, rather than capturing a screencast video of the demonstration, a conventional video camera can be used to capture a video of the demonstration as it is being performed. A video analysis system can then be used to identify prescribed types of events that take place in the video and mark each of the identified events in the video. By way of example but not limitation, the video analysis system can detect when a person walks into a room, or when a person makes a certain gesture, or when a person is talking, or when a person opens a door. An alternate embodiment of the demonstration re-performing technique can then be used to annotate the marked video with visualizations of the identified events on-the-fly as the marked video is being played back.
Yet furthermore, rather than one or more consecutive mouse-wheel-down events that take place within the prescribed scrolling period of time being converted into a scroll-down event, and one or more consecutive mouse-wheel-up events that take place within this period of time being converted into a scroll-up event, such mouse-wheel-down events can be converted into a zoom-out event and such mouse-wheel-up events can be converted into a zoom-in event, or vice versa.
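The base conversion referred to above can be sketched as follows. The names, the tuple layout, and the default value of the prescribed scrolling period in this Python fragment are assumptions; remapping the labels to "zoom-in"/"zoom-out" would yield the alternate embodiment just described.

```python
# Illustrative sketch (names and values are assumptions): consecutive
# mouse-wheel events of the same direction that occur within the prescribed
# scrolling period of one another are merged into a single high-level
# scroll event.
def group_wheel_events(events, scroll_period=0.5):
    """events: list of (time, direction) with direction 'down' or 'up';
    returns merged (start, end, label) high-level events."""
    high_level = []
    for t, direction in events:
        label = "scroll-down" if direction == "down" else "scroll-up"
        prev = high_level[-1] if high_level else None
        if prev and prev[2] == label and t - prev[1] <= scroll_period:
            high_level[-1] = (prev[0], t, label)  # extend the current event
        else:
            high_level.append((t, t, label))      # start a new event
    return high_level
```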
It is also noted that any or all of the aforementioned embodiments can be used in any combination desired to form additional hybrid embodiments. Although the demonstration re-performing technique embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described heretofore. Rather, the specific features and acts described heretofore are disclosed as example forms of implementing the claims.
The demonstration re-performing technique embodiments described herein are operational within numerous types of general purpose or special purpose computing system environments or configurations.
To allow a device to implement the demonstration re-performing technique embodiments described herein, the device should have a sufficient computational capability and system memory to enable basic computational operations. In particular, the computational capability of the simplified computing device 1100 shown in
In addition, the simplified computing device 1100 shown in
The simplified computing device 1100 shown in
Retention of information such as computer-readable or computer-executable instructions, data structures, program modules, and the like, can also be accomplished by using any of a variety of the aforementioned communication media (as opposed to computer storage media) to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and can include any wired or wireless information delivery mechanism. Note that the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media can include wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, radio frequency (RF), infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves.
Furthermore, software, programs, and/or computer program products embodying some or all of the various demonstration re-performing technique embodiments described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer-readable or machine-readable media or storage devices and communication media in the form of computer-executable instructions or other data structures.
Finally, the demonstration re-performing technique embodiments described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, and the like, that perform particular tasks or implement particular abstract data types. The demonstration re-performing technique embodiments may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices. Additionally, the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.