This disclosure relates to time-based presentation document editing.
Slide-based presentation applications mix time-based concepts (such as animations and transitions) with slide-based concepts (such as objects on slides). This often leads to unnatural product limitations. For example, it is sometimes desirable to transition from one slide to a next slide while keeping an object from the first slide in the exact same place in the next slide. In slide-based presentation applications, this is done by including a copy of the object in both slides. In another example, a user may wish to include an animation in a slide. The animation may include replacing one shape with another shape in the same location. In slide-based presentation applications, this requires positioning one shape directly on top of another, making it cumbersome to select the bottom shape. Slide-based presentation applications also make it difficult to see what the presentation view looks like at a particular time midway through the slide.
Accordingly, methods are disclosed herein for time-based editing of a presentation. One aspect relates to a method for providing an electronic presentation editing interface for editing an electronic presentation. The interface includes a digital canvas including multiple canvas objects in multiple canvas layers and a digital timeline including multiple timeline objects. Each canvas object is linked to a timeline object, and the location of a timeline object on the digital timeline is indicative of a time at which, and a canvas layer on which, each linked canvas object is displayed on the digital canvas.
Another aspect relates to a system for providing an electronic presentation editing interface for editing an electronic presentation. The interface includes a digital canvas including multiple canvas objects in multiple canvas layers and a digital timeline including multiple timeline objects. Each canvas object is linked to a timeline object, and the location of a timeline object on the digital timeline is indicative of a time at which, and a canvas layer on which, each linked canvas object is displayed on the digital canvas.
Another aspect relates to a non-transitory computer readable medium storing computer executable instructions, which, when executed by a processor, cause the processor to carry out a method for editing an electronic presentation. The method includes providing an electronic presentation editing interface, which includes a digital canvas including multiple canvas objects in multiple canvas layers and a digital timeline including multiple timeline objects. Each canvas object is linked to a timeline object, and the location of a timeline object on the digital timeline is indicative of a time at which, and a canvas layer on which, each linked canvas object is displayed on the digital canvas.
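The relationship among canvas objects, timeline objects, time, and layer described above can be illustrated with a small data model. The following TypeScript sketch is illustrative only; all names (CanvasObject, TimelineObject, and their fields) are hypothetical and not part of the disclosure.

```typescript
// Hypothetical data model for the editing interface described above.
// A timeline object's horizontal extent encodes when its linked canvas
// object is visible; its row encodes the canvas layer (z-order).
interface CanvasObject {
  id: string;
  kind: "text" | "shape" | "image" | "video"; // any presentation content
}

interface TimelineObject {
  canvasObjectId: string; // link to the canvas object it controls
  appearTime: number;     // seconds from the reference time
  disappearTime: number;  // seconds from the reference time
  layer: number;          // 0 = front-most canvas layer
}
```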
The above and other features of the present disclosure, including its nature and its various advantages, will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings.
To provide an overall understanding of the invention, certain illustrative embodiments will now be described, including a system for time-based presentation editing. However, it will be understood by one of ordinary skill in the art that the systems and methods described herein may be adapted and modified as is appropriate for the application being addressed and that the systems and methods described herein may be employed in other suitable applications, and that such other additions and modifications will not depart from the scope thereof.
The user device 102 includes, without limitation, any suitable combination of one or more input devices (e.g., keypads, touch screens, trackballs, voice recognition systems, etc.) and/or one or more output devices (e.g., visual displays, speakers, tactile displays, printing devices, etc.). As used herein, “user device” includes, without limitation, any suitable combination of one or more devices configured with hardware, firmware, and software to carry out one or more of the computerized techniques described herein. Examples of user devices include, without limitation, personal computers, laptops, and mobile devices (such as smartphones, BlackBerry devices, PDAs, tablet computers, etc.).
The processor 110 refers to one or more computers, microprocessors, logic devices, servers, or other devices configured with hardware, firmware, and software to carry out one or more of the computerized techniques described herein. Processors and processing devices may also include one or more memory devices, or may use the memory unit 112 for storing inputs, outputs, and data that is currently being processed.
The memory unit 112 or a memory device in the processor 110 stores the presentation editing interface 108 and/or the presentation document 106. In addition, the memory unit further stores computer readable instructions which, when carried out by the processor 110, cause the processor to provide the presentation editing interface 108 to the user 104 such that the user 104 may use time-based presentation editing to modify the presentation document 106.
The time ruler 226 represents a time axis indicative of time in a reference time period. For example, a reference time may be at the beginning of the presentation, at the time of a transition in the presentation, or any other suitable time. As shown, the time ruler 226 includes multiple equally spaced tick marks corresponding to fixed time intervals. However, the time ruler may instead include non-equidistant tick marks, may not include tick marks, or may not even be shown in the presentation editing interface 108.
The digital canvas 214 is a view of the presentation at a particular moment in time defined by the location of the time marker 218 along the time ruler 226. Each timeline object (e.g., timeline objects 230, 231, 234, and 236) in the digital timeline 216 is linked to a canvas object in the digital canvas 214 that appears at some time during the presentation. The horizontal positions of each timeline object's left and right edges along the time ruler 226 correspond to the times at which the corresponding canvas object appears and disappears in the presentation. Thus, the width of each timeline object 230, 231, 234, and 236 in the digital timeline 216 corresponds to the amount of time (e.g., measured by the time ruler 226) that the corresponding canvas object appears in the presentation. The user 104 may adjust the location of a timeline object by providing a user input indicative of the desired location (e.g., clicking and dragging the object and/or its edges using a mouse device, or entering the desired location using a keyboard).
Furthermore, the layer 238 of each timeline object in the digital timeline 216 corresponds to a canvas layer in the digital canvas 214 in which the corresponding canvas object appears. For example, layers 238a-238d correspond to a front-to-back layering (corresponding to a z-axis ordering of canvas objects) in the canvas, such that canvas objects corresponding to layer 238a appear in front of those corresponding to layers 238b-238d. Alternatively, layers 238a-238d may correspond to the opposite layering order (back-to-front), or any other suitable layering order. Because the digital timeline 216 includes a layer axis representative of a front-to-back ordering of layers in the digital canvas 214, the presentation editing interface 108 allows the user 104 to conveniently select timeline objects located in low layers (i.e., back layers in the digital canvas) that may be otherwise difficult to select in a slide-based presentation application.
In the presentation editing interface 108, the view of the digital canvas 214 includes only canvas objects whose linked timeline objects are displayed during the presentation at the time corresponding to the time marker 218. For example, as shown, only timeline objects 230 and 231 coincide with the time marker 218. In one example, the title canvas object 220 may be associated with the timeline object 230 and the canvas object 221 may be associated with the timeline object 231. At the time indicated by the time marker 218, only the canvas objects 220 and 221 are displayed in the digital canvas 214.
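A minimal sketch of this view rule, reusing the hypothetical TimelineObject and CanvasObject interfaces from the earlier listing: only objects whose timeline extents coincide with the time marker are returned, ordered so that deeper layers are painted first.

```typescript
// Return the canvas objects to draw at the marker time, ordered
// back-to-front so front-most (lower-numbered) layers paint last.
function visibleAt(
  markerTime: number,
  timeline: TimelineObject[],
  canvas: Map<string, CanvasObject>,
): CanvasObject[] {
  return timeline
    .filter(t => t.appearTime <= markerTime && markerTime < t.disappearTime)
    .sort((a, b) => b.layer - a.layer) // deepest layer first
    .map(t => canvas.get(t.canvasObjectId))
    .filter((o): o is CanvasObject => o !== undefined);
}
```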
As indicated by the digital timeline 216, the timeline object 230 extends over nearly the entire length of the time ruler 226, such that the canvas object 220 is displayed throughout the duration of time indicated by the time ruler 226. In one example, in the digital canvas 214, the canvas object 220 is a text box including the title of the presentation (or a section title of the presentation).
The timeline object 231 includes three “timeline sub-objects” 232a-232c, separated by dashed lines. A timeline sub-object corresponds to a portion of a timeline object that may have different characteristics than a remainder of the timeline object. For example, as shown in diagram 200, timeline sub-objects 232a-232c correspond to different appear and disappear times. In particular, timeline sub-object 232a appears first, followed by timeline sub-object 232b, finally followed by timeline sub-object 232c. Bullet points 222a and 222b in the canvas object 221 correspond to timeline sub-objects 232a and 232b, respectively. The canvas object 221 further includes a third bullet point corresponding to timeline sub-object 232c, but the third bullet point is not shown in the current view because the position of timeline sub-object 232c does not coincide with the time marker 218.
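One illustrative way to model timeline sub-objects is to give each sub-object its own appear and disappear interval within the parent timeline object. The sketch below is an assumption about one possible representation, not the disclosed implementation.

```typescript
// Hypothetical representation: each sub-object (e.g., one bullet point)
// carries its own interval inside the parent timeline object.
interface TimelineSubObject {
  appearTime: number;    // when this portion becomes visible
  disappearTime: number; // when this portion is removed
}

// A sub-object is drawn only if its interval contains the marker time.
function visibleSubObjects(
  subs: TimelineSubObject[],
  markerTime: number,
): TimelineSubObject[] {
  return subs.filter(
    s => s.appearTime <= markerTime && markerTime < s.disappearTime,
  );
}
```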
Similarly, timeline objects 234 and 236 also correspond to canvas objects which are not shown in the digital canvas because timeline objects 234 and 236 also do not coincide with the time marker 218. Canvas objects may also include shapes, figures, images, graphs, data, tables, links, hyperlinks, video files, audio files, graphics, or any other object suitable for use in a presentation, or a combination thereof.
The user 104 may change the location of the time marker 218 to change the current view of the digital canvas 214. For example, the user 104 may select the time marker 218 (e.g., by clicking with an input device such as a mouse) and select a new location (e.g., by dragging the time marker 218 to a different location on the time ruler 226). In addition, the presentation editing interface 108 may include play, pause, stop, and/or fast forward buttons, which may also be used by the user 104 to navigate the presentation. When a new location is selected, the digital canvas 214 then includes canvas objects corresponding to timeline objects that coincide with the new location. Thus, the presentation editing interface 108 allows the user to view (and edit) the presentation at an arbitrary point in the presentation time and does not limit the user to working with slides as in conventional slide-based presentation applications.
The pause bar 224 in the timeline 216 indicates a pause in the presentation. This means that when the presentation reaches the time corresponding to the location of the pause bar 224, the presentation pauses until an input from the user (e.g., the presenter) is received such as a click from a mouse or a press of a button. When the user input is received, the presentation advances.
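As a sketch of how a playback loop might honor pause bars, the hypothetical play() function below advances the time marker frame by frame and halts at each pause bar until an input is received. The seek and awaitUserInput callbacks are assumptions standing in for the interface's rendering and input handling.

```typescript
// Hypothetical playback sketch: advance the time marker, halting at
// each pause bar until the presenter provides an input.
const sleep = (ms: number) =>
  new Promise<void>(resolve => setTimeout(resolve, ms));

async function play(
  pauseTimes: number[],                 // sorted pause-bar times (seconds)
  seek: (t: number) => void,            // moves the marker and redraws
  awaitUserInput: () => Promise<void>,  // resolves on click/key press
  duration: number,                     // presentation length (seconds)
  fps = 30,
): Promise<void> {
  let next = 0; // index of the next pause bar ahead of the marker
  for (let t = 0; t <= duration; t += 1 / fps) {
    if (next < pauseTimes.length && t >= pauseTimes[next]) {
      seek(pauseTimes[next]);
      await awaitUserInput(); // presentation pauses here
      next += 1;
    }
    seek(t);
    await sleep(1000 / fps);
  }
}
```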
When the user 104 selects the pull-down tab 228, the timeline 216 decreases substantially in size or is partially or completely hidden, allowing for a larger display of the digital canvas 214 for more convenient canvas editing. This feature allows the user to place canvas objects more accurately at desired locations in the digital canvas 214 or to make any other suitable adjustments to the presentation. Other tabs, buttons, or options may also include the same functionality as the pull-down tab 228.
In diagram 200, the digital canvas 214 is displayed above the digital timeline 216. However, the digital timeline may instead be placed in any other suitable position in the presentation editing interface 108 relative to the digital canvas 214. Similarly, the time ruler 226 is shown as a horizontal axis, but the time ruler 226 may instead be vertical or oriented along any other suitable axis.
In some embodiments, each of the timeline objects 230, 231, 234, and 236 is displayed as a smaller version of the corresponding canvas object, an abbreviated version, a user-set label or icon, any other suitable way of denoting a canvas object, or a combination thereof. The user 104 may customize the view of the timeline objects by selecting one or more of these options. Timeline objects may also be color-coded according to the object type of the corresponding canvas object.
The timeline objects 330-337 may be tied (or “locked”) to the transition region 340, such that the widths of the timeline objects 330-337 may also change when the transition region changes. For example, if the transition edge 341a is moved to the left (i.e., to an earlier time), the right edges of timeline objects 330-334 may also shift to the left by the same amount, effectively shortening the widths of these timeline objects. A “lock edges” option may be selected such that transition edges 341 are tied to edges of timeline objects if one edge is placed within a threshold proximity to another edge. The edges may later be “unlocked” by providing input (by right-clicking on the edges and selecting an unlock option, for example).
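The lock-and-shift behavior might be sketched as follows; the threshold value and function names are hypothetical.

```typescript
// Hypothetical "lock edges" sketch. An edge becomes locked to a
// transition edge when placed within a threshold proximity of it;
// moving the transition edge then shifts every locked edge in step.
const LOCK_THRESHOLD = 0.2; // seconds; assumed proximity threshold

function isLockable(objectEdge: number, transitionEdge: number): boolean {
  return Math.abs(objectEdge - transitionEdge) <= LOCK_THRESHOLD;
}

function moveTransitionEdge(
  lockedObjects: { disappearTime: number }[], // right edges tied to the edge
  oldEdgeTime: number,
  newEdgeTime: number,
): void {
  const delta = newEdgeTime - oldEdgeTime;
  for (const obj of lockedObjects) {
    obj.disappearTime += delta; // widths shorten or lengthen together
  }
}
```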
The transition region 340 may further be configured to include one or more transition effects such as fading, blinds, box, checkerboard, comb, or any other suitable transition effect for a presentation. The speed of the transition effect is set by the width of the transition region 340.
Diagram 300b is identical to diagram 300a, except that the widths of timeline objects 330 and 332 extend through the transition region 340. Any transition effect corresponding to the transition region 340 occurs between timeline objects 334 and 336, and timeline objects 330 and 332 are unaffected by the transition region 340. In this case, the timeline objects are not tied to the transition region 340 such that if the transition edges are moved, the edges of the timeline objects 330 and 332 do not change.
In particular, the presentation time ruler 442 represents a time axis indicative of an amount of time since a reference time (e.g., the beginning of the presentation). The presentation time ruler 442 extends over a longer portion of the presentation than the focused time ruler 426. For example, the presentation time ruler 442 may extend over the entire length of the presentation such that the left edge of the presentation time ruler 442 corresponds to the beginning of the presentation and the right edge corresponds to the end of the presentation. Alternatively, the presentation time ruler 442 may not extend over the entire length of the presentation. In this case, the user 104 may select (e.g., by clicking or hovering with a mouse or using keyboard input) arrows 446 or 448 to scroll the presentation time ruler 442 to the left or right, respectively.
The left and right edges of box 444 on the presentation time ruler 442 correspond to the same times represented by the left and right edges of the focused time ruler 426. The user 104 may adjust the width and position of the focused time ruler 426 by adjusting the width and position of the box 444 (i.e., by clicking and dragging the box 444 and/or its edges) on the presentation time ruler 442.
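Conceptually, the box 444 defines a time window, and the focused time ruler 426 maps that window onto its pixel width. A minimal sketch, with all names assumed:

```typescript
// Hypothetical sketch: the box on the presentation-wide ruler selects
// the time window rendered by the focused ruler.
interface TimeWindow {
  start: number; // seconds; left edge of the focused time ruler
  end: number;   // seconds; right edge of the focused time ruler
}

// Resizing or dragging the box rewrites the focused window directly.
function windowFromBox(boxLeft: number, boxRight: number): TimeWindow {
  return { start: Math.min(boxLeft, boxRight), end: Math.max(boxLeft, boxRight) };
}

// Map a presentation time to an x position on the focused ruler.
function timeToX(t: number, win: TimeWindow, rulerWidthPx: number): number {
  return ((t - win.start) / (win.end - win.start)) * rulerWidthPx;
}
```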
In addition, some presentations have many layers, and it may be undesirable to view layers that do not contain objects at times within the left and right edges of the focused time ruler 426. Thus, the digital timeline 416 may be focused (or collapsed) in the layer axis as well, such that those layers that do not contain objects within a defined time window do not appear (or are substantially smaller in size than other layers).
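Collapsing the layer axis amounts to showing only layers that are occupied within the focused window. A sketch reusing the hypothetical TimelineObject and TimeWindow types from the earlier listings:

```typescript
// Hypothetical sketch: list only layers with at least one timeline
// object overlapping the focused window; other layers are hidden.
function layersToShow(timeline: TimelineObject[], win: TimeWindow): number[] {
  const occupied = new Set<number>();
  for (const t of timeline) {
    if (t.appearTime < win.end && t.disappearTime > win.start) {
      occupied.add(t.layer);
    }
  }
  return [...occupied].sort((a, b) => a - b);
}
```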
In diagram 500a, the canvas objects 520 and 521 correspond to the timeline objects 530 and 536, respectively. Each timeline object has at least one state, where a state includes information related to the corresponding canvas object such as the position, size, angle, orientation, color, layer, or any other suitable information related to an object in a presentation. Timeline objects 534 and 536 each have a single state, and timeline objects 530 and 532 each include two states: states 531a (A1) and 531b (A2) for timeline object 530 and states 533a (B1) and 533b (B2) for timeline object 532. Two states corresponding to the same timeline object may have different characteristics regarding any of the information related to an object in a presentation. The position and width of each state correspond to the time the corresponding canvas object remains in the defined state. As shown in the digital canvas, the state 531a of the timeline object 530 indicates that the canvas object 520 is a circle shape at the top left of the digital canvas 514.
In diagram 500b, the canvas object 521 corresponding to timeline object 536 is no longer visible in the digital canvas because the timeline object 536 is no longer aligned with time marker 518. Instead, the canvas object 550 corresponding to timeline object 532 is visible and has a first state 533a (B1). Further, the time marker 518 is in between states 531a and 531b, such that the digital canvas 514 displays canvas object 520 transitioning between the two states. The state 531b of the timeline object 530 has the same characteristics as the state 531a, except that state 531b indicates that the canvas object 520 will be placed at the bottom right of the digital canvas 514, rather than the top left. Thus, at the time corresponding to the time marker 518, canvas object 520 is transitioning from state 531a (top left position) to state 531b (bottom right position).
In diagram 500c, the canvas object 520 is in the bottom right position defined by state 531b of timeline object 530. In addition, the canvas object 550 has a different angle corresponding to state 533b, and the canvas object 552 has appeared in the digital canvas 514.
The user 104 may set the speed of transition between states for a timeline object by adjusting the edge locations of the states 531a-531b and 533a-533b. For example, the width of the state 531a indicates the time for which the canvas object 520 remains in that state before beginning to transition into the state 531b. The transition is complete when the state 531b is reached. Thus, the speed of transition is determined by locations of the right edge of state 531a and the left edge of state 531b. The speed of transition for timeline object 532 may be similarly set by adjusting the edges of states 533a and 533b.
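For a position change such as the one between states 531a and 531b, the transition can be modeled as linear interpolation between the right edge of the first state and the left edge of the second, which is what makes the edge locations control the transition speed. The sketch below is illustrative; the ObjectState shape is an assumption.

```typescript
// Hypothetical interpolation sketch: the transition begins at the right
// edge of the first state and completes at the left edge of the second,
// so the gap between those edges sets the transition speed.
interface ObjectState {
  startTime: number; // left edge of the state on the timeline
  endTime: number;   // right edge of the state on the timeline
  x: number;
  y: number;
}

function positionAt(t: number, a: ObjectState, b: ObjectState) {
  if (t <= a.endTime) return { x: a.x, y: a.y };   // still in state A
  if (t >= b.startTime) return { x: b.x, y: b.y }; // reached state B
  const f = (t - a.endTime) / (b.startTime - a.endTime); // 0..1 progress
  return { x: a.x + f * (b.x - a.x), y: a.y + f * (b.y - a.y) };
}
```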
The user 104 may also set other characteristics of the transition between states. Example transition characteristics include effects similar to those described above in relation to the transition region 340.
For ease of illustration, the canvas objects shown in diagrams 500a-500c are simple shapes with simple states, but it will be understood that canvas objects are not limited to shapes and may include text, figures, images, or any other suitable object in a presentation, and that states may involve more complex data such as size, color, layer, any other suitable data related to an object, or a combination thereof. In addition, as shown, the timeline objects 530 and 532 each have two states. However, any number of states may be included in a timeline object.
At step 880, the user 104 configures the canvas objects on a digital canvas.
At step 882, when the user 104 is satisfied with the configuration of the canvas objects in the digital canvas 714, the user 104 sets a first checkpoint by selecting the button 760. When the button 760 is selected, the presentation editing interface 108 saves data associated with the current view of the digital canvas into the memory unit 112. In particular, the data is saved into state objects 731a and 733a such that these states now include data indicative of the configuration of their corresponding canvas objects.
At step 884, the user 104 modifies the configuration of the canvas objects on the digital canvas and also creates empty state objects 731b and 733b for timeline objects 730 and 732, respectively.
At step 886, when the user 104 is satisfied with the modification, the user 104 sets a second checkpoint by selecting the button 762. When the button 762 is selected, the presentation editing interface 108 saves data associated with the current view of the digital canvas into the memory unit 112. In particular, the data is saved into state objects 731b and 733b such that these states now include data indicative of the configuration of their corresponding canvas objects.
At step 888, an animation is configured between the two views corresponding to the two checkpoints. To configure an animation, the user 104 selects characteristics of the animation, such as a smooth and linear transition between the two views.
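The checkpoint workflow of steps 880 through 888 can be summarized in a short sketch: pressing a checkpoint button snapshots each canvas object's current configuration into a new state object for its timeline object, and the animation then interpolates between consecutive snapshots (e.g., with a function like positionAt() above). All names here are hypothetical.

```typescript
// Hypothetical checkpoint sketch: selecting a checkpoint button copies
// each canvas object's current configuration into the list of saved
// states for its timeline object.
interface CanvasConfig {
  x: number;
  y: number;
  angle: number; // other state data (size, color, ...) could be added
}

function setCheckpoint(
  currentView: Map<string, CanvasConfig>,   // canvas object id -> config
  savedStates: Map<string, CanvasConfig[]>, // canvas object id -> states
): void {
  for (const [id, cfg] of currentView) {
    const states = savedStates.get(id) ?? [];
    states.push({ ...cfg }); // snapshot the current configuration
    savedStates.set(id, states);
  }
}
```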
At step 990, the user 104 opens the presentation document 106 on a user device 102. When the presentation document 106 is opened, at step 992, the processor 110 provides a presentation editing interface. In particular, the presentation editing interface 108 or any other suitable presentation editing interface described herein may be provided. Providing the presentation editing interface 108 includes providing a digital canvas at step 994 and a digital timeline at step 996.
At step 994, the processor 110 provides a digital canvas. In particular, the digital canvas 214 or any other digital canvas described herein may be provided. The provided digital canvas includes multiple canvas objects, and each canvas object is in a canvas layer, corresponding to a front-to-back layering of the canvas.
At step 996, the processor 110 provides a digital timeline, such as the digital timeline 216 or any other digital timeline described herein. The digital timeline includes timeline objects, each of which is linked to a canvas object in the digital canvas. The position of each timeline object in the digital timeline corresponds to a time and a layer in which the linked canvas object is displayed on the digital canvas.
The presentation editing interface is provided by displaying a portion of the digital timeline concurrently with displaying a portion of the digital canvas corresponding to a time indicated on the displayed portion of the digital timeline. The presentation editing interface 108 is configured to enable the user 104 to edit the presentation document 106 by modifying a position of a timeline object in the digital timeline and/or a position of a canvas object in the digital canvas.
The computing device 1000 comprises at least one communications interface unit, an input/output controller 1010, system memory, and one or more data storage devices. The system memory includes at least one random access memory (RAM 1002) and at least one read-only memory (ROM 1004). All of these elements are in communication with a central processing unit (CPU 1006) to facilitate the operation of the computing device 1000. The computing device 1000 may be configured in many different ways. For example, the computing device 1000 may be a conventional standalone computer or alternatively, the functions of the computing device 1000 may be distributed across multiple computer systems and architectures.
The computing device 1000 may be configured in a distributed architecture, wherein databases and processors are housed in separate units or locations. Some units perform primary processing functions and contain at a minimum a general controller or a processor and a system memory. In distributed architecture implementations, each of these units may be attached via the communications interface unit 1008 to a communications hub or port (not shown) that serves as a primary communication link with other servers, client or user computers and other related devices. The communications hub or port may have minimal processing capability itself, serving primarily as a communications router. A variety of communications protocols may be part of the system, including, but not limited to: Ethernet, SAP, SAS™, ATP, BLUETOOTH™, GSM and TCP/IP.
The CPU 1006 comprises a processor, such as one or more conventional microprocessors and one or more supplementary co-processors such as math co-processors for offloading workload from the CPU 1006. The CPU 1006 is in communication with the communications interface unit 1008 and the input/output controller 1010, through which the CPU 1006 communicates with other devices such as other servers, user terminals, or devices. The communications interface unit 1008 and the input/output controller 1010 may include multiple communication channels for simultaneous communication with, for example, other processors, servers or client terminals.
The CPU 1006 is also in communication with the data storage device. The data storage device may comprise an appropriate combination of magnetic, optical or semiconductor memory, and may include, for example, RAM 1002, ROM 1004, flash drive, an optical disc such as a compact disc or a hard disk or drive. The CPU 1006 and the data storage device each may be, for example, located entirely within a single computer or other computing device; or connected to each other by a communication medium, such as a USB port, serial port cable, a coaxial cable, an Ethernet cable, a telephone line, a radio frequency transceiver or other similar wireless or wired medium or combination of the foregoing. For example, the CPU 1006 may be connected to the data storage device via the communications interface unit 1008. The CPU 1006 may be configured to perform one or more particular processing functions.
The data storage device may store, for example, (i) an operating system 1012 for the computing device 1000; (ii) one or more applications 1014 (e.g., computer program code or a computer program product) adapted to direct the CPU 1006 in accordance with the systems and methods described here, and particularly in accordance with the processes described in detail with regard to the CPU 1006; or (iii) database(s) 1016 adapted to store information that may be utilized to store information required by the program.
The operating system 1012 and applications 1014 may be stored, for example, in a compressed, an uncompiled and an encrypted format, and may include computer program code. The instructions of the program may be read into a main memory of the processor from a computer-readable medium other than the data storage device, such as from the ROM 1004 or from the RAM 1002. While execution of sequences of instructions in the program causes the CPU 1006 to perform the process steps described herein, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of the present disclosure. Thus, the systems and methods described are not limited to any specific combination of hardware and software.
Suitable computer program code may be provided for performing one or more functions in relation to time-based presentation editing as described herein. The program also may include program elements such as an operating system 1012, a database management system and “device drivers” that allow the processor to interface with computer peripheral devices (e.g., a video display, a keyboard, a computer mouse, etc.) via the input/output controller 1010.
The term “computer-readable medium” as used herein refers to any non-transitory medium that provides or participates in providing instructions to the processor of the computing device 1000 (or any other processor of a device described herein) for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media include, for example, optical, magnetic, or opto-magnetic disks, or integrated circuit memory, such as flash memory. Volatile media include dynamic random access memory (DRAM), which typically constitutes the main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM or EEPROM (electronically erasable programmable read-only memory), a FLASH-EEPROM, any other memory chip or cartridge, or any other non-transitory medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the CPU 1006 (or any other processor of a device described herein) for execution. For example, the instructions may initially be borne on a magnetic disk of a remote computer (not shown). The remote computer can load the instructions into its dynamic memory and send the instructions over an Ethernet connection, cable line, or even telephone line using a modem. A communications device local to a computing device 1000 (e.g., a server) can receive the data on the respective communications line and place the data on a system bus for the processor. The system bus carries the data to main memory, from which the processor retrieves and executes the instructions. The instructions received by main memory may optionally be stored in memory either before or after execution by the processor. In addition, instructions may be received via a communication port as electrical, electromagnetic or optical signals, which are exemplary forms of wireless communications or data streams that carry various types of information.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.