The present invention relates to a system and methods for time-warped playback and, in particular, for automatic time-warped playback in rendering a recorded computer session.
Remote presentation protocols such as the ICA protocol manufactured by Citrix Systems, Inc., of Ft. Lauderdale, Fla., the X protocol by the X.org Foundation, the Virtual Network Computing protocol of AT&T Corp., or the RDP protocol, manufactured by Microsoft Corporation of Redmond, Wash., are inherently stateful. In order to view a particular point in a stream of recorded presentation protocol data, playback of the stream must begin from the very beginning of the stream and proceed sequentially until the particular point is encountered.
Many conventional methods for session recording operate by taking screen snapshots periodically, and some of these increase the snapshot frequency in response to indications of potential interest gleaned from session activity. The sequences of images may be viewed as slide shows or using image collection management tools. Other methods may record to frame-based digital video formats such as MPG or AVI, and these are viewed with an appropriate media player such as Windows Media Player or the QuickTime player. Many conventional methods lack the ability to reduce review time by eliminating sections showing interactions with certain windows or applications.
Some conventional methods enable playback of recorded sessions at multiples of real-time rate. A user can choose to play back at any one of those multiples, and may change the speed multiplier during playback. However, because the user is unaware of what is about to be rendered, they are prone to turning the speed up during sections of low interest and then missing details when sections of higher interest start. Furthermore, even speeds many times faster than real time are subjectively slow when reviewing lengthy sections of insignificant user activity.
Many conventional systems attempt to optimize playback by minimizing snapshot generation to increase the speed of stream traversal. In some instances, some of these systems perform less frequent screen snapshots until significant activity is detected, and then increase the frequency of snapshots. Some of these systems may suffer the drawback of losing state when inputs and state changes between snapshots are not captured.
The present invention provides a method for recording and playback of remote presentation protocols such as the ICA protocol manufactured by Citrix Systems, Inc., of Ft. Lauderdale, Fla., the X protocol by the X.org Foundation, the Virtual Network Computing protocol of AT&T Corp., or the RDP protocol, manufactured by Microsoft Corporation of Redmond, Wash. The present invention reduces the time spent manually reviewing session recordings by reducing the time spent during playback rendering one or more sections of the recording where it can be algorithmically determined that the complexity of the recording or the importance of the recorded content is lower than normal. The present invention provides a directed playback, i.e., an alternative rendering of the recorded session. The invention enhances the off-screen rendering operation to generate a playback data structure that describes how to perform the directed playback, and uses that playback data structure to control the on-screen rendering process. Rather than provide solely user-selected multiples of real time, the present invention provides automatically varied context-sensitive playback rates additionally modulated by the reviewer. The present invention presents information to reviewers at an automatically chosen rate that approximates their comprehension rate.
In one aspect, the present invention relates to a method for automatic time-warped playback in rendering a recorded computer session. A background protocol engine receives a recorded session, said recorded session comprising a plurality of packets and representing display data. The background protocol engine determines a measure of complexity represented by at least some of the plurality of packets in the recorded session. The background protocol engine identifies an interval of time between the at least some of the plurality of packets in the recorded session. The background protocol engine modifies the identified interval of time responsive to the measure of complexity represented by the at least some of the plurality of packets in the recorded session. The background protocol engine stores, in a playback data structure, the modified interval of time. A foreground protocol engine renders the recorded stream responsive to the playback data structure.
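By way of illustration only, the following minimal Python sketch outlines one way a background protocol engine might modify inter-packet intervals responsive to a measure of complexity. The `Packet` layout, the complexity measure, and the record format are assumptions for exposition, not a definitive implementation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    timestamp: float   # seconds since the start of the recording
    file_offset: int   # byte offset of the packet within the recording
    payload: bytes

def estimate_complexity(packet: Packet) -> float:
    # Placeholder measure: payload size normalized to [0, 1]. A real engine
    # might weigh drawing-command counts, glyph counts, or update area.
    return min(len(packet.payload) / 4096.0, 1.0)

def build_playback_data_structure(packets, min_scale=0.25):
    # Scale each recorded inter-packet interval by the complexity of the
    # packet it precedes; low complexity yields a shorter rendered interval,
    # and the modified interval is stored for the foreground renderer.
    records = []
    for prev, curr in zip(packets, packets[1:]):
        interval = curr.timestamp - prev.timestamp
        scale = min_scale + (1.0 - min_scale) * estimate_complexity(curr)
        records.append({"offset": curr.file_offset,
                        "render_delay": interval * scale})
    return records
```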
In another aspect, the present invention relates to another method of automatic time-warped playback in rendering a recorded computer session. A recorded session comprising a plurality of packets and representing display data is received. A first packet having a content representing a window having focus is identified, said window indicating an application. A time interval is identified between a second packet, whose contents render prior to the rendering of the content of the first packet, and a third packet, whose contents render after the rendering of the content of the first packet. The time interval is modified responsive to the indicated application. At least one packet in the recorded stream is rendered responsive to the modification.
In another aspect, the present invention relates to a system for automatic time-warped playback in rendering a recorded computer session. A protocol engine generates a playback data structure in response to receiving a recorded stream, said recorded stream comprising a plurality of packets, and said protocol engine renders at least one packet in the recorded stream responsive to the generated playback data structure.
These and other aspects of the invention will be readily apparent from the detailed description below and the appended drawings, which are meant to illustrate and not to limit the invention, and in which:
Referring now to
The first and second devices 100, 140 can connect to the network 180 through a variety of connections including standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25), broadband connections (ISDN, Frame Relay, ATM), and wireless connections. Connections can be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, NetBEUI, SMB, Ethernet, ARCNET, Fiber Distributed Data Interface (FDDI), RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and direct asynchronous connections).
The first device 100 can be any device capable of receiving and displaying output from applications executed on its behalf by one or more second computing devices 140 and capable of operating in accordance with a protocol as disclosed herein. The first device 100 may be a personal computer, Windows-based terminal, network computer, information appliance, X-device, workstation, mini computer, personal digital assistant, or cell phone.
Similarly, the second computing device 140 can be any computing device capable of: receiving from a first computing device 100 user input for an executing application, executing an application program on behalf of a first device 100, and interacting with the first computing device 100 using a protocol as disclosed herein. The second computing device 140 can be provided as a group of server devices logically acting as a single server system referred to herein as a server farm. In one embodiment, the second computing device 140 is a multi-user server system supporting multiple concurrently active connections from one or more first devices 100.
The central processing unit 102 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 104. In many embodiments, the central processing unit is provided by a microprocessor unit, such as: the 8088, the 80286, the 80386, the 80486, the Pentium, Pentium Pro, the Pentium II, the Celeron, or the Xeon processor, all of which are manufactured by Intel Corporation of Mountain View, Calif.; the 68000, the 68010, the 68020, the 68030, the 68040, the PowerPC 601, the PowerPC604, the PowerPC604e, the MPC603e, the MPC603ei, the MPC603ev, the MPC603r, the MPC603p, the MPC740, the MPC745, the MPC750, the MPC755, the MPC7400, the MPC7410, the MPC7441, the MPC7445, the MPC7447, the MPC7450, the MPC7451, the MPC7455, the MPC7457 processor, all of which are manufactured by Motorola Corporation of Schaumburg, Ill.; the Crusoe TM5800, the Crusoe TM5600, the Crusoe TM5500, the Crusoe TM5400, the Efficeon TM8600, the Efficeon TM8300, or the Efficeon TM8620 processor, manufactured by Transmeta Corporation of Santa Clara, Calif.; the RS/6000 processor, the RS64, the RS 64 II, the P2SC, the POWER3, the RS64 III, the POWER3-II, the RS 64 IV, the POWER4, the POWER4+, the POWER5, or the POWER6 processor, all of which are manufactured by International Business Machines of White Plains, N.Y.; or the AMD Opteron, the AMD Athlon 64 FX, the AMD Athlon, or the AMD Duron processor, manufactured by Advanced Micro Devices of Sunnyvale, Calif.
Main memory unit 104 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 102, such as Static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Dynamic random access memory (DRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Enhanced DRAM (EDRAM), synchronous DRAM (SDRAM), JEDEC SRAM, PC100 SDRAM, Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Direct Rambus DRAM (DRDRAM), or Ferroelectric RAM (FRAM). In the embodiment shown in
In the embodiment shown in
A wide variety of I/O devices 130 may be present in the computer system 100. Input devices include keyboards, mice, trackpads, trackballs, microphones, and drawing tablets. Output devices include video displays, speakers, inkjet printers, laser printers, and dye-sublimation printers. An I/O device may also provide mass storage for the computer system 100 such as a hard disk drive, a floppy disk drive for receiving floppy disks such as 3.5-inch, 5.25-inch disks or ZIP disks, a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, DVD-R drive, DVD-RW drive, tape drives of various formats, and USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, Calif.
In further embodiments, an I/O device 130 may be a bridge between the system bus 120 and an external communication bus, such as a USB bus, an Apple Desktop Bus, an RS-232 serial connection, a SCSI bus, a FireWire bus, a FireWire 800 bus, an Ethernet bus, an AppleTalk bus, a Gigabit Ethernet bus, an Asynchronous Transfer Mode bus, a HIPPI bus, a Super HIPPI bus, a SerialPlus bus, a SCI/LAMP bus, a FibreChannel bus, or a Serial Attached small computer system interface bus.
General-purpose desktop computers of the sort depicted in
In other embodiments, the first device 100 or second device 140 may have different processors, operating systems, and input devices consistent with the device. For example, in one embodiment the first device 100 is a Zire 71 personal digital assistant manufactured by Palm, Inc. In this embodiment, the Zire 71 uses an OMAP 310 processor manufactured by Texas Instruments, of Dallas, Tex., operates under the control of the PalmOS operating system and includes a liquid-crystal display screen, a stylus input device, and a five-way navigator device.
Referring now to
Referring now to
The protocol data stream 208 comprises a plurality of packets at least some of which represent display data. In some embodiments, the protocol data stream 208 comprises information about a recorded session. In one embodiment, the protocol data stream 208 comprises metadata. In another embodiment, the protocol data stream 208 comprises information about the user in a recorded session. In still another embodiment, the protocol data stream 208 comprises information about the server generating the recorded data. In yet another embodiment, the protocol data stream 208 comprises a timestamp.
In one embodiment, the protocol data stream 208 comprises multiple channels. In this embodiment, a channel comprises a peer-to-peer connection over which data is transferred. In another embodiment, the protocol data stream 208 comprises multiple virtual channels. In this embodiment, the virtual channel is a channel wrapped in another channel. The second device 212 receives the protocol data stream 208 and, in some embodiments, uses a remote presentation protocol client engine 214 to regenerate the display data. Processing the protocol data stream 208 allows the second device 212 to present a display to a user through the display 216. The second device 212 may use the remote presentation protocol client engine 214 to process the display data. The display includes, without limitation, audio, visual, tactile, or olfactory presentations, or combinations of these.
The recorder 206 intercepts the protocol data stream 208 sent from the first device 202 to the second device 212. In one embodiment, the recorder 206 intercepts the protocol data stream 208 by intercepting one or more channels. In another embodiment, the recorder 206 intercepts the protocol data stream 208 by intercepting one or more virtual channels. In some embodiments, the recorder 206 monitors one or more virtual channels over which the first device 202 may transmit the protocol data stream 208 to the second device 212. The recorder 206 copies at least one packet from the protocol data stream. In one embodiment, the recorder 206 determines to copy a particular packet of the protocol data stream responsive to a policy. In some embodiments, the policy defines the packets the recorder 206 records based upon the type of data contained within the packet. In other embodiments, the recorder 206 determines to copy a packet of the protocol data stream based upon a determination of whether the packet contains data. In some of these embodiments, the recorder 206 does not record empty packets while in others of these embodiments, the recorder 206 does record empty packets. In some embodiments, the recorder 206 records every packet in the protocol data stream 208.
The recorder 206 creates a recorded protocol data stream 210 using the at least one copied packet. In one embodiment, the recorder 206 associates information with the at least one copied packet. In one embodiment, the recorder 206 associates a time stamp with the at least one copied packet. In another embodiment, the recorder 206 associates a data length indicator with the packet. For embodiments where the recorder 206 associates information with the at least one copied packet, for example time stamps or data length indicators, the recorder 206 may embed this information into the recorded protocol data stream 210 in addition to the packet or the recorder 206 may embed this information directly into the packet, or the recorder 206 may store the association in a location separate from the packet and the recorded protocol data stream 210.
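A minimal sketch of one possible recorder write path follows, assuming a simple on-disk layout in which each copied packet is prefixed with a time stamp and a data length indicator; the layout and the `should_copy` policy hook are illustrative assumptions rather than the recorder's actual format.

```python
import struct
import time

def record_packet(outfile, packet_bytes, should_copy=lambda p: len(p) > 0):
    # Copy one intercepted packet into the recording, prefixed with a
    # time stamp and a data length indicator (one possible on-disk layout).
    if not should_copy(packet_bytes):
        return  # e.g., a policy that declines to record empty packets
    outfile.write(struct.pack("<dI", time.time(), len(packet_bytes)))
    outfile.write(packet_bytes)
```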
As depicted in
The recorder 206 creates the recorded protocol data stream 210 using the at least one copied packet and, in some embodiments, information associated with the at least one copied packet. In some embodiments, the recorder 206 stores the recording of the protocol data stream 210 after creating it. In some of these embodiments, the recorder 206 stores the recording of the protocol data stream 210 to a storage element 218. The storage element 218 may comprise persistent storage, such as a hard drive, floppy drive, CD-RW, DVD-RW, or any other device, which maintains data state when power is removed. In other embodiments, the storage element may comprise one or more volatile memory elements, such as Static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Dynamic random access memory (DRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Enhanced DRAM (EDRAM), synchronous DRAM (SDRAM), JEDEC SRAM, PC100 SDRAM, Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Direct Rambus DRAM (DRDRAM), or Ferroelectric RAM (FRAM).
In one embodiment, the storage element comprises a network storage device. The storage element 218 may reside on the first device 202 or on a second device 212. In other embodiments, the storage element 218 resides on a third device, such as a proxy server computing device or a passthrough server computing device. In still other embodiments, the storage element 218 resides on a network and the recorder 206 accesses the storage element 218 over the network to store the recording of the protocol data stream 220. In other embodiments, the recorder 206 stores the recording of the protocol data stream on the same device on which the recorder 206 resides.
Referring now to
In some embodiments, the protocol engine 502 comprises a packet reader 508 and a display data regeneration element 510. In these embodiments, the packet reader 508 reads at least one copied packet from the recording of the protocol data stream 506. In some embodiments, the packet reader 508 reads the at least one copied packet sequentially from the recording of the protocol data stream 506.
The protocol engine 502 processes the at least one copied packet and any information associated with the at least one copied packet. The protocol engine 502 uses, in some embodiments, a display data regeneration element 510 for the processing. The packet contains data enabling the regeneration of a perceptible display presented to a user. In some embodiments, a second device 212 processed this data, as shown in
In some embodiments, the protocol engine 502 resides on the first device 202. In other embodiments, the protocol engine 502 resides on the second device 212. In still other embodiments the protocol engine resides on a third device, such as a proxy server computing device or a passthrough server computing device.
Referring ahead now to
The recorder 702 includes, in one embodiment, a protocol data stream interceptor 704, a packet copier 706, and a recording generator 708. In one embodiment, the recorder 702 uses the protocol data stream interceptor 704 to monitor the protocol data stream 710. In another embodiment, the recorder 702 uses the protocol data stream interceptor 704 to intercept a protocol data stream 710 comprising a plurality of packets transmitted from a first device 202 to a second device 212. The packet copier 706 copies at least one packet of the protocol data stream. The packet copier 706 determines whether or not to copy a packet in the protocol data stream. In some embodiments, the packet copier 706 makes this determination responsive to a policy. In these embodiments, the packet copier 706 may determine to copy a packet based on whether or not the packet contains any data or on the type of data contained within the packet.
In one embodiment, the recorder 702 utilizes a recording generator 708 to create a recording of the protocol data stream using the at least one copied packet. The recording generator assembles the at least one copied packet into a recording 712 of the protocol data stream 710. In some embodiments, the recording generator 708 embeds information into the recording of the protocol data stream. This information may comprise, without limitation, time references indicating when to regenerate the display data represented by the data contained within the packet, data length indicators descriptive of the data contained within the packet, or other types of information used to regenerate the display data represented by the data contained within the protocol data stream 710.
Referring back to
In one embodiment, depicted in shadow by
In another embodiment depicted by
In one embodiment, the recorder 702, protocol engine 502, or storage element 714 may be located, together or separately, on the first device 202. In other embodiments, they may be located, together or separately, on the second device 212. In still other embodiments, they may reside, together or separately, on a third device, such as a proxy server computing device, a network packet sniffer, or a passthrough server computing device. In yet other embodiments, the storage element 714 may reside on a storage area network separately from the recorder 702 and the protocol engine 502.
Referring back to
A recorder 206 intercepts a protocol data stream 208 comprising a plurality of packets, representing display data transmitted from a first device 202 to a second device 212. The recorder 206 copies at least one packet of the protocol data stream 208. The recorder 206 creates a recording of the protocol data stream using the at least one copied packet. The recorder 206, in some embodiments, associates information with the at least one copied packet. The information may comprise a time stamp or a data length indicator. In some of these embodiments, the recorder 206 embeds the information associated with the packet into the recording of the protocol data stream 210. In others of these embodiments, the recorder 206 stores the information associated with the packet in a separate protocol data stream. In still others of these embodiments, the recorder stores the information associated with the packet in a data store. A protocol engine 502 reads the at least one copied packet from the recording of the protocol data stream 210 and uses information associated with the at least one copied packet to regenerate the display data represented by the protocol data stream 210.
Referring ahead now to
Referring now to
The state-snapshot 1104 enables regeneration of display data because it stores a state of a protocol engine rendering the protocol data stream 1110 at a point in time when a recorder 206 copied at least one packet from the protocol data stream 208 into the recording of the protocol data stream 1110. In one embodiment, the state-snapshot 1104 comprises a data structure describing a state of a screen at a point in time. In another embodiment, the state-snapshot 1104 represents all the variables, images and data components that make up the state of a protocol engine at a reference point in the protocol data stream 1110. The foreground protocol engine 1106 also receives the recording of the protocol data stream 1110 and renders the contents of the at least one packet in the protocol data stream 1110 by recreating the state of the protocol engine which originally rendered the protocol data stream 1110. In one embodiment, the foreground protocol engine 1106 uses the contents of the state-snapshot 1104 to render the contents of the at least one packet.
In one embodiment, the state-snapshot 1104 comprises a data structure. In other embodiments, the state-snapshot 1104 comprises a database. In one embodiment, the contents of the state-snapshot 1104 include display data regarding the state of a visible surface. In another embodiment, the contents of the state-snapshot 1104 include display data regarding the state of an off-screen surface. In yet another embodiment, the contents of the state-snapshot 1104 include display data regarding the state of a drawing object. In some embodiments, the contents of the state-snapshot 1104 include display data regarding the state of a color palette. In other embodiments, the contents of the state-snapshot 1104 include display data regarding the state of a cached object. In still other embodiments, the contents of the state-snapshot 1104 include display data regarding the state of a buffer.
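The following sketch illustrates one plausible shape for such a state-snapshot; the field names are chosen for exposition and are assumptions rather than a definitive layout.

```python
from dataclasses import dataclass, field

@dataclass
class StateSnapshot:
    # Illustrative container for the protocol-engine state captured at a
    # reference point in the stream; all field names are assumptions.
    timestamp: float              # recording time at which the state applies
    packet_index: int             # packet at which the snapshot was taken
    visible_surface: bytes = b""  # serialized on-screen surface state
    off_screen_surfaces: dict = field(default_factory=dict)
    drawing_objects: dict = field(default_factory=dict)
    color_palette: list = field(default_factory=list)
    cached_objects: dict = field(default_factory=dict)
    buffers: dict = field(default_factory=dict)
```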
The foreground protocol engine 1106 receives the recording of the protocol data stream 1110 and uses the state-snapshot 1104 to identify a packet containing the representation of the requested digital data and to render the packet. In some embodiments, the foreground protocol engine 1106 generates a real-time perceptible representation of the recording of the protocol data stream 1110 for presentation to a viewer using the display 1108. In some embodiments, the foreground protocol engine 1106 generates the real-time perceptible representation by rendering the contents of at least one packet in the protocol data stream 1110. The perceptible representation may include, without limitation, separately or together, audio, visual, tactile, or olfactory presentations.
In one of the embodiments in which the foreground protocol engine 1106 renders the contents of at least one packet in the protocol data stream 1110, the foreground protocol engine 1106 initiates rendering the contents of at least one packet in the protocol data stream 1110 simultaneous to the rendering by the background protocol engine 1102. However, the background protocol engine 1102 renders only to a buffer and completes the rendering and the generation of the at least one state-snapshot 1104 prior to the completion of the real-time perceptible rendering initiated by the foreground protocol engine 1106, which, in one embodiment, renders to both a buffer and in a perceptible manner. In one embodiment, the background protocol engine 1102 renders the protocol data stream 1110 at a maximum possible speed regardless of any timestamps associated with the recording which would otherwise specify a time for rendering. Therefore, at least one state-snapshot 1104 is available to the foreground protocol engine 1106 during its generation of a real-time perceptible representation of the recording of the protocol data stream 1110.
In one embodiment, the foreground protocol engine 1106 renders the contents of the plurality of packets within the recording of the protocol data stream 1110 in a sequential manner. In this embodiment, the display data rendered and presented to the user presents the display in the order in which it occurred at the time the protocol data stream was recorded. The recording of the protocol data stream 1110 may include information, such as time stamps, for use by the foreground protocol engine 1106 in rendering the display data sequentially. In some embodiments, the foreground protocol engine 1106 renders the display data in real-time. When the foreground protocol engine 1106 receives a request to regenerate a particular display data represented by a particular packet in the recording of the protocol data stream 1110, the foreground protocol engine 1106 renders the requested display data using the contents of the identified state-snapshot 1104.
In some embodiments, the background protocol engine 1102 and the foreground protocol engine 1106 reside on the same device. In other embodiments, the background protocol engine 1102 and the foreground protocol engine 1106 reside on separate devices.
Referring back now to
In one embodiment, the foreground protocol engine 1106 receives a request to render the contents of a packet in a recording of a protocol data stream 1110. The protocol data stream 1110 comprises a plurality of packets whose contents represent display data. In some embodiments, the request results when the foreground protocol engine 1106 regenerates display data by rendering the contents of a packet in a recording of a protocol data stream 1110 to a viewer using the display 1108 and the viewer wishes to seek for a particular display data.
The foreground protocol engine 1106 identifies a state-snapshot 1104 having an associated timestamp not later than a time stamp associated with the requested packet. The foreground protocol engine 1106 displays the display data represented by the contents of the requested packet responsive to the identified state-snapshot 1104. In one embodiment, the identified state-snapshot 1104 indicates the exact packet from the protocol data stream 1110 whose contents the foreground protocol engine 1106 may render to provide the user with the requested display data.
In other embodiments, the identified state-snapshot 1104 comprises a state of a protocol engine rendering the protocol data stream at a point in time when a recorder copied a packet from the protocol data stream 1110 but the display data represented by the contents of the copied packet precede the display data requested by the viewer. In some of these embodiments, there are multiple packets between the state-snapshot and the packet containing the representation of the requested display data. In some of those embodiments, the foreground protocol engine 1106 renders the contents of the intermediate packet or packets only to an off-screen buffer. The foreground protocol engine 1106 then renders the packet whose contents represent the display data both to an off-screen buffer and to the user in a perceptible manner. In one embodiment, the foreground protocol engine 1106 presents the display data represented by the contents of the intermediate packets in a perceptible manner prior to the display data represented by the contents of the requested packet.
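A minimal sketch of this seek procedure follows, assuming snapshots sorted by timestamp and three engine callbacks (`restore`, `render_off_screen`, `render_on_screen`) that stand in for the state-restoration and rendering operations described above.

```python
import bisect

def seek(snapshots, packets, target_time,
         restore, render_off_screen, render_on_screen):
    # Locate the latest state-snapshot whose timestamp is not later than
    # the seek target, restore the engine state it captures, then replay
    # intermediate packets only to an off-screen buffer.
    times = [s.timestamp for s in snapshots]   # sorted ascending
    snap = snapshots[max(bisect.bisect_right(times, target_time) - 1, 0)]
    restore(snap)                              # recreate the engine state
    for pkt in packets[snap.packet_index:]:
        if pkt.timestamp >= target_time:
            render_on_screen(pkt)              # the requested display data
            break
        render_off_screen(pkt)                 # imperceptible catch-up
```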
Referring now to
The background protocol engine 1102 receives a recording of a protocol data stream 1110 comprising a plurality of packets (step 1002). The background protocol engine 1102 generates a representation of the recording of the protocol data stream. In one embodiment, the background protocol engine 1102 generates the representation by rendering the contents of the plurality of packets to a buffer. In some embodiments, the buffer is an off-screen buffer.
In some embodiments, the foreground protocol engine 1106 also receives the recording of the protocol data stream 1110. In these embodiments, the foreground protocol engine 1106 generates a human-perceptible representation of the recording of the protocol data stream, although, as discussed above, the foreground protocol engine 1106 renders both to an off-screen buffer and in a perceptible manner (step 1004). In one of these embodiments, the foreground protocol engine 1106 generates a human-perceptible representation of the recording of the protocol data stream 1110 by rendering the contents of the plurality of packets substantially simultaneously with the background protocol engine 1102 generating at least one state-snapshot during its reading of the recording of the protocol data stream.
After the reading of the at least one packet in the recording of the protocol data stream 1110, the background protocol engine 1102 generates at least one state-snapshot (step 1006). In one embodiment, the background protocol engine 1102 generates at least one state-snapshot during a sequential reading of the recording of the protocol data stream 1110. In another embodiment, the background protocol engine 1102 reads the at least one packet in the recording of the protocol data stream 1110 substantially simultaneously with a rendering of the contents of the packet to a buffer. In one embodiment, the background protocol engine 1102 then stores the generated state-snapshot 1104 (step 1008). In embodiments where the background protocol engine 1102 generates multiple state-snapshots periodically, the state-snapshots may act as markers throughout the recording of the protocol data stream 1110, assisting in the location of a particular point in time in the protocol data stream 1110 and of the packets that come before or after the state-snapshot 1104.
Referring ahead to
During a presentation of a representation of a recording of a protocol data stream 1110 to a user (step 1202), a background protocol engine 1102 monitors an activity of the user (step 1204). In one embodiment, the foreground protocol engine 1106 generates the representation of the recording of the protocol data stream 1110 and presents it to the user with the display 1108. In other embodiments, the background protocol engine 1102 generates the representation. In still other embodiments, a third device generates the representation.
The background protocol engine 1102 monitors an activity of the user during the presentation (step 1204). By monitoring the activity of the user, the background protocol engine 1102 develops an activity profile responsive to the monitoring of the activity (step 1206). The background protocol engine generates at least one state-snapshot 1104 responsive to the developed activity profile (step 1208).
In some embodiments, the background protocol engine 1102 identifies a level of activity of the user. In some embodiments, the background protocol engine 1102 identifies a period of inactivity. In other embodiments, the background protocol engine 1102 identifies an area of interest to the user in the display data. The activity profile reflects these identifications.
The background protocol engine 1102 generates at least one state-snapshot responsive to the activity profile. In some embodiments, the background protocol engine 1102 determines to extend an interval between one or more state-snapshots. In other embodiments, the background protocol engine 1102 determines to reduce an interval between one or more state-snapshots. In still other embodiments, the background protocol engine 1102 determines to remove the at least one state-snapshot, responsive to the activity profile. In still other embodiments, the background protocol engine 1102 determines to add at least one state-snapshot, responsive to the activity profile.
In one embodiment, the background protocol engine 1102 identifies a predicted statistical distribution of seek probabilities.
For embodiments with new users or users without a distinguishable usage pattern, the background protocol engine 1102 applies a default state-snapshot generation pattern. This pattern assumes most seeking will occur close to the current frame in either direction, while long-range seek performance need only be satisfactory. The typical user will demand high performance when jogging back and forth around the current frame, as many small seek steps can be achieved with a jog wheel input device. Seeking long range is less common, and noticeable delays may be an acceptable trade-off.
If the user strays from their recognized usage pattern, the background protocol engine 1102 adjusts the state-snapshot generation pattern during live playback without the user's knowledge. The background protocol engine 1102 moves state-snapshot positions to adjust for the new usage pattern. For example, if a user that normally seeks in small steps with the mouse wheel begins seeking longer range, the background protocol engine 1102 reduces the number of state-snapshots around the current frame to free resources for adding state-snapshots within the areas at longer range.
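One way such a state-snapshot placement might be computed is sketched below; the function, its parameters, and the bias heuristic are assumptions for exposition, not the engine's actual policy.

```python
def place_snapshots(total_packets, budget, current_index, long_range_bias=0.2):
    # Dense snapshots near the current frame serve jog/shuttle seeking;
    # the remainder are spread evenly for long-range seeks. The bias would
    # grow as the engine observes more long-range seeking by this user.
    near = int(budget * (1.0 - long_range_bias))
    far = budget - near
    # Cluster `near` snapshots tightly around the current frame ...
    positions = {min(max(current_index + step, 0), total_packets - 1)
                 for step in range(-near // 2, near // 2 + 1)}
    # ... and spread the remainder evenly across the whole recording.
    positions |= {i * total_packets // (far + 1) for i in range(1, far + 1)}
    return sorted(positions)
```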
Referring ahead now to
In one embodiment, the protocol engine comprises a protocol engine 502, as described in
In one embodiment, the protocol engine determines for a packet in the recorded stream to display the packet in a human-perceptible manner (step 1804). The display includes, without limitation, audio, visual, tactile, or olfactory presentations, or combinations of these. In some embodiments, the protocol engine determines to display a packet responsive to the contents of the packet. In one of these embodiments, the protocol engine makes the determination responsive to an indication of an application program having input focus. In another of these embodiments, the protocol engine makes the determination responsive to an evaluation of a type of user input stored in the packet. In some of these embodiments, the protocol engine makes the determination responsive to an evaluation of a type of graphics update stored by the packet. In others of these embodiments, the protocol engine makes the determination responsive to an evaluation of a type of interaction sequence stored by the packet.
In one embodiment, the protocol engine stores the determination in a playback data structure (step 1806). In some embodiments, a playback data structure describes how to regenerate the display data contained within the recorded stream. In one embodiment, the instructions stored within the playback data structure control the process of rendering display data. In one embodiment, the playback data structure comprises a time for rendering the contents of a packet in the recorded stream. In this embodiment, the time contained in the playback data structure is used for rendering the contents of the packet and not a time of rendering associated with the packet in the recording, if any. In one embodiment, the playback data structure accepts user input in changing the time of rendering.
In some embodiments, the playback data structure comprises metadata that describes how to perform one or more playbacks of a recorded session. In one embodiment, the playback data structure consists of a record for each packet in the recorded stream, indicating at what relative point in time the contents of that packet should be rendered during playback. In some embodiments, the metadata also contains the offset within the file of the start of the packet.
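A minimal sketch of such a per-packet record follows; the field names are illustrative assumptions reflecting the description above (relative render time plus file offset, with a flag for perceptible rendering).

```python
from dataclasses import dataclass

@dataclass
class PlaybackRecord:
    # One record per packet in the recorded stream; names are illustrative.
    file_offset: int          # offset within the file of the packet start
    render_time_ms: int       # relative point in time to render the contents
    perceptible: bool = True  # False directs rendering to a buffer only

# A playback data structure is then an ordered list of PlaybackRecord
# instances that the foreground protocol engine walks during playback.
```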
Referring back now to
In one embodiment, the background protocol engine 1704 and the foreground protocol engine 1708 each receive the recorded stream 1714. In this embodiment, the background protocol engine 1704 generates the playback data structure substantially simultaneously with the foreground protocol engine 1708 rendering the recorded stream.
In one embodiment, the foreground protocol engine 1708 resides on the first device 1702. In another embodiment, shown in shadow in
In one embodiment, the background protocol engine stores in the playback data structure at least one instruction for rendering at least one packet in the recorded stream. In another embodiment, the background protocol engine stores metadata in the playback data structure. In yet another embodiment, the background protocol engine stores in the playback data structure a record indicating a time to render at least one packet in the recorded session.
The foreground protocol engine 1708 renders at least one packet in the recorded session responsive to the playback data structure. In one embodiment, the foreground protocol engine renders at least one packet in the recorded session in a human-perceptible manner and to a buffer. In another embodiment, the foreground protocol engine renders at least one packet in the recorded session to a buffer.
Referring ahead to
In one embodiment, the protocol engine determines for a packet in the recorded stream to display the packet in a human-perceptible manner (step 1904). The display includes, without limitation, audio, visual, tactile, or olfactory presentations, or combinations of these. In some embodiments, the protocol engine determines to display a packet responsive to the contents of the packet. In one of these embodiments, the protocol engine makes the determination responsive to an indication of an application program having input focus. In another of these embodiments, the protocol engine makes the determination responsive to an evaluation of a type of user input stored in the packet. In some of these embodiments, the protocol engine makes the determination responsive to an evaluation of a type of graphics update stored by the packet. In others of these embodiments, the protocol engine makes the determination responsive to an evaluation of a type of interaction sequence stored by the packet. In one embodiment, the protocol engine stores the determination in a playback data structure (step 1906).
In one embodiment, the foreground protocol engine receives the recorded session. In other embodiments, the foreground protocol engine retrieves the recorded session. In some of these embodiments, the foreground protocol engine retrieves the recorded session from a storage element.
In one embodiment, the foreground protocol engine retrieves at least one packet from the recorded stream (step 1908). In this embodiment, the foreground protocol engine then accesses the playback data structure (step 1910) and renders the contents of the packet responsive to the playback data structure (step 1912). In some embodiments, the playback data structure contains an instruction to render the contents of the packet in a perceptible manner. In one of these embodiments, the foreground protocol engine renders the contents of the packet on-screen. In some embodiments, the foreground protocol engine always renders the contents of the at least one packet to a buffer. In many embodiments, when the foreground protocol engine renders the contents of a packet to a buffer, it is an off-screen buffer. In one of these embodiments, the foreground protocol engine renders the contents of the packet to an off-screen buffer and also renders the contents of the packet on-screen, as directed by the playback data structure.
In other embodiments, the playback data structure comprises an instruction not to render the contents of the packet in a perceptible manner. In one of these embodiments, upon accessing the playback data structure, the foreground protocol does not render the contents of the packet in a perceptible manner but does render the contents of the packet to a buffer.
For embodiments in which the foreground protocol engine renders the contents of a packet only to an off-screen buffer, responsive to the playback data structure, the foreground protocol engine perceptibly regenerates display data differing from the recorded stream. This results, in one embodiment, in a presentation of display data shorter than the original recorded stream. In some embodiments, the rendered contents of the packets provide a streamlined regeneration of the original display data. In other embodiments, the rendered contents of the packets provide a customized version of the display data. In one embodiment, the determination to render the contents of the packet in a perceptible manner is responsive to a policy or user request. These embodiments provide users with control over the playback of the recorded session.
Referring ahead now to
One embodiment of a method to eliminate perceptible intervals of time with no activity is as follows. A first packet in a recorded session is identified. The recorded session comprises a plurality of packets representing display data. The nearest previous packet to the first packet in the recorded session is identified as a second packet. A first time interval is determined, the time interval occurring between said first packet and said second packet. A determination is made that the first time interval exceeds a threshold. The contents of the packets in the recorded session are rendered with a second time interval between said first packet and said second packet shorter than the first time interval.
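A minimal sketch of this interval contraction follows, assuming per-packet timestamps and treating the threshold and the shortened interval as policy values; neither constant is drawn from the specification.

```python
IDLE_THRESHOLD = 3.0   # seconds; assumed here, set by policy in practice
SHORT_INTERVAL = 0.2   # rendered in place of any idle interval

def contract_idle_intervals(packets):
    # Yield (packet, render_delay) pairs, replacing any inter-packet gap
    # exceeding the threshold with a much shorter rendered interval.
    prev_time = None
    for pkt in packets:
        gap = 0.0 if prev_time is None else pkt.timestamp - prev_time
        yield pkt, (SHORT_INTERVAL if gap > IDLE_THRESHOLD else gap)
        prev_time = pkt.timestamp
```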
In one embodiment, a protocol engine makes the determinations. In some embodiments, the protocol engine stores the determinations in a playback data structure. In one embodiment, the same protocol engine renders the recorded session responsive to the playback data structure. In another embodiment, the protocol engine making the determinations comprises a background protocol engine and the protocol engine rendering the recorded session comprises a foreground protocol engine.
In one embodiment, when the protocol engine determines that the time interval exceeds the threshold, the protocol engine categorizes the time interval as a perceptible time interval. A time interval is perceptible if a user of the regenerated recorded session can perceive that a period of time lacking activity has elapsed. In some embodiments, a policy determines the threshold. In other embodiments, the protocol engine is hard coded with a predefined threshold. In this embodiment, the protocol engine stores an instruction in the playback data structure to render a shorter time interval between the first and second packets instead of the original time interval. In another embodiment, the protocol engine determining that the time interval exceeds the threshold also renders the contents of the recorded session. In this embodiment, the protocol engine does not store the instruction to render the shorter time interval in the playback data structure. For a time interval not categorized as perceptible, no shortened time interval is needed and the original time interval is rendered between the first and second packets.
Referring back now to
In one embodiment, the type of input stored by a packet determines whether or not the packet will be rendered. In one embodiment, the packet contains no content. In some embodiments, at least one packet contains no content. In these embodiments, an interval of time comprised of at least one packet containing no content is identified. In some of these embodiments, the interval of time will not be rendered.
In some embodiments, the type of input refers to input from certain types of input devices, including, without limitation, a keyboard, a mouse, a microphone, or a camera. In one embodiment, the step of identifying the type of input further comprises identifying the type of input as input from an input device. In another embodiment, the step of identifying the type of input further comprises identifying the type of input as keyboard input. In other embodiments, the type of input is not related to the input device. In one of these embodiments, the type of input is identified as a command.
The packet containing the input is marked responsive to the type of input it contains (step 2004). In one embodiment, the packet is marked responsive to a policy. In this embodiment, a policy determines the types of input which result in a packet being marked. In another embodiment, no marking is required.
A destination for rendering the packet is stored in a playback data structure responsive to the marking (step 2006). In some embodiments, the destination comprises a buffer. In one embodiment, an instruction is stored in the playback data structure, directing rendering of the packet to the buffer. In one embodiment, the buffer is an off-screen buffer and when the contents of the packet are rendered to the buffer they are not perceptible to a user of the rendering. In one embodiment, an instruction is stored in the playback data structure, directing rendering of the marked packet both in a perceptible manner and to a buffer.
In one embodiment, the method eliminates perceptible intervals of time containing no meaningful activity. In this embodiment, a policy identifies a particular type of input as meaningful or as insignificant. The policy may be hard coded into a protocol engine, in some embodiments. In other embodiments, an administrator configures the policy.
In some embodiments, a protocol engine identifies a packet as insignificant if the packet contains no content. In some of those embodiments, the packet represents an interval of time in which no user activity occurred to be recorded into the recorded stream 1714. In these embodiments, the protocol engine stores in a playback data structure a destination for rendering each of the plurality of packets in the recorded stream in such a way that any insignificant packet does not render in a perceptible manner.
In some embodiments, the protocol engine identifies an input type responsive to previously defined input types comprising provably insignificant time. In some embodiments, insignificant time includes an interval of time in which no packet contains any content. In other embodiments, a policy defines the input types which constitute insignificant time. In still other embodiments, a definition of an input type comprising provably insignificant time is hard coded into the protocol engine.
In some embodiments, the contents of a packet represent user activity but a policy identified the activity as insignificant activity. In one of these embodiments, the policy defines an insignificant activity as activity deemed to be of no interest to a user of the regenerated recorded session. In another of these embodiments, meaningful packets contain contents of interest to a user of the regenerated recorded session, as determined by the policy. In one embodiment, an insignificant packet has no content representing input meaningfully interacting with an application. In another embodiment, the device transmitting application data in the protocol data stream from which the recorded stream was created transmitted no meaningful screen updates.
In one embodiment, the protocol engine determines for at least one packet in the recorded session whether the contents of the packet include types of input such as, without limitation, keyboard input, mouse input, or command messages. If the packet does contain a type of input such as keyboard input, the protocol engine marks the packet as a meaningful packet. If the packet does not contain that type of input, the protocol engine marks the packet as insignificant. In one embodiment, the packet is insignificant only if all of its contents are insignificant. In another embodiment, a packet contains more than one type of input each of which may be marked as meaningful or insignificant.
In one embodiment, when the protocol engine marks a packet as insignificant, the protocol engine determines that the contents of the packet should not render in a perceptible manner. In some embodiments, the protocol engine determines instead that the contents of the packet should render to a buffer. In one of these embodiments, the buffer is an off-screen buffer. If the packet is marked as a meaningful packet, the protocol engine determines, in one embodiment, that the contents of the packet should render in a perceptible manner. In some embodiments, a perceptible manner comprises rendering on-screen. In one embodiment, the protocol engine determines that the packet should render both in a perceptible manner and to a buffer. In this embodiment, the contents of the packet render both to an on-screen display and to an off-screen buffer. The protocol engine stores the determination in the playback data structure.
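The following sketch illustrates one possible form of this determination, assuming `input_type` and `payload` attributes on each packet and a sample policy set of meaningful input types; both are assumptions for exposition.

```python
MEANINGFUL_INPUT_TYPES = {"keyboard", "mouse", "command"}   # sample policy

def destination_for(packet):
    # Mark the packet responsive to its input type and return a rendering
    # destination to store in the playback data structure.
    if not packet.payload:
        return "buffer"                 # empty packets never render visibly
    if packet.input_type in MEANINGFUL_INPUT_TYPES:
        return "screen_and_buffer"      # perceptible plus off-screen copy
    return "buffer"                     # insignificant: off-screen only
```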
In one embodiment, depicted in
Referring now to
In one embodiment, a protocol engine makes the determinations. In some embodiments, the protocol engine stores the determinations in a playback data structure. In one embodiment, the same protocol engine renders the recorded session responsive to the playback data structure. In another embodiment, the protocol engine making the determinations comprises a background protocol engine and the protocol engine rendering the recorded session comprises a foreground protocol engine.
In some embodiments, the protocol engine makes the determination of the first time interval (step 2302) and whether or not the first time interval exceeds a threshold (step 2304) after a packet has been marked as a meaningful packet responsive to the type of input contained in the packet. In one of these embodiments, the type of output contained in the packet impacts the determination to mark the packet. In one embodiment, the protocol engine determines the time interval between the packet marked as meaningful and the nearest previous meaningful packet, or the start of the recording if there are no previous meaningful packets. In another embodiment, the protocol engine renders the contents of the recorded session with a second time interval between the marked packet and a previous packet said second time interval comprising a shorter time interval than the first time interval. In another embodiment, the protocol engine renders the contents of the recorded session with a second time interval between the marked packet and a packet following the marked packet, said second time interval comprising a shorter time interval than the first time interval.
In one embodiment, when the protocol engine determines that the time interval exceeds the threshold (step 2304), the protocol engine categorizes the time interval as a perceptible time interval. A time interval is perceptible if a user of the regenerated recorded session can perceive that a period of time lacking activity has elapsed. In some embodiments, a policy determines the threshold. In other embodiments, the protocol engine is hard coded with a predefined threshold. In this embodiment, the protocol engine stores an instruction in the playback data structure to render a shorter time interval between the two meaningful packets instead of the original time interval. In another embodiment, the protocol engine determining that the time interval exceeds the threshold also renders the contents of the recorded session. In this embodiment, the protocol engine does not store the instruction to render the shorter time interval in the playback data structure. For a time interval not categorized as perceptible, no shortened time interval is needed and the original time interval is rendered between the two meaningful packets.
In some embodiments, contents of a packet in the recorded stream represent graphics updates affecting a screen region. In one embodiment, the graphics updates include, without limitation, flashing system tray icons, title bars or task bar entries, blinking text in web pages or applications, clock displays, system animations, application animations, and stock tickers and other periodically updated information displays. In some embodiments, graphics updates such as these are determined to be insignificant to a user of a regeneration of the recorded stream. In one of these embodiments, a protocol engine comprises this determination. In another of these embodiments, a policy defines at least one graphics update as insignificant. In this embodiment, an administrator generates the policy. In another embodiment, a user of the regeneration of the recorded stream generates the policy.
Referring now to
In one embodiment, a protocol engine performs the steps depicted by
In one embodiment, the protocol engine identifies a second graphics update affecting the screen region within a time interval. In some embodiments, a policy determines the length of the time interval. In one of these embodiments, the policy determines a time interval approximating the upper limit of human scale cyclic periods used by applications and operating systems. In one embodiment, when a region of the screen goes through a cyclic display, at a period designed to be viewed by the user (for example, a significant fraction of a second up to several seconds), the display comprises a human scale cyclic period. In some embodiments, the protocol engine comprises a definition of the length of the time interval.
In an embodiment where the protocol engine identifies a second graphics update affecting the screen region affected by the first graphics update, the protocol engine determines whether the state of the screen region after the second graphics update varies from the state of the screen region after the first graphics update. If the screen region does not vary after the second graphics update, the second graphics update need not render in the regenerated recorded session. A screen graphics update in this embodiment need not render since the protocol engine determined that the graphics update is performing a cycle of drawing commands at human-scale speeds, making the update observable to a user of the regenerated recorded session, but the graphics update carries insignificant information for the user. In some embodiments, the graphics update affects the screen region by drawing, without limitation, a caret flashing, a flashing taskbar icon, a network activity indicator, or scrolling text. In some embodiments, a policy determines that affecting a screen region with that type of graphics update does not constitute a meaningful activity and should not render in the regeneration of the recorded session for a user. In other embodiments, the protocol engine comprises this determination.
In one embodiment, an indication of a destination for rendering the second packet containing the second graphic update affecting the screen region is stored in a playback data structure, responsive to whether the screen region varies after the second graphics update. In another embodiment, an indication of a time interval to render associated with the second packet containing the second graphic update affecting the screen region is stored in a playback data structure, responsive to whether the state of the screen region after the second graphics update varies from the state of the screen region after the first graphics update.
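One plausible form of this cyclic-update suppression is sketched below, assuming each graphics update carries the affected region, the resulting region state, and a timestamp; the attribute names and the window constant are assumptions for exposition.

```python
def filter_cyclic_updates(updates, cycle_window=2.0):
    # Suppress a graphics update when an earlier update within the window
    # left the same screen region in an identical state (a flashing caret,
    # a blinking taskbar icon, and the like).
    last_state = {}   # region -> (pixels_after, timestamp)
    for up in updates:
        prev = last_state.get(up.region)
        cyclic = (prev is not None
                  and up.timestamp - prev[1] <= cycle_window
                  and up.pixels_after == prev[0])
        last_state[up.region] = (up.pixels_after, up.timestamp)
        if not cyclic:
            yield up   # only updates that change the region render visibly
```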
In some embodiments, the contents of a plurality of packets represent a graphics update. In one of these embodiments, a determination to render a graphics update in a perceptible manner is made responsive to the effects of more than two graphics updates on a screen region. In one embodiment, the determination of a destination for rendering a graphics update is responsive to the graphics update represented by the contents of each packet in the identified plurality of packets.
In some embodiments, contents of a packet in the recorded stream represent an interaction sequence. In one embodiment, the interaction sequence comprises, without limitation, a logon sequence, a logoff sequence, or the entering of credentials. In some embodiments, interaction sequences such as these are determined to be insignificant to a user of a regeneration of the recorded stream. In one of these embodiments, a protocol engine makes this determination. In another of these embodiments, a policy defines at least one interaction sequence as insignificant. In this embodiment, an administrator generates the policy. In another embodiment, a user of the regeneration of the recorded stream generates the policy.
Referring now to FIG. 26, a flow diagram depicts one embodiment of the steps taken to eliminate interaction sequences in rendering a recorded session.
In one embodiment, a protocol engine makes the identifications and indications to eliminate an interaction sequence. An identification of a start of an interaction sequence is made (step 2602). In one embodiment, the start of the interaction sequence is identified by identifying a visual marker. In one embodiment, a visual marker comprises a credentials window, displayed in the same way for all sessions. In another embodiment, a visual marker comprises a replacement of a credentials window by a blank screen and then by a desktop background. In one embodiment, a visual marker comprises the display of recognizable icons.
In some embodiments, a start of an interaction sequence is identified by determining a start time of an interaction sequence. In one of these embodiments, a component detects the start time of an event in an interaction sequence. In another of these embodiments, the component detects the start time of a logon sequence. In still others of these embodiments, the component detects the start time of a logoff sequence. In one embodiment, the identification of the start of the interaction sequence is responsive to identifying a window with an input focus.
An indication is made in a playback data structure that an interaction sequence should render in a buffer (step 2604). In this embodiment, where an identified interaction sequence should not render perceptibly, the interaction sequence is rendered to a buffer. Rendering the interaction sequence to a buffer results in the interaction sequence being imperceptible to a user of the rendering. For embodiments where a policy or user categorized the interaction sequence as insignificant, this rendering results in the elimination of an insignificant interaction sequence.
An identification of a termination of an interaction sequence is also made (step 2606). In some embodiments, the termination of the interaction sequence is identified by identifying a visual marker. In other embodiments, a termination of an interaction sequence is identified by determining a termination time of the interaction sequence. In one of these embodiments, a component detects the termination time of an event in an interaction sequence. In another of these embodiments, the component detects the termination time of a logon sequence. In still others of these embodiments, the component detects the termination time of a logoff sequence. In another embodiment, identifying the termination of the interaction sequence is responsive to identifying a window with an input focus.
In some embodiments, an interaction sequence comprises use of an application. In one of these embodiments, a policy identifies interaction sequences comprising use of an application that should not render in a perceptible manner. In one embodiment, such applications include, without limitation, word processing applications.
In one of these embodiments, a start of an interaction sequence is identified by identifying an application having input focus. When the contents of a packet represent a window having focus, a determination is made as to the application responsible for the process that created the window. In one embodiment, the contents of the packet representing a window having focus include window notification messages indicating a change in input focus. If the responsible application is one whose interaction sequences should not render perceptibly, the acquisition of focus identifies the start of an interaction sequence, and an indication is stored in a playback data structure to render the interaction sequence to a buffer. A termination of the interaction sequence is identified by identifying the acquisition of focus by a window owned by a process not associated with the application of the interaction sequence.
In one embodiment, a first time interval is associated with the interaction sequence. If that time interval rendered perceptibly while the interaction sequence itself did not, the user of the rendering would perceive a period in which no display data renders and would wait through the interval before the contents of a packet following the interaction sequence render. One embodiment therefore eliminates the time interval associated with the interaction sequence by rendering a shorter time interval in its place. In this embodiment, a first time interval between a packet preceding the identified start of the interaction sequence and a packet following the identified termination of the interaction sequence is identified (step 2608). A playback data structure contains an indication to render a second time interval shorter than the first time interval (step 2610).
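A minimal sketch of steps 2602 through 2610 follows, under assumptions introduced here: each packet is reduced to a timestamp and the application owning the focused window, a policy supplies the set of applications whose interaction sequences should not render perceptibly, and a fixed shortened interval replaces each elided span. The names Packet, PlaybackDirective, and eliminate_interaction_sequence are hypothetical.

    from dataclasses import dataclass
    from typing import List, Optional, Set

    @dataclass
    class Packet:
        timestamp: float
        focused_app: Optional[str]       # application owning the focused window, if any

    @dataclass
    class PlaybackDirective:
        start_index: int
        end_index: int
        destination: str                 # "buffer" renders imperceptibly
        replacement_interval: float      # shortened interval substituted for the span

    def eliminate_interaction_sequence(packets: List[Packet],
                                       hidden_apps: Set[str],
                                       shortened_interval: float = 0.1) -> List[PlaybackDirective]:
        """Identify the start of a sequence when a listed application acquires
        focus (step 2602), direct the sequence to a buffer (step 2604) until
        another application acquires focus (step 2606), and substitute a shorter
        time interval for the elided span (steps 2608-2610)."""
        directives: List[PlaybackDirective] = []
        start = None
        for i, pkt in enumerate(packets):
            in_sequence = pkt.focused_app in hidden_apps
            if in_sequence and start is None:
                start = i                                    # sequence start identified
            elif not in_sequence and start is not None:
                directives.append(PlaybackDirective(start, i - 1, "buffer",
                                                    shortened_interval))
                start = None
        if start is not None:                                # sequence ran to end of stream
            directives.append(PlaybackDirective(start, len(packets) - 1, "buffer",
                                                shortened_interval))
        return directives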
In some embodiments, a protocol engine renders the contents of a packet in a recorded session, providing to a user a regeneration of the recorded session. In some of these embodiments, the protocol engine automatically varies the time intervals between rendering the contents of at least one packet, resulting in context-sensitive time-warped playback. In these embodiments, rendering approximates the ability of the user to comprehend the display data presented to the user. In one embodiment, the time intervals between rendering contents of packets increase when the protocol engine determines the display data represented by the contents of the packets to have an increased level of complexity or importance, as defined by a policy. In another embodiment, the time intervals between rendering contents of packets decrease when the protocol engine determines the display data represented by the contents of the packets to have a decreased level of complexity or importance, as defined by a policy. In these embodiments, the protocol engine approximates the ability of the user to comprehend the display data and either renders the contents more slowly, giving the user time to comprehend the rendering, or renders the contents faster, when the user requires less comprehension time.
Referring now to FIG. 27, a flow diagram depicts one embodiment of the steps taken for automatic time-warped playback in rendering a recorded computer session.
In some embodiments, the protocol engine that determines the measure of complexity, identifies the interval of time, modifies the interval of time, and stores the modification is a background protocol engine. In one of these embodiments, the background protocol engine also renders the recorded stream. In another of these embodiments, a foreground protocol engine renders the recorded stream responsive to the playback data structure. In some embodiments, the background protocol engine and the foreground protocol engine reside on the same device. In other embodiments, the background protocol engine and the foreground protocol engine reside on separate devices.
In some embodiments, the protocol engine determines a measure of complexity represented by at least some of a plurality of packets in the recorded session (step 2704). In some of these embodiments, the protocol engine determines the measure of complexity by identifying likely sequences of typing in keyboard input. In one embodiment, the protocol engine inspects at least one type of key involved to identify likely sequences of typing in keyboard input. In another embodiment, the protocol engine inspects a sequence of at least one glyph rendered to complete a heuristic approximation of likely sequences of typing in keyboard input.
In some of these embodiments, the protocol engine stores classifications of keys determined by characteristics of the key. Key characteristics include, without limitation, printable or non-printable characters, white space, navigation keys, or function keys, and include combinations of characteristics. In one embodiment, a protocol engine determines that sections of input comprising printable characters and occasional navigation keys constitute normal typing, while sections with mostly non-visible keys do not constitute normal typing. In one embodiment, the protocol engine determines a measure of complexity responsive to the amount of white space identified. In this embodiment, the protocol engine comprises a definition of word processing indicating that a white space key appears on average approximately every 5-8 characters in typical typing patterns.
In one embodiment, the protocol engine uses the appearance of non-printable characters to determine the measure of complexity. In another embodiment, the protocol engine accesses the keystroke sequences to identify sequences of non-white space printable characters appearing close together in time. In this embodiment, the protocol engine can compare the keystroke sequences to a dictionary to identify valid words and determine a measure of complexity relating to an ability of a user to comprehend valid words versus invalid words.
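A rough sketch of such a keystroke heuristic follows; the classification rules, the 5-8 characters-per-space band, the scoring weights, and the toy dictionary are illustrative assumptions only.

    import re

    DICTIONARY = {"the", "quick", "brown", "fox"}   # stand-in for a real word list

    def typing_complexity(keys: list) -> float:
        """Score a keystroke sequence: printable characters with white space
        arriving roughly every 5-8 characters suggest normal typing, and
        recognizable dictionary words raise the score because valid words
        demand reviewer comprehension time."""
        printable = [k for k in keys if len(k) == 1 and k.isprintable()]
        if not printable:
            return 0.0                              # navigation or function keys only
        spaces = sum(1 for k in printable if k.isspace())
        chars_per_space = len(printable) / max(spaces, 1)
        typing_like = 5.0 <= chars_per_space <= 8.0
        tokens = re.findall(r"[a-z']+", "".join(printable).lower())
        valid_ratio = (sum(1 for t in tokens if t in DICTIONARY) / len(tokens)) if tokens else 0.0
        base = 1.0 if typing_like else 0.5
        return base * (0.5 + 0.5 * valid_ratio)     # higher score, more comprehension time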
In another embodiment, the protocol engine determines that the contents of the packet contain commands to render glyphs. In this embodiment, the protocol engine uses the glyphs to determine whether the display data represents a user activity of typing. In this embodiment, if a glyph rendering rate approximates the keyboard input rate with a small delay, it is likely that keystrokes are directly resulting in glyphs, thus making it quite likely the user is typing. In one embodiment, the protocol engine correlates the keys entered with the glyphs produced. In another embodiment, the protocol engine determines the spatial sequence (left-to-right, right-to-left, etc.) of the rendered glyphs to determine that a user is typing. In one embodiment, the protocol engine makes the determination of the measure of complexity responsive to the result of analyzing the contents of the plurality of packets and identifying patterns and activities represented by the contents.
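As an illustrative sketch of the rate-correlation idea, the hypothetical function below matches each glyph render time to a preceding keystroke within a small delay; the 0.2-second delay and the 80% match threshold are assumptions for the example.

    from typing import List

    def is_direct_typing(key_times: List[float], glyph_times: List[float],
                         max_delay: float = 0.2) -> bool:
        """Return True when glyph renders track keystrokes with only a small
        delay, suggesting the keystrokes are directly producing the glyphs.
        Both lists hold timestamps in seconds, sorted ascending."""
        if not glyph_times:
            return False
        matched, ki = 0, 0
        for gt in glyph_times:
            # discard keystrokes too old to have produced this glyph
            while ki < len(key_times) and gt - key_times[ki] > max_delay:
                ki += 1
            if ki < len(key_times) and 0 <= gt - key_times[ki] <= max_delay:
                matched += 1
                ki += 1                  # each keystroke explains at most one glyph
        return matched / len(glyph_times) >= 0.8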
In other embodiments, the protocol engine makes the determination of the measure of complexity responsive to an identification of a type of mouse input. In one embodiment, the protocol engine determines that a mouse input representing a click of the mouse causes actions that may need a slower rendering rate to comprehend, especially if the clicks follow a sequence of typing. In another embodiment, the protocol engine determines that mouse input that does not represent a clicking of a mouse does not affect the ability of a user to comprehend display data, and thus does not affect the measure of complexity.
In other embodiments, the protocol engine makes the determination of the measure of complexity responsive to identifying a heuristic approximation of complexity of a graphics update. In one embodiment, the protocol engine identifies a heuristic approximation of complexity of a graphics update based upon, without limitation, the size of region(s) being updated, the size of the area of the region changed by the graphics commands, a historical frequency of updates to individual regions, cyclic graphics commands, number of graphics commands, frequency of graphics commands, time interval between adjacent packets whose contents contain graphics commands, or the type of graphics update. In an embodiment where the protocol engine identifies a low measure of complexity for the graphics update, the protocol engine determines a low measure of complexity represented by the packets containing the graphics updates. In an embodiment where the protocol engine identifies a high measure of complexity for the graphics update, the protocol engine determines a high measure of complexity represented by the packets containing the graphics updates.
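The sketch below combines several of these cues into a single score; the UpdateStats fields and the weights are illustrative assumptions, not values prescribed by the specification.

    from dataclasses import dataclass

    @dataclass
    class UpdateStats:
        region_area: int            # pixels in the updated region(s)
        changed_area: int           # pixels actually altered by the commands
        command_count: int          # number of graphics commands in the update
        updates_per_second: float   # historical update frequency for this region
        cyclic: bool                # region previously observed cycling between states

    def graphics_complexity(s: UpdateStats) -> float:
        """Return a rough complexity score in [0, 1]; higher values suggest the
        reviewer needs more time to comprehend the update."""
        if s.cyclic:
            return 0.0                               # cyclic redraws carry no new information
        coverage = s.changed_area / max(s.region_area, 1)
        churn = min(s.command_count / 50.0, 1.0)
        novelty = 1.0 / (1.0 + s.updates_per_second) # frequently updated regions score lower
        return 0.4 * coverage + 0.3 * churn + 0.3 * novelty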
In one embodiment, the protocol engine identifies an interval of time between the at least some of the plurality of packets in the recorded session (step 2706). In this embodiment, the protocol engine modifies the interval of time responsive to the determined measure of complexity (step 2708). In an embodiment where at least some of the plurality of packets in the recorded session have content representing display data associated with a high measure of complexity, the protocol engine increases the interval of time between the packets to allow the user of the rendering increased time to comprehend the rendered display data. In another embodiment where at least some of the plurality of packets in the recorded session have content representing display data associated with a low measure of complexity, the protocol engine decreases the interval of time between the packets to reflect the decreased amount of time the user requires to comprehend the rendered display data. In one embodiment, a user requires a different amount of time between the rendered contents of packets than the amount the protocol engine provides. In this embodiment, the user modifies the interval of time to reflect the amount of time the user requires to comprehend the rendered display data. In some embodiments, the protocol engine also identifies a time interval between the at least some of the plurality of packets and other packets in the plurality of packets, modifying the interval of time identified between those sets of packets.
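A minimal sketch of steps 2706 and 2708: scale the identified interval by the normalized complexity measure, with a reviewer-supplied factor standing in for the user modulation described above. The scaling constants are assumptions.

    def modify_interval(base_interval: float, complexity: float,
                        user_scale: float = 1.0) -> float:
        """Stretch the inter-packet interval for complex display data and
        shrink it for simple data; complexity is normalized to [0, 1] and a
        value of 0.5 leaves the interval unchanged."""
        return base_interval * (0.25 + 1.5 * complexity) * user_scale

For example, modify_interval(0.04, 0.9) stretches a 40-millisecond interval to 64 milliseconds, while modify_interval(0.04, 0.1) compresses it to 16 milliseconds.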
In some embodiments, the protocol engine identifies a first marker associated with a packet in the recorded session. In one embodiment, the packet comprises the marker. In another embodiment, the recorded session comprises the marker.
In one embodiment, a user of the rendering of the display data defines the marker. In another embodiment, the protocol engine defines the marker. In embodiments where the protocol engine identifies a marker, the protocol engine modifies the interval of time responsive to the first marker. In one embodiment, the protocol engine increases the interval of time providing the user of the rendering of the display data additional time for comprehending the contents of the packet associated with the first marker. In other embodiments, the protocol engine identifies a second marker in a second packet. In this embodiment, the protocol engine modifies the interval of time responsive to the distance between the first marker and the second marker. In this embodiment, the protocol engine provides increased time for comprehension of display data represented by contents of packets marked and decreased time for comprehension of data represented by contents of unmarked packets. In one embodiment, a user defines markers for display data of interest to the user and the protocol engine renders additional time for the display data of interest to the user and decreases time of rendering for display data not of interest to the user, as determined by the markers.
In one embodiment, the protocol engine identifies a first marker in the at least some of the plurality of packets in the recorded session, said marker indicating an initial packet in the at least some of the plurality of packets in the recorded session. The protocol engine modifies the interval of time responsive to the first marker. The protocol engine identifies a second marker in a second packet in the at least some of the plurality of packets in the recorded session, said second marker indicating a final packet in the at least some of the plurality of packets in the recorded session, and modifies the interval of time responsive to the interval of time between the first marker and the second marker.
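As an illustrative sketch, the intervals between the initial and final marked packets could be stretched while unmarked spans are compressed; the stretch and shrink factors are assumptions.

    from typing import List

    def apply_marker_span(intervals: List[float], first: int, last: int,
                          stretch: float = 2.0, shrink: float = 0.5) -> List[float]:
        """A first marker flags the initial packet of a span of interest and a
        second marker its final packet; intervals inside the span gain
        comprehension time while intervals outside it lose time."""
        return [iv * (stretch if first <= i <= last else shrink)
                for i, iv in enumerate(intervals)]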
In one embodiment, the protocol engine stores the modified interval of time in a playback data structure (step 2710) and the recorded stream is rendered responsive to the contents of the playback data structure (step 2712). In one embodiment, the protocol engine also renders the recorded stream responsive to the playback data structure instructions regarding modified time intervals. In another embodiment, a separate foreground protocol engine renders the recorded stream.
In some embodiments, a determination is made that recorded interaction with an application requires increased time for rendering, to provide a user of the rendering increased time for comprehension of the rendered display data. In some of these embodiments, the determination is made that the application requiring increased time comprises a more important application than an application not requiring the increased time. In one of these embodiments, the user makes the determination. In another of these embodiments, a policy makes the determination. In still another of these embodiments, the protocol engine comprises a definition of applications that require increased time.
Referring now to FIG. 28, a flow diagram depicts one embodiment of the steps taken for automatic time-warped playback responsive to an identified application in rendering a recorded session.
In one embodiment, a protocol engine receives the recorded session (step 2802). In this embodiment, the protocol engine also identifies a first packet having a content representing a window having focus, said window indicating an application (step 2804). In one embodiment, the contents of the packet representing a window having focus include window notification messages indicating a change in input focus. In one embodiment, a time interval is identified between a second packet whose contents render prior to the rendering of the content of the first packet and a third packet whose contents render after the rendering of the content of the first packet (step 2806). In this embodiment, the protocol engine identifies a packet whose contents render prior to the rendering of content representing an application window having focus, a packet whose contents represent the application window having focus, and a packet whose contents represent the application window no longer having focus.
In some embodiments, the protocol engine modifies the time interval preceding the application having focus. In other embodiments, the protocol engine modifies the time interval following the application having focus. In one embodiment, the protocol engine then determines the interval of time in which the application window has focus and modifies that time interval responsive to the type of application. In one embodiment, the protocol engine increases the identified time interval. In this embodiment, the protocol engine provides the user of the rendering an increased amount of time to review the application. In another embodiment, the protocol engine decreases the identified time interval. In this embodiment, the protocol engine provides the user of the rendering a decreased amount of time to review the application, reflecting the decreased amount of interest in the application.
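A sketch of this focus-sensitive modification, assuming each packet is reduced to an (interval, focused application) pair and a hypothetical policy set names the applications of interest:

    from typing import List, Optional, Tuple

    IMPORTANT_APPS = {"accounting_app"}     # hypothetical policy list

    def adjust_focus_intervals(stream: List[Tuple[float, Optional[str]]]) -> List[float]:
        """Grow intervals while an application of interest holds focus, to give
        the reviewer more time, and shrink intervals otherwise."""
        return [interval * (1.5 if app in IMPORTANT_APPS else 0.5)
                for interval, app in stream]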
In one embodiment, the protocol engine renders at least one packet in the recorded stream responsive to the modification. In one embodiment, the protocol engine renders the contents of the at least one packet in the recorded stream to a buffer. In one embodiment, rendering to a buffer does not render the contents of the packet in a perceptible manner. In another embodiment, the protocol engine renders the contents of the at least one packet in the recorded stream to a buffer and in a perceptible manner. In some embodiments, the protocol engine indicates the modified time interval in a playback data structure and a separate protocol engine renders the recorded session responsive to the information stored in the playback data structure.
Referring now to FIG. 29, a block diagram depicts one embodiment of a system for rendering a recorded session, the system comprising a protocol engine 2902 that receives a recorded stream 2910 and generates a playback data structure 2904.
In one embodiment, the protocol engine 2902 comprises a background protocol engine and a foreground protocol engine. In this embodiment, the background protocol engine receives the recorded stream 2910 and generates the playback data structure 2904. In this embodiment, the foreground protocol engine receives the recorded stream 2910 and renders at least one packet in the recorded stream responsive to the generated playback data structure 2904. In one embodiment, the background protocol engine and the foreground protocol engine reside on the same device. In another embodiment, the background protocol engine resides on a first device and the foreground protocol engine resides on a second device.
In another embodiment, the system comprises a single protocol engine 2902 generating the playback data structure 2904 and rendering at least one packet in the recorded stream responsive to the generated playback data structure 2904.
In one embodiment, the protocol engine 2902 stores in the playback data structure at least one instruction for rendering the recorded session. In one embodiment, the instruction comprises a modification of an identified time interval for rendering the contents of a packet in the recorded session. In another embodiment, the protocol engine stores metadata in the playback data structure. In this embodiment, the metadata comprises higher order instructions for rendering the contents of the packet.
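For illustration, a playback data structure of this kind might pair per-packet rendering instructions with a metadata map of higher-order hints; the field names below are assumptions rather than the specification's layout.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class RenderInstruction:
        packet_index: int
        destination: str                    # "screen" or "buffer"
        interval: Optional[float] = None    # modified time interval, if any

    @dataclass
    class PlaybackDataStructure:
        instructions: List[RenderInstruction] = field(default_factory=list)
        metadata: Dict[str, str] = field(default_factory=dict)  # higher-order rendering hints

    def destination_for(pds: PlaybackDataStructure, index: int) -> str:
        """How a foreground engine might consult the structure for each packet."""
        for ins in pds.instructions:
            if ins.packet_index == index:
                return ins.destination
        return "screen"                     # default: render perceptibly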
In one embodiment, the protocol engine renders the contents of at least one packet in the recorded session responsive to contents of a playback data structure. In one embodiment, the protocol engine renders the at least one packet in the recorded session in a perceptible manner and to a buffer. In another embodiment, the protocol engine renders the at least one packet in the recorded session to a buffer.
In some embodiments, the rendered contents of the packets provide a streamlined regeneration of the original display data. In other embodiments, the rendered contents of the packets provide a customized version of the display data. In one embodiment, the determination to render the contents of the packet in a perceptible manner is responsive to a policy or user request. These embodiments provide users with control over the rendering of the recorded session.
The present invention may be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture may be a floppy disk, a hard disk, a compact disc, a digital versatile disc, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs may be implemented in any programming language. Some examples of languages that can be used include C, C++, C#, and JAVA. The software programs may be stored on or in one or more articles of manufacture as object code.
While the invention has been shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the following claims.
This application is a divisional of co-pending U.S. application entitled “System and Methods for Automatic Time-Warped Playback in Rendering a Recorded Computer Session,” U.S. application Ser. No. 11/036,840, filed on Jan. 14, 2005, which is incorporated by reference in its entirety.