Opportunistic block transmission with time constraints

Information

  • Patent Grant
  • Patent Number
    11,121,928
  • Date Filed
    Tuesday, March 10, 2020
  • Date Issued
    Tuesday, September 14, 2021
Abstract
A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.
Description
BACKGROUND

In requesting and receiving blocks of a stream enabled application, some blocks may be predicted as needed following other blocks. When blocks are predicted, there may be uncertainty as to how many predicted blocks to transmit. This may relate to an amount of data to be transmitted. If too many blocks are sent then a delay perceived by a user is increased, and/or network bandwidth is wasted, which may be expensive to a streaming service provider. If too few blocks are sent then bandwidth of a network connection may be underutilized. If a system refrains from transmitting blocks until a user actually requests them then the system must transmit the requested blocks while the user waits, expending the user's valuable time. In many cases users may desire to eliminate such waiting time.


The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.


SUMMARY

The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools, and methods that are meant to be exemplary and illustrative, not limiting in scope. In various embodiments, one or more of the above described problems have been reduced or eliminated, while other embodiments are directed to other improvements.


A technique for determining a data window size allows a set of predicted blocks to be transmitted using surplus bandwidth. Advantageously, predicted blocks are transmitted to a streaming playback device before the device needs the blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed without having to wait for their transmission, limiting user delays.


A system based on the technique may include a streaming playback device and a streaming server. The streaming server may provide the streaming playback device blocks that it will need prior to the blocks actually being requested. The streaming playback device may cache necessary blocks and then use them when needed. When a request for blocks includes some but not all of the blocks already in the cache, a caching system may request only those blocks that are not found in the cache.





BRIEF DESCRIPTION


FIG. 1 depicts a diagram of an example of a system for streaming software.



FIG. 2 depicts a diagram of an example of a system for efficiently transmitting blocks.



FIG. 3 depicts a flowchart of an example of a method for efficiently transmitting blocks.



FIG. 4 depicts a diagram of an example of a streaming playback device receiving blocks.



FIG. 5 depicts a flowchart of an example of a method for receiving blocks.



FIG. 6 depicts a diagram of an example of a streaming server transmitting blocks.



FIG. 7 depicts a flowchart of an example of a method for transmitting blocks.



FIG. 8 depicts a diagram of an example of a logical expression of a block probability table and a diagram of an example of a partially filled response buffer.



FIG. 9 depicts a diagram of an example of a device for streaming software.





DETAILED DESCRIPTION

In the following description, several specific details are presented to provide a thorough understanding. One skilled in the relevant art will recognize, however, that the concepts and techniques disclosed herein can be practiced without one or more of the specific details, or in combination with other components, etc. In other instances, well-known implementations or operations are not shown or described in detail to avoid obscuring aspects of various examples disclosed herein.



FIG. 1 depicts a diagram 100 of an example of a system for streaming software. FIG. 1 includes software provider 102, stream enabler 104, and opportunistic streaming software system 106.


In the example of FIG. 1, software provider 102 supplies a software application. The software application may be provided as deliverables such as data, executable code, and libraries. The application may be provided by way of CD-ROM, DVD-ROM, download over a network, from an input/output (I/O) device, or via any known or convenient mechanism.


Resources, e.g. data, executable code, may be included in the deliverables supplied by the software provider 102. An application executing in a virtual execution environment may request resources, and it may be necessary to transmit blocks including resources to a streaming playback device for satisfaction of the resource request. A user may be required to wait while blocks including resources are transmitted to a streaming playback device.


The software application may have an interactivity threshold. The interactivity threshold may be an amount of time that a user is willing to wait for a system to access resources. In a non-limiting example, the interactivity threshold may be approximately 1/10 of a second because a delay of 1/10 of a second is noticeable to a human being, and delays of greater than 1/10 of a second may decrease user satisfaction with the software. The interactivity threshold may be supplied by the software provider 102, may be determined through trial and error, may be determined by systematically executing the software application many times and averaging user responses, may be set to a default value (e.g., 1/10 of a second), or may be acquired by any method known or convenient.


In the example of FIG. 1, stream enabler 104 prepares the non-stream enabled software application provided by software provider 102 to be streamed. The stream enabler 104 breaks the non-stream enabled software application into blocks of an optimal block size. The optimal block size may be small so as to allow for aggregation of blocks with fine granularity. The optimal block size may be narrowed to a range, e.g., 512 bytes to 32 KB. In some cases the optimal block size may be larger or smaller than the specified range and may be any size known or convenient. A deliverable is broken up into many pieces. Each of the pieces may be of the optimal block size, or the pieces may have a variable block size, all of which are the optimal block size or smaller. In some cases, it may even be desirable to increase block size to greater than the optimal block size. In a non-limiting example, the optimal block size is set to 4 KB and the non-stream enabled software application is broken up into blocks of a stream-enabled application each up to 4 KB in size, most being of 4 KB. An optimal size of 4 KB may have certain advantages because that is also the page size of many computer systems. This can improve memory allocation efficiency, improve simplicity, and/or have other advantages.
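The block-splitting step described above can be sketched as follows. This is a minimal illustration, not the stream enabler's actual implementation; the function name and the fixed 4 KB block size are assumptions for the example.

```python
def split_into_blocks(data: bytes, block_size: int = 4096) -> list[bytes]:
    """Break a deliverable into blocks of at most block_size bytes.

    Every block is block_size bytes except possibly the last, which
    holds whatever remainder is left over.
    """
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]


# A 10,000-byte deliverable yields two full 4 KB blocks plus a 1,808-byte tail.
blocks = split_into_blocks(b"\x00" * 10000)
```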


In the example of FIG. 1, opportunistic streaming software system 106 may include a streaming software server and a streaming software playback device. A server may be hardware, software, or a combination of hardware, software, and firmware. The server may include blocks of the stream-enabled application. The server may be coupled to the streaming software playback device via a network. The network may have throughput and latency characteristics. The throughput and latency characteristics may be dynamically changing.


Alternatively, in lieu of or in addition to a server, an I/O device that includes at least some blocks of the stream enabled application could be used. In this case, the I/O interface may have relevant throughput and latency characteristics.



FIG. 2 depicts a diagram 200 of an example of a system for opportunistically transmitting blocks. FIG. 2 includes deliverables 202, stream enabler 204, and streaming software system 205.


In the example of FIG. 2, deliverables 202 may be deliverables of a non-stream enabled software application. The deliverables 202 may be the deliverables as discussed in reference to FIG. 1.


In the example of FIG. 2, stream enabler 204 may be hardware, software, or a combination of hardware and software. The stream enabler may be capable of taking deliverables of a non-stream enabled software application and breaking the deliverables up into blocks as discussed in reference to FIG. 1.


In the example of FIG. 2, streaming software system 205 includes streaming software playback device 206, and streaming server 208. In the example of FIG. 2, streaming playback device 206 includes network data engine 210, block granularity caching engine 212, and virtual environment 224.


In the example of FIG. 2, the block granularity caching engine 212 receives requested blocks as well as predictively streamed blocks from streaming server 208. Predictively streamed blocks may be blocks that were not requested, but were instead predicted to be needed by a stream enabled application executing in the virtual environment 224. Predictively streamed blocks may be stored in a local cache until needed. When predictively streamed blocks are needed resources included in the predictively streamed blocks may be accessed without requesting the blocks from streaming server 208.


In the example of FIG. 2, network data engine 210 may transmit and receive information associated with network latency and throughput to streaming server 208. It may be necessary for the network data engine 210 to receive a packet of information that includes a header having a timestamp, but no data. As will be discussed in reference to FIG. 8, packets may be transmitted and received in order to determine and update network latency information. Similarly the network data engine 210 may transmit data to the streaming server 208 to determine throughput.


In the example of FIG. 2, the virtual environment 224 may allow a stream-enabled application to execute. The virtualized execution environment 224 is a virtualized operating system enabling a streamed application to execute on a streaming playback device. A virtualized execution environment is discussed in U.S. patent application Ser. No. 09/098,095 entitled “Method and Apparatus to Allow Remotely Located Computer Programs and/or Data to be Accessed on a Local Computer in a Secure, Time-Limited Manner, with Persistent Caching,” which is incorporated by reference.


In the example of FIG. 2, streaming server 208 includes blocks 230, predictive block aggregator 232, and data window engine 234.


In the example of FIG. 2, the blocks 230 are blocks of a stream-enabled application. Blocks 230 may be included in a local storage, a database, or may be included in a non-local storage or a non-local database. A database may be a relational database, a file, or any known or convenient manner of storing data.


In the example of FIG. 2, the predictive block aggregator 232 may prioritize a number of blocks in order of likelihood of use by the streaming playback device 206. The predictive block aggregator 232 may add a number of blocks to a buffer in order of priority. The priority of the blocks may be determined as discussed in reference to FIG. 7. Blocks may be added to the buffer until the buffer has reached a data window size.


In the example of FIG. 2, the data window engine 234 sets a data window size by considering the interactivity threshold of the stream-enabled application, the network latency, and the network throughput. Network latency and network throughput information may be considered on a continuously updated basis, may be infrequently updated, may be set only once, or may be disregarded. The interactivity threshold may be used in reference to the latency and throughput, or may be used independent of that information. The data window size may be used to limit the size of the buffer filled with predictively streamed blocks. In a non-limiting example, the predictively streamed blocks are limited to 10 k and blocks are set to 512 bytes. 20 predicted blocks are placed in the buffer, and the blocks are transmitted.
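The data window computation can be sketched as follows. The formula is an assumption consistent with the description: the bytes transmittable within the interactivity threshold, after subtracting round-trip latency. The function name and parameters are hypothetical.

```python
def data_window_size(interactivity_threshold_s: float,
                     latency_s: float,
                     throughput_bps: float) -> int:
    """Bytes that can arrive within the user's tolerated wait.

    Time spent on network latency is unavailable for data transfer,
    so it is subtracted from the interactivity threshold before
    multiplying by throughput (bytes per second).
    """
    usable_time = max(0.0, interactivity_threshold_s - latency_s)
    return int(usable_time * throughput_bps)


# 0.1 s threshold, 20 ms latency, 125,000 B/s (1 Mbit/s) -> 10,000-byte window,
# which at a 512-byte block size admits 20 predictively streamed blocks.
size = data_window_size(0.1, 0.02, 125_000)
```

A window of zero (latency at or above the threshold) would mean no predicted blocks are worth sending; only the requested blocks go out.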


In the example of FIG. 2, in operation, the deliverables 202 are received by the stream enabler 204 and broken up into blocks and provided to streaming server 208 and stored in blocks 230. The network data engine 210 sends a request for a block to the streaming server 208 and the data window engine 234 calculates a data window. The predictive block aggregator 232 prioritizes the blocks 230 and places a number of the blocks 230 into a buffer up to the data window size set by the data window engine 234. The blocks in the buffer are transmitted to the streaming playback device 206 as predictively streamed blocks. The block granularity caching engine 212 stores the predictively streamed blocks in to a local cache. An application executing in the virtual environment 224 requests a resource, and the resource is at least partially satisfied from the blocks in the predictively streamed blocks in the cache.



FIG. 3 depicts a flowchart 300 of an example of a method for opportunistically transmitting blocks. The method is organized as a sequence of modules in the flowchart 300. However, it should be understood that these modules, and modules associated with other methods described herein, may be reordered for parallel execution or into different sequences of modules.


In the example of FIG. 3, the flowchart 300 starts at module 302 with executing a process associated with a stream-enabled application that provides a first request for resources. The resource may be needed for execution of a stream enabled application within a virtualized execution environment. The resources may be included in blocks of a stream enabled application stored locally or remotely on a streaming system.


In the example of FIG. 3, the flowchart 300 continues to module 304 with receiving, within a data window, one or more blocks including resources used to satisfy the first request for resources as well as one or more predictively streamed blocks. The data window may limit the number of predictively streamed blocks that may be transmitted. The resources in the one or more blocks may be used to satisfy the first request for resources. The predictively streamed blocks may be associated with a probability of being requested by the stream enabled application in the future.


In the example of FIG. 3, the flowchart 300 continues to module 306 with storing one or more predictively streamed blocks in a cache. The blocks may be blocks that include resources that have a high likelihood of being used by the stream enabled application for a subsequent request for resources. A streaming system may predictively stream the blocks in advance of the stream enabled application requesting resources included in the blocks.


In the example of FIG. 3, the flowchart 300 continues to module 308 with providing a second request for resources. The second request for resources may be associated with one or more blocks that were predictively streamed in advance of the second request for resources.


In the example of FIG. 3, the flowchart 300 continues to module 310 with checking the cache to find the one or more predictively streamed blocks to satisfy the block request. If blocks including the resource were predictively streamed in advance, the resource may be found in blocks in the cache. However, blocks including the resource may not have been predictively streamed, or may have gone unused for a sufficient amount of time for the blocks to have been replaced with other blocks that have been more recently predictively streamed.


In the example of FIG. 3, the flowchart 300 continues to module 312 with at least partially satisfying the request for the resource using the one or more predictively streamed blocks in the cache. In some cases the resource may be spread across more than one block. One or more blocks including the resource may not be in the cache when needed. Blocks not found in the cache must be requested so that all necessary blocks are available and the resource may be provided. In the case that the resource is entirely included in blocks in the cache, the entire resource is produced from the blocks in the cache, and the resource request is satisfied. Having satisfied the request for the resource, the flowchart terminates.



FIG. 4 depicts a diagram 400 of an example of a streaming playback device receiving blocks. FIG. 4 includes streaming playback device 402, streaming system 412, and blocks 414.


In the example of FIG. 4, the streaming playback device 402 includes virtual environment 404, cache 406, block granularity caching engine 408, and interface 410.


In the example of FIG. 4, the virtual environment 404 may be a virtualized operating system enabling a streamed application to execute on a computing device as discussed in reference to FIG. 2.


In the example of FIG. 4, the cache 406 may store one or more blocks, the blocks including one or more resources. Blocks may be stored in a last-in-first-out (LIFO) cache, an associative cache, or any caching system known or convenient. Blocks that are used may be maintained in the cache 406 while blocks that are not used may be replaced by more recent predictively streamed blocks.


In the example of FIG. 4, the block granularity caching engine 408 may receive predicted blocks as they are transmitted to the streaming playback device 402. The blocks may be stored in the cache 406 by the block granularity caching engine 408. The block granularity caching engine 408 may intercept block requests from the virtual execution environment 404, and inspect the cache 406 to determine if some, all or none of the blocks are found in the cache 406. If some of the blocks are found in the cache 406, then the block granularity caching engine 408 may create a modified block request including a request for blocks that were requested by the virtual execution environment 404, but not found in the cache 406. If none of the blocks are found in the cache 406, the block granularity caching engine 408 may create a block request for all blocks requested by the virtual execution environment 404. If all of the blocks are found in the cache, the block granularity caching engine 408 may provide all blocks requested directly to the virtual execution environment 404, and may refrain from transmitting any request for blocks.
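The three cases the block granularity caching engine distinguishes (all, some, or none of the requested blocks in cache) can be sketched as follows. The helper name and dict-based cache are hypothetical; the patent does not prescribe this interface.

```python
def modified_block_request(requested: list[int],
                           cache: dict[int, bytes]) -> tuple[dict[int, bytes], list[int]]:
    """Split a block request into cache hits and blocks still needed.

    Hits are served locally; only the missing block IDs go into the
    modified request. An empty `missing` list means no request is sent
    at all; an empty `hits` dict means the full request is forwarded.
    """
    hits = {b: cache[b] for b in requested if b in cache}
    missing = [b for b in requested if b not in cache]
    return hits, missing


cache = {7: b"block7-data", 8: b"block8-data"}
hits, missing = modified_block_request([7, 8, 9], cache)
# Blocks 7 and 8 come from the cache; only block 9 is requested from the server.
```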


In the example of FIG. 4, the interface 410 may provide block requests to the streaming system 412. The interface 410 may receive blocks from the streaming system 412 and provide the blocks to the block granularity caching engine 408. An interface should be afforded a broad enough interpretation to include a bus within a single computing system, a wireless or wired connection between computing systems, or any known or convenient manner of transmitting blocks of a stream enabled application. The interface could be an input/output device, for reading a fixed media such as a CD-ROM, DVD-ROM, or other computer readable media.


In the example of FIG. 4, the streaming system 412 may be a separate computing device from streaming playback device 402, may be a fixed media from which a stream-enabled application is read, or may be any source of a stream enabled application. The streaming system 412 may include the blocks 414, or may be coupled to a computing device including the blocks 414. In the case of a fixed media, the streaming playback device 402 reads blocks from the input output device such as a CD-ROM, DVD-ROM, or other known or convenient computer readable medium.


In the example of FIG. 4, the blocks 414 may include blocks of a stream enabled application. Blocks 414 may be a file, a group of files, a database, a data store, or any manner known or convenient of storing a stream enabled application.


In the example of FIG. 4, in operation, a stream enabled application executing within the virtual execution environment 404 may request, for example, a resource in terms of a file offset and length. The virtual execution environment 404 may interpret the resource request as block 7. The block granularity caching engine 408 may then inspect the cache 406 to determine whether the requested block 7 is in the cache or not. If the requested block 7 is in the cache, then the block granularity caching engine 408 may provide the requested block to the virtual execution environment 404. If the requested block is not in the cache 406 then the block granularity caching engine 408 may then request the block 7 from the streaming system 412. The streaming system 412 may reply with block 7 and perhaps, by way of example, but not limitation, additional predictively streamed blocks 8 and 9. The additional blocks 8 and 9 may have a high likelihood of being needed after block 7 is requested. The block granularity caching engine 408 may provide the requested block 7 to the virtual execution environment 404, and may cache the predictively streamed blocks 8 and 9, as well as the requested block 7. The resources in the requested block 7 may be provided by the virtual execution environment 404 to the stream enabled application in terms of a file, offset and length.



FIG. 5 depicts a flowchart 500 of an example of a method for receiving blocks. The method is organized as a sequence of modules in the flowchart 500. However, it should be understood that these modules, and modules associated with other methods described herein, may be reordered for parallel execution or into different sequences of modules.


In the example of FIG. 5, the flowchart 500 starts at module 502 with requesting a resource of a stream enabled application. A stream enabled application may request the resource in terms of a file, offset and length.


In the example of FIG. 5, the flowchart 500 continues to module 504 with translating the request for the resource into a block ID of a block including the resource. The resource may be included in a block, or may be spread across one or more blocks. The translation may thus include a plurality of blocks or merely a single block.
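The (offset, length)-to-block-ID translation can be sketched as follows, assuming fixed-size blocks. The function name and the 4 KB default are illustrative, not taken from the patent.

```python
def resource_to_block_ids(offset: int, length: int,
                          block_size: int = 4096) -> list[int]:
    """Map a file-relative (offset, length) request to covering block IDs.

    The request spans bytes [offset, offset + length), so the first and
    last covering blocks are found by integer division of the first and
    last byte positions by the block size.
    """
    first = offset // block_size
    last = (offset + length - 1) // block_size
    return list(range(first, last + 1))


# A 6,000-byte read starting at offset 3,000 touches bytes 3,000-8,999,
# which span blocks 0, 1, and 2 of a 4 KB-block file.
ids = resource_to_block_ids(3000, 6000)
```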


In the example of FIG. 5, the flowchart 500 continues to module 506 with inspecting a cache for the block. It may be that the block has been previously received, and is stored in cache. The block may have been previously requested, or may have been predictively streamed because it was likely that the resource included in the block would be requested. If a block is present, then the block is locked, or otherwise prevented from being deleted until the resource request is responded to. A locked block may be unlocked after the resource request is responded to.


In the example of FIG. 5, the flowchart 500 continues to module 508 with transmitting a request for the block. If the block was not found in the cache it may be necessary to request the block.


In the example of FIG. 5, the flowchart 500 continues to module 510 with receiving the block as well as additional predictively streamed blocks. In the case that one or more blocks are likely to be requested after the block, predictively streamed blocks may be transmitted along with the block, taking into consideration, a window for transmission based on the amount of time that a user may be willing to wait as well as the amount of data that may be transmitted in the time.


In the example of FIG. 5, the flowchart 500 continues to module 512 with caching the additional predictively streamed blocks. These additional predictively streamed blocks may be stored for future use. Blocks in the cache that are not needed may be replaced by predictively streamed blocks. Any locked blocks may not be replaced by additional predictively streamed blocks. Having cached the additional predictively streamed blocks, the flowchart terminates.



FIG. 6 depicts a diagram 600 of an example of a streaming server transmitting blocks. FIG. 6 includes streaming server 602, and streaming playback device 616.


In the example of FIG. 6, streaming server 602 includes predictor 604, probability data 606, blocks 608, output buffer 610, predicted block aggregation engine 612, data window engine 613, and interface 614.


In the example of FIG. 6, the predictor 604 determines a likelihood of blocks being requested. The predictor creates probability data. The predictor is discussed in more depth in U.S. patent application Ser. No. 10/988,014 entitled “System and Method for Predictive Streaming” by Jeffrey de Vries, incorporated herein by reference.


In the example of FIG. 6, the probability data 606 includes probabilities of blocks being requested after blocks that have already been requested, such as is discussed in reference to FIG. 7. The probability data may be expressed as a logical block probability table, as a data store of probabilities, or in any manner known or convenient.


In the example of FIG. 6, the output buffer 610 may be any computer readable medium capable of storing data. The output buffer 610 may be volatile, or non-volatile. In a non-limiting example, the output buffer may include random access memory. The output buffer 610 may store blocks prior to transmission.


In the example of FIG. 6, the predicted block aggregation engine 612 includes functionality to fill an output buffer with blocks from blocks 608 based on the block probability data 606. The predicted block aggregation engine 612 may add blocks to the output buffer up to a limit set by a data window size as discussed in reference to FIG. 7. The data window size may be limited by user responsiveness requirements and a maximum throughput between interface 614 and streaming playback device 616. An engine normally includes a processor and memory including instructions for execution by the processor.


Notably, two examples of pseudo code that could be implemented to fill the output buffer 610 with blocks are provided below. However, neither the first example nor the second is limiting. Any known or convenient manner of filling the output buffer 610 with blocks may be used.


In a non-limiting example, the predicted block aggregation engine 612 may implement a priority queue to fill the output buffer with blocks. Consider a priority queue PQ containing tuples (pq, bq), each holding a probability (pq) and a block number (bq). The priority queue may be ordered by probability, so a minimum-probability tuple always sits at the top of the queue. Initially PQ is empty. Q may be a working queue of tuples (pw, bw), each containing a probability (pw) and a block number (bw). PROB may be the indexed probability table 606 storing a probability of needing a predicted block having seen one or more previous blocks. Each probability may be in the range 0-1. Let N be the maximum number of blocks that can be put into the output buffer, i.e., N = data window size / block size.


In continuing the non-limiting example, the following pseudo-code could be implemented to fill an output buffer with predicted blocks up to a maximum data window:

  PUSH (1.0, requested block) onto working queue Q
  WHILE working queue Q is not empty DO
    POP (pw, bw) with probability pw and block number bw from working queue Q
    IF (PQ.size = N) (i.e. PQ already has N entries) DO
      IF (pw <= PQ.min (i.e. the min probability at the top of the PQ)) DO
        CONTINUE back at the WHILE (i.e. skip this block)
      END IF
      IF (PQ already contains an entry for block number bw) DO
        IF (the probability in the entry for block bw in PQ >= pw) DO
          CONTINUE back at the WHILE (i.e. keep existing entry)
        ELSE
          REMOVE the entry for block bw from PQ
        END IF
      END IF
    END IF
    PUSH (pw, bw) onto priority queue PQ (updating the min entry as necessary)
    FOR EACH predicted block bp for which PROB[bw][bp] > 0 DO
      PUSH (pw * PROB[bw][bp], bp) onto working queue Q
    END FOR LOOP
  END WHILE LOOP
  FOR EACH block in the priority queue DO
    read the block data and put the block number and the block data into the output buffer
  END FOR LOOP
  SEND output buffer containing requested block + top N-1 blocks based on probability
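The priority-queue pseudo-code above can be sketched in Python. This is a hedged approximation, not the patent's implementation: `heapq` supplies the min-heap, `prob` stands in for the PROB table as a nested dict, and the duplicate-entry check is applied whether or not the heap is full (the pseudo-code only reaches it when full, with the same net effect).

```python
import heapq


def select_blocks(requested: int,
                  prob: dict[int, dict[int, float]],
                  n: int) -> list[int]:
    """Pick the requested block plus up to n-1 most probable successors.

    prob[a][b] is the probability that block b is needed after block a.
    Returns block numbers in decreasing order of kept probability.
    """
    pq: list[tuple[float, int]] = []  # min-heap: worst kept entry on top
    best: dict[int, float] = {}       # probability currently kept per block
    work = [(1.0, requested)]         # working queue Q
    while work:
        pw, bw = work.pop()
        if len(pq) == n and pw <= pq[0][0]:
            continue                      # cannot beat the current minimum
        if bw in best:
            if best[bw] >= pw:
                continue                  # keep the existing, better entry
            pq.remove((best[bw], bw))     # supersede with higher probability
            heapq.heapify(pq)
        elif len(pq) == n:
            _, evicted = heapq.heappop(pq)  # evict the minimum entry
            del best[evicted]
        heapq.heappush(pq, (pw, bw))
        best[bw] = pw
        for bp, p in prob.get(bw, {}).items():
            if p > 0:                     # chain probabilities multiplicatively
                work.append((pw * p, bp))
    return sorted(best, key=best.get, reverse=True)
```

With an illustrative table where block 4 predicts blocks 7, 15, and 24, and block 7 in turn predicts 15, a window of three blocks keeps the requested block 4 plus the two strongest predictions.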









Notably, one could use as many stages of look ahead as is desirable. One could have PROB[seen 1][seen 2] . . . [seen n][predicted block]=the probability of seeing the predicted block given the sequence of previously seen blocks seen 1 . . . seen n. The additional stages of look ahead may provide better focused predictions of blocks. In another non-limiting example, the following pseudo code could be implemented to fill the output buffer with blocks. Q is a working queue of (probability, block number) tuples, sorted in order of decreasing probability, initially empty. Let the probability table be the same as discussed above relative to the first example.

  PUSH (1.0, requested block) onto working queue Q
  WHILE working queue Q is not empty AND output buffer is not full DO
    POP (pw, bw) with probability pw and block number bw from working queue Q
    IF output buffer doesn't already contain block bw
      ADD block number bw and data for block bw to output buffer
    END IF
    FOR EACH predicted block bp for which PROB[bw][bp] > 0 DO
      IF output buffer doesn't already contain block bp
        INSERT (pw * PROB[bw][bp], bp), sorted by decreasing probability, into working queue Q
      END IF
    END FOR LOOP
  END WHILE LOOP
  SEND output buffer
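The second, simpler variant can be sketched in Python as follows. As an assumption for clarity, the sorted working queue is maintained with a re-sort before each pop rather than sorted insertion; the behavior matches the pseudo-code.

```python
def fill_output_buffer(requested: int,
                       prob: dict[int, dict[int, float]],
                       capacity: int) -> list[int]:
    """Expand blocks in decreasing-probability order until the buffer is full.

    prob[a][b] is the probability that block b is needed after block a.
    Returns block numbers in the order they were added to the buffer.
    """
    buffer: list[int] = []
    work = [(1.0, requested)]  # working queue Q of (probability, block)
    while work and len(buffer) < capacity:
        work.sort(key=lambda t: t[0], reverse=True)  # most probable first
        pw, bw = work.pop(0)
        if bw not in buffer:
            buffer.append(bw)
        for bp, p in prob.get(bw, {}).items():
            if p > 0 and bp not in buffer:
                work.append((pw * p, bp))
    return buffer
```

Unlike the priority-queue variant, this one commits each popped block to the buffer immediately, so it never revisits a decision; the buffer simply fills greedily in probability order.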









In the example of FIG. 6, the data window engine 613 sets a data window size by considering the interactivity threshold of the stream-enabled application, the network latency, and the network throughput. Network latency and network throughput information may be considered on a continuously updated basis, may be infrequently updated, may be set only once, or may be disregarded. The interactivity threshold may be used in reference to the latency and throughput, or may be used independent of that information. The data window size may be used to limit the size of the buffer filled with predictively streamed blocks.


In the example of FIG. 6, the interface 614 may be an interface as discussed in reference to FIG. 9. It should be appreciated that the interface may be interpreted broadly enough to include connections within a single computing system. Additionally, the interface may be any means of transmitting data, e.g., optical network, Ethernet, or any means known or convenient.


In the example of FIG. 6, the streaming playback device 616 may include a virtualized environment for a stream enabled application. Streaming playback device 616 may include a processor, non-volatile memory, and an interface. The streaming playback device 616 may include a virtual execution environment executing a stream enabled application that requests resources. A resource request may be translated into a block request.


In the example of FIG. 6, in operation, the streaming playback device 616 creates a request for a block. The request is provided to streaming server 602. The predictor 604 predicts a number of blocks that are likely to be requested after the block and stores the predictions in probability data 606. Predicted block aggregation engine 612 identifies the blocks in probability data 606 that are likely to be requested after the block, and adds zero or more blocks from the blocks 608 until the output buffer 610 has reached a data window size. Once the output buffer 610 has reached the data window size, the output buffer is provided, via the interface 614, to the streaming playback device 616.



FIG. 7 depicts a flowchart 700 of an example of a method for transmitting blocks. The method is organized as a sequence of modules in the flowchart 700. However, it should be understood that these modules, and modules associated with other methods described herein, may be reordered for parallel execution or into different sequences of modules.


In the example of FIG. 7, the flowchart 700 starts at module 702 with predicting one or more blocks that are associated with resources that will be requested by a stream-enabled application. As many stages of look ahead as is desirable may be used to provide predictions of future blocks. In a non-limiting example, given that the streaming playback device requested block 4, a predictor could predict that the next three blocks that will be requested by a streaming playback device will be, in order of probability, 7, 15, and 24.


In the example of FIG. 7, the flowchart 700 continues to module 704 with adding predicted blocks to an output buffer in order of priority until the output buffer has reached the data window size. The blocks may be added in order of priority, by likelihood of being requested, or in any known or convenient manner. In a non-limiting example, the blocks added may begin with the block requested, followed by the block most likely to be requested, until the maximum amount of data, e.g., 32 KB of blocks, has been added to the buffer.
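The priority fill of module 704 can be sketched as follows. This is a minimal sketch, not the patented implementation: it assumes blocks are byte strings, the data window size is given in bytes, and the block IDs and sizes are illustrative.

```python
def fill_output_buffer(requested_block_id, predictions, blocks, window_size):
    """Add the requested block, then predicted blocks in priority order,
    until adding another block would exceed the data window size."""
    buffer, used = [], 0
    for block_id in [requested_block_id] + list(predictions):
        data = blocks[block_id]
        if used + len(data) > window_size:
            break  # window full; remaining predicted blocks are not sent
        buffer.append((block_id, data))
        used += len(data)
    return buffer

# Illustrative: 1 KB blocks, a 3 KB window, predictions in priority order.
blocks = {4: b"\x00" * 1024, 7: b"\x00" * 1024,
          15: b"\x00" * 1024, 24: b"\x00" * 1024}
out = fill_output_buffer(4, [7, 15, 24], blocks, window_size=3 * 1024)
print([bid for bid, _ in out])  # -> [4, 7, 15]
```

Note that if there are no predictions, the buffer simply holds the requested block, mirroring the partially filled response buffer discussed with FIG. 8.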


In the example of FIG. 7, the flowchart 700 continues to module 706 with transmitting the blocks in the output buffer. The blocks in the buffer are provided to the streaming software client. Having transmitted the blocks in the output buffer, the flowchart terminates.



FIG. 8 depicts a diagram 800A of an example of a logical expression of a block probability table and a diagram 800B of an example of a partially filled response buffer.


In the example of FIG. 8, the diagram 800A includes probabilities of a streaming playback device requesting blocks. Each row of the table includes the probability of each block being needed after the row's block. The diagonal squares, (7,7) through (13,13), include no probabilities, as such entries would give the probability of a block being requested immediately after the block, itself, is requested. In the example of FIG. 8, for example, the probability of block 11 being requested after block 7 is 0.1, or 10%. In a non-limiting example, this might be because block 11 is only used by the streamed application when an error condition occurs, and that error condition only occurs about 10% of the time, e.g., upon a misspelled file name.
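A table such as diagram 800A can be represented as a mapping from each block to its likely successors. The following is a minimal sketch under that assumption; apart from the 0.1 entry discussed above, the block numbers and probabilities are illustrative, not taken from the figure.

```python
# Rows: requested block ID -> {successor block ID: probability}.
# The diagonal (a block following itself) is never stored, mirroring the
# empty squares (7,7) through (13,13) in the figure.
PROBABILITY_TABLE = {
    7: {11: 0.1, 15: 0.6, 24: 0.3},
    11: {12: 0.9},
}

def predicted_successors(block_id, table=PROBABILITY_TABLE):
    """Return successor block IDs sorted by descending probability."""
    row = table.get(block_id, {})
    return sorted(row, key=row.get, reverse=True)

print(predicted_successors(7))  # -> [15, 24, 11]
```

A lookup returning an empty list corresponds to the case, discussed below, in which no blocks are predicted to follow a requested block.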


In the example of FIG. 8, the diagram 800B includes the partially filled response buffer storing a requested block as well as a plurality of predicted blocks. It may not be necessary or possible to fill the response buffer completely with blocks. In a non-limiting example, if block 7 is requested, and no blocks are predicted to follow block 7, then it may not be possible to fill a 10-block-long output buffer because there are not enough blocks to send.


In determining a data window size, an interactive threshold may be used in the calculation; however, any known or convenient method of calculating a data window size may be used. The following non-limiting examples are provided for clarity.


In a non-limiting example of a method for calculating a data window size, MAX_WINDOW may be an upper limit on the largest amount of data that the response buffer may hold. Throughput may be a value representing the amount of data per unit of time that may be transmitted on a network connecting a streaming server with a streaming playback device. The data window size may thus be calculated: data window=MIN[MAX_WINDOW, throughput*interactive threshold].


In a non-limiting example, an interactive threshold is a limit on the amount of time that an individual may be willing to wait for a block. A latency may be a time delay between deciding to transmit a block and transmitting the first bit of the block. A throughput may be a value representing the amount of data per unit of time that may be transmitted on a network connecting a streaming server with a streaming playback device. The data window size may thus be calculated: data window=(interactive threshold−round trip latency)*throughput.


In a non-limiting example, MIN_WINDOW_SIZE may be a lower limit on the smallest amount of data that the response buffer may hold. An interactive threshold, a latency, and a throughput may be as described above. The data window size may thus be calculated: data window=MAX[MIN_WINDOW_SIZE, (interactive threshold−round trip latency)*throughput].
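The three non-limiting calculations above can be sketched together. The units here are illustrative assumptions: throughput in bytes per millisecond, times in milliseconds.

```python
def window_capped(throughput, interactive_threshold, max_window):
    # data window = MIN[MAX_WINDOW, throughput * interactive threshold]
    return min(max_window, throughput * interactive_threshold)

def window_latency_adjusted(throughput, interactive_threshold,
                            round_trip_latency):
    # data window = (interactive threshold - round trip latency) * throughput
    return (interactive_threshold - round_trip_latency) * throughput

def window_with_floor(throughput, interactive_threshold, round_trip_latency,
                      min_window_size):
    # data window = MAX[MIN_WINDOW_SIZE, (threshold - latency) * throughput]
    return max(min_window_size,
               (interactive_threshold - round_trip_latency) * throughput)

# E.g., a 1000 bytes/ms link, 500 ms interactive threshold, 100 ms round trip:
print(window_capped(1000, 500, 300_000))          # -> 300000
print(window_latency_adjusted(1000, 500, 100))    # -> 400000
print(window_with_floor(1000, 500, 100, 32_768))  # -> 400000
```

On a much slower link the floor variant would return MIN_WINDOW_SIZE instead, ensuring some predicted blocks are always sent.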


Any manner of determining a latency known or convenient may be used. Latency may be important because the time available under an interactive threshold is reduced by subtracting the latency. Latency may be continuously recalculated throughout a streaming session so as to provide an accurate data window size.


Throughput may be dynamically determined by sending test buffers between a server and a client at the beginning of a session. The throughput may be calculated based on the time required to transmit the buffers and the data size of the buffers. The throughput may be run through a low-pass filter to obtain a continuously updated average throughput. If the network switches routes, the data window size may then adapt accordingly. Alternatively, any manner of determining a throughput known or convenient may be used.
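A minimal sketch of such dynamic throughput estimation, assuming an exponential low-pass filter over (bytes, seconds) samples from test buffers; the smoothing factor and sample values are illustrative, not prescribed by the text.

```python
def lowpass_throughput(samples, alpha=0.25):
    """Each sample is (bytes_sent, seconds_taken); returns a smoothed
    bytes-per-second estimate updated after every test buffer."""
    estimate = None
    for nbytes, seconds in samples:
        measured = nbytes / seconds
        # The first sample seeds the filter; later samples are blended in,
        # so a route change gradually shifts the estimate (and, in turn,
        # the data window size).
        estimate = measured if estimate is None else (
            alpha * measured + (1 - alpha) * estimate)
    return estimate

print(lowpass_throughput([(1_000_000, 1.0), (1_000_000, 1.0)]))  # -> 1000000.0
```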



FIG. 9 depicts a diagram 900 of an example of a device for streaming software. The computing system 900 may be a computing system that can be used as a client computing system, such as a wireless client or a workstation, or a server computing system. The computing system 900 includes a computer 902, and a display device 906. The computer 902 includes a processor 908, interface 910, memory 912, display controller 914, and non-volatile storage 916. The computer 902 may be coupled to or include display device 906.


The computer 902 interfaces to external systems through the interface 910, which may include a modem, network interface, CD-ROM drive, DVD-ROM drive, or any known or convenient interface. An interface may include one or more input-output devices. Interface 910 may include one or more interfaces. An interface may include a device for reading a fixed media. An interface may receive deliverables. An interface may transmit a stream-enabled application. It will be appreciated that the interface 910 can be considered to be part of the computing system 900 or a part of the computer 902. The interface 910 can be an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g. “direct PC”), or other interface for coupling a computing system to other computing systems.


The processor 908 may be, for example, a microprocessor such as an Intel Pentium microprocessor or Motorola PowerPC microprocessor. The memory 912 is coupled to the processor 908 by a bus 920. The memory 912 can be Dynamic Random Access Memory (DRAM) and can also include Static RAM (SRAM). The bus 920 couples the processor 908 to the memory 912, the non-volatile storage 916, and the display controller 914.


The non-volatile storage 916 is often a magnetic hard disk, an optical disk, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory 912 during execution of software in the computer 902. One of skill in the art will immediately recognize that the terms “machine-readable medium” or “computer-readable medium” includes any type of storage device that is accessible by the processor 908 and also encompasses a carrier wave that encodes a data signal.


The computing system 900 is one example of many possible computing systems which have different architectures. For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an I/O bus for the peripherals and one that directly connects the processor 908 and the memory 912 (often referred to as a memory bus). The buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.


Network computers are another type of computing system that can be used in conjunction with the teachings provided herein. Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 912 for execution by the processor 908. A Web TV system, which is known in the art, is also considered to be a computing system, but it may lack some of the features shown in FIG. 9, such as certain input or output devices. A typical computing system will usually include at least a processor, memory, and a bus coupling the memory to the processor.


In addition, the computing system 900 is controlled by operating system software which includes a file management system, such as a disk operating system, which is part of the operating system software. One example of operating system software with its associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage 916 and causes the processor 908 to execute the various acts required by the operating system to input and output data and to store data in memory, including storing files on the non-volatile storage 916.


Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission or display devices.


The teachings included herein also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, read-only memory (ROM), random access memory (RAM), EPROM, EEPROM, magnetic or optical cards, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, or any type of media suitable for storing electronic instructions, each coupled to a computing system bus.


The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, there is no reference to any particular programming language, and various examples may be implemented using a variety of programming languages.


It will be appreciated to those skilled in the art that the preceding examples and embodiments are exemplary and not limiting in scope. It is intended that all permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of these teachings. It is therefore intended that the following appended claims include all such modifications, permutations, and equivalents as fall within the true spirit and scope of these teachings.

Claims
  • 1. A method comprising: receiving, from a virtual execution environment, a first request for a resource of a stream-enabled application executing in the virtual execution environment; translating the first request for the resource of the stream-enabled application into one or more block IDs of one or more blocks that includes the resource; determining, using the one or more block IDs, which blocks of the one or more blocks that includes the resource are present in a cache; in response to determining which blocks of the one or more blocks that includes the resource are present in the cache: locking the blocks that are present in the cache; transmitting a second request for the blocks that are not present in the cache to a streaming server; receiving the blocks that are not present in the cache from the streaming server; determining one or more predictively streamed blocks that are likely to be requested following the first request for the resource; transmitting the one or more blocks that includes the resource to the virtual execution environment; transmitting the one or more predictively streamed blocks to the virtual execution environment; storing the one or more predictively streamed blocks in the cache; unlocking the blocks that include the resource that were present in the cache.
  • 2. The method of claim 1, wherein storing the one or more predictively streamed blocks in the cache comprises replacing one or more blocks in the cache that are not needed for executing the stream-enabled application.
  • 3. The method of claim 1, further comprising aggregating the one or more predictively streamed blocks prior to transmitting the one or more predictively streamed blocks to the virtual execution environment.
  • 4. The method of claim 1, wherein the one or more predictively streamed blocks are transmitted in an amount of time that is less than or equal to an interactivity threshold.
  • 5. The method of claim 4, wherein the interactivity threshold is an amount of time that a user is willing to wait for the virtual execution environment to access the resource.
  • 6. The method of claim 4, wherein the interactivity threshold is adapted to limit an amount of time for transmitting blocks to preserve a quality of execution of the stream-enabled application.
  • 7. The method of claim 1, further comprising: receiving, from the virtual execution environment, a request for a second resource of the stream-enabled application; determining whether the one or more predictively streamed blocks in the cache at least partially satisfy the request for the second resource; in response to determining that the one or more predictively streamed blocks in the cache at least partially satisfy the request for the second resource, partially satisfying the request for the second resource using the one or more predictively streamed blocks in the cache; requesting, from the streaming server, one or more additional blocks to completely satisfy the request for the second resource; transmitting the one or more additional blocks to the virtual execution environment.
  • 8. The method of claim 1, wherein the determining the one or more predictively streamed blocks that are likely to be requested is based on an occurrence of an error condition in executing the stream-enabled application.
  • 9. The method of claim 1, wherein the one or more predictively streamed blocks are transmitted as part of an output buffer, the one or more predictively streamed blocks being organized in the output buffer according to a probability of each of the one or more predictively streamed blocks being requested.
  • 10. The method of claim 9, wherein the probability of each of the one or more predictively streamed blocks being requested is based on probability data in a logical block probability table.
  • 11. A system comprising: a virtual execution environment; a cache; a block granularity caching engine; wherein, in operation, the block granularity caching engine: receives, from the virtual execution environment, a first request for a resource of a stream-enabled application executing in the virtual execution environment; translates the first request for the resource of the stream-enabled application into one or more block IDs of one or more blocks that includes the resource; determines, using the one or more block IDs, which blocks of the one or more blocks that includes the resource are present in the cache; in response to determining which blocks of the one or more blocks that includes the resource are present in the cache: locks the blocks that are present in the cache; transmits a second request for the blocks that are not present in the cache to a streaming server; receives the blocks that are not present in the cache from the streaming server; determines one or more predictively streamed blocks that are likely to be requested following the first request for the resource; transmits the one or more blocks that includes the resource to the virtual execution environment; transmits the one or more predictively streamed blocks to the virtual execution environment; stores the one or more predictively streamed blocks in the cache; unlocks the blocks that include the resource that were present in the cache.
  • 12. The system of claim 11, wherein storing the one or more predictively streamed blocks in the cache comprises replacing one or more blocks in the cache that are not needed for executing the stream-enabled application.
  • 13. The system of claim 11, wherein, in operation, the block granularity caching engine aggregates the one or more predictively streamed blocks prior to transmitting the one or more predictively streamed blocks to the virtual execution environment.
  • 14. The system of claim 11, wherein the one or more predictively streamed blocks are transmitted in an amount of time that is less than or equal to an interactivity threshold.
  • 15. The system of claim 14, wherein the interactivity threshold is an amount of time that a user is willing to wait for the virtual execution environment to access the resource.
  • 16. The system of claim 14, wherein the interactivity threshold is adapted to limit an amount of time for transmitting blocks to preserve a quality of execution of the stream-enabled application.
  • 17. The system of claim 11, wherein, in operation, the block granularity caching engine: receives, from the virtual execution environment, a request for a second resource of the stream-enabled application; determines whether the one or more predictively streamed blocks in the cache at least partially satisfy the request for the second resource; in response to determining that the one or more predictively streamed blocks in the cache at least partially satisfy the request for the second resource, partially satisfies the request for the second resource using the one or more predictively streamed blocks in the cache; requests, from the streaming server, one or more additional blocks to completely satisfy the request for the second resource; transmits the one or more additional blocks to the virtual execution environment.
  • 18. The system of claim 11, wherein the determining the one or more predictively streamed blocks that are likely to be requested is based on an occurrence of an error condition in executing the stream-enabled application.
  • 19. The system of claim 11, wherein the one or more predictively streamed blocks are transmitted as part of an output buffer, the one or more predictively streamed blocks being organized in the output buffer according to a probability of each of the one or more predictively streamed blocks being requested.
  • 20. A system comprising: means for receiving, from a virtual execution environment, a first request for a resource of a stream-enabled application executing in the virtual execution environment; means for translating the first request for the resource of the stream-enabled application into one or more block IDs of one or more blocks that includes the resource; means for determining, using the one or more block IDs, which blocks of the one or more blocks that includes the resource are present in a cache; in response to determining which blocks of the one or more blocks that includes the resource are present in the cache: means for locking the blocks that are present in the cache; means for transmitting a second request for the blocks that are not present in the cache to a streaming server; means for receiving the blocks that are not present in the cache from the streaming server; means for determining one or more predictively streamed blocks that are likely to be requested following the first request for the resource; means for transmitting the one or more blocks that includes the resource to the virtual execution environment; means for transmitting the one or more predictively streamed blocks to the virtual execution environment; means for storing the one or more predictively streamed blocks in the cache; means for unlocking the blocks that include the resource that were present in the cache.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation application of U.S. patent application Ser. No. 15/723,166, filed Oct. 2, 2017, now U.S. Pat. No. 10,587,473, which is a continuation application of U.S. patent application Ser. No. 15/064,286, filed Mar. 8, 2016, now U.S. Pat. No. 9,781,007, which is a continuation application of U.S. patent application Ser. No. 14/553,873, filed Nov. 25, 2014, now U.S. Pat. No. 9,300,752, which is a continuation application of U.S. patent application Ser. No. 13/962,554, filed Aug. 8, 2013, now U.S. Pat. No. 8,898,391, which is a continuation application of U.S. patent application Ser. No. 13/234,950 filed Sep. 16, 2011, now U.S. Pat. No. 8,527,706, which is a continuation application of U.S. patent application Ser. No. 12/062,789, filed Apr. 4, 2008, now U.S. Pat. No. 8,024,523, which claims priority to U.S. Provisional Patent Application No. 60/986,261, filed Nov. 7, 2007, all of which are incorporated herein by reference. The U.S. patent application Ser. No. 13/234,950, now U.S. Pat. No. 8,527,706, is also a continuation-in-part of U.S. patent application Ser. No. 11/388,381, filed Mar. 23, 2006, now U.S. Pat. No. 9,716,609, which claims priority to U.S. Provisional Patent Application No. 60/664,765, filed on Mar. 23, 2005, all of which are also incorporated herein by reference.

US Referenced Citations (340)
Number Name Date Kind
4562306 Chuo Dec 1985 A
4796220 Wolfe Jan 1989 A
4949257 Orbach Aug 1990 A
4970504 Chen Nov 1990 A
4999806 Chernow Mar 1991 A
5012512 Basso Apr 1991 A
5032979 Hecht Jul 1991 A
5047928 Wiedemer Sep 1991 A
5063500 Shorter Nov 1991 A
5109413 Comeford Apr 1992 A
5166886 Molnar Nov 1992 A
5210850 Kelly May 1993 A
5293556 Hill Mar 1994 A
5311596 Scott May 1994 A
5325489 Mitsuhira Jun 1994 A
5442791 Wrabetz Aug 1995 A
5481611 Owens Jan 1996 A
5495411 Ananda Feb 1996 A
5533123 Force Jul 1996 A
5537566 Konno Jul 1996 A
5544321 Theimer Aug 1996 A
5546526 Li Aug 1996 A
5547202 Tsumura Aug 1996 A
5548645 Ananda Aug 1996 A
5553139 Ross Sep 1996 A
5553143 Ross Sep 1996 A
5555376 Theimer Sep 1996 A
5590300 Lautzenheiser Dec 1996 A
5611050 Theimer Mar 1997 A
5629980 Stefik May 1997 A
5630049 Cardoza May 1997 A
5635906 Joseph Jun 1997 A
5638513 Ananda Jun 1997 A
5652887 Dewey Jul 1997 A
5666293 Metz Sep 1997 A
5696965 Dedrick Dec 1997 A
5701427 Lathrop Dec 1997 A
5706440 Compliment Jan 1998 A
5715403 Stefik Feb 1998 A
5758150 Bell May 1998 A
5761406 Kobayashi Jun 1998 A
5761445 Nguyen Jun 1998 A
5764906 Edelstein Jun 1998 A
5764918 Poulter Jun 1998 A
5765152 Erickson Jun 1998 A
5765153 Benantar Jun 1998 A
5768528 Stumm Jun 1998 A
5768539 Metz Jun 1998 A
5771354 Crawford Jun 1998 A
5778395 Whiting Jul 1998 A
5790753 Krishnamoorthy Aug 1998 A
5805809 Singh Sep 1998 A
5809144 Sirbu Sep 1998 A
5812865 Theimer Sep 1998 A
5812881 Ku Sep 1998 A
5815662 Ong Sep 1998 A
5818711 Schwabe Oct 1998 A
5822537 Katseff Oct 1998 A
5832289 Shaw Nov 1998 A
5835722 Bradshaw Nov 1998 A
5838910 Domenikos Nov 1998 A
5839910 Meller Nov 1998 A
5855020 Kirsch Dec 1998 A
5874986 Gibbon Feb 1999 A
5878425 Redpath Mar 1999 A
5881229 Singh Mar 1999 A
5881232 Cheng Mar 1999 A
5892915 Duso Apr 1999 A
5892953 Bhagria Apr 1999 A
5895454 Harrington Apr 1999 A
5895471 King Apr 1999 A
5901315 Edwards May 1999 A
5903721 Sixtus May 1999 A
5903732 Reed May 1999 A
5903892 Hoffert May 1999 A
5905868 Baghai May 1999 A
5905990 Inglett May 1999 A
5909545 Frese May 1999 A
5911043 Duffy Jun 1999 A
5918015 Suzuki Jun 1999 A
5923885 Johnson Jul 1999 A
5925126 Hsieh Jul 1999 A
5926552 Mckeon Jul 1999 A
5929849 Kikinis Jul 1999 A
5931907 Davies Aug 1999 A
5933603 Vahalia Aug 1999 A
5933822 Braden-Harder Aug 1999 A
5940591 Boyle Aug 1999 A
5943424 Berger Aug 1999 A
5948062 Tzelnic Aug 1999 A
5948065 Eilert Sep 1999 A
5949877 Traw Sep 1999 A
5950195 Stockwell Sep 1999 A
5953506 Kalra Sep 1999 A
5956717 Kraay Sep 1999 A
5960411 Hartman Sep 1999 A
5960439 Hamner Sep 1999 A
5961586 Pedersen Oct 1999 A
5961591 Jones Oct 1999 A
5963444 Shidara Oct 1999 A
5963944 Adams Oct 1999 A
5968176 Nessett Oct 1999 A
5973696 Arganat Oct 1999 A
5987454 Hobbs Nov 1999 A
5987608 Roskind Nov 1999 A
6003065 Yan Dec 1999 A
6003095 Pekowski Dec 1999 A
6014686 Elnozahy Jan 2000 A
6018619 Allard Jan 2000 A
6026166 LeBourgeois Feb 2000 A
6028925 Van Berkum Feb 2000 A
6038379 Fletcher Mar 2000 A
6038610 Belfiore Mar 2000 A
6047323 Krause Apr 2000 A
6049792 Hart Apr 2000 A
6049835 Gagnon Apr 2000 A
6061738 Osaku May 2000 A
6065043 Domenikos May 2000 A
6076104 McCue Jun 2000 A
6081842 Shachar Jun 2000 A
6085186 Christianson Jul 2000 A
6085193 Malkin Jul 2000 A
6088705 Lightstone Jul 2000 A
6092154 Curtis Jul 2000 A
6092194 Touboul Jul 2000 A
6094649 Bowen Jul 2000 A
6098072 Sluiman Aug 2000 A
6099408 Schneier Aug 2000 A
6101482 Diangelo Aug 2000 A
6101491 Woods Aug 2000 A
6101537 Edelstein Aug 2000 A
6108420 Larose Aug 2000 A
6115741 Domenikos Sep 2000 A
6138271 Keeley Oct 2000 A
6154878 Saboff Nov 2000 A
6157948 Inoue Dec 2000 A
6167510 Tran Dec 2000 A
6167522 Lee Dec 2000 A
6173311 Hassett Jan 2001 B1
6173330 Guo Jan 2001 B1
6185608 Hon Feb 2001 B1
6192398 Hunt Feb 2001 B1
6192408 Vahalia Feb 2001 B1
6195694 Chen Feb 2001 B1
6212640 Abdelnur Apr 2001 B1
6219693 Napolitano Apr 2001 B1
6226412 Schwab May 2001 B1
6226665 Deo May 2001 B1
6253234 Hunt Jun 2001 B1
6275470 Ricciulli Aug 2001 B1
6275496 Burns Aug 2001 B1
6278992 Curtis Aug 2001 B1
6281898 Nikolovska Aug 2001 B1
6282712 Davis Aug 2001 B1
6298356 Jawahar Oct 2001 B1
6301584 Ranger Oct 2001 B1
6301605 Napolitano Oct 2001 B1
6301629 Sastri Oct 2001 B1
6301685 Shigeta Oct 2001 B1
6311221 Raz Oct 2001 B1
6314425 Serbinis Nov 2001 B1
6317811 Deshpande Nov 2001 B1
6321260 Takeuchi Nov 2001 B1
6327674 Ito Dec 2001 B1
6330561 Cohen Dec 2001 B1
6343287 Kumar Jan 2002 B1
6347398 Parthasarathy Feb 2002 B1
6356946 Clegg Mar 2002 B1
6369467 Noro Apr 2002 B1
6370686 Delo Apr 2002 B1
6374402 Schmeidler Apr 2002 B1
6385696 Doweck May 2002 B1
6389467 Eyal May 2002 B1
6418554 Delo Jul 2002 B1
6418555 Mohammed Jul 2002 B2
6418556 Bennington Jul 2002 B1
6424991 Gish Jul 2002 B1
6425017 Dievendorff Jul 2002 B1
6449688 Peters Sep 2002 B1
6453334 Vinson Sep 2002 B1
6457076 Cheng Sep 2002 B1
6508709 Karmarkar Jan 2003 B1
6510458 Berstis Jan 2003 B1
6510462 Blumenau Jan 2003 B2
6510466 Cox Jan 2003 B1
6524017 Lecocq Feb 2003 B2
6574618 Eylon Jun 2003 B2
6584507 Bradley Jun 2003 B1
6587857 Carothers Jul 2003 B1
6594682 Peterson Jul 2003 B2
6598125 Romm Jul 2003 B2
6601103 Goldschmidt Jul 2003 B1
6601110 Marsland Jul 2003 B2
6605956 Farnworth Aug 2003 B2
6609114 Gressel Aug 2003 B1
6611812 Hurtado Aug 2003 B2
6622137 Ravid Sep 2003 B1
6622171 Gupta Sep 2003 B2
6636961 Braun Oct 2003 B1
6651251 Shoff Nov 2003 B1
6687745 Franco Feb 2004 B1
6694510 Willems Feb 2004 B1
6697869 Mallart Feb 2004 B1
6711619 Chandramohan Mar 2004 B1
6721870 Yochai Apr 2004 B1
6732179 Brown May 2004 B1
6735631 Oehrke May 2004 B1
6757708 Craig Jun 2004 B1
6757894 Eylon Jun 2004 B2
6763370 Schmeidler Jul 2004 B1
6772209 Chernock Aug 2004 B1
6775779 England Aug 2004 B1
6779179 Romm Aug 2004 B1
6785768 Peters Aug 2004 B2
6785865 Cote Aug 2004 B1
6801507 Humpleman Oct 2004 B1
6810525 Safadi Oct 2004 B1
6816909 Chang Nov 2004 B1
6816950 Nichols Nov 2004 B2
6832222 Zimowski Dec 2004 B1
6836794 Lucowsky Dec 2004 B1
6854009 Hughes Feb 2005 B1
6891740 Williams May 2005 B2
6918113 Patel Jul 2005 B2
6925495 Hedge Aug 2005 B2
6938096 Greschler Aug 2005 B1
6959320 Shah Oct 2005 B2
6970866 Pravetz Nov 2005 B1
6985915 Somalwar Jan 2006 B2
7024677 Snyder Apr 2006 B1
7028305 Schaefer Apr 2006 B2
7043524 Shah May 2006 B2
7047312 Aweya May 2006 B1
7051315 Artzi May 2006 B2
7062567 Benitez Jun 2006 B2
7093077 Cooksey Aug 2006 B2
7096253 Vinson Aug 2006 B2
7112138 Hendrick Sep 2006 B2
7130616 Janik Oct 2006 B2
7137072 Bauer Nov 2006 B2
7171390 Song Jan 2007 B1
7191441 Abbott Mar 2007 B2
7192352 Walker Mar 2007 B2
7197516 Hipp Mar 2007 B1
7246119 Kuwata Jul 2007 B2
7380014 Lecroy May 2008 B2
7451196 de Vries Nov 2008 B1
7529834 Birrell May 2009 B1
7711539 Kimmel May 2010 B1
20010003828 Peterson Jun 2001 A1
20010014878 Mitra Aug 2001 A1
20010027493 Wallace Oct 2001 A1
20010034736 Eylon Oct 2001 A1
20010037399 Eylon Nov 2001 A1
20010037400 Raz Nov 2001 A1
20010042833 Kenway Nov 2001 A1
20010044850 Raz Nov 2001 A1
20010044851 Rothman Nov 2001 A1
20020015106 Taylor, Jr. Feb 2002 A1
20020019864 Mayer Feb 2002 A1
20020027884 Halme Mar 2002 A1
20020035674 Vetrivelkumaran Mar 2002 A1
20020038374 Gupta Mar 2002 A1
20020042833 Hendler Apr 2002 A1
20020057893 Wood May 2002 A1
20020059402 Belanger May 2002 A1
20020065848 Walker May 2002 A1
20020078170 Brewer Jun 2002 A1
20020078203 Greschler Jun 2002 A1
20020083183 Pujare Jun 2002 A1
20020083187 Sim Jun 2002 A1
20020087717 Artzi Jul 2002 A1
20020087883 Wohlgemuth Jul 2002 A1
20020087963 Eylon Jul 2002 A1
20020091763 Shah Jul 2002 A1
20020091901 Romm Jul 2002 A1
20020116476 Eyal Aug 2002 A1
20020133491 Sim Sep 2002 A1
20020138640 Raz Sep 2002 A1
20020147849 Wong Oct 2002 A1
20020156911 Croman Oct 2002 A1
20020157089 Patel Oct 2002 A1
20020161908 Benitez Oct 2002 A1
20020174215 Schaefer Nov 2002 A1
20020174293 Fox Nov 2002 A1
20020176443 Wei Nov 2002 A1
20030004882 Holler Jan 2003 A1
20030009538 Shah Jan 2003 A1
20030056112 Vinson Mar 2003 A1
20030105816 Goswami Jun 2003 A1
20030128692 Mitsumori Jul 2003 A1
20030131197 Morrison Jul 2003 A1
20030138024 Williamson Jul 2003 A1
20030140160 Raz Jul 2003 A1
20030204550 Lutter Oct 2003 A1
20040003105 Berzosa Jan 2004 A1
20040036722 Warren Feb 2004 A1
20040128342 Maes Jul 2004 A1
20040133657 Smith Jul 2004 A1
20040199566 Carlson Oct 2004 A1
20040205289 Srinivasan Oct 2004 A1
20040230784 Cohen Nov 2004 A1
20040230971 Rachman Nov 2004 A1
20040267813 Rivers-Moore Dec 2004 A1
20040268361 Schaefer Dec 2004 A1
20050010607 Parker Jan 2005 A1
20050010670 Greschler Jan 2005 A1
20050041584 Lau Feb 2005 A1
20050091534 Nave Apr 2005 A1
20050114472 Tan May 2005 A1
20050193139 Vinson Sep 2005 A1
20050278642 Chang Dec 2005 A1
20050289617 Safadi Dec 2005 A1
20060010074 Zeitsiff Jan 2006 A1
20060031165 Nave Feb 2006 A1
20060041719 Chui Feb 2006 A1
20060047716 Keith, Jr. Mar 2006 A1
20060048136 de Vries Mar 2006 A1
20060062112 Wijnands Mar 2006 A1
20060106770 de Vries May 2006 A1
20060123185 de Vries Jun 2006 A1
20060136389 Cover Jun 2006 A1
20060168294 de Vries Jul 2006 A1
20060218165 de Vries Sep 2006 A1
20060230175 de Vries Oct 2006 A1
20070038642 Durgin Feb 2007 A1
20070043550 Tzruya Feb 2007 A1
20070067435 Landis Mar 2007 A1
20070074223 Lescouet Mar 2007 A1
20070126749 Tzruya Jun 2007 A1
20070129146 Tzruya Jun 2007 A1
20070129990 Tzruya Jun 2007 A1
20070130075 Song Jun 2007 A1
20070130292 Tzruya Jun 2007 A1
20070168309 Tzruya Jul 2007 A1
20070196074 Jennings Aug 2007 A1
20080178298 Arai Jul 2008 A1
20090119458 de Vries May 2009 A1
20090119644 de Vries May 2009 A1
20100023640 Vinson Jan 2010 A1
Foreign Referenced Citations (20)
Number Date Country
0813325 Dec 1997 EP
0658837 Jun 2000 EP
1020824 Jul 2000 EP
1143349 Oct 2001 EP
1998040993 Sep 1998 WO
1998050853 Nov 1998 WO
1999057863 Nov 1999 WO
1999060458 Nov 1999 WO
2000004681 Jan 2000 WO
2000031657 Jun 2000 WO
2000031672 Jun 2000 WO
2000056028 Sep 2000 WO
2001027805 Apr 2001 WO
2001046856 Jun 2001 WO
2002044840 Jun 2002 WO
2006022745 Mar 2006 WO
2006047133 May 2006 WO
2006055445 May 2006 WO
2006102532 Sep 2006 WO
2006102621 Sep 2006 WO
Non-Patent Literature Citations (25)
Entry
Bailey, Peter et al., “Chart of Darkness: Mapping a Large Intranet,” Dept. of Computer Science, FEIT, The Australian National University, Canberra ACT 0200, Australia, pp. 1-23, Nov. 11, 1999 [retrieved online at http://research.microsoft.com/en-us/um/people/nickcr/pubs/bailey_tr00.pdf].
Boneh, Dan et al., “An Attack on RSA Given a Small Fraction of the Private Key Bits,” Advances in Cryptology—Asiacrypt '98, Lecture Notes in Computer Science, vol. 1514, pp. 25-34, Oct. 1998 [retrieved online at http://download.springer.com/static/pdf/450/chp%253A10.1007%252F3-540-49649-1_3.pdf?auth66=1394831295_318c7a44939193b5a2aff612b2a047ac&ext=.pdf].
Brin, Sergey et al., “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Proceedings of the Seventh International World Wide Web Conference, pp. 107-117, Apr. 1998 [retrieved online at http://ilpubs.stanford.edu:8090/361/1/1998-8.pdf].
Chang, K. et al., “Dynamic Computation of TCP Maximum Window Size for Directly Connected Hosts,” IBM Technical Disclosure Bulletin, vol. 37, No. 04A, pp. 601-607, Apr. 4, 1994.
Chu, Yang-Hua et al., “Referee: Trust Management for Web Applications,” Proceedings of the Sixth International World Wide Web Conference, 1997, retrieved online on Jun. 15, 2006 at http://www.si.umich.edu/˜presnick/papers/Referee/www6-referee.html.
Faupel, Matthew, “Status of Industry Work on Signed Mobile Code,” Joint European Networking Conference (JENC), May 1997, 313-1-313-8.
Fiedler, David et al., “UNIX System V. Release 4 Administration,” Second Edition, 1991, 1-13, 37-116, 152-153, 175-200, 291-312, Hayden Books, Carmel, Indiana, USA.
George, Binto et al., “Secure Transaction Processing in Firm Real-Time Database Systems,” SIGMOD International Conference on Management of Data 1997, 462-473, V26, Issue 2, Association of Computing Machinery (ACM) Press, Tucson, Arizona, United States, May 13, 1997.
Gralla, Preston, “How the Internet Works: Chapter 44—Shopping on the Internet,” IEEE Personal Communications, Aug. 1999, 260-67, QUE—A division of Macmillan Computer Publishing, Millennium Edition.
Microsoft Corp., “Computer Dictionary,” 3rd edition, 1997, pp. 119 & 305, Microsoft Press.
Microsoft Corp., “Understanding Universal Plug and Play,” pp. 1-39, Feb. 2000.
Morrow, Brian et al., “Indexing Within—The Lost Gold Within the Enterprise” Endeavors Technology, Aug. 22, 2000, pp. 1-6.
Mullender, Sape J. et al., “Amoeba:A Distributed Operating System for the 1990's,” Computer Magazine, May 1990, 44-53, 23(5).
Nakayoshi et al., “A Secure Private File System with Minimal System Administration,” Communications, Computers and Signal Processing, 1997 IEEE Pacific Rim Conference, 251-255, vol. 1.
O'Mahony, Donal, “Security Considerations in a Network Management Environment,” 1994, 12-17, vol. 8, IEEE, USA.
Pyarali, Irfan et al., “Design and Performance of an Object-Oriented Framework for High-Speed Electronic Medical Imaging,” Fall 1996, Computing Systems Journal, 331-375, vol. 9, http://www.cs.wustl.edu/˜schmidt/PDF/COOTS-96.pdf.
Rappaport, Avi, “Robots & Spiders & Crawlers: How Web and Intranet Search Engines Follow Links To Build Indexes,” Infoseek Software, pp. 1-38 (Oct. 1999).
Reinhardt, Robert B., “An Architectural Overview of UNIX Network Security,” ARINC Research Corporation, Sep. 19, 1992, retrieved online on Jun. 15, 2006 at http://www.clusit.it/whitepapers/unixnet.pdf.
Sirbu, Marvin et al., “Netbill: An Internet Commerce System Optimized for Network-Delivered Services,” IEEE Personal Communications, 2(4):34-39, Aug. 1995.
Tardo, Joseph et al., “Mobile Agent Security and Telescript,” 4th International Conference of the IEEE Computer Society (IEEE CompCon1996), Feb. 1996.
International Application No. PCT/US2004/028195, Search Report and Written Opinion dated May 2, 2006.
International Application No. PCT/US2005/041024, Search Report and Written Opinion dated Feb. 27, 2007.
International Application No. PCT/US2006/010637, Search Report and Written Opinion dated Sep. 25, 2007.
International Application No. PCT/US2006/010904, Search Report and Written Opinion dated Dec. 26, 2007.
U.S. Appl. No. 09/996,180, filed Nov. 27, 2001.
Related Publications (1)
Number Date Country
20200322221 A1 Oct 2020 US
Provisional Applications (2)
Number Date Country
60986261 Nov 2007 US
60664765 Mar 2005 US
Continuations (6)
Number Date Country
Parent 15723166 Oct 2017 US
Child 16814604 US
Parent 15064286 Mar 2016 US
Child 15723166 US
Parent 14553873 Nov 2014 US
Child 15064286 US
Parent 13962554 Aug 2013 US
Child 14553873 US
Parent 13234950 Sep 2011 US
Child 13962554 US
Parent 12062789 Apr 2008 US
Child 13234950 US
Continuation in Parts (1)
Number Date Country
Parent 11388381 Mar 2006 US
Child 12062789 US