A portion of the disclosure of this patent document may contain material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This invention relates generally to using behavioral event driven data to perform predictive cache population. More specifically, the invention relates to the use of a behavioral event correlation system to make predictions regarding entries that are to be populated in a cache.
Algorithms controlling cache behavior have traditionally been of interest to computer scientists. In particular, research has focused on the question at the root of cache behavior: how to predict which resources will be accessed next. This question drives both the choice of which items to evict from a cache when it reaches the limit of its size and the techniques for selectively populating the cache with items in advance of their expected need.
In general, a cache is a region of relatively fast data storage used to store frequently used items from a larger region of relatively slow data storage. Without loss of generality, these regions could be main memory and a disk, a processor cache and main memory, etc.; what matters is their performance relative to each other.
Recommendation algorithms have been used to create static pre-computations of item relationships, which require a large amount of storage and typically cannot be performed in real time. This static pre-computation, which traditionally builds and stores a matrix of item relationships, is too space inefficient and complex to use as a cache prediction mechanism for anything but the most expensive cached results. Static pre-computation also suffers from the very serious problem of being insufficiently reactive to changes in system behavior, which ultimately undermines its effectiveness as a cache prediction algorithm in all but the most coarse-grained contexts.
For a cache prediction algorithm to be useful, it should be cheaper to compute than the operation whose result it caches. Traditionally, most cache prediction mechanisms have therefore focused on very cheap first-order approximations of the relative usefulness of a given cache entry, where the usefulness of a cache entry is almost always defined by its probability of subsequent access. To do this, traditional cache prediction mechanisms focus on how recently a given item was accessed, or on statistics of data page sizes and other very high-level approximations.
For example, devices in I/O paths generally pre-fetch as much data as possible: a 1 KB read from an application might be turned into an 8 KB read in the operating system (with the remaining 7 KB cached), and the disk that actually satisfies the request might read 64 KB from the platters into its local memory. This is a performance advantage because the latency associated with preparing to satisfy a disk request is quite high, so it is advantageous to read as much data as possible while the heads and platters are in the correct configuration.
The scheme also pays off because the differences in latencies are very high, and there is generally a computational (rather than I/O) component that separates I/O requests. This simple scheme is based on the principle of spatial locality: items that are accessed together tend to be located together (for example, if you read the third paragraph of this document, you are likely to read the fourth paragraph as well). Unfortunately, such a scheme cannot pre-fetch the next document you will read, and the latency involved in retrieving that next document may be quite significant, as no caching is available to help.
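For purposes of illustration only, the following is a minimal sketch of such spatial-locality read-ahead. The block size, cache dictionary, and `read` helper are hypothetical and are not drawn from the embodiments described later.

```python
# Minimal sketch of spatial-locality read-ahead: a small application read is
# widened to a larger aligned block, and the surplus bytes are kept in a cache.
# The block size and the file-like "device" are illustrative assumptions.

READ_AHEAD = 8 * 1024   # widen every request to an 8 KB aligned block

block_cache = {}        # block start offset -> bytes already read


def read(device, offset, length):
    """Return `length` bytes at `offset`, pre-fetching the surrounding block.

    For brevity, assumes the requested range fits within a single block.
    """
    start = (offset // READ_AHEAD) * READ_AHEAD
    if start not in block_cache:
        device.seek(start)
        block_cache[start] = device.read(READ_AHEAD)  # read more than was asked for
    block = block_cache[start]
    return block[offset - start : offset - start + length]
```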
Therefore, what is needed is a system and method for the dynamic generation of correlation scores between arbitrary objects to create a list of correlated items.
According to one embodiment, a method is disclosed. The method includes a client accessing a cache for a value of an object based on an object identification (ID); initiating a request to a cache loader if the cache does not include a value for the object; the cache loader performing a lookup in an object table for the object ID corresponding to the object; the cache loader retrieving, from an execution context table, a vector of execution context IDs that correspond to the object IDs looked up in the object table; and the cache loader performing an execution context lookup in the execution context table for every retrieved execution context ID in the vector to retrieve object IDs from an object vector.
In a further embodiment, a system is disclosed. The system includes a client, a cache to receive access requests from the client to retrieve a value of an object based on an object ID, an object table, and an execution context table. The system also includes a cache loader that receives a request if the cache does not include a value for the object, performs a lookup in the object table for the object ID corresponding to the object, retrieves a vector of execution context IDs from the execution context table that correspond to the object IDs looked up in the object table, and performs an execution context lookup in the execution context table for every retrieved execution context ID in the vector to retrieve object IDs from an object vector.
The inventive body of work will be readily understood by referring to the following detailed description in conjunction with the accompanying drawings, in which:
A detailed description of the inventive body of work is provided below. While several embodiments are described, it should be understood that the inventive body of work is not limited to any one embodiment, but instead encompasses numerous alternatives, modifications, and equivalents. In addition, while numerous specific details are set forth in the following description in order to provide a thorough understanding of the inventive body of work, some embodiments can be practiced without some or all of these details. Moreover, for the purpose of clarity, certain technical material that is known in the related art has not been described in detail in order to avoid unnecessarily obscuring the invention.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
According to one embodiment, objects relating to a cache memory are uniquely identified in some common namespace (e.g., inode numbers in a file system, or memory address locations). An object may be relatively coarse (e.g., a file) or relatively fine (e.g., a block of memory). In a further embodiment, each context of execution (e.g., thread, process, etc.) is uniquely identifiable.
In a further embodiment, tables of relationships between execution contexts and system objects are dynamically generated, with a correlation score of an object to other objects computed by dynamic examination of table contents. In one embodiment, the system maintains two tables, the contents of each of which is a set of vectors. In one table the key values are the identifications (IDs) of the execution contexts, and the vectors include the object IDs of each object that context has accessed in a predefined time period.
The second table is the mirror image of the first: the key values are the object IDs of the objects themselves, and the vectors are the lists of execution contexts in which they were accessed. In one embodiment, the contents of these vectors are stored in time order. However, in other embodiments, the vector contents may be stored based upon other parameters as well.
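For illustration only, the two mirrored tables may be visualized as in the minimal sketch below, assuming simple in-memory dictionaries with time-ordered lists; the names `context_table`, `object_table`, and `record_access` are illustrative, not drawn from the embodiments.

```python
from collections import defaultdict

# Table 1: execution context ID -> object IDs it has accessed (in time order).
# Table 2: object ID -> execution context IDs that accessed it (in time order).
context_table = defaultdict(list)
object_table = defaultdict(list)


def record_access(context_id, object_id):
    """Record that an execution context accessed an object, updating both tables."""
    context_table[context_id].append(object_id)
    object_table[object_id].append(context_id)


# Example: thread "t1" reads objects "a" then "b"; thread "t2" reads "a" then "g".
for ctx, obj in [("t1", "a"), ("t1", "b"), ("t2", "a"), ("t2", "g")]:
    record_access(ctx, obj)
```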
Embodiments of the invention may include various processes as set forth below. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain steps. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable media suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Score Correlation
These parameters may include restrictions (for example, to consider only certain types of behavior) or other options that modify the runtime behavior of an algorithm. In processing block 103, a list of objects for the seed object is then retrieved via a lookup of stored relationship data in a storage table. The list is then restricted appropriately by whatever may have been specified in the set of parameters, thus establishing a pivot set, processing block 104. In processing block 105, each object that has interacted with all members of the pivot set is determined by applying appropriate restrictions, creating a candidate set.
In processing block 106, as the candidate set is generated, a score for each pivot-set member's contribution is computed, and these scores are summed. In one embodiment, these summed values form a histogram that is input to the scoring algorithm. In processing block 107, the scored members of the candidate set are then processed by a scoring algorithm, such as the vector cosine or Pearson's R. At processing block 108, the scores are sorted to compute a list of correlated items. In one embodiment, each component of this process is computed on a single processor. In another embodiment, processing blocks 103, 104, 105, 106, 107 and 108 may each be computed in parallel on multiple processors or in a distributed fashion across multiple systems.
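The sketch below walks through processing blocks 103-108 against the in-memory tables sketched earlier. The co-access histogram and the cosine-style normalization are simplified stand-ins for the scoring algorithms named above, and the function name `correlated_items` is an assumption made for illustration.

```python
import math
from collections import Counter


def correlated_items(seed, object_table, context_table, limit=10):
    """Return a sorted list of items correlated with `seed` (blocks 103-108)."""
    pivot = object_table.get(seed, [])     # blocks 103/104: contexts that accessed the seed
    histogram = Counter()                  # blocks 105/106: candidate object -> raw co-access count
    for ctx in pivot:
        for obj in context_table.get(ctx, []):
            if obj != seed:
                histogram[obj] += 1

    # Block 107: cosine-style normalization of the raw counts by the popularity
    # of the seed and of each candidate (a simplified stand-in for vector cosine).
    scores = {
        obj: count / math.sqrt(len(pivot) * len(object_table.get(obj, [])))
        for obj, count in histogram.items()
    }
    # Block 108: sort by score to produce the list of correlated items.
    return sorted(scores, key=scores.get, reverse=True)[:limit]
```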
In one embodiment, an entity is something about which information can be gathered. An entity can be uniquely identified through the assignation of a unique numerical value or through some other process. Entities can be broadly grouped together into types. For example, all people might be grouped together into a “people” type. This type information may be encoded into the uniquely identifying value described above.
Entities are logically linked by classification events. A classification event may be the result of behavioral observations, textual analysis, or some other means of associating one entity with another. For example, a classification event can be decomposed into a binary expression A*B, where the * operator implies action. For example, “user U loads Web page W” could be expressed as U*W, “label L applies to item I” could be expressed as L*I, or “user U played song S” as U*S. Information about entities is stored in a data store.
The data store may be represented through any of several different mechanisms: a hash table, a balanced tree such as a red-black tree, or some other data structure. The data store provides the ability, given an entity, to look up all other entities that have been associated with the particular entity. This information may be stored as an ordered list or some other data structure. Other information may be stored as well: for example, the time of the event, the intent that gave rise to the classification event (e.g., a person purchasing an item), or business-level metadata about the transaction (sale status for a product, mature-content flags for an article, etc.). An association method is used to generate a list of entities that are related to some other item. The process of association requires at least one initial entity, the seed, and a series of parameters. These parameters may include restrictions placed on the members of both the complete pivot set and the partial candidate sets.
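A minimal sketch of one possible data-store layout follows, assuming a hash table keyed by entity ID whose values are time-ordered lists of classification events. The `ClassificationEvent` record, its field names, and the `associate` helper are illustrative assumptions rather than the required representation.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ClassificationEvent:
    """One A*B association, plus optional metadata about the event."""
    other: str                                      # the entity on the other side of '*'
    timestamp: float = field(default_factory=time.time)
    intent: str = "view"                            # e.g. "view", "purchase", "play"
    metadata: dict = field(default_factory=dict)    # e.g. {"sale_status": "active"}


# Hash-table data store: entity ID -> time-ordered list of classification events.
data_store: dict[str, list[ClassificationEvent]] = {}


def associate(a: str, b: str, intent: str = "view") -> None:
    """Record the classification event a*b in both directions."""
    data_store.setdefault(a, []).append(ClassificationEvent(b, intent=intent))
    data_store.setdefault(b, []).append(ClassificationEvent(a, intent=intent))


# Example: "user U loads Web page W" expressed as U*W.
associate("user:U", "page:W", intent="view")
```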
In the event that the entity 202 in the pivot set 203 is not excluded, the list entities 205 associated with this particular entity 202 in the pivot set 203 are retrieved, producing a partial candidate set 204. Each list entity 205 in the partial candidate set 204 is inspected in turn. If the list entity 205 is excluded from further consideration by the established restriction criteria (e.g., if the relationship represents a view of an item, and only purchase data is being considered), the process moves on to the next list entity 205 in the partial candidate set 204.
Otherwise, the list entity 205 is assigned a raw score. Restriction criteria are based on a number of factors, such as the semantic relationship (intent) associated with the list entity's 205 inclusion in the partial candidate list 204, or some other criteria. A running sum of raw scores is maintained for each list entity 205 (not depicted here), each raw score representing a weighted relationship between an item in the pivot set and the candidate set. Based upon the summation of raw scores and the final scoring algorithm, entities are generated for serving to a website or end user (not depicted here).
System Overview
The client 310 in this embodiment represents a computer that is used by an end-user to interact with the web sites 312 and/or server 314 via the network 316. The client 310 can be, for example, a personal computer or another network-capable device, such as a personal digital assistant (PDA), a cellular telephone, a pager, a video game system, a television “set-top box,” etc.
The web sites 312 are locations on the network 316 that provide web pages to the clients 310 via the network 316. The web sites 312 can be, for example, media sites that primarily provide content such as news to the end-users, retailer sites that enable the end-users to purchase items, social networking sites that enable end-users to interact with other people, and hybrid sites that provide a mix of these features. Those of skill in the art will recognize that there may be an unlimited number of different types of web sites 312 with which the clients 310 can interact.
The end-users of the clients 310 interact with the web sites 312 to establish relationships. For example, assume an end-user views a web page for a digital camera, and then views a web page for a memory card for that camera. These actions create relationships between the end-user and the camera, and between the end-user and the memory card. The web sites 312 observe relationships such as these, and provide messages to the server 314 describing them.
In addition, the web sites 312 receive recommendations from the server 314. These recommendations are provided to the end-users, typically by including the recommendations on web pages served to the end-users' clients 310. The recommendations can be for arbitrary and/or heterogeneous items and the web sites can request that the server 314 provide recommendations for only specified types of items. For example, the recommendations can include items an end-user might want to purchase, news stories the end-user might want to read, bands the end-user might like, discussion groups in which the end-user might want to participate, etc.
The server 314 receives descriptions of relationships from the web sites 312 and/or clients 310 and provides recommendations in return. In one embodiment, the server 314 performs collaborative filtering on the received relationships to generate the recommendations. In other embodiments, the server 314 performs the correlation scoring method described above to generate the recommendations.
The network 316 represents the communication pathways between the clients 310, web sites 312, and server 314. In one embodiment, the network 316 is the Internet. The network 316 can also utilize dedicated or private communications links that are not necessarily part of the Internet. In one embodiment, the network 316 uses standard communications technologies and/or protocols. Thus, the network 316 can include links using technologies such as 802.11, integrated services digital network (ISDN), digital subscriber line (DSL), asynchronous transfer mode (ATM), etc.
Similarly, the networking protocols used on the network 316 can include multi-protocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), etc. The data exchanged over the network 316 can be represented using technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), the web services description language (WSDL), etc. In addition, all or some of the links can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), Secure HTTP, and/or virtual private networks (VPNs). In another embodiment, the entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above.
The processor 402 may be any general-purpose processor such as an INTEL x86, SUN MICROSYSTEMS SPARC, or POWERPC-compatible CPU. The storage device 408 is, in one embodiment, a hard disk drive but can also be any other device capable of storing data, such as a writeable compact disk (CD) or DVD, or a solid-state memory device. The memory 406 may be, for example, firmware, read-only memory (ROM), non-volatile random access memory (NVRAM), and/or RAM, and holds instructions and data used by the processor 402. The pointing device 414 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 410 to input data into the computer system 400. The graphics adapter 412 displays images and other information on the display 418. The network adapter 416 couples the computer system 400 to the network 316.
As is known in the art, the computer 400 is adapted to execute computer program modules. As used herein, the term “module” refers to computer program logic and/or data for providing the specified functionality. A module can be implemented in hardware, firmware, and/or software. In one embodiment, the modules are stored on the storage device 408, loaded into the memory 406, and executed by the processor 402.
The types of computers 400 utilized by the entities described above can vary depending upon the embodiment.
Predictive Cache Loading
As discussed above, the server 314 receives descriptions of relationships and provides recommendations in return. The server 314 may also dynamically build tables of relationships between execution contexts and system objects, with a correlation score of an object to other objects computed by dynamic examination of the table contents.
A mechanism for enhancing recommendation system performance includes providing weight values as context to the algorithms. These weight values describe relevant metadata about the object. In one embodiment, the metadata stored with the value is the time required to load the value from the backing store. In another embodiment, the metadata stored is the time since the previous request. In yet another embodiment, the metadata stored is the size of the returned object.
Furthermore, it is possible to store several of these values, along with additional metadata as available, to enhance algorithmic performance. One specific use of this data is to score more highly those values that are more difficult to load or that are loaded more quickly after a given value. In another embodiment, these data vectors can be used during cache eviction to bias toward not evicting data from the cache that is more expensive to recompute.
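The sketch below shows one way such metadata could bias both predictive scoring and eviction. The specific weighting formulas and field names are assumptions made for illustration, not the claimed method.

```python
def weighted_score(base_score, load_time_ms, time_since_prev_ms):
    """Boost candidates that are expensive to load or that closely follow the trigger."""
    expense_boost = 1.0 + load_time_ms / 100.0            # harder to load -> score higher
    proximity_boost = 1.0 / (1.0 + time_since_prev_ms)    # requested sooner after -> score higher
    return base_score * expense_boost * proximity_boost


def eviction_priority(entry):
    """During eviction, prefer to evict entries that are cheap to recompute."""
    return entry["load_time_ms"]   # evict the entry with the smallest reload cost first


# Example: pick a victim among cached entries (illustrative data).
cached = [{"id": "a", "load_time_ms": 5}, {"id": "b", "load_time_ms": 120}]
victim = min(cached, key=eviction_priority)   # -> entry "a", the cheapest to recompute
```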
In one embodiment, the cache entries are blocks of data residing in stable storage, with the object IDs being the addresses of the blocks of data on the storage. In another embodiment, the cache entries include vectors of behavior, with the object ID being the database key of a person or item the behavior relates to. Without loss of generality, the object IDs may be any logical reference to a system object.
In one embodiment, the cache value is a computed result based on the object ID. In another embodiment, the cached value is the contents of the disk block read from a storage device. Without loss of generality, the cached value of the object may be larger or smaller in storage size than the object ID, and both the object ID and cache value may be of arbitrary size depending on the embodiment. In one embodiment, a cache client 506 is included, which represents any software or application that accesses data stored in the cache 501.
In this diagram, the cache 501 includes keys and values for objects a, b, d and g. The cache client 506 initiates a cache request 508 to the cache 501 for a value of an object based on its object ID. If the cache 501 includes a value for the given object, it is returned to the cache client 506. In one embodiment, the access of an existing cache key asynchronously triggers a cache load request 509. In another embodiment, the access of an existing cache key triggers a synchronous cache load request 509. In yet another embodiment, the access of an existing cache key does not trigger a cache load request 509.
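A minimal sketch of this access path is shown below; it assumes a synchronous load on a miss and defers the loader itself to the sketch that follows the loading description. The class and method names are illustrative, not part of the embodiments.

```python
class PredictiveCache:
    """Toy cache front end: a hit returns the value, a miss hands off to the loader."""

    def __init__(self, loader):
        self._values = {}        # object ID -> cached value
        self._loader = loader    # cache loader, sketched further below

    def get(self, object_id):
        """Return the object's value, invoking the cache loader on a miss."""
        if object_id in self._values:
            return self._values[object_id]
        # Miss: the loader fetches this value and, as a side effect, predicts
        # and pre-loads other values likely to be requested next.
        return self._loader.load(self, object_id)

    def put(self, object_id, value):
        self._values[object_id] = value
```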
If the cache 501 does not include a value for the cache key, the cache 501 initiates a request to the cache loader 507. When the cache load request 509 is triggered to the cache loader 507, the cache loader 507 initiates a lookup in an object table 502 for the object ID requested. In one embodiment, the cache loader 507 initiates an object lookup 511 before performing a store lookup 513 for the object key's value. In another embodiment, the cache loader 507 initiates an object lookup 511 after performing a store lookup 513 and returning the object value.
In one embodiment, the cache loader 507 initiates an object lookup 511 in the object table 502 for the single current object ID being requested. In another embodiment, the cache loader 507 initiates an object lookup 511 in the object table 502 for the last N object IDs requested by any cache client 506. In yet another embodiment, the cache loader 507 initiates an object lookup 511 for the last N object IDs requested only by the current cache client 506.
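One way to track such a last-N window per requester is sketched below; the deque-based window and the helper names are assumptions made for illustration.

```python
from collections import defaultdict, deque

N = 8   # look-back window size; an illustrative value

# Per-client sliding window of the most recently requested object IDs.
recent_requests = defaultdict(lambda: deque(maxlen=N))


def note_request(client_id, object_id):
    """Remember that this client just requested this object."""
    recent_requests[client_id].append(object_id)


def lookup_candidates(client_id):
    """Object IDs the cache loader would look up in the object table for this client."""
    return list(recent_requests[client_id])
```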
The cache loader 507 retrieves, from the execution context table 505, the set of execution context vectors that correspond to the keys it looked up in the object table; each vector includes a set of execution context IDs that uniquely identify an execution context. For every execution context ID so retrieved, the cache loader 507 performs an execution context lookup to retrieve object IDs from the corresponding object vector 504.
The cache loader 507 then uses the contents of the execution context vectors 505 and the object vectors 504 in combination with a scoring algorithm (e.g., the correlation scoring process discussed above) to compute correlation scores for the candidate object IDs and to select the most highly scored objects.
The highly scored object IDs obtained from the scoring algorithm are then sent to the backing store 510 via a store lookup 513. Without loss of generality, this backing store 510 can be either a static data store or a computational process, with or without backing data, that generates values for the object keys. Once the object contents for the submitted object IDs are obtained from the backing store 510, they are inserted into the cache via a cache store request 514 and made available to satisfy future object value requests.
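Pulling the flow of the last few paragraphs together, the sketch below pairs with the `PredictiveCache` sketch above and reuses the in-memory tables and `correlated_items` helper sketched earlier. It is one possible arrangement of the lookups 511-514 under those stated assumptions, not the only arrangement the description permits.

```python
class CacheLoader:
    """Toy cache loader: satisfies a miss, then pre-populates correlated entries."""

    def __init__(self, object_table, context_table, backing_store, prefetch=5):
        self._object_table = object_table      # object ID -> execution context IDs
        self._context_table = context_table    # execution context ID -> object IDs
        self._backing_store = backing_store    # callable: object ID -> value (store lookup 513)
        self._prefetch = prefetch              # how many correlated objects to pre-load

    def load(self, cache, object_id):
        # Store lookup (513) for the missed key, then cache store request (514).
        value = self._backing_store(object_id)
        cache.put(object_id, value)

        # Object lookup (511): contexts that touched this object, then the other
        # objects those contexts touched, scored and sorted by correlation.
        candidates = correlated_items(
            object_id, self._object_table, self._context_table, limit=self._prefetch
        )

        # Pre-populate the cache with the most highly scored candidates.
        for candidate in candidates:
            cache.put(candidate, self._backing_store(candidate))
        return value
```

For example, `PredictiveCache(CacheLoader(object_table, context_table, backing_store)).get("a")` would return the value for object a and, as a side effect, pre-load the values of the objects most strongly correlated with a.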
The above description is included to illustrate the operation of the preferred embodiments and is not meant to limit the scope of the invention. The scope of the invention is to be limited only by the following claims. From the above discussion, many variations will be apparent to one skilled in the relevant art that would yet be encompassed by the spirit and scope of the invention.
This is a non-provisional application based on provisional application Ser. No. 60/995,896 filed on Sep. 28, 2007, and claims priority thereof.
Related U.S. Application Data: Provisional application Ser. No. 60/995,896, filed Sep. 2007 (US).