1. Field of the Invention
The field of the invention relates to methods and systems for performing scanning operations on data. More particularly, the field of the invention relates to a grid-based method and system for performing such operations.
2. Description of the Related Art
As modern enterprise environments trend towards a paperless workplace, electronic data is often created at a high rate. This electronic data takes a variety of forms which may include emails, documents, spreadsheets, images, databases, etc. Businesses have a need to effectively and securely store all of this electronic data in ways which are time and cost effective. However, there are problems that arise with these tasks due to the sheer amount of electronic data created and stored within a modern business.
For example, some electronic files which enter a business' computing environment may need to be scanned before or shortly after they are stored, and scanning a large number of files can consume substantial computing resources. One common reason to scan a file is to search for computer viruses or other malicious software code which can corrupt other data or harm a business' computing infrastructure. As the prevalence and sophistication of computer viruses and other forms of harmful software have increased, virus scanners have become an indispensable tool for businesses.
Typically, scanners are implemented either as real-time “filters” or as off-line “batch” processes. The filters, sometimes implemented as file system filter drivers, are software products that insert themselves into the I/O processing path of the operating system. Filters intercept certain types of file I/O requests and check the file contents for known virus signatures, suspicious characteristics, or suspicious patterns of activity. When such suspicious patterns are detected, the filter blocks the completion of the I/O request and takes some protective action, such as deleting or quarantining the suspect file.
As virus authors apply more sophisticated techniques, such as self-mutating or encrypted code, the filter logic required to detect such viruses becomes more and more complex, demanding more processing time and memory from the computer system to inspect the files. This can adversely affect the performance of the system and, in some cases, force a user to downgrade the level of protection in order to keep the system at a usable level of responsiveness.
Batch scanners take a different approach to scanning computer data for viruses. Rather than scanning files as certain I/O requests are made, batch scanners systematically traverse the file system in search of malicious software code. While they do not interfere with other applications directly, i.e. by increasing the latency of I/O requests, batch scanners can place a large processing load on the system. For this reason, they are typically run at night or during off-hours, when the computer system is not actively in use. In some cases, because batch scanners run intermittently, viruses may have hours or even days to propagate between scans. Filters may also suffer from this drawback as new virus types may emerge and infect the system before the filter's database of virus signatures has been updated to meet the threat.
It can be difficult to scale traditional methods of scanning computer files, whether for viruses or some other reason, to meet the needs of large file systems and active servers because both methods consume substantial resources from the host operating system. Filters can add significant latency to each I/O request, slowing the system down incrementally, whereas batch scanners can create a period of peak activity which noticeably degrades the performance of other applications.
Therefore, there is a need for a computer system capable of removing at least a portion of the computing burden associated with virus scanning. Ideally, such a system would be easily scalable to grow to meet future needs.
This specification describes different embodiments of a grid-based system for performing scanning operations on computer data. In some embodiments, the scanning operations comprise scanning files for viruses and other types of malicious software code. In other embodiments, the scanning operations may comprise scanning files for any type of content defined by a user of the system. In any case, the grid-based system can reduce the computing burden on a computing system by distributing the computing load amongst a grid of processing elements. In one embodiment, the system comprises event detectors to detect file scanning events as well as one or more distributed scanning elements to perform the actual file scanning. Some embodiments may also include a grid coordinator to monitor the grid configuration, perform necessary updates to the grid, and to take pre-determined actions based on the results of the file scans.
In another embodiment, a grid-based system for performing scanning operations on computer data can be incorporated into a multi-purpose data storage system. The data storage system can perform a suite of storage-related operations on electronic data for one or more client computers in a networked environment. The storage system can be composed of modular storage cells which function in a coordinated manner. These cells can act as building blocks to create a data storage system that is scalable and adaptable in terms of the storage capacity and functionality that it provides for a computing system.
The storage-related operations performed by the data storage system may include data backup, migration, and recovery. Many other storage-related operations are also possible. This specification describes one embodiment of the invention where such a data storage system can be adapted to include a computing grid for performing file scanning operations on data stored in the system by one or more client computers.
It may be advantageous for a data storage and backup system to perform file scanning for several reasons. One reason is to detect the presence of computer viruses or other malicious software code in any file that is stored in the system before the virus has an opportunity to spread and corrupt other data stored in the system. Another reason to perform file scanning on files as they are stored in the system may be to aid in the enforcement of administrative policies which restrict certain uses of the host computing system. For example, local administrative policy may prohibit files containing pornography, copyrighted material, or frivolous data, such as music or game files, which wastes available resources. Files may be scanned for content to identify the presence of any prohibited material so that appropriate administrative action can be taken.
File scanning can require significant computing resources. Unfortunately, due to the sheer number of files that exist in a modern computing environment, performing such file scanning on each and every file can place a tremendous computing load on the host computing system. The added computational burden from performing these operations can introduce unreasonable latency into the host computing system, severely hampering its ability to respond to other computing requests from users.
One solution to this problem, according to one embodiment of the disclosed inventions, is to integrate a computing grid within the host computing system. Such a computing grid can fulfill at least a portion of the scanning needs of the host computing system thereby freeing up the system for other uses. As discussed below, the computing grid can be dedicated to scanning files within the host computing system, whether for viruses or some other type of content, though the computing grid can be used for a wide variety of other computational purposes. Therefore, the computing grid described below will often be referred to as a scanning grid with the understanding that it could also be used for other purposes.
The scanning grid can be integrated into a wide variety of computing systems.
The scanning grid incorporated into the host computer system 100 can include one or more event detectors 196, one or more grid scanning elements 112, and one or more grid coordinators 140. An event detector 196 can be used to detect when scanning events arise which could be advantageously handled by the scanning grid rather than the host computing system. In one embodiment, an event detector 196 is programmed to detect file scanning events generated by a client computer or from some other source. File scanning events may include the creation of new files by a user of a client computer served by the network filing system, modifications to existing files, or the occurrence of any other set of circumstances which could beneficially trigger a scan event.
The event detector 196 may be implemented as a file system filter driver on a file server 120 which intercepts file creation and change requests as they are processed by the operating system of the file server 120. Event detectors of this type are illustrated in
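By way of illustration only, the following is a simplified, polling-based sketch of an event detector. The embodiment described above is a file system filter driver; the watched directory, callback, and polling interval here are assumptions made purely for the example.

```python
import os
import time

def watch_for_scan_events(root, on_event, interval=5.0):
    """Poll a directory tree and report new or modified files as scan events.

    This is only an illustrative stand-in for the filter driver described
    above; `on_event` is a hypothetical callback that receives the event.
    """
    seen = {}  # path -> last known modification time
    while True:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    mtime = os.path.getmtime(path)
                except OSError:
                    continue  # file disappeared between listing and stat
                if path not in seen:
                    on_event({"path": path, "event": "create"})
                elif mtime > seen[path]:
                    on_event({"path": path, "event": "modify"})
                seen[path] = mtime
        time.sleep(interval)
```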
After a file scan event is detected, there is a choice between scanning the new or modified file prior to it being stored or storing the file and then scanning it in due course. The advantage of the former alternative is that it prevents the introduction of a contaminated file into the file system. However, this method may also tend to lengthen the time required to store the file, increasing latency of file system I/O operations. The advantage of the latter alternative is that no additional file storage latency is introduced, but the cost is that the file system may be exposed to a file contaminated with a virus for a short time until the file can be scanned and appropriate action taken. For this reason, some embodiments of the invention may include a user configurable option to appropriately balance the tradeoff of system performance with data integrity according to the user's needs.
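A minimal sketch of such a user-configurable option follows, assuming hypothetical `scan`, `store`, and `quarantine` handlers; it simply selects between scanning before storage and storing first, then scanning.

```python
# True favors data integrity (scan before store); False favors lower I/O latency.
SCAN_BEFORE_STORE = True

def handle_file_write(path, data, scan, store, quarantine):
    """Apply the user-configured ordering of scanning and storage.

    `scan` returns True when the file is clean; `store` and `quarantine` are
    hypothetical handlers standing in for the host system's own operations.
    """
    if SCAN_BEFORE_STORE:
        if scan(path, data):        # blocks the I/O request until the scan completes
            store(path, data)
        else:
            quarantine(path, data)  # a contaminated file never enters the file system
    else:
        store(path, data)           # no added storage latency...
        if not scan(path, data):    # ...but the file is briefly exposed until scanned
            quarantine(path, data)
```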
When an event detector 196 detects a file scanning event it may then determine the identity and location of the particular file or files to be scanned. Depending on the particular implementation of the host computing system 100, the information needed to uniquely identify a file will vary. For example, a network hostname with a fully qualified file path may be necessary to uniquely identify a file. In embodiments where storage devices 115 are connected to the storage system via a Fibre Channel-based SAN (illustrated in
Once an event detector 196 has assembled sufficient information to uniquely identify and locate the file or files-to-be-scanned, it can packetize the information and generate an event message detailing the information necessary for a grid scanning element 112 to access and scan the file(s) which triggered the scanning event. A load-balancing algorithm can be performed to determine which of the plurality of grid scanning elements 112 should handle a particular scanning event (no such algorithm is required where the scanning grid is configured with a single grid scanning element 112). In one embodiment, the load-balancing algorithm can be performed by an event detector 196 to elect a single grid scanning element 112 to handle the detected scanning event. In such an embodiment, the event detector 196 may notify the specifically elected grid scanning element 112 of the scanning event. In another embodiment, the event detector 196 may notify each of the plurality of grid scanning elements 112 of the scanning event, and each individual grid scanning element 112 may then separately perform the load-balancing algorithm to determine whether it has been elected to handle the detected event. Once a grid scanning element 112 has been elected, it will handle the scanning event while the other grid scanning elements 112 generally will ignore the detected event.
In some embodiments, the event message is sent to each of the active grid scanning elements 112. (Information on the activity status of each grid scanning element can be supplied to the event detector 196 by a grid coordinator 140.) In cases where an event message is sent to a grid scanning element 112 over the LAN 110, the event detector 196 can reduce network utilization by sending a single multicast protocol message, such as a User Datagram Protocol (UDP) datagram.
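As an illustration of the multicast approach, the sketch below packetizes the file-identifying information and sends it to all active grid scanning elements as a single UDP datagram; the multicast group, port, and JSON message format are assumptions for the example, not requirements of the system.

```python
import json
import socket

GRID_MCAST_GROUP = "239.10.10.10"  # assumed multicast group for the scanning grid
GRID_MCAST_PORT = 5151             # assumed port

def send_scan_event(host, path, event_type):
    """Send one multicast UDP datagram describing a detected scan event."""
    payload = json.dumps({
        "host": host,         # network hostname needed to locate the file
        "path": path,         # fully qualified path of the file to be scanned
        "event": event_type,  # e.g. "create" or "modify"
    }).encode("utf-8")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
    try:
        sock.sendto(payload, (GRID_MCAST_GROUP, GRID_MCAST_PORT))
    finally:
        sock.close()

# Example: a newly created file on file server "fs01"
# send_scan_event("fs01.example.com", "/shares/finance/report.xls", "create")
```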
Grid scanning elements 112 can be implemented in a number of ways. For example, a grid scanning element 112 may comprise a network appliance device coupled to the LAN 110. In other embodiments, a grid scanning element 112 may comprise a software module run by a file server computer 120. Each grid scanning element 112 may include a processing unit to carry out file scanning operations, a locally attached non-volatile memory, a conventional network interface such as Ethernet, and one or more storage network interfaces (Fibre Channel, SCSI, etc.), as dictated by the configuration of the host computing system 100 and other factors. It is within the ability of one of ordinary skill in the art to determine a satisfactory configuration for the grid scanning elements 112 in various embodiments of the invention.
Once a grid scanning element 112 has been elected, it accesses the data-to-be-scanned based on information it has received from an event detector 196. Depending upon the particular host computing system into which the scanning grid is integrated, the data may be accessed through a file server computer 120, a SAN network (illustrated in
Arrows 404 of
As discussed, the load-balancing algorithm may be performed by an event detector 196 or by the grid scanning elements 112. The load-balancing algorithm can take many different forms. In one embodiment, the load-balancing algorithm may dictate that each grid scanning element 112 take its turn in a pre-determined order. For example, if the computing grid is configured with two scanning elements 112, then the first scanning element will handle the first event, the second scanning element will handle the second scanning event, the first scanning element will handle the third event, and so on.
In other embodiments, more sophisticated load-balancing algorithms can be used. For example, a mathematical hash function can be applied to the pathname, or some other unique attribute, of the file-to-be-scanned. After the hash function has been applied and a numeric result has been obtained, a modulo operation can be performed wherein the numeric hash function result is divided by the number of active grid scanning elements 112 with the remainder of the division specifying the scanning element which has been elected. One choice for a hash function is the well-known MD5 cryptographic hash function. In addition to the load-balancing algorithms for electing a grid scanning element 112 which have been disclosed, any other type of load-balancing algorithms can be implemented in accordance with various embodiments of the invention.
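A brief sketch of the hash-based election follows; the element names are illustrative. Because every component that applies the same hash to the same pathname and the same ordered list of active scanning elements obtains the same result, the election can be performed either by the event detector or independently by each scanning element.

```python
import hashlib

def elect_scanning_element(file_path, active_elements):
    """Elect one grid scanning element by hashing the file pathname with MD5
    and taking the result modulo the number of active scanning elements."""
    digest = hashlib.md5(file_path.encode("utf-8")).hexdigest()
    return active_elements[int(digest, 16) % len(active_elements)]

elements = ["scanner-a", "scanner-b", "scanner-c"]  # ordered list of active elements
print(elect_scanning_element("/shares/finance/report.xls", elements))
```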
At blocks 406 and 408 of the multi-cast implementation of
A dynamic method for handling file scan events as they arise has been disclosed. However, it may also be advantageous to perform file scanning on files within the data storage and backup system which are not new and have not been recently updated, and which therefore would not trigger a file scan event in an event detector. These files may consist of data stored prior to the time when means were available to perform the type of scanning operations discussed in this specification. It may be advantageous in some cases to systematically access and scan these older files for the same reasons it is advantageous to scan newly created or updated files. Therefore, to the extent that a grid scanning element 112 is idle, it can be programmed to systematically traverse storage devices 115 for files that have never been scanned or perhaps have not been scanned by up-to-date algorithms. In some embodiments, older files in need of being scanned can be identified by creating a database listing each file in the file system along with a flag entry indicating whether the file has ever been scanned and the date the last scan was performed. This information is then updated after the file scan is complete.
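The following is a minimal sketch of such a scan-status database, using SQLite purely for illustration; the actual storage mechanism, table layout, and cutoff criterion are not dictated by the embodiments described above.

```python
import sqlite3

conn = sqlite3.connect("scan_status.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS scan_status (
           file_path    TEXT PRIMARY KEY,
           ever_scanned INTEGER NOT NULL DEFAULT 0,  -- flag entry
           last_scanned TEXT                         -- date of the last scan
       )"""
)

def files_needing_scan(cutoff_date):
    """List files never scanned, or last scanned before the given date
    (for example, before the current scanning algorithms were installed)."""
    rows = conn.execute(
        "SELECT file_path FROM scan_status "
        "WHERE ever_scanned = 0 OR last_scanned < ?",
        (cutoff_date,),
    )
    return [row[0] for row in rows]

def record_scan(file_path, scan_date):
    """Update the flag and last-scan date after a file scan completes."""
    conn.execute(
        "INSERT OR REPLACE INTO scan_status (file_path, ever_scanned, last_scanned) "
        "VALUES (?, 1, ?)",
        (file_path, scan_date),
    )
    conn.commit()
```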
When a scanning element 112 encounters an old file that needs to be scanned, a scan event arises and a similar method can be used as was described in connection with the dynamic handling of file scan events as they arise. Namely, a load-balancing algorithm can be performed to determine which scanning element is to handle the scan event. In the case where the scanning element itself has traversed the file system and found the already existing file-to-be-scanned, it may be beneficial to automatically elect that very scanning element to perform the scan on that file. In some embodiments, however, a separate grid component (not shown) may be deployed to traverse the file system in search of existing files that need to be scanned. In these embodiments, the same sort of scanning element election algorithm discussed above could be employed.
As illustrated in
The following are some exemplary functions which may be performed by the grid coordinator 140: monitoring the activity status of grid scanning elements 112 and event detectors 196 and notifying active grid components of any change to the configuration of the grid; receiving scan reports from the grid scanning elements 112 and processing them according to user preferences; and distributing configuration changes and software updates to components of the grid as needed. The grid coordinator 140 can also be programmed to perform other functions as needed. It should be appreciated that the term “grid coordinator” can also apply to a set of discrete components which implement some or all of these tasks.
The grid coordinator 140 may use any combination of multi-cast messages and individual transmissions to carry out its functions. The method of communication employed by the grid coordinator 140 will likely vary according to the configuration of the computing grid and the purpose for which it has been deployed. However, it is well within the ability of one of ordinary skill in the art to modify and adapt the concepts disclosed in this specification without departing from the scope of the described inventions.
One task that is performed by a grid coordinator 140 in certain embodiments of the invention is monitoring the activity status of each event detector 196 or grid scanning element 112 to detect changes to the scanning grid architecture. For example, in some instances the computational load of scanning files may increase over time in conjunction with changes or growth in utilization of a host computing system. In these cases, additional scanning elements 112 can be added as needed to keep up with increasing load demands of the host computing system. Whenever a grid scanning element 112 is added to or removed from the grid, a grid coordinator 140 may notify the other grid components and make necessary adjustments for the successful continued operation of the grid. One instance of an adjustment that may be necessary when a new grid scanning element 112 is added to the grid is that the load-balancing algorithm may need to be adjusted to account for the presence of the new scanning element 112.
In one embodiment, each of the grid components, including event detectors 196 and grid scanning elements 112, can be configured to report their status to the grid coordinator. Operational status reports can be sent by grid components periodically at specified intervals. This may take the form of a simple “heartbeat” signal which a grid component sends periodically to make the grid coordinator aware that the component is still operational. In other embodiments a grid component may only send a status report when a change in operational status is anticipated.
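A simple sketch of such a heartbeat follows; the coordinator address, interval, and message format are assumptions for the example.

```python
import json
import socket
import time

COORDINATOR_ADDR = ("grid-coordinator.example.com", 6000)  # assumed address
HEARTBEAT_INTERVAL = 30  # assumed seconds between status reports

def heartbeat_loop(component_id):
    """Periodically send a small datagram so the grid coordinator knows this
    component (event detector or grid scanning element) is still operational."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        message = json.dumps({
            "component": component_id,
            "status": "alive",
            "time": time.time(),
        }).encode("utf-8")
        sock.sendto(message, COORDINATOR_ADDR)
        time.sleep(HEARTBEAT_INTERVAL)
```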
In other embodiments, the grid coordinator 140 itself may poll grid components to determine their operational status. The grid coordinator 140 may transmit periodic requests for status reports from grid components, or it may request status reports according to some other schedule.
When the grid coordinator 140 detects a change in the operational status of any grid component, whether by that component failing to send a heartbeat signal or failing to respond to a status request, it may transmit a notification of the status change to the other grid components. This information can be used by the various grid components to update the scanning element 112 election procedure or for any other reason for which that information may be of use. In some embodiments, the grid coordinator may use a multi-cast protocol to transmit the notification of the status change, while in other embodiments individual transmissions to the remaining grid components may be preferable.
The grid coordinator 140 can also receive reports from the grid scanning elements 112 regarding the outcome of a scan that has been performed. In embodiments where scanning elements are deployed for computer virus scanning, a report can be sent to the grid coordinator detailing that the scan was completed, whether or not a virus was found, etc. In embodiments where scanning elements are deployed to search for file content violations of local administrative policy, a report can be sent detailing whether or not prohibited file content was found. In embodiments where the grid scanning elements are deployed to serve some other purpose, any other kind of appropriate report can be generated by the scanning elements 112 and sent to the grid coordinator 140.
The grid coordinator 140 may then take some course of action based on the scan report. In some cases the course of action may be pre-determined and user-defined. In this type of embodiment, the grid coordinator 140 may include a policy database. The policy database may be configurable by a user and may contain a list of report results, such as “virus detected” or “pornography detected,” as well as corresponding actions to be performed when the associated scan report is received. In other embodiments, the grid coordinator 140 may be endowed with learning algorithms to independently determine what course of action to take based on its past experience or based on a set of training data that has been provided to guide its actions.
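A minimal sketch of such a policy database follows; the result strings and action names are illustrative assumptions, not a fixed schema.

```python
# User-configurable mapping from scan-report results to pre-determined actions.
POLICY_DATABASE = {
    "virus detected":       "quarantine_file",
    "pornography detected": "notify_administrator",
    "game file detected":   "notify_administrator",
    "clean":                "no_action",
}

def action_for_report(report):
    """Look up the configured action for a scan report; unrecognized results
    fall back to notifying an administrator."""
    return POLICY_DATABASE.get(report.get("result", ""), "notify_administrator")

print(action_for_report({"file": "/shares/tmp/setup.exe", "result": "virus detected"}))
```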
A non-comprehensive list of actions that could be taken by the grid coordinator 140 based on a scan report includes deleting a virus-contaminated file, quarantining the file, or notifying an administrator via email of a possible violation of administrative policy such as detected pornography, game, or music files.
On occasion, a user may wish to update the software associated with a grid component, e.g. change the algorithms used by event detectors 196 to detect scanning events or the algorithms used to elect grid scanning elements. A user may also wish to change the configuration of the grid. The grid coordinator 140 may serve as a software and configuration update service for the rest of the grid components in these situations. The user may submit these and other changes to the grid coordinator 140 via an included user interface. The user interface may consist of any type of interface known in the art. In one embodiment, the user interface is implemented by a web server packaged with the coordinating service. This type of interface can be useful because it allows a remote user to re-configure and update the grid.
The grid coordinator 140 can perform these updates periodically or according to any other schedule. It can transmit updates via multi-cast or individual transmissions as appropriate. The grid coordinator 140 may also monitor the progress and completion of installing the updates.
While embodiments of the invention have been discussed primarily in the context of the host computing system illustrated in
Storage Area Networks (SAN) and Network Attached Storage (NAS) are known in the art, and the components of the scanning grid operate in these systems in a manner similar to that described above, primarily in the context of the host computing system of
Various embodiments of scanning grids incorporated into host computing systems have been disclosed. According to these embodiments, the computational load from file scanning can be shifted from the host computing system to the grid. There is a tradeoff, however, between host-based and grid-based scanning. Using the host computing system to perform a portion of the file scanning may increase latency for other operations on the host computing system, whereas off-loading the scanning to the grid requires capital expenditures to purchase grid components. Therefore, some embodiments of the disclosed inventions may provide user-configurable options to balance this performance tradeoff by allocating file scanning tasks between the host computing system and the grid as desired.
Scanning grids, according to various embodiments of the invention, can also be included in several types of multi-purpose data storage systems that perform a suite of storage-related operations on electronic data for one or more client computers in a networked environment. In one embodiment, the storage system can be composed of modular storage cells which function in a coordinated manner. These cells can act as building blocks to create a data storage system that is scalable and adaptable in terms of the storage capacity and functionality that it provides for a host computing system. The storage-related operations performed by the data storage system may include data backup, migration, and recovery.
Storage cells of this type can be combined and programmed to function together in many different configurations to suit the particular data storage needs of a given set of users. Each storage cell 550 may participate in various storage-related activities, such as backup, data migration, quick data recovery, etc. In this way, storage cells can be used as modular building blocks to create scalable data storage and backup systems which can grow or shrink in storage-related functionality and capacity as needs dictate. This type of system is exemplified by the CommVault QiNetix system and the CommVault GALAXY backup system, available from CommVault Systems, Inc. of Oceanport, N.J. Similar systems are further described in U.S. patent application Ser. Nos. 09/610,738 and 11/120,619, which are hereby incorporated by reference in their entirety.
As shown, the storage cell 550 may generally comprise a storage manager 500 to direct various aspects of data storage operations and to coordinate such operations with other storage cells. The storage cell 550 may also comprise a data agent 595 to control storage and backup operations for a client computer 585 and a media agent 505 to interface with a physical storage device 515. Each of these components may be implemented solely as computer hardware or as software operating on computer hardware.
Generally speaking, the storage manager 500 may be a software module or other application that coordinates and controls storage operations performed by the storage operation cell 550. The storage manager 500 may communicate with some or all elements of the storage operation cell 550 including client computers 585, data agents 595, media agents 505, and storage devices 515, to initiate and manage system backups, migrations, and data recovery. If the storage cell 550 is simply one cell out of a number of storage cells which have been combined to create a larger data storage and backup system, then the storage manager 500 may also communicate with other storage cells to coordinate data storage and backup operations in the system as a whole.
In one embodiment, the data agent 595 is a software module or part of a software module that is generally responsible for archiving, migrating, and recovering data of a client computer 585 stored in an information store 590 or other memory location. Each client computer 585 may have at least one data agent 595, and the system can support multiple client computers 585. In some embodiments, data agents 595 may be distributed between a client 585 and the storage manager 500 (and any other intermediate components, not shown), or may be deployed from a remote location, or their functions may be approximated by a remote process that performs some or all of the functions of the data agent 595.
Embodiments of the disclosed inventions may employ multiple data agents 595 each of which may backup, migrate, and recover data associated with a different application. For example, different individual data agents 595 may be designed to handle Microsoft Exchange data, Lotus Notes data, Microsoft Windows file system data, Microsoft Active Directory Objects data, and other types of data known in the art. Other embodiments may employ one or more generic data agents 595 that can handle and process multiple data types rather than using the specialized data agents described above.
Generally speaking, a media agent 505 may be implemented as a software module that conveys data, as directed by a storage manager 500, between a client computer 585 and one or more storage devices 515 such as a tape library, a magnetic media storage device, an optical media storage device, or any other suitable storage device. The media agent 505 controls the actual physical-level data storage or retrieval to and from a storage device 515. Media agents 505 may communicate with a storage device 515 via a suitable communications path such as a SCSI or Fibre Channel communications link. In some embodiments, the storage device 515 may be communicatively coupled to a media agent 505 via a SAN or a NAS system, or a combination of the two.
It should be appreciated that any given storage cell in a modular data storage and backup system, such as the one described, may comprise different combinations of hardware and software components besides the particular configuration illustrated in
Preferred embodiments of the claimed inventions have been described in connection with the accompanying drawings. While only a few preferred embodiments have been explicitly described, other embodiments will become apparent to those of ordinary skill in the art of the claimed inventions based on this disclosure. Therefore, the scope of the disclosed inventions is intended to be defined by reference to the appended claims and not simply with regard to the explicitly described embodiments of the inventions.