1. Field of the Invention
The present invention generally relates to parsing of documents such as an XML™ document and, more particularly, to parsing a document or other logical sequence of network data packets for detecting potential intrusion or an attack on a node of a network.
2. Description of the Prior Art
The field of digital communications between computers and the linking of computers into networks has developed rapidly in recent years, similar, in many ways, to the proliferation of personal computers of a few years earlier. This increase in interconnectivity and the possibility of remote processing has greatly increased the effective capability and functionality of individual computers in such networked systems. Nevertheless, the variety of uses of individual computers and systems, the preferences of their users and the state of the art when computers are placed into service have resulted in a substantial degree of variety of capabilities and configurations of individual machines and their operating systems, collectively referred to as “platforms”, which are generally incompatible with each other to some degree, particularly at the level of operating system and programming language.
This incompatibility of platform characteristics, together with the simultaneous requirement for the capability of communication and remote processing and a sufficient degree of compatibility to support it, has resulted in the development of object oriented programming (which accommodates the concept of assembling an application as well as data as a group of more or less generalized modules through a referencing system of entities, attributes and relationships) and a number of programming languages to embody it. Extensible Markup Language™ (XML™) is one such language; it has come into widespread use and can be transmitted as a document over a network of arbitrary construction and architecture.
In such a language, certain character strings correspond to certain commands or identifications, including special characters and other important data (collectively referred to as control words), which allow data or operations to, in effect, identify themselves so that they may thereafter be treated as “objects” such that associated data and commands can be translated into the appropriate formats and commands of different applications in different languages in order to engender a degree of compatibility of respective connected platforms sufficient to support the desired processing at a given machine. The detection of these character strings is performed by an operation known as parsing, similar to the more conventional usage of resolving the syntax of an expression, such as a sentence, into its component parts and describing them grammatically.
When parsing an XML™ document, a large portion and possibly a majority of the central processor unit (CPU) execution time is spent traversing the document searching for control words, special characters and other important data as defined for the particular XML™ standard being processed. This is typically done by software which queries each character and determines if it belongs to the predefined set of strings of interest, for example, a set of character strings comprising the following “<command>”, “<data=dataword>”, “<endcommand>”, etc. If any of the target strings are detected, a token is saved with a pointer to the location in the document for the start of the token and the length of the token. These tokens are accumulated until the entire document has been parsed.
The conventional approach is to implement a table-based finite state machine (FSM) to search for these strings of interest. The state table resides in memory and is designed to search for the specific patterns in the document. The current state is used as the base address into the state table and the ASCII representation of the input character is an index into the table. For example, assume the state machine is in state 0 (zero) and the first input character is ASCII value 02, the absolute address for the state entry would be the sum/concatenation of the base address (state 0) and the index/ASCII character (02). The FSM begins with the CPU fetching the first character of the input document from memory. The CPU then constructs the absolute address in the state table in memory corresponding to the initialized/current state and the input character and then fetches the state data from the state table. Based on the state data that is returned, the CPU updates the current state to the new value, if different (indicating that the character corresponds to the first character of a string of interest) and performs any other action indicated in the state data (e.g. issuing a token or an interrupt if the single character is a special character or if the current character is found, upon a further repetition of the foregoing, to be the last character of a string of interest).
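By way of illustration only, the table-based lookup described above may be sketched in software as follows; the single target string (“<c>”) and the table contents are hypothetical and serve only to show the addressing scheme in which the current state selects a row and the ASCII value of the input character indexes into it.

```python
# Minimal software sketch of the conventional table-driven FSM lookup:
# the current state selects a 256-entry row of the state table and the
# ASCII value of the input character indexes into that row. The target
# string ("<c>") and table contents are hypothetical.

NUM_CHARS = 256

def make_row(default, overrides):
    """Build one 256-entry row; overrides maps characters to entries."""
    row = [default] * NUM_CHARS
    for ch, entry in overrides.items():
        row[ord(ch)] = entry
    return row

# Each entry is (next_state, action); an action of None means "no action".
state_table = [
    make_row((0, None), {"<": (1, None)}),           # state 0: await '<'
    make_row((0, None), {"c": (2, None)}),           # state 1: saw '<'
    make_row((0, None), {">": (0, "store_token")}),  # state 2: saw '<c'
]

def step(state, char):
    # Absolute address = base address (state) + index (ASCII character).
    return state_table[state][ord(char)]

next_state, action = step(0, "<")   # first character of the string of interest
```

A mismatched character simply returns the default entry, which corresponds to the “stay in state 0” and “go to state 0” behavior of the state table entries.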
The above process is repeated and the state is changed as successive characters of a string of interest are found. That is, if the initial character is of interest as being the initial character of a string of interest, the state of the FSM can be advanced to a new state (e.g. from initial state 0 to state 1). If the character is not of interest, the state machine would (generally) remain the same by specifying the same state (e.g. state 0) or not commanding a state update in the state table entry that is returned from the state table address. Possible actions include, but are not limited to, setting interrupts, storing tokens and updating pointers. The process is then repeated with the following character. It should be noted that while a string of interest is being followed and the FSM is in a state other than state 0 (or other state indicating that a string of interest has not yet been found or is not currently being followed), a character may be found which is not consistent with the current string but is an initial character of another string of interest. In such a case, state table entries would indicate appropriate action to indicate and identify the string fragment or portion previously being followed and to follow the possible new string of interest until the new string is completely identified or found not to be a string of interest. In other words, strings of interest may be nested and the state machine must be able to detect a string of interest within another string of interest, and so on. This may require the CPU to traverse portions of the XML™ document numerous times to completely parse the XML™ document.
The entire XML™ or other language document is parsed character-by-character in the above-described manner. As potential target strings are recognized, the FSM steps through various states character-by-character until a string of interest is fully identified or a character inconsistent with a possible string of interest is encountered (e.g. when the string is completed/fully matched or a character deviates from a target string). In the latter case, no action is generally taken other than returning to the initial state or a state corresponding to the detection of an initial character of another target string. In the former case, the token is stored into memory along with the starting address in the input document and the length of the token. When the parsing is completed, all objects will have been identified and processing in accordance with the local or given platform can be started.
Since the search is generally conducted for multiple strings of interest, the state table can provide multiple transitions from any given state. This approach allows the current character to be analyzed for multiple target strings at the same time while conveniently accommodating nested strings.
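By way of further illustration, a complete character-by-character scan for several strings of interest, including the re-traversal after a mismatch noted above, may be sketched as follows; the target strings, the token format and the table-construction helper are hypothetical.

```python
# Sketch of scanning a document for multiple target strings with one
# state machine, accumulating (start, length, matched-string) tokens.
# Targets sharing an initial character (here '<') share early states.

def build_fsm(targets):
    """Build {(state, char): next_state} transitions and accepting states."""
    transitions, accepting = {}, {}
    states = {"": 0}   # map each matched prefix to a state number
    for t in targets:
        for i in range(1, len(t) + 1):
            prefix = t[:i]
            if prefix not in states:
                states[prefix] = len(states)
            transitions[(states[t[:i - 1]], t[i - 1])] = states[prefix]
        accepting[states[t]] = t
    return transitions, accepting

def parse(document, targets):
    transitions, accepting = build_fsm(targets)
    tokens, i = [], 0
    while i < len(document):
        state, start, j = 0, i, i
        while j < len(document) and (state, document[j]) in transitions:
            state = transitions[(state, document[j])]
            j += 1
            if state in accepting:     # full string of interest matched
                tokens.append((start, j - start, accepting[state]))
        # On a mismatch, resume one character past the attempted start,
        # re-traversing part of the document as noted in the text.
        i = start + 1
    return tokens

tokens = parse("x<a><b>y", ["<a>", "<b>"])
```

Note that both targets begin with “<”, so both are followed through the same state after the first character, illustrating how one table entry can serve multiple target strings simultaneously.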
It can be seen from the foregoing that the parsing of a document such as an XML™ document requires many repetitions and many memory accesses for each repetition. Therefore, processing time on a general purpose CPU is necessarily substantial. A further major complexity of handling the multiple strings lies in the generation of the large state tables, although that generation is handled off-line from the real-time packet processing. The parsing itself, however, requires a large number of CPU cycles to fetch the input character data, fetch the state data and update the various pointers and state addresses for each character in the document. Thus, it is relatively common for the parsing of a document such as an XML™ document to fully pre-empt other processing on the CPU or platform and to substantially delay the processing requested.
It has been recognized in the art that, through programming, general-purpose hardware can be made to emulate the function of special purpose hardware and that special purpose data processing hardware will often function more rapidly than programmed general purpose hardware even if the structure and program precisely correspond to each other since there is less overhead involved in managing and controlling special purpose hardware. Nevertheless, the hardware resources required for certain processing may be prohibitively large for special purpose hardware, particularly where the processing speed gain may be marginal. Further, special purpose hardware necessarily has functional limitations and providing sufficient flexibility for certain applications such as providing the capability of searching for an arbitrary number of arbitrary combinations of characters may also be prohibitive. Thus, to be feasible, special purpose hardware must provide a large gain in processing speed while providing very substantial hardware economy; requirements which are increasingly difficult to accommodate simultaneously as increasing amounts of functional flexibility or programmability are needed in the processing function required.
In this regard, the issue of system security is also raised by both interconnectability and the amount of processing time required for parsing a document such as an XML™ document. Any process which requires an extreme amount of processing time at relatively high priority is, in some respects, similar to a denial-of-service (DOS) attack on the system or a node thereof, or can be a tool that can be used in such an attack.
DOS attacks frequently present frivolous or malformed requests for service to a system for the purpose of maliciously consuming and eventually overloading available resources. Proper configuration of hardware accelerators can greatly reduce or eliminate the potential to overload available resources. In addition, systems often fail or expose security weaknesses when overloaded. Thus, eliminating overloads is an important security consideration.
Further, it is possible for some processing to begin and some commands to be executed before parsing is completed since the state table must be able to contain CPU commands at basic levels which are difficult or impossible to secure without severe compromise of system performance. In short, the potential for compromise of security would be necessarily reduced by reduction of processing time for processes such as XML™ parsing but no technique for significantly reducing the processing time for such parsing has been available.
Many security systems rely on the ability to detect an attempted security breach at a very early stage, and a security breach may be difficult or impossible to interrupt quickly or through programmed intervention, once begun. For example, a highly secure system has been proposed and is disclosed in U.S. patent applications Ser. Nos. 09/973,769 and 09/973,776, both assigned to the assignee of the present application. These applications disclose a system having two levels of internodal communications, one at very high speed, by which a node at which a possible attack or intrusion is detected can be compartmentalized and then automatically repaired, if necessary, before reconnection to the network. Acceleration of parsing therefore supports early response to a potential attack and is particularly advantageous in a system such as that described in the above-incorporated patent applications, since an appropriate control of the network can be initiated as an incident of parsing and can thus be initiated at an earlier time if parsing can be significantly accelerated. Proper network control, initiated in a timely fashion in response to a detection alert, can effect intrusion prevention in addition to intrusion detection.
The present invention provides a hardware parser accelerator which provides extreme acceleration of parsing of documents for detection of signatures of a possible intrusion, attack or other security breach in a networked computer system at speeds which accommodate network transmission packet speeds for potentially real-time intrusion detection and prevention actions.
In order to accomplish this and other objects of the invention, an intrusion detection system, possibly implemented within a document parser, is provided, comprising a character buffer for a plurality of bytes of a document, a state table addressable in accordance with a byte of a document and a state to access at least one of an interrupt or exception and next state data from the state table, a register for storing next state data, an adder for combining contents of the register with a subsequent byte of a document to form a further address into the state memory, and a bus for communicating the interrupt or exception to a host CPU.
The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:
Referring now to the drawings, and more particularly to
It should be noted that an XML™ document is used herein as an example of one type of logical data sequence which can be processed using an accelerator in accordance with the invention. Other logical data sequences can also be constructed from network data packet contents, such as user terminal command strings intended for execution by shared server computers. Such command strings are frequently generated by malicious users and sent to shared server computers as part of a longer term intrusion attempt. The accelerator in accordance with the invention is suitable for processing many such logical data sequences.
It will also be helpful to observe that many entries in the portion of the state table illustrated in
It should be appreciated, however, that the intrusion detection system is intended to be applicable to any type of digital file and is not limited to text files or particular languages which may be used to represent particular applications or data structures at or exceeding packet transmission speeds which can accommodate real-time digital transmissions over networks through which security attacks are generally perpetrated. Thus, the invention may be implemented as an arrangement for providing only intrusion detection; in which case substantially optimum performance at lowest cost would be expected. However, the goal of performing intrusion detection at signal transmission speeds can also be achieved as a special mode of operation of a parser accelerator in which some operations are omitted to provide further acceleration, possibly augmented by alternative state table memory arrangements as will be described below and which is presently considered to be preferred. Therefore, the invention will be described in the context of a parser accelerator in the interest of completeness and to convey a more thorough understanding of the scope of advantages provided by the invention even though that context is more complex than necessary for the invention to function as intended for real-time, high-speed intrusion detection.
In
It will be helpful to note several aspects of the state table entries shown, particularly in conveying an understanding of how even the small portion of the exemplary state table illustrated in
1. In the state table shown, only two entries in the row for state 0 include an entry other than “stay in state 0”, which maintains the initial state when the character being tested does not match the initial character of any string of interest. The single entry which provides for progress to state 1 corresponds to a special case where all strings of interest begin with the same character. Any other character that would provide progress to another state would generally, but not necessarily, progress to a state other than state 1, but a further reference to the same state that could be reached through another character may be useful to, for example, detect nested strings. The inclusion of a command (e.g. “special interrupt”) with “stay in state 0”, illustrated at {state 0, FD}, would be used to detect and operate on special single characters.
2. In states above state 0, an entry of “stay in state n” provides for the state to be maintained through potentially long runs of one or more characters such as might be encountered, for example, in numerical arguments of commands, as is commonly encountered. The invention provides special handling of this type of character string to provide enhanced acceleration, as will be discussed in detail below.
3. In states above state 0, an entry of “go to state 0” signifies detection of a character which distinguishes the string from any string of interest, regardless of how many matching characters have previously been detected and returns the parsing process to the initial/default state to begin searching for another string of interest. (For this reason, the “go to state 0” entry will generally be, by far, the most frequent or numerous entry in the state table.) Returning to state 0 may require the parsing operation to return to a character in the document subsequent to the character which began the string being followed at the time the distinguishing character was detected.
4. An entry including a command with “go to state 0” indicates completion of detection of a complete string of interest. In general, the command will be to store a token (with an address and length of the token) which thereafter allows the string to be treated as an object. However, a command with “go to state n” provides for launching of an operation at an intermediate point while continuing to follow a string which could potentially match a string of interest.
5. To avoid ambiguity at any point where the search branches between two strings of interest (e.g. strings having n−1 identical initial characters but different n-th characters, or different initial characters), it is generally necessary to proceed to different (e.g. non-consecutive) states, as illustrated at {state 1, 01} and {state 1, FD}. Complete identification of a string of arbitrary length n will require n−1 states except for the special circumstances of included strings of special characters and strings of interest which have common initial characters. For these reasons, the number of states and rows of the state table must usually be extremely large, even for relatively modest numbers of strings of interest.
6. Conversely to the previous paragraph, most states can be fully characterized by one or two unique entries and a default “go to state 0”. This feature of the state table of
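The sparsity just noted, in which most rows above state 0 consist of a default “go to state 0” entry plus only one or two unique entries, suggests a compact row representation; the following sketch uses hypothetical contents.

```python
# Illustration of state-table sparsity: rows above state 0 can be stored
# as a default next state plus a short exception list rather than as a
# full 256-entry row. Contents are hypothetical.

def sparse_row(exceptions, default=0):
    """Represent a row as (default_next_state, {char: next_state})."""
    return (default, exceptions)

def lookup(row, char):
    default, exceptions = row
    return exceptions.get(char, default)

# Hypothetical state 1, reached after '<'; only 'a' and 'b' continue
# any string of interest, so every other character returns to state 0.
row_state_1 = sparse_row({"a": 2, "b": 4})

next_for_a = lookup(row_state_1, "a")   # continues toward a match
next_for_z = lookup(row_state_1, "z")   # default: back to state 0
```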
As alluded to above, the parsing operation, as conventionally performed, begins with the system in a given default/initial state, depicted in
A high-level schematic block diagram of the parser accelerator 100 in accordance with the invention is illustrated in
As a general overview, the document such as an XML™ document is stored externally in DRAM 120, which is indexed by registers 112, 114, and transferred by, preferably, thirty-two bit words to an input buffer 130 which serves as a multiplexer for the pipelines. Each pipeline includes a copy of a character palette 140, state table 160 and a next state palette 170; each accommodating a compressed form of part of the state table. The output of the next state palette 170 contains both the next state address portion of the address into entries in the state table 160 and the token value to be stored, if any. Operations in the character palette 140 and the next state palette 170 are simple memory accesses into high speed internal SRAM which may be performed in parallel with each other as well as in parallel with simple memory accesses into the high speed external DRAM forming the state table 160 (which may also be implemented as a cache). Therefore, only a relatively few clock cycles of the CPU initially controlling these hardware elements (but which, once started, can function autonomously with only occasional CPU memory operation calls to refresh the document data and to store tokens) are required for an evaluation of each character in the document. The basic acceleration gain is the reduction of the sum of all memory operation durations per character in the CPU plus the CPU overhead to the duration of a single autonomously performed memory operation in high-speed SRAM or DRAM.
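As a purely functional model of the three lookup stages just described (character palette, compressed state table and next state palette), the following sketch may be helpful; all table contents are hypothetical and the hardware parallelism is abstracted away.

```python
# Functional model of the pipeline stages: the character palette maps a
# raw byte to a compact column index, the state table entry selected by
# (current base, column) is looked up, and the next state palette expands
# that entry into a next-state base address plus an optional token.
# All table contents are hypothetical.

character_palette = {ord("<"): 1, ord(">"): 2}   # unlisted bytes -> column 0
state_table = {
    (0, 1): 7,      # (base 0, column for '<') -> palette entry 7
    (100, 2): 9,    # (base 100, column for '>') -> palette entry 9
}
next_state_palette = {
    7: (100, None),            # advance to base 100, no token
    9: (0, "store_token"),     # string complete: store token, reset
}

def process_byte(base, byte):
    column = character_palette.get(byte, 0)          # stage 1: SRAM lookup
    entry = state_table.get((base, column), 0)       # stage 2: base + column
    return next_state_palette.get(entry, (0, None))  # stage 3: SRAM lookup
```

In this model each stage is a single memory access, which is the property that allows the stages to be duplicated and run in parallel pipelines.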
It should be understood that the characterization of memory structures herein as “external” is intended to connote a configuration of memories 120, 140 which is preferred by the inventors at the present time in view of the amount of storage required and access from the hardware parser accelerator and/or the host CPU. In other words, it may be advantageous for handling of tokens and some other operations to provide an architecture of the parser accelerator in accordance with the invention which facilitates sharing of the memory, or at least access to the memory, by the host CPU as well as the hardware accelerator. No other connotation is intended, and a wide variety of hardware alternatives such as synchronous DRAM (SDRAM) will be recognized as suitable by those skilled in the art in view of this discussion.
Referring now to
Referring now to
More specifically, the state table control register 162 stores and provides the length of each entry in the state table 160 of
Referring now to
As shown in
Thus, it is seen that the use of a character palette, a state memory in an abbreviated form and a next state memory divides the conventional state memory operations into separate stages, each of which can be performed extremely rapidly with relatively little high speed memory, which can thus be duplicated to form parallel pipelines operating on respective characters of a document in turn and in parallel with other operations and storage of tokens. Therefore, the parsing process can be greatly accelerated relative to even a dedicated processor which must perform all of these functions in sequence before processing of another character can be started.
In summary, the accelerator has access to the program memory of the host CPU where the character data (sometimes referred to as packet data, connoting transmission over a network) and state table are located. The accelerator 100 is under control of the main CPU via memory-mapped registers. The accelerator can interrupt the main CPU to indicate exceptions, alarms and terminations, which, in the context of intrusion detection, may be referred to generically as a pattern matching alert, an intrusion event alert or the like. When parsing is to be started, pointers (112, 114) are set to the beginning and end of the input buffer 130 data to be analyzed, and the state table to be used (as indicated by base address 182) and other control information (e.g. 142) are set up within the accelerator.
To initiate operation of the accelerator, the CPU issues a command to the accelerator which, in response, fetches a first thirty-two bit word of data from the CPU program memory (e.g. 120 or a cache) and places it into the input buffer 130 from which the first byte/ASCII character is selected. The accelerator fetches the state information corresponding to the input character (i.e.
The accelerator next selects the next byte to be analyzed from input buffer 130 and repeats the process with the new state information which will already be available to adder 150. The operation or token information storage can be performed concurrently. This continues until all four characters of the input word have been analyzed. Then (or concurrently with the analysis of the fourth character, by prefetching) pointers 112, 114 are compared to determine if the end of the document buffer 120 has been reached and, if so, an interrupt is sent back to the CPU. If not, a new word is fetched, the pointer 112 is updated and the processing is repeated.
Since the pointers and counters are implemented in dedicated hardware they can be updated in parallel rather than serially as would be required if implemented in software. This reduces the time to analyze a byte of data to the time required to fetch the character from a local input buffer, generate the state table address from high speed local character palette memory, fetch the corresponding state table entry from memory and to fetch the next state information, again from local high speed memory. Some of these operations can be performed concurrently in separate parallel pipelines and other operations specified in the state table information (partially or entirely provided through the next state palette) may be carried out while analysis of further characters continues.
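The outer fetch-and-analyze loop just described may be sketched as follows; the transition function is a stub and the four-byte word handling is an illustrative assumption, not the hardware implementation.

```python
# Sketch of the accelerator's outer loop: fetch 32-bit (four-byte) words
# from the document buffer, analyze each byte in turn, and stop when the
# position pointer reaches the end pointer (the role played by comparing
# registers 112 and 114). The step() transition function is a stub.

def run(document: bytes, step, state=0):
    pos, end = 0, len(document)       # start and end pointers
    while pos < end:
        word = document[pos:pos + 4]  # the final word may be short
        for byte in word:
            state = step(state, byte) # per-byte FSM evaluation
        pos += len(word)              # update the buffer pointer
    return state

# Trivial stub: count the bytes processed.
processed = run(b"abcdabcd", lambda state, byte: state + 1)
```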
Thus, it is clearly seen that the invention provides substantial acceleration of the parsing process through a small and economical amount of dedicated hardware. While the parser accelerator can interrupt the CPU, the processing operation is entirely removed therefrom after the initial command to the parser accelerator. However, since substantial time is required for processing of tokens even when performed concurrently with other parsing operations, the acceleration provided as described above is not optimal for detection of a possible intrusion or security breach, particularly in view of the fact that operations which are difficult or impossible to secure can be initiated by the issuance of commands in the course of the parsing process.
Referring now to
All functional elements of the arrangement of
Specifically, the input buffer 120 and the input word buffer 130, together with the address registers 112, 114, adder 150 and state table base address register 182, are identical to the corresponding elements described above and function in an identical manner to access state table 160. The difference resides principally in the omission of the character palette and the next state palette memories and in the data in the state table and the internal format thereof. The state table is essentially of the same width, 256 characters, as in the embodiment of
As in the embodiment of
Thus, the characters are tested in sequence and no updating of any registers other than registers 112, 114 is required until a character is encountered which is the first character of a string of interest. That is, until such a detection, even the state is unchanged and the next state is not updated in register 180. Therefore, the document can be screened for initial characters with extreme speed. When an initial character of a string of interest is encountered the next state data is read from the state table, register 180 is updated, new state table data is loaded into the state memory if not already present and the next character is processed in the same manner. The state table memory is much smaller than for the XML™ parser described above. This allows for the state table memory to be implemented on board the chip with other logic and elements of
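The screening behavior just described, in which no state or register updates occur until a byte matches the first character of some signature, may be sketched as follows; the signatures shown are hypothetical examples.

```python
# Sketch of initial-character screening for intrusion signatures: the
# scan remains in the initial state until a byte matches the first
# character of some signature, and only then engages full matching.
# The signatures are hypothetical.

signatures = [b"rm -rf", b"/etc/passwd"]
first_bytes = {s[0] for s in signatures}

def screen(document: bytes):
    """Return offsets at which a full signature match begins."""
    hits = []
    for i, byte in enumerate(document):
        if byte not in first_bytes:   # common case: no state change
            continue
        for sig in signatures:        # candidate first byte: match fully
            if document.startswith(sig, i):
                hits.append(i)
    return hits

offsets = screen(b"GET /etc/passwd HTTP/1.0")
```

Because most bytes of a typical document fail the first-byte test, the scan proceeds at nearly the rate of the fast common-case path.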
While the architecture of the system including the invention as embodied as shown in either
In view of the foregoing, it is seen that the invention provides for extremely rapid screening of a document for signatures which may indicate the possibility of an attempted attack within the context and environment of a hardware parser accelerator which significantly reduces time for parsing of a document such as an XML™ document to a fraction of the time which has been required prior to the present invention. The intrusion detection parser of the present invention requires no additional elements or hardware beyond that of the parser accelerator in accordance with the invention and can issue interrupts and/or exceptions prior to any intrusion process becoming executable.
While the invention has been described in terms of a single preferred embodiment, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the appended claims.
This application claims benefit of priority of U.S. Provisional Patent Application Ser. No. 60/421,773, filed Oct. 29, 2002, the entire contents of which are hereby fully incorporated by reference. Further, this application is related to U.S. patent application Ser. No. 10/334,086, published as U.S. Patent Application Publication No. 2004/0083466 A1, and U.S. patent application Ser. No. 10/331,315, published as U.S. Patent Application Publication No. 2004/0083221 A1 (corresponding to U.S. Provisional Patent Applications 60/421,774 and 60/421,775, respectively), which are assigned to the assignee of this invention and also fully incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
4279034 | Baxter | Jul 1981 | A |
4527270 | Sweeton | Jul 1985 | A |
4556972 | Chan et al. | Dec 1985 | A |
4622546 | Sfarti et al. | Nov 1986 | A |
4879716 | McNally et al. | Nov 1989 | A |
5003531 | Farinholt et al. | Mar 1991 | A |
5027342 | Boulton et al. | Jun 1991 | A |
5193192 | Seberger | Mar 1993 | A |
5214778 | Glider et al. | May 1993 | A |
5247664 | Thompson et al. | Sep 1993 | A |
5280577 | Trevett et al. | Jan 1994 | A |
5319776 | Hile et al. | Jun 1994 | A |
5379289 | DeSouza et al. | Jan 1995 | A |
5414833 | Hershey et al. | May 1995 | A |
5511213 | Correa | Apr 1996 | A |
5513345 | Sato et al. | Apr 1996 | A |
5600784 | Bissett et al. | Feb 1997 | A |
5606668 | Shwed | Feb 1997 | A |
5621889 | Lermuzeaux et al. | Apr 1997 | A |
5649215 | Itoh | Jul 1997 | A |
5655068 | Opoczynski | Aug 1997 | A |
5666479 | Kashimoto et al. | Sep 1997 | A |
5684957 | Kondo et al. | Nov 1997 | A |
5696486 | Poliquin et al. | Dec 1997 | A |
5737526 | Periasamy et al. | Apr 1998 | A |
5742771 | Fontaine | Apr 1998 | A |
5798706 | Kraemer et al. | Aug 1998 | A |
5805801 | Holloway et al. | Sep 1998 | A |
5815647 | Buckland et al. | Sep 1998 | A |
5832227 | Anderson et al. | Nov 1998 | A |
5848410 | Walls et al. | Dec 1998 | A |
5850515 | Lo et al. | Dec 1998 | A |
5905859 | Holloway et al. | May 1999 | A |
5919257 | Trostle | Jul 1999 | A |
5919258 | Kayashima et al. | Jul 1999 | A |
5920698 | Ben-Michael et al. | Jul 1999 | A |
5922049 | Radia et al. | Jul 1999 | A |
5958015 | Dascalu | Sep 1999 | A |
5969632 | Diamant et al. | Oct 1999 | A |
5982890 | Akatsu | Nov 1999 | A |
5991881 | Conklin et al. | Nov 1999 | A |
5995963 | Nanba et al. | Nov 1999 | A |
6000045 | Lewis | Dec 1999 | A |
6006019 | Takei | Dec 1999 | A |
6021510 | Nachenberg | Feb 2000 | A |
6083276 | Davidson et al. | Jul 2000 | A |
6094731 | Waldin et al. | Jul 2000 | A |
6119236 | Shipley | Sep 2000 | A |
6151624 | Teare et al. | Nov 2000 | A |
6167448 | Hemphill et al. | Dec 2000 | A |
6173333 | Jolitz et al. | Jan 2001 | B1 |
6182029 | Friedman | Jan 2001 | B1 |
6233704 | Scott et al. | May 2001 | B1 |
6279113 | Vaidya | Aug 2001 | B1 |
6282546 | Gleichauf et al. | Aug 2001 | B1 |
6295276 | Datta et al. | Sep 2001 | B1 |
6301668 | Gleichauf et al. | Oct 2001 | B1 |
6304973 | Williams | Oct 2001 | B1 |
6321338 | Porras et al. | Nov 2001 | B1 |
6363489 | Comay et al. | Mar 2002 | B1 |
6366934 | Cheng et al. | Apr 2002 | B1 |
6370648 | Diep | Apr 2002 | B1 |
6374207 | Li et al. | Apr 2002 | B1 |
6393386 | Zager et al. | May 2002 | B1 |
6405318 | Rowland | Jun 2002 | B1 |
6408311 | Baisley et al. | Jun 2002 | B1 |
6418446 | Lection et al. | Jul 2002 | B1 |
6421656 | Cheng et al. | Jul 2002 | B1 |
6446110 | Lection et al. | Sep 2002 | B1 |
6684335 | Epstein et al. | Jan 2004 | B1 |
6697950 | Ko | Feb 2004 | B1 |
6792546 | Shanklin et al. | Sep 2004 | B1 |
6862588 | Beged-Dov et al. | Mar 2005 | B1 |
20010056504 | Kuznetsov | Dec 2001 | A1 |
20020010715 | Chinn et al. | Jan 2002 | A1 |
20020013710 | Shimakawa | Jan 2002 | A1 |
20020035619 | Dougherty et al. | Mar 2002 | A1 |
20020038320 | Brook | Mar 2002 | A1 |
20020059528 | Dapp | May 2002 | A1 |
20020066035 | Dapp | May 2002 | A1 |
20020069318 | Chow et al. | Jun 2002 | A1 |
20020073091 | Jain et al. | Jun 2002 | A1 |
20020073119 | Richard | Jun 2002 | A1 |
20020082886 | Manganaris et al. | Jun 2002 | A1 |
20020083343 | Crosbie et al. | Jun 2002 | A1 |
20020087882 | Schneier et al. | Jul 2002 | A1 |
20020091999 | Guinart | Jul 2002 | A1 |
20020099710 | Papierniak | Jul 2002 | A1 |
20020099715 | Jahnke et al | Jul 2002 | A1 |
20020099734 | Yassin et al. | Jul 2002 | A1 |
20020103829 | Manning et al. | Aug 2002 | A1 |
20020108059 | Canion et al. | Aug 2002 | A1 |
20020111963 | Gebert et al. | Aug 2002 | A1 |
20020111965 | Kutter | Aug 2002 | A1 |
20020112224 | Cox | Aug 2002 | A1 |
20020116550 | Hansen | Aug 2002 | A1 |
20020116585 | Scherr | Aug 2002 | A1 |
20020116644 | Richard | Aug 2002 | A1 |
20020120697 | Generous et al. | Aug 2002 | A1 |
20020122054 | Hind et al. | Sep 2002 | A1 |
20020133484 | Chau et al. | Sep 2002 | A1 |
20020143819 | Han et al. | Oct 2002 | A1 |
20020152244 | Dean et al. | Oct 2002 | A1 |
20020156772 | Chau et al. | Oct 2002 | A1 |
20020165872 | Meltzer et al. | Nov 2002 | A1 |
20030041302 | McDonald | Feb 2003 | A1 |
20030229846 | Sethi et al. | Dec 2003 | A1 |
20040025118 | Renner | Feb 2004 | A1 |
20040073870 | Fuh et al. | Apr 2004 | A1 |
20040083221 | Dapp et al. | Apr 2004 | A1 |
20040083387 | Dapp et al. | Apr 2004 | A1 |
20040083466 | Dapp et al. | Apr 2004 | A1 |
20040172234 | Dapp et al. | Sep 2004 | A1 |
20040194016 | Liggitt | Sep 2004 | A1 |
20050039124 | Chu et al. | Feb 2005 | A1 |
20050177543 | Chen et al. | Aug 2005 | A1 |
Number | Date | Country |
---|---|---|
2307529 | Sep 2001 | CA |
WO0211399 | Feb 2002 | WO |
WO 02095543 | Nov 2002 | WO |
Number | Date | Country | |
---|---|---|---|
20040083387 A1 | Apr 2004 | US |
Number | Date | Country | |
---|---|---|---|
60421773 | Oct 2002 | US | |
60421774 | Oct 2002 | US | |
60421775 | Oct 2002 | US |