Java™ is an object-oriented programming language developed by Sun Microsystems. The Java language is small, simple, and portable across platforms and operating systems, both at the source and at the binary level. This makes the Java programming language very popular on the Internet.
Java's platform independence and code compaction are the most significant advantages of Java over conventional programming languages. In conventional programming languages, the source code of a program is sent to a compiler which translates the program into machine code, or processor-specific instructions.
Java™ operates differently. The Java™ compiler takes a Java™ program and, instead of generating machine code for a particular processor, generates bytecodes. Bytecodes are instructions that look like machine code but are not specific to any processor. To execute a Java™ program, a bytecode interpreter takes the Java™ bytecodes, converts them to equivalent native processor instructions, and executes the Java™ program. The Java™ bytecode interpreter is one component of the Java™ Virtual Machine.
Having the Java™ programs in bytecode form means that instead of being specific to any one system, the programs can run on any platform and any operating system as long as a Java™ Virtual Machine is available. This allows a binary bytecode file to be executable across platforms.
The disadvantage of using bytecodes is execution speed. System-specific programs, which run directly on the hardware for which they are compiled, run significantly faster than Java™ bytecodes, which must be processed by the Java™ Virtual Machine. The processor must both convert the Java™ bytecodes into native instructions in the Java™ Virtual Machine and execute the native instructions.
One way to speed up the Java™ Virtual Machine is by techniques such as the “Just in Time” (JIT) interpreter, and even faster interpreters known as “Hot Spot” JIT interpreters. The JIT versions all incur a JIT compile overhead to generate native processor instructions. These JIT interpreters also incur additional memory overhead.
The slow execution speed of Java™ and the overhead of JIT interpreters have made it difficult for consumer appliances, which require low-cost solutions with minimal memory usage and low energy consumption, to run Java™ programs. Even using the fastest JITs, the performance requirements for existing processors more than double to support running the Java™ Virtual Machine in software. The processor performance requirements could be met by employing superscalar processor architectures or by increasing the processor clock frequency. In both cases, the power requirements are dramatically increased. The memory bloat that results from JIT techniques also goes against the consumer application requirements of low cost and low power.
It is desired to have an improved system for implementing Java™ programs that provides a low-cost solution for running such programs on consumer appliances.
The present invention generally relates to Java™ hardware accelerators used to translate Java™ bytecodes into native instructions for a central processing unit (CPU). One embodiment of the present invention comprises a reissue buffer, the reissue buffer associated with a hardware accelerator and adapted to store converted native instructions issued to the CPU along with associated native program counter values. When the CPU returns from an interrupt, the reissue buffer examines the program counter to determine whether to reissue a stored native instruction from the reissue buffer. In this way, returns from interrupts can be efficiently handled without reloading the hardware accelerator with the instructions to convert.
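The following is a minimal software model of the reissue-buffer behavior described above; it is a sketch only, and the class names, method names, and buffer capacity are illustrative assumptions rather than part of the disclosed hardware.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of a reissue buffer (names and sizes are hypothetical).
final class ReissueBuffer {
    // Each entry pairs a converted native instruction with the native PC
    // value under which it was issued to the CPU.
    private record Entry(long nativePc, int nativeInstruction) {}

    private final Deque<Entry> issued = new ArrayDeque<>();
    private final int capacity;

    ReissueBuffer(int capacity) { this.capacity = capacity; }

    // Record an instruction as it is issued from the accelerator to the CPU.
    void recordIssue(long nativePc, int nativeInstruction) {
        if (issued.size() == capacity) {
            issued.removeFirst();                 // oldest entry ages out
        }
        issued.addLast(new Entry(nativePc, nativeInstruction));
    }

    // On a return from interrupt, the native PC is restored to its
    // pre-interrupt value. If that PC matches a stored entry, the buffered
    // instruction is reissued instead of re-translating the bytecodes.
    Integer reissueFor(long restoredPc) {
        for (Entry e : issued) {
            if (e.nativePc() == restoredPc) {
                return e.nativeInstruction();
            }
        }
        return null;                              // fall back to re-translation
    }
}
```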
Another embodiment of the present invention comprises a hardware accelerator to convert stack-based instructions into register-based instructions native to a central processing unit. The hardware accelerator includes a native program counter monitor. The native program counter monitor checks whether the native program counter is within a hardware accelerator program counter range. When the native program counter is within the hardware accelerator program counter range, the hardware accelerator is enabled and converted native instructions are sent to the CPU from the hardware accelerator; the native program counter is not used to determine which instructions to load from memory.
In this manner, the hardware accelerator can spoof the native program counter to be within a certain range which corresponds to the program counter range in which the stack-based instructions are stored. By monitoring the program counter, the hardware accelerator can always tell when it needs to operate and when it does not. Thus, if an interrupt occurs, causing the native program counter to move to a range outside of the hardware accelerator program counter range, there need be no explicit instruction to the hardware accelerator from the CPU handling the interrupt to stall the hardware accelerator.
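A minimal sketch of this program counter monitoring follows, assuming hypothetical class and interface names; the range bounds and selection logic are illustrative only.

```java
// Illustrative model of the native-PC monitor: while the native PC sits in a
// reserved "spoofed" range, converted instructions are taken from the
// accelerator; otherwise instructions are fetched from memory as usual.
final class NativePcMonitor {
    private final long spoofRangeStart;
    private final long spoofRangeEnd;

    NativePcMonitor(long spoofRangeStart, long spoofRangeEnd) {
        this.spoofRangeStart = spoofRangeStart;
        this.spoofRangeEnd = spoofRangeEnd;
    }

    boolean acceleratorActive(long nativePc) {
        return nativePc >= spoofRangeStart && nativePc < spoofRangeEnd;
    }

    // Selects the instruction source without any explicit stall command from
    // the CPU: an interrupt simply moves the PC out of the spoofed range.
    int nextInstruction(long nativePc, Accelerator accel, Memory mem) {
        return acceleratorActive(nativePc)
                ? accel.nextConvertedInstruction()   // converted bytecode stream
                : mem.fetchInstruction(nativePc);    // ordinary native fetch
    }
}

interface Accelerator { int nextConvertedInstruction(); }
interface Memory { int fetchInstruction(long nativePc); }
```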
Yet another embodiment of the present invention comprises a hardware accelerator operably connected to a central processing unit, the hardware accelerator adapted to convert stack-based instructions into register-based instructions native to the central processing unit. The hardware accelerator includes a microcode stage. The microcode stage includes microcode memory. The microcode memory output includes a number of fields, the fields including a first set of fields corresponding to native instruction fields and a control bit field which affects the interpretation of the first set of fields by the microcode-controlled logic to produce a native instruction. Use of a microcode portion allows the same general hardware accelerator architecture to work with a variety of central processing units. In a preferred embodiment, the microcode portion is separate from a decode portion.
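The sketch below illustrates the idea of a microcode output word whose control bit changes how the remaining fields are interpreted when a native instruction is composed; the field layout, widths, and class names are invented for illustration and do not describe the actual microcode format.

```java
// Hypothetical microcode word: native-instruction fields plus a control bit
// that selects how those fields are interpreted during composition.
final class MicrocodeWord {
    final int opcodeField;      // maps onto the native opcode field
    final int operandField;     // register index or immediate, per control bit
    final boolean useImmediate; // control bit: treat operandField as immediate

    MicrocodeWord(int opcodeField, int operandField, boolean useImmediate) {
        this.opcodeField = opcodeField;
        this.operandField = operandField;
        this.useImmediate = useImmediate;
    }

    // Compose a 32-bit "native" instruction; retargeting the accelerator to a
    // different CPU would change only this composition logic, not the table.
    int compose() {
        int insn = opcodeField << 26;
        if (useImmediate) {
            insn |= operandField & 0xFFFF;          // immediate form
        } else {
            insn |= (operandField & 0x1F) << 16;    // register form
        }
        return insn;
    }
}
```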
The present invention may be further understood from the following description in conjunction with the drawings.
The Java™ hardware accelerator can do some or all of the following tasks:
In a preferred embodiment, the Java™ Virtual Machine functions of bytecode interpreter, Java™ register, and Java™ stack are implemented in the hardware Java™ accelerator. The garbage collection heap and constant pool area can be maintained in normal memory and accessed through normal memory referencing. In one embodiment, some of these functions, e.g., the write barrier, are accelerated in hardware.
The major advantages of the Java™ hardware accelerator are to increase the speed at which the Java™ Virtual Machine operates and to allow existing native-language legacy applications, software bases, and development tools to be used. A dedicated microprocessor in which the Java™ bytecodes were the native instructions would not have access to those legacy applications.
Although the Java™ hardware accelerator is shown in
In block 38, the system switches to the Java™ hardware accelerator mode. In the Java™ hardware accelerator mode, Java™ bytecode is transferred to the Java™ hardware accelerator 22, converted into native instructions, and then sent to the CPU for execution.
The Java™ accelerator mode can produce exceptions at certain Java™ bytecodes. These bytecodes are not processed by the hardware accelerator 22 but are processed in the CPU 26. As shown in block 40, the system operates in the native mode, and the Java™ Virtual Machine implemented in software does the bytecode translation and handles the exception created in the Java™ accelerator mode.
The longer and more complicated bytecodes that are difficult to handle in hardware can be selected to produce the exceptions.
In a preferred embodiment, the hardware Java™ registers 44 can include additional registers for the use of the instruction translation hardware 42. These registers can include a register indicating a switch to native instructions, configuration and control registers, and a register indicating the version number of the system.
The Java™ PC can be used to obtain bytecode instructions from the instruction cache 24 or memory. In one embodiment the Java™ PC is multiplexed with the normal program counter 54 of the central processing unit 26 in multiplexer 52. The normal PC 54 is not used during the operation of the Java™ hardware bytecode translation. In another embodiment, the normal program counter 54 is used as the Java™ program counter.
The Java™ registers are a part of the Java™ Virtual Machine and should not be confused with the general registers 46 or 48, which are operated upon by the central processing unit 26. In one embodiment, the system uses the traditional CPU register file 46 as well as a Java™ CPU register file 48. When native code is being operated upon, the multiplexer 56 connects the conventional register file 46 to the execution logic 26c of the CPU 26. When the Java™ hardware accelerator is active, the Java™ CPU register file 48 substitutes for the conventional CPU register file 46. In another embodiment, the conventional CPU register file 46 is used.
As described below with respect to
The register files for the CPU could alternatively be implemented as a single register file with native instructions used to manipulate the loading of operand stack and variable values to and from memory. Alternatively, multiple Java™ CPU register files could be used: one register file for variable values, another register file for the operand stack values, and another register file for the Java™ frame stack holding the method environment information.
The Java™ accelerator controller (co-processing unit) 64 can be used to control the hardware Java™ accelerator, read in and out from the hardware Java™ registers 44 and Java™ stack 50, and flush the Java™ accelerator instruction translation pipeline upon a “branch taken” signal from the CPU execute logic 26c.
The CPU 26 is divided into pipeline stages including the instruction fetch 26a, instruction decode 26b, execute logic 26c, memory access logic 26d, and writeback logic 26e. The execute logic 26c executes the native instructions and thus can determine whether a branch instruction is taken and issue the “branch taken” signal. In one embodiment, the execute logic 26c monitors addresses for detecting branches.
The decoded bytecodes are sent to a state machine unit 74 and an Arithmetic Logic Unit (ALU) 76. The ALU 76 is provided to rearrange the bytecode instructions to make them easier for the state machine 74 to operate on, and to perform various arithmetic functions, including computing memory references. The state machine 74 converts the bytecodes into native instructions using the look-up table 78. Thus, the state machine 74 provides an address which indicates the location of the desired native instruction in the microcode look-up table 78. Counters are maintained to keep a count of how many entries have been placed on the operand stack, as well as to keep track of and update the top of the operand stack in memory and in the register file. In a preferred embodiment, the output of the microcode look-up table 78 is augmented with indications of the registers to be operated on in the native CPU register file at line 80. The register indications come from the counters and are interpreted from the bytecodes. To accomplish this, it is necessary to have a hardware indication of which operands and variables are in which entries in the register file. Native instructions are composed on this basis. Alternatively, these register indications can be sent directly to the Java™ CPU register file 48 shown in FIG. 3.
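A highly simplified software model of this look-up-table-driven translation step is sketched below; the bytecode opcode values follow the standard Java™ encoding, but the native instruction templates, field positions, and class names are assumptions made only for illustration.

```java
// Simplified model of the translation step: the decoded bytecode selects a
// native-instruction template, and counters tracking the operand stack supply
// the register fields. The emitted 32-bit words are purely illustrative.
final class TranslationStage {
    private static final int ICONST_1 = 0x04;   // push the constant 1
    private static final int IADD     = 0x60;   // pop two values, push the sum

    private int stackDepthInRegisters = 0;      // operand-stack entries held in registers

    // Returns a composed "native" instruction word for the given bytecode.
    int translate(int bytecode) {
        if (bytecode == ICONST_1) {
            int dst = stackDepthInRegisters++;               // new top of stack
            return 0x10000000 | (dst << 21) | 1;             // e.g. "move Rdst, #1"
        }
        if (bytecode == IADD) {
            int srcB = --stackDepthInRegisters;              // pop one operand
            int srcA = stackDepthInRegisters - 1;            // result reuses srcA's register
            return 0x20000000 | (srcA << 21) | (srcA << 16) | (srcB << 11);
        }
        // Longer or more complicated bytecodes raise an exception and are
        // handled by the software Java Virtual Machine, as described above.
        throw new IllegalStateException("bytecode handled by exception path");
    }
}
```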
The state machine 74 has access to the Java™ registers 44 as well as an indication of the arrangement of the stack and variables in the Java™ CPU register file 48 or in the conventional CPU register file 46. The buffer 82 supplies the translated native instructions to the CPU.
The operation of the Java™ hardware accelerator of one embodiment of the parent invention is illustrated in
As shown in
Consistency must be maintained between the hardware Java™ registers 44, the Java™ CPU register file 48, and the data memory. The CPU 26 and the Java™ Accelerator Instruction Translation Unit 42 are pipelined, and any changes to the hardware Java™ registers 44 and to the control information for the Java™ CPU register file 48 must be able to be undone upon a “branch taken” signal. The system preferably uses buffers (not shown) to ensure this consistency. Additionally, the Java™ instruction translation must be done so as to avoid pipeline hazards in the instruction translation unit and CPU.
In the parent invention the Java™ hardware translator can combine the iload_n and iadd bytecodes into a single native instruction. As shown in
As shown in
For some bytecodes such as SiPush, BiPush, etc., the present invention makes available sign-extended data for the immediate field of the native instruction being composed (120) by the hardware and microcode. This data can alternatively be read as a coprocessor register. The coprocessor register read/write instruction can be issued by the hardware accelerator as outlined in the present invention. Additionally, the microcode has several fields that aid in composing the native instruction.
The Java™ hardware accelerator of the parent invention is particularly well suited to an embedded solution in which the hardware accelerator is positioned on the same chip as the existing CPU design. This allows the prior existing software base and development tools for legacy applications to be used. In addition, the architecture of the present embodiment is scalable to fit a variety of applications ranging from smart cards to desktop solutions. This scalability is implemented in the Java™ accelerator instruction translation unit of FIG. 4. For example, the look-up table 78 and state machine 74 can be modified for a variety of different CPU architectures. These CPU architectures include reduced instruction set computer (RISC) architectures as well as complex instruction set computer (CISC) architectures. The present invention can also be used with superscalar CPUs or very long instruction word (VLIW) computers.
The reissue buffer 106 monitors the native PC value 108. In a preferred embodiment, when the hardware accelerator is active, the hardware accelerator does not use the native PC value to determine the memory location from which to load instructions. The native PC value is instead maintained within a spoofed range which indicates that the hardware accelerator is active. In a preferred embodiment, the native PC monitor 110 detects whether the native PC value is within the spoofed range. If so, the multiplexer 112 sends the converted instructions from the hardware accelerator to the CPU 101. If not, the native instructions from memory are loaded to the CPU 101. When in the spoofed range, the addresses sourced to memory are the Java™ PC from the accelerator; otherwise the native PC is sourced to memory.
If an interrupt occurs, the native PC value will go to a value outside the spoofed range. The PC monitor 110 will then stall the hardware accelerator. When a return from interrupt occurs, the CPU 101 will be flushed, and the native PC value 108 is returned to the PC value prior to the interrupt. The reissue buffer 106 will then reissue to the CPU 101 the stored native instructions, flushed from the CPU 101, that correspond to this prior native PC value. With the use of this system, the hardware accelerator does not need to be flushed upon an interrupt, nor do previously converted Java™ bytecodes need to be reloaded into the hardware accelerator. The use of the reissue buffer 106 can thus speed the operation and recovery from an interrupt.
The CPU 101 is associated with a register file 113. This register file is the native CPU's normal register file, operably connected to the CPU's ALU, but is shown separately here for illustration. The register file 113 stores Stack and Var values which can be used by the converted instructions. The Stack and Variable managers 114 keep track of any information stored in the register file 113 and use it to help the microcode stage operations. As described below, in one embodiment there are a fixed number of registers used for Stack values and Variable values. For example, six registers can be used for the top six Stack values and six registers used for six Variable values.
In another embodiment of the present invention, the Stack and Variable manager assigns Stack and Variable values to different registers in the register file. An advantage of this alternate embodiment is that in some cases the Stack and Var values may switch due to an Invoke Call, and such a switch can be done more efficiently in the Stack and Var manager 114 rather than by producing a number of native instructions to implement it.
In one embodiment a number of important values can be stored in the hardware accelerator to aid in the operation of the system. These values stored in the hardware accelerator help improve the operation of the system, especially when the register files of the CPU are used to store portions of the Java™ stack.
The hardware accelerator preferably stores an indication of the top of the stack value. This top of the stack value aids in the loading of stack values from the memory. The top of the stack value is updated as instructions are converted from stack-based instructions to register-based instructions. When instruction level parallelism is used, each stack-based instruction which is part of a single register-based instruction needs to be evaluated for its effects on the Java™ stack.
In one embodiment, an operand stack depth value is maintained in the hardware accelerator. This operand stack depth indicates the dynamic depth of the operand stack in the CPU's register files. Thus, if four stack values are stored in the register files, the stack depth indicator will read “4.” Knowing the depth of the stack in the register file helps in the loading and storing of stack values in and out of the register files.
In a preferred embodiment, a minimum stack depth value and a maximum stack depth value are maintained within the hardware accelerator. The stack depth value is compared to the maximum and minimum stack depths. When the stack depth goes below the minimum value, the hardware accelerator composes load instructions to load stack values from the memory into the register file of the CPU. When the stack depth goes above the maximum value, the hardware accelerator composes store instructions to store stack values back out to the memory.
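A minimal sketch of this overflow/underflow policy is given below; the thresholds, interface, and method names are illustrative assumptions, not the disclosed circuit.

```java
// Sketch of the spill/fill policy: the operand-stack depth held in CPU
// registers is kept between a minimum and a maximum; crossing either bound
// causes the accelerator to compose load or store instructions.
final class StackDepthManager {
    private static final int MIN_DEPTH = 2;   // hypothetical minimum
    private static final int MAX_DEPTH = 6;   // hypothetical maximum

    private int depthInRegisters;

    // Called after each converted instruction with its net stack effect.
    void adjust(int netStackChange, InstructionComposer composer) {
        depthInRegisters += netStackChange;
        while (depthInRegisters < MIN_DEPTH) {
            composer.composeLoadFromMemoryStack();   // fill: memory -> register file
            depthInRegisters++;
        }
        while (depthInRegisters > MAX_DEPTH) {
            composer.composeStoreToMemoryStack();    // spill: register file -> memory
            depthInRegisters--;
        }
    }
}

interface InstructionComposer {
    void composeLoadFromMemoryStack();
    void composeStoreToMemoryStack();
}
```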
In one embodiment, at least the top four (4) entries of the operand stack in the CPU register file are operated as a ring buffer, the ring buffer being maintained in the accelerator and operably connected to an overflow/underflow unit.
The hardware accelerator also preferably stores an indication of the operands and variables stored in the register file of the CPU. These indications allow the hardware accelerator to compose the converted register-based instructions from the incoming stack-based instructions.
The hardware accelerator also preferably stores an indication of the variable base and operand base in the memory. This allows for the composing of instructions to load and store variables and operands between the register file of the CPU and the memory. For example, when a Var is not available in the register file, the hardware issues load instructions. The hardware is adapted to multiply the Var number by four and add the Var base to produce the memory location of the Var. The instruction produced is based on knowledge that the Var base is in a temporary native CPU register. The Var number times four can be made available as the immediate field of the native instruction being composed, which may be a memory access instruction with the address being the content of the temporary register holding a pointer to the Var base plus an immediate offset. Alternatively, the final memory location of the Var may be read by the CPU as an instruction saved by the accelerator, and then the Var can be loaded.
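The address arithmetic described above can be illustrated with the small sketch below; the base address value and class name are hypothetical, and a four-byte word size is assumed as stated in the text.

```java
// Illustrative Var address computation: Var number scaled by four (word size)
// plus the Var base pointer held in a temporary CPU register.
final class VarAddressing {
    // Immediate offset placed in the composed load instruction.
    static int varOffset(int varNumber) {
        return varNumber * 4;
    }

    // Effective address formed by the memory access: base register contents
    // plus the immediate offset.
    static long effectiveAddress(long varBase, int varNumber) {
        return varBase + varOffset(varNumber);
    }

    public static void main(String[] args) {
        long varBase = 0x8000_0100L;              // hypothetical Var base
        System.out.printf("Var 5 -> offset %d, address 0x%X%n",
                varOffset(5), effectiveAddress(varBase, 5));
    }
}
```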
In one embodiment, the hardware accelerator marks the variables as modified when updated by the execution of Java™ byte codes. The hardware accelerator can copy variables marked as modified to the system memory for some bytecodes.
In one embodiment, the hardware accelerator composes native instructions wherein the native instruction operands contain at least two native CPU register file references, where the register file contents are the data for the operand stack and variables.
The accelerator ALU 144 is used to calculate index addresses and the like. The accelerator ALU is connected to the register pool. The use of the accelerator ALU allows certain simple calculations to be moved from the CPU unit to the hardware accelerator unit, and thus allows the Java™ bytecodes to be converted into fewer native instructions. The Variable Selection+Other Control unit 146 determines which registers are used as Vars. The Var control line from the ILP Logic unit 142 indicates how these Vars are interpreted. A Var and associated Var control line can be made available for each operand field in the native CPU's instruction.
In one embodiment, the hardware accelerator issues native load instructions when a variable is not present in the native CPU register file, the memory address being computed by the ALU in the hardware accelerator.
The microcode stage 104′ shown in
Register remapping unit 174 does register remapping. In conventional CPUs, some registers are reserved. Register remapping unit 174 allows the decoder logic to assume that the Stack and Var registers are virtual, which simplifies the calculations. Multiplexer 176 allows the value on line 171 to be passed without being modified.
In a preferred embodiment, one of the functions of the Stack-and-Var register manager is to maintain an indication of the top of the stack. Thus, if, for example, registers R1-R4 store the top four stack values from memory or from executing bytecodes, the top of the stack will change as data is loaded into and out of the register file. Thus, register R2 can be the top of the stack and register R1 the bottom of the stack in the register file. When new data is loaded onto the stack within the register file, the data will be loaded into register R3, which then becomes the new top of the stack, while the bottom of the stack remains R1. With two more items loaded onto the stack in the register file, the new top of stack in the register file will be R1, but first R1 will be written back to memory by the accelerator's overflow/underflow unit, and R2 will then be the bottom of the partial stack in the CPU register file.
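A small model of this ring-buffer behavior is sketched below, assuming four registers and illustrative method names; it tracks only the indices, leaving the actual register writes and the overflow/underflow memory traffic to the caller.

```java
// Minimal model of the ring buffer described above: a fixed set of registers
// (here four, standing in for R1-R4) holds the topmost operand-stack values,
// and the logical top and bottom wrap around as values are pushed.
final class RegisterRingStack {
    private static final int SIZE = 4;     // registers R1..R4
    private int top = -1;                  // index of the current top of stack
    private int count = 0;                 // values currently held in registers

    // Advance the top for a push; returns true if the ring is full, meaning
    // the oldest entry (the bottom) must first be written back to memory by
    // the overflow/underflow unit before the slot is reused.
    boolean pushNeedsSpill() {
        boolean spill = (count == SIZE);
        top = (top + 1) % SIZE;
        if (!spill) {
            count++;
        }
        return spill;
    }

    int topRegister()    { return top; }                              // 0 stands for R1
    int bottomRegister() { return (top - count + 1 + SIZE) % SIZE; }  // wraps around
}
```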
In actuality, as can be understood with respect to the figures described above, Var 3 and Var 5 are already stored within two registers of the register file. Which registers hold these values is determined by the system. The instructions iload 3, iload 5, and iadd are done by determining which two registers store Var 3 and Var 5 and also determining which register is to store the new top of the stack. If Var 3 is stored in register R9 and Var 5 is stored in register R11 and the top of the stack is to be stored in register R2, the converted native instruction is an add of the value within register R9 to the value within register R11, storing the result into register R2. This native instruction thus does the operation of three bytecodes at the same time, resulting in instruction-level parallelism as operated on by the native CPU.
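The worked example above can be expressed as the following small sketch; the register numbers follow the text, the variable values are invented, and the commented assembly is generic rather than a specific instruction set.

```java
// Folding of iload_3, iload_5, and iadd into one register-based add,
// assuming Var 3 resides in R9, Var 5 in R11, and the new top of stack in R2.
final class FoldingExample {
    public static void main(String[] args) {
        int[] registers = new int[16];
        registers[9]  = 7;    // Var 3 (illustrative value)
        registers[11] = 5;    // Var 5 (illustrative value)

        // Bytecodes: iload_3 ; iload_5 ; iadd
        // Single composed native instruction:  add R2, R9, R11
        registers[2] = registers[9] + registers[11];

        System.out.println("new top of stack (R2) = " + registers[2]);  // 12
    }
}
```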
Additionally, an ALU is deployed within the hardware accelerator: for decoded bytecode instructions such as GOTO and GOTO_W, the immediate branch offset following the bytecode instruction is sign extended and added to the Java™ PC of the current bytecode instruction, and the result is stored in the Java™ PC register. The JSR and JSR_W bytecode instructions also do this, in addition to pushing the Java™ PC of the next bytecode instruction onto the operand stack.
The Java™ PC is incremented by a value calculated by the hardware accelerator. This increment value is based on the number of bytes being disposed of during the current decode, which may include more than one bytecode due to ILP. Similarly, the immediate data of SiPush and BiPush instructions is also sign extended and made available in the immediate field of the native instruction being composed. In some processors, the immediate field of the native instruction has a smaller bit width than is desired for the offsets or sign-extended constants, so this data may be read as memory-mapped or I/O-mapped reads.
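The Java™ PC arithmetic from the two preceding paragraphs can be summarized by the sketch below; the 16-bit offset width, method names, and sample values are assumptions chosen for illustration.

```java
// Illustrative Java PC update: a GOTO adds its sign-extended 16-bit branch
// offset to the PC of the current bytecode, while ordinary decodes advance
// the PC by the number of bytecode bytes consumed (more than one bytecode
// may be consumed when instruction-level parallelism applies).
final class JavaPcUpdate {
    static int nextPc(int currentPc, boolean isGoto, short branchOffset, int bytesConsumed) {
        if (isGoto) {
            return currentPc + branchOffset;   // short already carries the sign
        }
        return currentPc + bytesConsumed;
    }

    public static void main(String[] args) {
        // GOTO with offset 0xFFFA (-6) from bytecode index 40 branches back to 34.
        System.out.println(nextPc(40, true, (short) 0xFFFA, 3));   // prints 34
        // Three bytecode bytes folded into one native instruction: PC 40 -> 43.
        System.out.println(nextPc(40, false, (short) 0, 3));       // prints 43
    }
}
```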
While the present invention has been described with reference to the above embodiments, this description of the preferred embodiments and methods is not meant to be construed in a limiting sense. For example, the term Java™ in the specification or claims should be construed to cover successor programming languages or other programming languages using basic Java™ concepts (the use of generic instructions, such as bytecodes, to indicate the operation of a virtual machine). It should also be understood that the present invention is not limited to the specific descriptions or configurations set forth herein. Some modifications in form and detail of the various embodiments of the disclosed invention, as well as other variations in the present invention, will be apparent to a person skilled in the art upon reference to the present disclosure. It is therefore contemplated that the following claims will cover any such modifications or variations of the described embodiments as falling within the true spirit and scope of the present invention.
The present application is a continuation-in-part of the application entitled “Java Virtual Machine Hardware for RISC and CISC Processors,” filed Dec. 8, 1998, now U.S. Pat. No. 6,332,215 inventor Mukesh K. Patel, et al., U.S. Application Ser. No. 09/208,741. The present application also claims priority to a provisional application 60/239,298 filed Oct. 10, 2000 entitled “Java Hardware Accelerator Using Microcode Engine”.