Security of program executables and microprocessors based on compiler-architecture interaction

Information

  • Patent Grant
  • Patent Number
    7,996,671
  • Date Filed
    Friday, November 12, 2004
  • Date Issued
    Tuesday, August 9, 2011
Abstract
A method, for use in a processor context, wherein instructions in a program executable are encoded with plural instruction set encodings. A method wherein a control instruction encoded with an instruction set encoding contains information about decoding of an instruction that is encoded with another instruction set encoding scheme. A method wherein instruction set encodings are randomly generated at compile time. A processor framework wherein an instruction is decoded during execution with the help of information provided by a previously decoded control instruction.
Description
TECHNICAL FIELD

This invention relates generally to improving security in microprocessors based on compiler-architecture interaction, without degrading performance or significantly increasing power and energy consumption. More particularly, it relates to protecting program executables against reengineering of their content and protecting processors against physical microprobing or unauthorized extraction of information about a program executed at runtime.


BACKGROUND

Microprocessors (referred to herein simply as “processors”) execute instructions during their operation. It is important to improve security during execution, and, preferably, energy/power consumption should not be significantly affected by the solution.


Security is compromised when reengineering of program executables is possible. In addition, the security of an asset such as application intellectual property or an algorithm is compromised when a processor can be physically microprobed and program information extracted without permission.


Program intellectual property can be re-engineered from binaries without requiring access to source code. As reported by the Business Software Alliance, software piracy cost the software industry 11 billion dollars in 1998. Furthermore, an increasing number of tamper-resistant secure systems used in both military and commercial domains, e.g., smartcards, pay-TV, mobile phones, satellite systems, and weapon systems, can be attacked more easily once application semantics, including critical security algorithms and protocols, are re-engineered.


A key reason for many of these problems is that current microprocessors use a fixed Instruction Set Architecture (ISA) with predefined opcodes. Due to complexity, cost, and time-to-market issues related to developing proprietary microprocessors, most recent applications use commodity off-the-shelf (COTS) components.


In software compiled for such systems, individual instruction opcodes as well as operands in binaries can be easily disassembled, given that ISA documentation is widely available. Conventional systems can also be microprobed and their instructions extracted from instruction memory or instruction buses.


Making the ISA reconfigurable (as a way to address these problems) is challenging, especially since a practically feasible solution would need to achieve this without significantly affecting chip area, performance, and power consumption. Furthermore, it would be advantageous and practical if it could be retrofitted as add-on technology to commercially available microprocessor intellectual property (IP) cores from leading design houses such as ARM and MIPS. It should be backward compatible, i.e., capable of executing existing binaries, to avoid losing existing software investments. Another critical aspect is migrating existing binaries to this new secure mode of execution without access to application source code.


For a given processor, there are typically many compilers available from different vendors. These compilers have their own advantages and disadvantages. Moreover, when compiling an application, many pieces of code are added from precompiled libraries or hand-written assembly language.


Accordingly, if security awareness is introduced at the executable level rather than at the source-code level, as in one embodiment of the present invention, by transforming the executable itself, significant practical advantages can be achieved. The goal is to security-optimize executables that may previously have been fully optimized with a source-level compiler targeting a design aspect such as performance.


SUMMARY

The compiler-architecture interaction based processor framework described herein addresses the foregoing need to improve security in a practical and flexible manner.


The approach provides security without adverse effects on chip area or performance, with scalability to different processor instruction sets, and with easy integration with other compilation tools.


In one embodiment, the executable file itself provides a convenient interface between, for example, performance-oriented optimizations and security-oriented optimizations. Security optimizations in the compiler can be combined with, for example, energy-oriented optimizations. Nevertheless, the invention is not limited to binary-level compilation or compilation performed on executables; it can also be added in source-level compilers without limitation.


The new microprocessor framework proposed in this invention has the potential to solve the security problems raised above in a unified manner. It is based on a tightly integrated compiler-architecture framework.


We call this microprocessor VISC (Virtual Instruction Set Computing), based on its property that instructions (and thus instruction opcodes) in application binaries are continuously reconfigured, changing meaning at a very fine compiler-managed program granularity. The actual reference ISA of the microprocessor is only visible during execution in the processor pipeline and is used as a reference to generate control signals. Viewed another way, there is no reference ISA, just a reference control mechanism. VISC binaries can be made specific to a unique product, and/or each binary can be made unique even if it is based on the same source code. This is achieved by making the initial reconfiguration-related secret, which is hidden in the processor, product specific.


Based on this framework, security-aware microprocessors can be developed that make software copy- and tamper-resistant. We would argue that a security-aware, tightly integrated compiler-architecture approach such as VISC is ideal for protecting software against various types of security attacks, including physical microprobing of the memory system and buses, and also enables a unique defense against non-invasive attacks such as differential power analysis.


Supporting basic VISC security requires a special security-aware compiler, or a security-aware binary morphing tool or similar source-level compiler, and small modifications to a microprocessor. Aspects related to the security features provided, e.g., tamper and copy resistance, resistance against physical tampering attacks such as fault injection and power analysis, and prevention of unauthorized execution, are reviewed in the Description.


In addition to conventional instructions, VISC requires either modifications to instructions or new so-called static control instructions to drive the security architecture.


A key aspect of VISC is to communicate to execution engines how to decode/un-scramble instructions. Similarly, it could communicate how to execute instructions in their most energy efficient mode in addition to what operations to execute. Such static control instructions are continuously inserted in the instruction stream to provide the control information required to decode and control short sequences of instructions in the instruction stream. The control instructions can also be based on co-processor instructions not requiring modifications to a processor's regular ISA and thus easily integrated in existing processor cores.


In one embodiment, a special static decode unit first un-scrambles and then decodes these control instructions and generates control signals.


In one embodiment, to support a secure execution environment, the very initial few basic blocks or just the initial scrambling key are encrypted using public-key or symmetric cryptography. A VISC microprocessor contains the private key to decrypt these blocks or keys at runtime. Downloading the key can also be supported.


The remaining part of the instruction stream is scrambled/reconfigured at compile time in the following manner: at the beginning of each basic block, or of larger blocks such as super blocks, a special static control instruction is inserted. Note that the reconfiguration of instructions can be done by selecting fully random reconfiguration keys at each reconfiguration step. This is possible because a static security instruction, which is itself scrambled with the previous random key, contains the reconfiguration parameters required for decoding the next instruction(s).
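By way of illustration, the following sketch shows one way such a chain could be formed at compile time, assuming a hypothetical 32-bit XOR-mask scrambling scheme and toy helper names (scramble, scramble_program); it is not the patent's defined encoding, only an outline of how each control word can be scrambled with the previous key while carrying the key for its own block.

```c
#include <stdint.h>
#include <stdlib.h>

#define BLOCK_LEN 4  /* instructions per basic block in this toy example */

/* Hypothetical scrambling primitive: XOR with a 32-bit key (the text above
 * also mentions bit flips and rotations; any cheap invertible transform
 * could stand in here). */
static uint32_t scramble(uint32_t word, uint32_t key) { return word ^ key; }

/* Compile-time pass: emit one control word per block, chained as described.
 * `out` must have room for nblocks * (BLOCK_LEN + 1) words. */
static void scramble_program(const uint32_t *insns, size_t nblocks,
                             uint32_t initial_key, uint32_t *out)
{
    uint32_t prev_key = initial_key;            /* delivered encrypted at boot */
    size_t o = 0;
    for (size_t b = 0; b < nblocks; b++) {
        /* Fresh random key per block (a real tool would use a cryptographic RNG). */
        uint32_t block_key = (uint32_t)rand();
        out[o++] = scramble(block_key, prev_key);   /* control word for the block */
        for (int i = 0; i < BLOCK_LEN; i++)         /* scramble the block body */
            out[o++] = scramble(insns[b * BLOCK_LEN + i], block_key);
        prev_key = block_key;                   /* next control word uses this */
    }
}
```

At runtime the chain is simply walked in the opposite direction: un-scrambling a control word with the currently active key yields the key for the block that the control word heads.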


A security instruction encodes a scrambling scheme for the block. This same static instruction can be leveraged to support compiler-managed power optimizations.


During code generation, all the instructions in the referred basic block are scrambled using the same random scheme. At runtime, once a static control instruction is decoded, it reconfigures the un-scrambling hardware to the specified scheme. Examples of scrambling that can be supported with minimal circuit delay and area overhead include flipping of selected bits, rotations, and combinations of these. These transformations are not intended to be limiting: other schemes can be easily derived and controlled similarly.
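For concreteness, a minimal sketch of two such transformations on 32-bit instruction words is given below; the specific flip mask and rotation amount are assumed parameters chosen for illustration, not values defined by the patent.

```c
#include <stdint.h>

/* Rotate a 32-bit word left/right; guarded so a rotate by 0 is well defined. */
static uint32_t rotl32(uint32_t x, unsigned r) {
    r &= 31u;
    return r ? (x << r) | (x >> (32u - r)) : x;
}
static uint32_t rotr32(uint32_t x, unsigned r) {
    r &= 31u;
    return r ? (x >> r) | (x << (32u - r)) : x;
}

/* Scramble = flip the bits selected by `flip_mask`, then rotate left. */
static uint32_t scramble(uint32_t insn, uint32_t flip_mask, unsigned rot) {
    return rotl32(insn ^ flip_mask, rot);
}

/* Un-scramble = the inverse transformations applied in reverse order. */
static uint32_t unscramble(uint32_t word, uint32_t flip_mask, unsigned rot) {
    return rotr32(word, rot) ^ flip_mask;
}
```

In hardware, this pair of operations amounts to little more than a row of XOR gates plus a rotator in the decode path, which is consistent with the minimal delay and area overhead noted above.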


Once instructions are un-scrambled and decoded, execution in the execution stage proceeds as normal, similarly to the reference or original ISA.


Each static instruction is either encrypted or scrambled based on a scrambling scheme defined in a previous static instruction in the control-flow graph. At runtime, secure boot code that accesses the private key from an EEPROM-type device decrypts and executes the first block or so, until a static instruction is reached. Initial parameters that are required, such as the length of the code that needs to be decrypted with the private key, or just the initial reconfiguration key, can be provided in the same encrypted way.


Once a static control instruction is reached, subsequent static instructions take over control for the remainder of the execution. Note that this means that the performance impact of the private key based decryption is amortized across the whole duration of the program.
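A simplified view of this handover, under the assumption of XOR-style scrambling and hypothetical placeholders (the IS_CONTROL tag bits, decrypt_initial_key, execute), is sketched below; it only illustrates that the private-key step runs once while every later key change is driven by scrambled control words.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical tag: treat words whose top four bits are all ones as control
 * words. How control words are really recognized is left abstract here. */
#define IS_CONTROL(w) (((w) >> 28) == 0xFu)

/* Stub for the one-time private-key boot step (EEPROM/boot code in the text). */
static uint32_t decrypt_initial_key(void) { return 0xA5A5A5A5u; }

/* Stand-in for the normal decode/execute path of the pipeline. */
static void execute(uint32_t insn) { (void)insn; }

void run(const uint32_t *stream, size_t n)
{
    uint32_t key = decrypt_initial_key();  /* amortized, runs once */
    for (size_t i = 0; i < n; i++) {
        uint32_t word = stream[i] ^ key;   /* un-scramble with the active key */
        if (IS_CONTROL(word))
            key = word & 0x0FFFFFFFu;      /* control word presets the next key */
        else
            execute(word);                 /* otherwise execute as usual */
    }
}
```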


If an entire program were encrypted using public-key or symmetric-key cryptography, it would make execution prohibitively slow and power/energy consumption would increase even if decryption were implemented in hardware.


In contrast, the approach in this invention has very little area, performance, or power overhead. The initial decryption overhead is insignificant for most applications. In a software implementation using ECC, this decryption takes 1.1 million cycles.


A static security control instruction can be folded from the pipeline or executed in parallel with any instruction from a previous basic block.


In general, there might be several compilers available for one particular processor or an instruction set architecture. The approach based on executables can be easily integrated with some or all of these compilers without requiring changes in the source-level compiler.


The invention can be used to improve security on any type of device that includes a processor. For example, the invention can be used on personal computers, devices containing embedded controllers, sensor networks, network appliances, hand-held devices, cellular telephones, and/or emerging applications based on other device technologies.


Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar or equivalent to those described herein can be used in practice, suitable methods and materials are described below. In addition, the materials, methods, and examples are illustrative only and not intended to be limiting.


Other features and advantages of the invention will become apparent from the following description, including the claims and drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing the relationship between a source-level compiler, instruction set architecture, and microarchitecture in a processor.



FIG. 2 is a block diagram showing the relationship between the executable-level re-compiler, conventional source-level compiler, instruction set architecture, and microarchitecture in a processor.



FIG. 3 is a block diagram showing an embodiment of a VISC architecture.





DESCRIPTION

In the embodiments described herein, a processor framework uses compile-time information extracted from an executable or at the source level, and the executable is transformed so as to reduce vulnerability to binary reengineering or to extraction of information at runtime in a processor. Security is improved without significantly affecting chip area, power, or performance.


Referring to FIG. 1, a compiler 10 is a software system that translates applications from high-level programming languages (e.g., C, C++, Java) into machine-specific sequences of instructions. An instruction set architecture (ISA) 12 is a set of rules that defines the encoding of operations into machine-specific instructions. The ISA acts as the interface between the compiler 10 and the microarchitecture 14. A computer program is a collection of machine-level instructions that are executed to perform a desired functionality. Micro-architectural (or architectural) components 14 primarily comprise hardware and/or software techniques that are used during execution of the program. The actual machine can be a microprocessor or any other device that is capable of executing instructions that conform to the encoding defined in the ISA.


Compile-time refers to the time during which the program is translated from a high-level programming language into a machine-specific stream of instructions; it is not part of execution or runtime. Runtime is the time during which the translated machine instructions are executed on the machine. Compilation is typically performed on a different host machine than the one on which execution occurs, and it is done on the source program.


In contrast, executable-level re-compilation is a process that uses an application's executable as input and performs analysis and modifications at the executable-level.


Referring to FIG. 2, an executable-level re-compiler or executable-level compiler 24 is a software system that translates applications from executable-level into machine-specific sequences of instructions 26. The remaining components 26 and 28 are similar to the components 12 and 14 in a conventional compiler system as shown in FIG. 1.


The present invention is applicable to both source-level and binary-level compilers and processors using binaries from either of these tools.


VISC Architecture


In one embodiment, protection in a VISC architecture is provided by a combination of public-key encryption such as ECC or RSA, and a continuous compiler-enabled fine-granularity scrambling/reconfiguration of instructions.


The private key is stored in a tamper-resistant device and used at runtime to decrypt the initial reconfiguration key or sequence before execution starts. The corresponding public part of the encryption key is used during compilation and it is made available in the software compilation tools so that anyone can generate binaries.


In one aspect, only a header block or initial reconfiguration parameter is chosen to be encrypted with the public key. The decryption can be implemented with microcode, can be hardware supported, or can be implemented with trusted software.


The rest of the code is scrambled at a very fine compiler managed program granularity. This has the effect of having multiple ISAs in the same application binary. Moreover, these ISAs can be generated fully randomly. Un-scrambling is managed by the inserted static security control instructions that identify un-scrambling rules to be used at runtime.


The boundary between the encrypted part and the first scrambled block can be placed anywhere in the binary.


Another embodiment is to have the private key encrypt the key of a faster symmetric key encryption such as DES, Triple DES, or AES. A disadvantage of such an approach is that one would also need the secret key for the symmetric cipher to generate code.


In one embodiment, the mapping between the un-scrambling schemes supported at runtime and the encoding of static control instructions is reconfigurable with reconfiguration parameters that are stored encrypted and can be decrypted only with the private key.


A reconfigurable un-scrambling logic and a special decode unit (called static decode) support un-scrambling of the specified instructions from each block, and decoding of the static control instructions.


Programming is done by executing trusted instructions, e.g., instructions that are part of the microcode, and is defined by the encrypted reconfiguration parameter. For example, 32-bit numbers can define new permutations of instruction bits that alter the reference scrambling schemes encoded with static instructions.
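One possible reading of such a parameter, sketched below under stated assumptions, is as a seed from which the trusted code deterministically derives a permutation of the 32 instruction-bit positions; the LCG/Fisher-Yates derivation is an illustrative choice, not the patent's defined procedure.

```c
#include <stdint.h>

/* Simple LCG used only to expand the 32-bit parameter deterministically. */
static uint32_t lcg(uint32_t *s) { return *s = *s * 1664525u + 1013904223u; }

/* Fill perm[0..31] with a permutation of bit positions derived from `param`. */
static void derive_permutation(uint32_t param, uint8_t perm[32])
{
    uint32_t s = param;
    for (int i = 0; i < 32; i++) perm[i] = (uint8_t)i;
    for (int i = 31; i > 0; i--) {                 /* Fisher-Yates shuffle */
        int j = (int)(lcg(&s) % (uint32_t)(i + 1));
        uint8_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
}

/* Apply the installed permutation: output bit i takes input bit perm[i]. */
static uint32_t permute_bits(uint32_t word, const uint8_t perm[32])
{
    uint32_t out = 0;
    for (int i = 0; i < 32; i++)
        out |= ((word >> perm[i]) & 1u) << i;
    return out;
}
```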


In one aspect, programming of this mapping is done at the beginning of execution after the decryption of the initial header block is completed and before any static instruction is executed. Scrambling of the binary at compile time is done at a very fine program granularity such as basic blocks or at larger granularity such as super blocks. The compiler must guarantee, when selecting these blocks, that there is no branching out and there are no branch instructions elsewhere that would branch in. This scrambling can be combined with additional transformations that are based on program addresses.


The size of these blocks is variable, ranging from a small number of instructions to possibly tens or hundreds of instructions. The security-aware compiler decides what scrambling rule to apply to each block and inserts a control instruction at the top.


In one aspect, after scrambling, instruction opcodes remain (mostly) valid; only their meanings are changed. This is because the majority of possible bit permutations in opcode fields are already used up in modern ISAs, so permutations of opcode bits will likely just create other valid instructions. The length of the blocks scrambled with one key/scheme is program and compilation dependent. The schemes used can be generated randomly at compile time.


At runtime, as soon as the initial header block or reconfiguration key is decrypted, the approach goes into a phase overseen by the static control instructions. The execution of the application from that point on is driven by the control instructions, which always preset the un-scrambling rule to be applied to a subsequent instruction or sequence of instructions. Execution otherwise happens as usual in the pipeline.


In one aspect, a basic VISC approach can be combined with additional mechanisms, enabled by the VISC compiler-architecture framework, to protect against leaking information from various parts of the microprocessor. A key benefit is that its protection boundary can be extended to incorporate most of the processor domains, such as the memory system including caches and buses, that are normally the target of security attacks. Moreover, it is easy to combine the compiler-driven approach with runtime techniques for an additional level of security.


In one embodiment that we have implemented, even the data that is in the execute stage before a memory operation is modified: the data is randomly modified at compile time and these modifications are accounted for in the code that consumes the data. This means that the data is obfuscated not only in the memory system but also when it passes through the execute stage. This significantly increases the hurdles for differential power attacks, and there will be no correlation between the data in the pipeline across several executions of the same code.


The only content that must be kept tamper resistant is the device that contains the private key used for initialization. A VISC system protects against physical probing in key parts of the microprocessor, including the memory system and buses, and in some cases hides the data in the pipeline, making execution of a sequence fully dynamic and different from one execution to another. In addition, deep sub-micron feature sizes in future process generations make reconstructing the circuitry of complex processor pipelines expensive and very time-consuming. This is important, as in many secure systems sensitive information has strategic value to the opponent only if extracted within a certain time-frame.


Protection in Caches and External Memory


In one embodiment of this invention the executable file is analyzed by a binary scanning tool in order to gain information about the various sections and any other symbolic information that may be available. This information is used in order to create a secure version of the program.


Instructions in VISC systems are decrypted or un-scrambled solely during decoding. This implies that binaries are protected everywhere in the instruction memory including L1/L2 caches, buses, and external memory. While the approach in this embodiment does not stop someone from inserting instructions, the meaning of these instructions will be different from the one intended. Attacks based on physical tampering that require reading instruction caches/memory are strongly protected against.


A solution to protect the data memory is not automatically provided by a reconfigurable instruction set approach. While in a VISC system instruction operands can also be in scrambled form, during execution L1 data caches and data memory will contain valid data. Although existing tampering schemes typically rely on having access to both instructions and data, without other architectural or compiler level protection valid data could potentially be accessed and may present a possible security weakness.


In one embodiment of this invention, a solution is to encrypt each cache block and add a tag containing a hash calculated on the data values. The tag also protects against unauthorized modifications: when data values are modified, the stored tag will not match the calculated one and a security-related exception can be raised. The tag can be put into a small cache if it is sufficient to catch most modification attempts (which would already increase the hurdles for an attacker significantly), or into a small indexed SRAM if all memory stores need to be protected; the choice depends on the application.
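A sketch of the tag computation and check is shown below; the toy FNV-style hash, the block size, and the raise_security_exception hook are assumptions for illustration, and a real design would use a keyed hash operating on actual cache line fills and write-backs.

```c
#include <stdint.h>
#include <stddef.h>

#define WORDS_PER_BLOCK 8   /* assumed cache block size, in 32-bit words */

/* In a real system this would trap to a security exception handler. */
static void raise_security_exception(void) { }

/* Toy FNV-1a style hash over the block; a real design would use a keyed hash. */
static uint32_t block_hash(const uint32_t *block)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < WORDS_PER_BLOCK; i++) {
        h ^= block[i];
        h *= 16777619u;
    }
    return h;
}

/* On write-back: compute the tag to store alongside the block
 * (in a small tag cache or indexed SRAM, as discussed above). */
static uint32_t tag_on_writeback(const uint32_t *block) { return block_hash(block); }

/* On fill/read: recompute and compare; a mismatch flags an unauthorized change. */
static void check_on_fill(const uint32_t *block, uint32_t stored_tag)
{
    if (block_hash(block) != stored_tag)
        raise_security_exception();
}
```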


Adding this type of support on L1 data caches could however affect performance somewhat even with a pipelined hardware implementation.


A lower overhead approach that can be provided in VISC in many application areas is based on security-aware compilation and minor microarchitecture support. The approach requires alias analysis to determine the possible location sets for each memory access at compile time. A location set contains the possible memory ranges a memory access can go to at runtime.


Within disjoint groups of location sets, a scrambling rule similar to the one used for instructions would be defined and written into the register file. A small number of special registers could be provided if register pressure becomes an issue. One would need to make sure at compile time that memory accesses within each location set group are always generated with the same scrambling rule.


In one embodiment, the approach could have only a single rule for data. During runtime, whenever data is written, it is written scrambled. During execution of a load instruction, before the data is loaded into the register file, it is un-scrambled. Simple techniques, such as ones based on bit flipping, would add little overhead and could possibly be supported without increasing cache access time.
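The single-rule data path can be pictured as in the sketch below, where the rule is modeled as a bit-flip mask held in a special register; the register and mask width are assumptions used only for illustration.

```c
#include <stdint.h>

/* Stands in for the special register that holds the active data-scrambling
 * rule, here modeled as a simple bit-flip (XOR) mask. */
static uint32_t data_rule_reg;

/* Store path: data leaves the core already scrambled. */
static void scrambled_store(uint32_t *addr, uint32_t value)
{
    *addr = value ^ data_rule_reg;
}

/* Load path: data is un-scrambled before it reaches the register file.
 * XOR is its own inverse, so the same mask serves both directions. */
static uint32_t scrambled_load(const uint32_t *addr)
{
    return *addr ^ data_rule_reg;
}
```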


One could also support multiple active rules within a basic block. In that case, the static control instruction would need to identify the special register that should be used during runtime to un-scramble the memory instruction. A simple solution to support two different rules would be to have them defined in a static control instruction. For example, a bit vector could tell which memory operations would be using which scheme in a basic block.


As there are typically 1 to 3 memory operations in a basic block, three bits would be enough to encode this. More than two rules per block would require more bits per memory instruction. Within each loop one could use two data scrambling rules, but provide different scrambling between the various loops if possible.
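A sketch of this selection is given below; the layout of the bit vector (one bit per memory operation, in program order) is an assumed encoding used only to illustrate the idea.

```c
#include <stdint.h>

/* Two active data-scrambling masks for the current basic block. */
static uint32_t rule[2];

/* Bit k of the vector (carried by the static control instruction) selects
 * which rule applies to the k-th memory operation of the block.
 * e.g. bitvec = 0x5 (binary 101): memops 0 and 2 use rule[1], memop 1 uses rule[0]. */
static uint32_t mask_for_memop(uint8_t bitvec, unsigned k)
{
    return rule[(bitvec >> k) & 1u];
}
```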


A block-level overview of an embodiment of a VISC architecture 40 is shown in FIG. 3. As shown in the figure, software is strongly protected in (at boundary 42) caches and external memory and buses. Only physical tampering with the core processor circuitry (below boundary 42) and the device 44 that contains the private key may compromise VISC's protection of software. The figure shows one possible embodiment and is not intended to be limiting; different execution mechanisms and memory hierarchies are also possible and the protection boundary could be extended to other processor areas with VISC.


Further Protection Against Security Attacks


Cryptographic research has traditionally evaluated cipher systems by modeling cryptographic algorithms as ideal mathematical objects. Conventional techniques such as differential and linear cryptanalysis are very useful for exploring weaknesses in algorithms represented as mathematical objects. These techniques, however, cannot address weaknesses that are due to a particular implementation, which often involves both software and hardware components. The realities of a physical implementation can be extremely difficult to control and often result in leakage of side-channel information. Successful attacks are frequently based on a combination of techniques that leak information about a hidden secret and the use of this side-channel information to reduce the search space and mount an exhaustive attack.


There is a wide range of security attacks that can be mounted to reveal secrets in secure systems. Techniques developed have shown how surprisingly little side-channel information is required to break some common ciphers. Attacks have been proposed that use such information as timing measurements, power consumption, electromagnetic emissions and faulty hardware.


Protection Against Unauthorized Execution


Only processors that have the right private key can start executing VISC binaries correctly. Several products that contain the microprocessor could share the same key, but one could also provide support for an authorized party to upload a new key, making the binary run only on one particular product. Additional techniques, such as tagging each cache line in the instruction memory (with a hash such as MD5), can be provided to protect against unauthorized modifications in the instruction memory. Nevertheless, VISC already provides a lazy protection against unauthorized execution even without additional tagging. The protection is lazy in that a modification is typically not stopped immediately but only after 1 to 3 instructions.


Power Analysis Attacks


A Simple Power Attack (SPA) is based on directly monitoring the system's power consumption. Different attacks are possible depending on the capabilities of the attacker. In some situations the attacker may be allowed to run only a single encryption or decryption operation. Other attackers may have unlimited access to the hardware. The most powerful attackers not only have unlimited access, but also have detailed knowledge of the software.


A Differential Power Attack (DPA) is more powerful than an SPA attack because the attacker does not need to know as many details about how the algorithm was implemented. The technique also gains strength by using statistical analysis. The objective of DPA attacks is usually to determine the secret key used by a cryptographic algorithm.


In both of these cases small variations in the power consumed by different instructions and variations in power consumed while executing data memory read/write instructions can reveal data values. One way to prevent a power analysis attack is to mask the side-channel information with random calculations that increase the measurement noise.


The approach in an embodiment of this invention to reduce vulnerability to DPA attacks is similarly based on increasing the measurement noise. The approach leverages BlueRISC's compiler-architecture based power/energy optimization framework.


In one aspect, a method optimizes power consumption by providing multiple access paths managed by the compiler.


For example, in the case of the data cache, several mechanisms are provided such as fully statically managed, statically speculative, and various dynamic access modes. Energy reduction is achieved by eliminating redundant activity in the data cache by leveraging static information about data access patterns available at compile time.


The compiler predicts, for each memory access, which access path is the most energy efficient and selects that path. The energy consumed per memory instruction is lowest for the statically managed modes and highest for the conventional modes. With the provided architectural support, one can detect if an access path was incorrectly chosen at compile time, and the operation is rerun in a conventional way. In the preferred embodiment, the variation in consumed energy per memory access can be as much as a factor of 10.


The energy reduction approach alone is often sufficient to introduce enough power noise/variation, given that the various data memory access paths are already selected in a program- and data-dependent manner. Compared to introducing random calculations, such an approach to increasing the measurement noise has minimal performance impact.


The compiler approach that is used for energy optimization can be used to further randomize the power cost for data accesses in critical portions of codes such as cryptographic algorithms. This is accomplished at compile time by deliberately selecting access paths with different power requirements.


In addition, as described in an earlier section, one could make sure at compile time that even the content of data passed into the execute stages before store operations is modified from execution to execution for the same program. This would make it much more difficult to correlate data before stores with differential binary analysis. Because the approach is compiler driven, the level of protection can be adjusted at compile time in an application-specific manner without requiring changes to the hardware.


Fault Injection Attacks


The cryptographic literature lists attacks that are based on injecting faults into secure systems. For example, in 1996, Biham and Shamir described an attack on DES using 200 ciphertexts, in which one-bit errors are introduced by environmental stress such as low levels of ionizing radiation. Another type of attack has been demonstrated based on glitches introduced into the clock or power supply. The idea is that by varying the timing and duration of the glitch, the processor can be forced to execute a number of wrong instructions. Fault injection can be used in conjunction with other attacks such as power analysis.


The feasibility of such an attack in the context of VISC is questionable. It is more likely that such changes in the instruction stream would cause irrelevant and uninformative exceptions rather than leak side-channel information. This is due to the fact that VISC instructions can be protected efficiently against power analysis and that VISC instructions and opcodes are scrambled with different and reconfigurable keys, so such leaked information would be very difficult to interpret.


Spoofing, Splicing and Replay


Splicing attacks involve duplicating or reordering segments of instructions. Replay attacks are based on recording ciphertext (valid sequences of instructions) and executing it at a later time. Spoofing attacks typically involve changing data in the data memory. These attacks would likely be combined with some sort of physical tampering such as power analysis or fault injection. The objective is again to reveal some secret.


These attacks could be eliminated by the tagging of the instruction memory and the encryption/tagging of the data memory mentioned earlier. But even without tagging support, such attacks are unlikely to be successful against VISC, since VISC instructions are reconfigured at a fine, compiler-defined (and variable) granularity.


In one aspect, in VISC it is possible to have each instruction scrambled differently. For example, one could have a combined scheme where the key used to un-scramble is shifted after every instruction in the block, effectively changing the meaning of the opcodes at the individual instruction granularity.
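One illustrative form of such a combined scheme is sketched below, where the effective key is rotated by one bit after every instruction; the rotation step is an arbitrary choice for illustration, not a scheme mandated by the patent.

```c
#include <stdint.h>
#include <stddef.h>

static uint32_t rotl32(uint32_t x, unsigned r) {
    r &= 31u;
    return r ? (x << r) | (x >> (32u - r)) : x;
}

/* Un-scramble a block whose effective key shifts (here: rotates by one bit)
 * after every instruction, so even identical opcodes are encoded differently
 * at each position within the block. */
static void unscramble_block(const uint32_t *in, uint32_t *out, size_t n,
                             uint32_t block_key)
{
    uint32_t key = block_key;
    for (size_t i = 0; i < n; i++) {
        out[i] = in[i] ^ key;     /* per-position effective key */
        key = rotl32(key, 1);     /* shift the key after every instruction */
    }
}
```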


A combination of various randomly selected scrambling approaches and the variable-length blocks makes reconstructing valid sequences of instructions very hard. An exhaustive search would involve trying out possible block boundaries and thus exhaustively trying to re-engineer sequences of instructions. Permutations of bits in instructions will likely result in other legal instructions, further increasing the complexity of distinguishing real instruction sequences from mere sets of valid instructions.


Furthermore, as mentioned earlier, the meaning of the static instructions, i.e., the mapping between the scrambling schemes supported and the bits encoding them, is reconfigurable.


To give an approximation of the difficulty of breaking this encoding, one would need to try 2 to the power of 32 permutations (for a 32-bit ISA) for each instruction and try to combine variable length sequences of such instructions into valid instruction sequences. Note that many valid solutions may exist. Even if an individual static control instruction were to be recovered, its actual meaning would still be unknown (as it is modified by the reconfiguration mechanism). Furthermore, it would be impossible to distinguish real static (or regular) instructions from permutations of other real instructions.


VISC Security-Aware ISA


A key aspect is to scramble the interface between architecture and compilation. The mechanism to achieve this is based on inserting decoding-related bits into the instruction stream and scrambling, at compile time, the instruction blocks referred to by these bits. Our objective is to develop a microprocessor that effectively appears as a reconfigurable-ISA machine. Note that the approach is more powerful than that, as not only the opcodes but even operand bits are reconfigured from their original meaning.


In one aspect, one can encode the various scrambling schemes with one compact static instruction format. One possibility is to use a single static control-information-related instruction per basic block or super block that encodes the control information for the whole duration of the block at runtime. The added hardware component would be a different decode unit that we call static decode. This unit would be responsible for generating the control signals for the whole duration of the block. Overall, encoding static information for the duration of a basic block is attractive because the order of execution within the block is known.


To illustrate this aspect with a simple example, imagine that there are two un-scrambling-related bits in a control instruction. For example, if a control instruction contains two zero un-scrambling bits, the basic block instructions encoded with it could have two (or more) specific bit positions flipped. Un-scrambling can be done with very little overhead, by simply inverting those bits for all instructions (in the controlled block) at runtime.
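Continuing this example, the sketch below shows how the two control bits could each enable flipping of one fixed bit position in every instruction of the controlled block; the chosen bit positions are arbitrary and only illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define POS0 7u     /* arbitrary illustrative bit positions */
#define POS1 19u

/* Following the example above, a zero control bit enables flipping of one
 * fixed bit position in every instruction of the controlled block. */
static uint32_t mask_from_control_bits(unsigned c0, unsigned c1)
{
    uint32_t mask = 0;
    if (c0 == 0) mask |= 1u << POS0;
    if (c1 == 0) mask |= 1u << POS1;
    return mask;
}

/* Un-scrambling the block is then a single XOR per fetched instruction. */
static void unscramble_block(uint32_t *insns, size_t n, uint32_t mask)
{
    for (size_t i = 0; i < n; i++)
        insns[i] ^= mask;
}
```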


An advantage of this embodiment is that it does not require any modification to existing instructions, so it would be easy to add to existing designs. Most designs have reserved opcodes for future ISA extensions or co-processors where static control instructions could be incorporated. With our compiler-enabled instruction cache optimization techniques, the effect of code dilution on power consumption is practically negligible, as most of these added instructions will be fetched from a compiler-managed energy-efficient cache.


To extend the possible encoded combinations from the 2 to the power of 23 available (such as in an ARM ISA), one would need to periodically insert static region control instructions that would alter the meaning of a number of subsequent regular control instructions. In this way, the possible scrambling schemes supported can be extended to all 2 to the power of 32 permutations.


In one aspect, individual blocks have different scramblings. The actual scheme can be randomized at compile time, by randomly selecting a scrambling approach for each block. Such a scheme would make any two instances of the binary for the same application look different. Furthermore, the initial configuration parameter that defines the mapping between the bits in the control instructions and the supported schemes can also be used to make the scrambling scheme different for various applications or chips.


Static instructions at the root of control-flow subtrees define the scrambling of static instructions in the subtrees. This can be supported easily at runtime; a particular un-scrambling scheme would be used at runtime until a new static instruction overrides it, and so on. As each static instruction operates on well-defined boundaries, such as basic blocks, with no possibility of entering that code from elsewhere or branching out, there is a well-defined mapping between scrambling at compile time and un-scrambling at runtime that guarantees correct execution.


Other Embodiments

The invention is not limited to the specific embodiments described herein. Other types of compiler analyses and/or architecture support may be used. The invention may be applied to control security aspects in any appropriate component of a processor. The optimizations and/or analyses may be performed in a different order and combined with other techniques, in other embodiments.


Other embodiments not described herein are also within the scope of the following claims.

Claims
  • 1. A processor comprising: storage memory to store an instruction stream that corresponds to a single program executable, the instruction stream comprising coded control instructions and other coded instructions, at least some of the other coded instructions having different instruction set architectures, each of at least some of the coded control instructions comprising control information that defines a rule for use in decoding a corresponding other coded instruction that is located in a subsequent part of the instruction stream; and a single instruction-decoder comprising a static-decode part and an instruction-decode part, the static-decode part to decode a coded control instruction in the instruction stream to obtain control information from the coded control instruction, the control information being usable by the instruction-decode part to decode a corresponding other coded instruction in the instruction stream.
  • 2. The processor of claim 1, further comprising: a compiler to generate the instruction stream, wherein generating comprises adding the coded control instructions at compile-time.
  • 3. The processor of claim 1, wherein at least some of the coded control instructions are encoded using keys.
  • 4. The processor of claim 3, wherein at least some of the other coded instructions comprise constant fields that are coded at compile time.
  • 5. The processor of claim 3, wherein at least some of the other coded instructions are encoded using keys associated with logical program addresses.
  • 6. The processor of claim 1, wherein the coded control instructions comprise co-processor instructions.
  • 7. The processor of claim 1, further comprising a compiler to, at compile time: code uncoded instructions to produce the other coded instructions; generate the coded control instructions for corresponding other coded instructions using information about coding used to produce the corresponding other coded instructions; and generate the instruction stream from the other coded instructions and the coded control instructions.
  • 8. The processor of claim 1, wherein the processor is configured to decrypt a header block of the instruction stream using a public key.
  • 9. The processor of claim 1, wherein at least some of the other coded instructions are encoded by flipping selected bits, rotating bits, or a combination of flipping bits and rotating bits.
  • 10. A processor comprising: storage memory to store coded instructions for an instruction stream that corresponds to a single program executable, at least some of the coded instructions being scrambled for security so that the instruction stream comprises at least two different instruction set architectures; and a single instruction-decoder to decode a first type of the coded instructions to produce control information that defines schemes according to which bits of one or more corresponding coded instructions of a second type are scrambled in the instruction stream, wherein the single instruction-decoder is configured according to a scheme defined in the control information for a coded instruction of the first type to decode a corresponding and succeeding coded instruction of the second type, the single instruction-decoder being part of a pipeline of the processor.
  • 11. The processor of claim 10, wherein the first type of coded instruction comprises coded control instructions and the second type of coded instructions comprises other coded instructions; and wherein the single instruction-decoder is configured to: (i) obtain, from a first coded control instruction, first control information for decoding a first other coded instruction corresponding to a first instruction set architecture, (ii) decode the first other coded instruction using the first control information, (iii) obtain, from a second coded control instruction, second control information for decoding a second other coded instruction corresponding to a second instruction set architecture, and (iv) decode the second other coded instruction using the second control information.
  • 12. The processor of claim 10, wherein the pipeline comprises an execution stage.
  • 13. The processor of claim 12, wherein the processor is configured to use coded instructions that are changed at run-time.
  • 14. The processor of claim 10, wherein at least some of the coded instructions are scrambled for security by flipping selected bits, rotating bits, or a combination of flipping bits and rotating bits.
  • 15. A processor comprising: storage memory to store an instruction stream that corresponds to a single program executable, the instruction stream comprising coded control instructions and other coded instructions that are scrambled for security, resulting in at least two different instruction set architectures; wherein coded control instructions in the instruction stream comprise control information defining schemes according to which bits of corresponding other coded instructions in the coded instruction stream are scrambled; and a single instruction-decoder, which is part of a pipeline of the processor, to decode other coded instructions in the instruction stream using control information from corresponding coded control instructions and information in the other coded instructions.
  • 16. The processor of claim 15, wherein schemes according to which bits of corresponding other coded instructions are scrambled comprise flipping selected bits in a coded instruction, rotating bits in a coded instruction, or a combination of flipping bits and rotating bits in a coded instruction.
  • 17. A method for use in a processor, the method comprising: storing an instruction stream that corresponds to a single program executable, the instruction stream comprising coded control instructions and other coded instructions, at least some of the other coded instructions having different instruction set architectures, each of at least some of the coded control instructions comprising control information that defines a rule for use in decoding a corresponding other coded instruction that is located in a subsequent part of the instruction stream; and performing decoding using a single instruction-decoder comprised of a static-decode part and an instruction-decode part, the static-decode part decoding a coded control instruction in the instruction stream to obtain control information from the coded control instruction, the control information being usable by the instruction-decode part to decode a corresponding other coded instruction in the instruction stream.
  • 18. The method of claim 17, further comprising: generating the instruction stream, wherein generating comprises adding the coded control instructions at compile-time.
  • 19. The method of claim 18, wherein at least some of the other coded instructions are encoded using key-based encryption.
  • 20. The method of claim 17, wherein at least some of the coded control instructions are encoded using keys.
  • 21. The method of claim 20, further comprising: generating the instruction stream, wherein generating comprises adding the coded control instructions at run-time.
  • 22. The method of claim 20, wherein at least some of the other coded instructions are encoded using keys associated with logical program addresses.
  • 23. The method of claim 17, wherein at least some of the other control instructions comprise co-processor instructions.
  • 24. The method of claim 17, further comprising, at compile time: coding uncoded instructions to produce the other coded instructions; generating the coded control instructions for corresponding other coded instructions using information about coding used to produce the corresponding other coded instructions; and generating the instruction stream from the other coded instructions and the coded control instructions.
  • 25. The method of claim 17, further comprising decrypting a header block of the instruction stream using a public key.
  • 26. The method of claim 17, wherein at least some of the other coded instructions are encoded by flipping selected bits, rotating bits, or a combination of flipping bits and rotating bits.
  • 27. A method for use in a processor, the method comprising: storing coded instructions for an instruction stream that corresponds to a single program executable, at least some of the coded instructions being scrambled for security so that the instruction stream comprises at least two different instruction set architectures; and using a single instruction-decoder, decoding a first type of the coded instructions to produce control information that defines schemes according to which one or more corresponding coded instructions of a second type are scrambled in the instruction stream, wherein the single instruction-decoder is configured according to a scheme defined in the control information for a coded instruction of the first type to decode a corresponding and succeeding coded instruction of the second type, the single instruction-decoder being part of a pipeline of the processor.
  • 28. The method of claim 27, wherein the pipeline comprises an execution stage.
  • 29. The method of claim 28, further comprising changing at least some of the coded instructions at run-time.
  • 30. The method of claim 27, wherein the first type of coded instruction comprises coded control instructions and the second type of coded instructions comprises other coded instructions; and wherein decoding performed by the single instruction-decoder comprises: (i) obtaining, from a first coded control instruction, first control information for decoding a first other coded instruction corresponding to a first instruction set architecture, (ii) decoding the first other coded instruction using the first control information, (iii) obtaining, from a second coded control instruction, second control information for decoding a second other coded instruction corresponding to a second instruction set architecture, and (iv) decoding the second other coded instruction using the second control information.
  • 31. The method of claim 27, wherein at least some of the coded instructions are scrambled for security by flipping selected bits, rotating bits, or a combination of flipping bits and rotating bits.
  • 32. A method for use in a processor, the method comprising: storing an instruction stream that corresponds to a single program executable, the instruction stream comprising coded control instructions and other coded instructions that are scrambled for security, resulting in at least two different instruction set architectures; wherein coded control instructions in the instruction stream comprise control information defining schemes according to which bits of corresponding other coded instructions in the coded instruction stream are scrambled; and decoding instructions in the pipeline of the processor using a single instruction-decoder, wherein decoding comprises decoding other coded instructions in the instruction stream using control information from corresponding coded control instructions and information in the other coded instructions.
  • 33. The method of claim 32, wherein schemes according to which bits of corresponding other coded instructions are scrambled comprise flipping selected bits in a coded instruction, rotating bits in a coded instruction, or a combination of flipping bits and rotating bits in a coded instruction.
  • 34. A processor comprising: storing means for storing coded instructions for an instruction stream that corresponds to a single program executable, at least some of the coded instructions being scrambled for security so that the instruction stream comprises at least two different instruction set architectures; and decoding means comprising a single instruction-decoder for decoding a first type of the coded instructions to produce control information that defines schemes according to which bits of one or more corresponding coded instructions of a second type are scrambled in the instruction stream, wherein the single instruction-decoder is configured according to a scheme defined in the control information for a coded instruction of the first type to decode a corresponding and succeeding coded instruction of the second type, the single instruction-decoder being part of a pipeline of the processor.
  • 35. The processor of claim 34, wherein at least some of the coded instructions are scrambled for security by flipping selected bits, rotating bits, or a combination of flipping bits and rotating bits.
  • 36. A processor comprising: storing means for storing an instruction stream that corresponds to a single program executable, the instruction stream comprising coded control instructions and other coded instructions that are scrambled for security, resulting in at least two different instruction set architectures; wherein coded control instructions in the instruction stream comprise control information defining schemes according to which bits of corresponding other coded instructions in the coded instruction stream are scrambled; and decoding means comprising a single instruction-decoder for decoding instructions in the pipeline of the processor, wherein decoding comprises decoding other coded instructions in the instruction stream using control information from corresponding coded control instructions and information in the other coded instructions.
  • 37. The processor of claim 36, wherein schemes according to which bits of corresponding other coded instructions are scrambled comprise flipping selected bits in a coded instruction, rotating bits in a coded instruction, or a combination of flipping bits and rotating bits in a coded instruction.
  • 38. A processor comprising: storage means to store an instruction stream that corresponds to a single program executable, the instruction stream comprising coded control instructions and other coded instructions, at least some of the other coded instructions having different instruction set architectures, each of at least some of the coded control instructions comprising control information that defines a rule for use in decoding a corresponding other coded instruction that is located in a subsequent part of the instruction stream; and decoding means comprising a single instruction-decoder, the single instruction-decoder comprising a static-decode part and an instruction-decode part, the static-decode part to decode a coded control instruction in the instruction stream to obtain control information from the coded control instruction, the control information being usable by the instruction-decode part to decode a corresponding other coded instruction in the instruction stream.
  • 39. The processor of claim 38, wherein at least some of the other coded instructions are encoded using key-based encryption.
  • 40. The processor of claim 38, wherein the processor is configured to decrypt a header block of the instruction stream using a public key.
  • 41. The processor of claim 38, wherein at least some of the other coded instructions are encoded using key-based encryption.
  • 42. The processor of claim 38, wherein at least some of the other coded instructions are encoded by flipping selected bits, rotating bits, or a combination of flipping bits and rotating bits.
RELATED U.S. APPLICATION DATA

This application claims the benefit of U.S. Provisional Application No. 60/520,838, filed on Nov. 17, 2003, Confirmation No. 3408, entitled: IMPROVING SECURITY OF PROGRAM EXECUTABLES AND MICROPROCESSORS BASED ON COMPILER-ARCHITECTURE INTERACTION, the contents of which are hereby incorporated by reference into this application as if set forth herein in full.

Related Publications (1)
Number Date Country
20050108507 A1 May 2005 US
Provisional Applications (1)
Number Date Country
60520838 Nov 2003 US