Failure of a computer, or of application programs executing on a computer, can often result in the loss of significant amounts of data and intermediate calculations. The cause of failure can be either hardware or software related, but in either case the consequences can be expensive, particularly when data manipulations are interrupted mid-stream. In the case of large software applications, a failure might require an extensive effort to regenerate the application's state as it existed prior to the failure.
Generally, checkpoint and restoration techniques periodically save the process state during normal execution, and thereafter restore the saved state to a backup process following a failure. In this manner, the amount of lost work is limited to the progress made by the application process since the restored checkpoint.
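The following Python sketch illustrates this general checkpoint-and-restore pattern; it is an illustration only, not the specific embodiments described below, and the names `CheckpointStore`, `save`, and `restore` are hypothetical.

```python
import copy

class CheckpointStore:
    """Hypothetical store that retains the most recently saved state."""
    def __init__(self):
        self._saved_state = None

    def save(self, state):
        # Deep-copy so later mutations by the primary do not alter the checkpoint.
        self._saved_state = copy.deepcopy(state)

    def restore(self):
        # Return the last checkpoint; work done after this point is lost.
        return copy.deepcopy(self._saved_state)

# Primary process: checkpoint every tenth iteration of its main loop.
store = CheckpointStore()
state = {"counter": 0}
for step in range(1, 106):
    state["counter"] += 1
    if step % 10 == 0:            # checkpoint interval
        store.save(state)

# Backup process after a failure: resume from the restored checkpoint.
recovered = store.restore()
assert recovered["counter"] == 100   # the five steps since the last checkpoint are lost
```

As the final assertion shows, only the work performed since the last saved checkpoint must be redone.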
Traditionally, computers have stored checkpoint data either in system memory coupled to the computer's processor, or on other input/output (I/O) storage devices such as magnetic tape or disk. I/O storage devices can be attached to a system through an I/O bus such as PCI (originally named Peripheral Component Interconnect), or through a network such as Fibre Channel, InfiniBand, ServerNet, or Ethernet. I/O storage devices are typically slow, with access times of more than one millisecond. They utilize special I/O protocols such as the small computer systems interface (SCSI) protocol or the transmission control protocol/internet protocol (TCP/IP), and typically operate as block exchange devices (i.e., data is read or written in fixed-size blocks). A feature of these types of I/O storage devices is that they are persistent: when they lose power or are restarted, they retain the information previously stored on them. In addition, I/O storage devices can be accessed from multiple processors through shared I/O networks, even after some processors have failed.
As used herein, the term “persistent” refers to a computer memory storage device that can withstand a power reset without loss of the contents in memory. Persistent memory devices have been used to store data for starting or restarting software applications. In simple systems, persistent memory devices are static and not modified as the software executes. The initial state of the software environment is stored in persistent memory. In the event of a power failure to the computer or some other failure, the software restarts its execution from the initial state. One problem with this approach is that all intermediate calculations will have to be recomputed. This can be particularly onerous if large amounts of user data must be reloaded during this process. If some or all of the user data is no longer available, it may not be possible to reconstruct the pre-failure state.
System memory is generally connected to a processor through a system bus where such memory is relatively fast with guaranteed access times measured in tens of nanoseconds. Moreover, system memory can be directly accessed with byte-level granularity. System memory, however, is normally volatile such that its contents are lost if power is lost or if a system embodying such memory is restarted. Also, system memory is usually within the same fault domain as a processor such that if a processor fails, the attached memory also fails and may no longer be accessed. Metadata, which describes the layout of memory, is also lost when power is lost or when the system embodying such memory is restarted.
Prior art systems have used battery-backed dynamic random access memory (BBDRAM), solid-state disks, and network-attached volatile memory. Prior BBDRAM, for example, may have some performance advantages over true persistent memory. It is not, however, globally accessible. Moreover, BBDRAM that lies within the same fault domain as an attached CPU will be rendered inaccessible in the event of a CPU failure or operating system crash. Accordingly, BBDRAM is often used in situations where all system memory is persistent so that the system may be restarted quickly after a power failure or reboot. BBDRAM is still volatile during long power outages such that alternate means must be provided to store its contents before batteries drain. Importantly, this use of BBDRAM is very restrictive and not amenable for use in network-attached persistent memory applications, for example.
Battery-backed solid-state disks (BBSSD) have also been proposed for other implementations. These BBSSDs provide persistent memory, but functionally they emulate a disk drive. An important disadvantage of this approach is the additional latency associated with access to these devices through I/O adapters. This latency is inherent in the block-oriented and file-oriented storage models used by disks and, in turn, BBSSDs, which do not bypass the host computer's operating system. While it is possible to modify solid-state disks to eliminate some shortcomings, inherent latency cannot be eliminated because performance is limited by the I/O protocols and their associated device drivers. As with BBDRAM, additional technologies are required for providing the checkpoint state of an application program in a failed domain to a backup copy of the application program running in an operational domain.
In some embodiments, a system includes a network interface attached to a persistent memory unit. The persistent memory unit is configured to receive checkpoint data from a primary process, and to provide access to the checkpoint data for use in a backup process to support recovery capability in the event of a failure of the primary process. The network interface is configured to provide address translation information between virtual and physical addresses in the persistent memory unit. In other embodiments, the persistent memory unit is capable of storing multiple updates to the checkpoint state. The checkpoint state and the updates to the checkpoint state, if any, can be retrieved by the backup process periodically, or all at once upon failure of the primary process.
In yet other embodiments, a method for recovering the operational state of a primary process includes mapping virtual addresses of a persistent memory unit to physical addresses of the persistent memory unit, and receiving checkpoint data regarding the operational state of the primary process in the persistent memory unit. In some embodiments, the checkpoint data is provided to a backup process. In still other embodiments, the context information regarding the addresses is provided to the primary process and the backup process.
In other embodiments, the persistent memory unit provides the checkpoint data to the backup process when the primary process fails. Alternatively, in still other embodiments, the persistent memory unit can be configured to store multiple sets of checkpoint data sent from the processor at successive time intervals, or to provide the multiple sets of checkpoint data to the backup process at one time.
These and other embodiments will be understood by one of ordinary skill in the art to which this disclosure pertains upon reading the present disclosure.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain its principles:
Whereas prior art systems have used persistent memory only in the context of block-oriented and file-oriented I/O architectures with their relatively large latencies, the present teachings describe memory that is persistent like traditional I/O storage devices, but that can be accessed like system memory with fine granularity and low latency. Systems according to the present teachings allow application programs to store one or more checkpoint states, which can be accessed by a backup copy of the application in the event of a hardware or software failure that prevents the primary application program from executing.
As shown in
SAN 112 accesses NPMU 102 via network interface (NI) 114. NPMU 102 combines the durability and recoverability of storage I/O with the speed and fine-grained access of system memory. Like storage, the contents of NPMU 102 can survive the loss of power or system restart. Like remote memory, NPMU 102 can be accessed across SAN 112. However, unlike directly-connected memory, NPMU 102 can continue to be accessed even after one or more processor nodes 104, 106 have failed.
Primary process 116 running on processor node 104 can initiate remote commands, for example, a write command to send data for checkpoint state 120 in NPMU 102. Primary process 116 can also provide data for checkpoint state 120 periodically. Backup process 122 running on processor node 106 is configured to perform the functions of primary process 116 in the event of a failure of primary process 116. Backup process 122 can also initiate remote read and write operations to NPMU 102, such as a read command to access checkpoint state 120 periodically and/or upon failure of primary process 116.
In a write operation initiated by processor node 104, for example, once data has been successfully stored in NPMU 102, the data is durable and will survive a power outage or failure of processor node 104, 106. In particular, memory contents will be maintained as long as NPMU 102 continues to function correctly, even after the power has been disconnected for an extended period of time, or the operating system on processor node 104, 106 has been rebooted. In addition to data transfer operations, NPMU 102 can be configured to respond to various management commands.
In some embodiments, processor nodes 104, 106 are computer systems that include at least one central processing unit (CPU) and system memory, wherein the CPU is configured to run operating systems 144, 146. Processor nodes 104, 106 can additionally be configured to run one or more of any type of application program, such as primary process 116 and backup process 122. Although system 100 is shown with two processor nodes 104, 106, additional processor nodes (not shown) can communicate with SAN 112 as well as with processor nodes 104, 106 over a network (not shown) via network interfaces 108, 110, 114.
In some embodiments, SAN 112 is an RDMA-enabled network connecting multiple network interfaces (NIs), such as NIs 108, 110, and 114, that can perform byte-level memory operations between two processor nodes 104, 106, or between processor nodes 104, 106 and a device such as NPMU 102, without involving operating systems 144, 146. In this case, SAN 112 is configured to perform virtual-to-physical address translation to map contiguous network virtual address spaces onto discontiguous physical address spaces. This type of address translation allows for dynamic management of NPMU 102. Commercially available SANs 112 with RDMA capability include, but are not limited to, ServerNet, GigaNet, InfiniBand, and all Virtual Interface Architecture-compliant SANs.
Processor nodes 104, 106 are generally attached to SAN 112 through respective NIs 108, 110; however, many variations are possible. More generally, a processor node need only be connected to an apparatus for communicating read and write operations. For example, in another implementation of this embodiment, processor nodes 104, 106 include various CPUs on a motherboard that utilize a data bus, for example a PCI bus, instead of SAN 112. It is noted that the present teachings can be scaled up or down to accommodate larger or smaller implementations as needed.
Network interfaces (NI) 108, 110, 114 are communicatively coupled to NPMU 102 to allow for access to the persistent memory contained within NPMU 102. Any suitable technology can be utilized for the various components of
Notably, memory access granularity can be adjusted as required in system 100. The access speed of memory in NPMU 102 should also be fast enough to support the transfer rates of the data communication scheme implemented for system 100.
It should be noted that persistence is provided only to the extent that the persistent memory in use can hold data. For example, in many applications, persistent memory may be required to store data regardless of how long power is lost, whereas in other applications, persistent memory may only need to retain data for a few minutes or hours.
Memory management functionality can be provided in system 100 to create one or more independent, indirectly-addressed memory regions. Moreover, NPMU meta-data can be provided for memory recovery after loss of power or processor failure. Meta-data can include, for example, the contents and layout of the protected memory regions within NPMU 102. In this way, NPMU 102 stores the data as well as the manner of using the data. When the need arises, NPMU 102 can provide the meta-data to backup process 122 to allow system 100 to recover from a power or system failure associated with primary process 116.
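The following sketch illustrates the kind of self-describing meta-data discussed above; the record layout and field names such as `region_name` and `latest_update` are assumptions for illustration, not structures defined by the present teachings.

```python
import json

# Hypothetical meta-data record kept in the persistent memory unit alongside
# the data it describes, so a backup process can interpret the protected
# regions after a failure without help from the failed primary.
metadata = {
    "regions": [
        {"region_name": "checkpoint_state", "offset": 0,    "length": 4096},
        {"region_name": "update_area_1",    "offset": 4096, "length": 1024},
        {"region_name": "update_area_2",    "offset": 5120, "length": 1024},
    ],
    "latest_update": "update_area_2",
}

# The meta-data itself is written to a reserved, well-known location so that
# it survives together with the checkpoint data it describes.
serialized = json.dumps(metadata).encode("utf-8")
print(len(serialized), "bytes of meta-data")
```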
In the embodiment of system 100 shown in
For example, primary process 116 may read in large blocks of data during initialization, and update various segments of the data at different phases of operation. The initial checkpoint state 120 can include a backup of all the data, while update areas 128-132 can be used to store smaller segments of the data as the segments are updated. Backup process 122 can then initialize itself with checkpoint state 120, and apply data from the subsequent update areas 128-132 in the order they were written. Further, backup process 122 does not have to wait until primary process 116 fails to begin initializing itself with data from checkpoint state 120 and update areas 128-132. This is especially true when there is potential to overflow the amount of storage space available for checkpoint state 120 and update areas 128-132. This is also true when it would take a greater amount of time than desired for backup process 122 to recreate the state of primary process 116 after primary process 116 fails.
Whether backup process 122 reads checkpoint state 120 and update areas 128-132 periodically, or only when primary process 116 fails, backup process 122 can read any previously unread portion of checkpoint state 120 and update areas 128-132 before taking over for primary process 116.
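The replay order described above can be sketched as follows. This is an illustrative model only; the names `base_checkpoint`, `update_areas`, and `apply_update` are assumptions rather than elements of the embodiment, and the in-memory lists stand in for regions of the persistent memory unit.

```python
# Hypothetical stand-ins for the contents of the persistent memory unit.
base_checkpoint = {"config": "initial", "records": list(range(1000))}
update_areas = []          # incremental updates appended by the primary, in write order
applied_count = 0          # how far the backup has already replayed

def apply_update(state, update):
    """Apply one incremental update (here: overwrite a slice of the records)."""
    start, values = update
    state["records"][start:start + len(values)] = values

# The backup initializes itself from the full checkpoint once...
backup_state = {"config": base_checkpoint["config"],
                "records": list(base_checkpoint["records"])}

# ...and can periodically drain any unread updates, in the order written,
# so the update areas do not overflow and takeover is fast.
def drain_updates():
    global applied_count
    while applied_count < len(update_areas):
        apply_update(backup_state, update_areas[applied_count])
        applied_count += 1

# The primary writes two incremental updates; the backup drains them.
update_areas.append((0, [42, 43]))
update_areas.append((10, [99]))
drain_updates()
assert backup_state["records"][0:2] == [42, 43]
```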
Utilizing NPMU 102 allows primary process 116 to store checkpoint state 120 regardless of the identity, location, or operational state of backup process 122. Backup process 122 can be created in any remote system that has access to NPMU 102. Primary process 116 can write checkpoint state 120 and/or update areas 128-132 whenever required without waiting for backup process 122 to acknowledge receipt of messages. Additionally, NPMU 102 allows efficient use of available information technology (IT) resources since backup process 122 only needs to execute when (1) primary process 116 fails, or (2) information must be read periodically from checkpoint state 120 and/or update areas 128-132 to avoid overflowing NPMU 102. In contrast, some previously known checkpointing techniques utilize message passing between a primary process and a backup process to communicate checkpoint information. In such techniques, the primary process requires information regarding the identity and location of the backup process. Additionally, the backup process must be operational in order to synchronize with the primary process and receive the checkpoint message.
Further, NPMU 102 can be implemented in hardware, thereby providing fast access for read and write operations. Other previously known checkpointing techniques store checkpoint information on magnetic or optical media, which requires much more time to access than NPMU 102.
Various embodiments of NPMU 102 can be managed to facilitate resource allocation and sharing. In some embodiments, NPMU 102 is managed by persistent memory manager (PMM) 140, as shown in
Note that because NPMU 102 can be durable, and can maintain a self-describing body of persistent data, meta-data related to existing persistent memory regions can be stored on NPMU 102. PMM 140 can perform management tasks that keep the meta-data on NPMU 102 consistent with the persistent data stored on NPMU 102. In this manner, the NPMU's stored data can always be interpreted using the NPMU's stored meta-data and thereby recovered after a possible system shutdown or failure. NPMU 102 thus maintains in a persistent manner not only the data being manipulated but also the state of the processing of such data. When recovery is needed, system 100 using NPMU 102 is thus able to recover and continue operation from the memory state in which a power failure or operating system crash occurred.
As described with reference to
As shown, PM virtual address 402 can actually correspond to a PM physical address 436, and so on. Accordingly, NPMU 102 can provide the appropriate translation from the PM virtual address space to the PM physical address space and vice versa. In this way, the translation mechanism allows NPMU 102 to present contiguous virtual address ranges to processor nodes 104, 106, while still allowing dynamic management of the NPMU's physical memory. This can be important because of the persistent nature of the data on an NPMU 102. Due to configuration changes, the number of processes accessing a particular NPMU 102, or possibly the sizes of their respective allocations, may change over time. The address translation mechanism allows NPMU 102 to readily accommodate such changes without loss of data. The address translation mechanism further allows easy and efficient use of persistent memory capacity without forcing the processor nodes 104, 106 either to anticipate future memory needs in advance of allocation or to waste persistent memory capacity through pessimistic allocation.
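The translation described above can be illustrated with a simple page-granular map, shown below as a hedged sketch; the page size, the `page_table` contents, and the `translate` function are assumptions for illustration only.

```python
PAGE_SIZE = 4096  # assumed page size for illustration

# Contiguous PM virtual pages 0..3 mapped onto discontiguous physical pages.
page_table = {0: 17, 1: 5, 2: 42, 3: 9}   # virtual page -> physical page

def translate(pm_virtual_address):
    """Translate a PM virtual address to a PM physical address."""
    vpage, offset = divmod(pm_virtual_address, PAGE_SIZE)
    if vpage not in page_table:
        raise ValueError("address outside the allocated virtual range")
    return page_table[vpage] * PAGE_SIZE + offset

# The processor node sees one contiguous virtual range...
assert translate(0) == 17 * PAGE_SIZE
# ...while adjacent virtual pages land on scattered physical pages.
assert translate(PAGE_SIZE + 8) == 5 * PAGE_SIZE + 8
```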
With reference again to
When processor node 104 requests PMM 140 to open (i.e., allocate and then begin to use) a region of persistent memory in NPMU 102, NPMU's NI 114 can be programmed by PMM 140 to allow processor node 104 to access the appropriate region. This programming allocates a block of network virtual addresses and maps (i.e., translates) them to a set of physical pages in physical memory. The range of PM virtual addresses can be contiguous regardless of how many pages of PM physical address are to be accessed. The physical pages can, however, be anywhere within the PM physical memory. Upon successful set-up of the translation, NPMU 102 can notify the requesting processor node 104 of the PM virtual address of the contiguous block. Once open, processor node 104 can access NPMU memory pages by issuing read or write operations to NPMU 102. NPMU 102 can also notify subsequent requesting processor nodes 106 that wish to access data provided by processor node 104 of the virtual address of the corresponding memory. PMM 140 can translate the virtual address to the corresponding physical address of the memory to provide requested information, such as checkpoint state 120 and/or update areas 128-132, to backup process 122 in processor node 106.
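A sketch of this open/allocate step appears below, under the assumption of a simple free-page allocator; `open_region`, `free_physical_pages`, and the returned address convention are illustrative and not taken from the embodiment.

```python
PAGE_SIZE = 4096
free_physical_pages = [3, 11, 7, 20, 8]    # hypothetical free list (unordered)
page_table = {}                            # virtual page -> physical page
next_virtual_page = 0

def open_region(num_pages):
    """Allocate a contiguous virtual block backed by scattered physical pages,
    and return the starting PM virtual address to the requesting node."""
    global next_virtual_page
    if num_pages > len(free_physical_pages):
        raise MemoryError("not enough persistent memory pages")
    start_vpage = next_virtual_page
    for i in range(num_pages):
        page_table[start_vpage + i] = free_physical_pages.pop()
    next_virtual_page = start_vpage + num_pages
    return start_vpage * PAGE_SIZE

# A processor node opens a three-page region and receives one contiguous
# virtual address range, even though the physical pages are discontiguous.
start_address = open_region(3)
print(hex(start_address), page_table)
```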
In some embodiments, backup process 122 can be configured with information regarding the location of checkpoint state 120 and/or update areas 128-132. In other embodiments, backup process 122 can issue a message requesting the location of checkpoint state 120 and update areas 128-132 from PMM 140, NPMU 102, and/or primary process 116 at runtime. PMM 140, NPMU 102, and/or primary process 116 then issue a response message with the requested location of checkpoint state 120 and update areas 128-132 in NPMU 102. In some embodiments, PMM 140 records information regarding the starting and ending address of the latest update to checkpoint state 120, whether the latest information resides in checkpoint state 120 or update areas 128-132. The starting and ending addresses of the most current checkpoint state 120 and update areas 128-132 can then be provided upon request to backup process 122. Permission to access memory resources in NPMU 102 can be maintained in Translation and Protection Table (TPT) 142, which is shown in NPMU 102. PMM 140 can create entries in TPT 142 with appropriate permissions at the time of creating or opening persistent memory regions. For instance, primary process 116 requests PMM 140 to create a region with permissions to write. Subsequently, backup process 122 opens that region with permissions to read.
Primary process 116 and backup process 122 can communicate with PMM 140 and access NPMU 102 through their respective NIs. Respective operating systems (OS) 144, 146 in processor nodes 104, 106 not only manage access to NIs 108, 110, but also maintain context information about the connections created through those NIs. Information regarding access rights and connection contexts can be stored by the respective processor nodes 104, 106.
Primary process 116 and backup process 122 must obtain permission from their respective operating systems 144, 146 in order to send requests to PMM 140 to open or create a region in NPMU 102. PMM 140 sets up appropriate entries in TPT 142 and returns the granted access rights to the requestor.
Only after the access rights have been obtained will respective operating systems 144, 146 allow primary process 116 or backup process 122 to write or read the physical memory contents from NPMU 102 within their open regions. The access rights are enforced by NI 114, which configures its state from entries in TPT 142 maintained by PMM 140 at NPMU 102.
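The following is a hedged sketch of how such permission entries might be checked; the table layout, process names, and `check_access` function are assumptions for illustration, not the TPT structure itself.

```python
# Hypothetical translation-and-protection entries: (region, process) -> allowed
# operations. The PMM creates entries when regions are created or opened; the
# NI consults them on every access.
tpt = {
    ("checkpoint_region", "primary_process"): {"write"},
    ("checkpoint_region", "backup_process"):  {"read"},
}

def check_access(region, process, operation):
    """Raise if the process lacks the requested permission on the region."""
    allowed = tpt.get((region, process), set())
    if operation not in allowed:
        raise PermissionError(f"{process} may not {operation} {region}")

check_access("checkpoint_region", "primary_process", "write")    # allowed
check_access("checkpoint_region", "backup_process", "read")      # allowed
try:
    check_access("checkpoint_region", "backup_process", "write") # denied
except PermissionError as err:
    print(err)
```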
In some embodiments, if primary process 116 or backup process 122 chooses to establish a connection with NPMU 102, and then sends write or read requests over that connection, the access rights can be ‘bound’ to the connection and need not be repeated with each request. If primary process 116 and backup process 122 choose to send requests to the NPMU 102 without first establishing a connection, then each request can include the authentication information contained in the access rights.
NPMU 102 can directly authenticate PMM 140. A variety of implementation schemes can be utilized. In some embodiments, PMM 140 takes ownership of certain NPMUs 102 when a particular NPMU 102 is first connected to SAN 112. In such a situation, PMM 140 initializes TPT 142 on NPMU 102 to grant itself write permission into TPT 142. Other embodiments can utilize password-based authentication, in which NPMU 102 validates requests from PMM 140 using a pre-configured password known only to PMMs 140. A variety of other schemes are possible, including certificate-based authentication, which requires SAN 112 to support a third party authentication service to authenticate the communicating entities to each other.
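As one hedged illustration of the password-based variant, management requests could carry a keyed digest computed from the pre-configured shared secret; this HMAC-based sketch is an assumption for illustration and not the scheme defined here.

```python
import hmac
import hashlib

SHARED_SECRET = b"preconfigured-pmm-password"   # known only to PMMs and the NPMU

def sign_request(payload: bytes) -> bytes:
    """PMM side: attach a keyed digest to a management request."""
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()

def validate_request(payload: bytes, tag: bytes) -> bool:
    """NPMU side: accept the request only if the digest matches."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

request = b"create region=checkpoint size=4096 perms=write"
tag = sign_request(request)
assert validate_request(request, tag)
assert not validate_request(b"tampered request", tag)
```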
The further functionality of the present approach as shown, for example, in
A remote write to persistent memory is similar. Processor node 104 provides a starting PM network virtual address and offset as well as a context identifier (in the case of multiple address spaces) for NPMU 102. As before, the PM network virtual address range must fall within the allocated range. Processor node 104 also provides a pointer to the physical address of the data to be transmitted. NI 108 in processor node 104 then issues a remote write command to NI 114 in NPMU 102 and begins sending data. NI 114 translates the start address to a physical address in NPMU 102 using translation tables associated with the region. NPMU 102 then stores data starting at the translated physical address. NI 114 continues translating addresses as the transfer crosses page boundaries, since the physical pages of contiguous PM network virtual addresses do not necessarily translate to contiguous PM physical addresses. When the write command is completed, NI 108 marks the write transfer as completed. Any waiting processes can then be notified and, in turn, proceed.
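The page-boundary behavior of such a write can be sketched as follows; `remote_write`, the page map, and the byte-array memory model are illustrative assumptions, and in practice the translation is performed by the NI hardware rather than in software.

```python
PAGE_SIZE = 4096
page_table = {0: 6, 1: 2, 2: 9}                       # virtual -> physical page
physical_memory = bytearray((max(page_table.values()) + 1) * PAGE_SIZE)

def remote_write(pm_virtual_address, data):
    """Write 'data' starting at a PM virtual address, translating page by page
    because contiguous virtual pages may map to discontiguous physical pages."""
    remaining = memoryview(data)
    addr = pm_virtual_address
    while remaining:
        vpage, offset = divmod(addr, PAGE_SIZE)
        chunk = min(PAGE_SIZE - offset, len(remaining))   # stop at page boundary
        phys = page_table[vpage] * PAGE_SIZE + offset
        physical_memory[phys:phys + chunk] = remaining[:chunk]
        remaining = remaining[chunk:]
        addr += chunk

# A write that spans the boundary between virtual pages 0 and 1 lands in
# physical pages 6 and 2, respectively.
remote_write(PAGE_SIZE - 4, b"ABCDEFGH")
assert bytes(physical_memory[7 * PAGE_SIZE - 4: 7 * PAGE_SIZE]) == b"ABCD"
assert bytes(physical_memory[2 * PAGE_SIZE: 2 * PAGE_SIZE + 4]) == b"EFGH"
```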
It should be noted that in latency testing of one embodiment of NPMU 102 according to the present teachings, memory accesses well within 80 microseconds could be achieved. The performance of NPMU 102 compares very favorably to alternative I/O operations requiring over 800 microseconds. Indeed, this result is possible because the latencies of I/O operations, including their necessary interrupts, are avoided. The NPMU according to the present teachings therefore has the persistence of storage with the fine-grained access of system memory.
In some embodiments, processor units 104, 106, NPMU 102, and PMM 140 can be implemented on a computer system 500 such as shown in
A computer-readable volatile memory such as random access memory (RAM) 509 can also be coupled to bus 502 to store information and instructions to be executed by CPU 504. Moreover, computer-readable read only memory (ROM) 510 can also be coupled to bus 502 to store static information and instructions that can be accessed by CPU 504. A data storage device 512 such as a magnetic or optical disk media can also be coupled to bus 502 to store large amounts of information and instructions. An alphanumeric input device 514 including alphanumeric and function keys, and a cursor control device 516 such as a mouse, can be coupled to bus 502 to enable a user to input information and commands to CPU 504.
One or more communications ports 518 can be included in system 500 to enable communication with various peripheral devices such as printers; external networks such as SAN 112; and other processing systems such as processor nodes 104, 106 (
Display 522 can be coupled to bus 502 to display information to a user of system 500. Display 522 may be a liquid crystal device, cathode ray tube, or other display device suitable for creating graphic images and alphanumeric characters recognizable by the user. The alphanumeric input device 514 and cursor control device 516 allow the computer user to dynamically signal the two dimensional movement of a visible symbol (pointer) on display 522.
In some embodiments, components in computer system 500 can communicate with each other and with other external networks via suitable interface links such as any one or combination of T1, ISDN, cable line, a wireless connection through a cellular or satellite network, or a local data transport system such as Ethernet or token ring over a local area network. Any suitable communication protocol, such as Hypertext Transfer Protocol (HTTP) or Transmission Control Protocol/Internet Protocol (TCP/IP), can be utilized to communicate with other components in external networks. Additionally, computer system 500 can be embodied in any suitable computing device, and so include personal data assistants (PDAs), telephones with display areas, network appliances, desktops, laptops, X-window terminals, or other such computing devices.
Logic instructions can be stored on a computer-readable medium, or accessed in the form of electronic signals. The logic modules, processing systems, and circuitry described herein may be implemented using any suitable combination of hardware, software, and/or firmware, such as Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), or other suitable devices. The logic modules can be independently implemented or included in one of the other system components. Similarly, other components are disclosed herein as separate and discrete components. These components may, however, be combined to form larger or different software modules, logic modules, integrated circuits, or electrical assemblies, if desired.
While the present disclosure describes various embodiments, these embodiments are to be understood as illustrative and do not limit the claim scope. Many variations, modifications, additions, and improvements of the described embodiments are possible. For example, those having ordinary skill in the art will readily implement the processes necessary to provide the structures and methods disclosed herein. Variations and modifications of the embodiments disclosed herein may also be made while remaining within the scope of the following claims. The functionality and combinations of functionality of the individual modules can be any appropriate functionality. In the claims, unless otherwise indicated, the article "a" refers to "one or more than one."