The present invention is directed to data storage management. In particular, the present invention is directed to methods and apparatuses for resetting snapshots.
The need to store digital files, documents, pictures, images and other data continues to increase rapidly. In connection with the electronic storage of data, various data storage systems have been devised for the rapid and secure storage of large amounts of data. Such systems may include one or a plurality of storage devices that are used in a coordinated fashion. Systems are also available in which data can be distributed across multiple storage devices such that data will not be irretrievably lost if one of the storage devices (or, in some cases, more than one storage device) fails. Systems that coordinate operation of a number of individual storage devices can also provide improved data access and/or storage times. Examples of systems that can provide such advantages can be found in the various RAID (redundant array of independent disks) levels that have been developed. Whether implemented using one or a plurality of storage devices, the storage provided by a data storage system can be treated as one or more storage volumes.
In order to facilitate the availability of desired data, it is often advantageous to maintain different versions of a data storage volume. Indeed, data storage systems are available that can provide at least limited data archiving through backup facilities and/or snapshot facilities. The use of snapshot facilities greatly reduces the amount of storage space required for archiving large amounts of data.
Snapshots provide a versatile feature that is useful for data recovery operations, such as backup and recovery of storage elements. However, traditional backup systems are limited by snapshot system requirements and restrictions. More specifically, backup systems using snapshot applications for backups typically have to configure backup settings based on the features and parameters defined by the snapshot application. Most backup systems perform systematic backups (e.g., daily, weekly, monthly, etc.) of a data storage volume. The frequency of the backups generally depends upon the use of the storage volume and whether a significant amount of data changes on a regular basis. Furthermore, most backup systems utilize only a fixed number of snapshots. With this restriction, backup systems are generally required to delete the oldest snapshot prior to creating a new snapshot that represents the current point-in-time of the data storage volume. This is cumbersome for backup system managers because each time a new snapshot is created it is assigned new attributes, such as a World Wide Name (WWN), Logical Unit Number (LUN), Serial Number, and the like, which requires the backup system manager to reconfigure the backup application with the new snapshot volume information. Furthermore, because the volume typically appears to the backup application as a new volume, incremental backups cannot be performed; the backup application has no knowledge of the “new” volume.
The present invention is directed to solving these and other problems and disadvantages of the prior art. In accordance with embodiments of the present invention, a service for efficiently resetting a snapshot is provided. The method generally comprises receiving a command to reset a first snapshot of a master volume, where the first snapshot includes a first array partition and first snapshot data. The method continues by disassociating the first array partition from the first snapshot and associating the first array partition with a second snapshot of the master volume. After disassociating and associating the first array partition, the method continues by updating the first array partition to reflect a size of the master volume at a point-in-time associated with the second snapshot. Typically the first array partition is disassociated from the first snapshot prior to being associated with the second snapshot, but such an order of events is not required. For example, the disassociation and association may be contemporaneous or the association may occur prior to the disassociation.
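The disassociate/associate/update sequence just described can be sketched as follows. This is an illustrative model only; the `ArrayPartition` and `Snapshot` classes, their fields, and `reset_snapshot` are hypothetical names, not part of the claimed apparatus:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArrayPartition:
    wwn: str      # global identifier; unchanged by a reset
    lun: int
    serial: str
    size: int     # master volume size at the snapshot's point-in-time

@dataclass
class Snapshot:
    partition: Optional[ArrayPartition] = None

def reset_snapshot(first: Snapshot, second: Snapshot, master_size: int) -> None:
    """Move the array partition from `first` to `second`, then update it to
    reflect the master volume's current size; WWN/LUN/serial stay untouched."""
    partition = first.partition
    first.partition = None           # disassociate from the first snapshot
    second.partition = partition     # associate with the second snapshot
    partition.size = master_size     # update size to the current point-in-time

# usage
p = ArrayPartition(wwn="wwn-01", lun=3, serial="SN-001", size=100)
old_snap, new_snap = Snapshot(partition=p), Snapshot()
reset_snapshot(old_snap, new_snap, master_size=120)
```

Note that the partition object itself is never recreated, which is why its identifying attributes survive the reset.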
As used herein, the terms “partition” and “array partition” represent the entity (e.g., the snapshot and corresponding master volume) to the outside world, that is, to applications running outside the storage controller. In essence, the array partition represents the attributes and identity of the volume: its name, LUN, and serial number. These identifying characteristics may be used by a backup application and by other entities, such as access control lists and zoning, to indicate how the volume is to be used. One aspect of the present invention is that these identifying characteristics remain unchanged during and after a snapshot reset. Only the underlying data is changed, so that to the external application it appears as though the data has been updated, rather than a snapshot having been removed and its data replaced.
In accordance with at least some embodiments of the present invention, the first snapshot (i.e., the snapshot initially associated with the first array partition) is deleted and the second snapshot (i.e., the snapshot subsequently associated with the first array partition) is created as a new snapshot via the reset operation. However, in accordance with certain embodiments of the present invention, the second snapshot may correspond to a snapshot in existence before the reset operation is initiated. Accordingly, an array partition may be transferred between existing snapshots as well as from an existing snapshot to a new snapshot.
In accordance with at least some embodiments of the present invention, a device for controlling a storage system is provided. The device may comprise a reset application adapted to receive a reset command and, in response to receiving a reset command, transfer a first array partition associated with a first snapshot of a master volume to a second snapshot of the master volume, where the first array partition provides access to a virtual disk drive which can read and/or write fixed memory block addresses. The array partition transferred from the first snapshot to the second snapshot carries with it certain attributes (e.g., a global identifier such as a WWN, a LUN, a Serial Number, and a zoning of the LUN) that allow a backup application to reference the second snapshot in the same way that it previously referenced the first snapshot. Accordingly, reconfiguration of backup systems is minimized due to the use of a recycled array partition realized under a snapshot reset operation.
Additional features and advantages of embodiments of the present invention will become more readily apparent from the following description, particularly when taken together with the accompanying drawings.
In accordance with embodiments of the present invention, a snapshot is a block level point-in-time representation of data on a storage volume. The data is essentially frozen in time at the instant that the snapshot is taken. Although data on the storage volume may change as a result of write operations, the data represented by the snapshot will remain constant and frozen in time at the instant that the snapshot was taken. In order to preserve snapshot data, a backing store, also known as a snap pool, is used to store data that is not otherwise represented in the storage volume and snapshot metadata. All data and metadata associated with the snapshot is stored in the backing store. In accordance with embodiments of the present invention, data is stored within the snapshot in “chunks”. A chunk is equivalent to a number of Logical Block Addresses (LBAs). Alternatively or in addition, data can be stored within subchunks. A subchunk is a fixed size subset of a chunk. Pointers, table entries, or other data structures can be used to identify the location of a chunk in the backing store.
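The chunk and subchunk organization above implies a simple address mapping. A minimal sketch follows; the specific sizes (2048 LBAs per chunk, 128 LBAs per subchunk) are illustrative assumptions, not values stated in the text:

```python
# Illustrative sizes (assumed, not specified by the text):
# a chunk of 2048 LBAs and subchunks of 128 LBAs each.
LBAS_PER_CHUNK = 2048
LBAS_PER_SUBCHUNK = 128

def locate(lba: int) -> tuple:
    """Map a logical block address to (chunk index, subchunk index within chunk)."""
    chunk = lba // LBAS_PER_CHUNK
    subchunk = (lba % LBAS_PER_CHUNK) // LBAS_PER_SUBCHUNK
    return chunk, subchunk

# usage: LBA 5000 falls in chunk 2 (LBAs 4096-6143), subchunk 7 of that chunk
```

A lookup structure (pointers or table entries, as noted above) would then map each chunk index to its location in the backing store.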
The data storage systems 104, 108 are typically interconnected to one another through an in-band network 120. The in-band network 120 may also interconnect the data storage systems 104, 108 to a host computer 112 and/or an administrative computer 116. The electronic data system 100 may also include an out-of-band network 124 interconnecting some or all of the electronic data system 100 nodes 104, 108, 112 and/or 116. For example, one or more host computers 112 are connected to each data storage system 104, 108. For instance, a first data storage system 104 is connected to a second data storage system 108 across some distance by a Fibre Channel or a TCP/IP network 120, and each of these data storage systems 104, 108 is connected to a host computer 112 through an in-band 120 and/or an out-of-band 124 network.
The in-band or storage area network 120 generally functions to transport data between data storage systems 104 and/or 108 and host devices 112, and can be any data pipe capable of supporting multiple initiators and targets. Accordingly, examples of in-band networks 120 include Fibre Channel (FC), iSCSI, parallel SCSI, Ethernet, ESCON, or FICON connections or networks, which may typically be characterized by an ability to transfer relatively large amounts of data at medium to high bandwidths. The out-of-band network 124 generally functions to support the transfer of communications and/or commands between various network nodes, such as data storage resource systems 104, 108, host computer 112, and/or administrative computers 116, although such data may also be transferred over the in-band communication network 120. Examples of an out-of-band communication network 124 include a local area network (LAN) or other transmission control protocol/Internet protocol (TCP/IP) network. In general, the out-of-band communication network 124 is characterized by an ability to interconnect disparate nodes or other devices through uniform user interfaces, such as a web browser. Furthermore, the out-of-band communication network 124 may provide the potential for globally or other widely distributed management of data storage systems 104, 108 via TCP/IP.
Not every electronic data system node or computer 104, 108, 112 and 116 need be interconnected to every other node or device through both the in-band network 120 and the out-of-band network 124. For example, no host computer 112 needs to be interconnected to any other host computer 112, data storage system 104, 108, or administrative computer 116 through the out-of-band communication network 124, although interconnections between a host computer 112 and other devices 104, 108, 116 through the out-of-band communication network 124 are not prohibited. As another example, an administrative computer 116 may be interconnected to at least one storage system 104 or 108 through the out-of-band communication network 124. An administrative computer 116 may also be interconnected to the in-band network 120 directly, although such an interconnection is not required. For example, instead of a direct connection, an administrator computer 116 may communicate with a controller of a data storage system 104, 108 using the in-band network 120.
In general, a host computer 112 exchanges data with one or more of the data storage systems 104, 108 in connection with the performance of the execution of application programming, whether that application programming concerns data management or otherwise. Furthermore, an electronic data system 100 may include multiple host computers 112. An administrative computer 116 may provide a user interface for controlling aspects of the operation of the storage systems 104, 108. The administrative computer 116 may be interconnected to the storage system 104, 108 directly, and/or through a bus or network 120 and/or 124. In accordance with still other embodiments of the present invention, an administrative computer 116 may be integrated with a host computer 112. In addition, multiple administrative computers 116 may be provided as part of the electronic data system 100. Furthermore, although two data storage systems 104, 108 are shown in
A data storage system 104, 108, in accordance with embodiments of the present invention, may be provided with a first controller slot 208a. In addition, other embodiments may include additional controller slots, such as a second controller slot 208b. As can be appreciated by one of skill in the art, a controller slot 208 may comprise a connection or set of connections to enable a controller 212 to be operably interconnected to other components of the data storage system 104, 108. Furthermore, a data storage system 104, 108 in accordance with embodiments of the present invention includes at least one controller 212a. For example, while the data storage system 104, 108 is operated in a single controller, non-failover mode, the data storage system 104, 108 may include exactly one controller 212. A data storage system 104, 108 in accordance with other embodiments of the present invention may be operated in a dual redundant active-active controller mode by providing a second controller 212b. When a second controller 212b is used in addition to a first controller 212a, the second controller slot 208b receives the second controller. As can be appreciated by one of skill in the art, the provision of two controllers, 212a and 212b, permits data to be mirrored between the controllers 212a-212b, providing redundant active-active controller operation.
One or more busses or channels 216 are generally provided to interconnect a controller or controllers 212 through the associated controller slot or slots 208 to the storage devices 204. Furthermore, while illustrated as a single shared bus or channel 216, it can be appreciated that a number of dedicated and/or shared buses or channels may be provided. Additional components that may be included in a data storage system 104 include one or more power supplies 224 and one or more cooling units 228. In addition, a bus or network interface 220 may be provided to interconnect the data storage system 104, 108 to the bus or network 120 and/or 124, and thereby to a host computer 112 or administrative computer 116.
Although illustrated as a complete RAID system in
A controller 212 also generally includes memory 308. The memory 308 is not specifically limited to memory of any particular type. For example, the memory 308 may comprise a solid-state memory device, or a number of solid-state memory devices. In addition, the memory 308 may include separate non-volatile memory 310 and volatile memory 312 portions. As can be appreciated by one of skill in the art, the memory 308 may include a read cache 316 and a write cache 320 that are provided as part of the volatile memory 312 portion of the memory 308, although other arrangements are possible. By providing caches 316, 320, a storage controller 212 can improve the speed of input/output (I/O) operations between a host 112 and the data storage devices 204 comprising an array or array partition. Examples of volatile memory 312 include DRAM and SDRAM.
The non-volatile memory 310 may be used to store data that was written to the write cache of memory 308 in the event of a power outage affecting the data storage system 104. The non-volatile memory portion 310 of the storage controller memory 308 may include any type of data memory device that is capable of retaining data without requiring power from an external source. Examples of non-volatile memory 310 include, but are not limited to, compact flash or other standardized non-volatile memory devices.
A volume information block 324 may be stored in the non-volatile memory 310, although in accordance with at least some embodiments of the present invention, the volume information block 324 resides in volatile memory 312. The volume information block 324 comprises data that may be used to represent attribute and state information for master volumes, backing stores, and/or snapshots. Each master volume, backing store, and snapshot is typically associated with a different volume information block 324. The volume information block 324 is generally employed by the processor 304 to determine whether certain data is located on master volumes, backing stores, and/or snapshots and whether such data is safe to access based on the state of each. For example, the state of a master volume or backing store may be such that if data access were attempted, data corruption may occur. Accordingly, the volume information block 324 may be referenced prior to data access during an I/O operation.
The memory 308 also includes a region that provides storage for controller code 328. The controller code 328 may comprise a number of components, including an I/O application 332 comprising instructions for accessing and manipulating data. The I/O application 332 may provide the controller 212 with the ability to perform read and/or write operations of data on a storage volume and/or on a snapshot. The I/O application 332 may reference the volume information block 324 prior to executing such operations. The I/O application 332 may also employ the read and write caches 316 and 320, respectively, when performing such operations.
A snapshot application 334 is an example of another application that may be included in the controller code 328. Although depicted as separate from the I/O application 332, the snapshot application 334 may comprise functionality similar to the I/O application 332. The snapshot application 334 is essentially responsible for the creation and management of various snapshots on a given storage volume.
In accordance with at least some embodiments of the present invention, the snapshot application may comprise a snapshot reset application 336. While the snapshot reset application 336 is shown as being a module within the snapshot application 334, one skilled in the art will appreciate that alternative embodiments of the present invention may employ the snapshot reset application 336 as a separate entity from the snapshot application 334.
The snapshot reset application 336 may be adapted to reset an older snapshot with newer snapshot data. In accordance with at least some embodiments of the present invention, the snapshot reset application 336 may perform a synchronized snapshot delete/create operation that allows the snapshot created under the create portion of the operation to assume at least some properties of the snapshot deleted under the delete portion of the operation. For instance, the snapshot reset application 336 may afford the ability to transfer an array partition associated with one snapshot to another snapshot. In a more specific example, the snapshot reset application 336 may disassociate an array partition from a snapshot that is going to be deleted and then re-associate it with a new snapshot that is being created for a given storage volume.
A storage controller 212 may additionally include other components. For example, a bus and/or network interface 344 may be provided for operably interconnecting the storage controller 212 to the remainder of the data storage system 104, for example through a controller slot 208 and a bus or channel 216. Furthermore, the interface 344 may be configured to facilitate removal or replacement of the storage controller 212 in a controller slot 208 as a field replaceable unit (FRU). In addition, integral signal and power channels may be provided for interconnecting the various components of the storage controller 212 to one another.
With reference to
Each snapshot 408 may be created with a separate and distinct array partition 412 and snapshot data 416. The array partition 412 of each snapshot 408 provides the controller 212 access to a virtual disk drive which can read or write fixed blocks addressed by LBA. An array partition can have a number of assigned attributes such as a LUN, Serial Number, zoning on the LUN (access control for the LUN), a global identifier such as a WWN, and the like. The array partition 412 attributes describe how a host 112 and/or administrative computer 116 can access the snapshot 408, and more specifically the snapshot data 416. In accordance with at least some embodiments of the present invention, the array partition 412 is transferable between snapshots 408. The array partition 412 may define or store the attributes for reference by other applications in the form of metadata.
The snapshot data 416, on the other hand, is the actual data representing the point-in-time of the master volume 404 when the snapshot 408 was taken. The snapshot data 416 may be divided logically into chunks, subchunks and/or any other data organizational format known in the art. The snapshot data 416 may be updated with a copy on write (COW) operation that occurs with a change to the master volume 404. In accordance with at least some embodiments of the present invention, the snapshot data 416 may initially be empty. But as changes occur that alter the master volume 404 from the point-in-time captured by the snapshot 408, a COW operation causes the data from the master volume 404 to be transferred to the snapshot data 416 prior to executing the requested change of the master volume 404.
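The copy-on-write behavior just described can be sketched as below. The `MasterVolume` class and its methods are hypothetical names used only to illustrate the COW mechanism: the snapshot data starts empty, and a block is copied into the snapshot only the first time it is about to be overwritten:

```python
class MasterVolume:
    """Illustrative master volume whose newest snapshot receives COW copies."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)   # LBA -> data currently on the master volume
        self.snapshots = []          # each snapshot is a dict of preserved blocks

    def take_snapshot(self):
        snap = {}                    # snapshot data is initially empty
        self.snapshots.append(snap)
        return snap

    def write(self, lba, data):
        # COW: preserve the original block in the newest snapshot before
        # the master volume is changed; only the first overwrite is copied.
        if self.snapshots:
            newest = self.snapshots[-1]
            if lba not in newest:
                newest[lba] = self.blocks.get(lba)
        self.blocks[lba] = data

# usage
vol = MasterVolume({0: b"A", 1: b"B"})
snap = vol.take_snapshot()
vol.write(0, b"X")    # COW copies b"A" into the snapshot before overwriting
vol.write(0, b"Y")    # no copy; the point-in-time block is already preserved
```

Reading the snapshot then means returning a preserved block if present and otherwise falling through to the master volume, which is why unmodified data need not be duplicated.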
Referring now to
As can be seen in
After the older snapshot 408 has been marked for reset, the snapshot reset application 336 marks the older snapshot 408 for deletion (step 508). The deletion mark for the older snapshot 408 may also be registered in the volume information block 324. By marking the older snapshot 408 for reset prior to deletion, the older snapshot 408 is temporarily maintained in memory, on disk, or anywhere else where storage is available. If, in contrast, the older snapshot 408 were simply marked for deletion, then the older snapshot 408 would be deleted and its array partition 412 would be lost. By marking the older snapshot 408 for reset and then deletion, the snapshot reset application 336 also causes the older snapshot 408 to be disassociated from its array partition 412 (step 512). The disassociation of the older snapshot 408 from the array partition 412 releases the array partition 412, thereby allowing it to be assigned to another snapshot 408. The actual attributes of the array partition 412 are not altered as a result of the disassociation but rather are freed from the older snapshot 408 for use by another snapshot 408. The array partition 412 may be selectively associated and disassociated by altering the reference between a given snapshot 408 and the array partition 412 in the volume information block 324.
Once the older snapshot 408 and its array partition 412 have been disassociated, the snapshot reset application 336 continues by updating the array partition 412 to reflect the current size of the master volume 404 (step 516). In most cases, user changes do not affect the size of the master volume 404, and so successive snapshots are usually the same size. However, because the array partition 412 is used to describe the size of the master volume 404 at the point-in-time corresponding to the snapshot 408, this step is important to cover the case in which the user changes the size of the master volume. Accordingly, once the array partition 412 is disassociated from the older snapshot 408, it needs to be updated to reflect the size of the master volume 404 at the current point-in-time rather than a previous point-in-time. All other volume attributes, such as the global identifier, LUN, Serial Number, and zoning of the LUN, remain unchanged.
The snapshot reset application 336 continues by creating a new snapshot 408 that will provide a representation of the master volume 404 at the point-in-time corresponding to the execution of the reset operation (step 520). The creation of a new snapshot 408 generally comprises allocating the required memory for the new snapshot 408, then generating and storing the various data structures that will ultimately be employed to store snapshot data 416. The data structures created in this step are preferably organized in a tabular fashion employing pointers and the like but any other data organization technique known in the art may also be employed.
While creating the new snapshot 408, the snapshot reset application 336 associates the new snapshot 408 with the array partition 412 from the older snapshot 408 (step 524). The association causes the new snapshot 408 to receive the same array partition attributes previously assigned to the older snapshot 408. This way the snapshot can be referred to in the same manner, for example by its global identifier and Serial Number. This association also allows the newer snapshot 408 to be assigned to the same LUN that the older snapshot 408 was assigned to. This allows a backup application running on a host computer 112 and/or administrative computer 116 to reference the new snapshot 408 in the same way the older snapshot 408 was referenced. Furthermore, it obviates the need to allocate additional backing store resources to the new snapshot 408.
After the new snapshot 408 has been associated with the array partition 412 from the older snapshot 408, the age of the new snapshot 408, and correspondingly the age of the array partition 412, is set to reflect the master volume 404 at the current point-in-time (step 528). Accordingly, the new snapshot 408 is assigned an age that is newer as compared to all other snapshots of the master volume 404. As a result of the new snapshot 408 being assigned such an age, all changes to the master volume 404 are reflected by COW operations that write data to the new snapshot 408, even though some of that data may reflect a point-in-time associated with another snapshot 408. Data will continue to be written to the new snapshot 408 until another snapshot 408 of the master volume 404 is taken, at which point the newest snapshot 408 will begin receiving the data.
With the creation of the new snapshot 408 completed, the reset indicator for the older snapshot 408 is cleared from the volume information block 324 (step 532). The release of the reset indicator leaves only the delete indicator marked in the volume information block 324. Accordingly, the snapshot reset application 336 identifies that the older snapshot 408 is no longer needed for a reset operation and initiates the deletion of the older snapshot 408 (step 536). The snapshot reset application 336 deletes the older snapshot 408 by deleting the snapshot data 416 associated with the older snapshot 408 as well as the data structures previously used to store the snapshot data 416. The snapshot reset application 336 may also delete any residual information previously associated with the older snapshot 408 from the volume information block 324. Thereafter, the method ends and the controller 212 awaits receipt of a new command (step 540).
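The overall reset sequence of steps 504 through 536 can be sketched end-to-end against a simplified volume information block. The dictionary layout, the flag names, and the `reset` function are hypothetical, intended only to show the ordering of the marks, the partition transfer, and the final deletion:

```python
def reset(vib: dict, old_id: str, new_id: str, master_size: int) -> None:
    """Sketch of the reset flow against a volume information block `vib`,
    mapping snapshot ids to {'flags', 'partition', 'data'} entries."""
    old = vib[old_id]
    old["flags"] |= {"reset", "delete"}     # mark for reset, then for deletion
    partition = old.pop("partition")        # disassociate the array partition
    partition["size"] = master_size         # reflect the current master size
    vib[new_id] = {"flags": set(),          # create the new snapshot with the
                   "partition": partition,  # recycled partition and initially
                   "data": {}}              # empty snapshot data
    old["flags"].discard("reset")           # clear the reset indicator
    if "delete" in old["flags"]:            # only the delete indicator remains,
        del vib[old_id]                     # so delete the older snapshot

# usage
vib = {"snap-old": {"flags": set(),
                    "partition": {"wwn": "wwn-01", "lun": 5, "size": 100},
                    "data": {7: b"old"}}}
reset(vib, "snap-old", "snap-new", master_size=120)
```

Because the partition dictionary is moved rather than rebuilt, every identifying attribute except the size survives the reset, which mirrors why a backup application needs no reconfiguration.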
The foregoing discussion of the invention has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit the invention to the form disclosed herein. Consequently, variations and modifications commensurate with the above teachings, within the skill and knowledge of the relevant art, are within the scope of the present invention. The embodiments described hereinabove are further intended to explain the best mode presently known of practicing the invention and to enable others skilled in the art to utilize the invention in such, or in other embodiments, and with the various modifications required by their particular application or use of the invention. It is intended that the appended claims be construed to include alternative embodiments to the extent permitted by the prior art.
This application is a divisional of application Ser. No. 11/768,127, filed Jun. 25, 2007. This application is related to the following co-pending U.S. patent applications:

Ser. No. (Docket No.) | Filing Date | Title
---|---|---
11/561,512 (DHP0057 US) | Nov. 20, 2006 | DATA REPLICATION METHOD AND APPARATUS
11/561,680 (DHP0058 US) | Nov. 20, 2006 | PULL DATA REPLICATION MODEL
11/779,965 (DHP0068 US) | Jul. 19, 2007 | METHOD AND APPARATUS FOR SEPARATING SNAPSHOT PRESERVED AND WRITE DATA
11/945,940 (DHP0070 US) | Nov. 27, 2007 | METHOD AND APPARATUS FOR MASTER VOLUME ACCESS DURING VOLUME COPY
11/747,109 (DHP0071 US) | May 10, 2007 | AUTOMATIC TRIGGERING OF BACKING STORE RE-INITIALIZATION
11/747,127 (DHP0072 US) | May 10, 2007 | BACKING STORE REINITIALIZATION METHOD AND APPARATUS
11/624,565 (DHP0073 US) | Jan. 18, 2007 | DELETION OF ROLLBACK SNAPSHOT PARTITION
11/624,524 (DHP0074 US) | Jan. 18, 2007 | METHOD AND APPARATUS FOR QUICKLY ACCESSING BACKING STORE METADATA
12/540,243 (DHP0083 US) | Aug. 12, 2009 | SNAPSHOT PRESERVED DATA CLONING
Number | Name | Date | Kind |
---|---|---|---|
5551046 | Mohan et al. | Aug 1996 | A |
5778189 | Kimura et al. | Jul 1998 | A |
5812843 | Yamazaki et al. | Sep 1998 | A |
5963962 | Hitz et al. | Oct 1999 | A |
6073209 | Bergsten et al. | Jun 2000 | A |
6076148 | Kedem | Jun 2000 | A |
6289356 | Hitz et al. | Sep 2001 | B1 |
6292808 | Obermarck et al. | Sep 2001 | B1 |
6341341 | Grummon et al. | Jan 2002 | B1 |
6548634 | Ballinger et al. | Apr 2003 | B1 |
6557079 | Mason, Jr. et al. | Apr 2003 | B1 |
6594744 | Humlicek et al. | Jul 2003 | B1 |
6615223 | Shih et al. | Sep 2003 | B1 |
6711409 | Zavgren et al. | Mar 2004 | B1 |
6771843 | Huber et al. | Aug 2004 | B1 |
6792518 | Armangau et al. | Sep 2004 | B2 |
6907512 | Hill et al. | Jun 2005 | B2 |
6957362 | Armangau et al. | Oct 2005 | B2 |
7047380 | Tormasov et al. | May 2006 | B2 |
7050457 | Erfurt et al. | May 2006 | B2 |
7100089 | Phelps et al. | Aug 2006 | B1 |
7165156 | Cameron et al. | Jan 2007 | B1 |
7191304 | Cameron et al. | Mar 2007 | B1 |
7194550 | Chamdani et al. | Mar 2007 | B1 |
7206961 | Mutalik et al. | Apr 2007 | B1 |
7243157 | Levin et al. | Jul 2007 | B2 |
7272686 | Yagisawa et al. | Sep 2007 | B2 |
7313581 | Bachmann et al. | Dec 2007 | B1 |
7363444 | Ji et al. | Apr 2008 | B2 |
7373366 | Chatterjee et al. | May 2008 | B1 |
7426618 | Vu et al. | Sep 2008 | B2 |
7526640 | Bejarano et al. | Apr 2009 | B2 |
7593973 | Lee et al. | Sep 2009 | B2 |
20010039629 | Feague et al. | Nov 2001 | A1 |
20020083037 | Lewis et al. | Jun 2002 | A1 |
20020091670 | Hitz et al. | Jul 2002 | A1 |
20020099907 | Castelli et al. | Jul 2002 | A1 |
20020112084 | Deen et al. | Aug 2002 | A1 |
20030154314 | Mason, Jr. et al. | Aug 2003 | A1 |
20030158863 | Haskin et al. | Aug 2003 | A1 |
20030167380 | Green et al. | Sep 2003 | A1 |
20030188223 | Alexis et al. | Oct 2003 | A1 |
20030191745 | Jiang et al. | Oct 2003 | A1 |
20030229764 | Ohno et al. | Dec 2003 | A1 |
20040030727 | Armangau et al. | Feb 2004 | A1 |
20040030846 | Armangau et al. | Feb 2004 | A1 |
20040034647 | Paxton et al. | Feb 2004 | A1 |
20040054131 | Ballinger et al. | Mar 2004 | A1 |
20040093555 | Therrien et al. | May 2004 | A1 |
20040117567 | Lee et al. | Jun 2004 | A1 |
20040133718 | Kodama et al. | Jul 2004 | A1 |
20040172509 | Takeda et al. | Sep 2004 | A1 |
20040204071 | Bahl et al. | Oct 2004 | A1 |
20040260673 | Hitz et al. | Dec 2004 | A1 |
20040267836 | Armangau et al. | Dec 2004 | A1 |
20050004979 | Berkowitz et al. | Jan 2005 | A1 |
20050044088 | Lindsay et al. | Feb 2005 | A1 |
20050065985 | Tummala et al. | Mar 2005 | A1 |
20050066095 | Mullick et al. | Mar 2005 | A1 |
20050071393 | Ohno et al. | Mar 2005 | A1 |
20050122791 | Hajeck et al. | Jun 2005 | A1 |
20050166022 | Watanabe et al. | Jul 2005 | A1 |
20050182910 | Stager et al. | Aug 2005 | A1 |
20050193180 | Fujibayashi et al. | Sep 2005 | A1 |
20050198452 | Watanabe et al. | Sep 2005 | A1 |
20050240635 | Kapoor et al. | Oct 2005 | A1 |
20050246397 | Edwards et al. | Nov 2005 | A1 |
20050246503 | Fair et al. | Nov 2005 | A1 |
20060020762 | Urmston | Jan 2006 | A1 |
20060053139 | Marzinski et al. | Mar 2006 | A1 |
20060064541 | Kano et al. | Mar 2006 | A1 |
20060107006 | Green et al. | May 2006 | A1 |
20060155946 | Ji et al. | Jul 2006 | A1 |
20060212481 | Stacey et al. | Sep 2006 | A1 |
20060271604 | Shoens et al. | Nov 2006 | A1 |
20070011137 | Kodama | Jan 2007 | A1 |
20070038703 | Tendjoukian et al. | Feb 2007 | A1 |
20070055710 | Malkin et al. | Mar 2007 | A1 |
20070094466 | Sharma et al. | Apr 2007 | A1 |
20070100808 | Balogh et al. | May 2007 | A1 |
20070143563 | Pudipeddi et al. | Jun 2007 | A1 |
20070185973 | Wayda et al. | Aug 2007 | A1 |
20070186001 | Wayda et al. | Aug 2007 | A1 |
20070198605 | Saika | Aug 2007 | A1 |
20070266066 | Kapoor et al. | Nov 2007 | A1 |
20070276885 | Valiyaparambil et al. | Nov 2007 | A1 |
20080072003 | Vu et al. | Mar 2008 | A1 |
20080082593 | Komarov et al. | Apr 2008 | A1 |
20080177954 | Lee et al. | Jul 2008 | A1 |
20080177957 | Lee et al. | Jul 2008 | A1 |
20080256141 | Wayda et al. | Oct 2008 | A1 |
20080256311 | Lee et al. | Oct 2008 | A1 |
20080281875 | Wayda et al. | Nov 2008 | A1 |
20080281877 | Wayda et al. | Nov 2008 | A1 |
20090307450 | Lee et al. | Dec 2009 | A1 |
Number | Date | Country |
---|---|---|
2165912 | Jun 1997 | CA |
1003103 | May 2005 | EP |
WO 9429807 | Dec 1994 | WO |
WO 0250716 | Jun 2002 | WO |
WO 2005111773 | Nov 2005 | WO |
WO 2005111802 | Nov 2005 | WO |
Number | Date | Country | |
---|---|---|---|
20100223428 A1 | Sep 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11768127 | Jun 2007 | US |
Child | 12780891 | US |