A portion of the disclosure of this patent document may contain command formats and other computer language listings, all of which are subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This application relates to virtual storage and, more specifically, to leveraging flash storage with deduplication capabilities as high tier storage.
Host processor systems may store and retrieve data using a storage device containing a plurality of host interface units (i.e., host adapters), disk drives, and other physical storage, and disk interface units (i.e., disk adapters). Such storage devices are provided, for example, by EMC Corporation of Hopkinton, Mass., and are disclosed in, for example, U.S. Pat. No. 5,206,939 to Yanai et al., U.S. Pat. No. 5,778,394 to Galtzur et al., U.S. Pat. No. 5,845,147 to Vishlitzky et al., and U.S. Pat. No. 5,857,208 to Ofek. The host systems access the storage device through a plurality of channels provided therewith. Host systems provide data and access control information through the channels of the storage device, and the storage device provides data to the host systems, also through the channels. The host systems do not address the physical storage of the storage device directly, but rather access what appears to the host systems as a plurality of logical volumes. The logical volumes may or may not correspond to the actual disk drives and/or other physical storage.
Example embodiments of the present invention relate to a method, a system, and a computer program product for extent-based tiering for virtual storage using full LUNs. The method includes exposing a virtual LUN comprising a first LUN in a first tier of storage having a first latency and a second LUN in a second tier of storage having a second latency, and managing the virtual LUN according to properties of the first LUN, properties of the second LUN, and a policy.
The above and further advantages of the present invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
Historically, large storage arrays managed many disks that were identical. However, it is possible to use different types of disks and group like kinds of disks into tiers based on the performance characteristics of the disks. A group of fast but small disks may be a fast tier. Likewise, a group of solid state drives could be another fast tier. A group of slow but large disks may be a slow tier. It may be possible to have other tiers with other properties, or tiers constructed from a mix of other disks, to achieve a performance or price goal. Storing often-referenced, or hot, data on the fast tier and less-often-referenced, or cold, data on the slow tier may create a more favorable customer cost profile than storing all data on a single kind of disk.
In addition to a storage tier, there may be a construct referred to as a storage pool. A storage pool ("pool"), like a group of storage tiers, may be made up of devices with different performance and cost characteristics. As in the case of storage tiers, it may be advantageous to locate the hot, or most accessed, data on the devices within the storage pool with the best performance characteristics while storing the cold, or least accessed, data on the devices that have slower performance characteristics. This can lead to a lower cost system having both faster and slower devices that can emulate the performance of a more expensive system having only faster storage devices.
Early approaches either required the customer to use only a single kind of disk or had the customer manage different tiers of disk by designating which data should be stored on which tier when the data storage definitions were created. Typically, having customers manually manage tiers or pools of storage requires the customer to do a lot of work to categorize their data and to create the storage definitions for where the different categories of storage should be put. Previous approaches required not only categorizing the data and manually placing the data on different tiers or pools, but also keeping the data classification up to date on an ongoing basis to react to changes in customer needs. Conventionally, storage of the data has also been expanded through the use of a cache. Generally, this has led to the problem of how to determine what data to store in the cache or on which storage tier.
In certain embodiments, the current techniques may track the “temperature” of data. In general, temperature corresponds to how often and how recently the data has been accessed. In general, hot data refers to data that has been accessed often and recently. In general, cold data refers to data that has not been accessed recently or often. Usually, hot data may be stored on a faster storage tier and cold data may be migrated to a slower storage tier. In certain embodiments, the current techniques may enable data migration between storage tiers based on access requests to the data on the data storage system.
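By way of illustration only, the following minimal Python sketch models this notion of temperature; the class name, half-life parameter, and scoring scheme are assumptions for exposition, not the patented mechanism:

```python
import time

class TemperatureTracker:
    """Tracks per-extent 'temperature': rises on access, decays over time."""

    def __init__(self, half_life_s=3600.0):
        self.half_life_s = half_life_s
        self.scores = {}  # extent_id -> (score, last_update_time)

    def record_access(self, extent_id, now=None):
        if now is None:
            now = time.time()
        score, last = self.scores.get(extent_id, (0.0, now))
        # Exponential decay: an untouched extent loses half its score per half-life,
        # so the score reflects both frequency and recency of access.
        decay = 0.5 ** ((now - last) / self.half_life_s)
        self.scores[extent_id] = (score * decay + 1.0, now)

    def hottest(self, n):
        # The highest-scoring extents are candidates for the fast tier.
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1][0], reverse=True)
        return [extent_id for extent_id, _ in ranked[:n]]
```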
Co-owned application Ser. Nos. 12/494,622, 12/640,244, 12/639,469 and 12/640,244, titled “FACILITATING DATA MIGRATION BETWEEN TIERS,” “AUTOMATED DATA RELOCATION AMONG STORAGE TIERS BASED ON STORAGE LOAD,” “LOGICAL UNIT MIGRATION ASSISTANT FOR HARDWARE-BASED STORAGE TIERING,” and “AUTOMATED DATA RELOCATION AMONG STORAGE TIERS BASED ON STORAGE LOAD,” respectively, provide a description of Fully Automated Storage Tiering (FAST) and are hereby incorporated by reference.
Typical server environments have one or more hosts accessing storage. Conventionally, some of the hosts may be virtual hosts or virtual machines. Generally, each virtual machine or host has a LUN or logical unit corresponding to storage space it may access. Typically, this LUN corresponds to a portion of one or more physical disks mapped to the LUN or logical drive.
Conventional server virtualization products have developed the capability to execute migrations of virtual machines, the underlying storage, or both, to address load balancing and high availability requirements, but with certain limitations. Typically, conventional solutions require disruptive failover (i.e., failing over processing from one site to a back-up site), require merged SANs, and do not work with heterogeneous products. Thus, in typical systems, if a virtual machine were migrated to another environment, such as a server at another location outside of a site, the virtual machine would no longer have read/write access to the LUN. However, it is desirable to be able to migrate a virtual machine and have it retain read/write access to the underlying storage.
In certain embodiments of the instant disclosure, storage resources are enabled to be aggregated and virtualized to provide a dynamic storage infrastructure to complement the dynamic virtual server infrastructure. In an embodiment of the current invention, users are enabled to access a single copy of data at different geographical locations concurrently, enabling a transparent migration of running virtual machines between data centers. In some embodiments, this capability may enable transparent load sharing between multiple sites while providing the flexibility of migrating workloads between sites in anticipation of planned events. In other embodiments, in case of an unplanned event that causes disruption of services at one of the data centers, the failed services may be restarted at the surviving site with minimal effort while minimizing the recovery time objective (RTO).
In some embodiments of the current techniques, the IT infrastructure, including servers, storage, and networks, may be virtualized. In certain embodiments, resources may be presented as a uniform set of elements in the virtual environment. In other embodiments of the current techniques, local and distributed federation is enabled, which may allow transparent cooperation of physical data elements within a single site or between two geographically separated sites. In some embodiments, the federation capabilities may enable collection of the heterogeneous data storage solutions at a physical site and presentation of the storage as a pool of resources. In some embodiments, virtual storage is enabled to span multiple data centers.
In some embodiments, virtual storage or a virtual storage layer may have a front end and a back end. The back end may consume storage volumes and create virtual volumes from the consumed volumes. The virtual volumes may be made up of portions or concatenations of the consumed volumes. For example, the virtual volumes may be striped across the consumed volumes or may be made up of consumed volumes running a flavor of RAID. Usually, the front end exposes these volumes to hosts.
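As an illustrative aside (not taken from the disclosure), the back-end mapping for a striped virtual volume can be sketched as a simple address translation; the stripe size and round-robin layout here are assumptions:

```python
def translate(virtual_lba, stripe_blocks, num_volumes):
    """Map a virtual LBA to (backing volume index, LBA on that volume)."""
    stripe_index = virtual_lba // stripe_blocks  # which stripe, counting across all volumes
    offset = virtual_lba % stripe_blocks         # offset within that stripe
    volume = stripe_index % num_volumes          # stripes are laid out round-robin
    row = stripe_index // num_volumes            # stripe row on the chosen volume
    return volume, row * stripe_blocks + offset

# With 128-block stripes across 4 consumed volumes, virtual block 1000
# resolves to block 232 on volume 3:
assert translate(1000, 128, 4) == (3, 232)
```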
An example embodiment of a virtual service layer or virtual service appliance is EMC Corporation's VPLEX®. In some embodiments of the instant disclosure, a storage virtualization appliance has a front end which exposes LUNs to hosts and a back end which talks to storage arrays, which may enable data mobility. In certain embodiments, storage may be added to or removed from the virtual service layer transparently to the user.
Accordingly, example embodiments of the present invention leverage the first LUN having the first latency 270 (e.g., XtremIO) as a high tier for tiering with VPLEX. Further, example embodiments of the present invention may leverage the de-duplication abilities of the first LUN having the first latency 270 (e.g., XtremIO) to allow significantly simpler tiering. The advantage of having tiering at a LUN level between the first LUN having the first latency 270 and the second LUN having the second latency 280 is that storage services may be leveraged. For instance, if the virtualization layer 260 supports array-aware snapshots, they can be leveraged for tiered LUNs as well. Further, replication can be performed on the virtual LUN 265, such as by RecoverPoint by EMC Corporation of Hopkinton, Mass.
To create a virtual LUN 265, a first LUN having a first latency 270 may be created in a first tier of storage (e.g., XtremIO) and a second LUN having a second latency 280 may be created in a second tier of storage (e.g., Symmetrix). It should be understood that both the first LUN having the first latency 270 and the second LUN having the second latency 280 may be thin.
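The arrangement may be sketched, purely for illustration, as a virtual LUN object holding handles to the two backing LUNs and a placement policy; all class and field names below are assumed, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class BackingLun:
    name: str
    latency_ms: float
    thin: bool = True  # both backing LUNs may be thinly provisioned

@dataclass
class Policy:
    write_to_fast_tier: bool = True  # e.g., land incoming writes on flash first

class VirtualLun:
    """One exposed address space backed by a fast LUN and a slow LUN."""

    def __init__(self, fast: BackingLun, slow: BackingLun, policy: Policy):
        self.fast, self.slow, self.policy = fast, slow, policy

    def target_for_write(self) -> BackingLun:
        return self.fast if self.policy.write_to_fast_tier else self.slow

virtual_lun = VirtualLun(BackingLun("xtremio-01", 0.3),
                         BackingLun("symmetrix-01", 5.0),
                         Policy())
```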
As will be described in greater detail below, the virtualization layer 260 need not manage any mapping between the first LUN having the first latency 270 and the second LUN having the second latency 280, such as pointers.
It should be understood that, to direct writes to the second tier 380, the I/O 362 may be written to the first tier 370 at a first time and then, at a second time, written to the second tier 380 and erased from the first tier 370. In certain embodiments, a special pattern may be written to the first tier indicating that the data is not stored in the first tier 370. The special pattern may be a specific random mask which can be set per volume or per system (e.g., a 512-byte "not exist mask"). It is important to note that, since XtremIO is de-duplication aware, the WRITE SAME command will be handled very efficiently and will save storage space.
Example embodiments may use the special pattern if the storage device for the first tier 370 supports deduplication, so that repeated writes of the same special pattern save storage space when deduplicated; otherwise, the punch command may be used to punch out the data and replace it with zeros. Accordingly, in certain embodiments the first tier 370 may perform as a cache, providing the performance of, for example, low-latency flash storage with the capacity and economics of traditional storage arrays. If the write operation includes blocks of data matching the special pattern, the write I/O may be written to both LUNs.
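A hedged sketch of this demotion step follows; tier1 and tier2 stand in for block-device wrappers with assumed read/write/punch methods, and NOT_EXIST_MASK for the per-volume special pattern (none of these names come from the disclosure or any actual array API):

```python
import os

BLOCK = 512
NOT_EXIST_MASK = os.urandom(BLOCK)  # random mask, chosen per volume (or per system)

def demote(tier1, tier2, lba, nblocks, dedup_capable=True):
    """Move blocks from the fast tier to the slow tier, then release fast-tier space."""
    data = tier1.read(lba, nblocks)
    tier2.write(lba, data)  # first copy the data down to the slow tier
    if dedup_capable:
        # Overwriting with the same mask everywhere dedupes to almost no space;
        # a later read that returns the mask means "look in the slow tier".
        tier1.write(lba, NOT_EXIST_MASK * nblocks)
    else:
        tier1.punch(lba, nblocks)  # punch out the data; reads then return zeros
```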
A significant complexity in managing traditional tiered storage between two separate devices is mapping. For example, some traditional tiered storage, such as FAST™ by EMC Corporation of Hopkinton, Mass., kept mapping metadata with the data on flash and SATA storage devices and required consistency across multiple storage nodes.
However, in example embodiments of the present invention, tiering is achieved without keeping a mapping between the first tier 570 and the second tier 580. The virtualization device 555 may determine whether data is stored in the correct tier (855).
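Continuing the sketch above under the same assumptions (tier1, tier2, BLOCK, and NOT_EXIST_MASK as defined earlier), a mapping-free read path may probe the first tier and treat the special pattern as a redirect to the second tier:

```python
def read(tier1, tier2, lba, nblocks):
    """Serve a read without any tier-mapping metadata."""
    out = []
    fast = tier1.read(lba, nblocks)
    for i in range(nblocks):
        block = fast[i * BLOCK:(i + 1) * BLOCK]
        if block == NOT_EXIST_MASK:
            # The mask means "not here": this block was demoted to the slow tier.
            block = tier2.read(lba + i, 1)
        out.append(block)
    return b"".join(out)
```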
In certain embodiments, the virtualization device 555 may determine whether data is in the correct tier (855) according to storage tiering statistics maintained by the second tier 580, moving active extents to the first tier 570. Data may be stored in the second tier 580 to reduce cost, with active data stored in the higher tier. In other embodiments, another tiering option is based on I/O pattern, as sketched below.
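For illustration only, an I/O-pattern classifier might distinguish sequential streams (well served by the capacity tier) from random access (a promotion candidate); the window size and threshold below are assumptions:

```python
from collections import deque

class PatternClassifier:
    """Classifies a request stream as sequential or random from recent addresses."""

    def __init__(self, window=32, seq_threshold=0.75):
        self.recent = deque(maxlen=window)  # recent (lba, nblocks) requests
        self.seq_threshold = seq_threshold

    def record(self, lba, nblocks):
        self.recent.append((lba, nblocks))

    def is_sequential(self):
        if len(self.recent) < 2:
            return False
        pairs = zip(self.recent, list(self.recent)[1:])
        # A "hit" is a request starting exactly where the previous one ended.
        hits = sum(1 for (lba, n), (next_lba, _) in pairs if next_lba == lba + n)
        return hits / (len(self.recent) - 1) >= self.seq_threshold
```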
The methods and apparatus of this invention may take the form, at least partially, of program code (i.e., instructions) embodied in tangible non-transitory media, such as floppy diskettes, CD-ROMs, hard drives, random access or read-only memory, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
The logic for carrying out the method may be embodied as part of the aforementioned system, which is useful for carrying out a method described with reference to embodiments shown. For purposes of illustrating the present invention, the invention is described as embodied in a specific configuration and using special logical arrangements, but one skilled in the art will appreciate that the device is not limited to the specific configuration but rather only by the claims included with this specification.
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the present implementations are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.