Storage systems may utilize an array of random access storage devices, such as solid-state drives (SSDs, also known as solid-state disks), to provide high-performance scale-out storage.
RAID (Redundant Array of Inexpensive/Independent Disks) can provide increased resiliency and reliability to storage arrays. RAID allows reconstruction of failed disks (and parts of disks) through the use of redundancy. RAID 6 defines block-level striping with double distributed parity (N+2) and provides fault tolerance of two disk failures, so that a storage array can continue to operate with up to two failed disks, irrespective of which two disks fail. The double parity provided by RAID 6 also gives time to rebuild the array without the data being at risk if a single additional disk fails before the rebuild is complete. To provide efficient reads, data is stored “in the clear,” whereas parity information can be based on a suitable coding scheme.
U.S. Pat. No. 8,799,705, issued on Aug. 5, 2014, which is hereby incorporated by reference in its entirety, describes a data protection scheme similar to RAID 6, but adapted to take advantage of random access storage.
Existing RAID techniques may be designed to work with an array of disks having equal storage capacity (or “size”). Over time, disk capacities may increase, making it desirable or even necessary to use larger disks when expanding a storage array. Replacing legacy disks with larger disks can be wasteful.
Described herein are embodiments of a process for efficiently allocating RAID stripes across an array of disks (e.g., SSDs). In some embodiments, the process can be used to allocate RAID stripes across a “heterogeneous” storage array (i.e., an array of disks having different sizes). Also described herein are embodiments of a storage system that utilize such processing.
According to one aspect of the disclosure, a method comprises: aggregating chunks of data to fill a stripe with N data chunks; determining free capacity information for a plurality of disks within a storage array; selecting, from the plurality of disks, N+k disks based upon the free capacity information; generating k parity chunks using the N data chunks within the stripe; and writing each of the N data and k parity chunks to a respective one of the N+k disks.
In some embodiments, selecting N+k disks based upon the free capacity information comprises selecting a set of N+k disks having a largest free capacity among the plurality of disks. In certain embodiments, each of the plurality of disks is divided into a plurality of fixed-size chunks, and determining free capacity information for the plurality of disks comprises calculating a number of unoccupied chunks within each disk. In one embodiment, the method further comprises selecting a stripe to fill having a largest number of unoccupied data chunks. In certain embodiments, aggregating chunks of data comprises aggregating the chunks of data in a write cache. In some embodiments, the plurality of disks includes a plurality of solid-state drives (SSDs). In various embodiments, at least two of the disks within the storage array have different capacities.
According to another aspect of the disclosure, a system comprises a processor and a memory storing computer program code that when executed on the processor causes the processor to execute embodiments of the method described hereinabove.
According to yet another aspect of the disclosure, a computer program product may be tangibly embodied in a non-transitory computer-readable medium, the computer-readable medium storing program instructions that are executable to perform embodiments of the method described hereinabove.
The foregoing features may be more fully understood from the following description of the drawings in which:
The drawings are not necessarily to scale, or inclusive of all elements of a system, emphasis instead generally being placed upon illustrating the concepts, structures, and techniques sought to be protected herein.
Before describing embodiments of the structures and techniques sought to be protected herein, some terms are explained. As used herein, the term “storage system” may be broadly construed so as to encompass, for example, private or public cloud computing systems for storing data as well as systems for storing data comprising virtual infrastructure and those not comprising virtual infrastructure. As used herein, the terms “client” and “user” may refer to any person, system, or other entity that uses a storage system to read/write data.
As used herein, the term “storage device” may refer to any non-volatile memory (NVM) device, including hard disk drives (HDDs), flash devices (e.g., NAND flash devices), and next generation NVM devices, any of which can be accessed locally and/or remotely (e.g., via a storage area network (SAN)). The term “storage array” may be used herein to refer to any collection of storage devices. In some embodiments, a storage array may provide data protection using RAID 4, RAID 5, RAID 6, or the like.
As used herein, the term “random access storage device” may refer to any non-volatile random access memory (i.e., non-volatile memory wherein data can be read or written in generally the same amount of time irrespective of the physical location of data inside the memory). Non-limiting examples of random access storage devices may include NAND-based flash memory, single level cell (SLC) flash, multilevel cell (MLC) flash, and next generation non-volatile memory (NVM). For simplicity of explanation, the term “disk” may be used synonymously with “storage device” herein.
While vendor-specific terminology may be used herein to facilitate understanding, it is understood that the concepts, techniques, and structures sought to be protected herein are not limited to use with any specific commercial products.
In the embodiment shown, the system components include a routing subsystem 102a, a control subsystem 102b, a data subsystem 102c, and a write cache 102d. In one embodiment, the components 102 may be provided as software components, i.e., computer program code that, when executed on a processor, may cause a computer to perform functionality described herein. In a certain embodiment, the storage system 100 includes an operating system (OS) and one or more of the components 102 may be provided as user space processes executable by the OS. In other embodiments, a component 102 may be provided, at least in part, as hardware, such as a digital signal processor (DSP) or an application specific integrated circuit (ASIC), configured to perform functionality described herein.
The routing subsystem 102a may be configured to receive read and write requests from clients 116 using, for example, an external application programming interface (API) and to translate client requests into internal commands. In some embodiments, the routing subsystem 102a is configured to receive Small Computer System Interface (SCSI) commands from clients. In certain embodiments, the system 100 may store data in fixed-size chunks, for example 4K chunks, where each chunk may have a unique hash value (referred to herein as a “chunk hash”). In such embodiments, the routing subsystem 102a may be configured to split data into fixed-size chunks and to calculate the corresponding chunk hashes. In one embodiment, chunk hashes are calculated using Secure Hash Algorithm 1 (SHA-1) processing. In some embodiments, a chunk corresponds to a fixed number of contiguous blocks within a storage device.
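For illustration only, the following Python sketch shows one way the fixed-size chunking and chunk-hash calculation described above could be performed. The 4K chunk size and SHA-1 usage follow the description; the function names and the zero-padding of a short final chunk are assumptions, not details taken from the disclosure.

```python
import hashlib

CHUNK_SIZE = 4096  # 4K fixed-size chunks, as described above


def split_into_chunks(data: bytes) -> list[bytes]:
    """Split incoming write data into fixed-size chunks (short final chunk zero-padded)."""
    chunks = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        chunks.append(chunk.ljust(CHUNK_SIZE, b"\x00"))  # padding is an assumption
    return chunks


def chunk_hash(chunk: bytes) -> str:
    """Compute a content-based chunk hash using SHA-1."""
    return hashlib.sha1(chunk).hexdigest()
```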
The control subsystem 102b may be configured to maintain a mapping between I/O addresses associated with data and the corresponding chunk hashes. As shown in
The data subsystem 102c may be configured to maintain a mapping between chunk hashes and physical storage addresses (i.e., storage locations within the storage array 106 and/or within individual disks 108). As shown in
As shown, in some embodiments, the system may include a write cache 102d that may be configured to cache content data prior to writing to the storage array 106. Thus, the data subsystem 102c may be configured to send writes to the write cache 102d and, once enough writes have been collected, to commit the writes to disk 108. In one embodiment, the write cache 102d may form a portion of the data subsystem 102c.
It will be appreciated that combinations of the A2H 112 and H2P 114 tables can provide multiple levels of indirection between the logical (or “I/O”) address a client 116 uses to access data and the physical address where that data is stored. Among other advantages, this may give the storage system 100 freedom to move data within the storage array 106 without affecting a client's 116 access to that data (e.g., if a disk 108 fails).
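A minimal sketch of the two levels of indirection described above, assuming simple in-memory dictionaries stand in for the A2H (address-to-hash) and H2P (hash-to-physical) tables; a real implementation would use persistent, concurrency-safe structures, and the helper names here are hypothetical.

```python
# Hypothetical in-memory stand-ins for the A2H and H2P tables.
a2h: dict[int, str] = {}               # logical I/O address -> chunk hash
h2p: dict[str, tuple[int, int]] = {}   # chunk hash -> (disk id, chunk slot)


def read_physical_location(io_address: int) -> tuple[int, int]:
    """Resolve a client I/O address to a physical location via two lookups."""
    digest = a2h[io_address]   # first level of indirection (A2H)
    return h2p[digest]         # second level of indirection (H2P)


def relocate_chunk(digest: str, new_location: tuple[int, int]) -> None:
    """Move data (e.g., after a disk failure) without touching client-visible addresses."""
    h2p[digest] = new_location  # A2H entries remain unchanged
```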
In various embodiments, the storage system 100 may provide data protection through redundancy such that, if a disk 108 fails, the data stored therein may be recovered onto a replacement disk using information stored within other disks of the storage array 106. In certain embodiments, the storage system may be configured to provide double parity data protection. Thus, the storage system 100 may be able to tolerate the loss of up to two disks 108 concurrently. In one embodiment, the data subsystem 102c may implement a data protection scheme similar to RAID 6, but adapted to take advantage of random access storage. In various embodiments, the storage system 100 can use data protection techniques described within U.S. Pat. No. 8,799,705, issued on Aug. 5, 2014, which is hereby incorporated by reference in its entirety.
Unlike some existing RAID systems, the storage system 100 may use fine granularity to obviate the need to keep dedicated spare disk space, according to some embodiments. In particular, the disks 108 can be logically divided into relatively small chunks (e.g., 4K chunks). A RAID stripe includes N+k such chunks, N of which comprise data (e.g., user data or other content) and k of which comprise parity information calculated based on the N data chunks. Because data is stored in relatively small chunks, a single write request received from a client 116 can result in many writes to the disk array 106. Moreover, updating any chunk within a stripe may require updating the k parity chunks.
According to some embodiments, the data subsystem 102c may aggregate chunk writes using the write cache 102d, which caches content data prior to writing to the disk array 106. In some embodiments, the data subsystem 102c may seek to aggregate enough chunks to fill a stripe so that an entire stripe can be written to disk(s) at the same time, thereby minimizing the number of parity updates. The data subsystem 102c can choose to write aggregated data to a new stripe or to an existing stripe with unused chunks (or “holes”). Such holes can result from client 116 updates when content-based addressing is used: if a client 116 updates the same I/O address with different content, a new chunk hash may be calculated that results in the data being written to a different physical storage location. In one embodiment, the data subsystem 102c may select an existing stripe with the largest number of unused (or “unoccupied”) disk chunks. In some embodiments, the stripe size can be dynamic. For example, a maximum stripe size may be defined (e.g., 23+2) and, if no such stripes are available when writing (due to holes created by “old” blocks), a smaller stripe size can be used (e.g., 10+2).
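The following sketch illustrates the aggregation policy just described, assuming a hypothetical Stripe object that tracks how many of its data slots are unoccupied; the 23+2 and 10+2 sizes are the examples given above, and all names here are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Stripe:
    n_data: int        # number of data chunk slots (N)
    k_parity: int      # number of parity chunk slots (k)
    occupied: int = 0  # data slots currently holding live chunks

    @property
    def holes(self) -> int:
        """Unused ('hole') data slots available for aggregated writes."""
        return self.n_data - self.occupied


def pick_stripe(stripes: list[Stripe]) -> Stripe:
    """Prefer the existing stripe with the most unused data slots."""
    return max(stripes, key=lambda s: s.holes)


def flush_ready(cached_chunks: int, stripe: Stripe) -> bool:
    """Flush the write cache once enough chunks are aggregated to fill the stripe."""
    return cached_chunks >= stripe.holes


# Example: prefer the emptier of a full-size (23+2) and a smaller (10+2) stripe.
candidates = [Stripe(23, 2, occupied=20), Stripe(10, 2, occupied=1)]
print(pick_stripe(candidates).n_data)  # -> 10 (nine holes vs. three)
```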
In various embodiments, the data subsystem 102c may be configured to use a data protection scheme that does not require equal-sized disks 108, embodiments of which are described below in conjunction with
In some embodiments, the system 100 includes features used in EMC® XTREMIO®.
Each disk 202 has a given capacity, which may be the same as or different from that of any other disk 202. A disk 202 may be logically divided into relatively small fixed-size chunks (e.g., 4K chunks). In the simplified example of
The process can provide N+k RAID protection, while utilizing the available capacity of disks 202. In an embodiment, most or all of the capacity can be utilized. A stripe may include N data chunks (denoted in
For a given stripe, each of its N+k chunks should be stored on different disks 202 to provide the desired RAID protection. This is illustrated by
For L disks, there are $\binom{L}{N+k}$ (“L choose N+k”) possible layouts for a stripe. The choice of which disks 202 are used to store individual stripes can affect allocation efficiency over the entire array 200. Choosing the optimal layout for a given stripe can be viewed as an optimization problem that may increase in complexity as the number of disks L increases and/or as the stripe size N+k approaches L/2.
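As a worked illustration (the disk count and stripe shape below are arbitrary examples, not values taken from the figures), even a modest array admits a large number of candidate layouts:

```latex
\binom{L}{N+k} = \frac{L!}{(N+k)!\,\bigl(L-(N+k)\bigr)!},
\qquad\text{e.g.,}\quad
\binom{10}{3+2} = \binom{10}{5} = 252 .
```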
To reduce complexity, a heuristic for chunk allocation may be used in some embodiments. Consider each disk 202 as a pool of X fixed-size chunks, where X may vary between disks 202. Per stripe, choose N+k disks 202 across which to store the stripe based upon the amount of free (or “unused”) capacity within each disk 202. In some embodiments, free capacity is measured as the number of unoccupied chunks on a disk. In certain embodiments, free capacity is measured as a percentage (e.g., a percentage of chunks that are unoccupied). When writing a stripe, the set of N+k disks having the largest free capacity may be used.
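The heuristic itself reduces to ranking disks by free chunks. A minimal sketch, assuming free capacity is tracked as a count of unoccupied chunks per disk; the disk identifiers and data structures are hypothetical.

```python
def select_disks(free_chunks: dict[int, int], n: int, k: int) -> list[int]:
    """Pick the N+k disks with the largest number of unoccupied chunks."""
    if len(free_chunks) < n + k:
        raise ValueError("not enough disks to hold an N+k stripe")
    ranked = sorted(free_chunks, key=free_chunks.get, reverse=True)
    return ranked[: n + k]


# Example: a heterogeneous array where disks 0-2 are larger than disks 3-5.
free = {0: 900, 1: 850, 2: 870, 3: 400, 4: 410, 5: 395}
print(select_disks(free, n=3, k=2))  # -> [0, 2, 1, 4, 3] (the five most-free disks)
```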
In some embodiments, the data subsystem 102c keeps track of which stripes are allocated to which disks 202. In one embodiment, the data subsystem 102c tracks the number of unoccupied chunks per disk 202.
As an example, assume that the data subsystem 102c (
At block 304, requests to write chunks of data may be received. In some embodiments, the requests may be received in response to user/client writes. At block 306, writes may be aggregated until there are enough writes to fill the stripe with N data chunks. In some embodiments, the process can aggregate N−M writes, where N is the number of data chunks that can be stored within the stripe and M is the number of those chunks that are currently occupied. In some embodiments, writes can be aggregated using a write cache 102d (
At block 308, the free capacity of each disk within a storage array may be determined. In some embodiments, a disk's free capacity is measured as the number of unoccupied chunks on that disk.
At block 310, N+k disks may be selected using the disk free capacity information. In the embodiment shown, the set of N+k disks with the largest free capacity may be selected. At block 312, k parity chunks may be generated using the N data chunks within the stripe (i.e., the data chunks aggregated at block 306 in addition to any existing data chunks within the stripe). Any suitable technique can be used to generate the parity chunks. At block 314, the N data chunks and the k parity chunks may be written to the selected N+k disks. In some embodiments, one chunk may be written to each of the selected N+k disks.
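Putting the blocks together, the sketch below walks one stripe through disk selection, parity generation, and write-out. The byte-wise XOR repeated for each of the k parity chunks is purely a placeholder and is not a real double-parity code; the disclosure contemplates a RAID-6-like scheme (e.g., as described in U.S. Pat. No. 8,799,705). The helper names, including the write_chunk callback, are assumptions for illustration.

```python
CHUNK_SIZE = 4096


def select_disks(free_chunks: dict[int, int], n: int, k: int) -> list[int]:
    """Pick the N+k disks with the most unoccupied chunks (as sketched above)."""
    return sorted(free_chunks, key=free_chunks.get, reverse=True)[: n + k]


def xor_parity(chunks: list[bytes]) -> bytes:
    """Placeholder parity: byte-wise XOR of the N data chunks.
    A RAID-6-like scheme would instead compute two independent parities."""
    parity = bytearray(CHUNK_SIZE)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)


def write_stripe(data_chunks: list[bytes], free_chunks: dict[int, int], k: int,
                 write_chunk) -> None:
    """Write N data chunks plus k parity chunks, one chunk per selected disk."""
    n = len(data_chunks)
    disks = select_disks(free_chunks, n, k)        # blocks 308-310
    parity_chunks = [xor_parity(data_chunks)] * k  # block 312 (placeholder code)
    for disk, chunk in zip(disks, data_chunks + parity_chunks):
        write_chunk(disk, chunk)                   # block 314: one chunk per disk
        free_chunks[disk] -= 1                     # keep per-disk free counts current
```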
In the embodiment shown, computer instructions 412 may include routing subsystem instructions 412a that may correspond to an implementation of a routing subsystem 102a (
Processing may be implemented in hardware, software, or a combination of the two. In various embodiments, processing is provided by computer programs executing on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product (e.g., in a machine-readable storage device) for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium configured with a computer program, where, upon execution, instructions in the computer program cause the computer to operate.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
All references cited herein are hereby incorporated herein by reference in their entirety.
Having described certain embodiments, which serve to illustrate various concepts, structures, and techniques sought to be protected herein, it will be apparent to those of ordinary skill in the art that other embodiments incorporating these concepts, structures, and techniques may be used. Elements of different embodiments described hereinabove may be combined to form other embodiments not specifically set forth above and, further, elements described in the context of a single embodiment may be provided separately or in any suitable sub-combination. Accordingly, it is submitted that the scope of protection sought herein should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5204958 | Cheng et al. | Apr 1993 | A |
5453998 | Dang | Sep 1995 | A |
5603001 | Sukegawa et al. | Feb 1997 | A |
6085198 | Skinner et al. | Jul 2000 | A |
6125399 | Hamilton | Sep 2000 | A |
6671694 | Baskins et al. | Dec 2003 | B2 |
7073115 | English et al. | Jul 2006 | B2 |
7203796 | Muppalaneni et al. | Apr 2007 | B1 |
7472249 | Cholleti et al. | Dec 2008 | B2 |
7908484 | Haukka et al. | Mar 2011 | B2 |
8341479 | Bichot | Dec 2012 | B2 |
8386425 | Kadayam et al. | Feb 2013 | B1 |
8386433 | Kadayam | Feb 2013 | B1 |
8484536 | Cypher | Jul 2013 | B1 |
8566673 | Kidney | Oct 2013 | B2 |
8694849 | Micheloni | Apr 2014 | B1 |
8706932 | Kanapathippillai | Apr 2014 | B1 |
8799705 | Hallak et al. | Aug 2014 | B2 |
9026729 | Hallak et al. | May 2015 | B1 |
9063910 | Hallak et al. | Jun 2015 | B1 |
9104326 | Frank et al. | Aug 2015 | B2 |
9606734 | Ioannou | Mar 2017 | B2 |
9606870 | Meiri | Mar 2017 | B1 |
9703789 | Bowman et al. | Jul 2017 | B2 |
9841908 | Zhao | Dec 2017 | B1 |
20020002642 | Tyson | Jan 2002 | A1 |
20030061227 | Baskins et al. | Mar 2003 | A1 |
20030196023 | Dickson | Oct 2003 | A1 |
20040267835 | Zwilling et al. | Dec 2004 | A1 |
20060085674 | Ananthamurthy | Apr 2006 | A1 |
20060271540 | Williams | Nov 2006 | A1 |
20070089045 | Corbett et al. | Apr 2007 | A1 |
20070240125 | Degenhardt et al. | Oct 2007 | A1 |
20070283086 | Bates | Dec 2007 | A1 |
20080082969 | Agha et al. | Apr 2008 | A1 |
20080235793 | Schunter et al. | Sep 2008 | A1 |
20090172464 | Byrne | Jul 2009 | A1 |
20090216953 | Rossi | Aug 2009 | A1 |
20100005233 | Hosokawa | Jan 2010 | A1 |
20100250611 | Krishnamurthy | Sep 2010 | A1 |
20110087854 | Rushworth et al. | Apr 2011 | A1 |
20110137916 | Deen et al. | Jun 2011 | A1 |
20110302587 | Nishikawa et al. | Dec 2011 | A1 |
20120023384 | Naradasi et al. | Jan 2012 | A1 |
20120124282 | Frank et al. | May 2012 | A1 |
20120158736 | Milby | Jun 2012 | A1 |
20120204077 | D'Abreu et al. | Aug 2012 | A1 |
20120233432 | Feldman et al. | Sep 2012 | A1 |
20130036289 | Welnicki et al. | Feb 2013 | A1 |
20130212074 | Romanski et al. | Aug 2013 | A1 |
20130290285 | Gopal et al. | Oct 2013 | A1 |
20130318053 | Provenzano et al. | Nov 2013 | A1 |
20130326318 | Haswell | Dec 2013 | A1 |
20130346716 | Resch | Dec 2013 | A1 |
20140019764 | Gopal et al. | Jan 2014 | A1 |
20140032992 | Hara et al. | Jan 2014 | A1 |
20140122823 | Gupta et al. | May 2014 | A1 |
20140188805 | Vijayan | Jul 2014 | A1 |
20140189212 | Slaight | Jul 2014 | A1 |
20140208024 | Simionescu | Jul 2014 | A1 |
20140244598 | Haustein et al. | Aug 2014 | A1 |
20150019507 | Aronovich | Jan 2015 | A1 |
20150098563 | Gulley et al. | Apr 2015 | A1 |
20150149789 | Seo et al. | May 2015 | A1 |
20150186215 | Das Sharma et al. | Jul 2015 | A1 |
20150199244 | Venkatachalam | Jul 2015 | A1 |
20150205663 | Sundaram | Jul 2015 | A1 |
20150269023 | Taranta, II | Sep 2015 | A1 |
20160011941 | He | Jan 2016 | A1 |
20160110252 | Hyun et al. | Apr 2016 | A1 |
20160132270 | Miki | May 2016 | A1 |
20160188487 | Fekete | Jun 2016 | A1 |
20170123995 | Freyensee | May 2017 | A1 |
20170255515 | Kim et al. | Sep 2017 | A1 |
20170262191 | Dewakar | Sep 2017 | A1 |
Number | Date | Country |
---|---|---|
2014-206884 | Oct 2014 | JP |
Entry |
---|
Response to U.S. Office Action dated Jun. 10, 2016 corresponding to U.S. Appl. No. 14/228,971; Response filed Aug. 17, 2016; 10 Pages. |
U.S. Final Office Action dated Oct. 4, 2016 corresponding to U.S. Appl. No. 14/228,971; 37 Pages. |
U.S. Appl. No. 15/282,546, filed Sep. 30, 2016, Kucherov et al. |
U.S. Appl. No. 15/281,593, filed Sep. 30, 2016, Braunschvig et al. |
U.S. Appl. No. 15/281,597, filed Sep. 30, 2016, Bigman. |
Request for Continued Examination (RCE) and Response to U.S. Final Office Action dated Oct. 4, 2016 corresponding to U.S. Appl. No. 14/228,971; RCE and Response filed on Jan. 4, 2017; 19 Pages. |
U.S. Appl. No. 14/228,971, filed Mar. 28, 2014, Shoikhet et al. |
U.S. Appl. No. 14/228,360, filed Mar. 28, 2014, Lempel et al. |
U.S. Appl. No. 14/228,982, filed Mar. 28, 2014, Ben-Moshe et al. |
U.S. Appl. No. 14/229,491, filed Mar. 28, 2014, Luz et al. |
U.S. Appl. No. 14/496,359, filed Sep. 25, 2014, Love et al. |
U.S. Appl. No. 14/751,652, filed Jun. 26, 2015, Natanzon et al. |
U.S. Appl. No. 14/979,890, filed Dec. 28, 2015, Meiri et al. |
U.S. Appl. No. 15/085,168, filed Mar. 30, 2016, Meiri et al. |
U.S. Appl. No. 15/081,137, filed Mar. 25, 2016, Natanzon et al. |
U.S. Appl. No. 15/079,205, filed Mar. 24, 2016, Dorfman et al. |
U.S. Appl. No. 15/079,213, filed Mar. 24, 2016, Ben-Moshe et al. |
U.S. Appl. No. 15/079,215, filed Mar. 24, 2016, Krakov et al. |
U.S. Office Action dated Aug. 27, 2015 corresponding to U.S. Appl. No. 14/228,971; 23 Pages. |
Response to U.S. Office Action dated Aug. 27, 2015 corresponding to U.S. Appl. No. 14/228,971; Response filed on Jan. 14, 2016; 10 Pages. |
U.S. Final Office Action dated Feb. 25, 2016 corresponding to U.S. Appl. No. 14/228,971; 27 Pages. |
U.S. Office Action dated Sep. 22, 2015 corresponding to U.S. Appl. No. 14/228,982; 17 Pages. |
Response to U.S. Office Action dated Sep. 22, 2015 corresponding to U.S. Appl. No. 14/228,982; Response filed on Feb. 1, 2016; 10 Pages. |
Notice of Allowance dated Apr. 26, 2016 corresponding to U.S. Appl. No. 14/228,982; 9 Pages. |
U.S. Office Action dated Jan. 12, 2016 corresponding to U.S. Appl. No. 14/229,491; 12 Pages. |
U.S. Office Action dated Dec. 4, 2014 corresponding to U.S. Appl. No. 14/496,262; 16 Pages. |
Response to U.S. Office Action dated Dec. 4, 2014 corresponding to U.S. Appl. No. 14/496,262; Response filed on Dec. 11, 2014; 12 Pages. |
U.S. Notice of Allowance dated Jan. 9, 2015 corresponding to U.S. Appl. No. 14/496,262; 8 Pages. |
312 Amendment filed Feb. 5, 2015 corresponding to U.S. Appl. No. 14/496,262; 9 Pages. |
U.S. Notice of Allowance dated Mar. 16, 2015 corresponding to U.S. Appl. No. 14/620,631; 10 Pages. |
EMC Corporation, “Introduction to the EMC XtremIO Storage Array;” Version 4.0; White Paper—A Detailed Review; Apr. 2015; 65 Pages. |
Vijay Swami, “XtremIO Hardware/Software Overview & Architecture Deepdive;” EMC On-Line Blog; Nov. 13, 2013; Retrieved from < http://vjswami.com/2013/11/13/xtremio-hardwaresoftware-overview-architecture-deepdive/>; 18 Pages. |
Response to U.S. Non-Final Office Action dated Feb. 9, 2017 for U.S. Appl. No. 14/228,971; Response filed on May 9, 2017; 12 Pages. |
Request for Continued Examination (RCE) and Response to Final Office Action dated Feb. 25, 2016 corresponding to U.S. Appl. No. 14/228,971; Response filed on May 25, 2016; 12 Pages. |
U.S. Office Action dated Jun. 10, 2016 corresponding to U.S. Appl. No. 14/228,971; 27 Pages. |
Response to Office Action dated Jan. 12, 2016 corresponding to U.S. Appl. No. 14/229,491; Response filed on Jun. 2, 2016; 7 Pages. |
Notice of Allowance dated Jul. 25, 2016 corresponding to U.S. Appl. No. 14/229,491; 10 Pages. |
Office Action dated Jul. 15, 2016 corresponding to U.S. Appl. No. 14/751,652; 11 Pages. |
U.S. Non-Final Office Action dated Apr. 21, 2017 for U.S. Appl. No. 15/079,215; 53 Pages. |
U.S. Non-Final Office Action dated Dec. 1, 2017 for U.S. Appl. No. 14/979,890; 10 Pages. |
Response to U.S. Non-Final Office Action dated Oct. 4, 2017 for U.S. Appl. No. 14/228,971; Response filed Jan. 26, 2018; 11 Pages. |
Response to U.S. Non-Final Office Action dated Nov. 13, 2017 for U.S. Appl. No. 15/079,213; Response filed Feb. 13, 2018; 9 Pages. |
Response to U.S. Non-Final Office Action dated Nov. 28, 2017 for U.S. Appl. No. 15/079,205; Response filed Feb. 28, 2018; 11 Pages. |
Response to U.S. Non-Final Office Action dated Dec. 1, 2017 for U.S. Appl. No. 14/979,890; Response filed Feb. 28, 2018; 9 Pages. |
U.S. Non-Final Office Action dated Oct. 4, 2017 for U.S. Appl. No. 14/228,971; 37 pages. |
Notice of Allowance dated Sep. 22, 2017 for U.S. Appl. No. 15/079,215; 9 Pages. |
Response (w/RCE) to U.S. Final Office Action dated Jun. 20, 2017 for U.S. Appl. No. 14/228,971; Response filed Sep. 13, 2017; 14 Pages. |
U.S. Final Office Action dated May 29, 2018 for U.S. Appl. No. 14/228,971; 35 pages. |
U.S. Non-Final Office Action dated May 31, 2018 for U.S. Appl. No. 15/281,593; 10 pages. |
U.S. Final Office Action dated Jun. 20, 2017 for U.S. Appl. No. 14/228,971; 40 Pages. |
U.S. Non-Final Office Action dated Nov. 13, 2017 for U.S. Appl. No. 15/079,213; 9 pages. |
Response to U.S. Non-Final Office Action dated Dec. 22, 2017 for U.S. Appl. No. 15/282,546; Response filed May 17, 2018; 8 Pages. |
U.S. Non-Final Office Action dated Feb. 9, 2017 for U.S. Appl. No. 14/228,971; 38 Pages. |
U.S. Non-Final Office Action dated Nov. 28, 2017 corresponding to U.S. Appl. No. 15/079,205; 9 Pages. |
U.S. Non-Final Office Action dated Dec. 22, 2017 corresponding to U.S. Appl. No. 15/282,546; 12 Pages. |