There are many classes of storage devices. For example, there are solid-state disks (SSDs) and hard disk drives (HDDs). HDDs may be further classified as Serial Attached SCSI (SAS), Near Line SAS (NL-SAS), and Serial ATA (SATA). SSDs may be more expensive but provide faster read/write times compared to HDDs. SAS and NL-SAS may be more expensive but provide faster data retrieval compared to SATA disks. Compression can be used to reduce the amount of data stored on a given class of storage. Within a given data set or workload, some data may be stored compressed, while other data may be stored uncompressed. Some systems support multiple different compression algorithms and provide processes to choose among the different algorithms, and to allow data compressed with different algorithms to coexist.
In accordance with one aspect of the disclosure, a method comprises: determining compression performance data, an expected I/O operations per second (IOPS) value, an expected data set size, and skew data for each of the logical data sets; determining resource requirements for each of the logical data sets using the corresponding skew data, compression performance data, expected data set size, and expected IOPS value; and determining, based on the resource requirements determined for each of the logical data sets, a set of resources for a storage array that can handle the workload.
In various embodiments, the method further comprises collecting I/O statistics for each of the logical data sets, wherein determining an expected I/O operations per second (IOPS) value, an expected data set size, and skew data for a logical data set comprises determining the expected IOPS value, expected data set size, and skew data for the logical data set using the corresponding collected I/O statistics. In another embodiment, the method further comprises determining an application type associated with one or more of the logical data sets, wherein determining compression performance data and skew data for each of the one or more logical data sets comprises retrieving historic compression performance data and skew data for each of the one or more logical data sets.
In some embodiments, determining a set of resources for a storage array that can handle the workload comprises determining a number of processors, a number of storage devices, and an amount of memory that can handle the workload. In one embodiment, determining resource requirements for a logical data set comprises: selecting first and second compression algorithms from a set of available compression algorithms; selecting a data partition value X, where 0≤X≤100; determining storage requirements for the logical data set using the corresponding compression performance data and expected data set size, where the storage requirements are based on compressing X % of data using the first compression algorithm and the remaining data using the second compression algorithm; and determining processing requirements for the logical data set using the corresponding compression performance data, expected IOPS value, and skew data, wherein the processing requirements are based on compressing the most active X % of data using the first compression algorithm and compressing the remaining data using the second compression algorithm.
In another embodiment, the steps of determining storage requirements and determining processing requirements are repeated for different first and second compression algorithms and different data partition values, wherein the resource requirements determined include the least expensive storage requirements and processing requirements determined.
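The exhaustive search over compression-algorithm pairs and data partition values described above can be sketched as follows. The algorithm table, the cost model, and the skew helper `active_pct_for` are illustrative assumptions for this sketch, not part of the disclosure.

```python
from itertools import product

# Per-algorithm (compression_factor, extra_cycles_per_io) for one logical
# data set; the values are illustrative, loosely following TABLE 2.
ALGORITHMS = {
    "none": (1.0, 0),
    "A": (1.5, 100),
    "B": (2.0, 200),
}

def resource_cost(size_tb, iops, active_pct_for, dollars_per_tb, dollars_per_kcycle):
    """Return the cheapest (cost, first_algorithm, second_algorithm, X)
    over all algorithm pairs and data partition values X in [0, 100].

    active_pct_for(x) gives, from the skew data, the percentage of total
    activity generated by the most active x% of data (an assumed helper).
    """
    best = None
    for (f_name, (f_ratio, f_cyc)), (s_name, (s_ratio, s_cyc)) in product(
            ALGORITHMS.items(), repeat=2):
        for x in range(0, 101):
            # Storage: X% compressed with the first algorithm, rest with second.
            storage = (size_tb * (x / 100) / f_ratio
                       + size_tb * ((100 - x) / 100) / s_ratio)
            # Processing: the most active X% of data uses the first algorithm.
            act = active_pct_for(x)
            cycles = (iops * (act / 100) * f_cyc
                      + iops * ((100 - act) / 100) * s_cyc)
            cost = storage * dollars_per_tb + (cycles / 1000) * dollars_per_kcycle
            if best is None or cost < best[0]:
                best = (cost, f_name, s_name, x)
    return best
```

With the processing cost weighted at zero, for example, the search degenerates to picking the highest compression factor for all data, which is a quick sanity check on the sketch.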
According to another aspect of the disclosure, a system comprises one or more processors; a volatile memory; and a non-volatile memory storing computer program code that when executed on the one or more processors causes execution of a process operable to perform embodiments of the method described hereinabove.
According to yet another aspect of the disclosure, a computer program product tangibly embodied in a non-transitory computer-readable medium, the computer-readable medium storing program instructions that are executable to perform embodiments of the method described hereinabove.
The foregoing features may be more fully understood from the following description of the drawings in which:
The drawings are not necessarily to scale, or inclusive of all elements of a system, emphasis instead generally being placed upon illustrating the concepts, structures, and techniques sought to be protected herein.
Before describing embodiments of the structures and techniques sought to be protected herein, some terms are explained. In certain embodiments, as may be used herein, the term “storage system” may be broadly construed so as to encompass, for example, private or public cloud computing systems for storing data as well as systems for storing data comprising virtual infrastructure and those not comprising virtual infrastructure. In some embodiments, as may be used herein, the terms “client,” “customer,” and “user” may refer to any person, system, or other entity that uses a storage system to read/write data.
In many embodiments, as may be used herein, the term “storage device” may refer to any non-volatile memory (NVM) device, including hard disk drives (HDDs), flash devices (e.g., NAND flash devices), and next generation NVM devices, any of which can be accessed locally and/or remotely (e.g., via a storage area network (SAN)). In certain embodiments, the term “storage array” may be used herein to refer to any collection of storage devices. In some embodiments herein, for simplicity of explanation, the term “disk” may be used synonymously with “storage device.”
In certain embodiments, the term “I/O request” or simply “I/O” may be used to refer to an input or output request. In some embodiments, an I/O request may refer to a data read or write request.
In the embodiment of
The compression subsystem 114 can compress data in the storage pool 112 using one or more compression algorithms 114a. Compression algorithms 114a may include one or more compression algorithms—such as Lempel-Ziv (LZ), DEFLATE, and Lempel-Ziv-Renau (LZR)—along with different configurations of each such algorithm. For example, the same LZ compression algorithm used with two different block sizes is treated herein as two different compression algorithms.
The data collection subsystem 116 collects information about one or more logical data sets within the storage array 104. In some embodiments, the data collection subsystem observes I/Os sent from the host to the storage array and collects I/O statistics, such as average I/O operations per second (IOPS) across all extents, average IOPS per extent, average read/write latency, and/or the average data size for reads and writes. In one embodiment, the data collection subsystem collects information about logical data sets by analyzing data stored within the storage pool. In certain embodiments, the data collection subsystem collects information on a per-logical data set basis. Referring to the embodiment of
In many embodiments, the data collection subsystem collects data that describes the performance of each of the available compression algorithms (e.g., each of the available compression algorithms 114a) on each logical data set. Such data is referred to herein as “compression performance data.” In some embodiments, per-logical data set compression performance data includes: (1) the compression ratio achieved by each of the compression algorithms on the data set; (2) average latency added by each of the compression algorithms per I/O on the data set; (3) average processor cycles added by each of the compression algorithms per I/O on the data set; and/or (4) average memory used by each of the compression algorithms per I/O on the data set. In certain embodiments, the term “compression ratio” may refer to the ratio between uncompressed data size and compressed data size (i.e., the size of the data after the compression algorithm is applied).
In some embodiments, compression performance data may be collected for an existing workload by analyzing compressed data stored within a storage pool. In other embodiments, compression performance data may be collected for a new workload by collecting data samples from the workload, compressing data samples using one or more different compression algorithms, and analyzing the compression performance of the compressed samples.
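The sampling approach above can be sketched as follows. The disclosure names LZ, DEFLATE, and LZR; here DEFLATE via Python's zlib is used as a concrete stand-in, with each compression level treated as a distinct algorithm, as in the text.

```python
import os
import zlib

def compression_ratios(samples, levels=(1, 6, 9)):
    """Compression ratio (uncompressed size / compressed size) achieved on a
    set of data samples by DEFLATE at several compression levels; each level
    is treated as a distinct compression algorithm."""
    raw = sum(len(s) for s in samples)
    return {
        level: raw / sum(len(zlib.compress(s, level)) for s in samples)
        for level in levels
    }

# Example: one highly compressible sample and one incompressible one.
samples = [b"abc" * 1000, os.urandom(3000)]
ratios = compression_ratios(samples)
```

A production data collection subsystem would additionally record latency, processor cycles, and memory per I/O for each algorithm, per the list above; this sketch covers only the compression-ratio component.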
Referring to
In the embodiment of
In certain embodiments, the storage array is a flash storage array. In some embodiments, the storage system may include one or more of the features described in U.S. Pat. No. 9,104,326, issued Aug. 11, 2015, entitled “SCALABLE BLOCK DATA STORAGE USING CONTENT ADDRESSING,” which is assigned to the same assignee as this patent application and is incorporated herein by reference in its entirety. In certain embodiments, the storage system may include an EMC® XTREMIO® system.
Referring to
The skew curve 206 can be used to determine which percentage of capacity generates a given percentage of total activity. A given point along the curve 206 can be used to partition a logical data set into two tiers: a most active data tier and a least active data tier. For example, in the embodiment of
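A discrete approximation of this use of the skew curve can be sketched as follows; the function name and sample per-extent IOPS values are illustrative assumptions.

```python
def activity_for_capacity(extent_iops, x):
    """Percentage of total activity generated by the most active x% of
    extents (a discrete approximation of reading a point off a skew curve).

    extent_iops: iterable of per-extent IOPS values for a logical data set.
    """
    ranked = sorted(extent_iops, reverse=True)  # most active tier first
    n_active = round(len(ranked) * x / 100)
    total = sum(ranked)
    return 100 * sum(ranked[:n_active]) / total if total else 0.0

# Example: 10 extents where one hot extent dominates activity, so the most
# active 10% of capacity generates 90% of total activity.
iops = [90, 3, 2, 1, 1, 1, 1, 1, 0, 0]
print(activity_for_capacity(iops, 10))  # -> 90.0
```

Choosing a point along the curve in this way partitions the data set into a most active tier (the first `n_active` extents) and a least active tier (the remainder).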
In various embodiments, a data collection subsystem (e.g., subsystem 116 in
Alternatively, the processing and decision blocks may represent steps performed by functionally equivalent circuits such as a digital signal processor (DSP) circuit or an application specific integrated circuit (ASIC). The flow diagrams do not depict the syntax of any particular programming language but rather illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables, may be omitted for clarity. The particular sequence of blocks described is illustrative only and can be varied without departing from the spirit of the concepts, structures, and techniques sought to be protected herein. Thus, unless otherwise stated, the blocks described below are unordered meaning that, when possible, the functions represented by the blocks can be performed in any convenient or desirable order.
Referring to
Referring again to
Referring to
Referring again to
Referring again to
Referring to
Referring again to
Referring again to
Referring again to
In some embodiments, the compression performance data and/or expected data set size may be determined by collecting I/O statistics from an existing workload as described above in conjunction with
In one embodiment, the storage requirements may be determined using the following equation:
((max_expected_data_set_size*(X/100))/first_compression_factor)+((max_expected_data_set_size*((100−X)/100))/second_compression_factor),
where first_compression_factor is a value for the compression ratio of the first algorithm as applied to the logical data set (according to the compression performance data) and second_compression_factor is a value for the compression ratio of the second algorithm. In one embodiment, considering the compression performance data shown in TABLE 2, the first compression algorithm may be selected to be “No compression,” the second compression algorithm may be selected to be algorithm B, and the data partition value may be selected to be X=1. In such an embodiment, 1% of the data would not be compressed and 99% of the data would be compressed at a ratio of 1:2. In the aforementioned embodiment, further assuming that the expected size for the logical data set is 200 TB, the storage requirements may be calculated as ((200 TB*(1/100))/1)+((200 TB*((100−1)/100))/2), or 101 TB. In some embodiments, the storage requirements may be used to determine the number of storage devices needed to handle the logical data set.
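The storage-requirement equation above can be expressed directly in code; the function name is an illustrative assumption, and the worked example reuses the 200 TB / X=1 / 1:2 figures from the text.

```python
def storage_requirements(expected_size, x, first_ratio, second_ratio):
    """Storage needed when X% of the data is stored with the first
    compression algorithm and the remaining data with the second.

    expected_size: maximum expected data set size (e.g., in TB)
    x: data partition value, 0 <= x <= 100
    first_ratio, second_ratio: compression factors (uncompressed/compressed)
    """
    return (expected_size * (x / 100) / first_ratio
            + expected_size * ((100 - x) / 100) / second_ratio)

# Worked example from the text: 200 TB data set, X=1, no compression (1:1)
# for 1% of the data, algorithm B (1:2) for the remaining 99%.
print(storage_requirements(200, 1, 1, 2))  # -> 101.0 (TB)
```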
Referring again to
In some embodiments, the compression performance data, expected IOPS value, and/or skew data may be determined by collecting I/O statistics from an existing workload as described above in conjunction with
In one embodiment, the processing requirements may be determined using the following equation:
(expected_iops*(most_active_iop_percentage/100)*first_extra_cycles_per_io)+(expected_iops*((100−most_active_iop_percentage)/100)*second_extra_cycles_per_io),
where most_active_iop_percentage is the percentage of activity for which the most active X % of data is responsible (according to the skew data), first_extra_cycles_per_io is the average number of cycles added per I/O by the first compression algorithm (according to the compression performance data), and second_extra_cycles_per_io is the average number of cycles added per I/O by the second compression algorithm. In one embodiment, considering again the compression performance data shown in TABLE 2 and assuming that the first compression algorithm is selected to be “No compression,” the second compression algorithm may be selected to be algorithm B, and the data partition value may be selected to be X=1. In the aforementioned embodiment, further assuming that the expected IOPS value is 50 and, according to the skew data, the most active 1% of data is responsible for 30% of activity, the processing requirements may be calculated as (50 IOPS*(30/100)*0 cycles/IO)+(50 IOPS*(70/100)*200 cycles/IO), or 7000 cycles/s.
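The processing-requirement equation can likewise be expressed in code; the function name is an illustrative assumption, and the worked example reuses the 50 IOPS / 30% skew figures from the text.

```python
def processing_requirements(expected_iops, most_active_iop_pct,
                            first_cycles_per_io, second_cycles_per_io):
    """Extra processor cycles per second due to compression when the most
    active data (responsible for most_active_iop_pct% of activity, per the
    skew data) uses the first algorithm and the rest uses the second."""
    return (expected_iops * (most_active_iop_pct / 100) * first_cycles_per_io
            + expected_iops * ((100 - most_active_iop_pct) / 100)
              * second_cycles_per_io)

# Worked example from the text: 50 IOPS; the most active 1% of data handles
# 30% of activity and is left uncompressed (0 cycles/IO); the rest uses
# algorithm B (200 cycles/IO).
print(processing_requirements(50, 30, 0, 200))  # -> 7000.0 (cycles/s)
```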
In some embodiments, the processing requirements may include only processing cycles that are due to compression. In other embodiments, cycles due to other parts of I/O processing may be included within the processing requirements calculation for a given logical data set.
In various embodiments, the storage requirements may be used to determine a number of storage devices needed to handle the logical data set. In some embodiments, the processing requirements may be used to determine a number of processors needed to handle the logical data set.
In one embodiment, the method also includes determining memory requirements for the logical data set based on memory requirements for the first and second compression algorithms.
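The translation from per-data-set requirements to a concrete resource set might look like the following sketch; the unit capacities (device size, per-processor cycle budget, module size) and the 64 GB memory figure are assumed inputs, not values from the disclosure.

```python
from math import ceil

def size_array(storage_tb, cycles_per_s, memory_gb,
               device_capacity_tb, processor_cycles_per_s, module_gb):
    """Convert storage, processing, and memory requirements into counts of
    storage devices, processors, and memory modules (assumed unit sizes)."""
    return {
        "storage_devices": ceil(storage_tb / device_capacity_tb),
        "processors": ceil(cycles_per_s / processor_cycles_per_s),
        "memory_modules": ceil(memory_gb / module_gb),
    }

# Example: the 101 TB and 7000 cycles/s figures worked out in the text,
# with assumed 4 TB devices, 2 GHz processors, 32 GB modules, and an
# assumed 64 GB memory requirement.
print(size_array(101, 7000, 64, 4, 2_000_000_000, 32))
```

Summing such counts across all logical data sets in the workload yields the set of resources for a storage array that can handle the workload.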
Referring again to
In certain embodiments, the output of the method may be the combination of compression algorithms and a corresponding data partition value that results in minimum processing requirements and/or storage requirements. In particular embodiments, the output of the method may be the combination of compression algorithms and a corresponding data partition value that minimizes cost, based on the required number of storage devices, the required number of processors, and the unit cost of storage devices and processors.
In some embodiments, the tool may be used to size a storage array for a customer without full details of the customer's workload. In various embodiments, a customer may have compression performance data and/or skew data for some but not all logical data sets in a workload. In such embodiments, the tool may use historic information about similar workloads as a substitute for the missing data in order to provide a sizing estimate for the workload. In the embodiment of
In some embodiments, when sizing a workload, an application type may be specified for one or more of its logical data sets (e.g., as part of input 406). In such embodiments, the tool may use the application type to retrieve historic compression performance data and/or skew data for applications of the same type.
In many embodiments, the tool may use at least some of the processing described below in conjunction with
Referring to
Referring to the embodiment of
Referring again to
Referring again to
Referring again to
Referring again to
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate. The program logic may be run on a physical or virtual processor. The program logic may be run across one or more physical or virtual processors.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
All references cited herein are hereby incorporated herein by reference in their entirety.
Having described certain embodiments, which serve to illustrate various concepts, structures, and techniques sought to be protected herein, it will be apparent to those of ordinary skill in the art that other embodiments incorporating these concepts, structures, and techniques may be used. Elements of different embodiments described hereinabove may be combined to form other embodiments not specifically set forth above and, further, elements described in the context of a single embodiment may be provided separately or in any suitable sub-combination. Accordingly, it is submitted that the scope of protection sought herein should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the following claims.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5204958 | Cheng et al. | Apr 1993 | A |
| 6085198 | Skinner et al. | Jul 2000 | A |
| 6125399 | Hamilton | Sep 2000 | A |
| 6671694 | Baskins et al. | Dec 2003 | B2 |
| 7035971 | Merchant | Apr 2006 | B1 |
| 7203796 | Muppalaneni et al. | Apr 2007 | B1 |
| 7472249 | Cholleti et al. | Dec 2008 | B2 |
| 7908484 | Haukka et al. | Mar 2011 | B2 |
| 8356060 | Marwah | Jan 2013 | B2 |
| 8386425 | Kadayam et al. | Feb 2013 | B1 |
| 8386433 | Kadayam | Feb 2013 | B1 |
| 8799705 | Hallak et al. | Aug 2014 | B2 |
| 9104326 | Frank et al. | Aug 2015 | B2 |
| 9367398 | Ben-Moshe et al. | Jun 2016 | B1 |
| 9442941 | Luz et al. | Sep 2016 | B1 |
| 9703789 | Bowman et al. | Jul 2017 | B2 |
| 9733854 | Sharma | Aug 2017 | B2 |
| 9886314 | Borowiec | Feb 2018 | B2 |
| 10116329 | Bigman | Oct 2018 | B1 |
| 20030061227 | Baskins et al. | Mar 2003 | A1 |
| 20040267835 | Zwilling et al. | Dec 2004 | A1 |
| 20060271540 | Williams | Nov 2006 | A1 |
| 20070240125 | Degenhardt et al. | Oct 2007 | A1 |
| 20080082969 | Agha et al. | Apr 2008 | A1 |
| 20080235793 | Schunter et al. | Sep 2008 | A1 |
| 20090216953 | Rossi | Aug 2009 | A1 |
| 20100005233 | Hosokawa | Jan 2010 | A1 |
| 20100250611 | Krishnamurthy | Sep 2010 | A1 |
| 20110087854 | Rushworth et al. | Apr 2011 | A1 |
| 20110137916 | Deen et al. | Jun 2011 | A1 |
| 20110302587 | Nishikawa et al. | Dec 2011 | A1 |
| 20120023384 | Naradasi et al. | Jan 2012 | A1 |
| 20120124282 | Frank et al. | May 2012 | A1 |
| 20120158736 | Milby | Jun 2012 | A1 |
| 20120204077 | D'Abreu et al. | Aug 2012 | A1 |
| 20120233432 | Feldman et al. | Sep 2012 | A1 |
| 20130036289 | Welnicki et al. | Feb 2013 | A1 |
| 20130212074 | Romanski et al. | Aug 2013 | A1 |
| 20130290285 | Gopal et al. | Oct 2013 | A1 |
| 20130318053 | Provenzano et al. | Nov 2013 | A1 |
| 20130326318 | Haswell | Dec 2013 | A1 |
| 20130346716 | Resch | Dec 2013 | A1 |
| 20140019764 | Gopal et al. | Jan 2014 | A1 |
| 20140032992 | Hara et al. | Jan 2014 | A1 |
| 20140122823 | Gupta et al. | May 2014 | A1 |
| 20140244598 | Haustein et al. | Aug 2014 | A1 |
| 20150019507 | Aronovich | Jan 2015 | A1 |
| 20150098563 | Gulley et al. | Apr 2015 | A1 |
| 20150149789 | Seo et al. | May 2015 | A1 |
| 20150186215 | Das Sharma et al. | Jul 2015 | A1 |
| 20150199244 | Venkatachalam et al. | Jul 2015 | A1 |
| 20150205663 | Sundaram et al. | Jul 2015 | A1 |
| 20150263986 | Park | Sep 2015 | A1 |
| 20160011941 | He et al. | Jan 2016 | A1 |
| 20160110252 | Hyun et al. | Apr 2016 | A1 |
| 20160132270 | Miki | May 2016 | A1 |
| 20170115878 | Dewaikar | Apr 2017 | A1 |
| 20170123995 | Freyensee et al. | May 2017 | A1 |
| 20170255515 | Kim et al. | Sep 2017 | A1 |
| Number | Date | Country |
|---|---|---|
| 2014-206884 | Oct 2014 | JP |
| WO-2017070420 | Apr 2017 | WO |
| Entry |
|---|
| Notice of Allowance dated Sep. 22, 2017 for U.S. Appl. No. 15/079,215; 9 Pages. |
| Response (w/RCE) to U.S. Final Office Action dated Jun. 20, 2017 for U.S. Appl. No. 14/228,971; Response filed Sep. 13, 2017; 14 Pages. |
| U.S. Appl. No. 14/228,971, filed Mar. 28, 2014, Shoikhet et al. |
| U.S. Appl. No. 14/979,890, filed Dec. 28, 2015, Meiri et al. |
| U.S. Appl. No. 15/079,205, filed Mar. 24, 2016, Dorfman et al. |
| U.S. Appl. No. 15/079,208, filed Mar. 24, 2016, Ben-Moshe et al. |
| U.S. Appl. No. 15/079,213, filed Mar. 24, 2016, Ben-Moshe et al. |
| U.S. Appl. No. 15/079,215, filed Mar. 25, 2016, Krakov et al. |
| U.S. Appl. No. 15/282,546, filed Sep. 30, 2016, Kucherov et al. |
| U.S. Appl. No. 15/281,593, filed Sep. 30, 2016, Braunschvig et al. |
| U.S. Appl. No. 14/976,532, filed Dec. 21, 2015, Bigman. |
| U.S. Appl. No. 15/086,565, filed Mar. 31, 2016, Bigman. |
| U.S. Office Action dated Aug. 27, 2015 corresponding to U.S. Appl. No. 14/228,971; 23 Pages. |
| Response to U.S. Office Action dated Aug. 27, 2015 corresponding to U.S. Appl. No. 14/228,971; Response filed on Jan. 14, 2016; 10 Pages. |
| U.S. Final Office Action dated Feb. 25, 2016 corresponding to U.S. Appl. No. 14/228,971; 27 Pages. |
| Request for Continued Examination (RCE) and Response to Final Office Action dated Feb. 25, 2016 corresponding to U.S. Appl. No. 14/228,971; Response filed on May 25, 2016; 12 Pages. |
| U.S. Office Action dated Jun. 10, 2016 corresponding to U.S. Appl. No. 14/228,971; 27 Pages. |
| Response to U.S. Office Action dated Jun. 10, 2016 corresponding to U.S. Appl. No. 14/228,971; Response filed Aug. 17, 2016; 10 Pages. |
| U.S. Final Office Action dated Oct. 4, 2016 corresponding to U.S. Appl. No. 14/228,971; 37 Pages. |
| U.S. Office Action dated Sep. 22, 2015 corresponding to U.S. Appl. No. 14/228,982; 17 Pages. |
| Response to U.S. Office Action dated Sep. 22, 2015 corresponding to U.S. Appl. No. 14/228,982; Response filed on Feb. 1, 2016; 10 Pages. |
| Notice of Allowance dated Apr. 26, 2016 corresponding to U.S. Appl. No. 14/228,982; 9 Pages. |
| U.S. Office Action dated Jan. 12, 2016 corresponding to U.S. Appl. No. 14/228,491; 12 Pages. |
| Response to Office Action dated Jan. 12, 2016 corresponding to U.S. Appl. No. 14/229,491; Response filed on Jun. 2, 2016; 7 Pages. |
| Notice of Allowance dated Jul. 25, 2016 corresponding to U.S. Appl. No. 14/229,491; 10 Pages. |
| EMC Corporation, “Introduction to the EMC XtremIO Storage Array;” Version 4.0; White Paper—A Detailed Review; Apr. 2015; 65 Pages. |
| Vijay Swami, “XtremIO Hardware/Software Overview & Architecture Deepdive;” EMC On-Line Blog; Nov. 13, 2013; Retrieved from < http://vjswami.com/2013/11/13/xtremio-hardwaresoftware-overview-architecture-deepdive/>; 18 Pages. |
| Jon Klaus “EMC World—Day 4—VNX: Skew & Data Placement;” FastStorage, High Octane IT; Blog posted May 16, 2013; http://faststorage.eu/emc-world-day-4-vnx-skew-data-placement/; 5 Pages. |
| Response to U.S. Non-Final Office Action dated Oct. 4, 2017 corresponding to U.S. Appl. No. 14/228,971; Response filed Jan. 26, 2018; 11 Pages. |
| U.S. Non-Final Office Action dated Nov. 13, 2017 for U.S. Appl. No. 15/079,213, 9 pages. |
| U.S. Non-Final Office Action dated Feb. 9, 2017 for U.S. Appl. No. 14/228,971; 38 Pages. |
| U.S. Non-Final Office Action dated Dec. 1, 2017 for U.S. Appl. No. 14/979,890; 10 Pages. |
| U.S. Non-Final Office Action dated Apr. 21, 2017 for U.S. Appl. No. 15/079,215; 53 Pages. |
| Request for Continued Examination (RCE) and Response to Final Office Action dated Oct. 4, 2016 corresponding to U.S. Appl. No. 14/228,971; RCE and Response filed on Jan. 4, 2017; 19 Pages. |
| Response to U.S. Non-Final Office Action dated Dec. 1, 2017 for U.S. Appl. No. 14/979,890; Response filed on Feb. 28, 2018; 9 Pages. |
| Response to U.S. Non-Final Office Action dated Nov. 28, 2017 for U.S. Appl. No. 15/079,205; Response filed on Feb. 28, 2018; 11 Pages. |
| Response to U.S. Non-Final Office Action dated Nov. 13, 2017 for U.S. Appl. No. 15/079,213; Response filed on Feb. 13, 2018; 9 Pages. |
| Response to U.S. Non-Final Office Action dated Feb. 9, 2017 for U.S. Appl. No. 14/228,971; Response filed on May 9, 2017; 12 Pages. |
| Response to U.S. Non-Final Office Action dated Apr. 21, 2017 for U.S. Appl. No. 15/079,215; Response filed on Jul. 21, 2017; 9 Pages. |
| U.S. Final Office Action dated Jun. 20, 2017 for U.S. Appl. No. 14/228,971; 40 Pages. |
| U.S. Final Office Action dated May 29, 2018 for U.S. Appl. No. 14/228,971; 35 pages. |
| U.S. Non-Final Office Action dated May 31, 2018 for U.S. Appl. No. 15/281,593; 10 pages. |
| Response to Office Action dated Jun. 2, 2017 from U.S. Appl. No. 15/079,208, filed Sep. 5, 2017; 10 Pages. |
| Response to U.S. Non-Final Office Action dated Dec. 29, 2017 for U.S. Appl. No. 15/079,208; Response filed on Apr. 30, 2018; 7 Pages. |
| Response to U.S. Non-Final Office Action dated Dec. 22, 2017 for U.S. Appl. No. 15/282,546; Response filed May 17, 2018; 8 Pages. |
| U.S. Non-Final Office Action dated Nov. 28, 2017 corresponding to U.S. Appl. No. 15/079,205; 9 Pages. |
| U.S. Non-Final Office Action dated Dec. 29, 2017 corresponding to U.S. Appl. No. 15/079,208; 10 Pages. |
| U.S. Non-Final Office Action dated Dec. 22, 2017 corresponding to U.S. Appl. No. 15/282,546; 12 Pages. |
| U.S. Non-Final Office Action dated Oct. 4, 2017 for U.S. Appl. No. 14/228,971; 37 pages. |
| U.S. Non-Final Office Action dated Jun. 2, 2017 for U.S. Appl. No. 15/079,208; 19 Pages. |
| Notice of Allowance dated Sep. 10, 2018 for U.S. Appl. No. 15/281,593; 9 Pages. |
| Response to U.S. Non-Final Office Action dated May 31, 2018 for U.S. Appl. No. 15/281,593; Response filed on Jul. 2, 2018; 13 pages. |