The present invention relates generally to data storage. More particularly, the present invention relates to a method, an apparatus and a computer program product for managing data storage.
Systems may include different resources used by one or more host processors. Resources and host processors in the system may be interconnected by one or more communication connections, such as network connections. These resources may include, for example, data storage devices such as those included in the data storage systems manufactured by Dell EMC. These data storage systems may be coupled to one or more host processors and provide storage services to each host processor. Multiple data storage systems from one or more different vendors may be connected and may provide common data storage for one or more host processors in a computer system.
A host may perform a variety of data processing tasks and operations using the data storage system. For example, a host may perform basic system I/O (input/output) operations in connection with data requests, such as data read and write operations.
Host systems may store and retrieve data using a data storage system containing a plurality of host interface units, disk drives (or more generally storage devices), and disk interface units. Such data storage systems are provided, for example, by Dell EMC of Hopkinton, Mass. The host systems access the storage devices through a plurality of channels provided therewith. Host systems provide data and access control information through the channels to a storage device of the data storage system, and data of the storage device is provided from the data storage system to the host systems, also through the channels. The host systems do not address the disk drives of the data storage system directly, but rather, access what appears to the host systems as a plurality of files, objects, logical units, logical devices or logical volumes. These may or may not correspond to the actual physical drives. Allowing multiple host systems to access the single data storage system allows the host systems to share data stored therein.
Generally, with the increasing amounts of information being stored, it may be beneficial to efficiently store and manage that information. While there may be numerous techniques for storing and managing information, each technique may have tradeoffs between reliability and efficiency.
There is disclosed a method, comprising: defining, for each of a plurality of data storage drives, one or more areas on a data storage drive such that each area on the data storage drive corresponds to an area associated with similar I/O characteristics on the other data storage drives; selecting two or more drive extents from corresponding areas on different data storage drives of the plurality of data storage drives; and forming a RAID extent based on the selected drive extents.
There is also disclosed an apparatus, comprising: memory; and processing circuitry coupled to the memory, the memory storing instructions which, when executed by the processing circuitry, cause the processing circuitry to: define, for each of a plurality of data storage drives, one or more areas on a data storage drive such that each area on the data storage drive corresponds to an area associated with similar I/O characteristics on the other data storage drives; select two or more drive extents from corresponding areas on different data storage drives of the plurality of data storage drives; and form a RAID extent based on the selected drive extents.
There is also disclosed a computer program product having a non-transitory computer readable medium which stores a set of instructions, the set of instructions, when carried out by processing circuitry, causing the processing circuitry to perform a method of: defining, for each of a plurality of data storage drives, one or more areas on a data storage drive such that each area on the data storage drive corresponds to an area associated with similar I/O characteristics on the other data storage drives; selecting two or more drive extents from corresponding areas on different data storage drives of the plurality of data storage drives; and forming a RAID extent based on the selected drive extents.
The invention will be more clearly understood from the following description of preferred embodiments thereof, which are given by way of examples only, with reference to the accompanying drawings, in which:
Embodiments of the invention will now be described. It should be understood that the embodiments described below are provided only as examples, in order to illustrate various features and principles of the invention, and that the invention is broader than the specific embodiments described below.
Data storage systems balance I/O (Input/Output) operations across their data storage drives in order to maximize performance. Traditionally, data storage systems maximized performance by moving logical space units (e.g., slices) between RAID (redundant array of independent disks) groups based on the amount of I/O provided by the RAID groups. It should be understood that the amount of I/O associated with each RAID group in a data storage system may differ due to the performance characteristics of its drives and the RAID level, such that movement of slices to RAID groups associated with higher I/O may help increase performance of the data storage system. The data storage systems could then use this information (i.e., the I/O capability of every RAID group) together with the I/O level corresponding to every slice to move each slice to the most appropriate RAID group.
Mapped RAID is a new approach for information protection, which is more flexible and provides better performance and economic characteristics than legacy RAID. It should be understood that Mapped RAID splits drives in a partnership group into a set of drive extents (DE's). A number of the drive extents from different physical drives are then combined into RAID extents in accordance with the RAID level. For example, 5 DE's may be combined to create a 4+1 RAID extent. The RAID extents are then combined to create a rotation group formed from RAID extents situated on the different physical drives. For example, if the partnership group includes 15 HDD's and the type of RAID is 4+1, a rotation group will include 3 RAID extents. The reason for introducing the rotation group is that an object situated within a group can be accessed using all or most of the physical drives in parallel. A slice is situated completely within a rotation group. The rotation groups are then combined into RAID groups, and a number of RAID groups can be produced from the single set of physical drives forming the partnership group.
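By way of a non-limiting illustration, the arithmetic above may be sketched as follows in Python; the function and variable names are purely hypothetical and do not correspond to any actual implementation.

def raid_extent_width(data_drives, parity_drives):
    # Number of drive extents per RAID extent, e.g. 4+1 RAID-5 -> 5.
    return data_drives + parity_drives

def rotation_group_size(partnership_group_drives, extent_width):
    # RAID extents per rotation group, so that a single rotation group can
    # span all or most drives of the partnership group in parallel.
    return partnership_group_drives // extent_width

width = raid_extent_width(4, 1)            # 4+1 RAID -> 5 drive extents per RAID extent
print(rotation_group_size(15, width))      # 15 HDD's -> 3 RAID extents per rotation group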
As will be understood from the foregoing, in Mapped RAID, all data storage drives (e.g., HDD's) may be part of a single pool, whereby RAID groups created on top of it are situated on the same physical drives and a single slice can span all the drives (and even multiple times). However, the I/O performance of an HDD depends on which cylinders are used. For example, the HDD I/O performance may be different depending on whether the outer or inner cylinders of the drive are used.
The prior art does not make any attempt to create RAID groups using rotation groups from the same range of cylinders and then explicitly account for their greater or lesser I/O capabilities. Another problem is that different RAID groups created on the same disk partnership group use the same set of drives, so the legacy auto-tiering and I/O balancing approach does not work directly. In the legacy world, the RAID groups are independent, and their I/O capabilities can be summed and used in parallel (to some extent). In mapped RAID, RAID groups created on the same set of drives share the same I/O capabilities.
The techniques described herein form RAID extents from drive extents associated with corresponding areas of different data storage drives. As discussed above, the I/O performance of an HDD depends on which cylinders are used. The outer ones may provide a 1.5× advantage over the inner ones. Thus, RAID extents formed from the outer cylinders of HDDs will be able to fully utilize the I/O potential of the drives. It, therefore, makes sense to create RAID extents (and rotation and RAID groups) from drive extents situated on the same cylinders to have a uniform I/O rate across the whole group. This approach, advantageously, facilitates effective utilization of the different I/O capabilities of different HDD cylinder ranges.
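The benefit of drawing all drive extents of a RAID extent from the same cylinder range may be approximated with a simple model; the 1.5× figure is the illustrative ratio mentioned above, and the assumption that a full-stripe access is limited by its slowest drive extent is a simplification made only for this sketch.

INNER, OUTER = 1.0, 1.5   # assumed relative throughput of inner vs. outer cylinder extents

def stripe_rate(extent_rates):
    # Assume a full-stripe access proceeds at the pace of its slowest drive extent.
    return min(extent_rates) * len(extent_rates)

mixed = stripe_rate([OUTER, OUTER, INNER, OUTER, INNER])   # RAID extent mixing areas
uniform_outer = stripe_rate([OUTER] * 5)                   # RAID extent from outer cylinders only
print(mixed, uniform_outer)                                # 5.0 versus 7.5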
The array of data storage drives 128 may include data storage drives such as magnetic disk drives, solid state drives, hybrid drives, and/or optical drives. The array of data storage drives 128 may be directly physically connected to and/or contained within storage processor 120, and/or may be communicably connected to storage processor 120 by way of one or more computer networks, e.g. including or consisting of a Storage Area Network (SAN) or the like.
In some embodiments, host I/O processing logic 135 (e.g. RAID logic 142 and/or drive extent pool logic 134) compares the total number of data storage drives that are contained in array of data storage drives 128 to a maximum partnership group size. In response to determining that the number of data storage drives that are contained in array of data storage drives 128 exceeds a maximum partnership group size, host I/O processing logic 135 divides the data storage drives in array of data storage drives 128 into multiple partnership groups, each one of which contains a total number of data storage drives that does not exceed the maximum partnership group size, and such that each data storage drive in the array of data storage drives 128 is contained in only one of the resulting partnership groups. In the example of
In some embodiments, the maximum partnership group size may be configured to a value that is at least twice as large as the minimum number of data storage drives that is required to provide a specific level of RAID data protection. For example, the minimum number of data storage drives that is required to provide 4D+1P RAID-5 must be greater than five, e.g. six or more, and accordingly an embodiment or configuration that supports 4D+1P RAID-5 may configure the maximum partnership group size to a value that is twelve or greater. In another example, the minimum number of data storage drives that is required to provide 4D+2P RAID-6 must be greater than six, e.g. seven or more, and accordingly in an embodiment or configuration that supports 4D+2P RAID-6 the maximum partnership group size may be configured to a value that is fourteen or greater. By limiting the number of data storage drives contained in a given partnership group to a maximum partnership group size, the disclosed technology advantageously limits the risk that an additional disk will fail while a rebuild operation is being performed using data and parity information that is stored within the partnership group in response to the failure of a data storage drive contained in the partnership group, since the risk of an additional disk failing during the rebuild operation increases with the total number of data storage drives contained in the partnership group. In some embodiments, the maximum partnership group size may be a configuration parameter set equal to a highest number of data storage drives that can be organized together into a partnership group that maximizes the amount of concurrent processing that can be performed during a rebuild process resulting from a failure of one of the data storage drives contained in the partnership group.
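A minimal sketch of the partitioning described above is given below, assuming only that the maximum partnership group size is at least twice the minimum drive count for the RAID level; the even split across groups is one of several reasonable policies and is not mandated by the description.

import math

def split_into_partnership_groups(drive_indices, max_group_size, min_drives_for_raid):
    # Divide drives into partnership groups, each no larger than max_group_size,
    # with every drive contained in exactly one group.
    if max_group_size < 2 * min_drives_for_raid:
        raise ValueError("maximum partnership group size should be at least 2x the RAID minimum")
    total = len(drive_indices)
    num_groups = math.ceil(total / max_group_size)
    groups, start = [], 0
    for g in range(num_groups):
        size = total // num_groups + (1 if g < total % num_groups else 0)
        groups.append(drive_indices[start:start + size])
        start += size
    return groups

# 16 drives, 4D+1P RAID-5 with a minimum of 6 drives, maximum group size 12.
print(split_into_partnership_groups(list(range(16)), 12, 6))   # -> two groups of 8 drives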
Memory 126 in storage processor 120 stores program code that is executable on processing circuitry 124. Memory 126 may include volatile memory (e.g. RAM), and/or other types of memory. The processing circuitry 124 may, for example, include or consist of one or more microprocessors, e.g. central processing units (CPUs), multi-core processors, chips, and/or assemblies, and associated circuitry. The processing circuitry 124 and memory 126 together form control circuitry, which is configured and arranged to carry out various methods and functions as described herein. The memory 126 stores a variety of software components that may be provided in the form of executable program code. For example, as shown in
Drive extent pool logic 134 generates drive extent pool 136 by dividing each one of the data storage drives in the array of data storage drives 128 into multiple, equal size drive extents. Each drive extent consists of a physically contiguous range of non-volatile data storage that is located on a single drive. For example, drive extent pool logic 134 may divide each one of the data storage drives in the array of data storage drives 128 into multiple, equal size drive extents of physically contiguous non-volatile storage, and add an indication (e.g. a drive index and a drive extent index, etc.) of each one of the resulting drive extents to drive extent pool 136. The size of the drive extents into which the data storage drives are divided is the same for every data storage drive. Various specific fixed sizes of drive extents may be used in different embodiments. For example, in some embodiments each drive extent may have a size of 10 gigabytes. Larger or smaller drive extent sizes may be used in alternative embodiments.
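The drive extent pool may be modelled simply as a list of (drive index, drive extent index) pairs; the 10 gigabyte extent size and the drive capacities used below are example values only.

def build_drive_extent_pool(drive_capacities_gb, extent_size_gb=10):
    # Divide each drive into equal-size drive extents and record each extent
    # as a (drive_index, extent_index) pair of physically contiguous storage.
    pool = []
    for drive_index, capacity_gb in enumerate(drive_capacities_gb):
        for extent_index in range(capacity_gb // extent_size_gb):
            pool.append((drive_index, extent_index))
    return pool

pool = build_drive_extent_pool([4000, 4000, 4000])   # three 4 TB drives
print(len(pool))                                     # -> 1200 drive extents of 10 GB each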
RAID logic 142 generates RAID extent table 144, which contains multiple RAID extent entries. RAID logic 142 also allocates drive extents from drive extent pool 136 to specific RAID extent entries that are contained in the RAID extent table 144. For example, each row of RAID extent table 144 may consist of a RAID extent entry which may indicate multiple drive extents, and to which multiple drive extents may be allocated. Each RAID extent entry in the RAID extent table 144 indicates the same number of allocated drive extents.
Drive extents are allocated to RAID extent entries in the RAID Extent Table 144 such that no two drive extents indicated by any single RAID extent entry are located on the same data storage drive.
Each RAID extent entry in the RAID extent table 144 may represent a RAID stripe and indicates i) a first set of drive extents that are used to persistently store host data, and ii) a second set of drive extents that are used to store parity information. For example, in a 4D+1P RAID-5 configuration, each RAID extent entry in the RAID extent table 144 indicates four drive extents that are used to store host data and one drive extent that is used to store parity information. In another example, in a 4D+2P RAID-6 configuration, each RAID extent entry in the RAID extent table 144 indicates four drive extents that are used to store host data and two drive extents that are used to store parity information.
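One way to honor the rule that no two drive extents indicated by a single RAID extent entry are located on the same drive is sketched below; preferring the drives with the most free extents is an assumed allocation policy, not the only valid one.

from collections import defaultdict

def allocate_raid_extent(pool, width):
    # Pick `width` free drive extents, all located on different drives,
    # preferring drives with the largest number of unallocated extents.
    by_drive = defaultdict(list)
    for drive, extent in pool:
        by_drive[drive].append(extent)
    if len(by_drive) < width:
        raise ValueError("not enough distinct drives to fill one RAID extent entry")
    chosen = sorted(by_drive, key=lambda d: len(by_drive[d]), reverse=True)[:width]
    entry = [(d, by_drive[d][0]) for d in chosen]
    for allocated in entry:
        pool.remove(allocated)
    return entry

pool = [(d, e) for d in range(6) for e in range(10)]   # six drives, ten extents each
print(allocate_raid_extent(pool, 5))                   # five extents, each on a different drive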
RAID logic 142 also divides the RAID extent entries in the RAID extent table 144 into multiple RAID extent groups. Accordingly, multiple RAID extent groups of RAID extent entries are contained in the RAID extent table 144. In the example of
The drive extent pool 136 may also include a set of unallocated drive extents located on data storage drives in partnership group A 130 and associated with RAID extent group 1 146, that may be allocated to RAID extent entries in RAID extent group 1 146 in the event of a data storage drive failure, i.e. to replace drive extents that are located on a failed data storage drive contained in partnership group A 130. Similarly, drive extent pool 136 may also include a set of unallocated drive extents located on data storage drives in partnership group B 132 and associated with RAID extent group 2 148, that may be allocated to RAID extent entries in RAID extent group 2 148 in the event of a data storage drive failure, i.e. to replace drive extents that are located on a failed data storage drive contained in partnership group B 132.
When a drive extent is allocated to a RAID extent entry, an indication of the drive extent is stored into that RAID extent entry. For example, a drive extent allocated to a RAID extent entry may be indicated within that RAID extent entry by storing a pair of indexes “m|n” into that RAID extent entry, where “m” indicates a drive index of the data storage drive on which the drive extent is located (e.g. a numeric drive number within array of data storage drives 128, a slot number within which the physical drive is located, a textual drive name, etc.), and “n” indicates an index of the drive extent within the data storage drive (e.g. a numeric drive extent number, a block offset, a sector number, etc.). For example, in embodiments in which data storage drives are indexed within array of data storage drives 128 starting with 0, and in which drive extents are indexed within the data storage drive that contains them starting with 0, a first drive extent of drive 0 in array of data storage drives 128 may be represented by “0|0”, a second drive extent within drive 0 may be represented by “0|1”, and so on.
The RAID logic 142 divides the RAID extent entries in each one of the RAID extent groups into multiple rotation groups. For example, RAID logic 142 divides RAID extent group 1 146 into a set of N rotation groups made up of rotation group 0 150, rotation group 1 152, and so on through rotation group N 154. RAID logic 142 also divides RAID extent group 2 148 into rotation groups 156. Each RAID extent group may be divided into an integral number of rotation groups, such that each individual rotation group is completely contained within a single one of the RAID extent groups. Each individual RAID extent entry is contained in only one rotation group. Within a RAID extent group, each rotation group contains the same number of RAID extent entries. Accordingly, each one of the N rotation groups made up of rotation group 0 150, rotation group 1 152, through rotation group N 154 in RAID extent group 1 146 contains the same number of RAID extent entries. Similarly, each one of the rotation groups in rotation groups 156 contains the same number of RAID extent entries.
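Dividing a RAID extent group into an integral number of equal-size rotation groups can be sketched as a simple chunking operation, under the assumption (consistent with the description above) that the rotation group size divides the number of RAID extent entries evenly.

def divide_into_rotation_groups(raid_extent_entries, entries_per_rotation_group):
    # Chunk a RAID extent group into rotation groups of equal size, so that
    # each RAID extent entry is contained in exactly one rotation group.
    if len(raid_extent_entries) % entries_per_rotation_group != 0:
        raise ValueError("RAID extent group must divide into whole rotation groups")
    return [raid_extent_entries[i:i + entries_per_rotation_group]
            for i in range(0, len(raid_extent_entries), entries_per_rotation_group)]

# Twelve RAID extent entries (numbered 0..11), three entries per rotation group.
print(divide_into_rotation_groups(list(range(12)), 3))   # -> four rotation groups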
In at least one embodiment, storage object logic 160 generates at least one corresponding logical unit (LUN) for each one of the RAID extent groups in RAID extent table 144. In the example of
Each one of the LUNs generated by storage object logic 160 is made up of multiple, equal sized slices. Each slice in a LUN represents an addressable portion of the LUN, through which non-volatile storage indicated by RAID extent entries in the corresponding RAID extent group is accessed. For example, each slice may span some predetermined amount of the LUN's logical address space, e.g. 256 megabytes, 512 megabytes, one gigabyte, or some other specific amount of the LUN's logical address space.
For example, as shown in
The storage object logic 160 uses individual slices of LUN 161 and LUN 176 to access the non-volatile storage that is to be used to store host data when processing write I/O operations within host I/O operations 112, and from which host data is to be read when processing read I/O operations within host I/O operations 112. For example, non-volatile storage may be accessed through specific slices of LUN 161 and/or LUN 176 in order to support one or more storage objects (e.g. other logical disks, file systems, etc.) that are exposed to hosts 110 by data storage system 116. Alternatively, slices within LUN 161 and/or LUN 176 may be exposed directly to write I/O operations and/or read I/O operations contained within host I/O operations 112.
For each one of LUNs 161 and 176, all host data that is directed to each individual slice in the LUN is completely stored in the drive extents that are indicated by the RAID extent entries contained in a rotation group to which the slice is mapped according to a mapping between the slices in the LUN and the rotation groups in the RAID extent group corresponding to the LUN. For example, mapping 158 maps each slice in LUN 161 to a rotation group in RAID extent group 1 146. Accordingly, all host data in write I/O operations directed to a specific slice in LUN 161 is completely stored in drive extents that are indicated by the RAID extent entries contained in a rotation group in RAID extent group 1 146 to which that slice is mapped according to mapping 158.
Mapping 178 maps each slice in LUN 176 to a rotation group in RAID extent group 2 148. Accordingly, all host data in write I/O operations directed to a specific slice in LUN 176 is completely stored in drive extents that are indicated by the RAID extent entries contained in a rotation group in RAID extent group 2 148 to which that slice is mapped according to mapping 178.
In some embodiments, multiple slices may be mapped to individual rotation groups, and the host data directed to all slices that are mapped to an individual rotation group is stored on drive extents that are indicated by the RAID extent entries contained in that rotation group.
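The mapping between slices and rotation groups is not constrained to any particular scheme by the description above; a simple round-robin mapping such as the following is one plausible and easily computed choice.

def rotation_group_for_slice(slice_index, num_rotation_groups):
    # Map a slice of a LUN to one rotation group of the LUN's RAID extent group;
    # all host data for the slice is then stored only in drive extents indicated
    # by that rotation group's RAID extent entries.
    return slice_index % num_rotation_groups

print([rotation_group_for_slice(s, 4) for s in range(8)])   # -> [0, 1, 2, 3, 0, 1, 2, 3]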
In some embodiments, storing host data in write I/O operations directed to a specific slice into the drive extents that are indicated by the RAID extent entries contained in the rotation group to which that slice is mapped may include striping portions (e.g. blocks) of the host data written to the slice across the drive extents indicated by one or more of the RAID extent entries contained in the rotation group, e.g. across the drive extents indicated by one or more of the RAID extent entries contained in the rotation group that are used to store data. Accordingly, for example, in a 4D+1P RAID-5 configuration, the disclosed technology may operate by segmenting the host data directed to a given slice into sequential blocks, and storing consecutive blocks of the slice onto different ones of the drive extents used to store data that are indicated by one or more of the RAID extent entries contained in the rotation group to which the slice is mapped.
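For the 4D+1P case, the segmenting and striping just described might be sketched as follows; the block size and the round-robin placement order are illustrative assumptions, and parity handling is omitted for brevity.

def stripe_slice(host_data, data_extents=4, block_size=4):
    # Segment the host data directed to a slice into sequential blocks and place
    # consecutive blocks onto different data drive extents (parity not shown).
    placement = [[] for _ in range(data_extents)]
    blocks = [host_data[i:i + block_size] for i in range(0, len(host_data), block_size)]
    for block_number, block in enumerate(blocks):
        placement[block_number % data_extents].append(block)
    return placement

for extent, blocks in enumerate(stripe_slice(bytes(range(32)))):
    print("data drive extent", extent, "holds", len(blocks), "blocks")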
The size of each LUN generated by storage object logic 160 is a sum of the capacities of the drive extents that are indicated by the RAID extent entries in the corresponding RAID extent group that are used to persistently store host data that is directed to the slices contained in the LUN. For example, the size of LUN 161 is a sum of the capacities of the drive extents that are indicated by the RAID extent entries in RAID extent group 1 146 and that are used to store host data that is directed to the slices contained in LUN 161.
While for purposes of concise illustration only one rotation group (i.e., rotation group 450) is shown in
The RAID extent group 402 may be contained in a RAID extent table in embodiments or configurations that provide mapped 4D+1P RAID-5 striping and data protection. Accordingly, within each RAID extent entry in RAID extent group 402, four of the five indicated drive extents are used to store host data, and one of the five indicated drive extents is used to store parity information.
RAID extent entry 0 is shown for purposes of illustration indicating a first drive extent 2|0, which is the first drive extent in data storage drive 2 408, a second drive extent 4|0, which is the first drive extent in data storage drive 4 412, a third drive extent 5|0, which is the first drive extent in data storage drive 5 414, a fourth drive extent 8|0, which is the first drive extent in data storage drive 8 420, and a fifth drive extent 9|0, which is the first drive extent in data storage drive 9 422.
RAID extent entry 1 is shown for purposes of illustration indicating a first drive extent 0|1, which is the second drive extent in data storage drive 0 404, a second drive extent 1|0, which is the first drive extent in data storage drive 1 406, a third drive extent 3|1, which is the second drive extent in data storage drive 3 410, a fourth drive extent 6|0, which is the first drive extent in data storage drive 6 416, and a fifth drive extent 7|0, which is the first drive extent in data storage drive 7 418.
RAID extent entry 2 is shown for purposes of illustration indicating a first drive extent 0|2, which is the third drive extent in data storage drive 0 404, a second drive extent 2|1, which is the second drive extent in data storage drive 2 408, a third drive extent 4|1, which is the second drive extent in data storage drive 4 412, a fourth drive extent 5|1, which is the second drive extent in data storage drive 5 414, and a fifth drive extent 7|1, which is the second drive extent in data storage drive 7 418.
Referring to
Element 220 is a representation of a surface of a single platter which may include concentric tracks. The surface 220 is illustrated as including a radius R 224, circumferences or circles denoted C1 and C2, and areas A1, A2 and A3. The radius R 224 may denote the radius of the surface. Area A3 corresponds to a physical portion of the surface including tracks located between the circumference or circle C1 and the outer edge of the surface 220. Area A1 corresponds to a physical portion of the surface including tracks located between the circumference or circle C2 and the center point P1 of the surface 220. Area A1 represents the innermost tracks or portion of the surface. Area A3 represents the outermost tracks or portion of the surface. Area A2 corresponds to the remaining physical portion of the surface (e.g., other than areas A1 and A3) as defined by the boundaries denoted by C1 and C2. Therefore, the entire physical surface capable of storing data may be partitioned into the three areas A1, A2 and A3. In this example, the radius R 224 may be divided into 10 segments as illustrated so that each segment corresponds to approximately 10% of the radius R 224 of the surface 220.
As discussed above, and as will be discussed further below, the techniques described herein form RAID extents from drive extents associated with corresponding areas of different data storage drives. It should be understood that cylinders of the data storage drive 210 may be split into a number of contiguous ranges such that the cylinders in a range have similar I/O characteristics. For example, the areas A1, A2 and A3 may be associated with cylinders having similar I/O characteristics. The I/O performance of an HDD depends on which cylinders are used. For example, the outer cylinders may provide a 1.5× advantage over the inner ones. Thus, RAID extents formed from the outer cylinders of drives will be able to fully utilize the I/O potential of the drives. It, therefore, makes sense to create RAID extents (and rotation and RAID groups) from drive extents situated on the same cylinders to have a uniform I/O rate. This approach, advantageously, facilitates effective utilization of the different I/O capabilities of different cylinder ranges.
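To make the area definition concrete, the sketch below buckets a drive's extents into areas A3 (outermost), A2 and A1 (innermost) by their position along the drive; mapping low extent indices to the outer cylinders is a common but drive-dependent assumption and is not guaranteed by any standard.

def area_for_extent(extent_index, extents_per_drive):
    # Classify a drive extent into area A3 (outer), A2 (middle) or A1 (inner),
    # assuming extent index 0 begins at the outermost cylinders of the drive.
    fraction = extent_index / extents_per_drive
    if fraction < 1 / 3:
        return "A3"   # outermost tracks, highest expected throughput
    if fraction < 2 / 3:
        return "A2"
    return "A1"       # innermost tracks, lowest expected throughput

print([area_for_extent(i, 9) for i in range(9)])
# -> ['A3', 'A3', 'A3', 'A2', 'A2', 'A2', 'A1', 'A1', 'A1']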
At step 410, the method comprises defining, for each of a plurality of data storage drives, one or more areas on a data storage drive such that each area on the data storage drive corresponds to an area associated with similar I/O characteristics on the other data storage drives. At step 420, the method comprises selecting two or more drive extents from corresponding areas on different data storage drives of the plurality of data storage drives. At step 430, the method comprises forming a RAID extent based on the selected drive extents.
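Putting the three steps together, a simplified end-to-end sketch is given below; the equal-thirds area boundaries, the drive layout, and the selection of the first available extent in an area are all assumptions made for illustration only.

def define_areas(extents_per_drive, num_areas=3):
    # Step 410: split every drive's extent range into num_areas equal areas, so
    # that area k on one drive corresponds to area k (similar I/O) on the others.
    per_area = extents_per_drive // num_areas
    return [range(k * per_area, (k + 1) * per_area) for k in range(num_areas)]

def select_drive_extents(drives, area, width):
    # Step 420: take one extent from the given area on `width` different drives.
    return [(drive, area.start) for drive in drives[:width]]

def form_raid_extent(drive_extents):
    # Step 430: the RAID extent is formed from the selected drive extents.
    return tuple(drive_extents)

areas = define_areas(300)                                    # 300 extents per drive, 3 areas
extents = select_drive_extents(list(range(6)), areas[0], 5)  # area 0 assumed to be outermost
print(form_raid_extent(extents))   # -> ((0, 0), (1, 0), (2, 0), (3, 0), (4, 0))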
In at least one embodiment, the method as described herein may also consist of the following features:
As mentioned above, suppose in an example the number of drives is 9 and the rotation group size is 5 for 4+1 RAID. In such a scenario, the first RAID group may be created from RAID extents using mostly the first 5 drives (namely the 1st-5th drives) and the second RAID group from RAID extents using mostly the last 5 drives (the 5th-9th drives), such that the RAID groups may be located on almost separate sets of drives (with minimal intersection). This means that I/O directed to the first RAID group may not be impacted (or may be impacted very little) by the I/O directed to the second RAID group, which makes the two RAID groups suitable for independent storage objects.
For example, suppose the following situation: 3 drives in a partnership group, 1+1 RAID and two RAID groups. The first RAID group consumes the high performance range and the second RAID group is situated on the lower performance range. The performance of each drive is 150 IOPS. Thus, the performance of the partnership group is 450 IOPS, whereas the performances of the respective RAID groups are 450 and 300 IOPS. If a slice requiring 200 IOPS is put on the first RAID group, then the IOPS are consumed from that particular group and from the corresponding partnership group and drives (two drives, as a rotation group is situated on two drives). After consumption, the drive partnership group (DPG) will have 250 IOPS remaining (e.g., 50, 50, and 150 IOPS, assuming the slice is situated on the first two drives) and the first RAID group will have 250 IOPS remaining. However, it will not be possible to put another slice requiring 200 IOPS on it. Even though that RAID group has the capacity (it has 250 IOPS now, and the partnership group still has its 250 IOPS to be consumed), there are no two drives able to handle 100 IOPS each, assuming that the slice's I/O is evenly distributed between the drives it is situated on.
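The constraint in this example can be checked mechanically; the following sketch reproduces the numbers above (three 150-IOPS drives, a 1+1 slice whose I/O is split evenly across two drives) and shows that a second 200-IOPS slice cannot be placed even though 250 IOPS remain in aggregate.

def place_slice(free_iops, drives, slice_iops):
    # Place a slice on a 1+1 rotation group spanning two drives, assuming its I/O
    # is split evenly between them; fail if either drive lacks the headroom.
    per_drive = slice_iops / 2
    if all(free_iops[d] >= per_drive for d in drives):
        for d in drives:
            free_iops[d] -= per_drive
        return True
    return False

free = {"A": 150, "B": 150, "C": 150}
print(place_slice(free, ("A", "B"), 200))   # True: drives A and B drop to 50 IOPS each
print(sum(free.values()))                   # 250 IOPS of aggregate headroom remain
print(place_slice(free, ("A", "B"), 200),   # False: no pair of drives
      place_slice(free, ("A", "C"), 200),   # can supply 100 IOPS each
      place_slice(free, ("B", "C"), 200))   # for the second slice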
It should be understood that there are dependencies between I/O accesses, as data belonging to the same object accessed by the same application will most likely be accessed together. If that data is situated close together, the accesses can be optimized. For example, if data accessed together is situated on adjacent cylinders, it can be fetched more easily and the HDD controller may be able to optimize the head movement. The rotation groups are mapped to HDD cylinders, so it is possible to put related slices physically close to each other by placing them in the corresponding rotation groups.
For example, in one embodiment, suppose a 3-drive DPG and 1+1 RAID scenario, in which a slice is situated on two drives because a rotation group includes a single RAID extent consisting of two DE's. If slice 1 is situated on drives A and B and slice 2 is not related to it, then it would make sense to put slice 2 into a rotation group situated on drives A and C or on drives B and C, as this will enable at least some level of parallel access to the two slices.
The issue here is which range to place a slice on: the more capable or the less capable range. The idea is that I/O temperature is used, and slices are placed into the more capable ranges starting with the hottest slice. If slices relate to archive content, then it makes sense to put them into the less capable ranges starting with the slice with the lowest temperature.
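A simple greedy policy matching this description is sketched below: non-archive slices are directed to the more capable range hottest-first, and archive slices to the less capable range coldest-first; the temperature values and the two-range split are assumptions made for the example.

def placement_order(slice_temperatures, archive_flags):
    # Non-archive slices target the more capable range, hottest first;
    # archive slices target the less capable range, coldest first.
    hot = sorted((s for s in slice_temperatures if not archive_flags.get(s)),
                 key=slice_temperatures.get, reverse=True)
    cold = sorted((s for s in slice_temperatures if archive_flags.get(s)),
                  key=slice_temperatures.get)
    return ([(s, "more capable range") for s in hot] +
            [(s, "less capable range") for s in cold])

temperatures = {"db-index": 900, "logs": 300, "backup-2023": 5, "backup-2024": 20}
archive = {"backup-2023": True, "backup-2024": True}
for slice_name, target in placement_order(temperatures, archive):
    print(slice_name, "->", target)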
As will be appreciated by one skilled in the art, aspects of the technologies disclosed herein may be embodied as a system, apparatus, method or computer program product.
Accordingly, each specific aspect of the present disclosure may be embodied using hardware, software (including firmware, resident software, micro-code, etc.) or a combination of software and hardware. Furthermore, aspects of the technologies disclosed herein may take the form of a computer program product embodied in one or more non-transitory computer readable storage medium(s) having computer readable program code stored thereon for causing a processor and/or computer system to carry out those aspects of the present disclosure.
Any combination of one or more computer readable storage medium(s) may be utilized. The computer readable storage medium may be, for example, without limitation, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any non-transitory tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The figures include block diagram and flowchart illustrations of methods, apparatus(s) and computer program products according to one or more embodiments of the invention. It will be understood that each block in such figures, and combinations of these blocks, can be implemented by computer program instructions. These computer program instructions may be executed on processing circuitry to form specialized hardware. These computer program instructions may further be loaded onto a computer or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the block or blocks.
Those skilled in the art should also readily appreciate that programs defining the functions of the present invention can be delivered to a computer in many forms, including without limitation: (a) information permanently stored on non-writable storage media (e.g. read only memory devices within a computer such as ROM or CD-ROM disks readable by a computer I/O attachment); or (b) information alterably stored on writable storage media (e.g. floppy disks and hard drives).
While the invention is described through the above exemplary embodiments, it will be understood by those of ordinary skill in the art that modification to and variation of the illustrated embodiments may be made without departing from the inventive concepts herein disclosed.