The present disclosure generally relates to the field of electronics. More particularly, some embodiments generally relate to an efficient Solid State Drive (SSD) data compression scheme and layout.
Generally, memory used to store data in a computing system can be volatile (to store volatile information) or non-volatile (to store persistent information). Volatile data structures stored in volatile memory are generally used for temporary or intermediate information that is required to support the functionality of a program during the run-time of the program. On the other hand, persistent data structures stored in non-volatile (or persistent) memory are available beyond the run-time of a program and can be reused. Moreover, new data is typically generated as volatile data first, before a user or programmer decides to make the data persistent. For example, programmers or users may cause mapping (i.e., instantiating) of volatile structures in volatile main memory that is directly accessible by a processor. Persistent data structures, on the other hand, are instantiated on non-volatile storage devices like rotating disks attached to Input/Output (I/O or IO) buses or non-volatile memory based devices like a solid state drive.
As computing capabilities are enhanced in processors, one concern is the speed at which memory may be accessed by a processor. For example, to process data, a processor may need to first fetch data from a memory. After completion of the data processing, the results may need to be stored in the memory. Therefore, the memory access speed can have a direct effect on overall system performance.
Another important consideration is power consumption. For example, in mobile computing devices that rely on battery power, it is very important to reduce power consumption to allow for the device to operate while mobile. Power consumption is also important for non-mobile computing devices as excess power consumption may increase costs (e.g., due to additional power usage, increased cooling requirements, etc.), shorten component life, limit locations at which a device may be used, etc.
Hard disk drives provide a relatively low-cost storage solution and are used in many computing devices to provide non-volatile storage. Disk drives, however, use a lot of power when compared with solid state drives since a hard disk drive needs to spin its disks at a relatively high speed and move disk heads relative to the spinning disks to read/write data. This physical movement generates heat and increases power consumption. Also, solid state drives are much faster at performing read and write operations when compared with hard drives. To this end, many computing segments are migrating towards solid state drives.
The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of embodiments may be performed using various means, such as integrated semiconductor circuits (“hardware”), computer-readable instructions organized into one or more programs (“software”), or some combination of hardware and software. For the purposes of this disclosure reference to “logic” shall mean either hardware, software, firmware, or some combination thereof.
Presently, SSDs can be costlier than more traditional storage devices (such as hard disk drives) on a per megabyte basis. To this end, compression may be utilized in an SSD to compress data so that more data fits on the same portion of an SSD, resulting in a lower implementation cost on a per megabyte basis. Additionally, compression can result in significant reduction of write traffic to the NAND. The reduction in write traffic also causes a corresponding reduction in the write amplification, which implies better performance, reliability, wear-leveling, and power consumption.
To this end, some embodiments relate to an efficient Solid State Drive (SSD) data compression scheme and layout. Such techniques are not limited to SSDs and may be applied to any type of non-volatile memory as further discussed below. More particularly, an embodiment provides an efficient data layout which takes both the compression data portion (or chunk) size and the indirection granularity into account and provides uniform data layouts for compressed and uncompressed blocks of data. Such techniques may also make recovery from a power loss (such as recovery provided by PLI (Power Loss Imminent) technology, which utilizes energy storing capacitors or batteries to complete in-progress commands and commit temporarily stored data to non-volatile storage) and firmware management easier. Another embodiment provides a novel padding scheme which enables super scalar data decompression, e.g., decreasing read data latencies. Yet another embodiment provides an automatic data by-pass capability for uncompressed data (e.g., organized as groups or chunks of data).
Furthermore, even though some embodiments are generally discussed with reference to Non-Volatile Memory (NVM), embodiments are not limited to a single type of NVM and non-volatile memory of any type or combinations of different NVM types (e.g., in a format such as a Solid State Drive (or SSD, e.g., including NAND and/or NOR type of memory cells) or other formats usable for storage such as a memory drive, flash drive, etc.) may be used. The storage media (whether used in SSD format or otherwise) can be any type of storage media including, for example, one or more of: nanowire memory, Ferro-electric Transistor Random Access Memory (FeTRAM), Magnetoresistive Random Access Memory (MRAM), flash memory, Spin Torque Transfer Random Access Memory (STTRAM), Resistive Random Access Memory, byte addressable 3-Dimensional Cross Point Memory, PCM (Phase Change Memory), etc. Also, any type of Random Access Memory (RAM) such as Dynamic RAM (DRAM), backed by a power reserve (such as a battery or capacitance) to retain the data, may be used. Hence, even volatile memory capable of retaining data during power failure or power disruption may be used for storage in various embodiments.
The techniques discussed herein may be provided in various computing systems (e.g., including a non-mobile computing device such as a desktop, workstation, server, rack system, etc. and a mobile computing device such as a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, smart watch, smart glasses, smart bracelet, etc.), including those discussed with reference to
In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as “cores 106,” or more generally as “core 106”), a processor cache 108 (which may be a shared cache or a private cache in various embodiments), and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as processor cache 108), buses or interconnections (such as a bus or interconnection 112), logic 120, memory controllers (such as those discussed with reference to
In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.
The processor cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the processor cache 108 may locally cache data stored in a memory 114 for faster access by the components of the processor 102. As shown in
As shown in
System 100 also includes Non-Volatile (NV) storage (or Non-Volatile Memory (NVM)) device such as an SSD 130 coupled to the interconnect 104 via SSD controller logic 125. Hence, logic 125 may control access by various components of system 100 to the SSD 130. Furthermore, even though logic 125 is shown to be directly coupled to the interconnection 104 in
Furthermore, logic 125 and/or SSD 130 may be coupled to one or more sensors (not shown) to receive information (e.g., in the form of one or more bits or signals) to indicate the status of or values detected by the one or more sensors. These sensor(s) may be provided proximate to components of system 100 (or other computing systems discussed herein such as those discussed with reference to other figures including 4-6, for example), including the cores 106, interconnections 104 or 112, components outside of the processor 102, SSD 130, SSD bus, SATA bus, logic 125, etc., to sense variations in various factors affecting power/thermal behavior of the system/platform, such as temperature, operating frequency, operating voltage, power consumption, and/or inter-core communication activity, etc.
As illustrated in
As mentioned above, some embodiments allow for both compressed and uncompressed data (e.g., groups/chunks of data) to be written with a uniform format. Use of a uniform format may reduce firmware complexity. In an embodiment, a compression token (which could be one or more bits) indicates whether a block has been compressed (or not). The compression token may be positioned in one or more bits which are usually used to convey the Logical Block Addressing/Address (LBA) information (which generally specifies the location or (e.g., linear) address of blocks of data stored on a storage device) in an uncompressed sector. As will be further discussed below, inclusion of the LBA and the compressed block size in the compression meta data may permit context replay and may allow for logic to automatically skip decompression on those blocks which were not compressed in the first place. For maximum compaction, one embodiment packs (e.g., all) variants of native 4 KB sector sizes (4096 B, 4104 B, and 4112 B) into a 512 B sector.
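As a rough sketch of the decision the read path can make under this uniform layout (the token value, field width, and function names here are illustrative assumptions, not taken from the disclosure):

```python
# Hypothetical marker value occupying the bits that would normally carry
# LBA information in an uncompressed sector; the real encoding is
# implementation specific.
COMPRESSION_TOKEN = 0xC0DE

def needs_decompression(meta_field: int) -> bool:
    """Return True when the block carries the compression token, i.e., it
    was stored compressed; otherwise decompression can be skipped."""
    return meta_field == COMPRESSION_TOKEN
```

Because the layout is uniform, the same check applies to every block, whether or not compression is enabled for a given SKU.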
Lossless data compression incurs no data loss: the original data can be recovered exactly by the decompression process. Lossless data compression can provide several indirect benefits in SSDs, such as a larger spare area (which can directly translate to faster performance), increased (e.g., NAND) bandwidth because less data is written, increased ECC (Error Correction Code) protection because the space needed for longer parity bits is practically free when compression succeeds, and so forth.
As an illustrative example of an embodiment of this scheme, 4 KsB sector sizes can be used, where KsB is defined as 4096 B, 4104 B, or 4112 B of data. In this scheme, the entire data payload from the host is compressed, which includes 4096 B/4104 B/4112 B of host data. For incorporating compression in the SSD, a “compression block” or “cblock” is defined, which can be a 4 KsB block of data or more. Each cblock is compressed individually/separately, and each cblock is treated independently of the previous and next cblocks.
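The cblock partitioning described above can be sketched as follows (the 4096 B variant is assumed for illustration; function and constant names are not from the disclosure):

```python
CBLOCK_SIZE = 4096  # one native 4 KsB variant; 4104 B and 4112 B are also possible

def split_into_cblocks(host_data: bytes, cblock_size: int = CBLOCK_SIZE):
    """Partition the host payload into cblocks; each cblock is then
    compressed individually, independent of its neighbors."""
    return [host_data[i:i + cblock_size]
            for i in range(0, len(host_data), cblock_size)]
```

Treating each cblock independently means a read of one cblock never requires decompressing its neighbors.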
Generally, SSDs employ logical to physical mapping tables, which are also called indirection tables or Flash Translation Layer (FTL) tables. Each indirection system has a minimum tracking granularity (usually 512 B, but it can be more or less) with which the data from the host is tracked inside the SSD. Due to indirection tracking complexities, it is also important to define an indirection tracking granularity (such as the nearest 512 B, 1 KB, or other sizes). A compressed block is padded to the nearest indirection granularity boundary for ease of tracking in the indirection system.
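The padding step above can be sketched as follows (a 512 B granularity is assumed for illustration):

```python
INDIRECTION_GRANULARITY = 512  # could also be 1 KB or another size

def pad_to_granularity(compressed: bytes,
                       granularity: int = INDIRECTION_GRANULARITY) -> bytes:
    """Pad a compressed block out to the nearest indirection granularity
    boundary so the indirection system can track it uniformly."""
    pad_len = (-len(compressed)) % granularity
    return compressed + b"\x00" * pad_len
```

A block that already ends on a boundary receives no padding.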
One of the main drawbacks of data compression is the added decompression latency associated with data reads. Generally, a compressed block can only be decompressed by a single decompression engine and one is limited to the maximum bandwidth of that decompression engine. By incorporating various offsets (as described below), some embodiments can provide for super-scalar decompression, which would allow more than one decompression engine to decompress a block of data. This could enhance decompression performance and help with read data latencies. One embodiment provides the following intelligent nearest 512 B padding scheme for use in super scalar data decompression:
(a) For N bytes to be padded out, rather than N 0's followed by the 2-byte length, an embodiment utilizes an intelligent padding scheme that can improve decompression speed/latency.
(b) For N>2, a 2-byte offset field can be stored, followed by a non-zero byte that indicates there are some offsets (e.g., the number of offsets being stored). In the case of a single offset, what is stored may be the offset of a byte in the compressed stream which corresponds to about 50% of the input uncompressed data. The compressor logic (e.g., logic 160) may preserve/save the output byte count (offset) when it has consumed the input byte that is (e.g., half-way) in the input data buffer. In general, this will not be 50% of the compressed stream, since the size of the compressed stream is highly dependent on where matching strings are found (and their length), and where literal bytes are encoded. The offset value that is saved should be the first valid symbol that can be decompressed to generate data at about the 50% point of the original uncompressed data. During decompression, if an offset is detected, a second parallel decompressor logic will operate to effectively double the performance. As an extension, an offset of the input byte may be stored (to which the symbol corresponds) so that the decompressed data can be directly written from the parallel unit in the right place. The above embodiment may be extended to more parallel decompressor logic, e.g., four parallel decompressors (storing four offsets in the compressed stream), and so on.
Moreover, in some embodiments, if N<3 then super scalar decompression may not be performed and the legacy approach of zero padding only may be applied instead. In that case, the last byte of the “super scalar decompression meta,” called “Offset Present/Type” below, would indicate that no super scalar decompression is to occur, and the remaining space beyond the “super scalar decompression meta” may be zero padded. For N≥3, the “Offset Present/Type” byte may indicate how many indexes are available.
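The pad-construction rule above might be sketched as follows for the single-offset case (the exact field layout and the "Offset Present/Type" encoding are assumptions for illustration; the disclosure does not fix them):

```python
def build_pad(pad_len: int, midpoint_offset: int) -> bytes:
    """Build the N-byte pad region between compressed data and meta.

    pad_len         -- N, the number of bytes to be padded out
    midpoint_offset -- compressed-stream offset of the first valid symbol
                       that decodes data at ~50% of the uncompressed input
    """
    if pad_len < 3:
        # Too small to hold the offset meta: legacy zero padding only.
        return b"\x00" * pad_len
    # 2-byte offset field, then a non-zero "Offset Present/Type" byte
    # (here: 1 stored offset) as the last byte; zero fill precedes them.
    meta = midpoint_offset.to_bytes(2, "little") + bytes([1])
    return b"\x00" * (pad_len - len(meta)) + meta
```

On a read, a non-zero final byte tells the controller a second decompressor can start at the stored offset; a zero-only pad falls back to single-engine decompression.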
Referring to
In some embodiments, there are two forms of the compression meta: (1) Common Meta: Common to all compressed data chunks/portions; and (2) Final Meta: For the case where the data is compressed to a single sector or the last chunk/portion in the compressed block. Sample fields within these two meta types are given below:
(1) Common Meta or CMeta:
(2) Final Meta or FMeta:
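A rough sketch of the two meta types, with fields drawn from the examples later in this disclosure (e.g., padding, size, offset, and token for the common meta; CRC and LBA for the final meta); field names and widths here are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class CMeta:
    """Common Meta: present for every compressed data chunk/portion."""
    compressed_size: int    # size of the compressed data
    compression_token: int  # marks the chunk as compressed
    offset: int             # super scalar decompression offset
    padding: int            # pad out to the indirection boundary

@dataclass
class FMeta:
    """Final Meta: single-sector case or last chunk of a compressed block."""
    lba: int             # embedded LBA, used for context replay
    compressed_crc: int  # CRC computed over the compressed data
```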
In one embodiment, for maximum compaction, the 512 B packing scheme as shown in
In one embodiment, logic 160 is an integrated compression engine in the SSD controller (such as shown in
Moreover, in some embodiments, depending upon how much space is available for the pad, the zero pad may be used if Z<3; otherwise, one or more offsets may be used for super scalar decompression.
Referring to
Referring to
Referring to
Several benefits of some embodiments may be as follows:
(a) Layout for Compressed and Uncompressed Data is Uniform: Uniform data layouts for compressed and uncompressed data may allow for a simpler firmware implementation. Compression can be turned off in some SKUs (Stock Keeping Units) and the same firmware can handle the uncompressed data easily;
(b) Super Scalar Data Decompression: By using the intelligent padding scheme explained above, it is possible to enable multiple decompression engines to work simultaneously on the compressed block, for lower read data latencies;
(c) Context Replay: The firmware (e.g., logic 160) may have the ability to read the compression meta-data and find out the LBA and how big each compressed chunk is for context replay purposes. This embedded LBA provides the information for context replay in case the context journal was not yet written when the drive shut down or in cases when there is an ECC fatal in the context journal of any band. The firmware reads each page and extracts the LBA and size information and updates its logical to physical table. This mechanism also enables rebuilding of the entire context from scratch should the need to do so arise; and/or
(d) Automatic Data By-Pass: During a compression operation, it is possible that compressed and uncompressed chunks are contiguously written to the media. Whether a chunk is compressed or uncompressed is indicated through the compression token/indicia (e.g., the absence of the compression token indicating that the data is written uncompressed). The decompression engine has the capability to automatically detect uncompressed chunks and move them contiguously with the previously decompressed data. This is referred to as automatic data by-pass mode. This allows for efficient data decompression on reads because uncompressed chunks are automatically sent to the host without any decompression. Since this can be automated in hardware, firmware (e.g., logic 160) intervention is minimized; hence, decreasing the latency of the system.
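The by-pass behavior in item (d) can be sketched as a read-side loop (the chunk representation and the callable are assumptions for illustration; zlib merely stands in for the drive's decompression engine):

```python
import zlib

def read_chunks(chunks, decompress=zlib.decompress):
    """Reassemble host data from on-media chunks, automatically
    by-passing decompression for chunks stored uncompressed."""
    out = bytearray()
    for has_token, payload in chunks:
        if has_token:
            out += decompress(payload)  # compressed chunk: run the engine
        else:
            out += payload              # by-pass: pass straight through
    return bytes(out)
```

In hardware, the same decision is made per chunk from the token alone, so no firmware intervention is needed on the fast path.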
Moreover, compression, as a standalone feature, generally just reduces the data size of the data being written to the SSD and hence lowers the cost of the SSD through lowered $/GB. It also provides other indirect benefits: (1) endurance of the SSD devices is improved because, by writing less data, more data can be written over the lifetime of the device; it is to be noted that each SSD device can operate for a prescribed number of program/erase cycles reliably; (2) extra spare area is created which can be used in an SSD as the “shuffle-space” for improving the write IOPS of the device; (3) power consumption is reduced because of the lower device I/O power utilization; and/or (4) write speed of the SSD is improved because less data has to be written to the devices and bus bandwidth is improved.
In an embodiment, one or more of the processors 402 may be the same or similar to the processors 102 of
A chipset 406 may also communicate with the interconnection network 404. The chipset 406 may include a graphics and memory control hub (GMCH) 408. The GMCH 408 may include a memory controller 410 (which may be the same or similar to the memory controller 120 of
The GMCH 408 may also include a graphics interface 414 that communicates with a graphics accelerator 416. In one embodiment, the graphics interface 414 may communicate with the graphics accelerator 416 via an accelerated graphics port (AGP) or Peripheral Component Interconnect (PCI) (or PCI express (PCIe) interface). In an embodiment, a display 417 (such as a flat panel display, touch screen, etc.) may communicate with the graphics interface 414 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display 417.
A hub interface 418 may allow the GMCH 408 and an input/output control hub (ICH) 420 to communicate. The ICH 420 may provide an interface to I/O devices that communicate with the computing system 400. The ICH 420 may communicate with a bus 422 through a peripheral bridge (or controller) 424, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 424 may provide a data path between the CPU 402 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 420, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 420 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.
The bus 422 may communicate with an audio device 426, one or more disk drive(s) 428, and a network interface device 430 (which is in communication with the computer network 403, e.g., via a wired or wireless interface). As shown, the network interface device 430 may be coupled to an antenna 431 to wirelessly (e.g., via an Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface (including IEEE 802.11a/b/g/n/ac, etc.), cellular interface, 3G, 4G, LPE, etc.) communicate with the network 403. Other devices may communicate via the bus 422. Also, various components (such as the network interface device 430) may communicate with the GMCH 408 in some embodiments. In addition, the processor 402 and the GMCH 408 may be combined to form a single chip. Furthermore, the graphics accelerator 416 may be included within the GMCH 408 in other embodiments.
Furthermore, the computing system 400 may include volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 428), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).
As illustrated in
In an embodiment, the processors 502 and 504 may be one of the processors 402 discussed with reference to
In one embodiment, one or more of the cores 106 and/or processor cache 108 of
The chipset 520 may communicate with a bus 540 using a PtP interface circuit 541. The bus 540 may have one or more devices that communicate with it, such as a bus bridge 542 and I/O devices 543. Via a bus 544, the bus bridge 542 may communicate with other devices such as a keyboard/mouse 545, communication devices 546 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 403, as discussed with reference to network interface device 430 for example, including via antenna 431), audio I/O device, and/or a data storage device 548. The data storage device 548 may store code 549 that may be executed by the processors 502 and/or 504.
In some embodiments, one or more of the components discussed herein can be embodied as a System On Chip (SOC) device.
As illustrated in
The I/O interface 640 may be coupled to one or more I/O devices 670, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 670 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like. Furthermore, SOC package 602 may include/integrate the logic 125/160 in an embodiment. Alternatively, the logic 125/160 may be provided outside of the SOC package 602 (i.e., as a discrete logic).
The following examples pertain to further embodiments. Example 1 includes an apparatus comprising: logic, coupled to non-volatile memory, to receive data and compress the data to generate compressed data prior to storage of the compressed data in the non-volatile memory, wherein the compressed data is to comprise a compressed version of the data, size of the compressed data, common meta information, and final meta information. Example 2 includes the apparatus of example 1, wherein the common meta information is to comprise one or more of: one or more padding bits, size of the compressed data, an offset, and a compression token. Example 3 includes the apparatus of example 2, wherein the compression token is to comprise one or more bits. Example 4 includes the apparatus of example 2, wherein the compression token is to be stored in a same space as Logical Block Addressing (LBA) information. Example 5 includes the apparatus of example 2, wherein the compression token is to indicate whether a corresponding portion of data is compressed. Example 6 includes the apparatus of example 2, wherein absence of the compression token is to indicate that the corresponding portion of the data is uncompressed. Example 7 includes the apparatus of example 2, wherein decompression of the compressed data is to be performed at least partially based on a value of the compression token or absence of the compression token. Example 8 includes the apparatus of example 1, wherein decompression of the compressed data is to be performed by a plurality of decompression logic. Example 9 includes the apparatus of example 1, wherein the final meta information is to comprise one or more of: a compressed Cyclical Redundancy Code (CRC) and LBA information. Example 10 includes the apparatus of example 1, wherein the logic is to access the common meta information or the final meta information to perform context replay or context rebuilding.
Example 11 includes the apparatus of example 1, wherein the compressed data and the received data are to have layouts in accordance with uniform formats. Example 12 includes the apparatus of example 1, wherein the logic is to compress the received data in accordance with one or more lossless compression algorithms. Example 13 includes the apparatus of example 1, wherein the compressed data is to be encrypted after compression or decrypted before decompression. Example 14 includes the apparatus of example 13, wherein the compressed data is to be encrypted or decrypted in accordance with Advanced Encryption Standard. Example 15 includes the apparatus of example 1, wherein the one or more padding bits are to pad the compressed data to a nearest indirection granularity boundary. Example 16 includes the apparatus of example 1, wherein a memory controller is to comprise the logic. Example 17 includes the apparatus of example 1, wherein a solid state drive is to comprise the logic. Example 18 includes the apparatus of example 1, wherein the non-volatile memory is to comprise one or more of: nanowire memory, Ferro-electric Transistor Random Access Memory (FeTRAM), Magnetoresistive Random Access Memory (MRAM), flash memory, Spin Torque Transfer Random Access Memory (STTRAM), Resistive Random Access Memory, byte addressable 3-Dimensional Cross Point Memory, PCM (Phase Change Memory), and volatile memory backed by a power reserve to retain data during power failure or power disruption. Example 19 includes the apparatus of example 1, further comprising a network interface to communicate the data with a host.
Example 20 includes a method comprising: receiving data and compressing the data to generate compressed data prior to storage of the compressed data in non-volatile memory, wherein the compressed data comprises a compressed version of the data, size of the compressed data, common meta information, and final meta information. Example 21 includes the method of example 20, wherein the common meta information comprises one or more of: one or more padding bits, size of the compressed data, an offset, and a compression token, and the final meta information comprises one or more of: a compressed Cyclical Redundancy Code (CRC) and LBA information. Example 22 includes the method of example 20, further comprising decompressing the compressed data by a plurality of decompression logic. Example 23 includes the method of example 20, further comprising accessing the common meta information or the final meta information to perform context replay or context rebuilding. Example 24 includes a computer-readable medium comprising one or more instructions that when executed on one or more processors configure the one or more processors to perform one or more operations to: receive data and compress the data to generate compressed data prior to storage of the compressed data in non-volatile memory, wherein the compressed data comprises a compressed version of the data, size of the compressed data, common meta information, and final meta information. Example 25 includes the computer-readable medium of example 24, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to cause decompressing of the compressed data by a plurality of decompression logic.
Example 26 includes the computer-readable medium of example 24, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to cause access to the common meta information or the final meta information to perform context replay or context rebuilding.
Example 27 includes a computing system comprising: a host comprising a processor having one or more processor cores; non-volatile memory; and logic, coupled to the non-volatile memory, to receive uncompressed data from the host and compress the uncompressed data to generate compressed data prior to storage of the compressed data in the non-volatile memory, wherein the compressed data is to comprise a compressed version of the uncompressed data, size of the compressed data, common meta information, and final meta information. Example 28 includes the system of example 27, wherein the common meta information is to comprise one or more of: one or more padding bits, size of the compressed data, an offset, and a compression token. Example 29 includes the system of example 28, wherein the compression token is to comprise one or more bits. Example 30 includes the system of example 28, wherein the compression token is to be stored in a same space as Logical Block Addressing (LBA) information. Example 31 includes the system of example 28, wherein the compression token is to indicate whether a corresponding portion of data is compressed. Example 32 includes the system of example 28, wherein absence of the compression token is to indicate that the corresponding portion of the data is uncompressed. Example 33 includes the system of example 28, wherein decompression of the compressed data is to be performed at least partially based on a value of the compression token or absence of the compression token. Example 34 includes the system of example 27, wherein decompression of the compressed data is to be performed by a plurality of decompression logic.
Example 35 includes an apparatus comprising means to perform a method as set forth in any preceding example.
Example 36 comprises machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as set forth in any preceding example.
In various embodiments, the operations discussed herein, e.g., with reference to
Additionally, such tangible computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals (such as in a carrier wave or other propagation medium) via a communication link (e.g., a bus, a modem, or a network connection).
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase “in one embodiment” in various places in the specification may or may not be all referring to the same embodiment.
Also, in the description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. In some embodiments, “connected” may be used to indicate that two or more elements are in direct physical or electrical contact with each other. “Coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.
Thus, although embodiments have been described in language specific to structural features, numerical values, and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features, numerical values, or acts described. Rather, the specific features, numerical values, and acts are disclosed as sample forms of implementing the claimed subject matter.