The present disclosure generally relates to the field of electronics. More particularly, some embodiments generally relate to an aggressive write-back cache cleaning policy optimized for Non-Volatile Memory (NVM).
In computing, a “cache” generally refers to a hardware or software component that stores data for faster future accesses. A “cache hit” occurs when the requested data is found in the cache, while a “cache miss” occurs when the requested data is absent from the cache.
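The hit/miss behavior described above can be sketched as follows (a minimal illustration with hypothetical names; a dict stands in for the slower backing store):

```python
# Minimal illustration of cache hit/miss: a dict-backed cache in front of a
# slower backing store. Names are illustrative, not from the disclosure.
class SimpleCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for an HDD/SSD
        self.lines = {}
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.lines:          # cache hit: data found in the cache
            self.hits += 1
            return self.lines[address]
        self.misses += 1                   # cache miss: fall back to backing store
        value = self.backing_store[address]
        self.lines[address] = value        # populate the cache for future accesses
        return value

store = {0: "a", 1: "b"}
cache = SimpleCache(store)
cache.read(0)   # miss: fetched from the backing store
cache.read(0)   # hit: served from the cache
```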
Various cache policies may be used, e.g., to trade off between speed and data correctness. One such policy that provides faster speeds but may pose data correctness issues is the write-back cache policy. Write-back (sometimes also called write-behind) refers to a policy where the initial write is done only to the cache. The write to the backing store is postponed until the cache blocks containing the data are about to be modified or replaced by new content. Hence, the write-back cache policy can be more complex and time-consuming to implement, since future modifications or replacements need to be tracked to maintain data correctness, and it may sometimes result in two memory operations (one to write the replaced/modified cached data to the backing store and another to retrieve the requested data).
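A write-back policy of this kind might be sketched as below; the class, the dirty-set bookkeeping, and the eviction choice are illustrative assumptions, not the disclosed implementation:

```python
# Hedged sketch of a write-back policy: writes land only in the cache and are
# marked dirty; the backing store is updated lazily, on eviction.
class WriteBackCache:
    def __init__(self, backing_store, capacity):
        self.backing_store = backing_store
        self.capacity = capacity
        self.lines = {}        # address -> value
        self.dirty = set()     # addresses not yet written back

    def write(self, address, value):
        if address not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        self.lines[address] = value
        self.dirty.add(address)            # initial write goes only to the cache

    def _evict(self):
        victim = next(iter(self.lines))    # placeholder victim choice
        if victim in self.dirty:           # dirty line: write back before dropping
            self.backing_store[victim] = self.lines[victim]
            self.dirty.discard(victim)
        del self.lines[victim]

store = {}
cache = WriteBackCache(store, capacity=1)
cache.write(10, "x")      # backing store not yet updated
cache.write(20, "y")      # evicts line 10, writing it back first
```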
The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of embodiments may be performed using various means, such as integrated semiconductor circuits (“hardware”), computer-readable instructions organized into one or more programs (“software”), or some combination of hardware and software. For the purposes of this disclosure reference to “logic” shall mean either hardware, software, firmware, or some combination thereof.
Due to costs and/or space limitations, cache sizes are generally smaller than other types of memory such as backing stores. Hence, maintaining only valid data in a cache can be helpful and, to do so, cache cleaning policies may be used. As discussed herein, a cache cleaning policy generally refers to a policy that ensures that dirty data in a cache is written to primary (or backing) storage before it is removed from the cache. Moreover, “dirty” data generally refers to unsynchronized (or “unsynced”) data which is stored in a cache medium and not yet copied back (or written back) to a backing store/storage (such as one or more of an SSD (Solid State Drive), HDD (Hard Disk Drive), Hybrid Drive, etc.). However, various issues remain with some current cache cleaning policies. For example, some solutions may focus on cleaning Least Recently Used (LRU) blocks first, which can lead to one seek (or random access) operation per dirty cache line cleaned in the average case. Thus, CAS (Cache Acceleration Software), which may be used in some implementations, may be unable to clean dirty data in the background efficiently. As discussed herein, operations directed at a cache may be directed at a portion of the cache called a “cache line,” which may be about 4 kilobytes (KB), 64 KB, 512 KB, 1024 KB, etc. in various embodiments.
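The cost of cleaning in LRU order versus LBA order can be illustrated with a simplified seek-counting model (an assumption for illustration only; real drive behavior is more complex):

```python
# Simplified model: any flush that does not continue a sequential run of LBAs
# is counted as one disk seek. LRU-ordered cleaning tends to touch scattered
# LBAs, while LBA-ordered cleaning turns contiguous lines into one run.
def count_seeks(flush_order):
    """Count non-sequential transitions in a flush order (each needs a seek)."""
    seeks = 1  # the first access always seeks
    for prev, cur in zip(flush_order, flush_order[1:]):
        if cur != prev + 1:
            seeks += 1
    return seeks

dirty = [40, 7, 91, 8, 6]                   # dirty-line LBAs, in LRU order
lru_seeks = count_seeks(dirty)              # every line is a random access
sorted_seeks = count_seeks(sorted(dirty))   # LBAs 6,7,8 become one sequential run
```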
Some embodiments relate to an aggressive write-back cache cleaning policy optimized for Non-Volatile Memory (NVM). In an embodiment, dirty cache lines are sorted by their LBA (Logic Block Address) on backend storage and an attempt is made to first flush (or remove) the largest sequential portions (including one or more cache lines). As discussed herein, a “backing” or “backend” storage/store generally refers to any NVM device or NVM medium (such as those discussed herein) that is capable of storing data on a non-volatile or permanent basis. As discussed herein, NVM medium may include one or more of HDD, SSD, Hybrid Drive, etc. For example, the backing storage system may include near and/or far memory such as those discussed with reference to
In one embodiment, the aggressive write-back cache cleaning policy is aimed at reducing the amount of dirty data at the fastest rate possible by optimizing it for sequential Hard Disk Drive (HDD) storage (or, more generally, NVM) writing operations. When compared to some other solutions (e.g., an ALRU or Approximately Least Recently Used cleaning policy), the aggressive write-back cache cleaning policy may lead to a significantly lower number of seek operations, as well as much less time to reduce the volume of dirty data. This may translate to up to 80× higher transfer rates for cache cleaning (leading to improved IO (Input/Output) performance) and a reduced vulnerability window for data loss.
Additionally, usage of such an aggressive cleaning policy may decrease the dirty data fraction, thus allowing for more efficient eviction operations. Moreover, with less dirty data at any given time, it is possible to reduce the data loss vulnerability window (for example, where a cache device is configured as a mirror, transitioning to a write-through policy when the mirror is degraded will be quicker with less dirty data). A “write-through” policy generally refers to a policy that services a write operation by synchronously/simultaneously writing to both the cache and the backing store. Also, a “mirror” generally refers to a Redundant Array of Independent Disks (RAID), where a RAID-1 volume provides a data protection scheme by mirroring/storing the same data on two separate storage devices, e.g., mirrored SSDs refers to a pair of SSDs containing the same data. This provides data protection in case one of the SSDs malfunctions or breaks.
Furthermore, one or more embodiments discussed herein may be applied to any type of memory, including Volatile Memory (VM) and/or Non-Volatile Memory (NVM). Also, embodiments are not limited to a single type of NVM; non-volatile memory of any type, combinations of different NVM types (e.g., including NAND and/or NOR types of memory cells), or other formats usable for memory may be used. The memory media (whether used in DIMM (Dual Inline Memory Module) format, SSD (Solid State Drive), or otherwise) can be any type of memory media including, for example, one or more of: nanowire memory, Ferro-electric Transistor Random Access Memory (FeTRAM), Magnetoresistive Random Access Memory (MRAM), multi-threshold level NAND flash memory, NOR flash memory, Spin Torque Transfer Random Access Memory (STTRAM), Resistive Random Access Memory, byte addressable 3-Dimensional Cross Point Memory, single or multi-level PCM (Phase Change Memory), memory devices that use chalcogenide phase change material (e.g., chalcogenide glass), or “write in place” non-volatile memory. Also, any type of Random Access Memory (RAM), such as Dynamic RAM (DRAM), backed by a power reserve (such as a battery or capacitance) to retain the data, may provide an NV memory solution. Volatile memory can include Synchronous DRAM (SDRAM). Hence, even volatile memory capable of retaining data during power failure or power disruption(s) may be used for memory in various embodiments.
The techniques discussed herein may be provided in various computing systems (e.g., including a non-mobile computing device such as a desktop, workstation, server, rack system, etc. and a mobile computing device such as a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, smart watch, smart glasses, smart bracelet, etc.), including those discussed with reference to
In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as “cores 106,” or more generally as “core 106”), a processor cache 108 (which may be a shared cache or a private cache in various embodiments), and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as processor cache 108), buses or interconnections (such as a bus or interconnection 112), logic 120, memory controllers (such as those discussed with reference to
In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.
The processor cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the processor cache 108 may locally cache data stored in a memory 114 for faster access by the components of the processor 102. As shown in
As shown in
System 100 also includes NV memory 130 (or Non-Volatile Memory (NVM), e.g., compliant with NVMe (NVM express)) coupled to the interconnect 104 via NV controller logic 125. Hence, logic 125 may control access by various components of system 100 to the NVM 130. Furthermore, even though logic 125 is shown to be directly coupled to the interconnection 104 in
In an embodiment, the far memory is presented as “main memory” to the host Operating System (OS), while the near memory is a cache for the far memory that is transparent to the OS, thus making the embodiments described below appear the same as general main memory solutions. The management of the two-level memory may be done by a combination of logic and modules executed via the host central processing unit (CPU) 102 (which is interchangeably referred to herein as “processor”). Near memory may be coupled to the host system CPU via one or more high bandwidth, low latency links, buses, or interconnects for efficient processing. Far memory may be coupled to the CPU via one or more low bandwidth, high latency links, buses, or interconnects (as compared to that of the near memory).
Referring to
In an embodiment, near memory 210 is managed by Near Memory Controller (NMC) 204, while far memory 208 is managed by Far Memory Controller (FMC) 206. FMC 206 reports far memory 208 to the system operating system (OS) as main memory (i.e., the system OS recognizes the size of far memory 208 as the size of system main memory 200). The system OS and system applications are “unaware” of the existence of near memory 210 as it is a “transparent” cache of far memory 208.
CPU 102 further comprises 2LM engine module/logic 202. The “2LM engine” is a logical construct that may comprise hardware and/or micro-code extensions to support two-level main memory 200. For example, 2LM engine 202 may maintain a full tag table that tracks the status of all architecturally visible elements of far memory 208. For example, when CPU 102 attempts to access a specific data segment in main memory 200, 2LM engine 202 determines whether the data segment is included in near memory 210; if it is not, 2LM engine 202 fetches the data segment from far memory 208 and subsequently writes the data segment to near memory 210 (similar to a cache miss). It is to be understood that, because near memory 210 acts as a “cache” of far memory 208, 2LM engine 202 may further execute data prefetching or similar cache efficiency processes.
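The near-memory lookup and fill behavior described for the 2LM engine can be sketched as follows (hypothetical names; a minimal software model, not the disclosed hardware/micro-code implementation):

```python
# Illustrative model of the 2LM read path: check near memory first; on a miss,
# fetch the segment from far memory and install it in near memory (a cache fill).
class TwoLevelMemory:
    def __init__(self, far_memory):
        self.far = far_memory          # e.g., NVM presented to the OS as main memory
        self.near = {}                 # DRAM cache, transparent to the OS

    def read(self, segment):
        if segment in self.near:       # near-memory hit
            return self.near[segment]
        data = self.far[segment]       # miss: fetch the segment from far memory
        self.near[segment] = data      # write it into near memory for future reads
        return data

far = {"seg0": b"payload"}
mem = TwoLevelMemory(far)
mem.read("seg0")   # miss, then cached in near memory
mem.read("seg0")   # served from near memory
```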
Further, 2LM engine 202 may manage other aspects of far memory 208. For example, in embodiments where far memory 208 comprises nonvolatile memory (e.g., NVM 130), it is understood that nonvolatile memory such as flash is subject to degradation of memory segments due to significant reads/writes. Thus, 2LM engine 202 may execute functions including wear-leveling, bad-block avoidance, and the like in a manner transparent to system software. For example, executing wear-leveling logic may include selecting segments from a free pool of clean unmapped segments in far memory 208 that have a relatively low erase cycle count.
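The wear-leveling selection step might look like the following sketch, assuming each free segment carries an erase-cycle count (the data shapes and names are illustrative):

```python
# Hedged sketch of wear-leveling selection: from a free pool of clean, unmapped
# segments, choose one with a relatively low erase-cycle count.
def select_segment(free_pool):
    """free_pool: list of (segment_id, erase_count) tuples; returns a segment id."""
    return min(free_pool, key=lambda seg: seg[1])[0]

pool = [("seg_a", 120), ("seg_b", 7), ("seg_c", 54)]
selected = select_segment(pool)   # picks the least-worn free segment
```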
In some embodiments, near memory 210 may be smaller in size than far memory 208, although the exact ratio may vary based on, for example, intended system use. In such embodiments, it is to be understood that because far memory 208 may comprise denser and/or cheaper nonvolatile memory, the size of the main memory 200 may be increased cheaply and efficiently and independent of the amount of DRAM (i.e., near memory 210) in the system.
In one embodiment, far memory 208 stores data in compressed form and near memory 210 includes the corresponding uncompressed version. Thus, when near memory 210 requests content of far memory 208 (which could be a non-volatile DIMM in an embodiment), FMC 206 retrieves the content and returns it in fixed payload sizes tailored to match the compression algorithm in use (e.g., a 256B transfer).
As mentioned above, some approaches to write-back cache cleaning may involve an ALRU policy, which is aimed at cleaning the coldest (or least recently used) data first. In its basic form, the ALRU algorithm may select the least recently used cache lines and flush/remove them to backing storage (thus allowing a high rate of cache hits). Unfortunately, this algorithm fails to take into account the performance limitations of hard drives. As least recently used lines may be randomly distributed on backing storage, cleaning each of them requires the hard drive to perform excessive seek operations. When the working set is larger than the cache device, cleaning of the write-back cache becomes a key performance limiting factor (in worst cases causing cached volumes to be slower than ones that lack any caching mechanism). By contrast, the aggressive cleaning policy discussed herein acknowledges the fundamental limitations of hard drives and aims instead to overcome them with smart write-back cleaning, leading to vastly improved performance in such cache configurations.
Referring to
At operation 230, the retrieved dirty cache lines 228 are sorted in (e.g., ascending) order of backend storage LBAs. Operation 232 then optionally (e.g., for SAM type backing storage devices) groups the sorted cache lines by contiguous LBA ranges. Operation 234 optionally (e.g., if operation 232 is performed) sorts the ranges by size (e.g., in descending order). Operation 236 then flushes/removes/cleans all or some of the dirty cache lines from the generated list (per operations 230 and/or 232-234). Method 220 then returns to operation 221.
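Operations 230-234 can be sketched as follows, under the simplifying assumption that each dirty cache line is identified by its backend-storage LBA and spans one block (function and variable names are illustrative):

```python
# Hedged sketch of building the flush list: sort dirty lines by LBA (operation
# 230), group contiguous LBAs into ranges (operation 232), and order the ranges
# largest-first (operation 234) so the biggest sequential writes are flushed first.
def build_flush_list(dirty_lbas):
    lbas = sorted(dirty_lbas)                  # operation 230: sort by LBA
    ranges = []
    start = prev = lbas[0]
    for lba in lbas[1:]:                       # operation 232: group contiguous LBAs
        if lba == prev + 1:
            prev = lba
        else:
            ranges.append((start, prev))
            start = prev = lba
    ranges.append((start, prev))
    # operation 234: largest sequential ranges first
    ranges.sort(key=lambda r: r[1] - r[0] + 1, reverse=True)
    return ranges

# Example: dirty lines at LBAs 5, 6, 7, and 100 yield the 3-block range first.
flush_order = build_flush_list([100, 6, 5, 7])
```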
Furthermore, the ability to keep the dirty cache rate at consistently low levels provides yet another advantage: data integrity. More particularly, it can be common for write-back cache deployments to use a RAID-1 (Redundant Array of Inexpensive Drives level 1) style mirror of caching devices. However, when one replica in the mirror fails, it is a reasonable policy to immediately switch to write-through mode while cleaning all dirty data (so that a failure of the second replica does not lead to data loss). The time between the failure of the first replica and the cleaning of all dirty data is a vulnerability window in which a failure of the second replica means losing user data (which may then have to be recovered, potentially outdated or old, from a backup source such as tapes). With less dirty data present, this switching operation may be shorter, whereas with the conventional ALRU policy it is common for dirty data to reach close to 100% and never clean itself.
Moreover, one or more embodiments may be used to enhance both Windows® and Linux® by: (1) improving performance in write-back mode from 2× up to 80× (with high spatial locality, when the working set is larger than the caching device); and/or (2) improving data protection by reducing the vulnerability window after a single replica failure (when a mirrored pool of SSDs is used as a cache). For example, one or more embodiments may be applied to B-Cache™, FlashCache™, and/or DM-Cache™ in a Linux kernel.
Various types of computer networks 303 may be utilized including wired (e.g., Ethernet, Gigabit, Fiber, etc.) or wireless networks (such as cellular, including 3G (Third-Generation Cell-Phone Technology or 3rd Generation Wireless Format (UWCC)), 4G (Fourth-Generation Cell-Phone Technology), 4G Advanced, Low Power Embedded (LPE), Long Term Evolution (LTE), LTE advanced, etc.). Moreover, the processors 302 may have a single or multiple core design. The processors 302 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 302 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors.
In an embodiment, one or more of the processors 302 may be the same or similar to the processors 102 of
A chipset 306 may also communicate with the interconnection network 304. The chipset 306 may include a graphics and memory control hub (GMCH) 308. The GMCH 308 may include a memory controller 310 (which may be the same or similar to the memory controller 120 of
The GMCH 308 may also include a graphics interface 314 that communicates with a graphics accelerator 316. In one embodiment, the graphics interface 314 may communicate with the graphics accelerator 316 via an accelerated graphics port (AGP) or Peripheral Component Interconnect (PCI) (or PCI express (PCIe) interface). In an embodiment, a display 317 (such as a flat panel display, touch screen, etc.) may communicate with the graphics interface 314 through, for example, a signal converter that translates a digital representation of an image stored in a memory device such as video memory or system memory into display signals that are interpreted and displayed by the display. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display 317.
A hub interface 318 may allow the GMCH 308 and an input/output control hub (ICH) 320 to communicate. The ICH 320 may provide an interface to I/O devices that communicate with the computing system 300. The ICH 320 may communicate with a bus 322 through a peripheral bridge (or controller) 324, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 324 may provide a data path between the CPU 302 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 320, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 320 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.
The bus 322 may communicate with an audio device 326, one or more disk drive(s) 328, and a network interface device 330 (which is in communication with the computer network 303, e.g., via a wired or wireless interface). As shown, the network interface device 330 may be coupled to an antenna 331 to wirelessly (e.g., via an Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface (including IEEE 802.11a/b/g/n/ac, etc.), cellular interface, 3G, 4G, LPE, etc.) communicate with the network 303. Other devices may communicate via the bus 322. Also, various components (such as the network interface device 330) may communicate with the GMCH 308 in some embodiments. In addition, the processor 302 and the GMCH 308 may be combined to form a single chip. Furthermore, the graphics accelerator 316 may be included within the GMCH 308 in other embodiments.
Furthermore, the computing system 300 may include volatile and/or nonvolatile memory. For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 328), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).
As illustrated in
In an embodiment, the processors 402 and 404 may be one of the processors 302 discussed with reference to
In one embodiment, one or more of the cores 106 and/or processor cache 108 of
The chipset 420 may communicate with a bus 440 using a PtP interface circuit 441. The bus 440 may have one or more devices that communicate with it, such as a bus bridge 442 and I/O devices 443. Via a bus 444, the bus bridge 442 may communicate with other devices such as a keyboard/mouse 445, communication devices 446 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 303, as discussed with reference to network interface device 330 for example, including via antenna 331), audio I/O device, and/or a data storage device 448. The data storage device 448 may store code 449 that may be executed by the processors 402 and/or 404.
In some embodiments, one or more of the components discussed herein can be embodied as a System On Chip (SOC) device.
As illustrated in
The I/O interface 540 may be coupled to one or more I/O devices 570, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 570 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like. Furthermore, SOC package 502 may include/integrate items 125, 130, 160, and/or 162 in an embodiment. Alternatively, items 125, 130, 160, and/or 162 may be provided outside of the SOC package 502 (i.e., as a discrete logic).
Embodiments described herein can be powered by a battery, wireless charging, a renewable energy source (e.g., solar power or motion-based charging), or when connected to a charging port or wall outlet.
The following examples pertain to further embodiments. Example 1 includes an apparatus comprising: logic to cause removal of one or more cache lines from a cache based at least in part on a list of cache lines in the cache, wherein the cache is to store data to be stored in a backing storage system, wherein the list of cache lines is to comprise a sorted list of Logic Block Addresses (LBAs) on the backing storage system that correspond to the one or more cache lines. Example 2 includes the apparatus of example 1, comprising logic to group the one or more cache lines into one or more LBA ranges. Example 3 includes the apparatus of example 2, comprising logic to sort the one or more LBA ranges by a size of the one or more LBA ranges. Example 4 includes the apparatus of example 1, wherein the logic is to cause removal of the one or more cache lines in response to an indication that the one or more cache lines are to be modified or replaced and in response to comparison of a number of the one or more cache lines and a threshold value. Example 5 includes the apparatus of example 1, comprising logic to determine whether to cause removal of the one or more cache lines based at least in part on: the list of the cache lines or an Approximately Least Recently Used (ALRU) cleaning policy. Example 6 includes the apparatus of example 1, wherein the one or more cache lines are to be written to the cache in accordance with a write-back policy. Example 7 includes the apparatus of example 1, wherein the cache is to comprise at least one Solid State Drive (SSD). Example 8 includes the apparatus of example 1, wherein the backing storage system is to comprise at least one Synchronous Access Memory (SAM) device. Example 9 includes the apparatus of example 1, wherein the one or more cache lines are to store data before that data is to be written to the backing storage system. 
Example 10 includes the apparatus of example 1, wherein the backing storage system is to comprise a plurality of storage nodes. Example 11 includes the apparatus of example 10, wherein the plurality of storage nodes is to comprise a near storage node and/or a far storage node. Example 12 includes the apparatus of example 10, wherein the plurality of storage nodes is to communicate via a network. Example 13 includes the apparatus of example 12, wherein the network is to comprise a wired and/or a wireless network. Example 14 includes the apparatus of example 10, wherein each of the plurality of storage nodes is to comprise one or more of: a hard disk drive, a solid state drive, and a hybrid drive. Example 15 includes the apparatus of example 1, wherein the cache or the backing storage system are to comprise Non-Volatile Memory (NVM), wherein the non-volatile memory is to comprise one or more of: nanowire memory, Ferro-electric Transistor Random Access Memory (FeTRAM), Magnetoresistive Random Access Memory (MRAM), flash memory, Spin Torque Transfer Random Access Memory (STTRAM), Resistive Random Access Memory, byte addressable 3-Dimensional Cross Point Memory, PCM (Phase Change Memory), write-in-place non-volatile memory, and volatile memory backed by a power reserve to retain data during power failure or power disruption. Example 16 includes the apparatus of example 1, further comprising one or more of: at least one processor, having one or more processor cores, communicatively coupled to the cache or the backing storage system, a battery communicatively coupled to the apparatus, or a network interface communicatively coupled to the apparatus.
Example 17 includes a method comprising: causing removal of one or more cache lines from a cache based at least in part on a list of cache lines in the cache, wherein the cache stores data to be stored in a backing storage system, wherein the list of cache lines comprises a sorted list of Logic Block Addresses (LBAs) on the backing storage system that correspond to the one or more cache lines. Example 18 includes the method of example 17, further comprising grouping the one or more cache lines into one or more LBA ranges. Example 19 includes the method of example 18, further comprising sorting the one or more LBA ranges by a size of the one or more LBA ranges. Example 20 includes the method of example 17, wherein causing removal of the one or more cache lines is to be performed in response to an indication that the one or more cache lines are to be modified or replaced. Example 21 includes the method of example 17, further comprising writing the one or more cache lines to the cache in accordance with a write-back policy.
Example 22 includes one or more computer-readable medium comprising one or more instructions that when executed on at least one processor configure the at least one processor to perform one or more operations to: cause removal of one or more cache lines from a cache based at least in part on a list of cache lines in the cache, wherein the cache stores data to be stored in a backing storage system, wherein the list of cache lines comprises a sorted list of Logic Block Addresses (LBAs) on the backing storage system that correspond to the one or more cache lines. Example 23 includes the one or more computer-readable medium of example 22, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to group the one or more cache lines into one or more LBA ranges. Example 24 includes the one or more computer-readable medium of example 23, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to sort the one or more LBA ranges by a size of the one or more LBA ranges. Example 25 includes the one or more computer-readable medium of example 22, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to write the one or more cache lines to the cache in accordance with a write-back policy.
Example 26 includes a computing system comprising: a processor; memory, coupled to the processor, to store data corresponding to object stores; and logic to cause removal of one or more cache lines from a cache based at least in part on a list of cache lines in the cache, wherein the cache is to store data to be stored in a backing storage system, wherein the list of cache lines is to comprise a sorted list of Logic Block Addresses (LBAs) on the backing storage system that correspond to the one or more cache lines. Example 27 includes the computing system of example 26, comprising logic to group the one or more cache lines into one or more LBA ranges. Example 28 includes the computing system of example 26, wherein the logic is to cause removal of the one or more cache lines in response to an indication that the one or more cache lines are to be modified or replaced and in response to comparison of a number of the one or more cache lines and a threshold value. Example 29 includes the computing system of example 26, comprising logic to determine whether to cause removal of the one or more cache lines based at least in part on: the list of the cache lines or an Approximately Least Recently Used (ALRU) cleaning policy. Example 30 includes the computing system of example 26, wherein the one or more cache lines are to be written to the cache in accordance with a write-back policy. Example 31 includes the computing system of example 26, wherein the cache is to comprise at least one Solid State Drive (SSD). Example 32 includes the computing system of example 26, wherein the backing storage system is to comprise at least one Synchronous Access Memory (SAM) device. Example 33 includes the computing system of example 26, wherein the one or more cache lines are to store data before that data is to be written to the backing storage system. Example 34 includes the computing system of example 26, wherein the backing storage system is to comprise a plurality of storage nodes. 
Example 35 includes the computing system of example 26, wherein the cache or the backing storage system are to comprise Non-Volatile Memory (NVM), wherein the non-volatile memory is to comprise one or more of: nanowire memory, Ferro-electric Transistor Random Access Memory (FeTRAM), Magnetoresistive Random Access Memory (MRAM), flash memory, Spin Torque Transfer Random Access Memory (STTRAM), Resistive Random Access Memory, byte addressable 3-Dimensional Cross Point Memory, PCM (Phase Change Memory), write-in-place non-volatile memory, and volatile memory backed by a power reserve to retain data during power failure or power disruption. Example 36 includes the computing system of example 26, further comprising one or more of: the processor, having one or more processor cores, communicatively coupled to the cache or the backing storage system, a battery communicatively coupled to the apparatus, or a network interface communicatively coupled to the apparatus.
Example 37 includes an apparatus comprising means to perform a method as set forth in any preceding example. Example 38 comprises machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as set forth in any preceding example.
In various embodiments, the operations discussed herein, e.g., with reference to
Additionally, such tangible computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals (such as in a carrier wave or other propagation medium) via a communication link (e.g., a bus, a modem, or a network connection).
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase “in one embodiment” in various places in the specification may or may not be all referring to the same embodiment.
Also, in the description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. In some embodiments, “connected” may be used to indicate that two or more elements are in direct physical or electrical contact with each other. “Coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.
Thus, although embodiments have been described in language specific to structural features, numerical values, and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features, numerical values, or acts described. Rather, the specific features, numerical values, and acts are disclosed as sample forms of implementing the claimed subject matter.