Internet traffic is increasingly dominated by content distribution services such as live-streaming and video-on-demand, where user requests may be predictable based on statistical history. In addition, content distribution services usually exhibit strong temporal variability, resulting in highly congested peak hours and underutilized off-peak hours. A common approach is to take advantage of memories distributed across the network, for example, at end users and/or within the network, to store popular contents that are frequently requested by users. This storage process is known as caching. For example, caching may be performed during off-peak hours so that user requests may be served from local caches during peak hours to reduce network load.
Coded caching is a content caching and delivery technique that serves different content requests from users with a single coded multicast transmission based on contents cached at user devices. However, current coded caching schemes that are used for downloadable files are relatively static and may not address the dynamic server-client interactions in streaming services. To resolve these and other problems, and as will be more fully explained below, a coordinated content coding using caches (c4) coordinator is used to dynamically identify coding opportunities among segment requests of clients during streaming.
In one embodiment, the disclosure includes a method implemented by a network element (NE) configured as a c4 coordinator, the method comprising receiving, via a receiver of the NE, a first request from a first remote NE requesting a first file, receiving, via the receiver, a second request from a second remote NE requesting a second file, aggregating, via a processor of the NE, the first request and the second request according to first cache content information of the first remote NE and second cache content information of the second remote NE to produce an aggregated request, and sending, via a transmitter of the NE, the aggregated request to a content server to request a single common delivery of the first file and the second file with coded caching. In some embodiments, the disclosure also includes determining, via the processor, that a coding opportunity is present when the first cache content information indicates that the second file is cached at the first remote NE and when the second cache content information indicates that the first file is cached at the second remote NE, wherein the first request and the second request are aggregated when determining that the coding opportunity is present, and/or starting, via the processor, a timer with a pre-determined timeout interval upon receiving the first request, and determining, via the processor, that the second request is received prior to an expiration of the timer indicating an end of the pre-determined timeout interval, wherein the first request and the second request are aggregated when determining that the second request is received prior to the expiration of the timer, and/or receiving, via the receiver, the first cache content information from the first remote NE, and receiving, via the receiver, the second cache content information from the second remote NE, and/or receiving, via the receiver, a coded file carrying a combination of the first file and the second file coded with the coded caching, and 
sending, via the transmitter, the coded file to the first remote NE and the second remote NE using a multicast transmission, and/or the coded file comprises a bitwise exclusive-or (XOR) of the first file and the second file, and wherein the coded file comprises a file header indicating a first filename of the first file, a first file size of the first file, a second filename of the second file, and a second file size of the second file, and/or receiving, via the receiver, at least an additional request from an additional remote NE requesting an additional file, determining, via the processor, an optimal coding opportunity among the first request, the second request, and the additional request according to the first cache content information of the first remote NE, the second cache content information of the second remote NE, and additional cache content information of the additional remote NE, and further aggregating the first request and the second request when determining that the optimal coding opportunity is between the first request and the second request, and/or the first file and the second file are associated with a scalable video coding (SVC) encoded video stream represented by a plurality of base layer files at a base quality level, a plurality of first enhancement layer files associated with a first quality level higher than the base quality level, and a plurality of second enhancement layer files associated with a second quality level higher than the first quality level, wherein the first cache content information indicates that the plurality of base layer files and the plurality of first enhancement layer files are cached at the first remote NE, and wherein the second cache content information indicates that the plurality of base layer files and the plurality of second enhancement layer files are cached at the second remote NE, and/or the first file and the second file are associated with a SVC encoded video stream represented by a plurality of base 
layer files at a base quality level, a plurality of first enhancement layer files associated with a first quality level higher than the base quality level, and a plurality of second enhancement layer files associated with a second quality level higher than the first quality level, wherein the first cache content information indicates that a first set of the plurality of base layer files and a second set of the plurality of first enhancement layer files associated with the first set are cached at the first remote NE, wherein the second cache content information indicates that a third set of the plurality of base layer files and a fourth set of the plurality of second enhancement layer files associated with the third set are cached at the second remote NE, and wherein the first set and the third set are different, and/or the first file and the second file are associated with a SVC encoded video stream represented by a plurality of base layer files at a base quality level and a plurality of first enhancement layer files associated with a first quality level higher than the base quality level, wherein the first cache content information indicates that a first portion of each of the plurality of base layer files and a second portion of each of the plurality of first enhancement layer files are cached at the first remote NE, wherein the second cache content information indicates that a third portion of each of the plurality of base layer files and a fourth portion of each of the plurality of first enhancement layer files are cached at the second remote NE, wherein the first portion and the third portion are different, and wherein the second portion and the fourth portion are different.
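The SVC placement alternatives above can be illustrated with a short sketch. This is a hypothetical illustration only (the labels `B`, `E1_`, `E2_`, the segment count `N`, and the set layout are assumptions, not part of the disclosure): in the first alternative each client caches all base layer files plus one full enhancement layer, and in the second alternative the clients cache disjoint sets of base layer files with the matching enhancement layer files.

```python
# Hypothetical sketch of two SVC content-placement alternatives.
N = 4  # number of segments in the stream (assumed for illustration)

base = {f"B{i}" for i in range(N)}     # base layer files
enh1 = {f"E1_{i}" for i in range(N)}   # first enhancement layer files
enh2 = {f"E2_{i}" for i in range(N)}   # second enhancement layer files

# Alternative 1: each client caches the whole base layer plus one
# full enhancement layer.
u1_alt1 = base | enh1
u2_alt1 = base | enh2

# Alternative 2: the clients cache different (disjoint) sets of base
# layer files, each paired with its matching enhancement layer files.
first_half = set(range(N // 2))
u1_alt2 = ({f"B{i}" for i in first_half}
           | {f"E1_{i}" for i in first_half})
u2_alt2 = ({f"B{i}" for i in range(N) if i not in first_half}
           | {f"E2_{i}" for i in range(N) if i not in first_half})
```

In the first alternative the clients share the base layer but differ in enhancement layers; in the second the cached base layer sets themselves differ, which changes where coding opportunities arise.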
In another embodiment, the disclosure includes a NE configured to implement a c4 coordinator, the NE comprising a receiver configured to receive a first request from a first remote NE requesting a first file, and receive a second request from a second remote NE requesting a second file, a processor coupled to the receiver and configured to aggregate the first request and the second request according to first cache content information of the first remote NE and second cache content information of the second remote NE to produce an aggregated request, and a transmitter coupled to the processor and configured to send the aggregated request to a content server to request a single common delivery of the first file and the second file with coded caching. In some embodiments, the disclosure also includes a memory configured to store a cache list, wherein the receiver is further configured to receive the first cache content information from the first remote NE, and receive the second cache content information from the second remote NE, and wherein the processor is further configured to update the cache list according to the first cache content information and the second cache content information, and/or the processor is further configured to aggregate the first request and the second request when determining that the first file is cached at the second remote NE and the second file is cached at the first remote NE according to the cache list, and/or the processor is further configured to start a timer with a pre-determined timeout interval when the first request is received, determine that the second request is received prior to an expiration of the timer indicating an end of the pre-determined timeout interval, and aggregate the first request and the second request when determining that the second request is received prior to the expiration of the timer, and/or the receiver is further configured to receive a coded file carrying a combination of the first file and the second
file coded with the coded caching, and wherein the transmitter is further configured to send the coded file to the first remote NE and the second remote NE using a multicast transmission, and/or the content server is a dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) server, and wherein the first remote NE and the second remote NE are DASH clients.
In yet another embodiment, the disclosure includes a method implemented in a NE comprising sending, via a transmitter of the NE, a request to a c4 coordinator in a network requesting a first file, receiving, via a receiver of the NE, a coded file carrying a combination of the first file and a second file coded with coded caching from the c4 coordinator, obtaining, via a processor of the NE, the second file from a cache memory of the NE, and obtaining, via the processor, the first file from the coded file by decoding the coded file according to the second file obtained from the cache memory. In some embodiments, the disclosure also includes decoding the coded file by performing a bitwise XOR operation on the coded file and the second file, and/or receiving, via the receiver, the request from a client application executing on the NE, and sending, via the transmitter to the client application, the first file extracted from the decoding, and/or sending, via the transmitter, a cache report to the c4 coordinator indicating contents cached at the cache memory.
For the purpose of clarity, any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
DASH is a scheme for video streaming. In DASH, a video content is represented in multiple representations with different quality levels. Each representation is partitioned into a sequence of segments each comprising a short playback time of the video content. Examples of multiple representations are adaptive video streaming representations as described in
Coded caching is a content caching and delivery technique that serves different content requests from users with a single coded multicast transmission based on contents cached at user devices. One focus of coded caching is to jointly optimize content placement and delivery for downloadable files. Caching of video streams may be more complex due to the different representations as shown in the schemes 100 and 200. Since a large amount of contents in the Internet are streaming videos, applying coded caching to streaming services may improve network performance. However, the current coded caching schemes that are used for downloadable files are relatively static and may not address the dynamic server-client interactions in streaming services.
Disclosed herein are various embodiments of a coded caching-based system for video streaming and content placement schemes. The coded caching-based system employs a coordination node to group and identify coding opportunities based on content requests from clients and the clients' cache contents. A coding opportunity is present when a first file requested by a first client is cached at a second client and at the same time a second file requested by the second client is cached at the first client. When a coded opportunity is present among a group of client requests, the coordination node requests a server to deliver a single coded content to satisfy all the client requests. Thus, the coordination node is referred to as a c4 coordinator. Upon receiving the coded content delivery request, the server encodes all the requested files into a single common coded file, for example, by performing bitwise XOR on all the requested files. Upon receiving the coded file, the c4 coordinator sends the coded file to all corresponding clients using multicast transmission. In addition, the coded caching-based system employs a local proxy between each client and the c4 coordinator. Each local proxy has direct access to a local cache of a corresponding client. All client requests are directed to corresponding local proxies. The local proxies act as a decoding node to decode coded content received from the c4 coordinator using cache contents of corresponding clients and send the decoded file to the corresponding clients. The disclosed embodiments further consider the multiple representations of video streams for content placement to increase coded caching gain. Although the disclosed embodiments are described in the context of video streaming using DASH, the disclosed embodiments are suitable for use in any content delivery networks (CDNs) and are applicable to any type of contents.
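The XOR coding and decoding steps described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical helper `xor_bytes` and example payloads (neither is mandated by the disclosure): the server XORs the two requested files into one coded multicast, and each proxy recovers its client's file by XOR-ing the coded content with the file it holds in cache.

```python
# Hypothetical sketch of the XOR coding/decoding step in coded caching.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR, zero-padding the shorter input to equal length."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# Client U1 requests S3b; client U2 requests S2a (example payloads).
s2a = b"segment S2a payload"
s3b = b"segment S3b data"

coded = xor_bytes(s2a, s3b)            # one coded multicast from the server

recovered_s3b = xor_bytes(coded, s2a)  # U1 decodes with its cached S2a
recovered_s2a = xor_bytes(coded, s3b)  # U2 decodes with its cached S3b
```

Because XOR requires equal lengths, the shorter file is zero-padded; the file sizes carried in the coded file's header (described below) would let each proxy trim the padding after decoding.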
The server 310 may be any hardware computer server configured to send and receive data over a network for content delivery. The content may include video, audio, text, or combinations thereof. The server 310 comprises a memory 319, which may be any device configured to store contents. As shown, files 311 shown as S1, S2a, S2b, S3a, S3b, . . . , SN are stored in the memory 319. For example, the files 311 correspond to multiple representations of video streams. In some embodiments, the server 310 may store the files 311 in external storage devices located close to the server 310 instead of the memory 319 internal to the server 310. The server 310 communicates and delivers contents to the clients 330 via the c4 coordinator 320. Upon receiving a coded content delivery request from the c4 coordinator 320, the server 310 performs coded caching to deliver a single common coded content to serve multiple clients' 330 requests, as described more fully below.
The clients 330 are shown as U1 and U2. The clients 330 may be any user devices such as computers, mobile devices, and/or televisions configured to request and receive content from the server 310. Each client 330 comprises a cache 337, a video player 338, and a proxy 339. The caches 337 are any internal memory configured to temporarily store files 331 or 332. For example, the server 310 caches portions of the files 311 at the clients' 330 caches 337 during off-peak hours. As shown, the files S1, S2a, and S3a 311 are cached at the client U1 330's cache 337 shown as files 331, and the files S1, S2b, and S3b 311 are cached at the client U2 330's cache 337 shown as files 332. The video players 338 may comprise software and/or hardware components configured to perform video decoding and playback.
Each proxy 339 may be an application or a software component implemented in a corresponding client 330. Each proxy 339 has direct access to the cache 337 and the video player 338 of the corresponding client 330. The proxy 339 acts as an intermediary between the video player 338 and the server 310. The video player 338 directs all content requests to the proxy 339. During a video playback, the proxy 339 may directly access the files 331 or 332 that are cached at the cache 337 for playback when requested by the video player 338. When a requested content is not stored at the cache 337, the proxy 339 forwards the video player's 338 requests to the c4 coordinator 320. The proxy 339 reports the contents of the cache 337, such as the cached files 331 and 332, to the c4 coordinator 320 to enable the c4 coordinator 320 to identify coding opportunities, as described more fully below. Upon receiving a coded content, the proxy 339 decodes the coded content using the contents cached at the cache 337 and sends the decoded content to the video player 338, as described more fully below. Although the proxies 339 are shown as separate components from the video players 338, the proxies 339 may be integrated into the video players 338.
The c4 coordinator 320 may be an application or a software component implemented in a network device. The c4 coordinator 320 is configured to coordinate coded caching for content delivery. The c4 coordinator 320 has a global view of cache contents such as the files 331 and 332 at the clients' 330 caches 337. For example, each client 330 informs the c4 coordinator 320 of internal cache contents during an initialization phase, as described more fully below. The c4 coordinator 320 determines whether a coding opportunity is present among content requests received from the clients' 330 proxies 339. A coding opportunity is present when the client U1 330 requests a file that is cached at the client U2 330's cache 337 and at the same time the client U2 330 requests a file that is cached at the client U1 330's cache 337. When a coding opportunity is present, the c4 coordinator 320 aggregates the requests and sends a coded content delivery request to the server 310. In response, the server 310 sends a single common coded content to the c4 coordinator 320. The c4 coordinator 320 sends the coded content to corresponding clients 330 using multicast transmission. Since the server 310 sends a single common coded content satisfying multiple requests instead of sending a separate file to serve each request, network bandwidth usage is reduced. It should be noted that although the c4 mechanisms are described in the context of video streaming, the c4 mechanisms may be applied to any type of content delivery application. In addition, the system 300 may comprise any number of clients, where the c4 coordinator 320 may determine coding opportunities among any number of requests from any number of clients and the server 310 may send a common coded content to corresponding clients. An optimal aggregation may be to find a minimum set cover for the requests and cache contents of the clients. Alternatively, a sub-optimal aggregation may be to find the best cover for two of the requests.
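The coordinator's coding-opportunity check can be sketched as below. The cache-list layout and the pairwise check are hypothetical illustrations (the disclosure does not fix a data structure), using the same cache contents as the example above: U1 holds S1, S2a, S3a and U2 holds S1, S2b, S3b.

```python
# Hypothetical sketch of the coding-opportunity check at the c4 coordinator.

cache_list = {                  # global view built from the clients' reports
    "U1": {"S1", "S2a", "S3a"},
    "U2": {"S1", "S2b", "S3b"},
}

def coding_opportunity(req1, req2):
    """Each request is (client, file). An opportunity is present when
    each client's requested file is cached at the other client."""
    (c1, f1), (c2, f2) = req1, req2
    return f1 in cache_list[c2] and f2 in cache_list[c1]

# U1 requests S3b (cached at U2) while U2 requests S2a (cached at U1):
ok = coding_opportunity(("U1", "S3b"), ("U2", "S2a"))    # opportunity
miss = coding_opportunity(("U1", "S3b"), ("U2", "S4"))   # S4 cached nowhere
```

This is the sub-optimal pairwise check; an optimal aggregation over many pending requests would instead search for a minimum set cover, as noted above.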
At least some of the features/methods described in the disclosure are implemented in a network apparatus or component, such as an NE 400. For instance, the features/methods in the disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware. The NE 400 is any device that transports packets through a network, e.g., a switch, router, bridge, server, a client, etc. As shown in
A processor 430 is coupled to each Tx/Rx 410 to process the frames and/or determine which nodes to send the frames to. The processor 430 may comprise one or more multi-core processors and/or memory devices 432, which may function as data stores, buffers, etc. The processor 430 may be implemented as a general processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs). The processor 430 comprises a c4 processing module 433, which may perform coded caching and may implement methods 500, 600, 700, 800, and 900, as discussed more fully below, and/or any other flowcharts, schemes, and methods discussed herein. As such, the inclusion of the c4 processing module 433 and associated methods and systems provide improvements to the functionality of the NE 400. Further, the c4 processing module 433 effects a transformation of a particular article (e.g., the network) to a different state. In an alternative embodiment, the c4 processing module 433 may be implemented as instructions stored in the memory device 432, which may be executed by the processor 430.
The memory device 432 may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory device 432 may comprise a long-term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof. The memory device 432 is configured to store content segments 434 such as the files 311, 331, and 332. For example, the memory device 432 corresponds to the memory 319 and caches 337.
It is understood that by programming and/or loading executable instructions onto the NE 400, at least one of the processor 430 and/or memory device 432 are changed, transforming the NE 400 in part into a particular machine or apparatus, e.g., a multi-core forwarding architecture, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and numbers of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable and that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software. In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions (e.g., a computer program product stored in a non-transitory medium/memory) may be viewed as a particular machine or apparatus.
The streaming phase begins at step 520, for example, during peak hours. At step 520, the client 1 sends a first request to the proxy 1 requesting a file S3b. At step 525, the proxy 1 determines that the requested file S3b is not present at the client 1's cache and dispatches the first request to the c4 coordinator. At step 530, the client 2 sends a second request to the proxy 2 requesting a file S2a. At step 535, the proxy 2 determines that the requested file S2a is not present at the client 2's cache and dispatches the second request to the c4 coordinator.
At step 540, the c4 coordinator determines that the first request and the second request arrive within a pre-determined timeframe. For example, the c4 coordinator starts a countdown timer with the pre-determined timeframe after receiving the first request from the client 1 and determines that the second request is received prior to the end of the countdown or the expiration of the timer. The duration of the pre-determined timeframe may be configured based on latency requirements of a streaming application in use. The c4 coordinator determines that a coding opportunity is present based on the cache content information list updated at the step 515, where the file S3b requested by the client 1 is cached at the client 2 and the file S2a requested by the client 2 is cached at the client 1. Thus, at step 545, the c4 coordinator sends an aggregated request to the server requesting a coded delivery of the files S2a and S3b.
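The aggregation window can be sketched as follows. The buffer layout, the 50 ms timeout value, and the function names are hypothetical (the disclosure only requires a pre-determined timeout interval): a request waits in a pending list, and a later request is paired with it only if it arrives before the first request's timer expires.

```python
# Hypothetical sketch of the timeout-bounded aggregation window.
import time

TIMEOUT = 0.05   # pre-determined timeout interval, e.g. 50 ms (assumed)

pending = []     # requests waiting for a potential coding opportunity

def on_request(req, now=None):
    """Buffer a request; pair it with a pending one only if it arrives
    before the pending request's window expires."""
    now = time.monotonic() if now is None else now
    # Expired pending requests would be dispatched uncoded instead.
    for p in [p for p in pending if now - p[1] > TIMEOUT]:
        pending.remove(p)
    if pending:
        first, _ = pending.pop(0)
        return ("aggregate", first, req)   # coding-opportunity check follows
    pending.append((req, now))
    return ("wait", req)

first = on_request("req1", now=0.0)     # buffered, timer started
second = on_request("req2", now=0.01)   # arrives within the window
late = on_request("req3", now=10.0)     # buffered, timer started
after = on_request("req4", now=10.2)    # req3's window already expired
```

A real coordinator would also run the coding-opportunity check on the paired requests and fall back to unicast dispatch for expired ones, as in method 600 below.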
At step 550, upon receiving the aggregated request, the server determines that the aggregated request is a request for a coded response and sends a single coded file carrying a coded-caching combination of the files S2a and S3b. For example, the single coded file comprises a file header indicating file sizes and filenames of the files S2a and S3b. At step 560, the c4 coordinator forwards the single coded file to the proxy 1 and the proxy 2 using multicast transmission.
At step 565, upon receiving the coded file, the proxy 1 decodes the coded file based on cached content (e.g., file S2a) at the client 1 and sends the decoded file S3b to the client 1. For example, the proxy 1 examines the file header of the coded file. When the file header indicates more than one file size, the file is a coded file. The proxy 1 decodes the received coded file using files in the client 1's cache that are indicated in the file header. Similarly, at step 570, upon receiving the coded file, the proxy 2 decodes the coded file based on cached content (e.g., file S3b) at the client 2 and sends the decoded file S2a to the client 2. It should be noted that the method 500 may be applied to aggregate any number of client requests as long as a coding opportunity is present among the client requests.
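The header-driven decode described above can be sketched as below. The wire format is a hypothetical assumption (a JSON header line is used purely for illustration; the disclosure only requires that the header carry the filenames and file sizes): the server XORs the zero-padded payloads and prepends the header, and the proxy XORs out every cached file named in the header, then trims the result to the requested file's size.

```python
# Hypothetical sketch of a coded-file header and the proxy-side decode.
import json

def encode(files: dict) -> bytes:
    """Server side: XOR the zero-padded payloads, prepend a header
    listing each filename and its file size."""
    n = max(len(v) for v in files.values())
    coded = bytes(n)
    for data in files.values():
        coded = bytes(a ^ b for a, b in zip(coded, data.ljust(n, b"\x00")))
    header = json.dumps({name: len(data) for name, data in files.items()})
    return header.encode() + b"\n" + coded

def decode(packet: bytes, cache: dict, wanted: str) -> bytes:
    """Proxy side: XOR out every cached file named in the header,
    then trim to the requested file's size."""
    raw_header, coded = packet.split(b"\n", 1)
    sizes = json.loads(raw_header)
    assert len(sizes) > 1          # more than one file size: a coded file
    n = len(coded)
    for name in sizes:
        if name != wanted:
            coded = bytes(a ^ b
                          for a, b in zip(coded, cache[name].ljust(n, b"\x00")))
    return coded[:sizes[wanted]]

pkt = encode({"S2a": b"AAAA", "S3b": b"BBBBBB"})
got_s3b = decode(pkt, {"S2a": b"AAAA"}, wanted="S3b")    # proxy 1
got_s2a = decode(pkt, {"S3b": b"BBBBBB"}, wanted="S2a")  # proxy 2
```

The file sizes in the header serve two purposes here: signaling that the file is coded (more than one size) and letting each proxy trim the XOR padding.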
At step 640, the client 2 sends a second request to the proxy 2 requesting a file S2a. At step 645, the proxy 2 dispatches the second request to the c4 coordinator. At step 650, the c4 coordinator detects a timeout condition and forwards the second request to the server. At step 655, the server sends the uncoded file S2a to the c4 coordinator. At step 660, the c4 coordinator forwards the uncoded file S2a to the proxy 2. At step 665, the proxy 2 forwards the uncoded file S2a to the client 2. It should be noted that although the file S3b requested by the client 1 at the step 610 is cached at the client 2 and the file S2a requested by the client 2 at the step 640 is cached at the client 1, the two requests did not arrive at the c4 coordinator within a timeout period, thus no coding opportunity is available.
At step 730, a first determination is made whether the second request is received prior to an expiration of the timer indicating an end of the pre-determined timeout interval. When the second request is received prior to the expiration of the timer, the method 700 proceeds to step 740. Otherwise, the method 700 proceeds to step 770.
At step 740, a second determination is made whether a coding opportunity is present. A coding opportunity is present when the first cache content information indicates that the second file is cached at the first remote NE and when the second cache content information indicates that the first file is cached at the second remote NE. When a coding opportunity is present, the method 700 proceeds to step 750. Otherwise, the method 700 proceeds to step 770.
At step 750, the first request and the second request are aggregated to produce an aggregated request. At step 755, the aggregated request is sent to a content server such as the server 310 to request a delivery of the first file and the second file with coded caching. At step 760, a coded file carrying a combination of the first file and the second file coded with the coded caching is received. For example, the coded file is received from the content server, which determines that the aggregated request is a request for a coded file. At step 765, the coded file is sent to the first remote NE and the second remote NE using a multicast transmission.
At step 770, when there is a timeout condition or when no coding opportunity is available, the first request and the second request are separately dispatched to the content server. At step 775, the first file is received from the content server. At step 780, the second file is received from the content server. At step 785, the first file is sent to the first remote NE using unicast transmission. At step 790, the second file is sent to the second remote NE using unicast transmission.
As described above, a DASH server such as the server 310 may perform video streaming using adaptive video streaming with representations as shown in the scheme 100 or using SVC with representations as shown in the scheme 200. With multiple representations or versions of the same video available at the server, coding opportunity varies depending on the versions and/or segments cached at the clients such as the clients 330. Thus, the c4 mechanisms may provide different gains for different content placement schemes. The following embodiments analyze and evaluate different content placement schemes for adaptive video streaming and SVC.
To analyze the coded caching gain for adaptive video streaming, a setup with a server and K clients is used. The server stores N video files, each comprising a size of F bits at a base rate of r bps. Assume the size of a video file is directly proportional to the bit rate of the video. Then, the file size is scaled by the same factor α as the bit rate of the video. Each client has a cache capacity of M×F bits. The server uniformly caches M/N portion of each video file at each client. To cache versions with a bit rate of α×r bps, the server caches M/(αN) portion of each α×r bps video file at each client. As an example, K is set to a value of 2 to represent 2 clients and M is set to a value of N.
In a first scenario, when caching the video files at a base rate of about r bps, all N files are cached at each client. Streaming at the base rate (e.g., α=1) requires no server bandwidth since the clients may stream from their caches. Streaming at a rate of about 2r bps requires a server bandwidth of about 4r bps (e.g., K×2r) since the 2r bps versions are not cached at the clients.
In a second scenario, when caching the 2r bps (e.g., α=2) version of the video files, each client caches half (e.g., M/(αN)=½) of each video file. For example, the server caches half of each video file at a first client and another disjoint half of each file at a second client. Streaming at a rate of about 2r bps requires a server bandwidth of about 2r bps (e.g., K×(1−M/(αN))×2r=2r) without coding and about r bps (e.g., K×(1−M/(αN))×½×2r=r) with coding. Streaming at a rate of about 3r bps requires a server bandwidth of about 3r bps (e.g., K×(1−M/(αN))×3r=3r) when the clients play back the cached portions at the lower rate of about 2r bps and request the 3r bps version for the uncached portions. However, when the client desires to play back the entire video at 3r bps, a server bandwidth of about 6r bps (e.g., K×3r) is required.
In a third scenario, when caching the 3r bps (e.g., α=3) version of the video files, each client caches a third (e.g., M/(αN)=⅓) of each video file. For example, the server caches one third of each video file at a first client and another disjoint one third of each file at a second client. Then, streaming at a rate of about 3r bps requires a server bandwidth of about 4r bps (e.g., K×(1−M/(αN))×3r=4r) without coding. When applying coded caching, the required server bandwidth is about 3r bps, where one third of each requested file (e.g., K×M/(αN)×½×3r=r) is coded and the remaining one third of each requested file is uncoded (e.g., K×M/(αN)×3r=2r). Streaming at a rate of about 4r bps requires a server bandwidth of about 16r/3 bps (e.g., K×(1−M/(αN))×4r=16r/3) when the clients play back the cached portions at the lower rate of about 3r bps and request the 4r bps version for the uncached portions. However, when the client desires to play back the entire video at 4r bps, a server bandwidth of about 8r bps (e.g., K×4r) is required. The following table summarizes the three scenarios:
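The arithmetic in the second and third scenarios can be checked with a short script. Bandwidths are expressed in units of r bps, with K=2 clients and M=N (so each client caches an M/(αN)=1/α fraction of each α·r bps file); the helper name is an illustration, not part of the analysis.

```python
# Check of the server-bandwidth arithmetic for K = 2, M = N.
from fractions import Fraction as F

K = 2
r = F(1)  # express bandwidths in units of r bps

def uncoded_bw(alpha, play_rate):
    """Unicast bandwidth: each client fetches the uncached fraction
    (1 - M/(alpha*N)) of the file at play_rate * r."""
    return K * (1 - F(1, alpha)) * play_rate * r

# Scenario 2: 2r versions cached (alpha = 2), streaming at 2r.
s2_uncoded = uncoded_bw(2, 2)          # disjoint halves, each by unicast
s2_coded = s2_uncoded / 2              # one coded multicast serves both

# Scenario 3: 3r versions cached (alpha = 3), streaming at 3r.
s3_uncoded = uncoded_bw(3, 3)
coded_part = F(1, 3) * 3 * r           # cross-cached third, multicast once
uncoded_part = K * F(1, 3) * 3 * r     # third cached at neither client
s3_coded = coded_part + uncoded_part
```

The script reproduces the values above: 2r vs r in the second scenario, and 4r vs 3r (r coded plus 2r uncoded) in the third.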
Table 1—Summary of Coded Caching Gain for Adaptive Video Streaming
The scheme 1100 reduces the bandwidth usage by
when compared to the scheme 1000.
The scheme 1200 reduces the bandwidth usage by
when compared to the scheme 1000.
The scheme 1300 reduces the bandwidth usage by
when compared to the scheme 1000.
In an embodiment, a NE includes means for receiving a first request from a first remote NE requesting a first file, means for receiving a second request from a second remote NE requesting a second file, means for aggregating the first request and the second request according to first cache content information of the first remote NE and second cache content information of the second remote NE to produce an aggregated request, and means for sending the aggregated request to a content server to request a single common delivery of the first file and the second file with coded caching.
In an embodiment, a NE includes means for sending a request to a c4 coordinator in a network requesting a first file, means for receiving a coded file carrying a combination of the first file and a second file coded with coded caching from the c4 coordinator, means for obtaining the second file from a cache memory of the NE, and means for obtaining the first file from the coded file by decoding the coded file according to the second file obtained from the cache memory.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.