This disclosure relates generally to data transfer, and more specifically to systems, methods, and apparatus for transferring data between interconnected devices.
In some processing systems, a computing workload may be split among multiple compute devices, each of which may include a processor and memory. Data produced as a result of a first computation by a first one of the compute devices may be stored at a storage device, then transferred to a second one of the compute devices where it may be used as an input to a second computation. A host device may coordinate data movement between the compute devices and the storage device.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the invention and therefore it may contain information that does not constitute prior art.
A method for transferring data may include writing, from a producing device, data to a storage device through an interconnect, determining a consumer device for the data, prefetching the data from the storage device, and transferring, based on the determining, the data to the consumer device through the interconnect. The method may further comprise receiving, at a prefetcher for the storage device, an indication of a relationship between the producing device and the consumer device, and determining the consumer device based on the indication. The method may further comprise placing the data in a stream at the storage device based on the relationship between the producing device and the consumer device. The indication may be provided by an application associated with the consumer device. Receiving the indication may include receiving the indication through a coherent memory protocol for the interconnect. Receiving the indication through a coherent memory protocol may include receiving a producer identifier (ID) and a consumer ID through one or more fields of the coherent memory protocol. The method may further include detecting, at a prefetcher for the storage device, an access pattern of the producing device and the consumer device, and determining the consumer device based on the access pattern. The method may further include allocating, by a host, memory at the consumer device for the data. The method may further include allocating, by the storage device, memory at the consumer device for the data. The memory at the consumer device may include reserved memory. The method may further include updating, by a host, a mapping for the memory at the consumer device. The transferring may overlap a compute operation at the consumer device. The method may further include notifying a prefetcher for the storage device of a status of the writing. The notifying may include writing to a memory location.
A device may include an interconnect interface, a storage medium, and a prefetcher configured to perform a determination of a consumer device for data stored in the storage medium, prefetch the data from the device, and transfer, based on the determination, the data to the consumer device through the interconnect interface. The device may further include a data structure configured to store information on a relationship between a producer device of the data and the consumer device. The data structure may include a producer identifier (ID) and a consumer ID for the relationship. The device may further include a multi-stream interface configured to store the data received through the interconnect interface in a stream of the storage medium based on the relationship. The prefetcher may include detection logic configured to determine an access pattern for the consumer device and a producer device of the data.
A system may include an interconnect, a producer device coupled to the interconnect, a consumer device coupled to the interconnect, a storage device coupled to the interconnect and configured to store data received from the producer device through the interconnect, and a prefetcher coupled to the interconnect, wherein the prefetcher may be configured to perform a determination of the consumer device based on the producer device, prefetch the data, and transfer, based on the determination, the data to the consumer device through the interconnect. The producer device may be configured to notify the prefetcher of a status of the data received from the producer device through the interconnect. The system may further include a host device coupled to the interconnect. The host device may be configured to send, through the interconnect, information to the prefetcher about a relationship between the producer device and the consumer device. The host device may include a coherency engine configured to maintain memory coherency between the producer device, the consumer device, and the storage device.
The figures are not necessarily drawn to scale and elements of similar structures or functions may generally be represented by like reference numerals or portions thereof for illustrative purposes throughout the figures. The figures are only intended to facilitate the description of the various embodiments described herein. The figures do not describe every aspect of the teachings disclosed herein and do not limit the scope of the claims. To prevent the drawings from becoming obscured, not all of the components, connections, and the like may be shown, and not all of the components may have reference numbers. However, patterns of component configurations may be readily apparent from the drawings. The accompanying drawings, together with the specification, illustrate example embodiments of the present disclosure, and, together with the description, serve to explain the principles of the present disclosure.
A storage device in accordance with example embodiments of the disclosure may prefetch data stored at the storage device and transfer it to a consumer device that may use the data for a computation or other processing. In some embodiments, this may reduce or eliminate the involvement of a host which may be a bottleneck in transferring data between devices. Depending on the implementation details, prefetching data and transferring it to a consumer device may reduce access latency and/or synchronization overhead, and/or may enable data input and/or output (I/O) operations to overlap with data processing operations at the consumer device, thereby improving throughput.
In some embodiments, a producer device and a consumer device may be coupled through an interconnect in a pipeline configuration to perform distributed computations such as machine learning (ML) training and/or inference. For example, a producer device (e.g., a compute device such as an accelerator, graphics processing unit (GPU), and/or the like) may write the results of a first stage of computation to a storage device through the interconnect. A consumer device (e.g., another compute device such as an accelerator, GPU, and/or the like) may read the results from the storage device and use the results for a next stage of computation. In some embodiments, a prefetcher in the storage device may prefetch the results stored by the producer device and transfer the results to the consumer device in anticipation of the consumer device using the results for the next stage of computation. Depending on the implementation details, this may enable data to be transferred to the consumer device in parallel with other processing being performed by the consumer device, thereby reducing or hiding memory and/or storage device access latency.
A storage device may determine which consumer device to transfer prefetched data to based on various techniques in accordance with example embodiments of the disclosure. For example, in some embodiments, a prefetcher for a storage device may receive information from an application (e.g., running on a host coupled to the interconnect) indicating producer-consumer relationships between one or more producer devices and one or more consumer devices. Thus, when a specific producer device writes data to the storage device (e.g., a specific amount of data written to a specific location), the prefetcher may prefetch the data and transfer it to a specific consumer device. As another example, in some embodiments, a prefetcher may monitor read and/or write operations for a storage device to detect one or more access patterns that may predict which consumer device is likely to use data stored by a specific producer device.
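For purposes of illustration, a minimal sketch in C of how a prefetcher might represent and query such producer-consumer relationships is shown below; the structure layout and function names are assumptions for illustration and are not part of this disclosure.

#include <stddef.h>

/* Illustrative producer-consumer relationship entry. */
struct pc_relation {
    unsigned producer_id;    /* GI ID of the producing device */
    unsigned consumer_id;    /* GI ID of the consuming device */
    unsigned long long addr; /* starting address of the written data */
    size_t size;             /* amount of data written */
};

/* Return the consumer GI ID for a write by producer_id that covers addr,
 * or -1 if no relationship is known (e.g., so the prefetcher may fall back
 * to access-pattern detection). */
int lookup_consumer(const struct pc_relation *table, size_t n,
                    unsigned producer_id, unsigned long long addr)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].producer_id == producer_id &&
            addr >= table[i].addr &&
            addr < table[i].addr + table[i].size)
            return (int)table[i].consumer_id;
    }
    return -1;
}

In some embodiments, such a table might be populated from the indications (e.g., hints) described below.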
To provide a target location for writing prefetched data at a consumer device, a storage device may allocate memory at a consumer device based on various techniques in accordance with example embodiments of the disclosure. For example, in some embodiments, a storage device may send a memory allocation request to a host which may allocate target memory at the consumer device (e.g., through a virtual memory manager (VMM) at the host). As another example, the storage device may allocate the target memory itself (e.g., using a VMM at the prefetcher). In some embodiments in which the storage device allocates the target memory, the storage device may copy the prefetched data to a reserved area of memory at the consumer device.
In some embodiments, an interconnect between a producer device, a consumer device, a storage device, and/or a host may be implemented at least partially with a memory coherent interface and/or using one or more memory coherent protocols. In such embodiments, one or more aspects of the memory coherent interface and/or protocol may be used to implement one or more features in accordance with example embodiments of the disclosure. For example, in some embodiments, a coherency engine may send information about one or more producer-consumer relationships to a prefetcher using one or more protocol fields such as a tag field.
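As an illustrative sketch of how a coherency engine might convey producer and consumer IDs through a protocol field such as a tag (the 16-bit tag width and the 8-bit split below are assumptions for illustration; actual field widths and usage depend on the interface and protocol version):

#include <stdint.h>

/* Pack an 8-bit producer GI ID and an 8-bit consumer GI ID into a
 * hypothetical 16-bit tag field. */
static inline uint16_t pack_gi_tag(uint8_t producer_id, uint8_t consumer_id)
{
    return (uint16_t)(((uint16_t)producer_id << 8) | consumer_id);
}

/* Recover the two IDs from the tag at the receiving prefetcher. */
static inline void unpack_gi_tag(uint16_t tag,
                                 uint8_t *producer_id, uint8_t *consumer_id)
{
    *producer_id = (uint8_t)(tag >> 8);
    *consumer_id = (uint8_t)(tag & 0xffu);
}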
In some embodiments, a storage device may store data from one or more producer devices in one or more streams at the storage device. For example, data having similar lifetimes and/or similar producer-consumer relationships may be placed in the same streams. Thus, in some embodiments, data destined for the same consumer device may be placed in the same stream. Depending on the implementation details, this may improve garbage collection and/or block erase operations at the storage device, because, for example, some or all of the data transferred to a specific consumer device may become invalid at the same time.
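For purposes of illustration, stream selection might be keyed on the consumer device so that data destined for the same consumer shares a stream; the fixed stream count and modulo policy below are illustrative assumptions rather than a defined placement algorithm.

#define NUM_STREAMS 8

/* Map data destined for a given consumer GI ID to one of NUM_STREAMS
 * streams; data with the same producer-consumer relationship will then
 * share a stream and may tend to become invalid together. */
unsigned select_stream(unsigned consumer_id)
{
    return consumer_id % NUM_STREAMS;
}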
The principles disclosed herein have independent utility and may be embodied individually, and not every embodiment may utilize every principle. However, the principles may also be embodied in various combinations, some of which may amplify the benefits of the individual principles in a synergistic manner.
The host device 102 may include a central processing unit (CPU) 112 and a memory 114 which, in this embodiment, may be implemented with dynamic random access memory (DRAM). Each of the compute devices 104a, 104b, 104c, and 104d may include a corresponding GPU 116a, 116b, 116c, and 116d, respectively (indicated as GPU0, GPU1, GPU2, and GPU3, respectively). The GPUs 116a, 116b, 116c, and 116d may be referred to collectively as 116. Each of the compute devices 104a, 104b, 104c, and 104d may further include a corresponding local device memory 118a, 118b, 118c, and 118d, respectively (indicated as DRAM0, DRAM1, DRAM2, and DRAM3, respectively). The local device memories 118a, 118b, 118c, and 118d may be referred to collectively as 118. Each of the storage devices 106a and 106b may include a corresponding local storage medium 120a and 120b, respectively (indicated as Storage0 and Storage1, respectively). The local storage media 120a and 120b may be referred to collectively as 120. Each of the storage devices 106a and 106b may further include a corresponding controller 122a and 122b, respectively (indicated as Controller0 and Controller1, respectively). The controllers 122a and 122b may be referred to collectively as 122.
In some embodiments, an application running on the host device 102 may coordinate data movement between the individual device local memories. For example, the host device 102 may send one or more commands to one of the storage devices 106 to transfer data from the local memory 118 of one of the compute devices 104 to the storage medium 120 of the storage device 106. This may be referred to as pulling data from the local memory 118. The host device 102 may also send one or more commands to one of the storage devices 106 to transfer data from the storage medium 120 of the storage device 106 to the local memory 118 of one of the compute devices 104. This may be referred to as pushing data to the local memory 118.
In the embodiment illustrated in FIG. 1, and depending on the implementation details, the host device 102 may be a bottleneck for data movement between devices because it may be involved in coordinating some or all of the data transfers. Thus, the storage devices 106 may be passive participants in the data movement. Moreover, in some embodiments, data transfers between the local memories 118 and the storage media 120 may only occur while a processing kernel is not executing on the corresponding GPU 116.
In some embodiments, one or more of the compute devices 204 may operate as a producer device that may produce (e.g., as a result of a computation or other processing) data that may be consumed by one or more of the compute devices 204 that may operate as a consumer device. In some situations, a compute device 204 may operate as both a producer device and a consumer device.
The prefetcher 224 may implement one or more techniques for storing and/or transferring data to and/or from one or more of the compute devices 204 and/or other devices accessible through the interconnect 208 in accordance with example embodiments of the disclosure. For example, the prefetcher 224 may be implemented as a programmable prefetcher that may prefetch data from local memory at the storage device 206 (e.g., storage medium 220) and push it to the local memory 218 of one or more of the compute devices 204 (e.g., a memory at the device having a processor or other general initiator (GI) 216 that may use the data, or a memory at a device that may be relatively close, or closest, to a processor or other GI that may use the data). Thus, in some embodiments, a consumer device may be a compute device 204 that may include a processor or other GI that may use the transferred data, or a consumer device may be a compute device 204 or other device having a memory that may store the transferred data for a processor or other GI (e.g., at another device connected to the interconnect 208) that may use the transferred data.
In some embodiments, the prefetcher 224 may determine a consumer device to prefetch data for, and/or push data to, based on information the prefetcher may receive from an application (e.g., running on a host coupled to the interconnect) indicating one or more producer-consumer relationships between one or more producer devices and one or more consumer devices. In some embodiments, the prefetcher 224 may determine a consumer device by monitoring one or more read and/or write operations for one or more storage devices to detect one or more access patterns that may predict which consumer device is likely to use data stored by a specific producer device. In some embodiments, the prefetcher 224 may include detection logic 225 configured to monitor read and/or write operations and/or detect one or more access patterns.
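A simplified sketch of such detection logic is shown below; it merely counts which device reads each producer's data and predicts the most frequent reader. A real implementation might add confidence thresholds, aging, sequentiality checks, and/or the like; all names and sizes here are assumptions for illustration.

#define MAX_DEVICES 16

/* Per-producer histogram of which GI IDs read that producer's data. */
static unsigned read_count[MAX_DEVICES][MAX_DEVICES];

/* Called by monitoring logic when reader_id reads data most recently
 * written by producer_id. */
void observe_read(unsigned producer_id, unsigned reader_id)
{
    if (producer_id < MAX_DEVICES && reader_id < MAX_DEVICES)
        read_count[producer_id][reader_id]++;
}

/* Predict the likely consumer of data written by producer_id; return -1
 * if no reads have been observed yet. */
int predict_consumer(unsigned producer_id)
{
    if (producer_id >= MAX_DEVICES)
        return -1;
    int best = -1;
    unsigned best_count = 0;
    for (unsigned r = 0; r < MAX_DEVICES; r++) {
        if (read_count[producer_id][r] > best_count) {
            best_count = read_count[producer_id][r];
            best = (int)r;
        }
    }
    return best;
}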
In some embodiments, the prefetcher 224 may allocate memory at a consumer device by requesting a memory allocation by a host device, by allocating the memory itself, or in any other manner.
Depending on the implementation details, the embodiment illustrated in FIG. 2 may reduce or eliminate the involvement of a host in transferring data between the compute devices 204 and the storage device 206, which may reduce access latency and/or synchronization overhead and/or enable data transfers to overlap with processing at a consumer device.
In some embodiments, the prefetcher 224 may be integral with the storage device 206. For example, in some embodiments the prefetcher may be implemented partially or entirely as part of a storage device controller for the storage device 206. As another example, in some embodiments, the prefetcher 224 may be implemented partially or entirely as part of a host device and/or one or more of the compute devices 204.
The compute devices 204 may be implemented with any type of device that may include a memory 218 and/or a processor or other GI 216 that may produce and/or use data that may be stored in the storage device 206. Examples may include GPUs, accelerators, neural processing units (NPUs), tensor processing units (TPUs), network interface cards (NICs), and/or the like.
Any of the memories 218a and 218b and/or storage medium 220 may be implemented with any type of memory and/or storage media including any type of solid state media, magnetic media, optical media, and/or the like, any type of volatile memory such as DRAM, static random access memory (SRAM), and/or the like, any type of nonvolatile memory including flash memory such as not-AND (NAND) flash memory, persistent memory (PMEM) such as cross-gridded nonvolatile memory, memory with bulk resistance change, phase change memory (PCM), and/or the like, or any combination thereof.
The interconnect 208 may be implemented with one or more of any type of interface and/or protocol including Peripheral Component Interconnect Express (PCIe), Nonvolatile Memory Express (NVMe), NVMe-over-fabric (NVMe-oF), Ethernet, Transmission Control Protocol/Internet Protocol (TCP/IP), remote direct memory access (RDMA), RDMA over Converged Ethernet (ROCE), FibreChannel, InfiniBand, Serial ATA (SATA), Small Computer Systems Interface (SCSI), Serial Attached SCSI (SAS), iWARP, and/or the like, or any combination thereof. In some embodiments, the interconnect 208 may be implemented with one or more memory semantic and/or memory coherent interfaces and/or protocols such as Compute Express Link (CXL), and/or CXL.mem, CXL.io, and/or CXL.cache, Gen-Z, Coherent Accelerator Processor Interface (CAPI), Cache Coherent Interconnect for Accelerators (CCIX), and/or the like, or any combination thereof.
For purposes of illustration, the embodiment illustrated in FIG. 2 is shown with two compute devices 204 and a single storage device 206, but other embodiments may include any number and/or types of compute devices, storage devices, and/or other devices.
Referring to FIG. 3, for purposes of illustration, each of the compute devices 304 may process a corresponding stage of an ML workload 310, which, in this embodiment, may be implemented as a neural network. Thus, compute devices 304a, 304b, 304c, and 304d may process corresponding stages 310a, 310b, 310c, and 310d, respectively, of the neural network workload 310. The final stage 310d may include, for example, one or more fully connected (FC) layers and a SoftMax function. However, the system illustrated in FIG. 3 may be used with any other type of workload and/or processing.
The host device 302 may include a central processing unit (CPU) 312 and a memory 314 which, in this embodiment, may be implemented with dynamic random access memory (DRAM), but may also be implemented with any other type of memory.
For purposes of illustration, each of the compute devices 304a, 304b, 304c, and 304d may include a corresponding GPU 316a, 316b, 316c, and 316d, respectively (indicated as GPU0, GPU1, GPU2, and GPU3, respectively). The GPUs 316a, 316b, 316c, and 316d may be referred to collectively as 316. However, any other type of compute and/or processing apparatus may be used.
Each of the compute devices 304a, 304b, 304c, and 304d may further include a corresponding local device memory 318a, 318b, 318c, and 318d, respectively (indicated as DRAM0, DRAM1, DRAM2, and DRAM3, respectively). The local device memories 318a, 318b, 318c, and 318d may be referred to collectively as 318. For purposes of illustration, the memories 318 may be implemented with DRAM as shown in FIG. 3, but any other type of memory may be used.
Each of the storage devices 306a and 306b may include a corresponding local storage medium 320a and 320b, respectively (indicated as Storage0 and Storage1, respectively). The local storage media 320a and 320b may be referred to collectively as 320. For purposes of illustration, the storage media 320 may be assumed to be NAND flash memory, but any type of memory and/or storage media may be used.
Each of the storage devices 306a and 306b may further include a corresponding prefetcher 324a and 324b, respectively, (indicated as Prefetcher0 and Prefetcher1, respectively). The prefetchers 324a and 324b may be referred to collectively as 324.
For purposes of illustration, the interconnect 308 may be implemented with CXL, but any other type of interconnect(s) and/or protocol(s) may be used.
One or more of the CPU 312, the GPUs 316, and/or prefetchers 324 may be assigned a general initiator identifier (GI ID), for example, by the host 302. In the embodiment illustrated in FIG. 3, for example, the GPUs 316a, 316b, 316c, and 316d may be assigned GI IDs 1, 2, 3, and 4, respectively.
Any of the prefetchers 324 may push data to any of the memories 314 and/or 318 using connections through the interconnect 308, some examples of which are shown by dashed arrows 326. Any of the prefetchers 324 may communicate with any of the GPUs 316 and/or CPU 312 using connections through the interconnect 308, some examples of which are shown by solid arrows 328.
Referring to FIG. 4, an application 403 running on a host 402 may provide one or more indications of producer-consumer relationships to a prefetcher 424. The one or more indications (which may also be referred to as hints) may include information such as a producer GI ID, a consumer GI ID, a data address, and/or a data size (in bytes, pages, blocks, and/or the like) as illustrated in Table 1, which may be stored by the prefetcher 424.
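Although the contents of Table 1 are not reproduced here, a hypothetical example of such entries (the specific values are illustrative assumptions, not taken from this disclosure) might be as follows:

TABLE 1 (hypothetical example)
Producer GI ID | Consumer GI ID | Data address | Data size
1 (GPU0) | 2 (GPU1) | 0x10000000 | 64 MB
2 (GPU1) | 3 (GPU2) | 0x18000000 | 64 MB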
In some embodiments, the application 403 may pass the producer and/or consumer GI IDs to the prefetcher, for example, during data reads and/or writes using one or more CXL fields such as a tag field, a metavalue field, and/or a metafield. The host 402 and/or application 403 may be implemented, for example, with the corresponding host 302 illustrated in FIG. 3.
Referring to FIG. 4, in some embodiments, the storage device may provide a multi-stream interface, and data received from one or more producer devices may be placed in one or more streams based on one or more producer-consumer relationships. In the example illustrated in FIG. 4, data having the same producer-consumer relationship (e.g., data destined for the same consumer device) may be placed in the same stream.
Thus, in some embodiments, a prefetcher may exploit existing apparatus for stream-based placement to place related data in the same stream, which, depending on the implementation details, may provide an efficient storage technique for data to be prefetched and/or pushed to a compute device.
Referring to FIG. 5, an embodiment of a method for storing, prefetching, and pushing data in accordance with example embodiments of the disclosure may begin with an application providing one or more indications of producer-consumer relationships to a prefetcher for a storage device.
At operation 504, the storage device may make one or more data placement decisions (e.g., using the prefetcher) for storing data at the device based, for example, on one or more indications from the application. For example, the prefetcher may select one or more streams for storing data received from a host and/or one or more producer devices based on one or more indications of producer-consumer relationships. At operation 506, the prefetcher may store the data in the selected streams through a multi-stream interface in the storage device.
At operation 508, the storage device may detect, e.g., using detection logic in the prefetcher, one or more access patterns that may indicate a producer-consumer relationship between one or more producer devices and one or more consumer devices. The detection of access patterns may be in addition to, or an alternative to, the indications of producer-consumer relationships provided by an application and/or host. Based on one or more indicated producer-consumer relationships and/or one or more detected access patterns, the prefetcher may select one or more consumer devices to prefetch data for, and one or more times to prefetch the data. For example, the prefetcher may prefetch data for a specific consumer device when there is free space for the data in the memory of the consumer device.
At operation 510, the prefetcher may push the prefetched data to the consumer device through an interconnect such as CXL. In some embodiments, the prefetcher may perform one or more operations to allocate target space for the data at the consumer device prior to pushing the data as described in more detail below.
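A condensed sketch tying operations 504 through 510 together is shown below; the helper functions stand in for storage device internals (e.g., the lookup, stream selection, and prediction sketches above) and are illustrative assumptions rather than a defined implementation.

#include <stddef.h>

#define DEFAULT_STREAM 0

/* Hypothetical helpers provided by the storage device firmware. */
extern int      lookup_consumer_from_hints(unsigned producer_id);
extern unsigned select_stream(unsigned consumer_id);
extern void     store_to_stream(unsigned stream, const void *data, size_t size);
extern int      predict_consumer(unsigned producer_id);
extern int      free_space_available(unsigned consumer_id, size_t size);
extern void     push_to_consumer(unsigned consumer_id, const void *data, size_t size);

void prefetcher_handle_write(unsigned producer_id, const void *data, size_t size)
{
    /* Operation 504: placement decision based on indications. */
    int consumer = lookup_consumer_from_hints(producer_id);
    unsigned stream = (consumer >= 0) ? select_stream((unsigned)consumer)
                                      : DEFAULT_STREAM;
    /* Operation 506: store through the multi-stream interface. */
    store_to_stream(stream, data, size);

    /* Operation 508: fall back to detected access patterns. */
    if (consumer < 0)
        consumer = predict_consumer(producer_id);

    /* Operation 510: push only when the consumer has free space. */
    if (consumer >= 0 && free_space_available((unsigned)consumer, size))
        push_to_consumer((unsigned)consumer, data, size);
}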
In some embodiments, an application may provide the one or more indications of producer-consumer relationships to a prefetcher programmatically, for example, by programming the prefetcher through an application programming interface (API). Such an arrangement may be used, for example, when a user or programmer may have insights into the data access patterns of a workload. An example of a pseudocode definition for a procedure for sending one or more indications (e.g., hints) to a prefetcher may be as follows:
send_prefetch_hint(const void *prefetcher, size_t producer_id, size_t consumer_id, const void *buffer_ptr, size_t size, const char *access_pattern);
<one or more compute operations>
Examples of parameters that may be provided with an indication of a producer-consumer relationship may be as follows:
prefetcher: prefetcher device
producer_id: ID of producer device
consumer_id: ID of consumer device
buffer_ptr: pointer to memory written by producer and read by consumer
size: size of memory written by producer
access_pattern: can be sequential, random, or determined at runtime
An example invocation of the procedure for sending one or more indications to a prefetcher may be as follows for a case in which the application provides the access pattern (e.g., enabling the prefetcher to push data to GPU1 before the end of GPU0 kernel execution):
send_prefetch_hint(..., "sequential"), 1->4
An example invocation of the procedure for a case in which an access pattern may be determined by the prefetcher at runtime may be as follows:
send_prefetch_hint(..., "runtime"), 1->2->3->4
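For purposes of illustration, a hypothetical application might register hints for a four-stage GPU0->GPU1->GPU2->GPU3 pipeline as follows; the prefetcher handle, buffer pointers, and sizes are illustrative assumptions, and the GI IDs 1 through 4 follow the example assignment described above:

send_prefetch_hint(prefetcher0, 1, 2, stage1_out, STAGE_BYTES, "sequential");
send_prefetch_hint(prefetcher0, 2, 3, stage2_out, STAGE_BYTES, "runtime");
send_prefetch_hint(prefetcher0, 3, 4, stage3_out, STAGE_BYTES, "runtime");
<one or more compute operations>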
Referring to FIG. 6, at operation (1), GPU0 (e.g., at compute device 304a) may write data 638 to Storage0 at storage device 306a.
In an implementation in which the prefetcher determines an access pattern at runtime, Prefetcher0 may observe, at operation (2), that GPU1 may read data elements 640a, 640b, 640c, and 640d in sequence after GPU0 writes the data 638. At operation (3), based on the observed access pattern, Prefetcher0 may prefetch the data 640 when it observes GPU0 writing the data 638. Alternatively, or additionally, Prefetcher0 may observe GPU1 sequentially reading data elements 640a, 640b, 640c, and 640d and therefore prefetch data elements 640e, 640f, 640g, and 640h on the assumption that GPU1 will read those data elements next.
In an implementation in which the prefetcher is provided a producer-consumer relationship between GPU0 and GPU1, Prefetcher0 may not need to observe the access pattern at operation (2); instead, at operation (3), Prefetcher0 may prefetch the data 640 based on the producer-consumer relationship when GPU0 writes the data 638.
In some embodiments, Prefetcher0 may not perform a prefetch operation unless it first verifies that there is free memory available in memory 318b (DRAM1) at the consumer device. In some embodiments, the prefetcher 324a may be implemented, for example, using combinational and/or sequential logic, one or more neural networks, and/or the like.
At operation (4), Prefetcher0 may push the prefetched data 640 to DRAM1 at the consumer device.
In some embodiments, GPU1 may become aware of the presence of the pushed data using various techniques in accordance with example embodiments of the disclosure. For example, in embodiments in which the prefetcher may allocate the memory for the pushed data, GPU1 may check a reserved memory area that may be allocated for the pushed data. As another example, GPU1 may become aware of the presence of the pushed data by checking page table data.
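A minimal sketch of the first technique, in which GPU1 polls a flag at the start of a reserved push area, is shown below; the header layout and busy-wait are illustrative assumptions (an interrupt, doorbell, or page table check might be used instead):

#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical header at the start of the reserved push area. */
struct push_header {
    _Atomic unsigned ready;  /* set nonzero by the prefetcher after a push */
    size_t length;           /* number of valid bytes following the header */
};

/* Spin until the prefetcher marks pushed data as ready, then return a
 * pointer to the data itself. */
const void *wait_for_pushed_data(struct push_header *hdr)
{
    while (atomic_load(&hdr->ready) == 0)
        ;  /* in practice, yield, sleep, or wait for an interrupt */
    return (const void *)(hdr + 1);
}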
Referring to FIG. 7, in an embodiment in which a host device allocates target memory at a consumer device, Prefetcher0 may send a request to the host device 302 (e.g., to a virtual memory manager at the host) to allocate target memory space in DRAM1 at the consumer device.
At operation (4), the host device 302 may allocate the requested memory space in DRAM1. In some embodiments, the CPU 312 of host device 302 may initiate a direct memory access (DMA) transfer of second data from Storage0 to DRAM1 which may be performed at operation (5). In other embodiments, Prefetcher0 may initiate and/or perform the data transfer (e.g., by prefetching the data and pushing it to DRAM1) after the host device 302 completes the memory allocation.
Referring to FIG. 8, in an embodiment in which the storage device allocates target memory at a consumer device itself, a portion of DRAM1 at the consumer device may be implemented as a reserved space 319b for data pushed by a prefetcher.
Referring to FIG. 9, Prefetcher0 may allocate target space in the reserved space 319b of DRAM1, for example, using a VMM at the prefetcher.
Prefetcher0 may then prefetch and copy additional data to the allocated target space in the reserved space 319b of DRAM1. At operation (4), Prefetcher0 may send a request to the host device 302 to update one or more page table mappings of the newly allocated space.
Referring to FIG. 10, an embodiment of a method for allocating target memory at a consumer device may begin with a prefetcher determining whether to request a memory allocation from a host device or to allocate the target memory itself. If the prefetcher requests an allocation from the host device, a VMM at the host may allocate the target memory at the consumer device, and the prefetcher may prefetch the data and push it to the allocated target memory.
If, however, the prefetcher decides to allocate the target memory itself, then at operation 1012, the prefetcher may initiate the allocation with a VMM at the prefetcher. At operation 1014, the VMM may allocate the target memory at the consumer device, for example, from a reserved memory area. At operation 1016, the prefetcher may prefetch the data and copy it to the target memory at the consumer device. At operation 1018, the prefetcher may request the host device to update a page table to reflect the newly allocated target memory at the consumer device.
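A condensed sketch of this allocation decision is shown below; the two VMM entry points and the page table update request are hypothetical helper functions standing in for the operations described above:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helpers; names are illustrative. */
extern void *host_vmm_alloc(unsigned consumer_id, size_t size);
extern void *prefetcher_vmm_alloc(unsigned consumer_id, size_t size); /* reserved area */
extern void  copy_to_target(unsigned consumer_id, void *target,
                            const void *data, size_t size);
extern void  request_page_table_update(unsigned consumer_id,
                                       void *target, size_t size);

void allocate_and_push(bool use_host_vmm, unsigned consumer_id,
                       const void *data, size_t size)
{
    void *target = use_host_vmm ? host_vmm_alloc(consumer_id, size)
                                : prefetcher_vmm_alloc(consumer_id, size);
    if (target == NULL)
        return;  /* no target space; retry later */
    copy_to_target(consumer_id, target, data, size);       /* cf. operation 1016 */
    request_page_table_update(consumer_id, target, size);  /* cf. operation 1018 */
}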
Referring to FIG. 11, an embodiment of a sequence of operations for transferring data in accordance with example embodiments of the disclosure may proceed as follows. In some embodiments, an application may first send one or more indications (e.g., hints) of producer-consumer relationships to Prefetcher0, which may store the indications.
At operation 1106, GPU0, at the producer device 106a, may begin writing first data to the Storage Device. At operation 1108, a CPU coherency engine may send a producer (e.g., initiator) GI ID for GPU0 to Prefetcher0, for example, using one or more cxl.mem fields such as the tag field. At operation 1110, Prefetcher0 may determine a stream in which to place the first data from GPU0 and store the first data via a multi-stream interface based, for example, on one or more of the stored indications and/or the determined placement. At operation 1112, GPU0 may notify Prefetcher0 that the write operation of the first data is complete, for example, by writing any data to a predetermined memory location.
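The notification of operation 1112 might, for example, reduce to a single store to a predetermined memory location that the prefetcher monitors; the doorbell location and value below are illustrative assumptions:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical doorbell write: any value written to the agreed-upon
 * location signals the prefetcher that the data write has completed. */
void notify_write_complete(_Atomic uint32_t *doorbell)
{
    atomic_store(doorbell, 1u);
}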
At operation 1114, GPU1 may begin a read operation of the first data from the Storage Device (which was written by GPU0). At operation 1116, the CPU coherency engine may send a consumer GI ID for GPU1 to Prefetcher0, for example, using one or more cxl.mem fields such as the tag field. At operation 1118, Prefetcher0 may send the first data from the Storage Device to GPU1. At operation 1120, Prefetcher0 may detect a runtime access pattern between GPU0 and GPU1 based on the write and read operations 1106 and 1114. In some embodiments, the Prefetcher may not need to detect this pattern, for example, if the CPU has sent one or more indications of a producer-consumer relationship between GPU0 and GPU1.
At operation 1122, Prefetcher0 may initiate a memory allocation for target memory at DRAM1 with the VMM. If the Prefetcher initiates a memory allocation by requesting a memory allocation from the host CPU, the VMM located at the host device may perform the allocation. If, however, Prefetcher0 performs the memory allocation itself, it may use the VMM located at the Storage Device. At operation 1124, the VMM (whether at the host CPU or Storage Device) may allocate target space in DRAM1. At operation 1126, Prefetcher0 may prefetch the data from the stream in which it was stored. At operation 1128, Prefetcher0 may push the prefetched data to DRAM1. At operation 1130, Prefetcher0 may request the host CPU to update a page table for the data pushed to DRAM1.
The embodiment illustrated in FIG. 12 may be used, for example, to implement any of the host functionality disclosed herein.
The embodiment illustrated in FIG. 13 may be used, for example, to implement any of the device functionality disclosed herein, for example, that of a compute device, a storage device, and/or a prefetcher.
Any of the functionality described herein, including any of the host functionality, device functionality, and/or the like described with respect to FIGS. 1-13, may be implemented with hardware, software, or any combination thereof, including, for example, combinational and/or sequential logic, one or more state machines, complex programmable logic devices (CPLDs), field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), neural processing units (NPUs), and/or the like, executing instructions stored in any type of memory.
Any of the storage devices disclosed herein may be implemented in any form factor such as 3.5 inch, 2.5 inch, 1.8 inch, M.2, Enterprise and Data Center SSD Form Factor (EDSFF), NF1, and/or the like, using any connector configuration such as Serial ATA (SATA), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), U.2, and/or the like. Any of the storage devices disclosed herein may be implemented entirely or partially with, and/or used in connection with, a server chassis, server rack, dataroom, datacenter, edge datacenter, mobile edge datacenter, and/or any combinations thereof.
The embodiment illustrated in FIG. 14 may be used, for example, to implement any of the methods for transferring data disclosed herein.
Some embodiments disclosed above have been described in the context of various implementation details, but the principles of this disclosure are not limited to these or any other specific details. For example, some functionality has been described as being implemented by certain components, but in other embodiments, the functionality may be distributed between different systems and components in different locations and having various user interfaces. Certain embodiments have been described as having specific processes, operations, etc., but these terms also encompass embodiments in which a specific process, operation, etc. may be implemented with multiple processes, operations, etc., or in which multiple processes, operations, etc. may be integrated into a single process, step, etc. A reference to a component or element may refer to only a portion of the component or element. For example, a reference to a block may refer to the entire block or one or more subblocks. The use of terms such as “first” and “second” in this disclosure and the claims may only be for purposes of distinguishing the things they modify and may not indicate any spatial or temporal order unless apparent otherwise from context. In some embodiments, a reference to a thing may refer to at least a portion of the thing, for example, “based on” may refer to “based at least in part on,” and/or the like. A reference to a first element may not imply the existence of a second element. The principles disclosed herein have independent utility and may be embodied individually, and not every embodiment may utilize every principle. However, the principles may also be embodied in various combinations, some of which may amplify the benefits of the individual principles in a synergistic manner.
The various details and embodiments described above may be combined to produce additional embodiments according to the inventive principles of this patent disclosure. Since the inventive principles of this patent disclosure may be modified in arrangement and detail without departing from the inventive concepts, such changes and modifications are considered to fall within the scope of the following claims.
This application claims priority to, and the benefit of, U.S. Provisional Patent Application Ser. No. 63/235,666 titled “Systems, Methods, and Devices For Transferring Data Between Interconnected Devices” filed Aug. 20, 2021, which is incorporated by reference.