Field
Aspects of the present disclosure relate generally to data communication, and more particularly, to providing improved communication performance associated with the use of a resource shared for data communication.
Background
The utilization of data communication, whether by wireline or wireless communication links, has become commonplace in today's society. Various devices, such as personal computers (PCs), laptop computers, tablet devices, personal digital assistants (PDAs), smart phones, etc., are commonly used every day by individuals and businesses to communicate all forms of data, including electronic documents, voice data, video data, multimedia data, and the like. The aforementioned devices may communicate the foregoing data via a number of different interfaces associated with a number of different data sources, whether external or internal to the device.
As an example, often multiple data sources provide data flows within a system, whereby a particular resource, such as a buffer, is shared among the multiple data flows. For example, a particular device may comprise a system on a chip (SoC) architecture wherein the SoC architecture includes a long term evolution (LTE) modem providing a source of data, a wireless wide area network (WWAN) interface providing a source of data, and a wireless local area network (WLAN) interface providing a source of data. In operation of the device, each of these data sources may provide data flows to be delivered to a same data sink (e.g., host processor, operating system, application, etc. of the device) via a same buffer.
The shared use of the aforementioned buffer by the data flows of the multiple data sources may result in data packet loss. For example, when the shared buffer is full, data packets are dropped for every data source whose data flow passes through that buffer. These packet losses degrade system performance and reduce end-to-end throughput. In particular, as the input data rate approaches the output data rate, the effective throughput decreases.
In accordance with one example, at an input/output ratio of 0.5, the effective throughput of the shared buffer is close to 1 (i.e., there are no packet losses). However, as the input/output ratio increases to 0.9 in this example, the effective throughput of the shared buffer falls to 0.95, implying a net packet loss rate of 5%.
One technique to address the foregoing problem may be to increase the buffer size. However, implementation costs increase significantly with the increase in the size of the buffers, particularly in SoC implementations. Moreover, the complexity of the implementation also increases when larger buffers are used. Likewise, adding additional buffers to the system significantly increases the complexity of the implementation. Accordingly, increasing the buffer size and/or adding additional buffers often does not provide a viable solution to the shared resource data packet loss problem.
In one aspect of the disclosure, a method for data communication includes monitoring, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate. The method of embodiments also includes passing, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows, and dropping, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
In an additional aspect of the disclosure, an apparatus for data communication includes means for monitoring, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate. The apparatus of embodiments also includes means for passing, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows, and means for dropping, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
In an additional aspect of the disclosure, a non-transitory computer-readable medium having program code recorded thereon is disclosed. The program code according to some aspects includes code to cause a computer to monitor, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate. The program code of embodiments also includes code to cause the computer to pass, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows, and to drop, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
In an additional aspect of the disclosure, an apparatus for data communication includes at least one processor and a memory coupled to the at least one processor, wherein the at least one processor is configured to monitor, by a packet gate coupled to an input of a shared resource, a number of chunks of a data packet received at the packet gate. The at least one processor of embodiments is also configured to pass, by the packet gate, the chunks on to the shared resource until a specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks, wherein the shared resource comprises a resource shared by a plurality of data flows, and wherein the data packet comprises a data packet of a first data flow of the plurality of data flows, and to drop, by the packet gate, all chunks of the data packet after the specified number of chunks of the data packet are passed on to provide recovery of the data packet from the passed chunks.
The foregoing has outlined rather broadly the features and technical advantages of examples according to the disclosure in order that the detailed description that follows may be better understood. Additional features and advantages will be described hereinafter. The conception and specific examples disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Such equivalent constructions do not depart from the scope of the appended claims. Characteristics of the concepts disclosed herein, both their organization and method of operation, together with associated advantages will be better understood from the following description when considered in connection with the accompanying figures. Each of the figures is provided for the purpose of illustration and description, and not as a definition of the limits of the claims.
A further understanding of the nature and advantages of the present disclosure may be realized by reference to the following drawings. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
The detailed description set forth below, in connection with the appended drawings, is intended as a description of various possible configurations and is not intended to limit the scope of the disclosure. Rather, the detailed description includes specific details for the purpose of providing a thorough understanding of the inventive subject matter. It will be apparent to those skilled in the art that these specific details are not required in every case and that, in some instances, well-known structures and components are shown in block diagram form for clarity of presentation.
This disclosure relates generally to providing or participating in data communications utilizing a shared resource, wherein communication performance is improved with respect to the shared resource utilizing a packet gate operable in accordance with the concepts herein. For example, a packet gate is utilized with respect to a shared resource to improve the effective throughput and/or reduce packet losses with respect to a plurality of data flows sharing the resource.
The configuration of system 100 illustrated in
It should be appreciated that, although three data sources and three data sinks are shown in the illustrated embodiment of system 100, different numbers of data sources and/or data sinks may be provided in accordance with the concepts herein. Moreover, there is no requirement that the number of data sources and the number of data sinks be the same. It should also be appreciated that the particular configuration of data sources and/or data sinks may differ from that of the illustrated embodiment. For example, the data sources may include wireline and/or wireless data sources, multiple instances of a same type or configuration of data source, etc. Similarly, the data sinks may include multiple instances of the same type or configuration of data sinks, a number of differently configured data sinks, a single data sink, etc.
Irrespective of the particular configuration of system 100, one or more resources may be shared with respect to data flows between the data sources and one or more data sinks, whereby the sharing of the resource is subject to data packet losses. For example, as shown in the further detail provided in
In operation according to the embodiment illustrated in
It should be appreciated that, although the embodiment illustrated in
As shown in the embodiment of
Graph 200 of
Embodiments implemented in accordance with concepts of the disclosure improve the effective throughput of a shared resource, such as buffer 131, and reduce packet losses associated with its shared use without requiring an increase with respect to attributes of the shared resource, such as without requiring increased buffer size.
The embodiment of system 300 shown in
The illustrated embodiment of system 300 includes encoders 310-1 through 310-3 disposed in the data paths between each data source and the switching and routing fabric coupling the data sources to the shared resource, packet gate 320 disposed between the switching and routing fabric and the input to the shared resource, and decoder 331 disposed between the output of the shared resource and the data packet destination. It should be appreciated that, although the illustrated embodiment of system 300 shows packet gate 320 as being separate from data sink 330, packet gates implemented according to the concepts herein may be provided in configurations different than that shown, such as to be fully or partially integrated into a data sink. Similarly, although the shared resource (e.g., buffer 131) and corresponding decoder 331 are shown in the illustrated embodiment of system 300 as being integrated with data sink 330, this functionality may be provided in configurations different than that shown, such as to be fully or partially separated from a data sink. Also, although a single encoder or decoder is shown with respect to a particular data path, it should be appreciated that embodiments may implement different numbers of encoders and/or decoders, such as to provide a plurality of encoders/decoders operable to perform different coding techniques. Additionally or alternatively, a different number of packet gates may be provided with respect to a data sink than shown, such as to provide a plurality of packet gates where a plurality of shared resources are implemented with respect to a data sink.
Encoders 310-1 through 310-3 provide data redundancy encoding, such as through the use of forward error correction (FEC) encoding, with respect to the data of the respective flows. For example, encoders 310-1 through 310-3 may implement one or more erasure codes (e.g., tornado codes, low-density parity-check codes, Reed-Solomon coding, fountain codes, RAPTOR codes, RAPTORQ codes, and maximum distance separable (MDS) codes) whereby source data is broken into fragments (e.g., k source fragments for each source object such as data packets or other blocks of source data) and additional repair fragments (e.g., r repair fragments for each source object) are generated to provide a total number of fragments (e.g., n=k+r) greater than the source fragments. Accordingly, encoders 310-1 through 310-3 are shown as including data packet disassembly blocks, as may be operable to break the source data into the aforementioned fragments, and encoder blocks, as may be operable to perform the aforementioned data coding.
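By way of illustration only, the following minimal Python sketch shows the disassembly and repair-generation steps described above. It substitutes a single XOR parity chunk (r = 1) for a full erasure code such as RAPTORQ, and the function names are assumptions introduced here rather than elements of the disclosure:

```python
# Minimal illustrative sketch of the encoder's packet-disassembly and
# repair-generation steps. A single XOR parity chunk (r = 1) stands in for a
# full FEC code such as RAPTORQ; with one parity chunk, any one lost chunk
# can be recovered from the remaining k.
from typing import List


def disassemble(packet: bytes, k: int) -> List[bytes]:
    """Break a packet into k equal-size source chunks, zero-padding the last."""
    chunk_size = -(-len(packet) // k)  # ceiling division
    padded = packet.ljust(k * chunk_size, b"\x00")
    return [padded[i * chunk_size:(i + 1) * chunk_size] for i in range(k)]


def xor_parity(chunks: List[bytes]) -> bytes:
    """Generate one repair chunk as the bytewise XOR of all source chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)


def encode_packet(packet: bytes, k: int) -> List[bytes]:
    """Return k source chunks plus r = 1 repair chunk (n = k + 1 total)."""
    source = disassemble(packet, k)
    return source + [xor_parity(source)]
```

With this toy code, any k of the resulting k + 1 chunks suffice to rebuild the packet; a practical encoder would generate more repair chunks (r > 1) and use a stronger code, but the structure of disassembly followed by repair generation is the same.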
Correspondingly, decoder 331 provides decoding of the source data from the encoded data. For example, where FEC encoding is utilized as described above, decoder 331 may operate to recover the source object using any combination of k number of fragments (i.e., any combination of source fragments and/or repair fragments totaling k in number), or possibly k+x where x is some small integer value (e.g., 1 or 2) where a non-MDS code is used. Accordingly, decoder 331 is shown as including a decoder block, as may be operable to perform the aforementioned data decoding, and a packet assembly block, as may be operable to reassemble source objects from the decoded fragments.
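Continuing the single-parity illustration above (again, an assumption-laden sketch rather than the disclosure's decoder), a matching decoder can rebuild the packet from any k of the k + 1 chunks; the original packet length is assumed here to be carried in the chunk headers:

```python
# Illustrative decoder counterpart to the single-XOR-parity encoder sketch:
# any k of the k + 1 chunks are enough to rebuild the original packet.
from typing import Dict, Optional


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def decode_packet(received: Dict[int, bytes], k: int,
                  packet_len: int) -> Optional[bytes]:
    """Reassemble a packet from chunks indexed 0..k-1 (source) and k (parity).

    `received` maps chunk index -> chunk payload; `packet_len` is the original
    packet length (assumed to be carried in the chunk headers).
    """
    missing = [i for i in range(k) if i not in received]
    if len(missing) > 1 or (missing and k not in received):
        return None  # not enough chunks to recover
    if missing:
        # Recover the single missing source chunk by XOR-ing all other chunks.
        chunk_size = len(received[k])
        rebuilt = bytes(chunk_size)
        for idx, chunk in received.items():
            if idx != missing[0]:
                rebuilt = _xor(rebuilt, chunk)
        received[missing[0]] = rebuilt
    packet = b"".join(received[i] for i in range(k))
    return packet[:packet_len]  # strip the encoder's zero padding
```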
Use of the aforementioned encoding facilitates a high probability of recovery of the data from some specified portion of the total number of encoded fragments, wherein the specified portion of encoded fragments is configured to provide data recovery to a certain probability of success. For example, perfect recovery codes, such as MDS codes, facilitate recovery of the source data using any combination of k fragments (i.e., any combination of a number of source fragments and/or a number of repair fragments totaling k) to a very high probability (e.g., 100% probability of recovery). Similarly, some near perfect recovery codes, such as RAPTOR codes and RAPTORQ codes, facilitate recovery of the source data using any combination of k+x fragments (i.e., any combination of a number of source fragments and/or a number of repair fragments totaling k+x) to a high probability (e.g., 99.99% probability of recovery where x=1, 99.999% probability of recovery where x=2, etc.). In providing the foregoing data encoding, embodiments herein utilize RAPTORQ encoding in light of RAPTORQ being a near perfect erasure recovery code that provides a high probability of data recovery with very small encoding and decoding complexity, and thus is particularly well suited for implementation in some system configurations, such as SoC systems.
In operation of system 300 of the illustrated embodiment, data packets from a data source go through a “Packet Disassembly” process of a respective encoder 310 where the packets are broken up into smaller fixed size chunks suitable for transmission over the switching and routing fabric. FEC encoding is then applied by the respective encoder 310 to the foregoing chunks (e.g., using the aforementioned RAPTORQ encoding), whereby the encoding technique utilized allows recovery of data with some loss of data chunks in transmission. The encoded chunks are then sent into switching and routing fabric 120 to be routed to an appropriate data sink, such as data sink 330 (e.g., host processor, operating system, application, etc.).
Packet gate 320 of the illustrated embodiment, provided between the egress of the switching and routing fabric and an input of the shared resource, operates to keep track of the number of chunks of a packet that have been received. When logic of packet gate 320 determines that a specified number of chunks of a packet have been received that are sufficient for the decoder to recover the packet with a high probability (e.g., k chunks or k+x chunks, a known number established by the encoding technique implemented), the packet gate drops all subsequent chunks of that packet. The chunks that are not dropped by the packet gate are passed through buffer 131 (i.e., the shared resource) for processing downstream by the respective decoder 331. Accordingly, at the output of the shared resource, the received chunks are processed by a “Packet Assembly” process of decoder 331, and the original packet is reassembled by decoder 331. The packet is then passed to data packet destination 132 of data sink 330.
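The gating rule lends itself to a very small amount of state. The sketch below is one possible Python rendering, with the class name, identifiers, and threshold handling chosen here for illustration and not taken from the disclosure:

```python
# Minimal sketch of the packet gate: count the chunks of each packet and stop
# forwarding chunks to the shared buffer once enough have been seen for the
# decoder to recover the packet with high probability.
from collections import defaultdict
from typing import Dict


class PacketGate:
    def __init__(self, recovery_threshold: int):
        # e.g., k for an MDS code, or k + x for a near-perfect code such as RAPTORQ
        self.recovery_threshold = recovery_threshold
        self.passed: Dict[int, int] = defaultdict(int)  # packet id -> chunks passed

    def admit(self, packet_id: int) -> bool:
        """Return True to forward the chunk to the shared resource, False to drop it."""
        if self.passed[packet_id] >= self.recovery_threshold:
            return False
        self.passed[packet_id] += 1
        return True

    def release(self, packet_id: int) -> None:
        """Discard tracking state, e.g., once the last chunk of a packet has arrived."""
        self.passed.pop(packet_id, None)


# Example: with a threshold of 8 chunks, the first 8 chunks of packet 1 are
# admitted and any further chunks of that packet are dropped.
gate = PacketGate(recovery_threshold=8)
decisions = [gate.admit(packet_id=1) for _ in range(10)]  # [True]*8 + [False]*2
```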
At block 401 of the illustrated flow, a data packet to be provided to data sink 330 is received from a data source by encoder 310. An exemplary format of a received data packet is shown in
Logic of encoder 310 operates to disassemble the received data packet into chunks (e.g., k data packet portions of equal size) at block 402. An exemplary format of the resulting chunks is shown in
The chunks of source data are provided to coding logic of encoder 310 for encoding the chunks using a redundant data coding technique (e.g., FEC encoding, such as RAPTORQ) at block 403 of the illustrated embodiment. For example, the coding logic may operate to generate a number of repair chunks (e.g., r) providing redundant data from which the data packet can be recovered from any combination of a predetermined number (e.g., k or k+x) of source chunks and repair chunks. It should be appreciated that, in operation according to embodiments, the chunk identification field may exceed the chunk count field when the chunk contains repair symbols generated by the encoding technique.
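Purely as a hypothetical illustration of the fields mentioned above (the actual layout is defined by the referenced figure, which is not reproduced here), a chunk header might carry a packet identifier, a chunk identification field, and a chunk count field:

```python
# Hypothetical chunk header layout for illustration; the field names and
# semantics are assumptions, not the layout defined by the disclosure's figures.
from dataclasses import dataclass


@dataclass
class ChunkHeader:
    packet_id: int    # assumed: identifies the source packet the chunk belongs to
    chunk_id: int     # chunk identification; exceeds chunk_count for repair chunks
    chunk_count: int  # number of source chunks (k) the packet was disassembled into


# Example: the second repair chunk of a packet split into k = 8 source chunks,
# using 0-based chunk identifiers so that repair chunks carry identifiers >= k.
header = ChunkHeader(packet_id=42, chunk_id=9, chunk_count=8)
```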
At block 404 of flow 400 shown in
At block 405 of the illustrated flow, a forwarded encoded chunk to be provided to data sink 330 is received from encoder 310 by packet gate 320. In providing intelligent gating operation according to concepts herein, logic of packet gate 320 operates to track the number of received encoded chunks for each packet, as shown at block 406 of the illustrated embodiment. For example, packet gate 320 may track the number of received encoded chunks using a database or table as illustrated in
Packet gate 320 of embodiments operates to pass encoded chunks on to the shared resource (e.g., buffer 131 of the embodiment illustrated in
It should be appreciated that, although the illustrated flow of
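As a concrete trace of the tracking and pass/drop behavior described above, with values chosen arbitrarily for illustration, suppose the recovery threshold is 4 chunks and chunks belonging to two packets arrive interleaved at the gate:

```python
# Hypothetical trace: recovery threshold of 4 chunks, chunks of packets 7 and 9
# arriving interleaved. "pass" means the chunk is forwarded to the shared buffer.
threshold = 4
table = {}  # packet id -> chunks passed so far
for packet_id in [7, 7, 9, 7, 9, 7, 7, 9]:  # owner of each arriving chunk, in order
    if table.get(packet_id, 0) < threshold:
        table[packet_id] = table.get(packet_id, 0) + 1
        decision = "pass"
    else:
        decision = "drop"
    print(packet_id, decision, dict(table))
# Packet 7's fifth chunk (the last 7 above) is dropped; packet 9 only ever
# reaches 3 chunks here, so all of its chunks are passed on.
```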
The encoded chunks passed to the shared resource of the embodiment illustrated in
The chunks of encoded data, as may comprise source chunks and/or repair chunks, provided through the shared resource are provided to decoding logic of decoder 331 for decoding the chunks using a redundant data coding technique (e.g., FEC encoding, such as RAPTORQ) at block 411 of the illustrated embodiment. For example, the decoding logic may operate to regenerate a packet from some portion of source chunks (e.g., some number of the k source chunks) and/or some number of repair chunks (e.g., some number of the r repair chunks), wherein the total number of encoded chunks (e.g., k or k+x) used to regenerate the data packet is determined by the particular coding technique utilized.
Thereafter, at block 412 of the illustrated embodiment, the recovered packets are forwarded by decoder 331 to data packet destination 132 for normal operation of the data packet destination. It should be appreciated that, although the illustrated embodiment uses packet gate 320 and the associated encoding and decoding to reduce data packet losses with respect to buffer 131, which may be shared among a number of data flows directed to data packet destination 132, the data processing performed by data packet destination 132 may be carried out without modification to accommodate the use of the packet gate. That is, operation of the packet gate of embodiments is transparent with respect to the data sources and the data packet destination.
It should be appreciated that, although the operations of the aforementioned encoder, packet gate, and decoder are shown in flow 400 of the illustrated embodiment as being performed serially, some or all such operations, or portions thereof, may be performed in parallel. For example, the decoder may be receiving encoded chunks forwarded by the packet gate while the packet gate continues to receive forwarded encoded chunks and perform analysis with respect thereto. Similarly, the packet gate may perform operations to drop additional received encoded chunks while the decoder continues to receive previously forwarded encoded chunks. Accordingly, it can be appreciated that the operations shown in the illustrated embodiment of flow 400 may be performed in an order different than that shown.
It should also be appreciated that multiple instances of some or all of the operations of flow 400 may be performed, whether in parallel, serially, or independently. For example, the operations illustrated as being performed by encoder 310 (blocks 401-404) may be performed in parallel by a plurality of encoders (e.g., any or all of encoders 310-1 through 310-3) associated with data sources providing data to data sink 330. Additionally or alternatively, multiple instances of the operations of flow 400 may be performed in parallel, such as to provide reduction in data packet losses for a plurality of shared resources using a packet gate in accordance with the concepts herein.
In accordance with the foregoing operation of flow 400, performance improvements are gained by adding extra overhead using a redundant data encoder (e.g., FEC encoder). For example, using a near perfect coding technique with low encoding and decoding complexity, such as RAPTORQ, the system can tolerate larger data losses and still perform very well, such as to maintain an effective throughput of 1 (i.e., no packet loss) even when the ratio of input to output rate of the shared resource reaches 1.
In the aforementioned use of redundant data coding with a packet gate implementation, it can be appreciated that embodiments introduce an additional design parameter, namely the number of repair symbols generated by the redundant data encoder. This parameter, together with the shared resource attributes (e.g., buffer size) and the input and output data rates, defines the performance of the system. Thus, although it may seem counter-intuitive that performance improvements can be gained by adding extra overhead using a redundant encoder, systems implementing packet gates in accordance with the concepts herein can tolerate larger data losses and still perform very well.
The following analysis illustrates the gains that can be achieved by implementations in accordance with the concepts herein. In analyzing the performance of a system implementation in accordance with embodiments herein, the system may be modeled as a simple M/M/1/K queue with an input data rate of λ, as shown in
where P_K denotes the probability that an arriving chunk is dropped because the queue (i.e., the shared resource of capacity K) is full, and δ denotes the fractional overhead of repair data added by the redundant data encoder. If δ is chosen such that (1+δ)(1−P_K) = 1, then the effective packet loss rate after the decoder becomes 0.
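As a worked illustration of this relationship (a sketch under the stated M/M/1/K assumptions, with the output rate of the shared resource denoted μ and the buffer capacity K as model parameters assumed here rather than fixed by the disclosure), the loss probability and the overhead needed to offset it can be computed as follows:

```python
# Sketch: blocking probability P_K of an M/M/1/K queue and the encoding
# overhead delta satisfying (1 + delta) * (1 - P_K) = 1, so that enough chunks
# survive the shared buffer for the decoder to recover every packet.
# lam, mu, and K (input rate, output rate, queue capacity) are model parameters.

def blocking_probability(lam: float, mu: float, K: int) -> float:
    """Standard M/M/1/K loss probability with utilization rho = lam / mu."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))


def required_overhead(lam: float, mu: float, K: int) -> float:
    """Smallest delta with (1 + delta) * (1 - P_K) = 1, i.e. delta = P_K / (1 - P_K)."""
    p_k = blocking_probability(lam, mu, K)
    return p_k / (1.0 - p_k)


# Example: input/output ratio of 0.9 and an assumed queue capacity of K = 10 chunks.
print(blocking_probability(0.9, 1.0, 10))  # ~0.051
print(required_overhead(0.9, 1.0, 10))     # ~0.054, i.e. ~5.4% repair chunks
```

For instance, with K = 10 the model's loss rate at an input/output ratio of 0.9 is roughly 5%, broadly in line with the background example, and an overhead of about 5.4% repair chunks offsets it. A more careful treatment would account for the overhead itself raising the input rate toward λ(1+δ), which can be handled by iterating the computation.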
The graphs of
Accordingly, embodiments herein operate to select an amount of data encoding overhead to utilize based upon the incoming data rate and the rate of data output by the shared resource. Embodiments may thus dynamically select an amount of data encoding overhead to implement, such as to implement no or little data encoding overhead when the shared resource is not near its capacity and to increase the data encoding overhead as the shared resource approaches its capacity limit (e.g., buffer or channel throughput limit).
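One way such a dynamic selection might be sketched (an illustrative policy with an assumed activation threshold, not a prescribed algorithm) is to re-evaluate the measured input/output ratio periodically and scale the overhead accordingly:

```python
# Illustrative dynamic-overhead policy (assumed threshold and formula): use no
# repair overhead while the shared resource is lightly loaded, then add just
# enough to offset the expected chunk loss as the input/output ratio nears 1.

def select_overhead(input_rate: float, output_rate: float, K: int,
                    activation_ratio: float = 0.5) -> float:
    rho = input_rate / output_rate
    if rho < activation_ratio:
        return 0.0  # buffer rarely fills; skip encoding overhead entirely
    # M/M/1/K loss probability, as in the analysis sketch above.
    p_k = 1.0 / (K + 1) if rho == 1.0 else (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))
    return p_k / (1.0 - p_k)


# Example: at a ratio of 0.4 no overhead is used; at 0.95 the policy returns
# roughly 7.5% repair overhead for an assumed capacity of K = 10.
print(select_overhead(0.4, 1.0, 10))   # 0.0
print(select_overhead(0.95, 1.0, 10))  # ~0.075
```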
Although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present disclosure.
Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
The functional blocks and modules in
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Skilled artisans will also readily recognize that the order or combination of components, methods, or interactions that are described herein are merely examples and that the components, methods, or interactions of the various aspects of the present disclosure may be combined or performed in ways other than those illustrated and described herein.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of tangible storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. Computer-readable storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, a connection may be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, or digital subscriber line (DSL), then the coaxial cable, fiber optic cable, twisted pair, or DSL, are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
As used herein, including in the claims, the term “and/or,” when used in a list of two or more items, means that any one of the listed items can be employed by itself, or any combination of two or more of the listed items can be employed. For example, if a composition is described as containing components A, B, and/or C, the composition can contain A alone; B alone; C alone; A and B in combination; A and C in combination; B and C in combination; or A, B, and C in combination. Also, as used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (i.e., A and B and C) or any of these in any combination thereof.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use embodiments in accordance with concepts of the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.