Generally, a wireless network system has two communication paths—an uplink path and a downlink path. When data is transmitted from a base station (e.g., a cell site) to a computing device along the downlink path, packets are received by the computing device and then processed in accordance with a protocol stack. The term “protocol stack” refers to the software implementation of a suite of communication protocols by the computing device. Individual protocols within a suite may be designed with a single purpose in mind; however, because each protocol usually communicates with at least one other protocol, the protocols are normally imagined as layers in a stack. In the protocol stack, the lowest layer is responsible for interacting with the underlying hardware while each layer further up in the stack adds additional capabilities.
One example of a protocol stack is the Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (E-UTRA) protocol stack that was developed for Long Term Evolution (LTE). As shown in
Various embodiments concern approaches to nesting sub-queues within queues to permit more effective management of elements queued for execution.
Introduced here are approaches in which queues can be branched into one or more sub-queues for more effective management of information units and tasks. Assume, for example, that a queue manager determines that a new queuing element should be executed before an existing queuing element that was previously populated in an entry of a primary buffer. In such a scenario, the queue manager may store the existing queuing element to a storage space and then insert a special queuing element in the entry that, when executed, routes the processor to a secondary buffer. Then, the queue manager may populate the new queuing element and the existing queuing element into the secondary buffer in such a manner that the processor will execute the new queuing element before executing the existing queuing element.
Sub-queues could also be used to expand the available capacity of a primary buffer into which queuing elements are populated for execution by a processor. For example, in some embodiments, the queue manager is configured to monitor available capacity of the primary buffer. Upon determining that the available capacity of the primary buffer has fallen beneath a threshold, the queue manager may insert a special queuing element into the primary buffer that, when executed, routes the processor to a secondary buffer in which queuing elements can be populated.
Various features of the technologies described herein will become more apparent to those skilled in the art from a study of the Detailed Description in conjunction with the drawings. Embodiments are illustrated by way of example and not limitation in the drawings, in which like references may indicate similar elements. While the drawings depict various embodiments for the purpose of illustration, those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technologies. Accordingly, while specific embodiments are shown in the drawings, the technology is amenable to various modifications.
In Media Access Control (MAC) layers, like those in the protocol suites for 4G and 5G wireless communication standards, a single information unit or a single processing task (or simply “task”) will frequently have to branch into multiple sub-units or sub-tasks. For instance, this may occur when segmentation or desegmentation is performed. Several software-implemented approaches have been developed in an attempt to process sub-units and sub-tasks more efficiently. However, there are notable downsides to these software-implemented approaches. For example, these software-implemented approaches consume a relatively high amount of power due to the additional computation that is involved and require more data buffers (or simply “buffers”) in which to temporarily store the sub-units or sub-tasks. Moreover, these software-implemented approaches tend to be quite slow, and therefore may introduce significant delays.
To accelerate the processing of sub-units and sub-tasks, more effective control of the underlying hardware is needed. Introduced here, therefore, are approaches in which queues can be branched into one or more sub-queues for more effective management of information units and tasks. For the purpose of illustration, embodiments may be described in the context of queuing elements that are loaded into entries in queues for processing. The terms “queuing element” and “element,” as used herein, may refer to a sub-task, sub-unit, or any other piece of information that needs to be processed.
As further discussed below, the present disclosure is directed to hardware-implemented approaches for branching a main queue (or simply “queue”) into one or more sub-queues into which queuing elements can be populated. These approaches may be useful in designing acceleration engines that are configured for segmentation to, or desegmentation from, the Radio Link Control (RLC) layer, as well as in implementing segmentation and desegmentation protocols. As an example, a single RLC protocol data unit (PDU) may be split into multiple RLC PDU segments that are populated into a sub-queue, or multiple RLC PDU segments in a sub-queue may be concatenated into a single RLC PDU.
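As a rough illustration of this segmentation use case, the following C sketch splits a PDU into bounded-size segments, each of which would then be populated into a sub-queue as its own queuing element. The structure and function names, the segment bound, and the fixed segment count are assumptions made for the example rather than details of any embodiment.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_SEGMENTS 16u

typedef struct {
    const uint8_t *data;    /* start of this segment within the PDU */
    size_t         length;  /* segment length in bytes              */
} rlc_segment_t;

/* Split one RLC PDU into segments of at most max_seg bytes (max_seg must
   be nonzero); each segment would then be populated into a sub-queue as
   its own queuing element. Returns the number of segments produced. */
size_t segment_pdu(const uint8_t *pdu, size_t pdu_len, size_t max_seg,
                   rlc_segment_t out[MAX_SEGMENTS])
{
    size_t n = 0;
    for (size_t off = 0; off < pdu_len && n < MAX_SEGMENTS; off += max_seg) {
        out[n].data   = pdu + off;
        out[n].length = (pdu_len - off < max_seg) ? pdu_len - off : max_seg;
        n++;
    }
    return n;
}
```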
Embodiments may be described with reference to particular types of network technologies, protocol stacks, processes, etc. However, those skilled in the art will recognize that these features are similarly applicable to other types of network technologies, protocol stacks, etc. For example, while embodiments may be described in the context of the LTE protocol stack, features of these embodiments could be extended to protocol stacks developed for other 4G/5G network technologies. As another example, while the approaches described herein may be described in the context of preventing overflow, features of these approaches could also be used to ensure that a certain action (e.g., retransmission of packets) occurs by a certain point in an already scheduled queue.
Aspects of the technology can be embodied using special-purpose hardware (e.g., circuitry), programmable circuitry programmed with software and/or firmware, or a combination of special-purpose hardware and programmable circuitry. Accordingly, embodiments may include a machine-readable medium with instructions that, when executed, cause a computing device to perform a process in which a special queuing element is inserted into a queue; when read, the special queuing element points to control information for a sub-queue. Entries can be populated in the sub-queue for processing. Moreover, the control information may include a return pointer that indicates where to return in the queue after all entries in the sub-queue have been processed.
References in this description to “an embodiment” or “one embodiment” means that the particular feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another.
Unless the context clearly requires otherwise, the words “comprise” and “comprising” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”). The term “based on” is also to be construed in an inclusive sense rather than an exclusive or exhaustive sense. Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.”
The terms “connected,” “coupled,” or any variant thereof are intended to include any connection or coupling between two or more elements, either direct or indirect. The connection/coupling can be physical, logical, or a combination thereof. For example, objects may be electrically or communicatively coupled to one another despite not sharing a physical connection.
When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list.
The sequences of steps performed in any of the processes described here are exemplary. However, unless contrary to physical possibility, the steps may be performed in various sequences and combinations. For example, steps could be added to, or removed from, the processes described here. Similarly, steps could be replaced or reordered. Thus, descriptions of any processes are intended to be open-ended.
Overview of Insertion Scheme
The primary buffer 202 can be any region of physical memory storage in which data can be temporarily stored. For example, the primary buffer 202 may be a circular buffer (also referred to as a “cyclic buffer” or “ring buffer”), which is a data structure that uses a buffer of fixed size as if it were connected end-to-end. A circular buffer is a bounded queue with separate indices (write_pointer, read_pointer) for inserting and removing data. Upon reaching the end of the buffer, the indices simply wrap around to the beginning, as if the memory were connected end-to-end. Such a data structure lends itself to buffering streams of data since queuing elements do not need to be shuffled when one is consumed; after reading a queuing element in an entry, the read pointer can simply progress to the next entry. In contrast, if the primary buffer 202 were a non-circular buffer, then it would be necessary to shift all remaining queuing elements whenever one is consumed. In protocol stacks suitable for 4G/5G network technologies, the primary buffer 202 may be used as a queue for incoming traffic following flow classifications, or as a queue for quality of service (QoS) traffic after QoS classifications.
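For illustration, a minimal software sketch of such a circular buffer follows; the fixed depth and the structure and function names are assumptions made for the example, not details of any particular embodiment.

```c
#include <stdint.h>
#include <stdbool.h>

#define QUEUE_DEPTH 8u  /* fixed size; a power of two keeps the wrap-around
                           correct for free-running 32-bit indices */

typedef struct {
    uint64_t entries[QUEUE_DEPTH];  /* one fixed-size queuing element per entry */
    uint32_t write_pointer;         /* index of the next entry to fill          */
    uint32_t read_pointer;          /* index of the next entry to consume       */
} circular_buffer_t;

/* Enqueue refuses to overwrite when the buffer is full (overflow). */
static bool cb_enqueue(circular_buffer_t *q, uint64_t element)
{
    if (q->write_pointer - q->read_pointer == QUEUE_DEPTH)
        return false;
    q->entries[q->write_pointer % QUEUE_DEPTH] = element;
    q->write_pointer++;  /* the index wraps modulo the depth when used */
    return true;
}

/* Dequeue merely advances the read pointer; no elements are shuffled. */
static bool cb_dequeue(circular_buffer_t *q, uint64_t *element)
{
    if (q->read_pointer == q->write_pointer)
        return false;    /* underflow: the queue is empty */
    *element = q->entries[q->read_pointer % QUEUE_DEPTH];
    q->read_pointer++;
    return true;
}
```

Because consuming an element merely advances read_pointer, several contiguous elements can likewise be consumed at once by advancing read_pointer by more than one entry.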
As shown in
The queuing elements may be stored in a contiguous memory space such that multiple queuing elements can be dequeued at once. For example, multiple queuing elements may be dequeued at once if execution of one queuing element depends on the outcome of execution of the preceding queuing element. Storing queuing elements in a contiguous memory space (e.g., a circular buffer) improves operational efficiency of the bus to which the queue manager is communicatively connected and avoids excessive delays due to latencies in accessing system data. This also makes control schemes easier for hardware to implement. However, one issue with conventional control schemes is that the hardware-implemented buffers exist in a contiguous memory space that makes it difficult to insert anything between adjacent queuing elements. If a new queuing element needs to be added to a primary buffer at a certain location in the queue (e.g., between a pair of existing queuing elements), there is no straightforward way to do this effectively with conventional control schemes.
Introduced here is an insertion scheme that addresses this issue. Assume that the primary buffer 202 includes a sequence of queuing elements that are stored in contiguous memory space allocated for the queue. Moreover, assume that each entry is of identical size and consistent format. To implement the insertion scheme, a queue manager can insert a special queuing element in which a field is defined to be an “insertion indicator,” which is actually a pointer to a storage space where control information for a sub-queue may be stored.
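One way to picture the special queuing element and the sub-queue control information it points to, including the return pointer mentioned above, is the following sketch. The field names, the type flag, and the single level of nesting shown in the consumer loop are illustrative assumptions; the disclosure does not prescribe a particular encoding.

```c
#include <stdint.h>

struct queuing_element;

/* Control information for a sub-queue, stored outside the parent queue. */
typedef struct subqueue_ctrl {
    struct queuing_element *base;        /* first entry of the sub-queue       */
    uint32_t                num_entries; /* entries populated for processing   */
    struct queuing_element *return_ptr;  /* entry in the parent queue at which
                                            to resume once every sub-queue
                                            entry has been processed           */
} subqueue_ctrl_t;

/* Every entry is of identical size and consistent format; one field flags
   a special element carrying an insertion indicator. */
typedef struct queuing_element {
    uint8_t is_special;  /* 0 = regular element, 1 = insertion indicator */
    union {
        uint64_t         payload;              /* regular element           */
        subqueue_ctrl_t *insertion_indicator;  /* -> sub-queue control info */
    } u;
} queuing_element_t;

void execute(uint64_t payload);  /* element execution, provided elsewhere */

/* Consumer-side view: on reading a special element, divert to the sub-queue,
   then resume at the return pointer. One level of nesting is shown; nested
   sub-queues (e.g., tertiary buffers) would be handled the same way. */
void process(queuing_element_t *cursor, queuing_element_t *end)
{
    while (cursor != end) {
        if (cursor->is_special) {
            subqueue_ctrl_t *ctrl = cursor->u.insertion_indicator;
            for (uint32_t i = 0; i < ctrl->num_entries; i++)
                execute(ctrl->base[i].u.payload);
            cursor = ctrl->return_ptr;  /* jump back into the parent queue */
        } else {
            execute(cursor->u.payload);
            cursor++;
        }
    }
}
```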
In
Insertion indicators may also be used to expand the primary buffer 202 if occupancy of the primary buffer 202 exceeds a threshold (i.e., if its available capacity falls beneath a threshold). For instance, insertion indicators may be used to ensure that the primary buffer 202 does not run out of its allocated memory space. As an example, if the queue manager determines that the write pointer is in danger of overwriting an existing queuing element in the primary buffer 202, then the queue manager can delete the most recently populated queuing element from the primary buffer 202, insert an insertion indicator to expand the amount of available memory space, and then cause the deleted queuing element to be written into the secondary buffer that is pointed to by the insertion indicator.
Before queuing elements are inserted into a sub-queue (e.g., one of the secondary buffers 304a-b or tertiary buffers 306a-c), the queue manager may initially organize those queuing elements. For example, assume that the queue manager is interested in adding queuing element(s) to the primary buffer 302 in a desired location. In the primary buffer 302 at the location where those queuing element(s) are to be added, two different situations can occur. First, a secondary buffer may replace a queuing element in the primary buffer 302 at the location. In this situation, the queue manager changes the queuing element in the primary buffer 302 to a special queuing element that includes an insertion indicator, which points to the control information of the secondary buffer. Second, a secondary buffer may be inserted before a regular queuing element in the primary buffer 302. In this situation, the regular queuing element is saved to a storage space (e.g., a register) and then replaced with a special queuing element that includes an insertion indicator. This insertion indicator will point to the control information of the secondary buffer that is to be inserted into the primary buffer 302. Then, the regular queuing element can be populated into the secondary buffer. Where the regular queuing element is populated in the secondary buffer may depend on the order in which the queue manager wants queuing elements to be processed. For example, the regular queuing element may be populated at the end of the secondary buffer so that execution occurs immediately before reverting back to the primary buffer 302. In some embodiments, the queue manager is configured to dynamically increase the size of the secondary buffer (e.g., by one queuing element) to account for the saved queuing element copied over from the primary buffer 302. After the special queuing element has been inserted into the primary buffer 302, the control and statistical information for the primary buffer 302 may be updated as further discussed below.
To execute these operations, the queue manager can implement a special command to incorporate the secondary buffer. This special command may be different from the normal enqueue and dequeue commands. The special command may define the entry point in the primary buffer 302 and the special queuing element which is to be inserted. Moreover, this special command may instruct the queue manager to update the statistics for the primary buffer 302 with additional information about the secondary buffer to which the special element points.
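The shape of such a command might resemble the following sketch; the opcode and field names are hypothetical, chosen only to show what the command carries (the target queue, the entry point, the special queuing element, and the statistics update).

```c
#include <stdint.h>

/* Hypothetical command descriptor; the disclosure does not specify an
   encoding, so every name and field here is an illustrative assumption. */
typedef enum {
    CMD_ENQUEUE,          /* normal enqueue                      */
    CMD_DEQUEUE,          /* normal dequeue                      */
    CMD_INSERT_SUBQUEUE   /* the special command discussed above */
} qm_opcode_t;

typedef struct {
    qm_opcode_t opcode;        /* CMD_INSERT_SUBQUEUE                       */
    uint32_t    queue_id;      /* which primary buffer to modify            */
    uint32_t    entry_index;   /* entry point within that primary buffer    */
    uint64_t    special_elem;  /* encoded special queuing element to insert */
    uint32_t    subq_entries;  /* sub-queue size, used to update the primary
                                  buffer's statistics                       */
} qm_command_t;
```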
In some embodiments, the queue manager is responsible for managing a set of primary buffers. In such embodiments, the main NE indicator 502 will indicate not empty so long as at least one of the primary buffers is not empty. Note, however, that some of these primary buffers may have secondary buffers nested therein as discussed above. To account for the secondary buffers, the hierarchical bitmap 500 can indicate which groups of queues are not empty. Here, for example, NE0 is the NE indicator for Queue Group 0, and it acts as a logical OR operator for all queues in Queue Group 0. Queue Group 0 may be representative of a single queue (e.g., a primary buffer), or Queue Group 0 may be representative of multiple queues (e.g., a primary buffer and one or more secondary buffers). Accordingly, NE0 will indicate that Queue Group 0 is not empty so long as one of the queues in Queue Group 0 is not empty. NE1, NE2, and NE3 are the NE indicators for Queue Group 1, Queue Group 2, and Queue Group 3, respectively. Note that the number of queues in each group need not necessarily be the same.
The main NE indicator 502 may act as a logical OR operator for all of the queue groups. Accordingly, the main NE indicator 502 may indicate not empty if any of the NE indicators for the queue groups indicate not empty. The level of hierarchy and granularity of the queues/groups may be highly programmable.
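In software terms, each level of such a hierarchy is simply a logical OR over the level below it, as the following sketch illustrates. The group count and per-group width are arbitrary choices for the example (as noted above, groups need not be equal in size), and the identical reduction applies to the overflow (OF) and underflow (UF) bitmaps discussed below.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_GROUPS 4u  /* Queue Groups 0-3; groups need not be equal in size */

/* One not-empty (NE) bit per queue, packed per group (up to eight
   queues per group in this sketch). */
typedef struct {
    uint8_t queue_ne[NUM_GROUPS];  /* bit i of queue_ne[g]: queue i, group g */
} ne_bitmap_t;

/* NEg: a logical OR over all queues in group g. */
static bool group_ne(const ne_bitmap_t *bm, unsigned g)
{
    return bm->queue_ne[g] != 0;
}

/* Main NE indicator: a logical OR over all queue groups. */
static bool main_ne(const ne_bitmap_t *bm)
{
    for (unsigned g = 0; g < NUM_GROUPS; g++)
        if (group_ne(bm, g))
            return true;
    return false;
}
```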
In some embodiments, the queue manager is responsible for managing a set of primary buffers. In such embodiments, the main OF indicator 602 will indicate overflow so long as at least one of the primary buffers is full. Much like hierarchical bitmap 500 of
The main OF indicator 602 may act as a logical OR operator for all of the queue groups. Accordingly, the main OF indicator 602 may indicate overflowing if any of the OF indicators for the queue groups indicate overflowing. The level of hierarchy and granularity of the queues/groups may be highly programmable.
In some embodiments, the queue manager is responsible for managing a set of primary buffers. In such embodiments, the main UF indicator 702 will indicate underflow if any of the primary buffers are experiencing underflow (i.e., are empty). Much like hierarchical bitmaps 500, 600 of
The main UF indicator 702 may act as a logical OR operator for all of the queue groups. Accordingly, the main UF indicator 702 may indicate underflowing if any of the UF indicators for the queue groups indicate underflowing. The level of hierarchy and granularity of the queues/groups may be highly programmable.
Each queue managed by a queue manager may be associated with a set of timers to indicate timeout events.
Queue information and statistics may be maintained in data structures (e.g., tables) that are readily searchable using, for example, queue identifiers.
As mentioned above, the queue manager may be responsible for ensuring that the information/statistics associated with each primary buffer are updated if any secondary buffers are nested therein. Thus, the data structure may be updated whenever a secondary buffer is added or removed by the queue manager. These data structures can be stored in a memory and made accessible to software and/or firmware executing on the computing device of which the queue manager is a part. As shown in
In some embodiments, the queue manager is configured to automatically sort the primary buffers that it is responsible for managing according to size. Thus, the queue manager may generate a list of primary buffers ordered from largest to smallest, or vice versa; the first entry in the list may be either the largest or smallest queue depending on the configured order.
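A software analogue of such a searchable table, together with the size-ordered list just described, might look like the following sketch; the field set is an illustrative subset and all names are assumptions made for the example.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_QUEUES 64u

/* Per-queue information/statistics, indexed by queue identifier; the
   fields shown are an illustrative subset. */
typedef struct {
    uint32_t queue_id;
    uint32_t size;           /* total entries, including nested sub-queues */
    uint32_t num_subqueues;  /* secondary buffers currently nested         */
    uint8_t  not_empty;      /* NE flag                                    */
} queue_info_t;

static queue_info_t queue_table[MAX_QUEUES];

/* Update the parent's statistics whenever a sub-queue is nested in it. */
static void account_subqueue(uint32_t queue_id, uint32_t subq_entries)
{
    queue_table[queue_id].num_subqueues++;
    queue_table[queue_id].size += subq_entries;
}

/* Comparator for descending order by size. */
static int by_size_desc(const void *a, const void *b)
{
    const queue_info_t *qa = a, *qb = b;
    return (qa->size < qb->size) - (qa->size > qb->size);
}

/* Generate a size-ordered list; the table itself stays indexed by
   queue identifier. */
static queue_info_t sorted_list[MAX_QUEUES];

static void sort_queues(void)
{
    memcpy(sorted_list, queue_table, sizeof(queue_table));
    qsort(sorted_list, MAX_QUEUES, sizeof(queue_info_t), by_size_desc);
}
```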
As discussed above, there are at least two situations in which an insertion scheme may be implemented by a queue manager. These situations are discussed with respect to
First, the queue manager may opt to replace a bounded, existing queuing element in the primary buffer with a secondary buffer. In this situation, the queue manager needs to change the bounded, existing queuing element to a special queuing element that, when executed, routes the processor to the secondary buffer. The term “bounded,” as used herein, refers to an existing queuing element that is preceded and followed by existing queuing elements. When an existing queuing element is bounded, inserting a new queuing element can prove to be difficult since multiple existing queuing elements may need to be rearranged.
Rather than rearrange multiple queuing elements, the queue manager can save the bounded, existing queuing element to a storage space (step 1102). For example, the queue manager may temporarily save the bounded, existing queuing element to a register. Then, the queue manager can insert the special queuing element into the primary buffer in place of the bounded, existing queuing element (step 1103). More specifically, the queue manager can cause the special queuing element to be written in the same entry in the primary buffer such that the bounded, existing queuing element is overwritten. As discussed above, the special queuing element may include an insertion indicator that, when executed, routes the processor to the secondary buffer.
Thereafter, the queue manager can populate the new queuing element and the existing queuing element into the secondary buffer in such a manner that the processor will execute the new queuing element before executing the existing queuing element (step 1104). Where the new queuing element and the existing queuing element are populated into the secondary buffer may depend on the order in which the queue manager wants those queuing elements to be executed. For example, the existing queuing element may be populated into the last entry of the secondary buffer so that execution occurs immediately before redirection of the processor from the secondary buffer to the primary buffer. As another example, the new queuing element may be populated into the first entry of the secondary buffer so that execution occurs immediately after redirection of the processor from the primary buffer to the secondary buffer.
For the purpose of illustration, refer again to the above-mentioned example where the primary buffer includes five queuing elements to be executed and the queuing manager has determined that a new queuing element should be executed after the third queuing element but before the fourth queuing element. In this scenario, the queue manager can temporarily save the fourth queuing element to a storage space (e.g., a register), insert a special queuing element in place of the fourth queuing element, and then populate the new queuing element and the fourth queuing element into a secondary buffer. The new queuing element can be populated into any entry in the secondary buffer that is above the fourth queuing element. For example, the new queuing element may be populated into the first entry in the secondary buffer while the fourth queuing element may be populated into the second entry in the secondary buffer, or the new queuing element may be populated into the first entry in the secondary buffer while the fourth queuing element may be populated into the last entry in the secondary buffer.
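Expressed in code, steps 1102 through 1104 might look like the following sketch, using compact variants of the element and control-information layouts assumed earlier; statistics updates and error handling are omitted for brevity.

```c
#include <stdint.h>

/* Compact variants of the element/control layouts assumed earlier. */
typedef struct qelem {
    uint8_t      is_special;
    uint64_t     payload;    /* regular element                         */
    struct subq *indicator;  /* special element: sub-queue control info */
} qelem_t;

typedef struct subq {
    qelem_t *base;         /* entries of the secondary buffer       */
    uint32_t num_entries;
    qelem_t *return_ptr;   /* where to resume in the primary buffer */
} subq_t;

/* Steps 1102-1104: replace the bounded element at primary[idx] with a
   special element, and arrange for new_elem to execute before it. */
void insert_before(qelem_t *primary, uint32_t idx, qelem_t new_elem,
                   subq_t *subq, qelem_t *subq_entries)
{
    qelem_t saved = primary[idx];        /* step 1102: save to storage space */

    primary[idx].is_special = 1;         /* step 1103: overwrite the entry   */
    primary[idx].indicator  = subq;      /* with the special element         */

    subq->base        = subq_entries;    /* step 1104: new element executes  */
    subq_entries[0]   = new_elem;        /* first, the saved element last    */
    subq_entries[1]   = saved;
    subq->num_entries = 2;
    subq->return_ptr  = &primary[idx + 1];
}
```

In the five-element scenario, idx would identify the fourth entry: the processor executes the first three queuing elements, is routed to the secondary buffer where it executes the new queuing element and then the fourth, and finally resumes at the fifth.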
Alternatively, the queue manager may populate the existing queuing element directly into the secondary buffer rather than into the storage space as discussed above with reference to step 1102. In such embodiments, the queue manager may populate the existing queuing element directly into a predetermined entry in the secondary buffer responsive to determining that a new queuing element should be executed before the existing queuing element. The predetermined entry could be, for example, the first entry or the last entry in the secondary buffer.
In some embodiments, the queue manager is configured to increase the size of the secondary buffer to account for inclusion of the existing queuing element copied over from the primary buffer. For example, the queue manager may dynamically increase the size of the secondary buffer by one entry to account for the existing queuing element. Moreover, as discussed above, information regarding the primary buffer may be maintained (e.g., in a register) in some embodiments. In such embodiments, the queue manager may ensure that the information is updated to account for nesting of the secondary buffer within the primary buffer.
Second, the queue manager may opt to insert a secondary buffer before an existing queuing element to avoid overwriting (e.g., due to overflow). In this situation, the queue manager needs to replace the existing queuing element with a special queuing element that, when executed, routes the processor to the secondary buffer.
Then, the queue manager can allocate memory space for a secondary buffer in which queuing elements can be populated (step 1203) and insert a special queuing element into the primary buffer that, when executed, routes the processor to the secondary buffer (step 1204). More specifically, the queue manager may identify the existing queuing element that was most recently populated into the primary buffer, save the existing queuing element to a storage space (e.g., a register), and then populate the existing queuing element into the secondary buffer. Thus, the queue manager may copy the most recently populated queuing element from the primary buffer into the secondary buffer to expand the number of effective entries in the primary buffer. Generally, the existing queuing element is populated into the first entry of the secondary buffer. However, the existing queuing element could be populated into another entry of the secondary buffer.
As discussed above with respect to step 1203, memory space may be allocated for the secondary buffer when needed. However, when the secondary buffer is no longer needed, the queue manager may wish to release the previously allocated memory space. Said another way, since the secondary buffer is intended to be temporarily used for overflow, the queue manager may wish to release the memory space allocated for the secondary buffer responsive to determining that overflow is no longer an issue. Thus, the queue manager may monitor available capacity of the secondary buffer (step 1205). For example, the queue manager may continually examine either an underflow (UF) indicator or a not empty (NE) indicator associated with the secondary buffer. If the queue manager determines that at least one entry in the secondary buffer has not been executed, then the queue manager may not take further action. However, if the queue manager determines that all entries in the secondary buffer have been executed, then the queue manager may release the memory space that was allocated for the secondary buffer (step 1206). Accordingly, the queue manager may be able to dynamically allocate and release memory space depending on the number of secondary buffers needed over time.
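A sketch of steps 1203 through 1206 follows; the helper names, the use of malloc/free for allocating and releasing memory space, and the executed-entry counter are assumptions made for the example.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct qelem {
    uint8_t      is_special;
    uint64_t     payload;
    struct subq *indicator;
} qelem_t;

typedef struct subq {
    qelem_t *base;
    uint32_t num_entries;   /* entries populated so far             */
    uint32_t num_executed;  /* entries already executed (for NE/UF) */
} subq_t;

#define SUBQ_DEPTH 16u

/* Steps 1203-1204: upon low available capacity, expand the primary
   buffer by nesting a secondary buffer in the entry holding the most
   recently populated queuing element (allocation failures ignored
   for brevity). */
subq_t *expand_queue(qelem_t *primary, uint32_t last_idx)
{
    subq_t *subq = malloc(sizeof(*subq));               /* step 1203 */
    subq->base   = malloc(SUBQ_DEPTH * sizeof(qelem_t));

    qelem_t saved = primary[last_idx];   /* most recently populated element */
    primary[last_idx].is_special = 1;    /* step 1204: the special element  */
    primary[last_idx].indicator  = subq; /* routes to the secondary buffer  */

    subq->base[0]      = saved;          /* generally the first entry       */
    subq->num_entries  = 1;              /* later enqueues land here        */
    subq->num_executed = 0;
    return subq;
}

/* Steps 1205-1206: release the allocated memory space once every entry
   in the secondary buffer has been executed. */
bool maybe_release(subq_t *subq)
{
    if (subq->num_executed < subq->num_entries)
        return false;                    /* at least one entry outstanding */
    free(subq->base);                    /* step 1206 */
    free(subq);
    return true;
}
```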
The steps of these processes may be performed in various sequences. For example, steps 1203 and 1205-1206 of
There are several alternative approaches to those described herein.
One alternative involves copying all queuing elements beneath the location where the new queuing element is to be inserted. For example, assume that a new queuing element is to be inserted into a primary buffer that includes five queuing elements contiguously arranged in the queue. If the queue manager determines that the new queuing element should be arranged above the second queuing element, then the queue manager may copy the second, third, fourth, and fifth queuing elements (e.g., for inclusion in the secondary buffer). But this approach is computationally complicated, slow, and power intensive.
Another alternative involves representing each queue as a linked list of contiguous memory blocks. Such an approach allows an entire queue to be represented as a linked list of memory blocks. To add to the list, the queue manager could simply create another link in the linked list of memory blocks. While this approach is relatively straightforward, to insert in the middle of a memory block, the queue manager would have to break the memory block into two memory blocks and then insert a new memory block therebetween.
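For contrast, the memory-block alternative might be modeled as follows; this sketch, with illustrative names and a fixed block size, shows the split that inserting in the middle of a block would force.

```c
#include <stdint.h>

#define BLOCK_ENTRIES 32u

/* Each queue is a linked list of fixed-size contiguous memory blocks. */
typedef struct mem_block {
    uint64_t          entries[BLOCK_ENTRIES];
    uint32_t          count;  /* entries used in this block      */
    struct mem_block *next;   /* next block in the queue's chain */
} mem_block_t;

/* Inserting in the middle of a block forces a split: the tail entries
   move into a fresh block that is chained in after the original
   (requires at < blk->count). */
void split_and_insert(mem_block_t *blk, uint32_t at,
                      uint64_t element, mem_block_t *fresh)
{
    fresh->count = blk->count - at;
    for (uint32_t i = 0; i < fresh->count; i++)
        fresh->entries[i] = blk->entries[at + i];
    fresh->next = blk->next;

    blk->entries[at] = element;  /* the new element ends the first block */
    blk->count = at + 1;
    blk->next  = fresh;
}
```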
This memory-block approach offers certain advantages, namely, (1) it permits a tradeoff between performance and flexibility through the choice of contiguous memory block sizes and (2) if a memory block in the linked list is released, the list can easily deallocate the released memory block and rechain itself. Accordingly, it offers efficient space allocation/deallocation since freed memory block regions can collapse when deallocated back to the pool. A disadvantage of this approach, however, is that linked lists of memory blocks tend to be difficult for hardware to handle. At a high level, even a simple memory block makes normal enqueue and dequeue operations more complicated, and thus circular buffers tend to be much more efficient for hardware-implemented queuing operations. For this reason, the queue manager may instead employ the insertion schemes described herein.
Overview of Queue Manager
The queue manager 1302 may maintain register banks 1308 for some or all control registers. For example, the queue manager 1302 may maintain a separate register bank (e.g., registers 206 of
The event processing engine 1312 may be responsible for enqueuing elements into, and dequeuing elements from, the buffers allocated by the buffer releaser 1304, including special elements with insertion indicators. In some embodiments, the queue manager 1302 further includes a dedicated module for calculating statistics, sorting queues, etc. This dedicated module, which may be referred to as the “calculating and sorting engine 1314,” can be implemented via hardware, firmware, software, or any combination thereof. As discussed above, the buffer releaser 1304 may be responsible for interacting with the buffer manager 1306 to allocate buffers when necessary and/or release buffers after the queues are finished.
In some embodiments, the queue manager 1302 is communicatively connected to a system bus 1316 via a Direct Memory Access (DMA) channel 1318. Such a design may only be necessary when the queues are shared with software that is executing in the system memory 1320 of the computing device 1300, though this is commonly how queue managers are used.
Benefits of Insertion Scheme
Several benefits can be obtained by employing the insertion schemes described herein. These benefits include
Lower power consumption due to efficient operations of hardware-implemented queues;
Improved processing speed due to the higher speed at which hardware-implemented queues can be processed in comparison to entirely software-implemented queues;
Flexible and efficient management of lists in hardware design; and
Flexible and efficient usage of memory (e.g., additional memory for sub-queues can be added/removed dynamically as those sub-queues are added/removed).
These benefits may be particularly useful to portable computing devices (also referred to as “mobile computing devices”) such as mobile phones, routers, etc. For instance, the insertion scheme may be used in high-performance, low-cost, and/or low-power modems designed for 4G/5G network technologies (also referred to as “4G modems” or “5G modems”).
Computing System
The computing system 1400 may include a processor 1402, main memory 1406, non-volatile memory 1410, network adapter 1412 (e.g., a network interface), video display 1418, input/output device 1420, control device 1422 (e.g., a keyboard, pointing device, or mechanical input such as a button), drive unit 1424 that includes a storage medium 1426, and signal generation device 1430 that are communicatively connected to a bus 1416. The bus 1416 is illustrated as an abstraction that represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus 1416, therefore, can include a system bus, Peripheral Component Interconnect (PCI) bus, PCI-Express bus, HyperTransport bus, Industry Standard Architecture (ISA) bus, Small Computer System Interface (SCSI) bus, Universal Serial Bus (USB), Inter-Integrated Circuit (I2C) bus, or bus compliant with Institute of Electrical and Electronics Engineers (IEEE) Standard 1394.
The computing system 1400 may share a similar computer processor architecture as that of a server, router, desktop computer, tablet computer, mobile phone, video game console, wearable electronic device (e.g., a watch or fitness tracker), network-connected (“smart”) device (e.g., a television or home assistant device), augmented or virtual reality system (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the computing system 1400.
While the main memory 1406, non-volatile memory 1410, and storage medium 1426 are shown to be a single medium, the terms “storage medium” and “machine-readable medium” should be taken to include a single medium or multiple media that store one or more sets of instructions 1428. The terms “storage medium” and “machine-readable medium” should also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computing system 1400.
In general, the routines executed to implement the embodiments of the present disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 1404, 1408, 1428) set at various times in various memories and storage devices in a computing device. When read and executed by the processor 1402, the instructions cause the computing system 1400 to perform operations to execute various aspects of the present disclosure.
While embodiments have been described in the context of fully functioning computing devices, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms. The present disclosure applies regardless of the particular type of machine- or computer-readable medium used to actually cause the distribution. Further examples of machine- and computer-readable media include recordable-type media such as volatile and non-volatile memory devices 1410, removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs) and Digital Versatile Disks (DVDs)), cloud-based storage, and transmission-type media such as digital and analog communication links.
The network adapter 1412 enables the computing system 1400 to mediate data in a network 1414 with an entity that is external to the computing system 1400 through any communication protocol supported by the computing system 1400 and the external entity. The network adapter 1412 can include a network adapter card, a wireless network interface card, a switch, a protocol converter, a gateway, a bridge, a hub, a receiver, a repeater, or a transceiver that includes an integrated circuit (e.g., enabling communication over Bluetooth® or Wi-Fi®).
The techniques introduced here can be implemented using software, firmware, hardware, or a combination of such forms. For example, aspects of the present disclosure may be implemented using special-purpose hardwired (i.e., non-programmable) circuitry in the form of application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), and the like.
Remarks
The foregoing description of various embodiments has been provided for the purposes of illustration. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.
Although the Detailed Description describes various embodiments, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details, while still being encompassed by the specification. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments.
The language used in the specification has been principally selected for readability and instructional purposes. It may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of the technology be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the technology as set forth in the following claims.
This application is a continuation of International Patent Application No. PCT/IB2020/058655, filed on Sep. 17, 2020, which claims the benefit of priority to U.S. Provisional Application No. 62/968,467, filed on Jan. 31, 2020, the contents of which are hereby incorporated by reference in their entireties.
Provisional Application:

| Number | Date | Country |
| --- | --- | --- |
| 62/968,467 | Jan. 2020 | US |

Continuation Data:

| Relationship | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/IB2020/058655 | Sep. 2020 | US |
| Child | 17/877,669 | | US |