Embodiments of the present principles generally relate to bandwidth estimations/rate control of media streams in communication systems and more specifically to a method and apparatus for bandwidth estimations/rate control of media streams using multiple, combined probing techniques.
Current audio and video conferencing systems include a Selective Forwarding Unit (SFU), which receives and forwards audio and video streams from and to each participant of the conferencing system via separate outgoing channels, or streams, for each client. In other words, in group calls, any sender (or publisher) transmits data to the SFU, using it as a relay and/or data forwarding service, so that the data reaches each of the receivers (or subscribers). The SFU should avoid congesting the subscribers' receiving channels with too much data, or the additional packet latency (and/or loss) may destroy the user experience.
In some instances, an SFU does not modify the stream that the SFU receives from the publisher and just forwards the stream to each of the subscribers. An SFU has to receive a stream from the publisher with a low enough bitrate to fit the available bandwidth between the SFU and an intended subscriber. In some instances, to ensure a low enough bitrate, a publisher is notified of a maximum allowed bitrate, which corresponds to the lowest of the bandwidth estimations for the connection between the publisher and the SFU and each connection between the SFU and a subscriber. However, such methods do not scale well for many participants since it takes only one subscriber having a poor internet connection to degrade the communication experience for all the subscribers.
Another option is for the publisher to send multiple streams with different qualities, or a multi-quality stream (which can be decomposed into different qualities), and/or a combination of both, so that the SFU can forward to each subscriber the stream with the highest possible bitrate that remains below that subscriber's available bandwidth.
Conventionally, achieving “rate-control” requires taking multiple factors into account, including but not limited to the variability of the connection or the congestion events caused by concurrent flows in that connection. A typical process begins by setting a minimum bitrate, and then increasing the bitrate until the data stream displays signs of problems (e.g., congestion), at which point a rollback is attempted. Once the problems are resolved, the bitrate can start to increase again, in an adaptive loop that tries to constantly and carefully adjust how much data is communicated to a subscriber.
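By way of non-limiting illustration only, the following sketch outlines such an adaptive loop. The bitrates, increase/decrease factors, and the congestion signal are illustrative assumptions rather than values prescribed by the present principles; real systems derive the congestion signal from, for example, receiver feedback on loss and latency.

```python
# Minimal sketch of the adaptive rate-control loop described above.
# All constants and the congestion signal are illustrative assumptions.

MIN_BITRATE = 150_000      # bps, starting point
MAX_BITRATE = 2_500_000    # bps, cap for this subscriber
INCREASE_FACTOR = 1.08     # gentle multiplicative increase per feedback interval
DECREASE_FACTOR = 0.85     # rollback applied when congestion is detected

def next_target_bitrate(current_bps: int, congested: bool) -> int:
    """Return the next target bitrate given the latest congestion signal."""
    if congested:
        # Roll back so queues on the congested link can drain.
        return max(MIN_BITRATE, int(current_bps * DECREASE_FACTOR))
    # Otherwise, carefully probe upward.
    return min(MAX_BITRATE, int(current_bps * INCREASE_FACTOR))
```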
Existing bandwidth estimation strategies are classified into two groups: receiver-side and sender-side. In their most basic form, both groups operate between two single end-points, i.e., they are not fundamentally designed as multi-party congestion control algorithms. In the first group, the subscriber communicates its estimated bandwidth back to the sender through a limitation message, e.g., the Receiver Estimated Maximum Bitrate (REMB), taking into account the received data stream and its properties (including, but not limited to, incoming rate, packet loss, latencies, and jitter). In the second group, it is the sender that estimates the bandwidth by using the receiver's feedback to adjust the rate.
A common method for evaluating the available bandwidth of the receiver's channel includes probing the network's capacity, within a short period of time, by sending data at a much faster rate than a current target rate. By keeping the probing time low, the sender reduces the adversarial effects produced by that increased amount of data traffic on a congested or small channel, and by using non-useful data (often by means of accessory, non-decodable data stuffing, also called padding data) the probing period has no impact on the user data stream. There are two typical ways of probing the channel. The first probing technique, referred to as non-paced probing, is effective with receiver-side rate control and sometimes in combination with multi-quality data (e.g., video simulcast and temporal scalability), and includes sending a combination of at least one of padding attached to each data packet and a burst of probing packets sent at once, after or before a data packet, increasing the subscriber's incoming rate in a sustainable manner. Non-paced probing, as defined, in the context of an SFU and used in combination with multi-quality data, helps discover the available bandwidth so that rate-control algorithms can be applied on subscriber connections, and allows an SFU, based on the discovered bandwidth, to select the best data quality to forward to each subscriber from the multi-quality data received by the SFU from a publisher. It is a common requirement in many implementations that the burst of padding sent at once must follow a so-called marker packet.
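As a non-limiting illustration of non-paced probing, the sketch below computes the sizes of a burst of padding-only packets needed to briefly lift the send rate from a current rate toward a probe rate; the function name and probing window are illustrative assumptions, while the 255-byte cap reflects the one-octet padding length field of RTP.

```python
# Sketch of non-paced probing: a single burst of padding-only packets emitted
# right after a marker packet. Names and the window length are illustrative.

MAX_PADDING_PAYLOAD = 255  # an RTP padding run is limited to 255 octets per packet

def padding_burst_sizes(current_bps: float, probe_bps: float,
                        window_s: float = 0.1) -> list[int]:
    """Compute padding packet sizes needed to lift the observed send rate from
    current_bps to probe_bps over a short probing window."""
    extra_bytes = max(0.0, probe_bps - current_bps) * window_s / 8.0
    full, rest = divmod(int(extra_bytes), MAX_PADDING_PAYLOAD)
    sizes = [MAX_PADDING_PAYLOAD] * full
    if rest:
        sizes.append(rest)
    return sizes
```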
The second probing technique, referred to as paced probing, is more suited for sender-side rate control, especially in combination with encoders that can adjust bitrate in fine steps, and includes pacing each probing packet (or each group of probing packets) by some value so as to obtain more information about the channel timings from the packet reception timestamps that come along with the receiver's feedback. By increasing the quantity of the data points collected by the sender, assuming that the probe is successfully executed, the bandwidth estimation can be made more accurate. When relying on network probing alone, however, if a probe fails by being lost or not correctly received by the subscriber, the sender's estimation may be inaccurate.
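Similarly, as a non-limiting illustration of paced probing, the sketch below spaces the same probe packets by a pacing interval so that the receiver's per-packet arrival timestamps provide more data points for the estimation; the send callback and interval are illustrative assumptions, and a production sender would use event-loop timers rather than blocking sleeps.

```python
import time

# Sketch of paced probing: the same padding amount as in a burst, but each
# probe packet is delayed by a fixed pacing interval. send_padding is a
# placeholder for the actual socket/RTP send path.

def send_paced_probes(sizes: list[int], pacing_interval_s: float,
                      send_padding) -> None:
    for size in sizes:
        send_padding(size)             # emit one padding-only probe packet
        time.sleep(pacing_interval_s)  # pace the next probe (illustrative only)
```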
Current probing strategies, however, encounter several issues. In some instances a probe fails, or a probe is not executed correctly, or a probe is misconfigured for a value that is under the real channel capacity, or rate-control is not stable enough due to streamed data that has only a limited set of streamable qualities/bitrates, different enough from the estimated available bandwidth and between paced probing points. That is, there is currently a lack of a probing method, or probing strategy, that works at least when subscribers with different rate control strategies join the same communication.
Methods and apparatuses for bandwidth estimations/rate control of media streams using multiple, combined probing techniques are provided herein.
In some embodiments a method for rate control of media streams using multiple, combined probing techniques includes receiving a multi-quality data stream, adding padding packets to the multi-quality data stream to determine which data stream of the multi-quality data stream to select to communicate data packets of the multi-quality data stream, adding probing packets to the selected data stream of the multi-quality data stream to determine a bandwidth at which to communicate the selected data stream, and communicating the selected data stream to receivers using the determined bandwidth.
In some embodiments, the method can further include at least one of determining, using feedback information from a communication network, whether or not a different quality data stream of the multi-quality data stream should be selected before adding padding packets to the multi-quality data stream, or determining, using feedback information from the communication network, if a bandwidth estimation process should be performed before adding probing packets to the selected data stream of the multi-quality data.
In some embodiments of the present principles, the feedback information from the communication network can include Real-Time Transport Protocol (RTP) data received from at least one of a sender or a receiver of the multi-quality data stream.
In some embodiments, the method can further include performing a sanity check of the selected data stream before communicating the selected data stream to receivers using the determined bandwidth.
In some embodiments, the method can further include condensing the probing packets added to the selected data stream of the multi-quality data stream with probing packets previously scheduled to be added to the selected data stream into a single burst before communicating the selected data stream to receivers.
In some embodiments, in the method a last padding packet has to be added to the multi-quality data stream before a probing packet can be added.
In some embodiments an apparatus for rate control of media streams using multiple, combined probing techniques includes a forwarding engine configured to receive a multi-quality data stream, a padding adder module configured to add padding packets to the multi-quality data stream to determine which data stream of the multi-quality data stream to select to communicate data packets of the multi-quality data stream, a probing manager module configured to add probing packets to the selected data stream of the multi-quality data stream to determine a bandwidth at which to communicate the selected data stream, and a sending engine configured to communicate the selected data stream to receivers using the determined bandwidth.
In some embodiments of the present principles, the apparatus further includes a rate control module and the padding adder module is further configured to determine, using feedback information from the rate control module, whether or not a different quality data stream of the multi-quality data stream should be selected before adding padding packets to the multi-quality data stream.
In some embodiments of the present principles, the apparatus further includes a rate control module and the probing manager module is further configured to determine, using feedback information from the rate control module, if a bandwidth estimation process should be performed before adding probing packets to the selected data stream of the multi-quality data. In some embodiments the feedback information from the rate control module includes Real-Time Transport Protocol (RTP) data received from at least one of a sender or a receiver of the multi-quality data stream.
In some embodiments, the forwarding engine is further configured to perform a sanity check of the selected data stream having the added probing packets before communicating the selected data stream to receivers using the determined bandwidth.
In some embodiments, the probing manager module is further configured to condense the probing packets added to the selected data stream of the multi-quality data stream with probing packets previously scheduled to be added to the selected data stream into a single burst before the selected data stream is communicated to receivers.
In some embodiments, in the apparatus a last padding packet has to be added to the multi-quality data stream before a probing packet can be added.
In some embodiments, an apparatus for rate control of media streams using multiple, combined probing techniques includes a processor and a memory coupled to the processor, the memory having stored therein at least one of programs or instructions executable by the processor. In such embodiments, the apparatus, upon execution of the programs or instructions, is configured to receive a multi-quality data stream, add padding packets to the multi-quality data stream to determine which data stream of the multi-quality data stream to select to communicate data packets of the multi-quality data stream, add probing packets to the selected data stream of the multi-quality data stream to determine a bandwidth at which to communicate the selected data stream, and communicate the selected data stream to receivers using the determined bandwidth.
In some embodiments, the apparatus is further configured to determine, using feedback information from a communication network, whether or not a different quality data stream of the multi-quality data stream should be selected before adding padding packets to the multi-quality data stream.
In some embodiments, the apparatus is further configured to determine, using feedback information from a communication network, if a bandwidth estimation process should be performed before adding probing packets to the selected data stream of the multi-quality data. In some embodiments, the feedback information from the communication network includes Real-Time Transport Protocol (RTP) data received from at least one of a sender or a receiver of the multi-quality data stream.
In some embodiments, the apparatus is further configured to perform a sanity check of the selected data stream before communicating the selected data stream to receivers using the determined bandwidth.
In some embodiments, the apparatus is further configured to condense the probing packets added to the selected data stream of the multi-quality data stream with probing packets previously scheduled to be added to the selected data stream into a single burst before communicating the selected data stream to receivers.
In some embodiments, in the apparatus a last padding packet has to be added to the multi-quality data stream before a probing packet can be added.
Other and further embodiments of the present principles are described below.
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. The figures are not drawn to scale and may be simplified for clarity. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.
Embodiments of the present principles generally relate to methods, apparatuses and systems for bandwidth estimations/rate control of media streams using multiple, combined probing techniques. While the concepts of the present principles are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are described in detail below. It should be understood that there is no intent to limit the concepts of the present principles to the particular forms disclosed. On the contrary, the intent is to cover all modifications, equivalents, and alternatives consistent with the present principles and the appended claims. For example, although embodiments of the present principles will be described primarily with respect to specific media/data streams and specific numbers of publishers and subscribers, such teachings should not be considered limiting. Embodiments in accordance with the present principles can be applied to other similar data including any number of publishers and subscribers.
In the following description, the term selective forwarding unit (SFU) is used to describe and refer to an apparatus that selectively forwards media packets from a source (referred to as publisher or sender) to a receiver (referred to as subscriber or receiver) and performs as a media router.
In embodiments described herein, the term padding is used to describe and refer to bits or characters that fill up unused portions of a data structure, such as a field, packet or frame. In some embodiments, padding can be added at the end of a data structure. In some embodiments, padding can consist of 1 bits, blank characters, and/or null characters.
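As a non-limiting illustration of how such padding can appear on the wire, the sketch below builds a padding-only RTP packet in which the padding (P) bit is set and, per the RTP specification, the final octet of the packet carries the number of padding octets; all field values are illustrative placeholders.

```python
import struct

# Illustrative sketch of a padding-only RTP packet. Field values are placeholders.

def build_padding_only_rtp(seq: int, timestamp: int, ssrc: int,
                           payload_type: int, pad_len: int) -> bytes:
    assert 1 <= pad_len <= 255
    first_byte = 0x80 | 0x20            # version 2, padding (P) bit set
    second_byte = payload_type & 0x7F   # marker bit cleared
    header = struct.pack("!BBHII", first_byte, second_byte,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    padding = bytes(pad_len - 1) + bytes([pad_len])  # zero bytes, then the count
    return header + padding
```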
In the following description, the terms stream, data stream, media stream, and the like, are used interchangeably and are intended to refer to and describe data for which rate control can be determined using sender-side techniques in accordance with the present principles.
Embodiments of the present principles provide bandwidth estimation/rate control in communication systems including an SFU to improve the overall communication quality. In some embodiments of the present principles, both paced and non-paced probing techniques are implemented together to improve the efficiency of the bandwidth estimation discovery between an SFU and a subscriber. The appropriate combination of paced and non-paced probing allows an SFU to support different rate-control strategies, as well as recover faster in case temporary problems on the network impede the correct execution of either one of the probing techniques.
More specifically, when using receiver-side or sender-side strategies, the estimated bandwidth and the reliability of the estimation are a function of, among other things, the transmitted rate. Therefore, using a combination of paced and non-paced probing techniques in accordance with the present principles is especially useful under certain conditions, for example, to efficiently switch between data stream qualities when the SFU is processing a simulcast/scalable stream. For example, consider an instance in which a channel's real bandwidth is well above the real-time communication requirements. If the publisher is sending n qualities, each with a rate in bits-per-second (bps) ri (with 0<=i<n, and ri<rj for i<j), the SFU is sending quality i at its rate ri, and the receiver (based on such information) estimates the channel bandwidth as b (in bps) such that ri<b<rj (with i<j), then the SFU will be unable to switch to the higher quality rj, despite the optimal channel, unless the SFU pads the stream with padding data until the value rj is reached in accordance with embodiments of the present principles.
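By way of non-limiting illustration, the following sketch computes the padding rate needed in the quality-switch scenario described above; the function and variable names are illustrative assumptions, and the thresholds merely mirror the ri<b<rj condition.

```python
# Worked sketch of the quality-switch situation described above (illustrative).

def padding_rate_needed(current_quality_bps: float,
                        next_quality_bps: float,
                        estimated_bw_bps: float) -> float:
    """Padding rate (bps) to add on top of the current quality so the receiver
    can observe a total rate at the next quality's level; 0 if no padding helps."""
    if estimated_bw_bps >= next_quality_bps:
        return 0.0  # the estimate already allows switching up
    if estimated_bw_bps <= current_quality_bps:
        return 0.0  # the channel does not even appear able to carry more
    return next_quality_bps - current_quality_bps

# Example: r_i = 500 kbps, r_j = 1.5 Mbps, receiver estimate b = 800 kbps
# -> add ~1 Mbps of padding so the observed rate reaches r_j.
```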
In the embodiment of
In addition, the padding adder module 104 can decide to probe more bandwidth by adding only-padding packets (also referred to as probing packets) to received packets sent all together in a burst. That is, as depicted in the embodiment of the SFU 100 of
In some instances, the rate control module 110 of the SFU 100 of
The flow logic for the above-described two instances differs depending on whether an RTP packet is a marker or a non-marker packet. For example, in the case of a marker packet without ongoing probes (state 1), the probing manager module 106 computes and configures the number of probes to generate, their size, and the payload type based on the probing properties coming from the rate control module 110. Once the probes are created, the probing manager module 106 can schedule the packets to be forwarded by the forwarding engine 102 to, for example, the sending engine 108.
In the case of a marker packet but with ongoing probes, two additional substates exist. In a first substate, a previously scheduled batch is canceled and treated as a new request from the rate control module 110. The flow is then the same as in state 1 in which the probing manager module 106 computes and configures the number of probes to generate, their size, and the payload type based on the probing properties coming from the rate control module 110. Once the probes are created, the probing manager module 106 can schedule the packets to be forwarded by the forwarding engine 102 to, for example, the sending engine 108.
In the case in which some padding packets have already been sent, the remaining packets are condensed into a single burst and sent out all at once before forwarding the RTP packet. This maintains the expected padding flow from a subscriber's perspective but loses the paced property, effectively functioning as non-paced padding.
At 204, it is determined whether probing packets are scheduled in the probing manager module 106. If there are probing packets scheduled, the method 200 continues to 206. If there are no probing packets scheduled, the method 200 can skip to 218.
At 206, it is determined if any scheduled probe packets have already been sent. If probe packets have been sent, the method 200 proceeds to 208. If probe packets have not been sent, the method 200 skips to 212.
At 208, the probing manager module 106 transforms paced probing into non-paced probing. That is, in some embodiments and as described above, at 208 packets are condensed into a single burst to be sent out all at once. This maintains the expected padding flow from a subscriber's perspective but loses the paced property, effectively functioning as non-paced padding. The method 200 can proceed to 210.
At 210, RTP packets, including the probing packets, are communicated to the forwarding engine 102, which can communicate the RTP packets to the sending engine 108. The method 200 can then be exited.
At 212, if scheduled probe packets have not been sent, the probing is delayed and not performed. The method 200 can proceed to 214.
At 214, probe settings are determined and configured. That is, in some embodiments and as described above, the probing manager module 106 computes and configures the number of probes to generate, their size, and the payload type based on the probing properties coming from the rate control module 110. The method 200 can proceed to 216.
At 216, probing packets are scheduled for transmission. The method 200 can then revert back to 210 at which RTP packets, including the probing packets, are communicated to the forwarding engine 102, which can communicate the RTP packets to the sending engine 108. The method 200 can then be exited.
At 218, it is determined if the rate control module 110 has determined, using for example feedback from the network, whether or not an available bandwidth (e.g., bandwidth estimation) should be determined. If it has been determined that available bandwidth should not be determined, the method can revert back to 210, at which RTP packets are communicated to the forwarding engine 102, which can communicate the RTP packets to the sending engine 108. The method 200 can then be exited. If it is determined that available bandwidth should be determined, the method can revert back to 214 at which probe settings are determined and configured using the probing manager module 106, which computes and configures the number of probes to generate, their size, and the payload type based on the probing properties coming from the rate control module 110. The method can then proceed to 216 at which the probing packets are scheduled for transmission. The method can then revert back to 210 at which RTP packets are communicated to the forwarding engine 102, which can communicate the RTP packets to the sending engine 108. The method 200 can then be exited.
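By way of non-limiting illustration, the marker-packet flow of the method 200 can be summarized as in the following sketch; the module interfaces and method names are illustrative assumptions and do not correspond to any required implementation.

```python
# Sketch of the marker-packet flow (method 200) under assumed, illustrative
# interfaces for the probing manager, rate control module, and forwarding engine.

def on_marker_packet(rtp_packet, probing_manager, rate_control, forwarding_engine):
    if probing_manager.has_scheduled_probes():                    # 204
        if probing_manager.some_probes_already_sent():            # 206
            probing_manager.condense_remaining_into_burst()       # 208: paced -> non-paced
        else:
            probing_manager.cancel_scheduled_probes()             # 212: delay the probing
            settings = rate_control.current_probe_settings()      # 214: count, size, payload type
            probing_manager.schedule_probes(settings)             # 216
    elif rate_control.wants_bandwidth_estimation():               # 218
        settings = rate_control.current_probe_settings()          # 214
        probing_manager.schedule_probes(settings)                 # 216
    forwarding_engine.forward(rtp_packet)                         # 210: forward the RTP packet
                                                                  # together with any probes
```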
In embodiments of the present principles in which a bandwidth probing has been triggered by, for example, the rate control module 110, and the RTP packet is a non-marker, there again exist two cases. In the first case, in which there is no ongoing probing by, for example, the probing manager module 106, the probing manager module 106 cannot start or schedule the probing process, as that happens only after a marker packet is received; the RTP non-marker packet is then forwarded, and the ongoing processing stops. In the second case, in which there is an ongoing probing process (whether scheduled or already started), the probing manager module 106 condenses the generated padding packets into a single burst, which is sent out all at once before forwarding the RTP packet.
At 304, it is determined whether or not scheduled probing packets exist. If probing packets exist, the method 300 can proceed to 306. If probing packets do not exist, the method 300 can skip to 308.
At 306, the probing manager module 106 transforms paced probing into non-paced probing. That is, in some embodiments and as described above, at 306 packets are condensed into a single burst to be sent out all at once. This maintains the expected padding flow from a subscriber's perspective but loses the paced property, effectively functioning as non-paced padding. The method 300 can proceed to 308.
At 308, RTP packets, including any probing packets, are communicated to the forwarding engine 102, which can communicate the RTP packets to the sending engine 108. The method 300 can then be exited.
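By way of non-limiting illustration, the non-marker-packet flow of the method 300 can be summarized as in the following sketch, using the same illustrative interfaces assumed for the marker-packet sketch above.

```python
# Companion sketch for the non-marker-packet flow (method 300), with the same
# assumed placeholder interfaces as the marker-packet sketch above.

def on_non_marker_packet(rtp_packet, probing_manager, forwarding_engine):
    if probing_manager.has_scheduled_probes():            # 304
        # 306: padding cannot sit between non-marker packets, so any remaining
        # probes are condensed into one burst (paced probing becomes non-paced).
        probing_manager.condense_remaining_into_burst()
    forwarding_engine.forward(rtp_packet)                  # 308
```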
Referring back to
The Padding Adder and the Probing Manager of the present principles have to carefully select the right timing to send out padding data. The standard specification defines non-marker (a packet within a frame) and marker (the last packet of a frame) data packets, and many implementations demand that padding data must follow marker packets and cannot be sent in between non-marker packets. There is no theoretical limit on the amount of padding that can be sent, but as an SFU of the present principles can only forward packets without storing or modifying them, if the Probing Manager detects a non-marker packet that is about to be forwarded in the middle of a paced probing procedure, such a procedure should be stopped.
In embodiments of an SFU of the present principles, such as the SFU 100 of
Embodiments of the present principles support receivers that work with receiver-side and/or sender-side rate control at the same time by, in some embodiments, disabling the Probing Manager for receiver-side-only subscribers. As depicted in
It should be noted that a rate control module of the present principles, such as the rate control module 110 of
At 504, padding packets are added to the multi-quality data stream to determine which data stream of the multi-quality data stream to select to communicate data packets of the multi-quality data stream. The method 500 can proceed to 506.
At 506, probing packets are added to the selected data stream of the multi-quality data stream to determine a bandwidth at which to communicate the selected data stream. The method 500 can proceed to 508.
At 508, the selected data stream of the multi-quality data stream is communicated to, for example, at least one subscriber, using the determined bandwidth. The method 500 can then be exited.
In some embodiments of the present principles, the method 500 can further include 503, at which it is determined whether or not a different quality data stream of the multi-quality data stream should be selected. If at 503 it is determined that a different quality data stream of the multi-quality data stream should be selected, the method 500 can proceed to 504.
If at 503 it is determined that a different quality data stream of the multi-quality data stream should not be selected, the method 500 can proceed to a further included 505 during which it is determined if a bandwidth estimation process should be performed. If at 505 it is determined that a bandwidth estimation process should be performed, the method 500 can proceed to 506.
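By way of non-limiting illustration, the overall flow of the method 500, including the optional decision steps 503 and 505, can be summarized as in the following simplified sketch; the module and method names are illustrative assumptions, and the linear ordering is one possible reading of the flow described above.

```python
# End-to-end sketch of method 500 with optional decision steps 503 and 505.
# All module and function names are illustrative placeholders.

def handle_multi_quality_stream(stream, rate_control, padding_adder,
                                probing_manager, sender):
    packets = stream.next_packets()                         # 502: receive the multi-quality stream
    if rate_control.should_switch_quality():                # 503
        packets = padding_adder.add_padding(packets)        # 504: pad to test a higher quality
    elif rate_control.should_estimate_bandwidth():          # 505
        packets = probing_manager.add_probes(packets)       # 506: probe the available bandwidth
    sender.send(stream.selected_quality(), packets)         # 508: forward at the determined bandwidth
```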
In the illustrated embodiment, computer system 600 includes one or more processors 610 coupled to a system memory 620 via an input/output (I/O) interface 630. Computer system 600 further includes a network interface 640 coupled to I/O interface 630, and one or more input/output devices 650, such as cursor control device 660, keyboard 670, and display(s) 680. In various embodiments, any of the components may be utilized by the system to receive the user input described above. In various embodiments, a user interface (e.g., user interface 630) may be generated and displayed on display 680. In some cases, it is contemplated that embodiments may be implemented using a single instance of computer system 600, while in other embodiments multiple such systems, or multiple nodes making up computer system 600, may be configured to host different portions or instances of various embodiments. For example, in one embodiment some elements may be implemented via one or more nodes of computer system 600 that are distinct from those nodes implementing other elements. In another example, multiple nodes may implement computer system 600 in a distributed manner.
In different embodiments, computer system 600 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device.
In various embodiments, computer system 600 may be a uniprocessor system including one processor 610, or a multiprocessor system including several processors 610 (e.g., two, four, eight, or another suitable number). Processors 610 may be any suitable processor capable of executing instructions. For example, in various embodiments processors 610 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 610 may commonly, but not necessarily, implement the same ISA.
System memory 620 may be configured to store program instructions 622 and/or data 632 accessible by processor 610. In various embodiments, system memory 620 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/flash-type memory, persistent storage (magnetic or solid state), or any other type of memory. In the illustrated embodiment, program instructions and data implementing any of the elements of the embodiments described above may be stored within system memory 620. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 620 or computer system 600.
In one embodiment, I/O interface 630 may be configured to coordinate I/O traffic between processor 610, system memory 620, and any peripheral devices in the system, including network interface 640 or other peripheral interfaces, such as input/output devices 650. In some embodiments, I/O interface 630 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 620) into a format suitable for use by another component (e.g., processor 610). In some embodiments, I/O interface 630 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 630 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 630, such as an interface to system memory 620, may be incorporated directly into processor 610.
Network interface 640 may be configured to allow data to be exchanged between computer system 600 and other devices attached to a network (e.g., network 690), such as one or more external systems or between nodes of computer system 600. In various embodiments, network 690 may include one or more networks including but not limited to Local Area Networks (LANs) (e.g., an Ethernet or corporate network), Wide Area Networks (WANs) (e.g., the Internet), wireless data networks, some other electronic data network, or some combination thereof. In various embodiments, network interface 640 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
Input/output devices 650 may, in some embodiments, include one or more display terminals, keyboards, keypads, touch pads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or accessing data by one or more computer systems 600. Multiple input/output devices 660 may be present in computer system 600 or may be distributed on various nodes of computer system 600. In some embodiments, similar input/output devices may be separate from computer system 600 and may interact with one or more nodes of computer system 600 through a wired or wireless connection, such as over network interface 640.
Those skilled in the art will appreciate that computer system 600 is merely illustrative and is not intended to limit the scope of embodiments. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions of various embodiments, including computers, network devices, Internet appliances, PDAs, wireless phones, pagers, etc. Computer system 600 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computer system 600 may be transmitted to computer system 600 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium or via a communication medium. In general, a computer-accessible medium may include a storage medium or memory medium such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g., SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc.
The methods and processes described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of methods can be changed, and various elements can be added, reordered, combined, omitted or otherwise modified. All examples described herein are presented in a non-limiting manner. Various modifications and changes can be made as would be obvious to a person skilled in the art having benefit of this disclosure. Realizations in accordance with embodiments have been described in the context of particular embodiments. These embodiments are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances can be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and can fall within the scope of claims that follow. Structures and functionality presented as discrete components in the example configurations can be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements can fall within the scope of embodiments.
In the foregoing description, numerous specific details, examples, and scenarios are set forth in order to provide a more thorough understanding of the present disclosure. It will be appreciated, however, that embodiments of the disclosure can be practiced without such specific details. Further, such examples and scenarios are provided for illustration, and are not intended to limit the disclosure in any way. Those of ordinary skill in the art, with the included descriptions, should be able to implement appropriate functionality without undue experimentation.
References in the specification to “an embodiment,” etc., indicate that the embodiment described can include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is believed to be within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly indicated.
Embodiments in accordance with the disclosure can be implemented in hardware, firmware, software, or any combination thereof. Embodiments can also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a “virtual machine” running on one or more computing devices). For example, a machine-readable medium can include any suitable form of volatile or non-volatile memory.
In addition, the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium/storage device compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium/storage device.
Modules, data structures, and the like defined herein are defined as such for ease of discussion and are not intended to imply that any specific implementation details are required. For example, any of the described modules and/or data structures can be combined or divided into sub-modules, sub-processes or other units of computer code or data as can be required by a particular design or implementation.
In the drawings, specific arrangements or orderings of schematic elements can be shown for ease of description. However, the specific ordering or arrangement of such elements is not meant to imply that a particular order or sequence of processing, or separation of processes, is required in all embodiments. In general, schematic elements used to represent instruction blocks or modules can be implemented using any suitable form of machine-readable instruction, and each such instruction can be implemented using any suitable programming language, library, application-programming interface (API), and/or other software development tools or frameworks. Similarly, schematic elements used to represent data or information can be implemented using any suitable electronic arrangement or data structure. Further, some connections, relationships or associations between elements can be simplified or not shown in the drawings so as not to obscure the disclosure.
While the foregoing is directed to embodiments of the present principles, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Filing Document: PCT/US2023/013464 | Filing Date: 2/21/2023 | Country: WO
Number: 63311686 | Date: Feb 2022 | Country: US