This disclosure relates generally to wireless networks and, more particularly, to systems, apparatus, articles of manufacture, and methods for wireless network optimization.
Communication systems may utilize a network of multiple frequencies, spectrum, and types, such as fourth or fifth or sixth generation cellular (e.g., 4G or 5G or 6G), Citizens Broadband Radio Service (CBRS), private cellular, Wireless Fidelity (Wi-Fi), satellite (e.g., a geosynchronous equatorial orbit (GEO) satellite, a non-geosynchronous orbit (NGO) satellite, etc.), etc. In some examples, a user equipment (UE) device (also referred to as a UE) has multiple connectivity options such that the UE can connect to a network utilizing one or more of the multiple frequency spectrums and/or types.
In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. The figures are not to scale. As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and/or in fixed relation to each other.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc., are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
As used herein, “substantially real time” and “substantially real-time” refer to occurrence in a near instantaneous manner recognizing there may be real-world delays for computing time, transmission, etc. Thus, unless otherwise specified, “substantially real time” and “substantially real-time” refer to being within a 1-second time frame of real time. For example, a first event can occur in substantially real-time relative to a second event provided the first event occurs within 1 second of the second event. As such, substantially real-time recognizes events that occur within a tolerance of some real time event (e.g., the same event, a different event, etc.).
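The 1-second tolerance above can be expressed, purely as an illustrative sketch (the function name and constant are hypothetical, not part of this disclosure), as a timestamp comparison:

```python
from datetime import datetime, timedelta

# Hypothetical 1-second tolerance per the definition above.
SUBSTANTIALLY_REAL_TIME_TOLERANCE = timedelta(seconds=1)

def is_substantially_real_time(first_event: datetime, second_event: datetime,
                               tolerance: timedelta = SUBSTANTIALLY_REAL_TIME_TOLERANCE) -> bool:
    """Return True if the first event occurs within the tolerance of the second event."""
    return abs(first_event - second_event) <= tolerance

t0 = datetime(2024, 1, 1, 12, 0, 0)
print(is_substantially_real_time(t0, t0 + timedelta(milliseconds=250)))  # True
print(is_substantially_real_time(t0, t0 + timedelta(seconds=2)))         # False
```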
As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
As used herein, “processor circuitry” is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmable with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmable microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs).
For example, an XPU may be implemented by a heterogeneous computing system including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign computing task(s) to whichever one(s) of the multiple types of processor circuitry is/are best suited to execute the computing task(s).
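The task-assignment behavior of such an API can be sketched, under the assumption of a simple static affinity table (the task names and mapping below are hypothetical illustrations, not a real XPU API):

```python
# Hypothetical sketch: an API that assigns each computing task to whichever
# type of processor circuitry is best suited to execute it.
PROCESSOR_AFFINITY = {  # assumed task-type -> processor-type mapping
    "matrix_multiply": "GPU",
    "signal_filter": "DSP",
    "bitstream_decode": "FPGA",
    "control_flow": "CPU",
}

def assign_task(task_type: str) -> str:
    """Return the type of processor circuitry best suited for the given task type."""
    return PROCESSOR_AFFINITY.get(task_type, "CPU")  # fall back to a general-purpose CPU

print(assign_task("matrix_multiply"))  # GPU
print(assign_task("unknown_task"))     # CPU
```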
Terrestrial and non-terrestrial communication protocols, spectrums, connection technologies, etc., may be used to implement wired and/or wireless communication between communication-enabled devices (e.g., wired and/or wireless devices) commonly referred to as user equipment (UE). In some disclosed examples, a device can be an electronic and/or computing device, such as a handset device (e.g., a smartphone), a tablet, an Internet-of-Things device, industrial equipment, a wearable device, a vehicle, etc., and/or any other physical or tangible items or assets. In some disclosed examples, a device can be active by being powered and/or enabled to transmit and/or receive data. In some disclosed examples, a device can be passive by being nonpowered, unpowered, and/or disabled to transmit and/or receive data. In some disclosed examples, a device that is nonpowered, unpowered, etc., can be an object. For example, a smartphone that is turned off, has a dead battery, has a battery removed, etc., can be a device and/or an object. In some disclosed examples, UEs can include wired or wireless-enabled devices such as smartphones, tablets or tablet computers, laptop computers, desktop computers, wearable devices, or any other device capable of transmitting or receiving data through a wired and/or wireless connection.
Multiple connections are necessary to utilize a network of multiple frequencies, spectrum, and types, such as fourth or fifth or sixth generation cellular (e.g., 4G or 5G or 6G), Citizens Broadband Radio Service (CBRS), private cellular, Wireless Fidelity (Wi-Fi), satellite (e.g., a geosynchronous equatorial orbit (GEO) satellite, a non-geosynchronous orbit (NGO) satellite, etc.), etc. The ability to move frictionlessly across these connectivity options and spectrums is necessary for enterprises (e.g., entities who operate enterprise networks) to solve issues with devices that have multiple connectivity options, specifically issues with control, connectivity, and workload consolidation efforts across clients, edge, and cloud options. Conventional connection effectuation is carried out in silos either from fixed spectrum chipsets or mobile spectrum stacks, thereby complicating access methods as well as handling, security, and configuration profiling to ensure quality-of-service (QoS) from each spectrum.
Conventional communication networks are static in the sense of connectivity and configurations. In some instances, conventional communication networks are static in the sense that they are not adequately able to support multiple connection types. In some instances, conventional communication networks are static in the sense that they are unable to support configuration changes to improve efficiency. For example, conventional communication networks are typically configured based on estimated usage and connection type. In some examples, the conventional communication networks are “fixed” and put into operation to support specific wireless connection and predetermined capacity.
Conventional network deployments include deployment of multiple radio base stations to connect to each type of available communication connection (e.g., 4G/5G/6G, Wi-Fi, private radio, etc.). For example, the communication connections can include Long Term Evolution (LTE) (e.g., Cat-22, spectrum bands 1, 2, 3, 4, 71), 4G LTE (e.g., Cat-20, spectrum bands B1 . . . B71), 5G new radio (NR) sub-6 Gigahertz (GHz) (e.g., spectrums n77, n78, n79, n1-n71, etc.), 5G millimeter wave (e.g., spectrum n257, n258, n260, n261, etc.), private network space low earth orbit (LEO) satellites (e.g., spectrums Ku 12-18 GHz spectral bands, Ka 26-40 GHz spectral bands, etc.), public and/or private space satellites (e.g., Ku, Ka, X, S, and ultra high frequency (UHF) bands), GEO satellites (e.g., BeiDou Navigation Satellite System (BDS), Global Positioning System (GPS), Galileo, Glonass, Quasi-Zenith Satellite System (QZSS)), LEO satellites (e.g., IRIDIUM®, STARLINK®, etc.), etc., and/or any combination(s) thereof.
The deployment of multiple radio base stations increases deployment complexity and cost (e.g., monetary cost associated with additional hardware, resource cost associated with increased number of compute, memory, and/or network resources required to be in operation, etc.). Examples disclosed herein overcome such challenges of conventional network deployments by utilizing multi-spectrum and/or communication connection technologies to continuously identify devices that are connected to network(s).
Examples disclosed herein identify optimal and/or otherwise improved selections of communication connection technologies that an identified device is to use to connect and communicate. For example, devices can include electronic devices associated with persons (e.g., pedestrians, persons in an industrial or manufacturing setting, etc.), vehicles, equipment, tools, and the like. Examples disclosed herein can identify an electronic device and communication connection capabilities of the electronic device and, based on a variety of considerations, factors, and data (e.g., connection data, network environment data, etc.), identify a communication connection network that the electronic device can utilize to effectuate communication with improved QoS (e.g., increased throughput, reduced latency, etc.). A possible advantage of examples disclosed herein is an ability to connect to one or more spectrums autonomously without friction, which is not achievable with conventional communication networks. Another possible advantage of examples disclosed herein is improved service and choice based on network quality with a lower total cost of ownership for enterprises.
Network quality and usage optimizations are typically focused on a specific set of users (e.g., UEs) using the same connection type (e.g., 4G/5G, Wi-Fi, etc.) with no consideration of actual environmental (e.g., weather conditions) and/or network-centric environmental impacts (e.g., blockage of signal) or actual usage at a particular network node, either a fixed network node or base station (e.g., 5G gNodeB (gNB)) or mobile network node or base station (e.g., satellite node B (sNB), access point on a moving vehicle, etc.). Conventional techniques for optimizing and/or otherwise improving network communications are limited to one connection type and do not consider real-time usage of multi-access users and devices, which can include wireless, wired, active/passive, etc., sensors. Examples disclosed herein overcome the limitations of conventional network communication optimizations by utilizing an array of real-time network telemetry and/or real-world multi-access activity at a specific physical location.
In some disclosed examples, a wireless measurement engine (e.g., a wireless network measurement engine) can invoke Artificial Intelligence/Machine Learning (AI/ML) techniques that utilize multi-access converged connection data at a physical network node and actual network traffic utilization to determine measurements (e.g., network measurements, wireless measurements, wireless network measurements, etc.) based on wireless data. Based on the measurements, the wireless measurement engine can configure and/or reconfigure network nodes as re-dimensioned network nodes that can adapt over time to specifically address the actual needs of connected UEs or gateways.
Determination of wireless measurements may include an active collection of physical layer (e.g., Layer 1 (L1)) data. Analysis of wireless measurements may include use of the physical layer (e.g., L1) data. Network-focused wireless measurement analytics may refer to optimizing and/or benchmarking network characteristics. Application-focused wireless measurement determination may refer to the collection of physical layer (e.g., L1) wireless network data (e.g., radio access network data, Wi-Fi data, etc.) that can improve application performance and latency associated with electronic devices (e.g., wireless devices, mobile devices, etc.).
In some disclosed examples, the wireless measurement engine can determine wireless measurements, which can include wireless device statistics with uplink and downlink scheduling information including modulation and coding schemes, NR resource blocks, a number of orthogonal frequency-division multiplexing (OFDM) symbols per slot, a number of slots per frame, a number of slots per subframe, channel quality indicators (CQIs), rank indicators for antenna quality, signal-to-noise ratios (SNRs), timing advance data, etc.
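The per-device statistics enumerated above can be grouped, purely as an illustrative sketch (all field names below are hypothetical, not drawn from any standard API), into a single record type:

```python
from dataclasses import dataclass

# Hypothetical record grouping the per-UE wireless measurements named above.
@dataclass
class WirelessDeviceStats:
    mcs_index: int            # modulation and coding scheme index
    nr_resource_blocks: int   # allocated NR resource blocks
    slots_per_frame: int
    slots_per_subframe: int
    cqi: int                  # channel quality indicator
    rank_indicator: int       # rank indicator for antenna quality
    snr_db: float             # signal-to-noise ratio in decibels
    timing_advance: int       # timing advance value

# Illustrative values only.
stats = WirelessDeviceStats(mcs_index=17, nr_resource_blocks=106,
                            slots_per_frame=20, slots_per_subframe=2,
                            cqi=12, rank_indicator=2, snr_db=23.5,
                            timing_advance=31)
print(stats.cqi, stats.snr_db)
```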
In some disclosed examples, the wireless measurement engine can determine wireless measurements, which can include radio physical layer (e.g., L1) statistics including how long an application and/or service took to process uplink and/or downlink pipelines on a distributed unit (e.g., a virtual radio access network (vRAN) distributed unit (DU)). In some disclosed examples, the wireless measurement engine can determine wireless measurements, which can include vRAN DU statistics including how many cores (e.g., compute cores) are allocated and what is the utilization per core. In some disclosed examples, the wireless measurement engine can determine wireless measurements, which can include open radio access network (O-RAN) statistics including packet throughput, latencies between a radio unit and a DU, etc. In some disclosed examples, the wireless measurement engine can determine wireless measurements, which can include platform statistics including power consumption statistics that are exposed from the physical layer (e.g., L1 radio layer).
In some disclosed examples, the wireless measurement engine can determine wireless measurements, which can include in-phase (I) and quadrature (Q) samples (also referred to as IQ samples). In some disclosed examples, the IQ samples can include uplink (UL), downlink (DL), and channel sounding samples, which can be generated based on the transmission and/or reception of Sounding Reference Signals (SRS). For example, a base station can transmit a known SRS signal (e.g., a known IQ SRS sample when the known SRS signal is in IQ representation) to a wireless device to cause the wireless device to transmit back the known SRS signal. However, the known SRS signal received from the wireless device may be altered due to interference. As a result, the received SRS signal from the wireless device is different than the known SRS signal. For example, the received SRS signal from the wireless device can be a resulting SRS signal (e.g., a resulting IQ SRS sample when the resulting SRS signal is in IQ representation) because the signal is a result of the interference.
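Because the resulting SRS signal differs from the known SRS signal due to the channel, comparing the two per IQ sample recovers the interference/channel effect. The following is a minimal sketch under the assumption of a single flat complex channel gain (all values are illustrative, and the division-based estimate is a generic least-squares technique, not necessarily the method of this disclosure):

```python
import cmath

# Known IQ SRS samples transmitted by the base station (illustrative QPSK-like values).
known_srs = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]

# Assumed channel: 0.8 gain with a 0.3 radian phase rotation.
channel = 0.8 * cmath.exp(1j * 0.3)

# Resulting IQ SRS samples received back, altered by the channel.
received_srs = [channel * s for s in known_srs]

# Per-sample channel estimate: received divided by known.
estimate = [r / k for r, k in zip(received_srs, known_srs)]
gain = abs(estimate[0])
phase = cmath.phase(estimate[0])
print(round(gain, 3), round(phase, 3))  # recovers the assumed gain and phase
```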
In some disclosed examples, the wireless measurement engine can self-calibrate network nodes using active, live, operational, etc., usage data. For example, the wireless measurement engine can adjust (e.g., automatically adjust) a network node to converged multi-access usage by reconfiguring, based on determining wireless measurements, either fixed or mobile network nodes to accommodate actual, live, or real-world usage and telemetry of connected users, devices, or gateways.
In some disclosed examples, the wireless measurement engine can access wireless connectivity at open systems interconnection (OSI) Layers 1-2 (e.g., L1, Layer 2 (L2), etc.), sense the wireless spectrum type, enable the connection, provide multi-access in one base station (or more), and/or apply the appropriate billing method. In examples disclosed herein, multi-access includes terrestrial (e.g., cellular (e.g., 5G NR, fourth generation (4G)/Long Term Evolution (LTE), etc.), Wi-Fi, citizens broadband radio service (CBRS), public safety spectrum(s), etc.) connectivity as well as non-terrestrial satellite-based connectivity (e.g., Ku band, Ka band, low earth orbit (LEO) uplink/downlink (UL/DL), geostationary earth orbit (GEO) UL/DL, proprietary satellite spectrum(s), licensed and/or unlicensed terrestrial spectrums (e.g., a UE connected to a satellite using a licensed terrestrial spectrum), etc.). In some disclosed examples, the wireless measurement engine can use real-time, low-latency analytics to determine how and when to connect to the UE/device. In some disclosed examples, the wireless measurement engine can perform on-the-wire modifications to ongoing packet streams using real-time telemetry.
Additionally or alternatively, the device environment 102 may be implemented by any other generation of cellular technology such as fourth generation cellular (e.g., 4G) LTE and/or sixth generation cellular (e.g., 6G).
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In some examples, the L1 data can correspond to L1 data of an OSI model. In some examples, the L1 data of an OSI model can correspond to the physical layer of the OSI model (e.g., L1 data can be referred to as physical layer wireless data), L2 data of the OSI model can correspond to the data link layer of the OSI model (e.g., L2 data can be referred to as data link layer wireless data), L3 data of the OSI model can correspond to the network layer of the OSI model (e.g., L3 data can be referred to as network layer wireless data), and so forth. In some examples, the L1 data can correspond to the transmitted raw bit stream over a physical medium (e.g., a wired line physical structure such as coax or fiber, an antenna, a receiver, a transmitter, a transceiver, etc.). In some examples, the L1 data can be implemented by signals, binary transmission, etc. In some examples, the L2 data can correspond to physical addressing of the data, which may include Ethernet data, MAC addresses, logical link control (LLC) data, etc. In some examples, the L3 data can correspond to the functional and procedural means of transferring variable-length data sequences from a source to a destination host via one or more networks, while maintaining the QoS functions.
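The layer-to-data correspondence above can be summarized, purely as an illustrative sketch (the mapping structure is hypothetical, not part of the OSI specification itself):

```python
# Minimal sketch mapping the OSI layer numbers discussed above to their names
# and the kind of wireless data each layer's data corresponds to.
OSI_LAYERS = {
    1: ("physical", "physical layer wireless data (raw bit stream, signals)"),
    2: ("data link", "data link layer wireless data (Ethernet, MAC, LLC)"),
    3: ("network", "network layer wireless data (variable-length transfer, QoS)"),
}

def layer_name(layer_number: int) -> str:
    """Return the OSI layer name for a given layer number."""
    return OSI_LAYERS[layer_number][0]

print(layer_name(1))  # physical
print(layer_name(3))  # network
```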
In the illustrated example of
RUs, RRUs, RANs, vRANs, DUs, CUs, and/or core servers as disclosed herein can be implemented by FLEXRAN™ Reference Architecture for Wireless Access provided by Intel® Corporation of Santa Clara, California. In some examples, FLEXRAN™ can be implemented by an off-the-shelf general-purpose Xeon® series processor with Intel Architecture server system and/or a virtualized platform including components of processors, input/output (I/O) circuitry, and/or accelerators (e.g., artificial intelligence and/or machine-learning accelerators, ASICs, FPGAs, GPUs, etc.) provided by Intel® Corporation. Additionally or alternatively, FLEXRAN™ can be implemented by a specialized and/or customized server system and/or a virtualized platform including components of processors, input/output (I/O) circuitry, and/or accelerators (e.g., artificial intelligence and/or machine-learning accelerators, ASICs, FPGAs, GPUs, etc.) provided by Intel® Corporation and/or any other manufacturer. A possible advantage of examples disclosed herein is that, in some examples, FlexRAN™ Reference Architecture can enable increased levels of flexibility with the programmable on-board features, memory, and I/O. A possible advantage of examples disclosed herein is that, in some examples, deployments based on the FlexRAN™ Reference Architecture can scale from small to large capacities with the same set of components running different applications or functions, ranging from the RAN to core network and data center including edge computing and media, enabling economies of scale. A possible advantage of examples disclosed herein is that, in some examples, architectures, deployments, and/or systems based on the 3rd Generation Partnership Project (3GPP) standard and/or the Open RAN standard can be implemented by hardware, software, and/or firmware associated with FLEXRAN™.
For example, a 3GPP system as disclosed herein can include a server including processor circuitry that can execute and/or instantiate machine-readable instructions to implement FLEXRAN™.
In some examples, hardware platforms, such as the IoT device 3350 of
In the illustrated example of
In the illustrated example of
In some examples, the application layer 128 can include and/or implement business support systems (BSS), operations support systems (OSS), 5G core (5GC) systems, Internet Protocol multimedia core network subsystems (IMS), etc., in connection with operation of a telecommunications network, such as the wireless communication system 100 of
In the illustrated example of
In the illustrated example of
In some examples, the wireless measurement engine circuitry 200, or portion(s) thereof, can implement a measurement engine (e.g., a cellular data measurement engine, a wireless data measurement engine, a wireless measurement engine, etc.). For example, the wireless measurement engine circuitry 200, or portion(s) thereof, can implement a measurement engine based on FlexRAN™ Reference Architecture. In some examples, at least one of one(s) of the first networks 118, one(s) of the RRUs 120, one(s) of the DUs 122, one(s) of the CUs 124, one(s) of the core devices 126, or the cloud network 107 can be implemented by the wireless measurement engine circuitry 200. For example, a first one and/or a second one of the first networks 118, or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200. In some examples, a first one and/or a second one of the RRUs 120, or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200. In some examples, a first one and/or a second one of the DUs 122, or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200. In some examples, a first one and/or a second one of the CUs 124, or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200. In some examples, a first one and/or a second one of the core devices 126, or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200. In some examples, the cloud network 107, or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200.
In the illustrated example of
In the illustrated example of
In some examples, the interface circuitry 210 can receive data from one(s) of the devices 108, 110, 112, 114, 116, the first networks 118, the RRUs 120, the DUs 122, the CUs 124, the core devices 126, the device environment 102 (e.g., the 5G device environment), the edge network 104, the core network 106, the cloud network 107, etc., of
In some examples, the interface circuitry 210 can be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a BLUETOOTH® interface, a near field communication (NFC) interface, a PCI interface, a PCIe interface, a secure payment gateway (SPG) interface, a Global Navigation Satellite System (GNSS) interface, a 4G/5G/6G interface, a CBRS interface, a Category 1 (CAT-1) interface, a Category M (CAT-M) interface, a NarrowBand-Internet of Things (NB-IoT) interface, etc., and/or any combination(s) thereof. In some examples, the interface circuitry 210 can be implemented by one or more communication devices such as one or more receivers, one or more transceivers, one or more modems, one or more gateways (e.g., residential, commercial, or industrial gateways), one or more wireless access points (WAPs), and/or one or more network interfaces to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network, such as the device environment 102 (e.g., the 5G device environment), the edge network 104, the core network 106, the cloud network 107, the first networks 118, etc., of
In the illustrated example of
In some examples, the parser circuitry 220 includes and/or implements a dynamic load balancer (DLB) to extract data received by and/or otherwise associated with the interface circuitry 210. In some examples, the dynamic load balancer can be implemented by a Dynamic Load Balancer provided by Intel® of Santa Clara, California. Additionally or alternatively, the parser circuitry 220 may implement a queue management service, which can be implemented by hardware, software, and/or firmware. In some examples, the parser circuitry 220 generates queue events (e.g., data queue events, enqueue events, dequeue events, etc.). In some examples, the queue events can be implemented by an array of data (e.g., a data array). Alternatively, the queue events may be implemented by any other data structure. For example, the parser circuitry 220 can generate a first queue event, which can include a data pointer that references data stored in memory, a priority (e.g., a value indicative of the priority, a data priority, etc.) of the data, etc., and/or any combination(s) thereof. In some examples, the events can correspond to, be indicative of, and/or otherwise be representative of workload(s) (e.g., compute or computational workload(s), data processing workload(s), etc.) to be facilitated by DLB circuitry, which can be implemented by the parser circuitry 220. For example, the parser circuitry 220 can generate a queue event as an indication of data to be enqueued to the DLB circuitry to generate output(s) based on the enqueued data.
In some examples, a queue event, such as the first queue event, can be implemented by an interrupt (e.g., a hardware, software, and/or firmware interrupt) that, when generated and/or otherwise invoked, can indicate to the DLB circuitry (and/or DLB service) that there is/are workload(s) associated with the wireless physical layer data 262 to be performed or carried out. In some examples, the DLB circuitry can enqueue (e.g., add, insert, load, store, etc.) the queue event by adding, enqueueing, inserting, loading, and/or otherwise storing the data pointer, the priority, etc., into first hardware queue(s) (e.g., producer or data producer queue(s), load balancer queue(s), hardware implemented load balancer queue(s), etc.) included in and/or otherwise implemented by the DLB circuitry. Additionally or alternatively, the DLB service can enqueue the queue event by enqueueing, loading, and/or otherwise storing the data pointer, the priority, etc., into the first hardware queue(s).
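A software model of the producer-queue behavior above can be sketched as follows. This is a hypothetical illustration in which the hardware queue is modeled as a priority heap and the data pointer is a key into a data store; it is not the actual DLB circuitry interface:

```python
import heapq
from dataclasses import dataclass, field
from typing import Any

# Hypothetical model of a DLB producer queue: each queue event carries a
# data pointer and a priority (lower value = higher priority, by assumption).
@dataclass(order=True)
class QueueEvent:
    priority: int
    data_pointer: Any = field(compare=False)  # reference to data stored in memory

producer_queue: list = []

def enqueue(event: QueueEvent) -> None:
    """Enqueue a queue event into the (modeled) hardware producer queue."""
    heapq.heappush(producer_queue, event)

def dequeue() -> QueueEvent:
    """Dequeue the highest-priority queue event for a consumer core."""
    return heapq.heappop(producer_queue)

enqueue(QueueEvent(priority=2, data_pointer="srs_buffer_0"))
enqueue(QueueEvent(priority=0, data_pointer="srs_buffer_1"))
print(dequeue().data_pointer)  # srs_buffer_1 (highest priority dequeued first)
```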
In some examples, the priority (e.g., the data priority) can be based on waiting for all antenna data (e.g., SRS data from all expected antenna(s)) or waiting for a minimum threshold of data and/or measurements. For example, different queues can have different priorities. In some examples, a first data queue maintained by the DLB circuitry can be associated with a first data priority in which SRS data is not to be enqueued to worker core(s) until the SRS data from all expected antenna(s) is received. In some examples, a second data queue maintained by the DLB circuitry can be associated with a second data priority in which SRS data is not to be enqueued to worker core(s) until a threshold amount of SRS data and/or associated measurements is received and/or determined.
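The two data-priority policies described above can be sketched as release conditions (the function names and thresholds below are hypothetical illustrations):

```python
# First policy: SRS data is not enqueued to worker core(s) until SRS data
# from all expected antennas has been received.
def ready_all_antennas(received_antennas: set, expected_antennas: set) -> bool:
    return expected_antennas <= received_antennas

# Second policy: SRS data is not enqueued to worker core(s) until a minimum
# threshold amount of SRS data and/or measurements has been received.
def ready_threshold(sample_count: int, minimum_samples: int) -> bool:
    return sample_count >= minimum_samples

print(ready_all_antennas({"ant0", "ant1"}, {"ant0", "ant1", "ant2"}))  # False
print(ready_threshold(128, 64))                                        # True
```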
In some examples, a worker core can be a core of processor circuitry that is available to receive a workload to process. For example, the worker core can be idle or not executing a workload. In some examples, the worker core can be busy or executing a workload but may not be busy or executing a workload when the worker core is needed to receive another workload. In some examples, a worker core can be a core of processor circuitry that is configured to handle a particular workload. For example, a workload to be processed can be a machine-learning workload. In some examples, a core of processor circuitry may not be a worker core if the core is not configured to execute and/or instantiate the machine-learning workload. In some examples, a core of processor circuitry may not be a worker core if the core is not configured to execute and/or instantiate the machine-learning workload with increased efficiency and thereby the core may be a sub-optimal or nonideal choice to execute and/or instantiate the machine-learning workload. In some examples, a core of processor circuitry can be a worker core if the core is configured for a particular workload, such as by having a configuration of an operating frequency (e.g., a clock frequency), access to instructions from an Instruction Set Architecture (ISA) (e.g., a machine-learning ISA, a 5G cellular related ISA, etc.), etc., and/or any combination(s) thereof, to execute the workload.
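Worker-core selection per the criteria above (idle, and configured for the workload) can be sketched as follows. The `Core` record and the ISA-extension labels are hypothetical illustrations:

```python
from dataclasses import dataclass

# Hypothetical core descriptor: a core qualifies as a worker core for a
# workload only if it is idle and configured with the needed ISA support.
@dataclass
class Core:
    core_id: int
    busy: bool
    isa_extensions: frozenset  # e.g., {"ml"} for a machine-learning ISA

def pick_worker_core(cores, required_isa: str):
    """Return the first idle core configured for the workload, else None."""
    for core in cores:
        if not core.busy and required_isa in core.isa_extensions:
            return core
    return None

cores = [Core(0, True, frozenset({"ml"})),          # busy: not available
         Core(1, False, frozenset({"5g"})),         # idle but not ML-configured
         Core(2, False, frozenset({"ml", "5g"}))]   # idle and ML-configured
chosen = pick_worker_core(cores, "ml")
print(chosen.core_id)  # 2
```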
In some examples, the DLB circuitry can dequeue the queue event by dequeuing, loading, and/or otherwise storing the data pointer, the priority, etc., into second hardware queue(s) (e.g., consumer or data consumer queue(s), load balancer queue(s), hardware implemented load balancer queue(s), etc.) that may be accessed by compute cores (e.g., consumer cores of processor circuitry, worker cores of processor circuitry, etc.) for subsequent processing. In some examples, the compute cores are included in and/or otherwise implemented by the parser circuitry 220, and/or, more generally, the wireless measurement engine circuitry 200. In some examples, the compute cores are included in and/or otherwise implemented by the DLB circuitry. In some examples, one or more of the compute cores are separate from the DLB circuitry. Additionally or alternatively, the DLB service can dequeue the queue event by dequeuing, loading, and/or otherwise storing the data pointer, the priority, etc., into the second hardware queue(s).
In some examples, a compute core can write data to the queue event. For example, the queue event can be implemented by a data array. In some examples, the compute core can write data into one or more positions of the data array. For example, the compute core can add data to one or more positions of the data array that does not include data, modify existing data of the data array, and/or remove existing data of the data array. By way of example, the parser circuitry 220 can dequeue a queue event from the DLB circuitry. The parser circuitry 220 can determine that the queue event includes a data pointer that references wireless data, such as SRS data. The parser circuitry 220 can complete (and/or cause completion of) a computation operation or workload on the wireless data, such as identifying data portion(s) of interest from the wireless data, extracting data portion(s) of interest from the wireless data, etc. After completion of the computation operation/workload, the parser circuitry 220 can cause a compute core to write a completion bit, byte, etc., into the queue event. After the completion bit, byte, etc., is written to the queue event, the parser circuitry 220 can enqueue the queue event back to the DLB circuitry.
In some examples, the DLB circuitry can determine that the computation operation has been completed by identifying the completion bit, byte, etc., in the queue event.
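The dequeue, process, mark-complete, re-enqueue flow described above can be sketched end to end. All names and the "portion of interest" computation below are hypothetical illustrations, not the actual parser circuitry logic:

```python
# Assumed memory: the data pointer in a queue event references SRS data here.
data_store = {"srs_ptr": [0.1, 0.9, 0.4]}

def process_queue_event(event: dict) -> dict:
    """Run a computation workload on the referenced wireless data, then write
    a completion bit into the queue event so the DLB circuitry can detect it."""
    samples = data_store[event["data_pointer"]]
    event["result"] = max(samples)   # illustrative "data portion of interest"
    event["complete"] = 1            # completion bit written by the compute core
    return event

event = {"data_pointer": "srs_ptr", "priority": 0, "complete": 0}
event = process_queue_event(event)   # dequeued, processed, ready to re-enqueue
print(event["complete"], event["result"])  # 1 0.9
```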
In the illustrated example of
In some examples, the device identification circuitry 230 can generate association(s) (e.g., data association(s)) of a device (e.g., an identification of a device), a measurement periodicity, and a location. For example, the device identification circuitry 230 can generate one or more data associations of the first device 108, a measurement periodicity of determining a location of the first device 108 two times per second (e.g., 2 Hertz (Hz)), and a location of the first device 108 in the device environment 102 of
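Such a data association can be sketched, purely for illustration (the record type and field names below are hypothetical), as a triple of device identification, measurement periodicity, and location:

```python
from dataclasses import dataclass

# Hypothetical data association of a device identification, a measurement
# periodicity, and a location, mirroring the 2 Hz example above.
@dataclass
class DeviceAssociation:
    device_id: str
    measurement_periodicity_hz: float  # location determinations per second
    location: str

assoc = DeviceAssociation(device_id="first_device_108",
                          measurement_periodicity_hz=2.0,  # two times per second
                          location="device_environment_102")
print(assoc.measurement_periodicity_hz)  # 2.0
```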
In the illustrated example of
In some examples, the wireless physical layer measurements 264 can include L1 latency measurements, such as downlink latency measurements (e.g., a minimum downlink latency, a maximum downlink latency, an average downlink latency), downlink latency per transmission time interval (TTI) measurements (e.g., minimum, maximum, average, etc.), uplink latency measurements (e.g., minimum, maximum, average, etc.), uplink latency per TTI measurements (e.g., minimum, maximum, average, etc.), SRS latency measurements (e.g., minimum, maximum, average, etc.), SRS latency per TTI measurements (e.g., minimum, maximum, average, etc.), etc. In some examples, the wireless physical layer measurements 264 can include L1 cellular measurements, such as a cellular data throughput between the media access control (MAC) and physical (PHY) layers for each active cell, active wireless device, and/or each uplink and/or downlink per wireless device. In some examples, the wireless physical layer measurements 264 can include L1 baseband unit (BBU) core usage measurements, such as a percentage core utilization of respective compute cores of a BBU. In some examples, the wireless physical layer measurements 264 can include L1 O-RAN fronthaul measurements, such as a total number of receive (RX) packets, a number of RX packets that arrive on time, a number of RX packets that arrive early, a number of RX packets that arrive late, a number of RX packets that are corrupt, a number of RX packets that are duplicate, etc.
In some examples, the wireless physical layer measurements 264 can include L1 vRAN measurements, such as a number of RX packets per second, RX throughput, transmit (TX) packets per second, TX throughput, etc. In some examples, the wireless physical layer measurements 264 can include vRAN port measurements, such as a number of RX physical uplink shared channel (PUSCH) packets per each antenna port, a number of RX SRS packets per each antenna port, a number of RX physical random access channel (PRACH) packets per each antenna port, etc. In some examples, the wireless physical layer measurements 264 can include any other type of wireless measurement, such as a number of physical downlink shared channel (PDSCH) per slot, a number of physical downlink control channel (PDCCH) per slot, a number of channel state information reference signal (CSI-RS) per slot, a number of PUSCH per slot, a number of SRS per slot, a number of L1 cells, a number of L1 cores, a number of L1 radios, a number of L1 antenna ports, a number of L1 symbols, downlink MAC PHY measurements, a number of uplink MAC PHY measurements, IQ measurements, latency measurements, cell measurements, core usage measurements, etc.
In some examples, the wireless measurement determination circuitry 240 can calculate, determine, generate, and/or output wireless measurements, such as a number of carriers, a download bandwidth, an upload bandwidth, a download fast Fourier transform (FFT) size, an upload FFT size, a number of downlink resource blocks, a number of uplink resource blocks, a number of transmission antennas, a number of receiving antennas, a number of downlink ports, a number of uplink ports, a numerology measurement, a cellular identifier, a synchronization signal block (SSB) power, an SSB period, an SSB subcarrier spacing, an SSB subcarrier offset, an SSB mask, a number of active SSBs, a demodulation reference signal (DMRS) type measurement, a PRACH configuration index or identifier, a PRACH subcarrier spacing measurement, a PRACH zero correlation zone configuration measurement, a PRACH restricted set measurement, a PRACH root sequence index, a PRACH starting frequency, a PRACH frequency division multiplexing (FDM) measurement, a PRACH SSB random access channel (RACH) measurement, a PRACH number of receive RU (NrofRXRU) measurement, a cyclic prefix measurement, a group hop flag measurement, a sequence hop flag measurement, a hopping index or identifier, a frame duplex mode measurement, a time division duplex (TDD) period measurement, a slot configuration measurement, an uplink throughput measurement, a downlink throughput measurement, a transmission comb, etc.
In some examples, the wireless measurement determination circuitry 240 can calculate, determine, generate, and/or output wireless measurements, such as a number (e.g., a maximum number) of downlink channels in a slot, a number (e.g., a maximum number) of downlink data channels in a slot, a number (e.g., a maximum number) of downlink control channels in a slot, a number (e.g., a maximum number) of uplink channels in a slot, a number (e.g., a maximum number) of uplink data channels in a slot, a number (e.g., a maximum number) of uplink control channels in a slot, a number (e.g., a maximum number) of reference signal (e.g., SRS) channels in a slot, etc. Additionally or alternatively, the wireless measurement determination circuitry 240 may calculate, determine, generate, and/or output any other type and/or quantity of wireless measurements associated with wireless data.
In some examples, the wireless measurement determination circuitry 240 determines reliability data associated with a network. For example, the wireless measurement determination circuitry 240 can identify an antenna and/or a receiver at which the wireless physical layer data 262 is received. In some examples, the wireless measurement determination circuitry 240 can determine that the antenna and/or the receiver have technical specifications such as an operating frequency, a bandwidth, a polarization, an antenna gain, a platform height, an incident angle, an azimuth beamwidth, an elevation beamwidth, a horizontal beamwidth, a vertical beamwidth, an electrical down tilt, an upper side lobe level, a front-to-back ratio, isolation between ports, a power rating, an impedance, an antenna configuration, a return loss, etc. For example, the wireless measurement determination circuitry 240 can determine that the wireless physical layer data 262 from a first antenna with first technical specifications can have increased reliability and/or increased data integrity (and/or reduced uncertainty or data uncertainty or error rate) with respect to the wireless physical layer data 262 from a second antenna with second technical specifications. For example, the first antenna can have a higher power rating, azimuth beamwidth, etc., than the power rating, the azimuth beamwidth, etc., of the second antenna. In some examples, the technical specifications of the antennas and/or the receivers can be input to the machine-learning model 266 to improve an accuracy of the output(s). In some examples, the output(s) of the machine-learning model 266 can include reliability indicators, uncertainty values, etc., associated with the wireless measurement determinations.
For example, the output(s) of the machine-learning model 266 can include (i) a percentage of dropped wireless data packets and (ii) a reliability indicator (e.g., a reliability indicator of 70% reliable where 100% is the most reliable and 0% is the least reliable, 85% reliable, 98% reliable, etc.) representative (e.g., a reliability metric representative) of the accuracy of the percentage and/or a reliability of the underlying data (e.g., a quantification of the reliability of data from one or more first antennas of a first base station). Additionally or alternatively, any other input to the machine-learning model 266, such as the wireless physical layer data 262 and/or the wireless physical layer measurements 264, can be assigned reliability data or values to be evaluated by the machine-learning model 266.
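One simple way to use such per-antenna reliability values is to weight measurements by their reliability before further evaluation. The sketch below is an assumption for illustration (the disclosure does not specify how reliability values are combined); the dropped-packet percentages and reliability figures are hypothetical:

```python
# Hedged sketch: combining (value, reliability) pairs into a
# reliability-weighted estimate; the combination rule is an assumption.
def weighted_measurement(values_with_reliability):
    """Reliability-weighted mean of (value, reliability) pairs."""
    total_w = sum(r for _, r in values_with_reliability)
    if total_w == 0:
        return None
    return sum(v * r for v, r in values_with_reliability) / total_w

# First antenna (e.g., higher power rating) reported 10% dropped packets
# at 98% reliability; second antenna reported 20% at 70% reliability.
estimate = weighted_measurement([(0.10, 0.98), (0.20, 0.70)])
```

The weighted estimate leans toward the more reliable antenna's measurement, consistent with the text's description of data from a first antenna having increased reliability with respect to a second antenna.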
In the illustrated example of
In the illustrated example of
In some examples, the wireless physical layer data 262 can include data received by the interface circuitry 210. For example, the wireless physical layer data 262 can be data received from one(s) of the devices 108, 110, 112, 114, 116, the first networks 118, the RRUs 120, the DUs 122, the CUs 124, the core devices 126, the device environment 102, the edge network 104, the core network 106, the cloud network 107, etc., of
Artificial intelligence (AI), including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the wireless measurement engine circuitry 200 can train the ML model 266 with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations.
Many different types of machine-learning models and/or machine-learning architectures exist. In some examples, the wireless measurement engine circuitry 200 generates the ML model 266 as neural network model(s). The wireless measurement engine circuitry 200 can use a neural network model to execute an AI/ML workload, which, in some examples, may be executed using one or more hardware accelerators. In general, machine-learning models/architectures that are suitable to use in the example approaches disclosed herein include recurrent neural networks. However, other types of machine learning models could additionally or alternatively be used such as supervised learning artificial neural network (ANN) models, clustering models, classification models, etc., and/or a combination thereof. Example supervised learning ANN models can include two-layer (2-layer) radial basis neural networks (RBN), learning vector quantization (LVQ) classification neural networks, etc. Example clustering models can include k-means clustering, hierarchical clustering, mean shift clustering, density-based clustering, etc. Example classification models can include logistic regression, support-vector machine or network, Naive Bayes, etc. In some examples, the wireless measurement engine circuitry 200 can compile, generate, and/or otherwise output the ML model 266 as a lightweight machine-learning model.
In general, implementing an ML/AI system involves two phases: a learning/training phase and an inference phase. In the learning/training phase, the wireless measurement engine circuitry 200 uses a training algorithm to train the ML model 266 to operate in accordance with patterns and/or associations based on, for example, training data. In general, the ML model 266 include(s) internal parameters (e.g., configuration register data) that guide how input data is transformed into output data, such as through a series of nodes and connections within the ML model 266. Additionally, hyperparameters are used as part of the training process to control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). Hyperparameters are defined to be training parameters that are determined prior to initiating the training process.
Different types of training may be performed based on the type of ML/AI model and/or the expected output. For example, the wireless measurement engine circuitry 200 can invoke supervised training to use inputs and corresponding expected (e.g., labeled) outputs to select parameters (e.g., by iterating over combinations of select parameters) for the ML model 266 that reduce model error. As used herein, “labeling” refers to an expected output of the machine learning model (e.g., a classification, an expected output value, etc.). Alternatively, the wireless measurement engine circuitry 200 may invoke unsupervised training (e.g., used in deep learning, a subset of machine learning, etc.) that involves inferring patterns from inputs to select parameters for the ML model 266 (e.g., without the benefit of expected (e.g., labeled) outputs).
In some examples, the wireless measurement engine circuitry 200 trains the ML model 266 using unsupervised clustering of operating observables. For example, the operating observables can include reference signal data (e.g., SRS measurement data), a certificate (e.g., a digital certificate), an IP address, a manufacturer and/or vendor identifier, a MAC address, a serial number, a universal unique identifier (UUID), data associated with a UE, the wireless physical layer data 262, the wireless physical layer measurements 264, etc., and/or any combination(s) thereof. However, the wireless measurement engine circuitry 200 may additionally or alternatively use any other training algorithm such as stochastic gradient descent, Simulated Annealing, Particle Swarm Optimization, Evolution Algorithms, Genetic Algorithms, Nonlinear Conjugate Gradient, etc.
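Unsupervised clustering of operating observables, as described above, can be illustrated with a minimal one-dimensional k-means routine. This is a generic sketch, not the disclosed training algorithm; the SNR-like observable values and initial centroids are made up for illustration:

```python
# Minimal 1-D k-means sketch of unsupervised clustering of operating
# observables; real observables would be multi-dimensional feature vectors.
def kmeans_1d(points, centroids, iters=10):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assign each observable to its nearest centroid.
            idx = min(range(len(centroids)),
                      key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its assigned observables.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated groups of SNR-like observables (illustrative values).
obs = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]
centroids, clusters = kmeans_1d(obs, centroids=[0.0, 5.0])
```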
In some examples, the wireless measurement engine circuitry 200 can train the ML model 266 until the level of error is no longer reducing. In some examples, the wireless measurement engine circuitry 200 can train the ML model 266 locally on the wireless measurement engine circuitry 200 and/or remotely at an external computing system communicatively coupled to a network. In some examples, the wireless measurement engine circuitry 200 trains the ML model 266 using hyperparameters that control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). In some examples, the wireless measurement engine circuitry 200 can use hyperparameters that control model performance and training speed such as the learning rate and regularization parameter(s). The wireless measurement engine circuitry 200 can select such hyperparameters by, for example, trial and error to reach an optimal model performance. In some examples, the wireless measurement engine circuitry 200 utilizes Bayesian hyperparameter optimization to determine an optimal and/or otherwise improved or more efficient network architecture to avoid model overfitting and improve the overall applicability of the ML model 266. Alternatively, the wireless measurement engine circuitry 200 may use any other type of optimization. In some examples, the wireless measurement engine circuitry 200 can perform re-training. The wireless measurement engine circuitry 200 can execute such re-training in response to override(s) by a user of the wireless measurement engine circuitry 200, a receipt of new training data, etc.
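The "train until the level of error is no longer reducing" criterion can be sketched as an early-stopping check over per-epoch error. The error values and the minimum-improvement threshold below are hypothetical:

```python
# Sketch of training until error plateaus; min_delta is an assumed
# minimum meaningful improvement per epoch.
def train_until_plateau(error_per_epoch, min_delta=0.01):
    """Return the number of epochs run before error stops reducing."""
    best = float("inf")
    epochs_run = 0
    for err in error_per_epoch:
        epochs_run += 1
        if best - err < min_delta:   # error no longer meaningfully reducing
            break
        best = err
    return epochs_run

# Simulated per-epoch validation error that plateaus at the fourth epoch.
epochs = train_until_plateau([0.50, 0.30, 0.20, 0.199, 0.198])
```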
In some examples, the wireless measurement engine circuitry 200 facilitates the training of the ML model 266 using training data. In some examples, the wireless measurement engine circuitry 200 utilizes training data that originates from locally generated data, such as 4G LTE L1 data, 5G L1 data, 6G L1 data, reference signal (e.g., SRS) data, radio identifiers, CIR data, SNR data, etc. In some examples, the wireless measurement engine circuitry 200 utilizes training data that originates from externally generated data. For example, the wireless measurement engine circuitry 200 can utilize L1 data, L2 data, etc., from any data source (e.g., a RAN system, a satellite, etc.).
In some examples where supervised training is used, the wireless measurement engine circuitry 200 can label the training data (e.g., label training data or portion(s) thereof as object identification data, location data, etc.). Labeling is applied to the training data by a user manually or by an automated data pre-processing system. In some examples, the wireless measurement engine circuitry 200 can pre-process the training data using, for example, an interface (e.g., interface circuitry, network interface circuitry, etc.) to extract and/or otherwise identify data of interest and discard data not of interest to improve computational efficiency. In some examples, the wireless measurement engine circuitry 200 sub-divides the training data into a first portion of data for training the ML model 266, and a second portion of data for validating the ML model 266.
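The sub-division of training data into a training portion and a validation portion can be sketched as follows; the 80/20 split ratio and the labeled-sample naming are assumptions for illustration:

```python
# Sketch of sub-dividing labeled training data into a first portion for
# training and a second portion for validating; the ratio is assumed.
def split_dataset(samples, train_fraction=0.8):
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]

# Hypothetical labeled pairs of (SRS sample, location label).
labeled = [("srs_sample_%d" % i, "location_%d" % i) for i in range(10)]
train, validate = split_dataset(labeled)
```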
Once training is complete, the wireless measurement engine circuitry 200 can deploy the ML model 266 for use as executable construct(s) that process(es) an input and provides output(s) based on the network of nodes and connections defined in the ML model 266. The wireless measurement engine circuitry 200 can store the ML model 266 in a datastore, such as the datastore 260, that can be accessed by the wireless measurement engine circuitry 200, a cloud repository, etc. In some examples, the wireless measurement engine circuitry 200 can transmit the ML model 266 to external computing system(s) via a network. In some examples, in response to transmitting the ML model 266 to the external computing system(s), the external computing system(s) can execute the ML model 266 to execute AI/ML workloads with at least one of improved efficiency or performance to achieve improved object tracking, location detection and/or determination, etc., and/or any combination(s) thereof.
Once trained, the deployed one(s) of the ML model 266 can be operated in an inference phase to process data. In the inference phase, data to be analyzed (e.g., live data) is input to the ML model 266, and the ML model 266 execute(s) to create output(s). This inference phase can be thought of as the AI “thinking” to generate the output based on what it learned from the training (e.g., by executing the ML model 266 to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as an input to the ML model 266. Moreover, in some examples, the output data can undergo post-processing after it is generated by the ML model 266 to transform the output into a useful result (e.g., a display of data, a detection and/or identification of an object, a location determination of an object, an instruction to be executed by a machine, etc.).
In some examples, output of the deployed one(s) of the ML model 266 can be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed one(s) of the ML model 266 can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model.
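The feedback loop described above can be sketched as a simple threshold check on measured accuracy of the deployed model; the accuracy values and the 90% threshold are hypothetical:

```python
# Sketch of triggering re-training when feedback indicates that deployed
# model accuracy has fallen below a threshold (threshold value assumed).
def should_retrain(feedback_accuracies, threshold=0.90):
    """True when average feedback accuracy drops below the threshold."""
    avg = sum(feedback_accuracies) / len(feedback_accuracies)
    return avg < threshold

trigger = should_retrain([0.95, 0.88, 0.82])   # accuracy drifting downward
```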
In some examples, the outputs of the ML model 266 can be the wireless physical layer measurements 264, or portion(s) thereof. In some examples, the outputs of the ML model 266 can be a recommendation to change an aspect of a network. For example, the recommendation can be to increase an antenna power of a transmitting UE and/or a receiving base station. In some examples, the recommendation can be to activate, enable, and/or increase a number of compute cores of a base station that is allocated to handle and/or process network traffic to improve network performance and/or throughput (e.g., increase performance and/or increase throughput). In some examples, the recommendation can be to deactivate, disable, and/or decrease a number of compute cores of a base station that is allocated to handle and/or process network traffic to conserve power.
As used herein, “data” is information in any form that may be ingested, processed, interpreted and/or otherwise manipulated by processor circuitry to produce a result. The produced result may itself be data. As used herein, a “dataset” is a set of one or more collections of information (e.g., unprocessed and/or raw data, calculated and/or determined measurements based on the unprocessed and/or raw data, etc.) in any form that may be ingested, processed, interpreted and/or otherwise manipulated by processor circuitry to produce a result. The produced result may itself be data. As used herein, a “model” is a set of instructions and/or data that may be ingested, processed, interpreted and/or otherwise manipulated by processor circuitry to produce a result. Often, a model is operated using input data to produce output data in accordance with one or more relationships reflected in the model. The model may be based on training data. As used herein, a “threshold” is expressed as data, such as a numerical value represented in any form, that may be used by processor circuitry as a reference for a comparison operation.
In some examples, the wireless measurement engine circuitry 200 includes means for receiving and/or transmitting data (e.g., wireless physical layer data). For example, the means for receiving and/or transmitting data may be implemented by the interface circuitry 210. In some examples, the interface circuitry 210 may be instantiated by processor circuitry such as the example processor 3352 of
In some examples, the wireless measurement engine circuitry 200 includes means for extracting data (e.g., wireless physical layer data) and/or means for parsing data (e.g., wireless physical layer data). For example, the means for extracting and/or parsing data may be implemented by the parser circuitry 220. In some examples, the parser circuitry 220 may be instantiated by processor circuitry such as the example processor 3352 of
In some examples, the wireless measurement engine circuitry 200 includes means for identifying a device. For example, the means for identifying a device may be implemented by the device identification circuitry 230. In some examples, the device identification circuitry 230 may be instantiated by processor circuitry such as the example processor 3352 of
In some examples, the wireless measurement engine circuitry 200 includes means for determining a measurement (e.g., a wireless physical layer measurement). For example, the means for determining a measurement may be implemented by the wireless measurement determination circuitry 240. In some examples, the wireless measurement determination circuitry 240 may be instantiated by processor circuitry such as the example processor 3352 of
In some examples, the wireless measurement engine circuitry 200 includes means for generating an event (e.g., event data). For example, the means for generating an event may be implemented by the event generation circuitry 250. In some examples, the event generation circuitry 250 may be instantiated by processor circuitry such as the example processor 3352 of
In some examples, the wireless measurement engine circuitry 200 includes means for storing data. For example, the means for storing data may be implemented by the datastore 260. In some examples, the datastore 260 may be instantiated by processor circuitry such as the example processor 3352 of
While an example manner of implementing the wireless measurement engine circuitry 200 is illustrated in
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In example operation, the wireless device 502 transmits wireless data, such as 5G reference signal (e.g., SRS) data, Wi-Fi data packets, etc., to one(s) of the gNBs 504, 506, 508. The one(s) of the gNBs 504, 506, 508 provide, output, and/or cause transmission of the wireless data to the server 510. The server 510 can determine wireless measurements as disclosed herein in substantially real-time. The server 510 can determine to change the second wireless communication system 500 based on the wireless measurements. For example, the server 510 can instruct the wireless device 502 via one(s) of the gNBs 504, 506, 508 to increase a transmission power associated with an antenna of the wireless device 502, switch from a 5G network to a Wi-Fi network, etc., and/or any combination(s) thereof. In the example of
In example operation, the wireless interface 602 and/or the wired interface 610 can obtain example measurements 612 (e.g., wireless measurements, network measurements, etc.) associated with an example UE 614 or any other type of communication-enabled device via an example RU 616. For example, the measurements 612 can include user statistics with uplink and downlink scheduling information, radio layer (L1) statistics, vRAN DU statistics, O-RAN statistics (e.g., statistics based on and/or associated with the Open Radio Access Network (O-RAN) standard), platform statistics, I/Q samples (e.g., in-phase and quadrature samples), etc., and/or any combination(s) thereof, associated with a UE or any other type of communication-enabled device. In some examples, one(s) of the measurements 612 can be determined by the RU 616 and/or the server 600 based on wireless data, such as the wireless physical layer data 262 of
A possible advantage of examples disclosed herein is that the server 600 can utilize the measurements 612 to effectuate uplink and/or downlink scheduling of wireless communication. For example, the server 600 can identify an example wireless communication type selection 618 based on the measurements 612. In some examples, the server 600 can determine based on the measurements 612 that an application executed and/or instantiated by the UE 614 is to switch or transition from a first type of wireless communication (e.g., 4G LTE, 5G, Wi-Fi, etc.) to a second type of wireless communication (e.g., 4G LTE, 5G, Wi-Fi, etc.), which can have increased bandwidth (e.g., the second type of wireless communication has a first bit rate (e.g., bits per second (bps)) that is greater than a second bit rate (e.g., bps) of the first type of wireless communication). In some examples, a user associated with the UE 614, a service level agreement (SLA) and/or policy (e.g., an enterprise policy) associated with the UE 614, etc., can specify that an application executed and/or instantiated by the UE 614 is to run with reduced data usage on a wireless connection (e.g., a 4G LTE data plan, a 5G data plan, a Wi-Fi hotspot data plan, etc.). For example, the server 600 can instruct the UE 614 to switch from a first type of wireless communication to a second type of wireless communication based on the specification to run with reduced data usage, the measurements 612, etc., and/or any combination(s) thereof. For example, an application associated with the UE 614 may utilize a first amount of data (e.g., kilobytes (KB)) when utilizing the first type of wireless communication and a second amount of data (e.g., KB) when utilizing the second type of wireless communication where the second amount of data is less than the first amount of data. 
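The wireless communication type selection described above can be sketched as a policy-aware link choice: prefer the highest-bandwidth link, subject to an optional reduced-data-usage cap from an SLA/policy. The link names, bandwidth figures, usage figures, and the cap value below are all hypothetical:

```python
# Illustrative sketch of wireless communication type selection under an
# optional reduced-data-usage policy; all figures are assumptions.
def select_link(links, max_kb_per_min=None):
    """Pick the highest-bandwidth link that satisfies an optional usage cap."""
    candidates = links
    if max_kb_per_min is not None:
        candidates = [l for l in links
                      if l["usage_kb_per_min"] <= max_kb_per_min]
    if not candidates:
        return None
    return max(candidates, key=lambda l: l["bandwidth_mbps"])["name"]

links = [
    {"name": "5G",     "bandwidth_mbps": 400, "usage_kb_per_min": 900},
    {"name": "Wi-Fi",  "bandwidth_mbps": 200, "usage_kb_per_min": 400},
    {"name": "4G LTE", "bandwidth_mbps": 50,  "usage_kb_per_min": 300},
]

# Without a policy, the highest-bandwidth link wins; with a reduced-usage
# policy, the server would instruct a switch to a lower-usage link.
unrestricted = select_link(links)
restricted = select_link(links, max_kb_per_min=500)
```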
In some examples, a user associated with the UE 614, an SLA/policy associated with the UE 614, etc., can determine to enable the UE 614 to connect a video call on a specific cellular network (e.g., 4G LTE, 5G, etc.) instead of a different type of wireless network (e.g., Wi-Fi). For example, the server 600 can instruct the UE 614 to switch from Wi-Fi to 5G based on the measurements 612, which can include application-focused RAN statistics and/or wireless measurements.
In the illustrated example of
In the illustrated example of
In example operation, the wireless devices 802 transmit and/or cause transmission of wireless data (e.g., 5G data, Wi-Fi data, satellite data, etc.) to the DUs 810 via the RRHs 804 and the RRUs 806. In example operation, the DUs 810 can determine wireless measurements based on the wireless data in substantially real-time. In example operation, the DUs 810 can determine the wireless measurements in substantially real-time for on-premises analysis, which can include AI/ML analytics and/or processing. In example operation, the DUs 810 can direct and/or instruct at least one(s) of the wireless devices 802, the RRHs 804, or the RRUs 806 to change a device and/or network configuration to improve operation of the fourth wireless communication system 800. In example operation, the core network 814 can obtain the wireless measurements in non-real time for off-premises analysis, which can include AI/ML analytics and/or processing. In example operation, the core network 814 can direct and/or instruct at least one(s) of the wireless devices 802, the RRHs 804, the RRUs 806, the DUs 810, or the CU 812 to change a device and/or network configuration to improve operation of the fourth wireless communication system 800.
The wireless measurement determination architecture 900 includes an example UE 902, an example next generation radio access network (NG RAN) 904, an example next generation core network (NG CN) 906, an example data network (DN) 908, an example wireless measurement AI/ML engine 910, an example near-real time radio access network intelligent controller (near-RT RIC) 912, and example I/Q analytics applications 914. The NG RAN 904 includes and/or implements an example RU 916, an example DU 918, an example CU-UP 920, and an example CU-CP 922. The NG CN 906 includes and/or implements an example user plane function (UPF) 924 and an example access and mobility management function (AMF) 926.
In the illustrated example of
In the illustrated example of
In example operation, the wireless measurement engine 930 can receive measurement requests (e.g., a request for measurements, statistics, etc., based on wireless data associated with the UE 902); configure the NG RAN 904 and the UE 902 for measurement determination; and calculate example wireless measurements 911 of the UE 902 based on UE and/or RAN measurements. In some examples, the wireless measurement engine 930 receives reference signal (e.g., SRS) measurements and/or other information from a gNB, such as a gNB implemented by the NG RAN 904, via the AMF 926. In some examples, the wireless measurement engine 930 can configure the UE 902 via the DU 918 to transmit reference signal (e.g., SRS) data based on a configuration periodicity and/or transmission comb. In some examples, the wireless measurement engine 930 calculates and/or otherwise determines the wireless measurements 911 and outputs the wireless measurements 911 to the wireless measurement AI/ML engine 910 to analyze the wireless measurements 911, trends thereof, etc., to determine insights into a wireless communication network associated with the UE 902.
In some examples, the wireless measurement engine 930 publishes the wireless measurements 911 of the UE 902 to the I/Q analytics applications 914 that can consume the wireless measurement results to effectuate compute workloads (e.g., network-related workloads, AI/ML-related workloads, etc.). A possible advantage of examples disclosed herein is that the wireless measurement engine 930 can configure a rate at which reference signal (e.g., SRS) data is obtained from the UE 902 and/or a rate at which reference signal (e.g., SRS) measurements based on the reference signal (e.g., SRS) data can be available for storage, access, and/or transmission to other hardware, software, and/or firmware. For example, the wireless measurement AI/ML engine 910 can output a recommendation to change a configuration, a location periodicity, an accuracy and/or latency, etc., and instruct the wireless measurement engine 930 to effectuate the change (e.g., configure the UE 902 to transmit data from the UE 902 to the L1 interface 928 at a specified rate and/or using a specified configuration).
In the illustrated example of
In the example workflow 1100 of
In the example workflow 1100 of
In the example workflow 1100 of
In some examples, the multi-core processor circuitry 1208 can be implemented by a CPU, a DSP, a GPU, an FPGA, an Infrastructure Processing Unit (IPU), network interface circuitry (NIC) (e.g., a smart NIC), an XPU, etc., or any other type of processor circuitry. In the example of
In some examples, the RX core 1210 can implement a first example ring buffer 1216. In some examples, the TX core 1212 can implement a second example ring buffer 1218. In the example data management workflow 1200, one or more first example cores 1220, which include the RX core 1210, can receive the UE data 1202, 1204, 1206, 1207, 1209 from UEs. In some examples, the UE data 1202, 1204, 1206, 1207, 1209 can be cleartext, ciphertext, etc. In some examples, the UE data 1202, 1204, 1206, 1207, 1209 can be transmitted in 512-byte packets. Alternatively, the UE data 1202, 1204, 1206, 1207, 1209 may be transmitted in any other byte sized packets and/or data format. In the example data management workflow 1200, the one or more first cores 1220 can extract data of interest (e.g., extract subset(s) or portion(s) of the data) from the UE data 1202, 1204, 1206, 1207, 1209, such as the L1 reference signal (e.g., SRS) data, the L1 Wi-Fi data, etc. In some examples, the one or more first cores 1220 can store the extracted data in the first ring buffer 1216. For example, the one or more first cores 1220 can extract L1 wireless data from the first UE data 1202 and add and/or insert the extracted L1 wireless data into the first ring buffer 1216. A possible advantage of examples disclosed herein is that the RX core 1210 can extract subset(s) of incoming data based on a UE identifier.
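The RX-side extraction into a ring buffer can be sketched as follows. A bounded deque stands in for the fixed-size ring, and the packet layout, ring size, and the rule for which bytes constitute the "L1 data of interest" are assumptions for illustration:

```python
# Sketch of an RX-side ring buffer of extracted per-UE L1 data; a bounded
# deque stands in for a hardware ring, and the packet layout is made up.
from collections import deque

RING_SLOTS = 4  # illustrative ring size

def extract_l1_subset(packet):
    """Keep only the assumed L1 data of interest from a UE packet."""
    return {"ue": packet["ue"], "l1": packet["payload"][:8]}

rx_ring = deque(maxlen=RING_SLOTS)
packets = [{"ue": "UE1", "payload": bytes(range(16))},
           {"ue": "UE2", "payload": bytes(range(16, 32))}]
for pkt in packets:
    rx_ring.append(extract_l1_subset(pkt))   # extracted subset enters the ring
```

A `deque(maxlen=...)` silently discards the oldest entry when full, which loosely mirrors ring-buffer overwrite behavior; a real implementation would need an explicit overflow policy.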
In the example data management workflow 1200, the one or more first cores 1220 can generate queue events corresponding to respective ones of the UE data 1202, 1204, 1206, 1207, 1209. For example, the one or more first cores 1220 can generate a first queue event including the first UE identifier, a second queue event including the second UE identifier, and a third queue event including the third UE identifier. In some examples, the queue events can be implemented by an array of data. Alternatively, the queue events may be implemented by any other data structure. In some examples, the queue events can include data pointers that reference respective locations in memory at which the UE data 1202, 1204, 1206, 1207, 1209 is stored. For example, the first queue event can include a first data pointer that corresponds to a memory address, a range of memory addresses, etc., at which the first UE data 1202, or portion(s) thereof, are stored. In the example data management workflow 1200, the one or more first cores 1220 can enqueue the first through third queue events into the DLB circuitry 1214. For example, the one or more first cores 1220 can enqueue the first through third queue events into hardware-managed queues (e.g., portion(s) of memory). In some examples, the DLB circuitry 1214 can select one of the identifiers to process based on a priority value, which may be included in the queue events.
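The queue-event structure and priority-based selection described above can be sketched with a software priority queue standing in for the hardware-managed queues of the DLB circuitry 1214. The tuple layout and the lower-number-means-higher-priority convention are modeling choices, not details from the text.

```python
import heapq

def make_queue_event(ue_id: str, data_ptr: int, priority: int):
    """A queue event as a small record: a priority value, the UE identifier,
    and a data pointer (modeled as a memory address integer)."""
    return (priority, ue_id, data_ptr)

dlb_queue = []  # models a hardware-managed queue in the DLB circuitry
heapq.heappush(dlb_queue, make_queue_event("UE2", 0x2000, priority=1))
heapq.heappush(dlb_queue, make_queue_event("UE1", 0x1000, priority=0))
heapq.heappush(dlb_queue, make_queue_event("UEN", 0x3000, priority=2))

# The DLB selects the next event to process based on its priority value.
priority, ue_id, data_ptr = heapq.heappop(dlb_queue)
```

The data pointer references where the corresponding UE data is stored, so only the small event record, not the data itself, moves through the queue.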
In the example data management workflow 1200, the DLB circuitry 1214 can dequeue the first through third queue events to one or more of the second cores 1222 (cores identified by UE1, UE2, UEN), which can implement worker cores. In the example data management workflow 1200, the one or more second cores 1222 can execute computational task(s), operation(s), etc., on the UE data 1202, 1204, 1206, 1207, 1209 associated with the respective dequeued queue events. For example, the one or more second cores 1222 can execute a cryptographic, encryption, etc., task (e.g., an IP security (IPSec) task) on the UE data 1202, 1204, 1206, 1207, 1209. In response to completing the task(s), the one or more second cores 1222 can enqueue the queue events back to the DLB circuitry 1214. For example, the DLB circuitry 1214 can reorder and/or otherwise re-assemble the UE data 1202, 1204, 1206, 1207, 1209 (e.g., data packets that include and/or otherwise implement the UE data 1202, 1204, 1206, 1207, 1209). In the example data management workflow 1200, the DLB circuitry 1214 can dequeue the queue events to the TX core 1212, which can cause the TX core 1212 to transmit the reordered and/or reassembled data packets (e.g., encrypted data packets) to different hardware, software, and/or firmware. In some examples, the TX core 1212 can provide the data packets to the second ring buffer 1218. In some examples, the data included in the second ring buffer 1218 can include less data than data originally inserted in the first ring buffer 1216. For example, UE #1 L1 wireless (WL) data in the first ring buffer 1216 can include a first quantity of L1 wireless data (e.g., a first number of measurements, a first number of bits, etc.) and UE #1 WL subset in the second ring buffer 1218 can include a second quantity of L1 wireless data less than the first quantity.
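The reorder/re-assembly step above can be illustrated with a small sketch. It assumes, for illustration only, that each queue event carries a sequence number so that packets completed out of order by the worker cores can be restored to their original order before transmission.

```python
def reorder(completed_events):
    """completed_events: iterable of (sequence, payload) tuples, possibly
    out of order; returns payloads restored to original sequence order."""
    return [payload for _, payload in sorted(completed_events)]

# Worker cores finished the packets in the order 2, 0, 1;
# the TX stage must emit them in the order 0, 1, 2.
completed = [(2, b"pkt2"), (0, b"pkt0"), (1, b"pkt1")]
tx_stream = reorder(completed)
```

In hardware, this reordering would be performed by the DLB circuitry 1214 rather than by sorting in software; the sketch only shows the invariant (original order restored) that the TX core 1212 relies on.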
In some examples, the TX core 1212 can transmit the data packets from the second ring buffer 1218 to the wireless measurement engine circuitry 200 of
In the example data management workflow 1200, the multi-core processor circuitry 1208 can obtain first wireless data from a first antenna of a base station and second wireless data from a second antenna of the base station. For example, the first wireless data can be first UE #1 L1 wireless data received at a first antenna of a base station from a UE and the second wireless data can be second UE #1 L1 wireless data received at a second antenna of the same base station.
In the example data management workflow 1200, the multi-core processor circuitry 1208 can store the first wireless data (e.g., first cellular data) in a first linked list, such as a first portion identified by UE #1 WL Subset in the first ring buffer 1216, which can be stored in memory associated with the multi-core processor circuitry 1208. In the example data management workflow 1200, the multi-core processor circuitry 1208 can store the second wireless data (e.g., second cellular data) in a second linked list, such as a second portion of the first ring buffer 1216 (e.g., the first ring buffer 1216 can include multiple slices with each slice corresponding to L1 wireless data from the UE). In some examples, the first linked list is associated with the first antenna and the second linked list is associated with the second antenna.
In the example data management workflow 1200, the wireless measurement engine circuitry 200 (e.g., one or more cores of the multi-core processor circuitry 1208) can determine a wireless measurement of the UE based on at least one of the first wireless data (e.g., first cellular data) or the second wireless data (e.g., second cellular data). For example, the RX core 1210 can enqueue a first data pointer that references UE #1 L1 WL data stored in memory in the first linked list, which can be included in and/or implemented by the DLB circuitry 1214. In the example data management workflow 1200, the DLB circuitry 1214 can dequeue the first data pointer to the one or more second cores 1222. The one or more second cores 1222 can determine wireless measurements based on the UE #1 L1 WL data. In the example data management workflow 1200, after the determination(s), the one or more second cores 1222 can provide the first data pointer back to the DLB circuitry 1214. For example, the first data pointer can reference the wireless measurements stored in memory associated with the multi-core processor circuitry 1208. Additionally or alternatively, the one or more second cores 1222 can provide a second data pointer to the DLB circuitry 1214. For example, the second data pointer can reference the wireless measurements stored in memory associated with the multi-core processor circuitry 1208. In some examples, the DLB circuitry 1214 can store the first data pointer and/or the second data pointer in a third linked list, such as a slice of the second ring buffer 1218 identified by UE #1 WL Subset. In some examples, the wireless measurement engine circuitry 200 (e.g., one or more cores of the multi-core processor circuitry 1208) can access the wireless measurements based on the first data pointer (e.g., accessing memory location(s) identified by the first data pointer) and/or the second data pointer (e.g., accessing memory location(s) identified by the second data pointer). 
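One way to picture the measurement determination above is the following sketch. The choice of metric (mean power in dB) and the combination rule (keep the stronger per-antenna value) are assumptions; the text does not specify which wireless measurement is computed or how per-antenna results are combined.

```python
import math

def mean_power_db(iq_samples):
    """Mean power of complex I/Q samples in dB (an illustrative metric)."""
    p = sum(abs(s) ** 2 for s in iq_samples) / len(iq_samples)
    return 10 * math.log10(p)

# Hypothetical samples recovered via the first and second data pointers,
# i.e., UE #1 L1 WL data from a first and a second antenna.
ant0_samples = [1 + 0j, 0 + 1j]
ant1_samples = [0.5 + 0j, 0 + 0.5j]

# One plausible combination: keep the stronger per-antenna measurement.
measurement = max(mean_power_db(ant0_samples), mean_power_db(ant1_samples))
```

The worker cores would store such a result back in memory and hand the referencing data pointer to the DLB circuitry, as described above.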
In some examples, the wireless measurement engine circuitry 200 can determine a recommendation to change a network configuration of a network associated with the UE based on the wireless measurements.
In some examples, the multi-core processor circuitry 1208 and/or the wireless measurement engine circuitry 200 can obtain at least one of the first wireless data or the second wireless data based on Intel® FLEXRAN™ Reference Architecture. In some examples, the multi-core processor circuitry 1208 and/or the wireless measurement engine circuitry 200 can store the at least one of the first wireless data or the second wireless data based on Intel® FLEXRAN™ Reference Architecture. In some examples, the multi-core processor circuitry 1208 and/or the wireless measurement engine circuitry 200 can determine the wireless measurements based on Intel® FLEXRAN™ Reference Architecture.
In some examples, the multi-core processor circuitry 1208 can aggregate a plurality of wireless data sets associated with a UE using a linked list. For example, the first ring buffer 1216 and/or the second ring buffer 1218 can include multiple slices, each of which can be associated with the same UE. For example, the first ring buffer 1216 can include multiple UE #1 WL Subset slices, where a first slice (e.g., a first data slice, a first portion, a first data portion, a first data buffer portion, etc.) can be first wireless data received by a first antenna of a first base station, a second slice can be second wireless data received by a second antenna of the first base station or a different base station, etc. In some examples, the multi-core processor circuitry 1208 can identify respective priorities of portions of the plurality of wireless data sets with a linked list associated with a UE. For example, each slice of the first ring buffer 1216 and/or the second ring buffer 1218 can have a different data or data handling priority, processing priority, etc.
In some examples, the multi-core processor circuitry 1208 can format the portions of the plurality of wireless data sets from a first data format to a second data format with a linked list. For example, cellular data stored in the first ring buffer 1216 can have a first data format and cellular data stored in the second ring buffer 1218 can have a second data format different from the first data format. In some examples, the second data format is based on a type of wireless measurement engine utilized to determine wireless measurements. In some examples, wireless data can be converted from the first data format into the second data format when moved from the first ring buffer 1216 to the second ring buffer 1218. In some examples, wireless data can be converted from the second data format into the first data format when moved from the second ring buffer 1218 to the first ring buffer 1216.
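A concrete (and hypothetical) instance of such a format conversion is sketched below: interleaved 16-bit I/Q integers as an assumed first data format, converted to complex floats as an assumed second data format expected by a wireless measurement engine. Both formats are assumptions for illustration.

```python
import struct

def to_engine_format(raw: bytes):
    """Convert interleaved little-endian 16-bit I/Q integers (assumed first
    format) into complex floats (assumed second format)."""
    n = len(raw) // 4  # 4 bytes per I/Q sample pair
    ints = struct.unpack("<%dh" % (2 * n), raw)
    return [complex(ints[2 * i], ints[2 * i + 1]) for i in range(n)]

# Two I/Q sample pairs as they might sit in the first ring buffer.
raw = struct.pack("<4h", 100, -50, 0, 25)
samples = to_engine_format(raw)
```

In the workflow above, such a conversion would occur as data moves from the first ring buffer 1216 to the second ring buffer 1218 (and a corresponding inverse conversion in the other direction).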
In some examples, the multi-core processor circuitry 1208 can generate a wireless measurement engine packet based on the portions of the plurality of wireless data sets in the second data format, and the wireless measurements associated with the UE can be based on the wireless measurement engine packet. For example, the wireless measurement engine circuitry 200 can obtain wireless data from the second ring buffer 1218 in the second data format, generate a wireless measurement engine packet including the wireless data in the second data format, and determine a wireless measurement associated with the UE based on the wireless measurement engine packet. In some examples, the wireless measurement engine packet can be a data packet that can be transmitted to an electronic device, a UE, etc. In some examples, the wireless measurement engine packet can be consumed by an application and/or a service. For example, the wireless measurement engine circuitry 200 can generate a graphical user interface (GUI) after a consumption (e.g., execution of an application and/or a service based on data included in the measurement engine packet) of the wireless measurement engine packet.
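The wireless measurement engine packet described above can be sketched as a simple serialized record. The packet layout (a 4-byte UE identifier, a 4-byte measurement count, then little-endian floats) is invented for this sketch; the disclosure does not define a wire format.

```python
import struct

def build_measurement_packet(ue_id: int, measurements) -> bytes:
    """Pack a hypothetical measurement engine packet: a 4-byte UE id,
    a 4-byte count, then each measurement as a little-endian float32."""
    header = struct.pack("<II", ue_id, len(measurements))
    body = b"".join(struct.pack("<f", m) for m in measurements)
    return header + body

pkt = build_measurement_packet(1, [0.5, -3.25])
# A consuming application or service could unpack the header to route the packet.
ue_id, count = struct.unpack_from("<II", pkt, 0)
```

Such a packet could then be transmitted to an electronic device, a UE, etc., or consumed by an application or service as described above.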
In the illustrated example of
In some examples, fewer or more instances of the DLB circuitry 1232, 1234 and/or fewer or more of the producer cores 1236, 1238 and/or consumer cores 1240, 1242 than depicted in the illustrated example may be used. In the example of
In the example of
In the example workflow 1230, the reorder logic circuitry 1244 can obtain data from one or more of the producer cores 1236, 1238 and facilitate reordering operations. For example, the reorder logic circuitry 1244 can inspect a data pointer from one of the producer cores 1236, 1238. In some examples, the data pointer can be associated with wireless data, or portion(s) thereof. For example, the data pointer can reference a UE identifier, such as UE #1 of
In some examples, the reorder logic circuitry 1244 stores the data pointer and other data pointers associated with data packets in the known data flow in a buffer (e.g., a ring buffer, a first-in first-out (FIFO) buffer, etc.) until a portion of or an entirety of the data pointers in connection with the known data flow are obtained and/or otherwise identified. In the example of
In the illustrated example of
In some examples, the arbitration logic circuitry 1248 can be configured and/or instantiated to perform arbitration by selecting a given one of the consumer cores 1240, 1242. For example, the arbitration logic circuitry 1248 can implement one or more arbiters, sets of arbitration logic circuitry (e.g., first arbitration logic circuitry, second arbitration logic circuitry, etc.), etc., where each of the one or more arbiters, each of the sets of arbitration logic circuitry, etc., can correspond to a respective one of the consumer cores 1240, 1242. In some examples, the arbitration performed by the arbitration logic circuitry 1248 is based on consumer readiness (e.g., a consumer core having space available for an execution or completion of a task), task availability, etc. In the example workflow 1230, the arbitration logic circuitry 1248 transmits and/or otherwise facilitates a passage of data pointers from the queueing logic circuitry 1246 to example consumer queues 1250.
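One plausible readiness-based arbitration policy, selecting the consumer queue with the most free space, can be sketched as follows. The policy and the queue representation are assumptions; the text leaves the exact arbitration criteria open.

```python
def arbitrate(consumer_queues):
    """Pick the consumer queue with the most free space (one plausible
    consumer-readiness policy). consumer_queues maps a queue name to an
    (occupied, capacity) pair."""
    def free_space(item):
        _name, (used, capacity) = item
        return capacity - used
    name, _ = max(consumer_queues.items(), key=free_space)
    return name

# consumer0 has 1 free slot; consumer1 has 3 free slots.
queues = {"consumer0": (3, 4), "consumer1": (1, 4)}
chosen = arbitrate(queues)
```

In hardware, one arbiter per consumer core could apply such a criterion continuously rather than on demand as in this sketch.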
In the example workflow 1230, the consumer cores 1240, 1242 are in communication with the consumer queues 1250 to obtain data pointers for subsequent processing. In some examples, a length (e.g., a data length) of one or more of the consumer queues 1250 are programmable and/or otherwise configurable. In some examples, the DLB circuitry 1232, 1234 generate an interrupt (e.g., a hardware interrupt) to one(s) of the consumer cores 1240, 1242 in response to a status, a change in status, etc., of the consumer queues 1250. Responsive to the interrupt, the one(s) of the consumer cores 1240, 1242 can retrieve the data pointer(s) from the consumer queues 1250.
In the illustrated example of
In the illustrated example of
In the illustrated example of
For example, the wireless device 1302 may communicate with an example RRH 1304, an example RU 1306, an example BBU pool 1308, and an example core network 1310. In the example of
In the illustrated example of
In the illustrated example of
For example, the wireless device 1502 may communicate with an example RRH 1504, an example RU 1506, an example BBU pool 1508, and an example core network 1510. In the example of
In the illustrated example of
For example, the wireless device 1602 may communicate with an example RRH 1604, an example RU 1606, an example BBU pool 1608, and an example core network 1610. In the example of
In the illustrated example of
In some examples, the edge cloud 1710, the central office 1720, the cloud data center 1730, and/or portion(s) thereof, may implement one or more wireless measurement engines that collect data from and/or compute measurements associated with devices of the endpoint (consumer and producer) data sources 1760 (e.g., autonomous vehicles 1761, user equipment 1762, business and industrial equipment 1763, video capture devices 1764, drones 1765, smart cities and building devices 1766, sensors and IoT devices 1767, etc.). In some examples, the edge cloud 1710, the central office 1720, the cloud data center 1730, and/or portion(s) thereof, may implement one or more measurement engines to execute measurement determination operations with improved accuracy. For example, the edge cloud 1710, the central office 1720, the cloud data center 1730, and/or portion(s) thereof, can be implemented by the wireless measurement engine circuitry 200 of
Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer endpoint devices, than at a base station, than at a central office). However, the closer that the edge location is to the endpoint (e.g., user equipment (UE)), the more that space and power is often constrained. Thus, edge computing attempts to reduce the amount of resources needed for network services, through the distribution of more resources which are located closer both geographically and in network access time. In this manner, edge computing attempts to bring the compute resources to the workload data where appropriate, or bring the workload data to the compute resources.
The following describes aspects of an edge cloud architecture that covers multiple potential deployments and addresses restrictions that some network operators or service providers may have in their own infrastructures. These include, variation of configurations based on the edge location (because edges at a base station level, for instance, may have more constrained performance and capabilities in a multi-tenant scenario); configurations based on the type of compute, memory, storage, fabric, acceleration, or like resources available to edge locations, tiers of locations, or groups of locations; the service, security, and management and orchestration capabilities; and related objectives to achieve usability and performance of end services. These deployments may accomplish processing in network layers that may be considered as “near edge,” “close edge,” “local edge,” “middle edge,” or “far edge” layers, depending on latency, distance, and timing characteristics.
Edge computing is a developing paradigm where computing is performed at or closer to the “edge” of a network, typically through the use of a compute platform (e.g., Intel Architecture or ARM compute hardware architecture) implemented at base stations, gateways, network routers, or other devices which are much closer to endpoint devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with standardized compute hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices. Within edge computing networks, there may be scenarios in services in which the compute resource will be “moved” to the data, as well as scenarios in which the data will be “moved” to the compute resource. Or as an example, base station compute, acceleration and network resources can provide services in order to scale to workload demands on an as-needed basis by activating dormant capacity (subscription, capacity on demand) in order to manage corner cases, emergencies or to provide longevity for deployed resources over a significantly longer implemented lifecycle.
In contrast to the network architecture of
Depending on the real-time requirements in a communications context, a hierarchical structure of data processing and storage nodes may be defined in an edge computing deployment. For example, such a deployment may include local ultra-low-latency processing, regional storage and processing as well as remote cloud datacenter based storage and processing. Key performance indicators (KPIs) may be used to identify where sensor data is best transferred and where it is processed or stored. This typically depends on the OSI layer dependency of the data. For example, lower layer (PHY, MAC, routing, etc.) data typically changes quickly and is better handled locally in order to meet latency requirements. Higher layer data such as Application Layer data is typically less time critical and may be stored and processed in a remote cloud datacenter. At a more generic level, an edge computing system may be described to encompass any number of deployments operating in the edge cloud 1710, which provide coordination from client and distributed computing devices.
Examples of latency, resulting from network communication distance and processing time constraints, may range from less than a millisecond (ms) when among the endpoint layer 1800, under 5 ms at the edge devices layer 1810, to even between 10 to 40 ms when communicating with nodes at the network access layer 1820. Beyond the edge cloud 1710 are core network 1830 and cloud data center 1832 layers, each with increasing latency (e.g., between 50-60 ms at the core network layer 1830, to 100 or more ms at the cloud data center layer 1840). As a result, operations at a core network data center 1835 or a cloud data center 1845, with latencies of at least 50 to 100 ms or more, will not be able to accomplish many time-critical functions of the use cases 1805. Each of these latency values is provided for purposes of illustration and contrast; it will be understood that the use of other access network mediums and technologies may further reduce the latencies. In some examples, respective portions of the network may be categorized as “close edge,” “local edge,” “near edge,” “middle edge,” or “far edge” layers, relative to a network source and destination. For instance, from the perspective of the core network data center 1835 or a cloud data center 1845, a central office or content data network may be considered as being located within a “near edge” layer (“near” to the cloud, having high latency values when communicating with the devices and endpoints of the use cases 1805), whereas an access point, base station, on-premise server, or network gateway may be considered as located within a “far edge” layer (“far” from the cloud, having low latency values when communicating with the devices and endpoints of the use cases 1805).
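The layer-by-layer latency figures above imply a simple placement rule: run a workload at the shallowest layer whose latency fits the workload's budget. The following sketch encodes that rule using the illustrative figures from the text; the thresholds and layer names are taken directly from those figures, and the function itself is an illustration, not part of the disclosure.

```python
def select_layer(required_latency_ms: float) -> str:
    """Map a workload's end-to-end latency budget to the shallowest network
    layer that can meet it, per the illustrative figures above."""
    if required_latency_ms < 1:
        return "endpoint"            # sub-ms: endpoint layer 1800
    if required_latency_ms < 5:
        return "edge devices"        # under 5 ms: edge devices layer 1810
    if required_latency_ms < 40:
        return "network access"      # 10-40 ms: network access layer 1820
    if required_latency_ms < 60:
        return "core network"        # 50-60 ms: core network layer
    return "cloud data center"       # 100 ms or more: cloud data center layer

layer = select_layer(20)  # a 20 ms budget fits the network access layer
```

A real orchestrator would also weigh resource availability, cost, and data gravity, not latency alone.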
It will be understood that other categorizations of a particular network layer as constituting a “close,” “local,” “near,” “middle,” or “far” edge may be based on latency, distance, number of network hops, or other measurable characteristics, as measured from a source in any of the network layers 1800-1840.
The various use cases 1805 may access resources under usage pressure from incoming streams, due to multiple services utilizing the edge cloud. For example, measurement determination for devices associated with such incoming streams of the various use cases 1805 is desired and may be achieved with example wireless measurement engines (e.g., the wireless measurement engine circuitry 200 of
The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed with the “terms” described may be managed at each layer in a way to assure real time, and runtime contractual compliance for the transaction during the lifecycle of the service. When a component in the transaction fails to meet its agreed-upon service level agreement (SLA), the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, (2) augment other components in the system to resume the overall transaction SLA, and (3) implement operations to remediate.
Thus, with these variations and service features in mind, edge computing within the edge cloud 1710 may provide the ability to serve and respond to multiple applications of the use cases 1805 (e.g., object tracking, location determination, video surveillance, connected cars, etc.) in real-time or near real-time, and meet ultra-low latency requirements for these multiple applications. These possible advantages enable a whole new class of applications (e.g., virtual network functions (VNFs), Function-as-a-Service (FaaS), Edge-as-a-Service (EaaS), standard processes, etc.), which cannot leverage conventional cloud computing due to latency or other limitations.
However, with the possible advantages of edge computing comes the following caveats. The devices located at the edge are often resource constrained and therefore there is pressure on usage of edge resources. Typically, this is addressed through the pooling of memory and storage resources for use by multiple users (tenants) and devices. The edge may be power and cooling constrained and therefore the power usage needs to be accounted for by the applications that are consuming the most power. There may be inherent power-performance tradeoffs in these pooled memory resources, as many of them are likely to use emerging memory technologies, where more power requires greater memory bandwidth. Likewise, improved security of hardware and root of trust trusted functions are also required because edge locations may be unmanned and may even need permissioned access (e.g., when housed in a third-party location). Such issues are magnified in the edge cloud 1710 in a multi-tenant, multi-owner, or multi-access setting, where services and applications are requested by many users, especially as network usage dynamically fluctuates and the composition of the multiple stakeholders, use cases, and services changes.
At a more generic level, an edge computing system may be described to encompass any number of deployments at the previously discussed layers operating in the edge cloud 1710 (network layers 1810-1830), which provide coordination from client and distributed computing devices. One or more edge gateway nodes, one or more edge aggregation nodes, and one or more core data centers may be distributed across layers of the network to provide an implementation of the edge computing system by or on behalf of a telecommunication service provider (“telco,” or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives.
Consistent with the examples provided herein, a client compute node may be embodied as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. Further, the label “node” or “device” as used in the edge computing system does not necessarily mean that such node or device operates in a client or agent/minion/follower role; rather, any of the nodes or devices in the edge computing system refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud 1710.
As such, the edge cloud 1710 is formed from network components and functional features operated by and within edge gateway nodes, edge aggregation nodes, or other edge compute nodes among network layers 1810-1830. The edge cloud 1710 thus may be embodied as any type of network that provides edge computing and/or storage resources which are proximately located to RAN capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are discussed herein. In other words, the edge cloud 1710 may be envisioned as an “edge” which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks.
The network components of the edge cloud 1710 may be servers, multi-tenant servers, appliance computing devices, and/or any other type of computing devices. For example, the edge cloud 1710 may include an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case or a shell. In some examples, the edge cloud 1710 may include an appliance to be operated in harsh environmental conditions (e.g., extreme heat or cold ambient temperatures, strong wind conditions, wet or frozen environments, and the like). In some circumstances, the housing may be dimensioned for portability such that it can be carried by a human and/or shipped. Example housings may include materials that form one or more exterior surfaces that partially or fully protect contents of the appliance, in which protection may include weather protection, hazardous environment protection (e.g., electromagnetic interference, vibration, extreme temperatures), and/or enable submergibility. Example housings may include power circuitry to provide power for stationary and/or portable implementations, such as alternating current (AC) power inputs, direct current (DC) power inputs, AC/DC or DC/AC converter(s), power regulators, transformers, charging circuitry, batteries, wired inputs and/or wireless power inputs. Example housings and/or surfaces thereof may include or connect to mounting hardware to enable attachment to structures such as buildings, telecommunication structures (e.g., poles, antenna structures, etc.) and/or racks (e.g., server racks, blade mounts, etc.). Example housings and/or surfaces thereof may support one or more sensors (e.g., temperature sensors, vibration sensors, light sensors, acoustic sensors, capacitive sensors, proximity sensors, etc.). One or more such sensors may be contained in, carried by, or otherwise embedded in the surface and/or mounted to the surface of the appliance. 
Example housings and/or surfaces thereof may support mechanical connectivity, such as propulsion hardware (e.g., wheels, propellers, etc.) and/or articulating hardware (e.g., robot arms, pivotable appendages, etc.). In some circumstances, the sensors may include any type of input devices such as user interface hardware (e.g., buttons, switches, dials, sliders, etc.). In some circumstances, example housings include output devices contained in, carried by, embedded therein and/or attached thereto. Output devices may include displays, touchscreens, lights, light emitting diodes (LEDs), speakers, I/O ports (e.g., universal serial bus (USB)), etc. In some circumstances, edge devices are devices presented in the network for a specific purpose (e.g., a traffic light), but may have processing and/or other capacities that may be utilized for other purposes. Such edge devices may be independent from other networked devices and may be provided with a housing having a form factor suitable for its primary purpose; yet be available for other compute tasks that do not interfere with its primary task. Edge devices include IoT devices. The appliance computing device may include hardware and software components to manage local issues such as device temperature, vibration, resource utilization, updates, power issues, physical and network security, etc. The example processor systems of at least
In
Individual platforms or devices of the edge computing system 2000 are located at a particular layer corresponding to layers 2020, 2030, 2040, 2050, and 2060. For example, the client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f are located at an endpoint layer 2020, while the edge gateway platforms 2012a, 2012b, 2012c are located at an edge devices layer 2030 (local level) of the edge computing system 2000. Additionally, the edge aggregation platforms 2022a, 2022b (and/or fog platform(s) 2024, if arranged or operated with or among a fog networking configuration 2026) are located at a network access layer 2040 (an intermediate level). Fog computing (or “fogging”) generally refers to extensions of cloud computing to the edge of an enterprise's network or to the ability to manage transactions across the cloud/edge landscape, typically in a coordinated distributed or multi-node network. Some forms of fog computing provide the deployment of compute, storage, and networking services between end devices and cloud computing data centers, on behalf of the cloud computing locations. Some forms of fog computing also provide the ability to manage the workload/workflow level services, in terms of the overall transaction, by pushing certain workloads to the edge or to the cloud based on the ability to fulfill the overall service level agreement.
Fog computing in many scenarios provides a decentralized architecture and serves as an extension to cloud computing by collaborating with one or more edge node devices, providing the subsequent amount of localized control, configuration, and management, and much more for end devices. Furthermore, fog computing provides the ability for edge resources to identify similar resources and collaborate to create an edge-local cloud which can be used solely or in conjunction with cloud computing to complete computing, storage, or connectivity related services. Fog computing may also allow the cloud-based services to expand their reach to the edge of a network of devices to offer local and quicker accessibility to edge devices. Thus, some forms of fog computing provide operations that are consistent with edge computing as discussed herein; the edge computing aspects discussed herein are also applicable to fog networks, fogging, and fog configurations. Further, aspects of the edge computing systems discussed herein may be configured as a fog, or aspects of a fog may be integrated into an edge computing architecture.
The core data center 2032 is located at a core network layer 2050 (a regional or geographically central level), while the global network cloud 2042 is located at a cloud data center layer 2060 (a national or world-wide layer). The use of “core” is provided as a term for a centralized network location (deeper in the network) that is accessible by multiple edge platforms or components; however, a “core” does not necessarily designate the “center” or the deepest location of the network. Accordingly, the core data center 2032 may be located within, at, or near the edge cloud 2010. Although an illustrative number of client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f; edge gateway platforms 2012a, 2012b, 2012c; edge aggregation platforms 2022a, 2022b; edge core data centers 2032; and global network clouds 2042 are shown in
Consistent with the examples provided herein, a client compute platform (e.g., one of the client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f) may be implemented as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. For example, a client compute platform can include a mobile phone, a laptop computer, a desktop computer, a processor platform in an autonomous vehicle, etc. In additional or alternative examples, a client compute platform can include a camera, a sensor, etc. Further, the label “platform,” “node,” and/or “device” as used in the edge computing system 2000 does not necessarily mean that such platform, node, and/or device operates in a client or slave role; rather, any of the platforms, nodes, and/or devices in the edge computing system 2000 refer to individual entities, platforms, nodes, devices, and/or subsystems which include discrete and/or connected hardware and/or software configurations to facilitate and/or use the edge cloud 2010. A possible advantage of examples disclosed herein is that example wireless measurement engines (e.g., the wireless measurement engine circuitry 200 of
As such, the edge cloud 2010 is formed from network components and functional features operated by and within the edge gateway platforms 2012a, 2012b, 2012c and the edge aggregation platforms 2022a, 2022b of layers 2030, 2040, respectively. The edge cloud 2010 may be implemented as any type of network that provides edge computing and/or storage resources which are proximately located to RAN capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are shown in
In some examples, the edge cloud 2010 may form a portion of, or otherwise provide, an ingress point into or across a fog networking configuration 2026 (e.g., a network of fog platform(s) 2024, not shown in detail), which may be implemented as a system-level horizontal and distributed architecture that distributes resources and services to perform a specific function. For instance, a coordinated and distributed network of fog platform(s) 2024 may perform computing, storage, control, or networking aspects in the context of an IoT system arrangement. Other networked, aggregated, and distributed functions may exist in the edge cloud 2010 between the core data center 2032 and the client endpoints (e.g., client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f). Some of these are discussed in the following sections in the context of network functions or service virtualization, including the use of virtual edges and virtual services which are orchestrated for multiple tenants.
As discussed in more detail below, the edge gateway platforms 2012a, 2012b, 2012c and the edge aggregation platforms 2022a, 2022b cooperate to provide various edge services and security to the client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f. Furthermore, because a client compute platform (e.g., one of the client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f) may be stationary or mobile, a respective edge gateway platform 2012a, 2012b, 2012c may cooperate with other edge gateway platforms to propagate presently provided edge services, relevant service data, and security as the corresponding client compute platforms 2002a, 2002b, 2002c, 2002d, 2002e, 2002f move about a region. To do so, the edge gateway platforms 2012a, 2012b, 2012c and/or edge aggregation platforms 2022a, 2022b may support multiple tenancy and multiple tenant configurations, in which services from (or hosted for) multiple service providers, owners, and multiple consumers may be supported and coordinated across a single or multiple compute devices.
In examples disclosed herein, edge platforms in the edge computing system 2000 include meta-orchestration functionality. For example, edge platforms at the far-edge (e.g., edge platforms closer to edge users, the edge devices layer 2030, etc.) can reduce the performance or power consumption of orchestration tasks associated with far-edge platforms so that the execution of orchestration components at far-edge platforms consumes a small fraction of the power and performance available at far-edge platforms.
The orchestrators at various far-edge platforms participate in an end-to-end orchestration architecture. Examples disclosed herein anticipate that the comprehensive operating software framework (such as the open network automation platform (ONAP) or a similar platform) will be expanded, or options created within it, so that examples disclosed herein can be compatible with those frameworks. For example, orchestrators at edge platforms implementing examples disclosed herein can interface with ONAP orchestration flows and facilitate edge platform orchestration and telemetry activities. Orchestrators implementing examples disclosed herein act to regulate the orchestration and telemetry activities that are performed at edge platforms, including increasing or decreasing the power and/or resources expended by the local orchestration and telemetry components, delegating orchestration and telemetry processes to a remote computer, and/or retrieving orchestration and telemetry processes from the remote computer when power and/or resources are available.
The remote devices described above are situated at alternative locations with respect to those edge platforms that are offloading telemetry and orchestration processes. For example, the remote devices described above can be situated, by contrast, at near-edge platforms (e.g., the network access layer 2040, the core network layer 2050, a central office, a mini-datacenter, etc.). By offloading telemetry and/or orchestration processes to near-edge platforms, an orchestrator at a near-edge platform is assured of a (comparatively) stable power supply and sufficient computational resources to facilitate execution of telemetry and/or orchestration processes. An orchestrator (e.g., operating according to a global loop) at a near-edge platform can take delegated telemetry and/or orchestration processes from an orchestrator (e.g., operating according to a local loop) at a far-edge platform. For example, if an orchestrator at a near-edge platform takes delegated telemetry and/or orchestration processes, then at some later time, the orchestrator at the near-edge platform can return the delegated telemetry and/or orchestration processes to an orchestrator at a far-edge platform as conditions change at the far-edge platform (e.g., as power and computational resources at a far-edge platform satisfy a threshold level, as higher levels of power and/or computational resources become available at a far-edge platform, etc.).
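The delegate-then-return behavior described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the class names, thresholds, and task labels are hypothetical stand-ins for whatever a real local-loop/global-loop orchestrator pair would use.

```python
# Hypothetical sketch of far-edge/near-edge orchestration delegation.
# All names and threshold values are illustrative assumptions.
from dataclasses import dataclass, field

POWER_THRESHOLD_W = 5.0  # assumed minimum spare power for local orchestration
CPU_THRESHOLD = 0.25     # assumed minimum spare CPU fraction

@dataclass
class NearEdgeOrchestrator:
    """Global-loop orchestrator holding tasks delegated by far-edge platforms."""
    held: dict = field(default_factory=dict)

    def accept(self, source, tasks):
        self.held[id(source)] = list(tasks)

    def release(self, source):
        return self.held.pop(id(source), [])

@dataclass
class FarEdgeOrchestrator:
    """Local-loop orchestrator that delegates when power/compute run low."""
    spare_power_w: float
    spare_cpu: float
    delegated: list = field(default_factory=list)
    local_tasks: list = field(default_factory=lambda: ["telemetry", "orchestration"])

    def rebalance(self, near_edge: NearEdgeOrchestrator) -> None:
        constrained = (self.spare_power_w < POWER_THRESHOLD_W
                       or self.spare_cpu < CPU_THRESHOLD)
        if constrained and self.local_tasks:
            # Delegate telemetry/orchestration processes to the near edge.
            near_edge.accept(self, self.local_tasks)
            self.delegated, self.local_tasks = self.local_tasks, []
        elif not constrained and self.delegated:
            # Conditions improved: take the delegated processes back.
            self.local_tasks = near_edge.release(self)
            self.delegated = []
```

In use, a far-edge platform that drops below the power threshold hands its tasks to the near edge, and a later `rebalance` call with recovered resources pulls them back.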
A variety of security approaches may be utilized within the architecture of the edge cloud 2010. In a multi-stakeholder environment, there can be multiple loadable security modules (LSMs) used to provision policies that enforce the stakeholder's interests including those of tenants. In some examples, other operators, service providers, etc. may have security interests that compete with the tenant's interests. For example, tenants may prefer to receive full services (e.g., provided by an edge platform) for free while service providers would like to get full payment for performing little work or incurring little cost. Enforcement point environments could support multiple LSMs that apply the combination of loaded LSM policies (e.g., where the most constrained effective policy is applied, such as where if any of A, B or C stakeholders restricts access then access is restricted). Within the edge cloud 2010, each edge entity can provision LSMs that enforce the edge entity's interests. The cloud entity can provision LSMs that enforce the cloud entity's interests. Likewise, the various fog and IoT network entities can provision LSMs that enforce the fog entity's interests.
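The "most constrained effective policy" rule above reduces to a logical AND across all loaded policies: if any stakeholder restricts access, access is restricted. A minimal sketch, assuming each LSM policy can be modeled as a predicate (the policy callables below are hypothetical examples, not a real LSM API):

```python
# Sketch of combining loaded LSM policies so the most constrained policy wins.
# Stakeholder names and policy rules are illustrative assumptions.
from typing import Callable, Dict

Policy = Callable[[str, str], bool]  # (tenant, resource) -> allow?

def effective_access(policies: Dict[str, Policy], tenant: str, resource: str) -> bool:
    """Grant access only if every loaded LSM policy allows it (logical AND)."""
    return all(policy(tenant, resource) for policy in policies.values())

# Example stakeholder policies (the A, B, C of the text).
policies = {
    "edge_operator": lambda tenant, res: res != "core-config",
    "service_provider": lambda tenant, res: tenant.startswith("paid-"),
    "tenant": lambda tenant, res: True,
}
```

A denial from any single stakeholder (e.g., the edge operator fencing off `core-config`) overrides the others, matching the restriction semantics described above.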
In these examples, services may be considered from the perspective of a transaction, performed against a set of contracts or ingredients, whether considered at an ingredient level or a human-perceivable level. Thus, a user who has a service agreement with a service provider expects the service to be delivered under the terms of the service level agreement (SLA). Although not discussed in detail, the use of the edge computing techniques discussed herein may play roles during the negotiation of the agreement and the measurement of the fulfillment of the agreement (e.g., to identify what elements are required by the system to conduct a service, how the system responds to service conditions and changes, and the like).
Additionally, in examples disclosed herein, edge platforms and/or orchestration components thereof may consider several factors when orchestrating services and/or applications in an edge environment. These factors can include next-generation central office smart network functions virtualization and service management, improving performance per watt at an edge platform and/or of orchestration components to overcome the limitation of power at edge platforms, reducing power consumption of orchestration components and/or an edge platform, improving hardware utilization to increase management and orchestration efficiency, providing physical and/or end-to-end security, providing individual tenant quality of service and/or service level agreement satisfaction, improving network equipment-building system compliance level for each use case and tenant business model, pooling acceleration components, and billing and metering policies to improve an edge environment.
A “service” is a broad term often applied to various contexts, but in general, it refers to a relationship between two entities in which one entity offers and performs work for the benefit of another. However, the services delivered from one entity to another must be performed in accordance with certain guidelines, which ensure trust between the entities and manage the transaction according to the contract terms and conditions set forth at the beginning of, during, and at the end of the service.
An example relationship among services for use in an edge computing system is described below. In scenarios of edge computing, there are several services and transaction layers in operation that depend on each other; together, these services create a “service chain.” At the lowest level, ingredients compose systems. These systems and/or resources communicate and collaborate with each other in order to provide a multitude of services to each other as well as to other permanent or transient entities around them. In turn, these entities may provide human-consumable services. With this hierarchy, services offered at each tier must be transactionally connected to ensure that the individual component (or sub-entity) providing a service adheres to the contractually agreed-to objectives and specifications. Deviations at each layer could result in overall impact to the entire service chain.
One type of service that may be offered in an edge environment hierarchy is Silicon Level Services. For instance, Software Defined Silicon (SDSi)-type hardware provides the ability to ensure low level adherence to transactions, through the ability to intra-scale, manage, and assure the delivery of operational service level agreements. Use of SDSi and similar hardware controls provides the capability to associate features and resources within a system to a specific tenant and to manage the individual title (rights) to those resources. Use of such features is one way to dynamically “bring” the compute resources to the workload.
For example, an operational level agreement and/or service level agreement could define “transactional throughput” or “timeliness”; in the case of SDSi, the system and/or resource can sign up to guarantee specific service level specifications (SLS) and objectives (SLO) of a service level agreement (SLA). For example, SLOs can correspond to particular key performance indicators (KPIs) (e.g., frames per second, floating point operations per second, latency goals, etc.) of an application (e.g., service, workload, etc.) and an SLA can correspond to a platform level agreement to satisfy a particular SLO (e.g., one gigabyte of memory for 10 frames per second). SDSi hardware also provides the ability for the infrastructure and resource owner to empower the silicon component (e.g., components of a composed system that produce metric telemetry) to access and manage (add/remove) product features and freely scale hardware capabilities and utilization up and down. Furthermore, it provides the ability to provide deterministic feature assignments on a per-tenant basis. It also provides the capability to tie deterministic orchestration and service management to the dynamic (or subscription based) activation of features without the need to interrupt running services, client operations or by resetting or rebooting the system.
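The SLA/SLO/KPI relationship above can be made concrete with a small data-structure sketch. This is an illustrative model only; the field names and the particular KPIs (frames per second, latency) follow the examples in the text, but the structure is an assumption, not the disclosed design.

```python
# Hypothetical model of an SLA as a set of SLOs checked against KPI telemetry.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SLO:
    kpi: str                     # e.g., "frames_per_second", "latency_ms"
    target: float
    higher_is_better: bool = True

    def satisfied(self, measured: float) -> bool:
        return measured >= self.target if self.higher_is_better else measured <= self.target

@dataclass
class SLA:
    slos: List[SLO]

    def satisfied(self, telemetry: Dict[str, float]) -> bool:
        """The platform level agreement holds only if every SLO is met."""
        for slo in self.slos:
            default = float("-inf") if slo.higher_is_better else float("inf")
            if not slo.satisfied(telemetry.get(slo.kpi, default)):
                return False
        return True

# Example: guarantee at least 10 frames per second and at most 50 ms latency.
sla = SLA(slos=[SLO("frames_per_second", 10.0),
                SLO("latency_ms", 50.0, higher_is_better=False)])
```

A missing KPI is treated as a violation here, which is one reasonable (but assumed) policy choice.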
At the lowest layer, SDSi can provide services and guarantees to systems to ensure active adherence to contractually agreed-to service level specifications that a single resource has to provide within the system. Additionally, SDSi provides the ability to manage the contractual rights (title), usage and associated financials of one or more tenants on a per component, or even silicon level feature (e.g., stockkeeping unit (SKU) features). Silicon level features may be associated with compute, storage or network capabilities, performance, determinism or even features for security, encryption, acceleration, etc. These capabilities ensure not only that the tenant can achieve a specific service level agreement, but also assist with management and data collection, and assure the transaction and the contractual agreement at the lowest manageable component level.
At a higher layer in the services hierarchy, Resource Level Services includes systems and/or resources which provide (in complete or through composition) the ability to meet workload demands by either acquiring and enabling system level features via SDSi, or through the composition of individually addressable resources (compute, storage, and network). At yet a higher layer of the services hierarchy, Workflow Level Services is horizontal, since service-chains may have workflow level requirements. Workflows describe dependencies between workloads in order to deliver specific service level objectives and requirements to the end-to-end service. These services may include features and functions such as high availability, redundancy, recovery, fault tolerance, and load leveling, among others. Workflow services define dependencies and relationships between resources and systems, describe requirements on associated networks and storage, and describe transaction level requirements and associated contracts in order to assure the end-to-end service. Workflow Level Services are usually measured in Service Level Objectives and have mandatory and expected service requirements.
At yet a higher layer of the services hierarchy, Business Functional Services (BFS) are operable, and these services are the different elements of the service which have relationships to each other and provide specific functions for the customer. In the case of edge computing, and within the example of autonomous driving, business functions may compose a service such as “timely arrival to an event”; this service would require several business functions to work together and in concert to achieve the goal of the user entity: GPS guidance, road side unit (RSU) awareness of local traffic conditions, the payment history of the user entity, authorization of the user entity to use resource(s), etc. Furthermore, as these BFS(s) provide services to multiple entities, each BFS manages its own SLA and is aware of its ability to deal with the demand on its own resources (workload and workflow). As requirements and demand increase, the BFS communicates the service change requirements to the workflow and resource level service entities, so that they can, in turn, provide insight into their ability to fulfill them. This operation assists the overall transaction and service delivery to the next layer.
At the highest layer of services in the service hierarchy, Business Level Services (BLS) are tied to the capability that is being delivered. At this level, the customer or entity might not care about how the service is composed or what ingredients are used, managed, and/or tracked to provide the service(s). The primary objective of business level services is to attain the goals set by the customer according to the overall contract terms and conditions, including the financial agreement, established between the customer and the provider. BLS(s) are composed of several Business Functional Services (BFS) and an overall SLA.
This arrangement and other service management features described herein are designed to meet the various requirements of edge computing with its unique and complex resource and service interactions. This service management arrangement is intended to inherently address several of the resource basic services within its framework, instead of through an agent or middleware capability. Services such as locate, find, address, trace, track, identify, and/or register may take effect immediately as resources appear on the framework, and the manager or owner of the resource domain can use management rules and policies to ensure orderly resource discovery, registration, and certification.
Moreover, any number of edge computing architectures described herein may be adapted with service management features. These features may enable a system to be constantly aware and record information about the motion, vector, and/or direction of resources as well as fully describe these features as both telemetry and metadata associated with the devices. These service management features can be used for resource management, billing, and/or metering, as well as an element of security. The same functionality also applies to related resources, where a less intelligent device, like a sensor, might be attached to a more manageable resource, such as an edge gateway. The service management framework is made aware of change of custody or encapsulation for resources. Since nodes and components may be directly accessible or be managed indirectly through a parent or alternative responsible device for a short duration or for its entire lifecycle, this type of structure is relayed to the service framework through its interface and made available to external query mechanisms.
Additionally, this service management framework is always service aware and naturally balances the service delivery requirements with the capability and availability of the resources and the access for uploading data to the data analytics systems. If the network transports degrade, fail, or change to a higher cost or lower bandwidth function, service policy monitoring functions provide alternative analytics and service delivery mechanisms within the privacy or cost constraints of the user. With these features, the policies can trigger the invocation of analytics and dashboard services at the edge, ensuring continuous service availability at reduced fidelity or granularity. Once network transports are re-established, regular data collection, upload, and analytics services can resume.
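The policy-driven fallback just described can be sketched as a simple mode-selection function. The mode names, thresholds, and parameters below are hypothetical illustrations of the degrade/recover behavior, not an interface from the disclosure.

```python
# Illustrative sketch of transport-aware service delivery selection:
# degrade to edge-local or reduced-fidelity modes when the transport fails,
# slows, or becomes costly; resume full upload when it recovers.
# All names and threshold values are assumptions.
def select_delivery_mode(bandwidth_mbps: float, cost_per_gb: float,
                         max_cost_per_gb: float = 1.0,
                         min_bandwidth_mbps: float = 5.0) -> str:
    """Return the service delivery mechanism permitted by the policy."""
    if bandwidth_mbps <= 0:
        # Transport failed: invoke analytics and dashboard services at the edge.
        return "edge-local-analytics"
    if bandwidth_mbps < min_bandwidth_mbps or cost_per_gb > max_cost_per_gb:
        # Degraded or high-cost transport: reduced fidelity or granularity.
        return "reduced-fidelity-upload"
    # Transport re-established: regular data collection, upload, and analytics.
    return "full-upload"
```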
The deployment of a multi-stakeholder edge computing system may be arranged and orchestrated to enable the deployment of multiple services and virtual edge instances, among multiple edge platforms and subsystems, for use by multiple tenants and service providers. In a system example applicable to a cloud service provider (CSP), the deployment of an edge computing system may be provided via an “over-the-top” approach, to introduce edge computing platforms as a supplemental tool to cloud computing. In a contrasting system example applicable to a telecommunications service provider (TSP), the deployment of an edge computing system may be provided via a “network-aggregation” approach, to introduce edge computing platforms at locations in which network accesses (from different types of data access networks) are aggregated. However, these over-the-top and network aggregation approaches may be implemented together in a hybrid or merged approach or configuration.
For example, the one or more servers 2130 may operate as an intermediate network node to support a local Edge cloud or fog implementation among a local area network. Further, the gateway 2128 that is depicted may operate in a cloud-to-gateway-to-many Edge devices configuration, such as with the various IoT devices 2114, 2120, 2124 being constrained or dynamic to an assignment and use of resources in the cloud 2100.
Other example groups of IoT devices may include remote weather stations 2114, local information terminals 2116, alarm systems 2118, automated teller machines 2120, alarm panels 2122, or moving vehicles, such as emergency vehicles 2124 or other vehicles 2126, among many others. Each of these IoT devices may be in communication with other IoT devices, with servers 2104, with another IoT fog device or system (not shown), or a combination thereof. The groups of IoT devices may be deployed in various residential, commercial, and industrial settings (including both private and public environments). A possible advantage of examples disclosed herein is that example measurement engines (e.g., a measurement engine that includes and/or is implemented by the wireless measurement engine circuitry 200 of
As may be seen from
As the emergency vehicle 2124 proceeds towards the automated teller machine 2120, it may access the traffic control group 2106 to request clearance to the location, for example, by lights turning red to block cross traffic at an intersection in sufficient time for the emergency vehicle 2124 to have unimpeded access to the intersection.
Clusters of IoT devices, such as the remote weather stations 2114 or the traffic control group 2106, may be equipped to communicate with other IoT devices as well as with the cloud 2100. This may allow the IoT devices to form an ad-hoc network between the devices, allowing them to function as a single device, which may be termed a fog device or system (e.g., as described above with reference to
Flowcharts representative of example machine readable instructions, which may be executed to configure processor circuitry to implement the wireless measurement engine circuitry 200 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., as portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and/or stored on separate computing devices, wherein the parts when decrypted, decompressed, and/or combined form a set of machine executable instructions that implement one or more operations that may together form a program such as that described herein.
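The fragment-compress-reassemble scheme described above can be sketched in a few lines. This is a toy illustration under stated assumptions: `zlib` stands in for whatever compression and storage format a real deployment would use, and the part size is arbitrary.

```python
# Minimal sketch of machine readable instructions stored as individually
# compressed parts that are decompressed and combined before execution.
# zlib and the 16-byte part size are illustrative assumptions.
import zlib

def fragment_and_compress(payload: bytes, part_size: int) -> list:
    """Split the instruction payload into parts and compress each part."""
    return [zlib.compress(payload[i:i + part_size])
            for i in range(0, len(payload), part_size)]

def reassemble(parts: list) -> bytes:
    """Decompress and combine the parts back into executable instructions."""
    return b"".join(zlib.decompress(p) for p in parts)

program = b"print('hello from reassembled instructions')"
parts = fragment_and_compress(program, part_size=16)
assert reassemble(parts) == program  # round trip recovers the original
```

The parts could then be stored on separate servers and fetched independently, with `reassemble` run on the target device before (or during) execution.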
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the machine readable instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example operations of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc., may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, or (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or operations, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. 
Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or operations, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
As used herein, singular references (e.g., “a,” “an,” “first,” “second,” etc.) do not exclude a plurality. The term “a” or “an” object, as used herein, refers to one or more of that object. The terms “a” (or “an”), “one or more,” and “at least one” are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., the same entity or object. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
Additionally or alternatively, at block 2802, one of the DUs 810, which may implement the wireless measurement engine circuitry 200, transmits a known reference (e.g., sounding reference signal (SRS)) signal to one of the wireless devices 802. The known reference (e.g., SRS) signal transmitted by the one of the DUs 810 is referred to as a known reference signal because the particulars of the reference signal are known by the one of the DUs 810, stored in the datastore 260 (
In the illustrated example of
In the illustrated example of
Table 1 may be interpreted according to the legend illustrated in Table 2.
Read together, Table 1 and Table 2 indicate that UE 17020 had an SNR of 23.53 during uplink in frame 896, slot 4, symbol 0. By including the field A in Line 1 (e.g., RS), the wireless data (e.g., the wireless physical layer data 262 generated by the wireless measurement engine circuitry 200) indicates a specific symbol position, in a specific slot, of a specific frame that is experiencing communication problems on a per UE basis. As such, the wireless measurement engine circuitry 200 can analyze the wireless data (e.g., the wireless data represented in Table 1) to determine whether the resulting reference signal from a UE is altered. For example, at block 2808, the wireless measurement engine circuitry 200 determines whether the resulting reference signal is altered due to interference based on a comparison of the known and resulting reference signals. In the example of
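The per-UE, per-position indexing described above can be sketched as follows. The values are taken from Table 1 as read above (UE 17020, SNR 23.53, frame 896, slot 4, symbol 4's first symbol); the record layout, the second degraded record, and the SNR threshold are illustrative assumptions only.

```python
# Sketch of indexing per-UE physical layer measurements by (frame, slot, symbol)
# to locate symbol positions experiencing communication problems.
# Record layout, the second record, and SNR_FLOOR_DB are assumptions.
from collections import defaultdict

SNR_FLOOR_DB = 10.0  # assumed threshold below which a symbol is flagged

records = [
    # (ue_id, direction, frame, slot, symbol, snr_db)
    (17020, "uplink", 896, 4, 0, 23.53),  # value from Table 1 above
    (17020, "uplink", 896, 4, 1, 6.2),    # hypothetical degraded symbol
]

by_position = defaultdict(dict)
for ue, direction, frame, slot, symbol, snr in records:
    by_position[ue][(frame, slot, symbol)] = snr

def problem_symbols(ue_id: int):
    """Return (frame, slot, symbol) positions with low SNR for a given UE."""
    return [pos for pos, snr in by_position[ue_id].items() if snr < SNR_FLOOR_DB]
```

With this index, the engine can report problems at symbol granularity on a per-UE basis, as the text describes.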
Additionally, for example, if the wireless measurement determination circuitry 240 (
In some examples, the resulting reference signal may include data at an unexpected frame, slot, and symbol that does not correspond to the known reference signal. An example resulting reference signal including data in an unexpected frame, slot, and symbol is depicted in
In additional or alternative examples, the resulting reference signal may include data at the expected frame and slot that corresponds to the known reference signal but may be noisy. An example noisy resulting reference signal including data in the expected frame, slot, and symbol is depicted in
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
Accordingly, the wireless measurement engine circuitry 200 can remediate issues occurring with network infrastructure and/or wireless devices in the network without a person having to be physically present in the field to interact with the network infrastructure (e.g., a physical cell tower).
The IoT device 3350 may include processor circuitry in the form of, for example, a processor 3352, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, or other known processing elements. The processor 3352 may be a part of a system on a chip (SoC) in which the processor 3352 and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel. As an example, the processor 3352 may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or a Microcontroller Unit (MCU) class (MCU-class) processor, or another such processor available from Intel® Corporation, Santa Clara, CA. However, any number of other processors may be used, such as processors available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, a Microprocessor without Interlocked Pipelined Stages (MIPS) based (MIPS-based) design from MIPS Technologies, Inc. of Sunnyvale, CA, an ARM-based design licensed from ARM Holdings, Ltd. or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A14 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc.
The processor 3352 may communicate with a system memory 3354 over an interconnect 3356 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., low power DDR (LPDDR), LPDDR2, LPDDR3, or LPDDR4). In various implementations the individual memory devices may be of any number of different package types such as single die package (SDP), dual die package (DDP) or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs.
To provide for persistent storage of information such as data, applications, operating systems and so forth, a storage 3358 may also couple to the processor 3352 via the interconnect 3356. In an example, the storage 3358 may be implemented via a solid state disk drive (SSDD). Other devices that may be used for the storage 3358 include flash memory cards, such as SD cards, microSD cards, xD picture cards, and the like, and USB flash drives. In low power implementations, the storage 3358 may be on-die memory or registers associated with the processor 3352. However, in some examples, the storage 3358 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage 3358 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others.
The components may communicate over the interconnect 3356. The interconnect 3356 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 3356 may be a proprietary bus, for example, used in a SoC based system. Other bus systems may be included, such as an I2C interface, an SPI interface, point to point interfaces, and a power bus, among others.
Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components 3362, 3366, 3368, or 3370. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.
The interconnect 3356 may couple the processor 3352 to a mesh transceiver 3362, for communications with other mesh devices 3364. The mesh transceiver 3362 may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the mesh devices 3364. For example, a wireless LAN (WLAN) unit may be used to implement Wi-Fi™ communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless WAN (WWAN) unit.
The mesh transceiver 3362 may communicate using multiple standards or radios for communications at different ranges. For example, the IoT device 3350 may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on BLE, or another low power radio, to save power. More distant mesh devices 3364, e.g., within about 50 meters, may be reached over ZigBee or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels, or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee.
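The range-based radio selection described above can be sketched briefly. The function name and the exact thresholds (10 m for BLE, 50 m for ZigBee) follow the approximate figures in the text; treat the mapping itself as an illustrative assumption rather than a specified policy.

```python
# Hedged sketch of selecting a radio by estimated range, per the text:
# BLE for close devices (~10 m), ZigBee or another intermediate-power
# radio out to ~50 m, and a wide-area radio beyond that. The thresholds
# and return labels are illustrative assumptions.
def select_radio(distance_m: float) -> str:
    if distance_m <= 10:
        return "BLE"     # low-power local transceiver, saves power
    if distance_m <= 50:
        return "ZigBee"  # intermediate-power mesh radio
    return "WWAN"        # cellular/LPWA for more distant endpoints

print(select_radio(3))    # BLE
print(select_radio(35))   # ZigBee
print(select_radio(400))  # WWAN
```

In practice, both near and intermediate techniques may share a single radio operated at different power levels, as noted above.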
A wireless network transceiver 3366 may be included to communicate with devices or services in the cloud 3300 via local or wide area network protocols. The wireless network transceiver 3366 may be a Low-Power Wide-Area (LPWA) transceiver that follows the IEEE 802.15.4, or IEEE 802.15.4g standards, among others. The IoT device 3350 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies, but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used.
Any number of other radio communications and protocols may be used in addition to the systems mentioned for the mesh transceiver 3362 and wireless network transceiver 3366, as disclosed herein. For example, the radio transceivers 3362 and 3366 may include an LTE or other cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications.
The radio transceivers 3362 and 3366 may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, notably Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Long Term Evolution-Advanced Pro (LTE-A Pro). It may be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any Cellular Wide Area radio communication technology, which may include, e.g., a 5th Generation (5G) communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, or a UMTS (Universal Mobile Telecommunications System) communication technology. In addition to the standards listed above, any number of satellite uplink technologies may be used for the wireless network transceiver 3366, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union), or the ETSI (European Telecommunications Standards Institute), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.
A network interface controller (NIC) 3368 may be included to provide a wired communication to the cloud 3300 or to other devices, such as the mesh devices 3364. The wired communication may provide an Ethernet connection, or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC 3368 may be included to allow connection to a second network, for example, a NIC 3368 providing communications to the cloud over Ethernet, and a second NIC 3368 providing communications to other devices over another type of network.
The interconnect 3356 may couple the processor 3352 to an external interface 3370 that is used to connect external devices or subsystems. The external devices may include sensors 3372, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global positioning system (GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The external interface 3370 further may be used to connect the IoT device 3350 to actuators 3374, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
In some optional examples, various input/output (I/O) devices may be present within, or connected to, the IoT device 3350. For example, a display or other output device 3384 may be included to show information, such as sensor readings or actuator position. An input device 3386, such as a touch screen or keypad, may be included to accept input. The output device 3384 may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., LEDs) and multi-character visual outputs, or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the IoT device 3350.
A battery 3376 may power the IoT device 3350, although in examples in which the IoT device 3350 is mounted in a fixed location, it may have a power supply coupled to an electrical grid. The battery 3376 may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like.
A battery monitor/charger 3378 may be included in the IoT device 3350 to track the state of charge (SoCh) of the battery 3376. The battery monitor/charger 3378 may be used to monitor other parameters of the battery 3376 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 3376. The battery monitor/charger 3378 may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, TX. The battery monitor/charger 3378 may communicate the information on the battery 3376 to the processor 3352 over the interconnect 3356. The battery monitor/charger 3378 may also include an analog-to-digital (ADC) convertor that allows the processor 3352 to directly monitor the voltage of the battery 3376 or the current flow from the battery 3376. The battery parameters may be used to determine actions that the IoT device 3350 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.
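The last sentence above notes that monitored battery parameters may drive device actions such as transmission frequency and sensing frequency. A minimal sketch of such a policy follows; the state-of-charge thresholds and interval values are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: map battery state of charge (SoCh), as reported by
# a monitor such as the battery monitor/charger 3378, to transmit and
# sensing intervals. All thresholds and intervals are assumed values.
def duty_cycle_for_soc(soc_percent: float) -> dict:
    """Return (assumed) transmit/sense intervals in seconds for a given SoCh."""
    if soc_percent >= 50:
        return {"tx_interval_s": 10, "sense_interval_s": 1}    # full duty cycle
    if soc_percent >= 20:
        return {"tx_interval_s": 60, "sense_interval_s": 5}    # back off
    return {"tx_interval_s": 600, "sense_interval_s": 30}      # conserve power

print(duty_cycle_for_soc(80))
print(duty_cycle_for_soc(12))
```

A comparable policy could also factor in the state of health (SoH) or measured current draw reported over the interconnect 3356.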
A power block 3380, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 3378 to charge the battery 3376. In some examples, the power block 3380 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the IoT device 3350. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, CA, among others, may be included in the battery monitor/charger 3378. The specific charging circuit chosen depends on the size of the battery 3376 and, thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard, promulgated by the Alliance for Wireless Power, among others.
The storage 3358 may include instructions 3382 in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions 3382 are shown as code blocks included in the memory 3354 and the storage 3358, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an ASIC.
In an example, the instructions 3382 provided via the memory 3354, the storage 3358, or the processor 3352 may be embodied as a non-transitory, machine-readable medium 3360 including code to direct the processor 3352 to perform electronic operations in the IoT device 3350. The processor 3352 may access the non-transitory, machine-readable medium 3360 over the interconnect 3356. For instance, the non-transitory, machine-readable medium 3360 may be embodied by devices described for the storage 3358 of
Also in a specific example, the instructions 3382 on the processor 3352 (separately, or in combination with the instructions 3382 of the machine-readable medium 3360) may configure execution or operation of a trusted execution environment (TEE) 3390. In an example, the TEE 3390 operates as a protected area accessible to the processor 3352 for secure execution of instructions and secure access to data. Various implementations of the TEE 3390, and an accompanying secure area in the processor 3352 or the memory 3354 may be provided, for instance, through use of Intel® Software Guard Extensions (SGX) or ARM® TrustZone® hardware security extensions, Intel® Management Engine (ME), or Intel® Converged Security Manageability Engine (CSME). Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the IoT device 3350 through the TEE 3390 and the processor 3352.
The processor platform 3400 of the illustrated example includes processor circuitry 3412. The processor circuitry 3412 of the illustrated example is hardware. For example, the processor circuitry 3412 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 3412 may be implemented by one or more semiconductor based (e.g., silicon based) devices. In this example, the processor circuitry 3412 implements the parser circuitry 220, the device identification circuitry 230 (identified by DEVICE ID CIRCUITRY), the wireless measurement determination circuitry 240 (identified by WM DETERM CIRCUITRY), and the event generation circuitry 250 (identified by EVENT GEN CIRCUITRY) of
The processor circuitry 3412 of the illustrated example includes a local memory 3413 (e.g., a cache, registers, etc.). The processor circuitry 3412 of the illustrated example is in communication with a main memory including a volatile memory 3414 and a non-volatile memory 3416 by a bus 3418. In some examples, the bus 3418 can implement the bus 270 of
The processor platform 3400 of the illustrated example also includes interface circuitry 3420. The interface circuitry 3420 may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a Peripheral Component Interconnect (PCI) interface, and/or a Peripheral Component Interconnect Express (PCIe) interface. In this example, the interface circuitry 3420 implements the interface circuitry 210 of
In the illustrated example, one or more input devices 3422 are connected to the interface circuitry 3420. The input device(s) 3422 permit(s) a user to enter data and/or commands into the processor circuitry 3412. The input device(s) 3422 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 3424 are also connected to the interface circuitry 3420 of the illustrated example. The output device(s) 3424 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or speaker. The interface circuitry 3420 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
The interface circuitry 3420 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network 3426. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, an optical connection, etc.
The processor platform 3400 of the illustrated example also includes one or more mass storage devices 3428 to store software and/or data. Examples of such mass storage devices 3428 include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices and/or SSDs, and DVD drives. In this example, the one or more mass storage devices 3428 implement the datastore 260, the wireless physical layer data 262, the wireless physical layer measurements 264, and the ML model 266 of
The machine-readable instructions 3432, which may be implemented by the machine-readable instructions of
The processor platform 3400 of the illustrated example of
The cores 3502 of the microprocessor 3500 may operate independently or may cooperate to execute machine-readable instructions. For example, machine code corresponding to a firmware program, an embedded software program, or a software program may be executed by one of the cores 3502 or may be executed by multiple ones of the cores 3502 at the same or different times.
In some examples, the machine code corresponding to the firmware program, the embedded software program, or the software program is split into threads and executed in parallel by two or more of the cores 3502. The software program may correspond to a portion or all of the machine-readable instructions and/or operations represented by the flowcharts of
The cores 3502 may communicate by a first example bus 3504. In some examples, the first bus 3504 may be implemented by a communication bus to effectuate communication associated with one(s) of the cores 3502. For example, the first bus 3504 may be implemented by at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the first bus 3504 may be implemented by any other type of computing or electrical bus. The cores 3502 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 3506. The cores 3502 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 3506. Although the cores 3502 of this example include example local memory 3520 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 3500 also includes example shared memory 3510 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 3510. The local memory 3520 of each of the cores 3502 and the shared memory 3510 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the memory 3354 of
Each core 3502 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 3502 includes control unit circuitry 3514, arithmetic and logic (AL) circuitry 3516 (sometimes referred to as an ALU), a plurality of registers 3518, the local memory 3520, and a second example bus 3522. Other structures may be present. For example, each core 3502 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 3514 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 3502. The AL circuitry 3516 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 3502. The AL circuitry 3516 of some examples performs integer based operations. In other examples, the AL circuitry 3516 also performs floating point operations. In yet other examples, the AL circuitry 3516 may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 3516 may be referred to as an Arithmetic Logic Unit (ALU). The registers 3518 are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 3516 of the corresponding core 3502. For example, the registers 3518 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 3518 may be arranged in a bank as shown in
Each core 3502 and/or, more generally, the microprocessor 3500 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 3500 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The microprocessor 3500 may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry.
More specifically, in contrast to the microprocessor 3500 of
In the example of
The configurable interconnections 3610 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry 3608 to program desired logic circuits.
The storage circuitry 3612 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 3612 may be implemented by registers or the like. In the illustrated example, the storage circuitry 3612 is distributed amongst the logic gate circuitry 3608 to facilitate access and increase execution speed.
The example FPGA circuitry 3600 of
Although
In some examples, the processor 3352 of
A block diagram illustrating an example software distribution platform 3705 to distribute software such as the example machine-readable instructions 3382 of
From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed that determine wireless measurements associated with a wireless device, and/or, more generally, a wireless network, in substantially real-time. Disclosed systems, apparatus, articles of manufacture, and methods improve the efficiency of computing devices adapted, configured, and/or otherwise instantiated for wireless measurement determination associated with wireless devices and/or networks by using less total time and/or resources by implementing the wireless measurement determination on reduced information. Disclosed systems, apparatus, articles of manufacture, and methods are accordingly directed to one or more improvement(s) in the operation of a machine such as a computer or other electronic and/or mechanical device.
Example 1 includes a method for wireless measurement determination comprising enqueueing a data pointer into a first data queue, the data pointer associated with wireless data from a wireless device, the first data queue associated with a first worker core of the processor circuitry, generating, with the first worker core, a wireless measurement based on the wireless data, dequeuing the data pointer from the first data queue into a second data queue, the data pointer associated with the wireless measurement, the second data queue associated with a second worker core of the processor circuitry, and determining, with the second worker core, a change to a configuration of at least one of the wireless device or a network associated with the wireless device based on the wireless measurement.
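The two-stage flow of Example 1 can be sketched briefly: a first worker dequeues wireless data and produces a measurement, and a second worker dequeues that measurement and determines a configuration change. The thread workers, record fields, and the SNR-based network-switch rule below are illustrative stand-ins for the worker cores and the disclosed logic, not the disclosed implementation.

```python
# Minimal sketch of Example 1's pipeline using threads in place of worker
# cores. The measurement (average SNR) and the handover rule (switch to
# Wi-Fi below 15 dB) are assumed for illustration.
import queue
import threading

first_q = queue.Queue()   # first data queue (wireless data)
second_q = queue.Queue()  # second data queue (wireless measurements)
changes = []

def measurement_worker():
    # First worker: generate a wireless measurement from the wireless data.
    while True:
        data = first_q.get()
        if data is None:             # sentinel: shut down and propagate
            second_q.put(None)
            break
        avg = sum(data["snr_db"]) / len(data["snr_db"])
        second_q.put({"ue_id": data["ue_id"], "avg_snr_db": avg})

def determination_worker():
    # Second worker: determine a configuration change from the measurement.
    while True:
        m = second_q.get()
        if m is None:
            break
        if m["avg_snr_db"] < 15.0:   # assumed policy threshold
            changes.append((m["ue_id"], "switch to Wi-Fi"))

t1 = threading.Thread(target=measurement_worker)
t2 = threading.Thread(target=determination_worker)
t1.start(); t2.start()
first_q.put({"ue_id": 17020, "snr_db": [9.0, 12.0]})
first_q.put(None)
t1.join(); t2.join()
print(changes)  # [(17020, 'switch to Wi-Fi')]
```

In the claimed method, a data pointer (rather than the data itself) moves between the queues, which avoids copying the wireless data between worker cores.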
In Example 2, the subject matter of Example 1 can optionally include parsing the wireless data into data portions.
In Example 3, the subject matter of any of Examples 1-2 can optionally include verifying that the wireless device is a trusted wireless device.
In Example 4, the subject matter of any of Examples 1-3 can optionally include identifying the wireless device based on an identifier included in the wireless data.
In Example 5, the subject matter of any of Examples 1-4 can optionally include at least one of executing or instantiating a machine-learning model based on the wireless data to output the wireless measurement.
In Example 6, the subject matter of any of Examples 1-5 can optionally include executing or instantiating a machine-learning model based on the wireless data to output the change.
In Example 7, the subject matter of any of Examples 1-6 can optionally include generating event data to cause the change to occur.
In Example 8, the subject matter of any of Examples 1-7 can optionally include storing at least one of the wireless data, the wireless measurement, or the output from the machine-learning model in a datastore for access by at least one of an application or a service.
In Example 9, the subject matter of any of Examples 1-8 can optionally include determining to switch the wireless device from a first network to a second network.
In Example 10, the subject matter of any of Examples 1-9 can optionally include that the first network is a cellular network and the second network is a Wireless Fidelity network.
In Example 11, the subject matter of any of Examples 1-10 can optionally include that the first network is a Wireless Fidelity network and the second network is a cellular network.
In Example 12, the subject matter of any of Examples 1-11 can optionally include that the wireless data is fifth generation cellular data.
In Example 13, the subject matter of any of Examples 1-12 can optionally include that the wireless data is Wireless Fidelity data.
In Example 14, the subject matter of any of Examples 1-13 can optionally include that the change is to effectuate power management associated with the wireless device.
In Example 15, the subject matter of any of Examples 1-14 can optionally include that the change is to reduce radiofrequency interference associated with the wireless device.
In Example 16, the subject matter of any of Examples 1-15 can optionally include that the change is to a communication channel associated with the wireless device.
In Example 17, the subject matter of any of Examples 1-16 can optionally include that the change is to change at least one of a receive power or a transmit power of an antenna of the wireless device.
In Example 18, the subject matter of any of Examples 1-17 can optionally include determining a location of the wireless device based on the wireless measurement.
In Example 19, the subject matter of any of Examples 1-18 can optionally include effectuating a lawful interception of the wireless data.
In Example 20, the subject matter of any of Examples 1-19 can optionally include that the determination of the wireless measurement is substantially in real-time.
In Example 21, the subject matter of any of Examples 1-20 can optionally include that the wireless measurement is a minimum value, a maximum value, or an average value of downlink latency.
In Example 22, the subject matter of any of Examples 1-21 can optionally include that the wireless measurement is a minimum value, a maximum value, or an average value of downlink latency per transmission time interval.
In Example 23, the subject matter of any of Examples 1-22 can optionally include that the wireless measurement is a minimum value, a maximum value, or an average value of uplink latency.
In Example 24, the subject matter of any of Examples 1-23 can optionally include that the wireless measurement is a minimum value, a maximum value, or an average value of uplink latency per transmission time interval.
In Example 25, the subject matter of any of Examples 1-24 can optionally include that the wireless measurement is a minimum value, a maximum value, or an average value of sounding reference signal latency.
In Example 26, the subject matter of any of Examples 1-25 can optionally include that the wireless measurement is a minimum value, a maximum value, or an average value of sounding reference signal latency per transmission time interval.
In Example 27, the subject matter of any of Examples 1-26 can optionally include that the wireless measurement is a throughput value of the wireless data.
In Example 28, the subject matter of any of Examples 1-27 can optionally include that the wireless measurement is a utilization percentage of one or more cores of a network interface that receives the wireless data.
In Example 29, the subject matter of any of Examples 1-28 can optionally include that the wireless measurement is a total number of receive packets, a number of receive packets that are on time, a number of receive packets that are early, a number of receive packets that are late, a number of receive packets that are corrupt, or a number of receive packets that are duplicate.
Example 30 includes a method comprising obtaining multi-access wireless data from a wireless device associated with a network, the multi-access wireless data associated with an operation of at least one of the wireless device or infrastructure of the network, computing, in substantially real time relative to the operation by executing an instruction with programmable circuitry, a measurement based on the multi-access wireless data, and determining, in substantially real time relative to the operation by executing an instruction with the programmable circuitry, a change to a configuration of at least one of the wireless device or a virtual radio based on the measurement, the virtual radio associated with the network.
In Example 31, the subject matter of Example 30 can optionally include outputting a signal to instruct the at least one of the wireless device or the network to change the configuration of the at least one of the wireless device or the virtual radio.
In Example 32, the subject matter of any of Examples 30-31 can optionally include determining the change to the configuration of the at least one of the wireless device or the virtual radio based on a mode of operation of the virtual radio.
In Example 33, the subject matter of any of Examples 30-32 can optionally include determining the change to the configuration of at least one of the wireless device or the virtual radio to improve performance of an application associated with the wireless device.
In Example 34, the subject matter of any of Examples 30-33 can optionally include that the multi-access wireless data is multi-access physical layer wireless data.
In Example 35, the subject matter of any of Examples 30-34 can optionally include determining the mode of operation for the virtual radio based on a signal from a compute device that is remote with respect to the programmable circuitry.
In Example 36, the subject matter of any of Examples 30-35 can optionally include transmitting a first reference signal to the wireless device, and processing the multi-access wireless data to determine a difference between the first reference signal and a second reference signal included in the multi-access wireless data.
In Example 37, the subject matter of Example 36 can optionally include that the change is a first change, and the method further includes processing the difference between the first reference signal and the second reference signal to determine a second change, the second change including at least one of a timing change of a data transmission associated with the wireless device, a phase change of an antenna associated with the wireless device, a power settings change of the antenna, formation of a first beam between the wireless device and interface circuitry associated with the programmable circuitry, or alteration of a second beam formed between the wireless device and the interface circuitry.
In Example 38, the subject matter of any of Examples 30-37 can optionally include executing a machine learning model to determine the change to the configuration of the at least one of the wireless device or the virtual radio based on the measurement.
In Example 39, the subject matter of any of Examples 30-38 can optionally include that the measurement includes at least one of a location measurement associated with the multi-access wireless data, registration data associated with the multi-access wireless data, a reference signal measurement associated with the multi-access wireless data, a signal-to-noise ratio measurement associated with the multi-access wireless data, a channel impulse response measurement associated with the multi-access wireless data, device identifier data associated with the multi-access wireless data, header data associated with the multi-access wireless data, payload data associated with the multi-access wireless data, a Wi-Fi measurement associated with the multi-access wireless data, a Bluetooth measurement associated with the multi-access wireless data, or a satellite measurement associated with the multi-access wireless data.
In Example 40, the subject matter of any of Examples 30-39 can optionally include that the change to the configuration of the at least one of the wireless device or the virtual radio includes at least one of an increase to a first antenna power of the wireless device, an increase to a second antenna power of interface circuitry, activation of a first number of compute cores of the programmable circuitry to increase throughput of the network, or deactivation of a second number of compute cores of the programmable circuitry to reduce power consumption.
Example 41 is at least one computer readable medium comprising instructions to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 42 is at least one machine readable medium comprising instructions to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 43 is edge server processor circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 44 is edge cloud processor circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 45 is edge node processor circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 46 is measurement engine circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 47 is a wireless measurement engine to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 48 is wireless measurement engine circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 49 is an apparatus comprising processor circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 50 is an apparatus comprising programmable circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 51 is an apparatus comprising one or more edge gateways to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 52 is an apparatus comprising one or more edge switches to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 53 is an apparatus comprising at least one of one or more edge gateways or one or more edge switches to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 54 is an apparatus comprising accelerator circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 55 is an apparatus comprising one or more graphics processor units to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 56 is an apparatus comprising one or more Artificial Intelligence processors to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 57 is an apparatus comprising one or more machine learning processors to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 58 is an apparatus comprising one or more neural network processors to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 59 is an apparatus comprising one or more digital signal processors to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 60 is an apparatus comprising one or more general purpose processors to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 61 is an apparatus comprising network interface circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 62 is an Infrastructure Processor Unit to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 63 is dynamic load balancer circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 64 is radio unit circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 65 is remote radio unit circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 66 is radio access network circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 67 is one or more base stations to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 68 is base station circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 69 is user equipment circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 70 is one or more Internet-of-Things devices to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 71 is one or more fog devices to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 72 is a software distribution platform to distribute machine-readable instructions that, when executed by processor circuitry, cause the processor circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 73 is edge cloud circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 74 is distributed unit circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 75 is central or centralized unit circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 76 is core server circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 77 is satellite circuitry to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 78 is at least one of one or more GEO satellites or one or more LEO satellites to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 79 is an autonomous vehicle to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 80 is a robot to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 81 is circuitry to execute and/or instantiate instructions to implement FLEXRAN™ protocol to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
Example 82 is circuitry to execute and/or instantiate instructions to implement a virtual radio access network protocol to perform the method of any of Examples 1-29 and/or the method of any of Examples 30-40.
The following claims are hereby incorporated into this Detailed Description by this reference. Although certain example systems, methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.
This patent claims the benefit of U.S. Provisional Patent Application No. 63/436,363, which was filed on Dec. 30, 2022. U.S. Provisional Patent Application No. 63/436,363 is hereby incorporated herein by reference in its entirety. Priority to U.S. Provisional Patent Application No. 63/436,363 is hereby claimed.