This disclosure relates generally to communication networks and, more particularly, to systems, apparatus, articles of manufacture, and methods for data driven networking.
In recent years, the volume of data generated by sensors and devices has grown rapidly. To effectively process this data, a computing paradigm called edge computing has developed. In edge computing, rather than transmitting all data to a centralized server for processing, workloads can be executed at the edge, bringing computation and data storage closer to the source of the data. With the greater prevalence of edge computing, management and optimization of edge resources has become an area of intense research and industrial interest.
In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. The figures are not necessarily to scale.
As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and/or in fixed relation to each other. As used herein, stating that any part is in “contact” with another part is defined to mean that there is no intermediate part between the two parts.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc., are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly within the context of the discussion (e.g., within a claim) in which the elements might, for example, otherwise share a same name.
As used herein, “approximately” and “about” modify their subjects/values to recognize the potential presence of variations that occur in real world applications. For example, “approximately” and “about” may modify dimensions that may not be exact due to manufacturing tolerances and/or other real world imperfections as will be understood by persons of ordinary skill in the art. For example, “approximately” and “about” may indicate such dimensions may be within a tolerance range of +/−10% unless otherwise specified in the below description.
As used herein, “substantially real time” refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc. Thus, unless otherwise specified, “substantially real time” refers to real time +/− 1 second.
As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
As used herein, “programmable circuitry” is defined to include (i) one or more special purpose electrical circuits (e.g., an application specific integrated circuit (ASIC)) structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmable with instructions to perform specific function(s) and/or operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of programmable circuitry include programmable microprocessors such as Central Processor Units (CPUs) that may execute first instructions to perform one or more operations and/or functions, Field Programmable Gate Arrays (FPGAs) that may be programmed with second instructions to cause configuration and/or structuring of the FPGAs to instantiate one or more operations and/or functions corresponding to the second instructions, Graphics Processor Units (GPUs) that may execute first instructions to perform one or more operations and/or functions, Digital Signal Processors (DSPs) that may execute first instructions to perform one or more operations and/or functions, XPUs, Network Processing Units (NPUs), one or more microcontrollers that may execute first instructions to perform one or more operations and/or functions, and/or integrated circuits such as Application Specific Integrated Circuits (ASICs).
For example, an XPU may be implemented by a heterogeneous computing system including multiple types of programmable circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more NPUs, one or more DSPs, etc., and/or any combination(s) thereof), and orchestration technology (e.g., application programming interface(s) (API(s))) that may assign computing task(s) to whichever one(s) of the multiple types of programmable circuitry is/are suited and available to perform the computing task(s).
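For illustration only, the suited-and-available task assignment performed by such orchestration technology may be sketched as follows. The names (`UNITS`, `assign_task`) and the capability sets are hypothetical and do not correspond to any particular API:

```python
# Hypothetical sketch of XPU-style orchestration: each computing task is
# assigned to whichever programmable circuitry is suited to it and
# currently available. All names and capability sets are illustrative.

UNITS = [
    {"type": "GPU", "suited_for": {"matmul", "inference"}, "available": True},
    {"type": "FPGA", "suited_for": {"packet_filter"}, "available": True},
    {"type": "CPU", "suited_for": {"matmul", "inference", "packet_filter", "io"},
     "available": True},
]

def assign_task(task_kind, units):
    """Return the type of the first unit suited to and available for the task."""
    for unit in units:
        if task_kind in unit["suited_for"] and unit["available"]:
            return unit["type"]
    return None  # no suitable programmable circuitry is free
```

Under these assumed capability sets, a matrix-multiplication task would land on the GPU and a packet-filter task on the FPGA, with the CPU acting as the general purpose fallback.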
As used herein, integrated circuit/circuitry is defined as one or more semiconductor packages containing one or more circuit elements such as transistors, capacitors, inductors, resistors, current paths, diodes, etc. For example, an integrated circuit may be implemented as one or more of an ASIC, an FPGA, a chip, a microchip, programmable circuitry, a semiconductor substrate coupling multiple circuit elements, a system on chip (SoC), etc.
Networks of multiple frequencies, spectrums, and/or communication types are increasingly important in modern computing. Prevalent technologies and standards that facilitate modern communication include fourth, fifth, and sixth generation cellular (e.g., 4G or 5G or 6G), Citizens Broadband Radio Service (CBRS), private cellular, Wireless Fidelity (Wi-Fi), satellite (e.g., a geosynchronous equatorial orbit (GEO) satellite, a non-geostationary orbit (NGO) satellite), etc.
Management of devices that utilize more than one connectivity technology (e.g., different wireless spectrums) presents multiple challenges. Specifically, issues may arise with control, connectivity management, and workload consolidation of such devices. Such problems are compounded when managing devices across multiple clients and geographic locations (e.g., the edge, the cloud). Conventional network connection implementation (e.g., conventional network effectuation) and management techniques may be performed in silos using fixed-spectrum chipsets, which makes it challenging to ensure a satisfactory quality-of-service (QoS) from each spectrum, manage security across spectrums, and perform configuration profiling. Examples disclosed herein overcome such challenges via frictionless spectrum detection (FSD) based on data driven conditioning to order spectrum feeds. Some examples include a dynamic landscape of fixed or mobile edge nodes. Such examples may include terrestrial or non-terrestrial devices, creating a network that can adapt based on a location, time, and workload associated with the network. Some examples include policy techniques to correct for lost packets (e.g., within a specific spectrum and/or for out of order processing across multiple spectrums).
Conventional communication networks may be characterized as static. A network may be static in terms of connectivity, as it does not adequately support multiple connection types. A network may also be static in terms of configuration, unable to improve its efficiency via configuration changes. Conventional communication networks are typically configured based on estimated usage and/or connection type, and therefore are put into operation to support a specific wireless connection type and a predetermined capacity. Conventional network deployments may include multiple radio base stations to connect to each type of available communication connection (e.g., 4G/5G/6G, Wi-Fi, private radio, etc.). Conventional communication connections can include long term evolution (LTE), 4G LTE (e.g., Cat-20, spectrums), 5G NR sub6G, 5G millimeter wave, private low Earth orbit (LEO) satellite networks, public and/or private space satellites, GEO satellites, LEO satellites, etc., and/or any combination(s) thereof.
The deployment of multiple radio base stations increases deployment complexity and cost (e.g., monetary cost associated with additional hardware, resource cost associated with increased number of compute, memory, and/or network resources required to be in operation, etc.). Examples disclosed herein overcome such challenges of conventional network deployments by utilizing multi-spectrum, multi-modal terrestrial and non-terrestrial sensors and/or communication connection technologies to continuously identify devices that are connected to network(s). Examples disclosed herein identify optimal and/or otherwise improved selection of communication connection technologies for devices. For example, devices can include electronic devices associated with persons (e.g., pedestrians, persons in an industrial or manufacturing setting, etc.), vehicles, equipment, tools, etc. Examples disclosed herein can identify an electronic device and its communication connection capabilities and, based on a variety of factors (e.g., connection data, network environment data, etc.), identify a communication connection network that the electronic device can utilize to improve network QoS (e.g., increased throughput, reduced latency, etc.). Advantageously, examples disclosed herein can connect to these spectrums autonomously (e.g., fluidly connect and/or disconnect), which conventional communication networks cannot. Advantageously, examples disclosed herein can achieve improved service, greater user choice (e.g., based on network quality), and lower total cost of ownership for enterprises.
Network quality and usage optimizations are typically focused on specific user equipment (UE) communicating via a single connection type (e.g., 4G/5G, Wi-Fi, etc.). Such conventional solutions do not consider environmental conditions (e.g., weather conditions), network-centric environmental impacts (e.g., signal blockage), or actual usage at a particular network node (e.g., at a fixed network node or base station). Conventional techniques for optimizing and/or otherwise improving network communications are limited to one connection type and do not consider real-time usage of multi-access users and devices, which can include wireless sensors, wired sensors, active/passive sensors, etc. Examples disclosed herein overcome the limitations of conventional network communication optimizations by utilizing an array of real-time network telemetry and/or real-world multi-access activity at a specific physical location. In some disclosed examples, a data driven networking (DDN) controller can invoke Artificial Intelligence/Machine Learning (AI/ML) techniques to utilize multi-access converged connection data at a physical network node and actual network traffic utilization to configure and/or reconfigure network nodes with a re-dimensioned network node that can adapt over time to address the needs of connected UEs or gateways.
In some disclosed examples, the DDN control circuitry can leverage location-aware capabilities for device identification with terrestrial techniques (e.g., time-of-arrival (TOA), angle-of-arrival (AOA), round-trip time (RTT), etc.) in cellular networks and/or non-terrestrial techniques (e.g., sync pulse generator (SPG) techniques, global navigation satellite system (GNSS) techniques, etc.) in satellite-based networks for different types of devices, such as 5G or 6G enabled devices, CBRS enabled devices, category 1 (CAT-1) devices, category M (CAT-M) devices, Narrowband Internet of Things (NB-IoT) devices, etc.
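As one illustrative sketch of a TOA-style terrestrial technique mentioned above, a two-dimensional fix can be computed from three range measurements (time-of-arrival multiplied by propagation speed) by linearizing the circle equations. The anchor coordinates and ranges below are invented for the example and are not from the disclosed system:

```python
import math

def toa_trilaterate(anchors, dists):
    """2-D position from three anchor points and measured ranges,
    obtained by subtracting the first range equation from the others,
    which yields two linear equations in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # anchors must not be collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical base-station positions and noiseless ranges to a device at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
```

In practice ranges are noisy and more than three anchors would be combined by least squares; this sketch only shows the geometric core of the technique.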
In some disclosed examples, the DDN control circuitry can self-calibrate network nodes using active, live, operational, etc., usage data. For example, the DDN control circuitry can adjust (e.g., automatically adjust) a network node to converged multi-access usage by reconfiguring either fixed or mobile network nodes to accommodate actual-, live-, or real-world usage and telemetry of connected users, devices, or gateways. For example, the devices, gateways, etc., can include 4G, 5G New Radio (NR), CBRS, private cellular, Wi-Fi, satellite, Bluetooth, light detection and ranging devices, passive/active sensors, etc.
Conventional communication networks use location detection capabilities to identify devices connected to a network. Conventional location detection capabilities have many shortcomings, especially when applied to mobile objects. When objects move, variance in signal strength and coverage can reduce location detection accuracy when compared to non-moving objects. Such shortcomings may challenge positioning, navigation, and timing (PNT) resilience in important applications (e.g., infrastructure, commercial applications, research). The Global Positioning System (GPS) is susceptible to challenges in location determination such as potential signal loss and unverified/unauthenticated receipt of GPS data (e.g., ranging signals). Applications relying on satellite GPS/GNSS location determination may be limited because of the signal strength used for doppler frequency shift signatures. Furthermore, weak signals from distant geosynchronous equatorial orbit (GEO) (also referred to as geostationary orbit) satellites may be susceptible to malicious activity (e.g., jamming and spoofing) or electromagnetic noise. Terrestrial-based location determination may be limited by a lack of continuous global coverage (e.g., gaps between networks) and local obstructions to sensors (e.g., causing a break in object tracking).
In some disclosed examples, the DDN control circuitry can access wireless connectivity at OSI layers 1-2, sense a wireless spectrum type, enable a connection based on the sensed wireless spectrum, provide multi-access at one or more base stations, and/or select an appropriate billing method. In some disclosed examples, the DDN control circuitry can use substantially real time, low latency analytics to determine how and when to connect to an electronic device (e.g., a UE). In some disclosed examples, the DDN control circuitry can store encryption keys with other identifying information (e.g., location) to ensure privacy and security. In some disclosed examples, the DDN control circuitry can perform on-the-wire modifications to ongoing packet streams using real-time telemetry. In some disclosed examples, the DDN control circuitry can use satellite data to alter wireless connectivity based on geographic activities. In some disclosed examples, the DDN control circuitry can implement security policies using telemetry and/or AI. For example, the DDN control circuitry may use unsupervised learning to detect one or more anomalies in a network communication and implement a security policy for the network in response to detection of the one or more anomalies.
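A minimal sketch of the anomaly-to-policy flow described above, using a simple z-score detector as a stand-in for whatever unsupervised learning technique an implementation actually uses. The telemetry values and the policy name are hypothetical:

```python
import statistics

def detect_anomalies(samples, threshold=3.0):
    """Flag indices of telemetry samples whose z-score exceeds the
    threshold. A simple stand-in for the unsupervised learning the
    text describes, not the disclosed implementation."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples) or 1.0  # avoid divide-by-zero
    return [i for i, s in enumerate(samples) if abs(s - mean) / stdev > threshold]

# Hypothetical packets-per-second telemetry; the spike suggests anomalous traffic.
telemetry = [100, 98, 103, 101, 99, 102, 100, 5000, 97, 101]
anomalies = detect_anomalies(telemetry, threshold=2.0)
policy = "quarantine_flow" if anomalies else "allow"
```

A real deployment would use a richer model and feature set, but the control structure is the same: detect, then apply the security policy in response.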
In some disclosed examples, the DDN control circuitry leverages data driven location detection using multi-modal, multi-spectrum terrestrial and/or non-terrestrial techniques and sensors to achieve continuous, seamless, and/or otherwise frictionless coverage of active and/or passive objects. Multi-modal may refer to the utilization of multiple, different types of data sources (e.g., homogeneous, heterogeneous, etc.). For example, multi-modal location detection may be implemented as disclosed herein by determining a location of an object based on data from multiple, different (e.g., heterogeneous) data sources (e.g., a video camera, a wireless communication beacon, etc.). In some examples, multi-modal location detection may be implemented by determining a location of an object based on data from multiple homogeneous data sources (e.g., multiple cameras, multiple beacons, multiple base stations, multiple Wi-Fi access points, etc.). In other examples, multiple heterogeneous data sources may be used for multi-modal location detection.
Multi-spectrum (or multi-spectral) may refer to two or more ranges of frequencies or wavelengths in the electromagnetic spectrum, which may be heterogeneous (e.g., corresponding to different frequency/wavelength ranges processed by different connection technologies), homogeneous (e.g., corresponding to different frequency/wavelength ranges processed by a given type of connection technology), or any combination(s) thereof. For example, heterogeneous, multi-spectrum location detection may be implemented as disclosed herein by determining a location of an object based on light sensing (e.g., sensing based on LIDAR techniques) and electromagnetic sensing (e.g., sensing based on Wi-Fi, cellular, Bluetooth, etc., techniques). In some examples, homogeneous, multi-spectrum location detection may be implemented as disclosed herein by determining a location of an object based on a first type of cellular connection technology (e.g., 4G LTE), a second type of cellular connection technology (e.g., 5G, 6G, etc.), or any combination(s) thereof. In some examples, homogeneous, multi-spectrum location detection may be implemented as disclosed herein by determining a location of an object based on a first type of Bluetooth connection technology (e.g., Bluetooth low energy (BLE)), a second type of Bluetooth connection technology (e.g., Bluetooth version 3.0 (v3.0), Bluetooth version 4.0 (v4.0), etc.), or combination(s) thereof.
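One way position estimates from multiple spectrums or modalities could be combined is inverse-variance weighting, sketched below. The readings and their variances are invented for illustration and are not taken from the disclosure:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of (x, y, variance) position
    estimates from multiple spectrums/modalities: lower-variance
    (more trusted) sources pull the fused fix toward themselves."""
    wsum = sum(1.0 / var for _, _, var in estimates)
    x = sum(px / var for px, _, var in estimates) / wsum
    y = sum(py / var for _, py, var in estimates) / wsum
    return x, y

# Hypothetical readings for one object from three heterogeneous sources.
readings = [(3.4, 4.6, 4.0),   # Wi-Fi RSSI: coarse, high variance
            (3.0, 4.0, 0.25),  # 5G TOA: fine, low variance, dominates
            (3.2, 4.2, 1.0)]   # Bluetooth beacon
```

The same function applies unchanged to homogeneous sources (e.g., several Wi-Fi access points), which is one reason this style of fusion suits the mixed data-source landscape described above.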
Advantageously, any connection technology, such as Wi-Fi, cellular, satellite, LIDAR, wireline Ethernet, Bluetooth, etc., along with other (multi-modal) sensor information, such as cameras and environmental sensors (e.g., air pressure, carbon monoxide, light, temperature, etc., sensors), or any combination(s) thereof, may be utilized to leverage legacy equipment, reduce installation costs and complexity, and improve accuracy of location detection. Advantageously, utilization of any connection technology, or combination(s) thereof, may generate a sufficiency and/or diversity of data to improve location, identification, machine learning, and dynamic sensor utilization applications to reduce a total cost of ownership and thereby provide a higher return on investment (ROI) for civilian, commercial, and/or industrial stakeholders.
In some disclosed examples, the DDN control circuitry can include a location engine to locate (e.g., position) a passive object or an active object based on data generated from multiple sensors. A passive object may refer to an object that is not powered and/or does not need power for operation. An active object may refer to a mobile object and/or an object that is powered. In some disclosed examples, the location engine may leverage the participation of passive and/or active objects in the location detection of themselves. For example, an active object such as powered user equipment (e.g., a mobile handset device, a wearable device, etc.) may generate and transmit location data (e.g., 5G Layer 1 (L1) data, 5G data of a physical layer or Layer 1 (L1) of an Open Systems Interconnection (OSI) model, etc.) to the location engine. For example, the 5G L1 data can include Sounding Reference Signal (SRS) data or any other type of cellular data.
In some disclosed examples, the location engine may utilize homogeneous data and/or heterogeneous data based on at least one of need or availability. For example, the location engine may utilize homogeneous data to compute location while, in other examples, the location engine may utilize heterogeneous data to compute the location data. In some examples, the location engine may utilize homogeneous data to determine location data and, in response to determination that the location data has an accuracy, a reliability, etc., that is less than a threshold (e.g., an accuracy threshold, a reliability threshold, etc.), the location engine may utilize heterogeneous data to determine the location data to improve accuracy, reliability, etc. In some examples, the location engine may utilize heterogeneous data to determine an accuracy of location data. Then, in response to a determination that the location data has an accuracy, a reliability, etc., that is less than a threshold (e.g., an accuracy threshold, a reliability threshold, etc.), the location engine may utilize homogeneous data to determine the location data to improve the accuracy, the reliability, etc.
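The threshold-triggered fallback between homogeneous and heterogeneous data described above can be sketched as follows. The fix values and accuracy figures are hypothetical:

```python
def locate(homogeneous_fix, heterogeneous_fix, accuracy_threshold=0.9):
    """Prefer the fix computed from homogeneous data; fall back to the
    heterogeneous-data fix when the estimated accuracy is below the
    threshold. Each fix is ((x, y), accuracy) with accuracy in [0, 1]."""
    position, accuracy = homogeneous_fix
    if accuracy < accuracy_threshold:
        position, accuracy = heterogeneous_fix
    return position, accuracy

# Hypothetical fixes: the homogeneous fix is too uncertain, so the
# heterogeneous fix is used instead.
pos, acc = locate(((1.0, 2.0), 0.5), ((1.05, 2.02), 0.95))
```

The symmetric case the text also describes (heterogeneous first, homogeneous as the fallback) is the same control flow with the arguments swapped.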
In some disclosed examples, the location engine may utilize AI/ML techniques to detect and/or otherwise determine a location of an object (e.g., a passive object, an active object, etc.). For example, the location engine may use different video pixels generated by a video camera as one of multiple sensors tracking the object. In some disclosed examples, the location engine may execute an AI/ML model using the video pixels as inputs (e.g., data inputs, AI/ML inputs, AI/ML model inputs, etc.) to generate outputs (e.g., data outputs, AI/ML outputs, AI/ML model outputs, etc.). In some disclosed examples, the location engine may execute the AI/ML model to generate the outputs to include a prediction and/or otherwise a determination of an instant location of the object, a future or subsequent location of the object, etc. In some disclosed examples, the location engine may execute the AI/ML model to generate the outputs to include detections of changes in an environment including the object. For example, the location engine may detect that another object or item is blocking the camera and/or the object of interest. For example, in an industrial environment including an autonomous robot having a robotic arm, the robotic arm may need to pick up a tool but the tool may have been previously moved away from the robotic arm. In some examples, the location engine may execute an AI/ML model to locate the tool and provide the location (e.g., the precise location, a location within a specified tolerance, etc.) to the robot so that the robot may re-find or locate the tool, pick up the tool, and execute an operation with the tool. Advantageously, the location engine may utilize AI/ML techniques, which may include the use of one or more machine learning models, by ingesting data from multiple modes, multiple spectrums, etc. 
Furthermore, although examples disclosed herein are described in reference to modern compute workloads and network transformations for workloads (e.g., vRAN), the techniques described herein are not limited thereto.
In contrast to the first DDN edge server 104, which is fixed, the second DDN edge server 106 is a mobile edge server. For example, the second DDN edge server 106 can be a vehicle (e.g., included in and/or otherwise associated with a vehicle) or a non-terrestrial vehicle such as an NGO satellite, airplane, etc. Alternatively, the second DDN edge server 106 may be a fixed and/or otherwise stationary edge server. The second DDN edge server 106 is in communication with a variety of example second devices 110, which can include a base station coupled to infrastructure (e.g., a residential or commercial building, a traffic light pole, a highway overpass, etc.), mobile handsets, tablet computers, a vehicle (e.g., a device of a vehicle that is capable of communicating via cellular or vehicle-to-everything (V2X) networks), etc., and/or any combination(s) thereof.
In example operation, the DDN edge servers 104, 106 can achieve DDN physical (PHY) converged multi-access communication. For example, the first DDN edge server 104 can obtain telemetry data associated with one(s) of the first devices 108 and network data (e.g., network environment data, network quality data, etc.) from network devices such as a base station. In some examples, the first DDN edge server 104 can determine that one of the first devices 108 is experiencing relatively low communication quality with a first type of communication connection (e.g., a 4G/5G/6G connection) and can instruct the one of the first devices 108 to switch and/or otherwise transition over to a second type of communication connection (e.g., Wi-Fi) based on the second type of communication connection having a relatively higher communication quality than the first type of communication connection.
In example operation, the second DDN edge server 106 can obtain telemetry data associated with one(s) of the second devices 110 and network data from network devices such as a base station. In some examples, the second DDN edge server 106 can determine that one of the second devices 110 is experiencing relatively low communication quality with a first type of communication connection (e.g., a 4G/5G/6G connection) and can instruct the one of the second devices 110 to switch and/or otherwise transition over to a second type of communication connection (e.g., Wi-Fi) based on the second type of communication connection having a relatively higher communication quality than the first type of communication connection. In some examples, communication link quality can be impacted by natural (e.g., weather) or unnatural (e.g., garbage truck or other obstruction) environmental conditions impacting the signal strength to and from a DDN node. For example, the second DDN edge server 106 may identify that multipath fading, scattering, doppler, power loss, and/or signal fade have impacted wireless signal link quality.
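The switch decision described for the DDN edge servers might look like the following sketch, where a hysteresis margin keeps a device from flapping between connection types when qualities are close. The quality values and margin are illustrative assumptions:

```python
def pick_connection(links, current, margin_db=3.0):
    """Choose the connection with the best quality metric; switch away
    from `current` only if an alternative beats it by `margin_db`
    (hysteresis, so a device does not flap between spectrums)."""
    best = max(links, key=links.get)
    if best != current and links[best] - links[current] >= margin_db:
        return best
    return current

# Hypothetical SNR-like quality metric (dB) per available connection type,
# e.g., after multipath fading has degraded the 4G/5G/6G link.
links = {"5G": 8.0, "Wi-Fi": 14.5, "satellite": 6.0}
```

Here a device on the degraded 5G link would be instructed to transition to Wi-Fi, while a device whose current link trails the best alternative by less than the margin would stay put.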
The illustrated example of
In general, any number of edge nodes (e.g., gNBs, DDN nodes, DDN servers, etc.) may combine to form a networked system of DDN nodes (e.g., edge compute devices) as illustrated in
Some examples described herein include one or more edge compute devices. An edge compute device may be any object that has the capacity to process instructions in executable code form. Examples of edge compute devices may include personal computers, servers, mobile devices, tablets, routers, switches, wireless access points, etc. Furthermore, although any of the edge nodes and/or edge devices described herein may be edge compute devices, many additional types of compute devices are compatible with the techniques described herein. In particular, the interested reader may refer to
In some examples, a network of DDN edge nodes may integrate additional DDN nodes into a network and/or transform an edge node (e.g., a “dumb” node) in the network into a DDN node (e.g., a “smart” node) with DDN circuitry and/or an artificial intelligence engine. For example, an edge compute device may be updated to include a capability to configure compute resources based on a resource demand. In some examples, programmable circuitry is to configure compute resources of an edge compute device responsive to an input from another edge compute device, the input based on a resource demand. For example, a network of DDN nodes may include a set of fluid interconnected (e.g., mobile and/or fixed) generic edge nodes that are temporarily customized for specific workloads, reprogramming one or more of the edge node(s) to function as DDN edge node(s) for a specified period of time. In some examples, the DDN control circuitry 240 may execute instructions such as will be described in
The indoor environment 204 of the illustrated example includes an example second industrial machine 212, example storage containers (e.g., boxes, crates, etc.) 214, example video cameras 216, 218, 220, 222 (e.g., surveillance cameras), example Wi-Fi devices (e.g., Wi-Fi beacons, Wi-Fi enabled sensors, routers, modems, gateways, access points, hotspots, etc.) 224, 226, 228, example 5G devices (e.g., 5G beacons, 5G enabled sensors, access points, hotspots, etc.) 230, 232, example Bluetooth devices (e.g., Bluetooth beacons, Bluetooth enabled sensors, access points, hotspots, etc.) 234, 236, and an example radio-frequency identification (RFID) system 238. In the illustrated example, the second industrial machine 212 is a connection technology enabled forklift. For example, the second industrial machine 212 may be a Bluetooth-enabled forklift. Additionally or alternatively, the second industrial machine 212 may be enabled to connect to other device(s) via any other connection technology (e.g., 5G/6G, Wi-Fi, etc.).
In some examples, one(s) of the storage containers 214 may be enabled with connection technology. For example, one(s) of the storage containers 214 may be affixed with, coupled to, and/or otherwise include an RFID device (e.g., an RFID tag), an antenna (e.g., a Bluetooth antenna, a Wi-Fi antenna, a 5G/6G antenna, etc.), a transmitter (e.g., a Bluetooth transmitter, a Wi-Fi transmitter, a 5G/6G transmitter, etc.), etc., and/or any combination thereof. In some examples, the RFID system 238 may be implemented by one or more radio transponders, receivers, and/or transmitters.
In some examples, data producer(s) (e.g., sensor(s)) may be clustered. For example, one(s) of the video cameras 216, 218, 220, 222 may be coupled to one(s) of the industrial machines 210, 212. In some examples, other sensors, such as audio sensors, may be coupled to the industrial machines 210, 212, one(s) of the storage container(s) 214, etc. In the illustrated example, data driven network (DDN) control circuitry 240 can obtain audio-related data, such as Delivered Audio Quality (DAQ) data, amplitude data, frequency data, etc., and/or combination(s) thereof, from the audio sensor(s) from which location data may be determined. In some examples, the data producer(s) of the illustrated example are not singular in function and may be used in connection with one(s) of the other data producer(s). For example, the video cameras 216, 218, 220, 222 may be used to identify object(s) in the indoor environment 204, provide input(s) to an autonomous driving system of the industrial machines 210, 212, execute anomaly detection, etc.
In the illustrated example, one(s) of the second industrial machine 212, the storage containers 214, the video cameras 216, 218, 220, 222, the Wi-Fi devices 224, 226, 228, the 5G devices 230, 232, the Bluetooth devices 234, 236, and/or the RFID system 238 may be in communication with one(s) of the others via one or more connection technologies (e.g., Bluetooth, Wi-Fi, RFID, 5G/6G, etc.). In some examples, one(s) of the second industrial machine 212, the storage containers 214, the video cameras 216, 218, 220, 222, the Wi-Fi devices 224, 226, 228, the 5G devices 230, 232, the Bluetooth devices 234, 236, and/or the RFID system 238 may be in communication with the DDN control circuitry 240 via an example network 242. In some examples, the network 242 of the illustrated example of
In the illustrated example of
Although only one instance of the DDN control circuitry 240 is depicted in the illustrated example, in some examples, more than one of the DDN control circuitry 240 may be utilized. For example, the DDN control circuitry 240 depicted in
In some examples, the DDN control circuitry 240 may determine locations, positions, etc., of objects of the DDN system based on multi-spectrum, multi-modal data sources. In some examples, the DDN control circuitry 240 may determine a strength and/or quality of network connection(s) associated with an electronic device of the DDN system 200 based on multi-spectrum, multi-modal data sources. For example, the DDN control circuitry 240 may obtain satellite signal data from the GPS satellite 206, satellite signal data from the LEO satellite 207, 5G signal data from the 5G cellular system 208, Bluetooth signal data from the first industrial machine 210 and/or the second industrial machine 212, Wi-Fi signal data from one(s) of the video cameras 216, 218, 220, 222, RFID signal data from the RFID system 238 (e.g., a strength of an RFID beacon of the RFID system 238), etc. In some examples, the DDN control circuitry 240 may execute one or more machine learning models using the multi-spectrum, multi-modal data as data inputs to generate data outputs. In some examples, the outputs may include determinations of whether device(s) in the outdoor environment 202, the indoor environment 204, and/or, more generally, the DDN system 200, is/are to switch from a first network (or first mode of communication) to a second network (or second mode of communication) based on the multi-spectrum, multi-modal data.
Advantageously, the DDN control circuitry 240 may determine whether electronic device(s) is/are to switch network connections in the DDN system 200 based on homogeneous and/or heterogeneous data sources. For example, the DDN control circuitry 240 may determine QoS parameters associated with network connections that the first industrial machine 210 is capable of utilizing based on homogeneous data sources. In some examples, the DDN control circuitry 240 may determine QoS parameters associated with a 5G cellular connection of the first industrial machine 210 based on data from one or more 5G radio units (RUs), one or more 5G distributed units (DUs), one or more 5G central units (CUs), etc. In some examples, the DDN control circuitry 240 may determine QoS parameters associated with the 5G cellular connection of the first industrial machine 210 and a Wi-Fi connection of the first industrial machine 210 based on heterogeneous data sources. For example, the DDN control circuitry 240 may determine the QoS parameters associated with the 5G cellular connection based on data from the one or more 5G RUs and determine the QoS parameters associated with the Wi-Fi connection based on data from one or more Wi-Fi access points. In some examples, the DDN control circuitry 240 may determine whether the first industrial machine 210 is to switch network connections based on homogeneous and heterogeneous data sources. For example, the DDN control circuitry 240 may determine to switch the first industrial machine 210 from a 5G cellular connection to a Wi-Fi connection based on data from (i) the 5G cellular system 208 and/or (ii) the first Wi-Fi device 224 and/or the third Wi-Fi device 228.
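One way to picture the heterogeneous QoS comparison described above is as a weighted score computed per candidate connection. The sketch below is illustrative only; the field names, weights, and numeric values are assumptions and are not part of this disclosure.

```python
def select_connection(qos_by_connection, weights=None):
    """Score each candidate connection from its QoS parameters and
    return the name of the highest-scoring one.

    qos_by_connection maps a connection name (e.g., "5g", "wifi") to a
    dict with "throughput_mbps" (higher is better) and "latency_ms"
    (lower is better, hence a negative weight).
    """
    weights = weights or {"throughput_mbps": 1.0, "latency_ms": -2.0}

    def score(qos):
        return sum(weights[key] * qos[key] for key in weights)

    return max(qos_by_connection, key=lambda name: score(qos_by_connection[name]))

# Example inputs: 5G parameters derived from RU data, Wi-Fi parameters
# derived from access-point data (numbers are illustrative).
qos = {
    "5g":   {"throughput_mbps": 120.0, "latency_ms": 30.0},
    "wifi": {"throughput_mbps": 200.0, "latency_ms": 8.0},
}
best = select_connection(qos)  # "wifi" under these example numbers
```

Under this sketch, a switch recommendation would be issued whenever the best-scoring connection differs from the device's active connection.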
In some examples, the DDN control circuitry 240 executes and/or instantiates one or more artificial intelligence (AI) models to determine whether to cause an electronic device to utilize different network connections for communication (e.g., wireless communication). AI, including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the DDN control circuitry 240 may train the machine learning model(s) with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations.
Many different types of machine learning models and/or machine learning architectures exist. In some examples, the DDN control circuitry 240 generates the machine learning model(s) as neural network model(s). The DDN control circuitry 240 may use a neural network model to execute an AI/ML workload, which, in some examples, may be executed using one or more hardware accelerators. In general, machine learning models/architectures that are suitable for use in the example approaches disclosed herein include recurrent neural networks. However, other types of machine learning models could additionally or alternatively be used, such as supervised learning artificial neural network (ANN) models, clustering models, classification models, etc., and/or a combination thereof. Example supervised learning ANN models may include two-layer (2-layer) radial basis neural networks (RBN), learning vector quantization (LVQ) classification neural networks, etc. Example clustering models may include k-means clustering, hierarchical clustering, mean shift clustering, density-based clustering, etc. Example classification models may include logistic regression, support-vector machine or network, Naive Bayes, etc. In some examples, the DDN control circuitry 240 may compile and/or otherwise generate one(s) of the machine learning model(s) as lightweight machine learning models.
In general, implementing a machine learning/artificial intelligence (ML/AI) system involves two phases, a learning/training phase and an inference phase. In the learning/training phase, the DDN control circuitry 240 uses a training algorithm to train the machine learning model(s) to operate in accordance with patterns and/or associations based on, for example, training data. In general, the machine learning model(s) include(s) internal parameters (e.g., configuration register data) that guide how input data is transformed into output data, such as through a series of nodes and connections within the machine learning model(s). Additionally, hyperparameters are used as part of the training process to control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). Hyperparameters are defined to be training parameters that are determined prior to initiating the training process.
Different types of training may be performed based on the type of ML/AI model and/or the expected output. For example, the DDN control circuitry 240 may invoke supervised training to use inputs and corresponding expected (e.g., labeled) outputs to select parameters (e.g., by iterating over combinations of select parameters) for the machine learning model(s) that reduce model error. As used herein, “labeling” refers to an expected output of the machine learning model (e.g., a classification, an expected output value, etc.). Alternatively, the DDN control circuitry 240 may invoke unsupervised training (e.g., used in deep learning, a subset of machine learning, etc.) that involves inferring patterns from inputs to select parameters for the machine learning model(s) (e.g., without the benefit of expected (e.g., labeled) outputs).
In some examples, the DDN control circuitry 240 trains the machine learning model(s) using unsupervised clustering of operating observables. For example, the operating observables may include a vendor identifier, an Internet Protocol (IP) address, a media access control (MAC) address, a serial number, a certificate, etc., of a device (e.g., an enterprise device, an IoT device, etc.), Sounding Reference Signal (SRS) parameters, etc. However, the DDN control circuitry 240 may additionally or alternatively use any other training algorithm such as stochastic gradient descent, simulated annealing, particle swarm optimization, evolution algorithms, genetic algorithms, nonlinear conjugate gradient, etc.
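The unsupervised clustering of operating observables described above can be sketched with a minimal one-dimensional k-means. The signal readings below are illustrative stand-ins for an observable such as a signal-strength measurement, and a production system would likely use a library implementation instead.

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar observables into k groups and return the sorted
    cluster centers. A toy sketch, not the disclosure's algorithm."""
    # Seed the centers with evenly spaced sorted samples.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups of illustrative signal readings, near 1.0 and 10.0.
centers = kmeans_1d([0.9, 1.1, 1.0, 9.8, 10.2, 10.0], k=2)
```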
In some examples, the DDN control circuitry 240 may train the machine learning model(s) until the level of error is no longer reducing. In some examples, the DDN control circuitry 240 may train the machine learning model(s) locally on the DDN control circuitry 240 and/or remotely at an external computing system communicatively coupled to the network 242. In some examples, the DDN control circuitry 240 trains the machine learning model(s) using hyperparameters that control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). In some examples, the DDN control circuitry 240 may use hyperparameters that control model performance and training speed such as the learning rate and regularization parameter(s). The DDN control circuitry 240 may select such hyperparameters by, for example, trial and error to reach an optimal model performance. In some examples, the DDN control circuitry 240 utilizes Bayesian hyperparameter optimization to determine an optimal and/or otherwise improved or more efficient network architecture to avoid model overfitting and improve the overall applicability of the machine learning model(s). Alternatively, the DDN control circuitry 240 may use any other type of optimization. In some examples, the DDN control circuitry 240 may perform re-training. The DDN control circuitry 240 may execute such re-training in response to override(s) by a user of the DDN control circuitry 240, a receipt of new training data, etc.
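Training "until the level of error is no longer reducing" is essentially early stopping with patience. The sketch below assumes that reading; train_step and eval_error are stand-ins for the unspecified training and validation routines, and the patience value is an illustrative assumption.

```python
def train_until_converged(train_step, eval_error, patience=3, max_epochs=100):
    """Run training epochs until the validation error has not improved
    for `patience` consecutive epochs, then stop."""
    best, stale, history = float("inf"), 0, []
    for _ in range(max_epochs):
        train_step()
        err = eval_error()
        history.append(err)
        if err < best - 1e-9:       # meaningful improvement
            best, stale = err, 0
        else:                        # no improvement this epoch
            stale += 1
            if stale >= patience:
                break
    return best, history

# Simulated validation-error curve: improves, then plateaus.
errors = iter([1.0, 0.5, 0.3, 0.3, 0.3, 0.3])
best, history = train_until_converged(lambda: None, lambda: next(errors), patience=3)
```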
In some examples, the DDN control circuitry 240 facilitates the training of the machine learning model(s) using training data. In some examples, the DDN control circuitry 240 utilizes training data that originates from locally generated data, such as 5G Layer 1 (L1) data, IP addresses, MAC addresses, radio identifiers, SRS parameters, etc. In some examples, the DDN control circuitry 240 utilizes training data that originates from externally generated data. For example, the DDN control circuitry 240 may utilize L1 data from any data source (e.g., a camera, a RAN system, a satellite, etc.). In some examples, the L1 data may correspond to L1 data of an OSI model. In some examples, the L1 data of an OSI model may correspond to the physical layer of the OSI model, L2 data of the OSI model may correspond to the data link layer, L3 data of the OSI model may correspond to the network layer, and so forth. In some examples, the L1 data may correspond to the transmitted raw bit stream over a physical medium (e.g., a wired line physical structure such as coax or fiber, an antenna, a receiver, a transmitter, a transceiver, etc.). In some examples, the L1 data may be implemented by signals, binary transmission, etc. In some examples, the L2 data may correspond to physical addressing of the data, which may include Ethernet data, MAC addresses, logical link control (LLC) data, etc.
In some examples where supervised training is used, the DDN control circuitry 240 may label the training data (e.g., label training data or portion(s) thereof as object identification data, location data, etc.). Labeling is applied to the training data by a user manually or by an automated data pre-processing system. In some examples, the DDN control circuitry 240 may pre-process the training data using, for example, an interface (e.g., network interface circuitry) to extract and/or otherwise identify data of interest and discard data not of interest to improve computational efficiency. In some examples, the DDN control circuitry 240 sub-divides the training data into a first portion of data for training the machine learning model(s), and a second portion of data for validating the machine learning model(s).
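The sub-division into a training portion and a validation portion can be sketched as a deterministic shuffle-and-split. The 80/20 fraction and the fixed seed below are illustrative assumptions, not values from this disclosure.

```python
import random

def split_training_data(samples, train_fraction=0.8, seed=0):
    """Shuffle deterministically, then sub-divide into a first portion
    for training and a second portion for validation."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Ten labeled samples: eight for training, two held out for validation.
train, val = split_training_data(list(range(10)))
```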
Once training is complete, the DDN control circuitry 240 may deploy the machine learning model(s) for use as an executable construct that processes an input and provides an output based on the network of nodes and connections defined in the machine learning model(s). The DDN control circuitry 240 may store the machine learning model(s) in a datastore that may be accessed by the DDN control circuitry 240, a cloud repository, etc. In some examples, the DDN control circuitry 240 may transmit the machine learning model(s) to external computing system(s) via the network 242. In some examples, in response to transmitting the machine learning model(s) to the external computing system(s), the external computing system(s) may execute the machine learning model(s) to execute AI/ML workloads with at least one of improved efficiency or performance to achieve improved object tracking, location detection, etc., and/or a combination thereof.
Once trained, the deployed one(s) of the machine learning model(s) may be operated in an inference phase to process data. In the inference phase, data to be analyzed (e.g., live data) is input to the machine learning model(s), and the machine learning model(s) execute(s) to create an output. This inference phase can be thought of as the AI “thinking” to generate the output based on what it learned from the training (e.g., by executing the machine learning model(s) to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as an input to the machine learning model(s). Moreover, in some examples, the output data may undergo post-processing after it is generated by the machine learning model(s) to transform the output into a useful result (e.g., a display of data, a detection and/or identification of an object, a location determination of an object, an instruction to be executed by a machine, etc.).
In some examples, output of the deployed one(s) of the machine learning model(s) may be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed one(s) of the machine learning model(s) can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model.
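The feedback-driven retraining trigger described above can be sketched as an accuracy check against a threshold. The 0.9 threshold and the (predicted, actual) feedback format are illustrative assumptions.

```python
def needs_retraining(feedback, threshold=0.9):
    """Return True when the deployed model's observed accuracy falls
    below the threshold, signaling that retraining should be triggered.

    feedback is a sequence of (predicted, actual) pairs captured from
    the deployed model's outputs.
    """
    correct = sum(1 for predicted, actual in feedback if predicted == actual)
    accuracy = correct / len(feedback)
    return accuracy < threshold
```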
As used herein, data is information in any form that may be ingested, processed, interpreted and/or otherwise manipulated by processor circuitry to produce a result. The produced result may itself be data. As used herein, a model is a set of instructions and/or data that may be ingested, processed, interpreted and/or otherwise manipulated by processor circuitry to produce a result. Often, a model is operated using input data to produce output data in accordance with one or more relationships reflected in the model. The model may be based on training data. As used herein “threshold” is expressed as data such as a numerical value represented in any form, that may be used by processor circuitry as a reference for a comparison operation.
In some examples, one(s) of the DDN nodes 304, 306, 308, 310, 312 is/are logical entities representative of hardware (e.g., an ASIC, register-transfer level (RTL) hardware, etc.), software, and/or firmware. For example, one(s) of the DDN nodes 304, 306, 308, 310, 312 can be implemented using hardware (e.g., processor circuitry, memory, interface circuitry, accelerators, etc.), software (e.g., driver(s), an operating system (OS), application programming interface(s) (API(s)), etc.), and/or firmware.
In some examples, one(s) of the DDN nodes 304, 306, 308, 310, 312 is/are physical device(s). For example, one(s) of the DDN nodes 304, 306, 308, 310, 312 can be a server (e.g., a blade server, an edge server, a radio access network (RAN) server, etc.), a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a terrestrial or non-terrestrial vehicle (e.g., an autonomous vehicle, satellite, aircraft, boat, etc.), industrial equipment, a gaming console, a headset (e.g., an augmented reality (AR) headset, a virtual reality (VR) headset, etc.) or other wearable device, or any other type of computing or electronic device. In some examples, one(s) of the DDN nodes 304, 306, 308, 310, 312 can be a sensor (e.g., an electronic device capable of generating analog measurements and converting the analog measurements data into digital data). For example, one(s) of the DDN nodes 304, 306, 308, 310, 312 can be a sensor such as an antenna, a camera (e.g., a still-image camera, a video camera, an infrared camera, etc.), a laser (e.g., a light detection and ranging (LIDAR) sensor), a radiofrequency identification (RFID) reader, an environment sensor (e.g., a humidity sensor, a light sensor, a temperature sensor, a wind sensor, etc.), etc., or any other type of sensor. In some examples, one(s) of the DDN nodes 304, 306, 308, 310, 312 is/are logical entities representative of hardware, software, and/or firmware that is in communication with sensor(s). For example, one(s) of the DDN nodes 304, 306, 308, 310, 312 can be an edge server, a network interface, an Infrastructure Processing Unit (IPU), etc., that receives data from a sensor, such as an antenna.
In the illustrated example of
In the illustrated example of
In some examples, the control planes 314 are implemented by hardware, software, and/or firmware. For example, the control planes 314 can be implemented by (i) network interface circuitry, (ii) firmware associated with the network interface circuitry, and/or (iii) a software application. For example, the software application can execute a workload based on digital data converted from analog data received by the network interface circuitry. In the illustrated example, the control planes 314 are configured to receive data associated with gNodeB(s) (gNB(s)), satellite NodeBs (sNB(s)), sensor(s) (e.g., active sensor(s), passive sensor(s), etc.), and/or access point(s) (AP(s)). For example, the control planes 314 can be implemented with network interface circuitry to receive data from gNB(s) and/or associated firmware and/or software to process the data. Additionally and/or alternatively, the control planes 314 can be configured to receive data from any other source, such as a BLE device, an Ethernet device, etc. In the illustrated example, the control planes 314 can be instantiated to receive data from devices, extract parameters of interest from the data, and provide the parameters to other portion(s) of the DDNMAC 302. For example, the control planes 314 include an example multi-access PHY 316 to process the data from the data sources (e.g., the gNB(s), the sNB(s), etc.) in a centralized location.
In the illustrated example of
In the illustrated example, the DDNMAC 302 includes the DDN AI/ML engine 318 to output AI/ML recommendations based on telemetry data. For example, the DDN AI/ML engine 318 can provide telemetry data to one or more AI/ML models as model inputs to generate the AI/ML recommendations as model outputs. In some examples, the telemetry data is from the second through fifth DDN nodes 306, 308, 310, 312. For example, the telemetry data can include location data, communications and/or network quality data, and/or communications and/or network strength data. In some examples, network strength could be measured in packets retransmitted, packets dropped, throughput limits, throughput latency, jitter limits, etc., and/or any combination(s) thereof. In some examples, the AI/ML recommendation can include a recommendation, a request, a command, an instruction, etc., to cause one(s) of the second through fifth nodes 306, 308, 310, 312 to switch from a first network connection to a second network connection because the second network connection can have improved communications/network quality and/or strength with respect to the first network connection. In some examples, the DDN AI/ML engine 318 implements a decision tree that includes received signal strength indicator (RSSI) data, channel quality index (CQI) data, frequency utilization data, band utilization data, utilization load data of channel(s) for active connection(s), MIMO rank order, etc., and/or any combination(s) thereof.
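As an illustrative sketch of the decision-tree-style recommendation logic described above, a telemetry-driven rule over the fields named in the text (RSSI, CQI, channel utilization) might look like the following. The thresholds and field names are assumptions, not values from this disclosure.

```python
def recommend_switch(telemetry):
    """Return a recommendation dict indicating whether the node should
    switch away from its active connection, and why."""
    if telemetry["rssi_dbm"] < -85:
        return {"switch": True, "reason": "weak signal"}
    if telemetry["cqi"] < 5:
        return {"switch": True, "reason": "poor channel quality"}
    if telemetry["channel_utilization"] > 0.9:
        return {"switch": True, "reason": "congested channel"}
    return {"switch": False, "reason": "connection healthy"}

# Example: a node reporting weak received signal strength.
rec = recommend_switch({"rssi_dbm": -92, "cqi": 9, "channel_utilization": 0.3})
```

In practice, an AI/ML engine would learn such branch conditions from telemetry rather than hard-coding them.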
In the illustrated example, the DDNMAC 302 includes the DDN controller 320 to cause one(s) of the second through fifth DDN nodes 306, 308, 310, 312 to change network connections based on the AI/ML recommendation. For example, the DDN controller 320 can determine that the AI/ML recommendation is indicative of recommending the second DDN node 306 to switch from a 5G cellular connection to a Wi-Fi connection to facilitate execution of one or more applications (e.g., a teleconference software application, a streaming media application, etc.). In some examples, the DDN controller 320 can generate a command in a data format that the second DDN node 306 is capable of receiving. For example, the DDN controller 320 can determine that the second DDN node 306 is using a 5G cellular connection and thereby the DDN controller 320 can transmit a command to the second DDN node 306 via a 5G cellular connection to have the second DDN node 306 switch from a 5G cellular connection to a Wi-Fi connection.
In the illustrated example, the DDNMAC 302 includes the DDN policy engine 322 to generate, modify, and/or maintain policies associated with one(s) of the DDN nodes 306, 308, 310, 312. In some examples, the policy can be a service level agreement (SLA). In some examples, the DDN policy engine 322 can receive data associated with the second DDN node 306. In some examples, the data can include types of network connections that the second DDN node 306 is capable of utilizing. In some examples, the DDN policy engine 322 can generate a policy (e.g., a network connection policy) corresponding to the second DDN node 306 based on the data. In some examples, the DDN policy engine 322 can modify the policy based on new or updated data from the second DDN node 306.
In the illustrated example, the DDNMAC 302 includes the DDN node orchestrator 324, which instantiates a DDN node (e.g., the first DDN edge server 104 of
In the illustrated example, the DDNMAC 302 includes the DDN database 326 to store event and/or AI datasets. For example, the DDN database 326 can store AI training or learning data and output the AI training or learning data to the DDN AI/ML engine 318. In some examples, the DDN database 326 can store inference data output from the DDN AI/ML engine 318. In some examples, the DDN database 326 can be implemented using one or more datastores. For example, the one or more datastores can be memory, one or more mass storage devices, etc., and/or any combination(s) thereof.
In the illustrated example, the DDNMAC 302 includes the DDN I/O opt in engine 328 to enable or disable network connections based on opt in selections from a user. For example, a user associated with the second DDN node 306 can determine to opt into using a 5G cellular connection and a Wi-Fi connection and to opt out of using a satellite connection and/or providing sensor data. In some examples, the DDN I/O opt in engine 328 can instruct the DDN controller 320 to enable or disable a network connection (identified by CXN(S)) associated with a node. For example, in response to a determination that a user associated with the second DDN node 306 opted out of using a 5G cellular connection, the DDN I/O opt in engine 328 can instruct the DDN controller 320 to switch the second DDN node 306 from using a 5G cellular connection to a different connection based on at least one of the user I/O opt in selections or the policy associated with the second DDN node 306. In some examples, the DDN I/O opt in engine 328 can obtain the opt in information from one(s) of the second through fifth nodes 306, 308, 310, 312. In some examples, the DDN I/O opt in engine 328 can obtain the opt in information from any other source, such as the DDN policy engine 322, the DDN database 326, etc.
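The opt-in filtering described above can be sketched as an intersection of a node's connection capabilities with the user's opt-in selections. The connection names below are illustrative.

```python
def allowed_connections(capabilities, opt_ins):
    """Intersect a node's connection capabilities with the user's
    opt-in selections; opted-out connections are never offered to the
    controller as switch targets."""
    return [c for c in capabilities if opt_ins.get(c, False)]

# A node capable of 5G, Wi-Fi, and satellite, whose user opted out of
# the satellite connection.
allowed = allowed_connections(
    ["5g", "wifi", "satellite"],
    {"5g": True, "wifi": True, "satellite": False},
)  # ["5g", "wifi"]
```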
In the illustrated example, the DDNMAC circuitry 402 includes DDN workload optimized processor circuitry 406 in a first state. For example, the DDN workload optimized processor circuitry 406 is multi-core processor circuitry that includes a plurality of example compute cores 408. In the illustrated example, first ones of the compute cores 408 execute workloads associated with the control plane 404 receiving and/or transmitting data to the gNB(s), the sensor(s), the AP(s), etc. For example, the first ones of the compute cores 408 can be configured to optimize and/or otherwise improve execution of the workloads by changing a core clock frequency, a type of instruction set to be utilized, etc. In the illustrated example, second ones of the compute cores 408 execute workloads associated with the control plane 404 controlling the multi-access PHY. For example, the second ones of the compute cores 408 can be configured to optimize and/or otherwise improve execution of the workloads by changing a core clock frequency, a type of instruction set to be utilized, etc. In the illustrated example, third ones of the compute cores 408 execute workloads associated with applications executed and/or instantiated by the DDNMAC circuitry 402. For example, the third ones of the compute cores 408 can be configured to optimize and/or otherwise improve execution of the workloads by changing a core clock frequency, a type of instruction set to be utilized, etc.
In the illustrated example, the DDNMAC circuitry 602 includes first example DDN workload optimized processor circuitry 606 and second example DDN workload optimized processor circuitry 608 in a first state. For example, the DDNMAC circuitry 602 can be dual-socket hardware. In the illustrated example, the first and second DDN workload optimized processor circuitry 606, 608 are multi-core processor circuitry that each include a plurality of example compute cores 610, 612. In the illustrated example, first ones of the first and second compute cores 610, 612 execute workloads associated with the control plane 604 receiving and/or transmitting data to the gNB(s), the sensor(s), the AP(s), etc. For example, the first ones of the first and second compute cores 610, 612 can be configured to optimize and/or otherwise improve execution of the workloads by changing a core clock frequency, a type of instruction set to be utilized, etc. In the illustrated example, second ones of the first and second compute cores 610, 612 execute workloads associated with the control plane 604 controlling the multi-access PHY. For example, the second ones of the first and second compute cores 610, 612 can be configured to optimize and/or otherwise improve execution of the workloads by changing a core clock frequency, a type of instruction set to be utilized, etc. In the illustrated example, third ones of the first and second compute cores 610, 612 execute workloads associated with applications executed and/or instantiated by the DDNMAC circuitry 602. For example, the third ones of the first and second compute cores 610, 612 can be configured to optimize and/or otherwise improve execution of the workloads by changing a core clock frequency, a type of instruction set to be utilized, etc.
In the illustrated example, DDN nodes can be fixed or mobile with fluid (dynamic) multi-access PHY connections and core capacity. In some examples, DDN nodes can support one or more (virtual) instances per edge server. In some examples, DDN nodes can be reconfigured based on real-time telemetry (e.g., link quality, environmental conditions, etc., and/or any combination(s) thereof) and AI/ML engine direction at a physical location at a specific time. In the illustrated example, the second edge server 704 is hosting two active DDN nodes, which include a first node (DDN PHY1) that has Wi-Fi and 5G multi-access PHYs and six-core capacity, and a second node (DDN PHY2) with Wi-Fi and BLE multi-access PHYs and eight-core capacity.
In some examples, a UE 810 that generates the data may have reduced communication and/or network quality when using Wi-Fi. In some examples, an example configuration controller 812 obtains telemetry data from the DDN processor circuitry 806. The telemetry data can include communication/network quality associated with the UE 810 Wi-Fi connection. The telemetry data can include communication capabilities of the UE 810, which can include the capability to use 5G cellular communication to transmit/receive data. The configuration controller 812 can determine that a possible solution is to cause the UE 810 to switch from Wi-Fi to 5G cellular. The configuration controller 812 can instruct an example orchestrator 814 that there is 5G network load availability to accommodate the UE 810. The orchestrator 814 can instruct an example connection controller 816 to direct the UE 810 to switch from Wi-Fi to 5G cellular. In response to the switch, the UE 810 can transmit data using 5G cellular rather than transmitting the data to an example access point 818 using Wi-Fi.
In example operation, the DDN server 902 may detect and/or steer the incoming wireless data 906 based on L1 inspection (e.g., L1 data inspection). In example operation, the DDN server 902 may parse and/or otherwise extract L1 data from the incoming wireless data 906. In example operation, the DDN server 902 may execute AI/ML model(s) with the L1 data as ML input(s) to generate ML output(s), which may include a location of a data source (e.g., a cellular data source). In example operation, the DDN server 902 may provide the location to the application 1006. In example operation, the application 1006 may cause one or more operations to occur. For example, the application 1006 may be an autonomous driving application, an autonomous robot application, etc., associated with the data source (e.g., the data source may be an autonomous vehicle, an autonomous robot, etc.). In some examples, in response to receiving the location of the data source, the application 1006 may determine a spectrum for which the data source is to use based on the location. Additionally and/or alternatively, the application 1006 may generate a command, a direction, an instruction, etc., to cause the data source, or device(s) associated thereof, to execute one or more actions (e.g., an autonomous driving action such as a change in speed or direction, an autonomous robot action such as a change in a robot arm position, etc.).
In example operation, the processor circuitry 1502, and/or, more generally, the DDN server 902, executes a workflow. For example, the processor circuitry 1502 can receive heterogeneous multi-spectrum data from various data sources (e.g., Wi-Fi data sources, 4G data sources, 5G data sources, Ethernet data sources, satellite data sources, etc.). The processor circuitry 1502 can execute the workflow on the data, which can include demodulation, spectrum detection, steering based on L1 portion(s) of the data, decryption, decompression, and frame construction.
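The staged workflow above (demodulation, spectrum detection, L1-based steering, decryption, decompression, frame construction) can be sketched as a simple composable pipeline. The stages below are placeholders standing in for the real signal-processing steps, and the field names are illustrative assumptions.

```python
def run_workflow(data, stages):
    """Thread the data through each stage in order, each stage
    returning an enriched copy of the working record."""
    for stage in stages:
        data = stage(data)
    return data

stages = [
    lambda d: {**d, "demodulated": True},     # demodulation
    lambda d: {**d, "spectrum": "wifi"},      # spectrum detection
    lambda d: {**d, "steered_on_l1": True},   # steering based on L1 data
    lambda d: {**d, "decrypted": True},       # decryption
    lambda d: {**d, "decompressed": True},    # decompression
    lambda d: {**d, "frame": b"..."},         # frame construction
]
result = run_workflow({"raw": b"\x01\x02"}, stages)
```

Expressing the workflow as data (a list of stages) mirrors how heterogeneous sources could share one pipeline with source-specific stages swapped in.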
In the illustrated example, the first cores 1704 execute and/or instantiate control midhaul workloads, such as workloads utilizing Streaming SIMD Extensions (SSE) instructions. In the illustrated example, the AI engine 1710 executes and/or instantiates x86 Advanced Matrix Extensions (AMX) learning and inference functions. In the illustrated example, the FEC circuitry 1712 executes and/or instantiates FEC functions, such as block cyclic redundancy check (CRC), low-density parity-check (LDPC) decoding, and/or encoding functions. In the illustrated example, the FEC circuitry 1712 can execute and/or instantiate a first set of example functions 1716.
In the illustrated example, the second cores 1706 execute and/or instantiate signal processing functions, such as scramble and/or modulation functions. In some examples, the second cores 1706 can execute and/or instantiate a set of instructions such as Advanced Vector Extensions 512-bit instructions (also referred to herein as AVX-512 instructions) to implement the signal processing functions. In the illustrated example, the second cores 1706 can execute and/or instantiate a second set of example functions 1718.
In the illustrated example, the third cores 1708 execute and/or instantiate signal processing functions, such as beam forming functions. In some examples, the third cores 1708 can execute and/or instantiate a set of instructions such as instructions in an ISA that is tailored to and/or otherwise developed to improve and/or otherwise optimize 5G processing tasks (also referred to herein as 5G-ISA instructions). In the illustrated example, the third cores 1708 can execute and/or instantiate a third set of example functions 1720. In the illustrated example, the FSD executes and/or instantiates FSD functions 1716.
In the illustrated example, the DDN server 2102 may obtain an example camera feed 2108, an example RFID stream 2110, and an example environmental sensor stream 2112. In some examples, the DDN server 2102 implements the object detection 2103 with object detection circuitry, the motion detection 2104 with motion detection circuitry, and/or the anomaly detection 2106 with anomaly detection circuitry. For example, the DDN server 2102 may detect an object based on the camera feed 2108. The DDN server 2102 may detect motion of the object based on the RFID stream 2110. The DDN server 2102 may detect an anomaly condition associated with the object based on the environmental sensor stream 2112, which may include one or more environmental sensors (e.g., moisture, pressure, temperature, etc., sensors).
In the illustrated example, the DDN server 2102 may execute example event generation 2114 with event generation circuitry. For example, the DDN server 2102 may generate and publish an event indicative of output(s) of at least one of the object detection 2103, the motion detection 2104, or the anomaly detection 2106. For example, discrete sensors like IP cameras, RFID readers, light sensors, temperature sensors, humidity sensors, accelerometers, etc., can feed their data into the event generation 2114, which can include logic specific to the type of sensor generating the data.
In some examples, the events can include location and/or direction information. In some examples, the events can include only raw sensor data. In some examples, the events can include a detection of a forklift moving right to left by a camera having an identifier of 34. In some examples, the events can include a detection that an RFID tag associated with a forklift having an identifier of ABC has moved from Zone X to Zone Y. In some examples, the events can include a determination that a temperature in a hallway having an identifier of 12 has increased by 5 degrees Fahrenheit. In some examples, the events can include a detection that the lights in a room with an identifier of C4 have gone out.
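For illustration, event generation with logic specific to the type of sensor, as described above, can be sketched as a per-sensor-type dispatch. The handler and field names below are hypothetical, not part of the disclosed system:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A hypothetical event record such as those produced by the event generation 2114."""
    source_id: str   # e.g., camera "34", RFID tag "ABC", hallway "12"
    kind: str        # e.g., "motion" or "anomaly"
    payload: dict = field(default_factory=dict)

# Sensor-type-specific handlers: "logic specific to the type of sensor
# generating the data".
def handle_camera(reading: dict) -> Event:
    return Event(reading["camera_id"], "motion",
                 {"object": reading["object"], "direction": reading["direction"]})

def handle_rfid(reading: dict) -> Event:
    return Event(reading["tag_id"], "motion",
                 {"from_zone": reading["from_zone"], "to_zone": reading["to_zone"]})

def handle_temperature(reading: dict) -> Event:
    return Event(reading["location_id"], "anomaly", {"delta_f": reading["delta_f"]})

HANDLERS = {"camera": handle_camera, "rfid": handle_rfid, "temperature": handle_temperature}

def generate_event(sensor_type: str, reading: dict) -> Event:
    return HANDLERS[sensor_type](reading)

# E.g., the RFID example above: tag ABC moved from Zone X to Zone Y.
event = generate_event("rfid", {"tag_id": "ABC", "from_zone": "X", "to_zone": "Y"})
```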
In some examples, the event may include a first indication that the object has been detected, a second indication that the object is in motion (or has moved from a first location to a second location), and/or a third indication that an anomaly condition is present. In some examples, the event may include direction information, location information, etc., associated with the object. In some examples, the events may include sensor data (e.g., raw sensor data). In some examples, the event(s) may include a direction and/or a location of an object in an environment.
In example operation, the DDN server 2102 may publish the event to an example data broker 2116, which may be implemented by data broker circuitry. The data broker 2116 may store the events in an example event database 2118, which may be accessed by device(s), application(s), etc. In some examples, the event database 2118 may be implemented by memory and/or one or more mass storage devices. In some examples, the DDN server 2102 may implement at least one of the object detection 2103, the motion detection 2104, the anomaly detection 2106, the event generation 2114, or the data broker 2116 by executing an AI/ML model as described herein.
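A data broker that both forwards published events to subscribers and persists them to an event database, as described above, can be sketched as follows. This is a minimal in-process example (class and table names are hypothetical; an SQLite in-memory database stands in for the event database 2118):

```python
import json
import sqlite3

class DataBroker:
    """Minimal broker: publishes events to subscribers and stores them in a database."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")  # stands in for the event database
        self.db.execute("CREATE TABLE events (topic TEXT, body TEXT)")
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        # Persist first, then fan out to any subscribed device(s)/application(s).
        self.db.execute("INSERT INTO events VALUES (?, ?)", (topic, json.dumps(event)))
        for callback in self.subscribers.get(topic, []):
            callback(event)

broker = DataBroker()
received = []
broker.subscribe("detections", received.append)
broker.publish("detections", {"object": "forklift", "camera": 34})
stored = broker.db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```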
In the illustrated example, the DDN server 2202 obtains a first example RAN L1 feed 2203 and a second example RAN L1 feed 2204. In this example, the first RAN L1 feed 2203 may be implemented by 4G LTE or 5G (or 6G in other examples). In this example, the second RAN L1 feed 2204 may be implemented by Wi-Fi or Bluetooth (or RFID or GNSS in other examples). In example operation, the DDN server 2202 may execute an example time-of-arrival (TOA) calculation 2206, an example angle-of-arrival (AOA) calculation 2208, and an example user equipment (UE) identifier (ID) capture operation 2210 on the first RAN L1 feed 2203 and/or the second RAN L1 feed 2204.
In example operation, the DDN server 2202 may execute example event generation operations 2212 based on the TOA calculation 2206, the AOA calculation 2208, and the UE ID capture 2210. For example, the event generation operations 2212 may generate an event based on a TOA measurement, an AOA measurement, and a UE ID (e.g., a UE ID captured and/or otherwise extracted from the first RAN L1 feed 2203 and/or the second RAN L1 feed 2204). The event generation operations 2212 may cause event(s) to be published to an example data broker 2214. The data broker 2214 may store the event(s) in an example event database 2216. In some examples, the event database 2216 may be implemented by memory and/or one or more mass storage devices. In some examples, the event(s) may include a direction and/or a location of an object in an environment. In some examples, the DDN server 2202 may implement at least one of the event generation operations 2212 or the data broker 2214 by executing an AI/ML model.
In some examples, RAN based sensor data such as UE TOA data, UE AOA data, and UE scan report data can be fed into the event generation operations 2212. For example, the event generation operations 2212 can generate an event indicating that a UE with an identifier of 123 is 12.5 meters away from basestation-2 at an angle of 37 degrees. In some examples, the event generation operations 2212 can generate an event indicating that a UE with an identifier of 456 is 34.2 meters away from basestation-1 at an angle of 172 degrees. In some examples, the event generation operations 2212 can generate an event indicating that a Wi-Fi device with a media access control (MAC) address of 3F is 10.5 meters away from a Wi-Fi access point (AP) with an identifier of 37 at an angle of 17 degrees.
In example operation, the DDN server 2302 may execute example message parsing 2304 on the messages 2303. For example, the DDN server 2302 may parse the messages 2303 to extract data of interest from the messages 2303. In some examples, the messages 2303 may include UE identifiers (identified by UE-identifier), timestamps (identified by timestamp), record counts (identified by record-count), and/or records (identified by records[1 . . . n]). In this example, the records may include multi-spectrum, multi-modal records, such as Bluetooth, 4G LTE, 5G L1, Wi-Fi or Bluetooth L1, sensor records (e.g., temperature, ambient light, accelerometer, magnetometer, etc., records), GPS records, etc.
In example operation, the DDN server 2302 may generate event(s) based on the parsed messages by executing event generation 2306. In example operation, the DDN server 2302 may provide the event(s) to an example data broker 2308. In example operation, the data broker 2308 may push the event(s) to an example event database 2310, which may be accessed by device(s), application(s), etc. In some examples, the event database 2310 may be implemented by memory and/or one or more mass storage devices. In some examples, the event(s) can include a first event that indicates a UE with an identifier of 123 is 2.9 meters away from a Bluetooth beacon with an identifier of 7 at an angle of 33 degrees. In some examples, the event(s) can include a second event that indicates a UE with an identifier of 456 is able to see a Wi-Fi network with a service set identifier (SSID) of “Network-1” at a received signal strength indicator (RSSI) of −63 decibel-milliwatts (dBm).
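The message parsing 2304 described above can be sketched as follows, using the field names from the example message format (UE-identifier, timestamp, record-count, records). The JSON encoding and validation rule are illustrative assumptions, not part of the disclosure:

```python
import json

def parse_message(raw: str) -> dict:
    """Parse a multi-spectrum, multi-modal UE message and extract data of interest."""
    msg = json.loads(raw)
    records = msg["records"]
    # Sanity check: the declared record count should match the records present.
    if msg["record-count"] != len(records):
        raise ValueError("record-count does not match number of records")
    return {"ue_id": msg["UE-identifier"], "timestamp": msg["timestamp"], "records": records}

raw = json.dumps({
    "UE-identifier": "456",
    "timestamp": 1700000000,
    "record-count": 2,
    "records": [
        {"type": "wifi-scan", "ssid": "Network-1", "rssi_dbm": -63},
        {"type": "gps", "lat": 37.0, "lon": -122.0},
    ],
})
parsed = parse_message(raw)
```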
In some examples, the location and direction AI engine 2406 can subscribe to various “topics” and, based on certain “policies”, publish qualified events, such as an event indicating that a forklift with an identifier of ABC is at location X/Y/Z with a velocity vector of V. In some examples, the events may indicate that a UE with an identifier of 123 is at location A/B/C with a velocity vector of V. In some examples, the events may indicate that a Wi-Fi device with a MAC address of 2F is at location E/F/G with a velocity vector of V. For example, these events can be sent to the data broker 2403 on a unique “topic” as well as stored in the event database 2404.
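The subscribe-and-qualify pattern described above can be sketched as a subscriber that applies a policy predicate to incoming events and republishes only those that qualify. The class, field names, and confidence-based policy below are hypothetical:

```python
class LocationEngine:
    """Subscribes to a topic and republishes only events that satisfy a policy."""
    def __init__(self, policy):
        self.policy = policy   # predicate over raw events
        self.qualified = []    # stands in for publishing to a data broker topic

    def on_event(self, event: dict):
        if self.policy(event):
            self.qualified.append(event)

# Hypothetical policy: only publish location fixes with high enough confidence.
engine = LocationEngine(lambda e: e.get("confidence", 0.0) >= 0.9)
engine.on_event({"id": "ABC", "location": (1.0, 2.0, 0.0),
                 "velocity": (0.3, 0.0, 0.0), "confidence": 0.95})
engine.on_event({"id": "123", "location": (4.0, 5.0, 0.0),
                 "velocity": (0.0, 0.0, 0.0), "confidence": 0.4})
```

Only the first event qualifies under this policy; the second is dropped rather than republished.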
In some examples, the central office 2520, the cloud data center 2530, and/or portion(s) thereof, may implement one or more location engines that locate and/or otherwise identify positions of devices of the endpoint (consumer and producer) data sources 2560 (e.g., autonomous vehicles 2561, user equipment 2562, business and industrial equipment 2563, video capture devices 2564, drones 2565, smart cities and building devices 2566, sensors and Internet-of-Things (IoT) devices 2567, etc.). In some such examples, the central office 2520, the cloud data center 2530, and/or portion(s) thereof, may implement one or more location engines to execute location detection operations with improved accuracy.
Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer endpoint devices, than at a base station, than at a central office). However, the closer that the edge location is to the endpoint (e.g., user equipment (UE)), the more that space and power are often constrained. Thus, edge computing attempts to reduce the amount of resources needed for network services, through the distribution of more resources which are located closer both geographically and in network access time. In this manner, edge computing attempts to bring the compute resources to the workload data where appropriate, or bring the workload data to the compute resources.
The following describes aspects of an edge cloud architecture that covers multiple potential deployments and addresses restrictions that some network operators or service providers may have in their own infrastructures. These include variation of configurations based on the edge location (because edges at a base station level, for instance, may have more constrained performance and capabilities in a multi-tenant scenario); configurations based on the type of compute, memory, storage, fabric, acceleration, or like resources available to edge locations, tiers of locations, or groups of locations; the service, security, and management and orchestration capabilities; and related objectives to achieve usability and performance of end services. These deployments may accomplish processing in network layers that may be considered as “near edge”, “close edge”, “local edge”, “middle edge”, or “far edge” layers, depending on latency, distance, and timing characteristics.
Edge computing is a developing paradigm where computing is performed at or closer to the “edge” of a network, typically through the use of a compute platform (e.g., x86 or ARM compute hardware architecture) implemented at base stations, gateways, network routers, or other devices which are much closer to endpoint devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with standardized compute hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices. Within edge computing networks, there may be scenarios in services which the compute resource will be “moved” to the data, as well as scenarios in which the data will be “moved” to the compute resource. Or as an example, base station compute, acceleration and network resources can provide services in order to scale to workload demands on an as needed basis by activating dormant capacity (subscription, capacity on demand) in order to manage corner cases, emergencies or to provide longevity for deployed resources over a significantly longer implemented lifecycle.
In contrast to the network architecture of
Depending on the real-time requirements in a communications context, a hierarchical structure of data processing and storage nodes may be defined in an edge computing deployment. For example, such a deployment may include local ultra-low-latency processing, regional storage and processing as well as remote cloud data-center based storage and processing. Key performance indicators (KPIs) may be used to identify where sensor data is best transferred and where it is processed or stored. This typically depends on the ISO layer dependency of the data. For example, lower layer (PHY, MAC, routing, etc.) data typically changes quickly and is better handled locally in order to meet latency requirements. Higher layer data such as Application Layer data is typically less time critical and may be stored and processed in a remote cloud data-center. At a more generic level, an edge computing system may be described to encompass any number of deployments operating in the edge cloud 2510, which provide coordination from client and distributed computing devices.
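The KPI-driven placement rule described above (lower-layer data handled locally, higher-layer data allowed to travel to a remote cloud data-center) can be sketched as a simple lookup with a latency-budget override. The tier names and threshold are illustrative assumptions:

```python
# Hypothetical placement rule: fast-changing lower-layer data stays local,
# Application Layer data may be stored and processed in a remote cloud.
PLACEMENT = {
    "PHY": "local",
    "MAC": "local",
    "routing": "local",
    "transport": "regional",
    "application": "cloud",
}

def place(layer: str, latency_budget_ms: float) -> str:
    """Select a processing tier for data of a given layer and latency budget."""
    tier = PLACEMENT.get(layer, "regional")
    # A tight latency budget overrides the default and forces local handling.
    return "local" if latency_budget_ms < 5 else tier
```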
Examples of latency, resulting from network communication distance and processing time constraints, may range from less than a millisecond (ms) when among the endpoint layer 2600, under 5 ms at the edge devices layer 2610, to even between 10 to 40 ms when communicating with nodes at the network access layer 2620. Beyond the edge cloud 2510 are core network 2630 and cloud data center 2632 layers, each with increasing latency (e.g., between 40-60 ms at the core network layer 2630, to 100 or more ms at the cloud data center layer 2640). As a result, operations at a core network data center 2635 or a cloud data center 2645, with latencies of at least 60 to 100 ms or more, will not be able to accomplish many time-critical functions of the use cases 2605. Each of these latency values is provided for purposes of illustration and contrast; it will be understood that the use of other access network mediums and technologies may further reduce the latencies. In some examples, respective portions of the network may be categorized as “close edge”, “local edge”, “near edge”, “middle edge”, or “far edge” layers, relative to a network source and destination. For instance, from the perspective of the core network data center 2635 or a cloud data center 2645, a central office or content data network may be considered as being located within a “near edge” layer (“near” to the cloud, having high latency values when communicating with the devices and endpoints of the use cases 2605), whereas an access point, base station, on-premise server, or network gateway may be considered as located within a “far edge” layer (“far” from the cloud, having low latency values when communicating with the devices and endpoints of the use cases 2605).
It will be understood that other categorizations of a particular network layer as constituting a “close”, “local”, “near”, “middle”, or “far” edge may be based on latency, distance, number of network hops, or other measurable characteristics, as measured from a source in any of the network layers 2600-2640.
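As one illustration of categorizing layers by a measurable characteristic, the latency bands quoted above can be expressed as a classifier. The function name and exact thresholds are assumptions drawn from the illustrative values in the text:

```python
def classify_layer(latency_ms: float) -> str:
    """Map a round-trip latency to the illustrative network layers described above."""
    if latency_ms < 1:
        return "endpoint"           # less than a millisecond among endpoints
    if latency_ms < 5:
        return "edge devices"       # under 5 ms at the edge devices layer
    if latency_ms <= 40:
        return "network access"     # 10 to 40 ms at the network access layer
    if latency_ms <= 60:
        return "core network"       # 40-60 ms at the core network layer
    return "cloud data center"      # 100 or more ms at the cloud data center layer
```

A comparable classifier could instead use distance or number of network hops, per the categorizations noted above.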
The various use cases 2605 may access resources under usage pressure from incoming streams, due to multiple services utilizing the edge cloud. For example, location detection of devices associated with such incoming streams of the various use cases 2605 is desired and may be achieved with example location engines as described herein. To achieve results with low latency, the services executed within the edge cloud 2510 balance varying requirements in terms of: (a) Priority (throughput or latency) and Quality of Service (QoS) (e.g., traffic for an autonomous car may have higher priority than a temperature sensor in terms of response time requirement; or, a performance sensitivity/bottleneck may exist at a compute/accelerator, memory, storage, or network resource, depending on the application); (b) Reliability and Resiliency (e.g., some input streams need to be acted upon and the traffic routed with mission-critical reliability, whereas some other input streams may tolerate an occasional failure, depending on the application); and (c) Physical constraints (e.g., power, cooling and form-factor).
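The priority aspect in (a) above, under which autonomous-car traffic outranks a temperature sensor, can be sketched as a priority queue over incoming streams. The priority table is a hypothetical example, not a disclosed policy:

```python
import heapq

# Hypothetical priorities: lower number = more urgent response time requirement.
PRIORITY = {"autonomous-car": 0, "video": 1, "temperature-sensor": 2}

def schedule(streams):
    """Return stream names in the order an edge service might act on them."""
    heap = [(PRIORITY[name], index, name) for index, name in enumerate(streams)]
    heapq.heapify(heap)
    return [name for _, _, name in (heapq.heappop(heap) for _ in range(len(heap)))]

order = schedule(["temperature-sensor", "autonomous-car", "video"])
```

The enumeration index breaks ties so that equal-priority streams are served in arrival order.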
The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed with the “terms” described may be managed at each layer in a way to assure real time, and runtime contractual compliance for the transaction during the lifecycle of the service. When a component in the transaction is missing its agreed to service level agreement (SLA), the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, and (2) augment other components in the system to resume overall transaction SLA, and (3) implement steps to remediate.
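The three-step response to an SLA violation described above, (1) understand the impact, (2) augment other components to resume the overall transaction SLA, and (3) remediate, can be sketched as follows. The component model (per-component latency contributions with optional headroom) is an illustrative assumption:

```python
def handle_sla_violation(components, violator):
    """Sketch of SLA recovery: quantify the overage of the violating component,
    then redistribute it across other components' latency headroom.
    `components` maps name -> {"actual_ms", "sla_ms", optional "headroom_ms"}."""
    impact = components[violator]["actual_ms"] - components[violator]["sla_ms"]  # (1) impact
    plan, remaining = {}, impact
    for name, c in components.items():                                           # (2) augment
        if name == violator or remaining <= 0:
            continue
        give = min(c.get("headroom_ms", 0), remaining)
        if give > 0:
            plan[name] = give
            remaining -= give
    return impact, plan, remaining <= 0                                          # (3) remediated?

components = {
    "network": {"actual_ms": 12, "sla_ms": 8},                   # 4 ms over its SLA
    "compute": {"actual_ms": 5, "sla_ms": 10, "headroom_ms": 5}, # 5 ms of slack
}
impact, plan, remediated = handle_sla_violation(components, "network")
```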
Thus, with these variations and service features in mind, edge computing within the edge cloud 2510 may provide the ability to serve and respond to multiple applications of the use cases 2605 (e.g., object tracking, location detection, video surveillance, connected cars, etc.) in real-time or near real-time, and meet ultra-low latency requirements for these multiple applications. These advantages enable a whole new class of applications (e.g., virtual network functions (VNFs), Function-as-a-Service (FaaS), Edge-as-a-Service (EaaS), standard processes, etc.), which cannot leverage conventional cloud computing due to latency or other limitations.
However, with the advantages of edge computing comes the following caveats. The devices located at the edge are often resource constrained and therefore there is pressure on usage of edge resources. Typically, this is addressed through the pooling of memory and storage resources for use by multiple users (tenants) and devices. The edge may be power and cooling constrained and therefore the power usage needs to be accounted for by the applications that are consuming the most power. There may be inherent power-performance tradeoffs in these pooled memory resources, as many of them are likely to use emerging memory technologies, where more power requires greater memory bandwidth. Likewise, improved security of hardware and root of trust trusted functions are also required, because edge locations may be unmanned and may even need permissioned access (e.g., when housed in a third-party location). Such issues are magnified in the edge cloud 2510 in a multi-tenant, multi-owner, or multi-access setting, where services and applications are requested by many users, especially as network usage dynamically fluctuates and the composition of the multiple stakeholders, use cases, and services changes.
At a more generic level, an edge computing system may be described to encompass any number of deployments at the previously discussed layers operating in the edge cloud 2510 (network layers 2610-2630), which provide coordination from client and distributed computing devices. One or more edge gateway nodes, one or more edge aggregation nodes, and one or more core data centers may be distributed across layers of the network to provide an implementation of the edge computing system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives.
Consistent with the examples provided herein, a client compute node may be embodied as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. Further, the label “node” or “device” as used in the edge computing system does not necessarily mean that such node or device operates in a client or agent/minion/follower role; rather, any of the nodes or devices in the edge computing system refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud 2510.
As such, the edge cloud 2510 is formed from network components and functional features operated by and within edge gateway nodes, edge aggregation nodes, or other edge compute nodes among network layers 2610-2630. The edge cloud 2510 thus may be embodied as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are discussed herein. In other words, the edge cloud 2510 may be envisioned as an “edge” which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks.
The network components of the edge cloud 2510 may be servers, multi-tenant servers, appliance computing devices, and/or any other type of computing devices. For example, the edge cloud 2510 may include an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case or a shell. In some examples, the edge cloud 2510 may include an appliance to be operated in harsh environmental conditions (e.g., extreme heat or cold ambient temperatures, strong wind conditions, wet or frozen environments, and the like). In some circumstances, the housing may be dimensioned for portability such that it can be carried by a human and/or shipped. Example housings may include materials that form one or more exterior surfaces that partially or fully protect contents of the appliance, in which protection may include weather protection, hazardous environment protection (e.g., electromagnetic interference (EMI), vibration, extreme temperatures), and/or enable submergibility. Example housings may include power circuitry to provide power for stationary and/or portable implementations, such as alternating current (AC) power inputs, direct current (DC) power inputs, AC/DC or DC/AC converter(s), power regulators, transformers, charging circuitry, batteries, wired inputs and/or wireless power inputs. Example housings and/or surfaces thereof may include or connect to mounting hardware to enable attachment to structures such as buildings, telecommunication structures (e.g., poles, antenna structures, etc.) and/or racks (e.g., server racks, blade mounts, etc.). Example housings and/or surfaces thereof may support one or more sensors (e.g., temperature sensors, vibration sensors, light sensors, acoustic sensors, capacitive sensors, proximity sensors, etc.). One or more such sensors may be contained in, carried by, or otherwise embedded in the surface and/or mounted to the surface of the appliance. 
Example housings and/or surfaces thereof may support mechanical connectivity, such as propulsion hardware (e.g., wheels, propellers, etc.) and/or articulating hardware (e.g., robot arms, pivotable appendages, etc.). In some circumstances, the sensors may include any type of input devices such as user interface hardware (e.g., buttons, switches, dials, sliders, etc.). In some circumstances, example housings include output devices contained in, carried by, embedded therein and/or attached thereto. Output devices may include displays, touchscreens, lights, light emitting diodes (LEDs), speakers, I/O ports (e.g., universal serial bus (USB)), etc. In some circumstances, edge devices are devices presented in the network for a specific purpose (e.g., a traffic light), but may have processing and/or other capacities that may be utilized for other purposes. Such edge devices may be independent from other networked devices and may be provided with a housing having a form factor suitable for its primary purpose; yet be available for other compute tasks that do not interfere with its primary task. Edge devices include IoT devices. The appliance computing device may include hardware and software components to manage local issues such as device temperature, vibration, resource utilization, updates, power issues, physical and network security, etc. The example processor systems of at least
In
Individual platforms or devices of the edge computing system 2800 are located at a particular layer corresponding to layers 2820, 2830, 2840, 2850, and 2860. For example, the client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f are located at an endpoint layer 2820, while the edge gateway platforms 2812a, 2812b, 2812c are located at an edge devices layer 2830 (local level) of the edge computing system 2800. Additionally, the edge aggregation platforms 2822a, 2822b (and/or fog platform(s) 2824, if arranged or operated with or among a fog networking configuration 2826) are located at a network access layer 2840 (an intermediate level). Fog computing (or “fogging”) generally refers to extensions of cloud computing to the edge of an enterprise's network or to the ability to manage transactions across the cloud/edge landscape, typically in a coordinated distributed or multi-node network. Some forms of fog computing provide the deployment of compute, storage, and networking services between end devices and cloud computing data centers, on behalf of the cloud computing locations. Some forms of fog computing also provide the ability to manage the workload/workflow level services, in terms of the overall transaction, by pushing certain workloads to the edge or to the cloud based on the ability to fulfill the overall service level agreement.
Fog computing in many scenarios provides a decentralized architecture and serves as an extension to cloud computing by collaborating with one or more edge node devices, providing the subsequent amount of localized control, configuration and management, and much more for end devices. Furthermore, fog computing provides the ability for edge resources to identify similar resources and collaborate to create an edge-local cloud which can be used solely or in conjunction with cloud computing to complete computing, storage or connectivity related services. Fog computing may also allow the cloud-based services to expand their reach to the edge of a network of devices to offer local and quicker accessibility to edge devices. Thus, some forms of fog computing provide operations that are consistent with edge computing as discussed herein; the edge computing aspects discussed herein are also applicable to fog networks, fogging, and fog configurations. Further, aspects of the edge computing systems discussed herein may be configured as a fog, or aspects of a fog may be integrated into an edge computing architecture.
The core data center 2832 is located at a core network layer 2850 (a regional or geographically central level), while the global network cloud 2842 is located at a cloud data center layer 2860 (a national or world-wide layer). The use of “core” is provided as a term for a centralized network location—deeper in the network—which is accessible by multiple edge platforms or components; however, a “core” does not necessarily designate the “center” or the deepest location of the network. Accordingly, the core data center 2832 may be located within, at, or near the edge cloud 2810. Although an illustrative number of client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f; edge gateway platforms 2812a, 2812b, 2812c; edge aggregation platforms 2822a, 2822b; edge core data centers 2832; and global network clouds 2842 are shown in
Consistent with the examples provided herein, a client compute platform (e.g., one of the client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f) may be implemented as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. For example, a client compute platform can include a mobile phone, a laptop computer, a desktop computer, a processor platform in an autonomous vehicle, etc. In additional or alternative examples, a client compute platform can include a camera, a sensor, etc. Further, the label “platform,” “node,” and/or “device” as used in the edge computing system 2800 does not necessarily mean that such platform, node, and/or device operates in a client or slave role; rather, any of the platforms, nodes, and/or devices in the edge computing system 2800 refer to individual entities, platforms, nodes, devices, and/or subsystems which include discrete and/or connected hardware and/or software configurations to facilitate and/or use the edge cloud 2810. Advantageously, example location engines as described herein may detect and/or otherwise determine locations of the client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f with improved performance and accuracy as well as with reduced latency.
As such, the edge cloud 2810 is formed from network components and functional features operated by and within the edge gateway platforms 2812a, 2812b, 2812c and the edge aggregation platforms 2822a, 2822b of layers 2830, 2840, respectively. The edge cloud 2810 may be implemented as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are shown in
In some examples, the edge cloud 2810 may form a portion of, or otherwise provide, an ingress point into or across a fog networking configuration 2826 (e.g., a network of fog platform(s) 2824, not shown in detail), which may be implemented as a system-level horizontal and distributed architecture that distributes resources and services to perform a specific function. For instance, a coordinated and distributed network of fog platform(s) 2824 may perform computing, storage, control, or networking aspects in the context of an IoT system arrangement. Other networked, aggregated, and distributed functions may exist in the edge cloud 2810 between the core data center 2832 and the client endpoints (e.g., client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f). Some of these are discussed in the following sections in the context of network functions or service virtualization, including the use of virtual edges and virtual services which are orchestrated for multiple tenants.
As discussed in more detail below, the edge gateway platforms 2812a, 2812b, 2812c and the edge aggregation platforms 2822a, 2822b cooperate to provide various edge services and security to the client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f. Furthermore, because a client compute platform (e.g., one of the client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f) may be stationary or mobile, a respective edge gateway platform 2812a, 2812b, 2812c may cooperate with other edge gateway platforms to propagate presently provided edge services, relevant service data, and security as the corresponding client compute platforms 2802a, 2802b, 2802c, 2802d, 2802e, 2802f move about a region. To do so, the edge gateway platforms 2812a, 2812b, 2812c and/or edge aggregation platforms 2822a, 2822b may support multiple tenancy and multiple tenant configurations, in which services from (or hosted for) multiple service providers, owners, and multiple consumers may be supported and coordinated across a single or multiple compute devices.
In examples disclosed herein, edge platforms in the edge computing system 2800 include meta-orchestration functionality. For example, edge platforms at the far-edge (e.g., edge platforms closer to edge users, the edge devices layer 2830, etc.) can reduce the performance or power consumption of orchestration tasks associated with far-edge platforms so that the execution of orchestration components at far-edge platforms consumes a small fraction of the power and performance available at far-edge platforms.
The orchestrators at various far-edge platforms participate in an end-to-end orchestration architecture. Examples disclosed herein anticipate that a comprehensive operating software framework (such as the Open Network Automation Platform (ONAP) or a similar platform) will be expanded, or options created within it, so that examples disclosed herein can be compatible with those frameworks. For example, orchestrators at edge platforms implementing examples disclosed herein can interface with ONAP orchestration flows and facilitate edge platform orchestration and telemetry activities. Orchestrators implementing examples disclosed herein act to regulate the orchestration and telemetry activities that are performed at edge platforms, including increasing or decreasing the power and/or resources expended by the local orchestration and telemetry components, delegating orchestration and telemetry processes to a remote computer and/or retrieving orchestration and telemetry processes from the remote computer when power and/or resources are available.
The remote devices described above are situated at alternative locations with respect to those edge platforms that are offloading telemetry and orchestration processes. For example, the remote devices described above can be situated, by contrast, at near-edge platforms (e.g., the network access layer 2840, the core network layer 2850, a central office, a mini-datacenter, etc.). By offloading telemetry and/or orchestration processes to near-edge platforms, an orchestrator at a near-edge platform is assured of a (comparatively) stable power supply and sufficient computational resources to facilitate execution of telemetry and/or orchestration processes. An orchestrator (e.g., operating according to a global loop) at a near-edge platform can take delegated telemetry and/or orchestration processes from an orchestrator (e.g., operating according to a local loop) at a far-edge platform. For example, if an orchestrator at a near-edge platform takes delegated telemetry and/or orchestration processes, then at some later time, the orchestrator at the near-edge platform can return the delegated telemetry and/or orchestration processes to an orchestrator at a far-edge platform as conditions change at the far-edge platform (e.g., as power and computational resources at the far-edge platform satisfy a threshold level, as higher levels of power and/or computational resources become available at the far-edge platform, etc.).
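For illustration purposes only, the delegation-and-reclaim pattern described above can be sketched as follows. The class names, headroom metrics, and threshold values are hypothetical assumptions chosen for this sketch, not part of the disclosed architecture; the hysteresis between the delegate and reclaim thresholds is one possible design choice to avoid rapid oscillation between states.

```python
# Hypothetical sketch: a far-edge orchestrator (local loop) delegates its
# telemetry/orchestration processes to a near-edge peer (global loop) when
# local power or compute headroom drops below a threshold, and reclaims them
# once headroom recovers. All names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class PlatformConditions:
    power_headroom: float   # fraction of power budget still available (0..1)
    cpu_headroom: float     # fraction of compute still available (0..1)

class FarEdgeOrchestrator:
    def __init__(self, delegate_below: float = 0.2, reclaim_above: float = 0.5):
        self.delegate_below = delegate_below
        self.reclaim_above = reclaim_above
        self.delegated = False

    def step(self, cond: PlatformConditions) -> str:
        """Return the action for this local-loop iteration."""
        worst = min(cond.power_headroom, cond.cpu_headroom)
        if not self.delegated and worst < self.delegate_below:
            self.delegated = True
            return "delegate-to-near-edge"
        if self.delegated and worst > self.reclaim_above:
            self.delegated = False
            return "reclaim-from-near-edge"
        return "remain-delegated" if self.delegated else "run-locally"
```

Using two thresholds (delegate below 0.2, reclaim above 0.5) rather than one models the "as conditions change" behavior: the far-edge platform only takes its processes back once resources comfortably exceed the threshold level.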
A variety of security approaches may be utilized within the architecture of the edge cloud 2810. In a multi-stakeholder environment, there can be multiple loadable security modules (LSMs) used to provision policies that enforce the stakeholders' interests, including those of tenants. In some examples, other operators, service providers, etc., may have security interests that compete with the tenant's interests. For example, tenants may prefer to receive full services (e.g., provided by an edge platform) for free, while service providers would like to get full payment for performing little work or incurring little cost. Enforcement point environments could support multiple LSMs that apply the combination of loaded LSM policies (e.g., where the most constrained effective policy is applied, such as where, if any of stakeholders A, B, or C restricts access, then access is restricted). Within the edge cloud 2810, each edge entity can provision LSMs that enforce that edge entity's interests. The cloud entity can provision LSMs that enforce the cloud entity's interests. Likewise, the various fog and IoT network entities can provision LSMs that enforce the fog entity's interests.
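The "most constrained effective policy" rule above can be sketched in a few lines. This is an illustrative toy model, not the LSM mechanism itself: the stakeholder names, the representation of a policy as a set of restricted resources, and the function name are all assumptions made for this sketch.

```python
# Illustrative sketch of combining loaded LSM policies by taking the most
# constrained result: access is granted only if NO stakeholder's policy
# restricts the resource. Policy shape and names are assumptions.

def effective_access(resource: str, lsm_policies: dict[str, set[str]]) -> bool:
    """Grant access only if no stakeholder's LSM restricts `resource`."""
    return all(resource not in restricted for restricted in lsm_policies.values())

# Hypothetical stakeholder policies, each listing restricted resources.
policies = {
    "tenant-A":   {"raw-telemetry"},
    "operator-B": set(),
    "provider-C": {"raw-telemetry", "billing-records"},
}
```

With these example policies, "billing-records" is denied because a single stakeholder (provider-C) restricts it, even though the other two do not, which is exactly the "if any of A, B, or C restricts access then access is restricted" behavior.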
In these examples, services may be considered from the perspective of a transaction, performed against a set of contracts or ingredients, whether considered at an ingredient level or a human-perceivable level. Thus, a user who has a service agreement with a service provider expects the service to be delivered under the terms of the SLA. Although not discussed in detail, the use of the edge computing techniques discussed herein may play roles during the negotiation of the agreement and the measurement of the fulfillment of the agreement (e.g., to identify what elements are required by the system to conduct a service, how the system responds to service conditions and changes, and the like).
Additionally, in examples disclosed herein, edge platforms and/or orchestration components thereof may consider several factors when orchestrating services and/or applications in an edge environment. These factors can include: next-generation central office smart network functions virtualization and service management; improving performance per watt at an edge platform and/or of orchestration components to overcome power limitations at edge platforms; reducing power consumption of orchestration components and/or an edge platform; improving hardware utilization to increase management and orchestration efficiency; providing physical and/or end-to-end security; providing individual tenant quality of service and/or service level agreement satisfaction; improving network equipment-building system compliance level for each use case and tenant business model; pooling acceleration components; and billing and metering policies to improve an edge environment.
A “service” is a broad term often applied to various contexts, but in general, it refers to a relationship between two entities where one entity offers and performs work for the benefit of another. However, the services delivered from one entity to another must be performed in accordance with certain guidelines, which ensure trust between the entities and manage the transaction according to the contract terms and conditions set forth at the beginning of, during, and at the end of the service.
An example relationship among services for use in an edge computing system is described below. In scenarios of edge computing, there are several services and transaction layers in operation that depend on each other; together, these services create a “service chain.” At the lowest level, ingredients compose systems. These systems and/or resources communicate and collaborate with each other in order to provide a multitude of services to each other as well as to other permanent or transient entities around them. In turn, these entities may provide human-consumable services. With this hierarchy, services offered at each tier must be transactionally connected to ensure that the individual component (or sub-entity) providing a service adheres to the contractually agreed-to objectives and specifications. Deviations at each layer could result in an overall impact to the entire service chain.
One type of service that may be offered in an edge environment hierarchy is Silicon Level Services. For instance, Software Defined Silicon (SDSi)-type hardware provides the ability to ensure low-level adherence to transactions, through the ability to intra-scale, manage, and assure the delivery of operational service level agreements. Use of SDSi and similar hardware controls provides the capability to associate features and resources within a system to a specific tenant and to manage the individual title (rights) to those resources. Use of such features is one way to dynamically “bring” the compute resources to the workload.
For example, an operational level agreement and/or service level agreement could define “transactional throughput” or “timeliness”—in the case of SDSi, the system and/or resource can sign up to guarantee specific service level specifications (SLS) and objectives (SLO) of a service level agreement (SLA). For example, SLOs can correspond to particular key performance indicators (KPIs) (e.g., frames per second, floating point operations per second, latency goals, etc.) of an application (e.g., service, workload, etc.), and an SLA can correspond to a platform level agreement to satisfy a particular SLO (e.g., one gigabyte of memory for 250 frames per second). SDSi hardware also provides the ability for the infrastructure and resource owner to empower the silicon component (e.g., components of a composed system that produce metric telemetry) to access and manage (add/remove) product features and freely scale hardware capabilities and utilization up and down. Furthermore, it provides the ability to make deterministic feature assignments on a per-tenant basis. It also provides the capability to tie deterministic orchestration and service management to the dynamic (or subscription-based) activation of features without the need to interrupt running services or client operations, or to reset or reboot the system.
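The SLO-to-KPI mapping described above can be illustrated with a minimal check of measured KPIs against SLO targets. The KPI names, target values, and comparison directions below are assumptions for this sketch (throughput-style KPIs must meet or exceed their target, latency must not exceed it); they are not a defined SDSi interface.

```python
# Minimal illustrative sketch: verify measured KPIs against SLO targets,
# in the spirit of "one gigabyte of memory for 250 frames per second."
# KPI names and targets are hypothetical.

SLO_TARGETS = {"frames_per_second": 250.0, "latency_ms": 20.0}

def slo_satisfied(measured: dict) -> bool:
    """Throughput KPIs must meet the target; latency must not exceed it."""
    return (measured["frames_per_second"] >= SLO_TARGETS["frames_per_second"]
            and measured["latency_ms"] <= SLO_TARGETS["latency_ms"])
```

A platform-level SLA check built on this could, for example, be evaluated against telemetry each monitoring interval and used to trigger the feature activation or scaling actions described above.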
At the lowest layer, SDSi can provide services and guarantees to systems to ensure active adherence to contractually agreed-to service level specifications that a single resource has to provide within the system. Additionally, SDSi provides the ability to manage the contractual rights (title), usage and associated financials of one or more tenants on a per component, or even silicon level feature (e.g., SKU features). Silicon level features may be associated with compute, storage or network capabilities, performance, determinism or even features for security, encryption, acceleration, etc. These capabilities ensure not only that the tenant can achieve a specific service level agreement, but also assist with management and data collection, and assure the transaction and the contractual agreement at the lowest manageable component level.
At a higher layer in the services hierarchy, Resource Level Services includes systems and/or resources which provide (completely or through composition) the ability to meet workload demands by either acquiring and enabling system level features via SDSi, or through the composition of individually addressable resources (compute, storage, and network). At yet a higher layer of the services hierarchy, Workflow Level Services is horizontal, since service chains may have workflow level requirements. Workflows describe dependencies between workloads in order to deliver specific service level objectives and requirements to the end-to-end service. These services may include features and functions such as high availability, redundancy, recovery, fault tolerance, and load leveling, among others. Workflow services define dependencies and relationships between resources and systems, describe requirements on associated networks and storage, and describe transaction level requirements and associated contracts in order to assure the end-to-end service. Workflow Level Services are usually measured in Service Level Objectives and have mandatory and expected service requirements.
At yet a higher layer of the services hierarchy, Business Functional Services (BFS) are operable, and these services are the different elements of the service which have relationships to each other and provide specific functions for the customer. In the case of edge computing and within the example of autonomous driving, business functions may compose a service such as a “timely arrival to an event.” This service would require several business functions to work together and in concert to achieve the goal of the user entity: GPS guidance, Road Side Unit (RSU) awareness of local traffic conditions, payment history of the user entity, authorization of the user entity to use resource(s), etc. Furthermore, as these BFS(s) provide services to multiple entities, each BFS manages its own SLA and is aware of its ability to deal with the demand on its own resources (Workload and Workflow). As requirements and demand increase, it communicates the service change requirements to Workflow and Resource Level Service entities, so they can, in turn, provide insight into their ability to fulfill those requirements. This step assists the overall transaction and service delivery to the next layer.
At the highest layer of services in the service hierarchy, Business Level Services (BLS) are tied to the capability that is being delivered. At this level, the customer or entity might not care about how the service is composed or what ingredients are used, managed, and/or tracked to provide the service(s). The primary objective of business level services is to attain the goals set by the customer according to the overall contract terms and conditions established between the customer and the provider, including the agreed-to financial arrangement. BLS(s) are composed of several Business Functional Services (BFS) and an overall SLA.
This arrangement and other service management features described herein are designed to meet the various requirements of edge computing with its unique and complex resource and service interactions. This service management arrangement is intended to inherently address several basic resource services within its framework, instead of through an agent or middleware capability. Services such as locate, find, address, trace, track, identify, and/or register may be placed immediately in effect as resources appear on the framework, and the manager or owner of the resource domain can use management rules and policies to ensure orderly resource discovery, registration, and certification.
Moreover, any number of edge computing architectures described herein may be adapted with service management features. These features may enable a system to be constantly aware of and record information about the motion, vector, and/or direction of resources, as well as fully describe these features as both telemetry and metadata associated with the devices. These service management features can be used for resource management, billing, and/or metering, as well as an element of security. The same functionality also applies to related resources, where a less intelligent device, like a sensor, might be attached to a more manageable resource, such as an edge gateway. The service management framework is made aware of changes of custody or encapsulation for resources. Since nodes and components may be directly accessible or managed indirectly through a parent or alternative responsible device for a short duration or for their entire lifecycles, this type of structure is relayed to the service framework through its interface and made available to external query mechanisms.
Additionally, this service management framework is always service aware and naturally balances the service delivery requirements with the capability and availability of the resources and the access needed to upload data to the data analytics systems. If the network transports degrade, fail, or change to a higher cost or lower bandwidth function, service policy monitoring functions provide alternative analytics and service delivery mechanisms within the privacy or cost constraints of the user. With these features, the policies can trigger the invocation of analytics and dashboard services at the edge, ensuring continuous service availability at reduced fidelity or granularity. Once network transports are re-established, regular data collection, upload, and analytics services can resume.
The deployment of a multi-stakeholder edge computing system may be arranged and orchestrated to enable the deployment of multiple services and virtual edge instances, among multiple edge platforms and subsystems, for use by multiple tenants and service providers. In a system example applicable to a cloud service provider (CSP), the deployment of an edge computing system may be provided via an “over-the-top” approach, to introduce edge computing platforms as a supplemental tool to cloud computing. In a contrasting system example applicable to a telecommunications service provider (TSP), the deployment of an edge computing system may be provided via a “network-aggregation” approach, to introduce edge computing platforms at locations in which network accesses (from different types of data access networks) are aggregated. However, these over-the-top and network aggregation approaches may be implemented together in a hybrid or merged approach or configuration.
Other example groups of IoT devices may include remote weather stations 2914, local information terminals 2916, alarm systems 2918, automated teller machines 2920, alarm panels 2922, or moving vehicles, such as emergency vehicles 2924 or other vehicles 2926, among many others. Each of these IoT devices may be in communication with other IoT devices, with servers 2904, with another IoT fog device or system, or a combination thereof. The groups of IoT devices may be deployed in various residential, commercial, and industrial settings (including in both private and public environments). Advantageously, example location engines as described herein may achieve location detection of one(s) of the IoT devices of the traffic control group 2906, one(s) of the IoT devices 2914, 2916, 2918, 2920, 2922, 2924, 2926, etc., and/or a combination thereof with improved performance, improved accuracy, and/or reduced latency.
As may be seen from
Clusters of IoT devices, such as the remote weather stations 2914 or the traffic control group 2906, may be equipped to communicate with other IoT devices as well as with the cloud 2900. This may allow the IoT devices to form an ad-hoc network between the devices, allowing them to function as a single device, which may be termed a fog device or system (e.g., as described above with reference to
The DDN control circuitry 3100 of
The DDN control circuitry 3100 of
The DDN control circuitry 3100 of the illustrated example includes example interface circuitry 3110, example configuration determination circuitry 3120, example location determination circuitry 3130, example connection evaluation circuitry 3140, example machine learning circuitry 3150, example configuration control circuitry 3160, an example datastore 3170, and an example bus 3180. In this example, the datastore 3170 includes an example policy and/or service level agreement (SLA) 3172, example node configuration data 3174 (identified by NODE CONFIG DATA), example telemetry data 3176, and example location data 3178.
In the illustrated example of FIG. 31, the interface circuitry 3110, the configuration determination circuitry 3120, the location determination circuitry 3130, the connection evaluation circuitry 3140, the machine learning circuitry 3150, the configuration control circuitry 3160, and/or the datastore 3170 are in communication with one(s) of each other via the bus 3180. For example, the bus 3180 can be implemented by at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a Peripheral Component Interconnect (PCI) bus, or a Peripheral Component Interconnect Express (PCIe or PCIE) bus. Additionally or alternatively, the bus 3180 can be implemented by any other type of computing or electrical bus.
In the illustrated example of
In some examples, the DDN control circuitry 3100 is instantiated by programmable circuitry executing the DDN control circuitry 3100 instructions and/or configured to perform operations such as those represented by the flowchart(s) of
In the illustrated example of
In some examples, the configuration control circuitry 3160 is instantiated by programmable circuitry executing configuration control instructions and/or configured to perform operations such as those represented by the flowchart(s) of
In some examples, the DDN control circuitry 3100 includes means for configuring, by executing an instruction with programmable circuitry, compute resources of the edge compute device based on a first resource demand associated with a first location of the edge compute device. For example, the means for configuring may be implemented by configuration determination circuitry 3120 and/or the configuration control circuitry 3160. In some examples, the configuration determination circuitry 3120 may be instantiated by programmable circuitry such as the example programmable circuitry 3712 of
In the illustrated example of
In some examples, the configuration control circuitry 3160 is instantiated by programmable circuitry executing configuration control instructions and/or configured to perform operations such as those represented by the flowchart(s) of
In some examples, the DDN control circuitry 3100 includes means for detecting a change in location of the edge compute device from a first location to a second location. For example, the means for detecting the change in location may be implemented by the location determination circuitry 3130. In some examples, the location determination circuitry 3130 may be instantiated by programmable circuitry such as the example programmable circuitry 3712 of
In the illustrated example of
In some examples, the DDN control circuitry 3100 includes means for evaluating a connection associated with an electronic device. For example, the connection evaluation circuitry 3140 can implement the means for evaluating. In some examples, the connection evaluation circuitry 3140 is instantiated by programmable circuitry executing configuration control instructions and/or configured to perform operations such as those represented by the flowchart(s) of
In some examples, the DDN control circuitry 3100 includes means for configuring network resources of an edge compute device based on a first spectrum availability associated with a first location of the edge compute device, and for reconfiguring the network resources of the edge compute device in response to detection of a change in location. For example, the means for configuring network resources may be implemented by the connection evaluation circuitry 3140. In some examples, the connection evaluation circuitry 3140 may be instantiated by programmable circuitry such as the example programmable circuitry 3712 of
In the illustrated example of
In some examples, the DDN control circuitry 3100 includes means for reconfiguring compute resources based on an output of a machine learning model, the machine learning model to process input telemetry data, the input telemetry data including at least one of a vendor identifier, an Internet Protocol address, or a media access control address. For example, the means for reconfiguring may be implemented by machine learning circuitry 3150. In some examples, the machine learning circuitry 3150 may be instantiated by programmable circuitry such as the example programmable circuitry 3712 of
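For illustration purposes only, the telemetry fields named above (a vendor identifier, an Internet Protocol address, and a media access control address) can be encoded into numeric features suitable for input to a machine learning model. The hashing scheme below is an assumption made for this sketch; it stands in for whatever feature preparation a real model pipeline would use, and no particular model is implied.

```python
# Illustrative sketch: encode identifier-style telemetry fields into stable
# numeric features in [0, 1) via hashing, as one hypothetical way to feed
# them to a machine learning model. Not the disclosed design.

import hashlib

def encode_telemetry(vendor_id: str, ip: str, mac: str) -> list:
    """Return one stable per-field hash feature in [0, 1) for each input."""
    def h(value: str) -> float:
        digest = hashlib.sha256(value.encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64
    return [h(vendor_id), h(ip), h(mac)]
```

Because the encoding is deterministic, the same device always maps to the same feature vector, which lets a model associate reconfiguration decisions with particular devices without storing the raw identifiers as model inputs.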
In the illustrated example of
In some examples, the configuration control circuitry 3160 is instantiated by programmable circuitry executing configuration control instructions and/or configured to perform operations such as those represented by the flowchart(s) of
In some examples, the DDN control circuitry 3100 includes means for reconfiguring, in response to detection of the change in location, the compute resources of the edge compute device based on a second resource demand associated with the second location. For example, the means for reconfiguring may be implemented by the configuration control circuitry 3160 and/or the configuration determination circuitry 3120. In some examples, the configuration control circuitry 3160 may be instantiated by programmable circuitry such as the example programmable circuitry 3712 of
In the illustrated example of
In some examples, the datastore 3170 may be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). The datastore 3170 may additionally or alternatively be implemented by one or more double data rate (DDR) memories, such as DDR, DDR2, DDR3, DDR4, DDR5, mobile DDR (mDDR), DDR SDRAM, etc. The datastore 3170 may additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s) (HDD(s)), compact disk (CD) drive(s), digital versatile disk (DVD) drive(s), solid-state disk (SSD) drive(s), Secure Digital (SD) card(s), CompactFlash (CF) card(s), etc. While in the illustrated example the datastore 3170 is illustrated as a single datastore, the datastore 3170 may be implemented by any number and/or type(s) of databases. Furthermore, the data stored in the datastore 3170 may be in any data format such as, for example, binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc. The term “database” as used herein means an organized body of related data, regardless of the manner in which the data or the organized body thereof is represented. For example, the organized body of related data may be in the form of one or more of a table, a map, a grid, a packet, a datagram, a frame, a file, an e-mail, a message, a document, a report, a list or in any other form.
While an example manner of implementing the DDN control circuitry 240 of
Accordingly, while an example manner of implementing the DDN control circuitry of
Flowchart(s) representative of example machine readable instructions, which may be executed by programmable circuitry to implement and/or instantiate the DDN control circuitry 240 and/or 3100 of
The program may be embodied in instructions (e.g., software and/or firmware) stored on one or more non-transitory computer readable and/or machine readable storage medium such as cache memory, a magnetic-storage device or disk (e.g., a floppy disk, a Hard Disk Drive (HDD), etc.), an optical-storage device or disk (e.g., a Blu-ray disk, a Compact Disk (CD), a Digital Versatile Disk (DVD), etc.), a Redundant Array of Independent Disks (RAID), a register, ROM, a solid-state drive (SSD), SSD memory, non-volatile memory (e.g., electrically erasable programmable read-only memory (EEPROM), flash memory, etc.), volatile memory (e.g., Random Access Memory (RAM) of any type, etc.), and/or any other storage device or storage disk. The instructions of the non-transitory computer readable and/or machine readable medium may program and/or be executed by programmable circuitry located in one or more hardware devices, but the entire program and/or parts thereof could alternatively be executed and/or instantiated by one or more hardware devices other than the programmable circuitry and/or embodied in dedicated hardware. The machine readable instructions may be distributed across multiple hardware devices and/or executed by two or more hardware devices (e.g., a server and a client hardware device). For example, the client hardware device may be implemented by an endpoint client hardware device (e.g., a hardware device associated with a human and/or machine user) or an intermediate client hardware device gateway (e.g., a radio access network (RAN)) that may facilitate communication between a server and an endpoint client hardware device. Similarly, the non-transitory computer readable storage medium may include one or more mediums. Further, although the example program is described with reference to the flowchart(s) illustrated in
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., computer-readable data, machine-readable data, one or more bits (e.g., one or more computer-readable bits, one or more machine-readable bits, etc.), a bitstream (e.g., a computer-readable bitstream, a machine-readable bitstream, etc.), etc.) or a data structure (e.g., as portion(s) of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices, disks and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and/or stored on separate computing devices, wherein the parts when decrypted, decompressed, and/or combined form a set of computer-executable and/or machine executable instructions that implement one or more functions and/or operations that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by programmable circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the machine-readable instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable, computer readable and/or machine readable media, as used herein, may include instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s).
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example operations of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc., may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, or (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. 
Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” object, as used herein, refers to one or more of that object. The terms “a” (or “an”), “one or more”, and “at least one” are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements, or actions may be implemented by, e.g., the same entity or object. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
At block 3204, the DDN control circuitry 3100 configures the network node to utilize a first wireless connection capability to execute workload(s) based on strength and quality of the first wireless connection. For example, the configuration control circuitry 3160 (
At block 3206, the DDN control circuitry 3100 stores a configuration of the network node. For example, the configuration control circuitry 3160 can store an association of the UE and a 5G cellular connection in the datastore 3170 (
At block 3208, the DDN control circuitry 3100 obtains telemetry data from the network node. For example, the interface circuitry 3110 (
At block 3210, the DDN control circuitry 3100 determines whether the first wireless connection strength and quality are below threshold(s) based on the telemetry data. For example, the connection evaluation circuitry 3140 (
If, at block 3210, the DDN control circuitry 3100 determines that the first wireless connection strength and quality are not below threshold(s) based on the telemetry data, control proceeds to block 3216. If, at block 3210, the DDN control circuitry 3100 determines that the first wireless connection strength and quality are below threshold(s) based on the telemetry data, control proceeds to block 3212. For example, the connection evaluation circuitry 3140 can determine that connection strength associated with a node is impacted by natural and/or unnatural events or conditions. In some examples, the connection evaluation circuitry 3140 can evaluate and/or otherwise determine connection strength based on signal fading loss, multipath fading, Doppler shift, power loss of transmitted or received signals, etc., and/or any combination(s) thereof. For example, a DDN node may be instantiated by a race car traveling at extreme speeds (e.g., 150, 200, etc., miles per hour (MPH)), and the connection evaluation circuitry 3140 may determine that a satellite connection with the DDN node has degraded and that a switch, such as a switch to 5G cellular, is needed.
At block 3212, the DDN control circuitry 3100 instructs the network node to switch over to a second wireless connection that has improved strength and quality with respect to the first wireless connection. For example, the configuration control circuitry 3160 can instruct the UE to switch from the 5G cellular connection to the Wi-Fi connection to achieve improved connection and/or network strength and quality.
At block 3214, the DDN control circuitry 3100 updates the configuration of the network node. For example, the configuration control circuitry 3160 can update the association of the UE and the 5G cellular connection to be an association of the UE and the Wi-Fi connection. In some examples, the configuration control circuitry 3160 can store the new/updated association as the node configuration data 3174.
At block 3216, the DDN control circuitry 3100 determines whether to continue monitoring the network. For example, the interface circuitry 3110 can determine whether the UE has left a coverage area. In some examples, the interface circuitry 3110 can determine whether additional telemetry data associated with the UE has been received. If, at block 3216, the interface circuitry 3110 determines to continue monitoring the network, control returns to block 3208, otherwise the example machine readable instructions and/or the example operations 3200 of
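The switchover logic of blocks 3208 through 3214 can be sketched as follows. This is a minimal illustration only; the threshold values, field names, and connection labels are assumptions for the example and are not specified by the disclosure:

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would come from a DDN policy or SLA.
STRENGTH_THRESHOLD_DBM = -90.0
QUALITY_THRESHOLD = 0.5


@dataclass
class Telemetry:
    strength_dbm: float  # received signal strength
    quality: float       # normalized link quality, 0.0 to 1.0


def evaluate_connection(telemetry: Telemetry) -> bool:
    """Block 3210 sketch: True when the first wireless connection's
    strength and/or quality fall below the threshold(s)."""
    return (telemetry.strength_dbm < STRENGTH_THRESHOLD_DBM
            or telemetry.quality < QUALITY_THRESHOLD)


def monitor_node(config: dict, telemetry: Telemetry) -> dict:
    """One pass of blocks 3208 to 3214: check telemetry and, if the current
    connection has degraded, switch to the fallback and update the stored
    configuration (blocks 3212 and 3214)."""
    if evaluate_connection(telemetry):
        config = {**config, "connection": config["fallback"]}
    return config


cfg = {"connection": "5g_cellular", "fallback": "wifi"}
cfg = monitor_node(cfg, Telemetry(strength_dbm=-105.0, quality=0.2))
# The degraded 5G link triggers a switch to the Wi-Fi fallback.
```

In a full implementation, `monitor_node` would run inside the monitoring loop of block 3216, exiting when the UE leaves the coverage area.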
At block 3304, the DDN control circuitry 3100 determines environmental conditions at the DDN_NODE_ID. For example, the machine learning circuitry 3150 (
At block 3306, the DDN control circuitry 3100 determines communication signal strength and quality of each wireless gNB connected to the DDN_NODE_ID. For example, the connection evaluation circuitry 3140 (
At block 3308, the DDN control circuitry 3100 determines communication signal strength and quality of each wireless sNB connected to the DDN_NODE_ID. For example, the connection evaluation circuitry 3140 can determine communication signal strength and quality of each sNB in communication with the DDN node.
At block 3310, the DDN control circuitry 3100 obtains data and quality of each passive sensor connected to the DDN_NODE_ID. For example, the interface circuitry 3110 (
At block 3312, the DDN control circuitry 3100 obtains data and quality of each active sensor connected to the DDN_NODE_ID. For example, the interface circuitry 3110 can obtain data from the DDN node that corresponds to sensor data from active sensor(s) obtained by the DDN node.
At block 3314, the DDN control circuitry 3100 obtains active or potentially active UE/Gateway connections at the DDN_NODE_ID. For example, the interface circuitry 3110 can obtain data associated with active or potentially active UE/Gateway connections at the DDN node.
At block 3316, the DDN control circuitry 3100 determines whether there is a DDN AI engine recommendation. For example, the machine learning circuitry 3150 can execute and/or instantiate an AI/ML model to output a recommendation indicative of the DDN node to switch network connections for improved communication signal strength and/or quality.
If, at block 3316, the DDN control circuitry 3100 determines that there is not a DDN AI engine recommendation, control proceeds to block 3318. At block 3318, the DDN control circuitry 3100 records a current configuration of the DDN_NODE_ID (including wireless, passive, active sensor, and/or environment data) in a database (DB). For example, the configuration determination circuitry 3120 (
At block 3320, the DDN control circuitry 3100 obtains a DDN policy recommendation. For example, the configuration determination circuitry 3120 can determine that the DDN node is to use a network connection in accordance with requirements of a DDN policy or SLA, which can include bandwidth requirements, latency requirements, throughput requirements, etc. In response to obtaining the DDN policy recommendation at block 3320, control proceeds to block 3324.
If, at block 3316, the DDN control circuitry 3100 determines that there is a DDN AI engine recommendation, control proceeds to block 3322. At block 3322, the DDN control circuitry 3100 configures/reconfigures the DDN_NODE_ID network assets as per the DDN AI recommendation via a DDN control circuitry. For example, the configuration control circuitry 3160 (
In response to configuring/reconfiguring the DDN_NODE_ID network assets as per the DDN AI recommendation via a DDN control circuitry at block 3322, control proceeds to block 3324. At block 3324, the DDN control circuitry 3100 determines whether to continue monitoring the network. For example, the interface circuitry 3110 can determine whether the DDN node has left a coverage area. In some examples, the interface circuitry 3110 can determine whether additional telemetry data associated with the DDN node has been received. If, at block 3324, the interface circuitry 3110 determines to continue monitoring the network, control returns to block 3302. Otherwise, the example machine readable instructions and/or the example operations 3300 of
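The branch at block 3316, preferring a DDN AI engine recommendation and falling back to the DDN policy, can be sketched as below. The dictionary shape and the policy field names are illustrative assumptions:

```python
def select_node_configuration(ai_recommendation, policy):
    """Blocks 3316 to 3322 sketch: apply the DDN AI engine recommendation
    when one exists; otherwise record the current configuration and apply
    the DDN policy (bandwidth, latency, throughput requirements, etc.).
    `ai_recommendation` is None when the AI engine produced no output."""
    if ai_recommendation is not None:
        # Block 3322: configure/reconfigure per the AI recommendation.
        return {"source": "ai", "connection": ai_recommendation}
    # Blocks 3318/3320: fall back to the DDN policy recommendation.
    return {"source": "policy", "connection": policy["preferred_connection"]}


policy = {"preferred_connection": "5g_cellular", "min_bandwidth_mbps": 100}
choice = select_node_configuration(None, policy)           # no AI output
choice_ai = select_node_configuration("satellite", policy)  # AI output wins
```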
At block 3404, the DDN control circuitry 3100 identifies active or potentially active network connections. For example, the connection evaluation circuitry 3140 (
At block 3406, the DDN control circuitry 3100 configures one(s) of the cores to optimize execution of workloads associated with the network connections. For example, the configuration control circuitry 3160 (
At block 3408, the DDN control circuitry 3100 obtains telemetry data associated with the network connections. For example, the interface circuitry 3110 (
At block 3410, the DDN control circuitry 3100 executes AI/ML algorithms on the telemetry data to generate a core configuration recommendation. For example, the machine learning circuitry 3150 (
At block 3412, the DDN control circuitry 3100 determines whether to configure/reconfigure one(s) of the cores based on the core configuration recommendation. For example, the configuration control circuitry 3160 can determine that the recommendation from the AI/ML model is indicative of a recommendation to configure or reconfigure a configuration of one or more of the compute cores 408.
If, at block 3412, the DDN control circuitry 3100 determines not to configure/reconfigure one(s) of the cores based on the core configuration recommendation, control proceeds to block 3416. If, at block 3412, the DDN control circuitry 3100 determines to configure/reconfigure one(s) of the cores based on the core configuration recommendation, control proceeds to block 3414.
At block 3414, the DDN control circuitry 3100 configures/reconfigures one(s) of the cores based on the core configuration recommendation. For example, the configuration control circuitry 3160 can configure one or more of the cores 408 based on the core configuration recommendation.
At block 3416, the DDN control circuitry 3100 determines whether to continue monitoring the network. For example, the interface circuitry 3110 can determine whether new telemetry data associated with the DDN node has been received, the DDN node is within or has left a coverage area, etc. If, at block 3416, the DDN control circuitry 3100 determines to continue monitoring the network, control returns to block 3402, otherwise the example machine readable instructions and/or the example operations 3400 of
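The core-configuration loop of blocks 3410 through 3414 can be sketched as follows, with a toy heuristic standing in for the AI/ML model. The scaling rule, core limit, and telemetry field names are assumptions for illustration only:

```python
def recommend_core_count(telemetry):
    """Hypothetical stand-in for the AI/ML model of block 3410: map the
    aggregate connection load to a recommended number of active cores."""
    load = telemetry["active_connections"] * telemetry["avg_mbps"]
    return min(8, max(1, round(load / 250)))


def reconfigure_cores(current_cores, telemetry):
    """Blocks 3410 to 3414: apply the recommendation only when it differs
    from the current core configuration (the block 3412 decision)."""
    recommended = recommend_core_count(telemetry)
    return recommended if recommended != current_cores else current_cores


# Ten active connections averaging 150 Mbps each recommend six cores.
cores = reconfigure_cores(2, {"active_connections": 10, "avg_mbps": 150})
```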
At block 3504, the DDN control circuitry 3100 identifies at least one of security or privacy requirements associated with the network node based on a service level agreement. For example, the configuration determination circuitry 3120 can identify whether the DDN node has privacy requirements such as opting out of a particular network connection such as 5G cellular, Bluetooth, etc., based on a service level agreement, a policy, etc.
At block 3506, the DDN control circuitry 3100 identifies application(s) executing on the network node. For example, the configuration determination circuitry 3120 can obtain a list of one or more applications, services, etc., that the DDN node is executing.
At block 3508, the DDN control circuitry 3100 obtains telemetry data associated with the network connections. For example, the interface circuitry 3110 (
At block 3510, the DDN control circuitry 3100 executes AI/ML algorithms to generate a network node configuration recommendation. For example, the machine learning circuitry 3150 can execute and/or instantiate an AI/ML model to generate a network node configuration recommendation, which can include a determination that the DDN node is to switch from 5G cellular to Wi-Fi to achieve improved execution of the application(s) executing on the DDN node.
At block 3512, the DDN control circuitry 3100 determines whether to reconfigure the network node based on the network node configuration recommendation. For example, the configuration control circuitry 3160 can determine whether the network node configuration recommendation is indicative of a change to a network connection that the DDN node is utilizing for improved performance.
If, at block 3512, the DDN control circuitry 3100 determines not to reconfigure the network node based on the network node configuration recommendation, control proceeds to block 3516. If, at block 3512, the DDN control circuitry 3100 determines to reconfigure the network node based on the network node configuration recommendation, control proceeds to block 3514.
At block 3514, the DDN control circuitry 3100 reconfigures the network node based on the network node configuration recommendation. For example, the configuration control circuitry 3160 can send data to the DDN node to cause the DDN node to switch from 5G cellular to Wi-Fi to achieve improved performance.
At block 3516, the DDN control circuitry 3100 determines whether to continue monitoring the network node. For example, the interface circuitry 3110 can determine whether new telemetry data associated with the DDN node has been received, the DDN node is within or has left a coverage area, etc. If, at block 3516, the DDN control circuitry 3100 determines to continue monitoring the network node, control returns to block 3502, otherwise the example machine readable instructions and/or the example operations 3500 of
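The SLA-aware recommendation of blocks 3504 through 3510 can be sketched as below. The SLA structure, the opt-out field, and the quality scores are illustrative assumptions; a real recommendation would come from the AI/ML model of block 3510:

```python
def allowed_connections(all_connections, sla):
    """Block 3504 sketch: remove connections the node has opted out of
    under a service level agreement or privacy policy."""
    return [c for c in all_connections if c not in sla.get("opt_out", [])]


def recommend_connection(connections, quality):
    """Block 3510 sketch: pick the allowed connection with the best
    observed quality (a stand-in for the AI/ML model's recommendation)."""
    return max(connections, key=lambda c: quality.get(c, 0.0))


sla = {"opt_out": ["bluetooth"]}
conns = allowed_connections(["5g_cellular", "wifi", "bluetooth"], sla)
best = recommend_connection(conns, {"5g_cellular": 0.4, "wifi": 0.8})
```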
Slicing of the edge compute device allows multiple services, UEs, applications, etc., to share the physical infrastructure of the edge compute device. Slicing provides improved flexibility and scalability of the edge compute device, as each slice may be tailored to the specific needs of UEs, applications, services, etc., that have requested resources from the edge compute device. The DDN control circuitry 3100 may self-configure and/or receive configuration instructions to prioritize some resource requests and/or allocate additional resources to a slice. Configuration of the edge compute device (e.g., a first edge compute device) may also involve communication with a second edge compute device to provide capabilities beyond that of the first edge compute device alone.
At block 3554, the example location determination circuitry 3130 detects a change in location of the edge compute device to a second location. For example, the location determination circuitry 3130 may determine a distance from a wireless communication tower has increased, which may result in reduced wireless connectivity for one or more UEs. Such information may be provided by the location determination circuitry 3130 and/or the configuration determination circuitry 3120 to change a configuration of the edge compute device to provide enhanced capabilities to the UE or to a terrestrial satellite.
In some examples, the location determination circuitry 3130 may determine a physical location that is associated with increased network congestion. That is, in an area with many UEs and/or other devices (e.g., a busy downtown, an airport, an area with many IoT sensors, etc.) that request resources from the edge compute device, the edge compute device may provide increased power and/or bandwidth (e.g., reconfigure the edge compute device to increase processing and/or network capabilities) to satisfy the demand. Thus, rather than being overwhelmed by the increased density of UEs in or near the second location, leading to slower speeds and poorer connectivity, the edge compute device can allocate increased resources to satisfy the demand. The configuration determination circuitry 3120 may also determine a change in location and reallocate resources (e.g., increase resource capabilities) based on an analysis of the geographic topography of the location (e.g., an obstruction that can affect connectivity).
At block 3556, the DDN control circuitry 3100 reconfigures the compute resource of the edge compute device based on a second resource demand associated with the second location. For example, a slice can be reconfigured to allocate resources, such as CPU, memory, and storage, based on a change in resource demand associated with the second location. The configuration determination circuitry 3120, the machine learning circuitry 3150, the configuration control circuitry 3160, and/or more generally any portion of the DDN control circuitry 3100 may change a network configuration, change an IP address, change processing capabilities, change an operating system for a slice of the virtual machine (VM), launch a container, install or remove software, change system settings, apply an update, etc., in response to the second resource demand associated with the second location. In some examples, the edge compute device and/or any processor circuitry associated with the edge compute device may instantiate additional virtual partitions (e.g., with related resources and settings) that can be provided to satisfy the demand associated with the second location.
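A slice reconfiguration of the kind described at block 3556 can be sketched as follows. The scaling rule (one core per ten expected UEs, a memory scale factor) and all field names are hypothetical, chosen only to make the example concrete:

```python
def reconfigure_slice(slice_cfg, demand):
    """Block 3556 sketch: scale a slice's CPU, memory, and storage to a
    new resource demand at the second location."""
    return {
        "cpu_cores": max(1, demand["expected_ues"] // 10),
        "memory_gb": slice_cfg["memory_gb"] * demand.get("scale", 1),
        "storage_gb": slice_cfg["storage_gb"],
    }


slice_cfg = {"cpu_cores": 2, "memory_gb": 4, "storage_gb": 100}
# Eighty UEs expected at the new location, with memory demand doubling.
new_cfg = reconfigure_slice(slice_cfg, {"expected_ues": 80, "scale": 2})
```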
The edge compute device may, for example, reconfigure the compute resources based on an output of a machine learning model, the machine learning model to process input telemetry data, the input telemetry data including at least one of a vendor identifier, an Internet Protocol address, or a media access control address. In some examples, interface circuitry 3110, the configuration determination circuitry 3120, and/or the machine learning circuitry 3150 may collect telemetry data associated with a resource demand, the telemetry data including: a timestamp associated with the first resource demand, a number of compute cores assigned to the first resource demand, or network communication metrics associated with the first resource demand.
The reconfiguration may include launching a slice, creating a clone of a slice, deployment of additional VMs, etc. In some examples, the configuration control circuitry 3160 may reconfigure the compute resources to adjust a wireless capability of the edge compute device (e.g., modify a Wi-Fi connection, a cellular connection, a Bluetooth connection, etc.). For example, the DDN control circuitry 3100 may enable and/or disable a network adapter, a modem, and/or any communication/interface circuitry. The connection evaluation circuitry 3140 may also obtain telemetry data including a communication signal strength associated with an electronic device in communication with the edge compute device and cause, based on the telemetry data, the electronic device to switch from a first communication network to a second communication network to communicate with the edge compute device. Therefore, the configuration determination circuitry 3120, the configuration control circuitry 3160, and/or the connection evaluation circuitry 3140 may evaluate a network strength (e.g., determine signal strength and quality), as well as evaluate other factors such as network congestion and network interference. In some examples, the connection evaluation circuitry 3140 may also prioritize networks based on quality of service requirements, etc.
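A quality-of-service-based prioritization of the kind the connection evaluation circuitry 3140 may perform can be sketched as below. The scoring rule, the congestion penalty weight, and the field names are illustrative assumptions, not part of the disclosure:

```python
def prioritize_networks(networks, qos):
    """Sketch of network prioritization: drop candidates that miss the
    QoS latency bound, then rank the remainder by signal strength net of
    a congestion penalty (10 dB per unit of normalized congestion)."""
    eligible = [n for n in networks if n["latency_ms"] <= qos["max_latency_ms"]]
    return sorted(eligible,
                  key=lambda n: n["strength_dbm"] - 10 * n["congestion"],
                  reverse=True)


nets = [
    {"name": "wifi", "strength_dbm": -45, "latency_ms": 12, "congestion": 0.7},
    {"name": "5g_cellular", "strength_dbm": -70, "latency_ms": 20, "congestion": 0.1},
    {"name": "satellite", "strength_dbm": -80, "latency_ms": 600, "congestion": 0.0},
]
# The satellite link misses the 50 ms latency bound and is excluded.
ranked = prioritize_networks(nets, {"max_latency_ms": 50})
```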
The instructions 3550 end. However, additional instances of the instructions 3550 can be executed in response to, for example, a subsequent change in location and/or change in demand. As an illustrative example of the instructions 3550 in action, an electric vehicle may be equipped with an edge server executing the instructions 3550. The edge server may include the DDN control circuitry 3100 to, for example, control a wireless hotspot to provide network access, provide compute capabilities to devices within or outside of the electric vehicle, etc. Thus, the electric vehicle may execute the instructions 3552 to configure compute resources of the edge compute device and provide resources to endpoint devices (e.g., UEs proximate to the vehicle). The electric vehicle may change location, such as when a driver of the electric vehicle drives to a new geographic location. The DDN control circuitry 3100 can then reconfigure the compute resources of the edge compute device based on the second location and/or change in resource demand associated with the second location. For example, a server of the electric vehicle could reconfigure a VM executing on the server to provide additional resources (e.g., a web server, a database server, wireless networking capabilities) to UEs that come into range of the moving electric vehicle. The resource demand may be associated with any combination of devices within or outside of the electric vehicle (e.g., any device on the electric vehicle's network).
Otherwise, the instructions continue at block 3562, at which the configuration determination circuitry 3120 determines if the DDN control circuitry 3100 is to reconfigure the compute resources by changing a frequency of the processor circuitry. If so, at block 3564 the configuration control circuitry 3160 changes a clock frequency of at least one of the plurality of processor cores. If not, control continues to block 3566 at which the configuration determination circuitry 3120 determines if it is to reconfigure the compute resources by modifying active cores.
At block 3566, the DDN control circuitry 3100 determines if it is to reconfigure the compute resources by modifying active cores. If so, control continues at block 3568 at which the configuration control circuitry 3160 deactivates and/or activates a processor core associated with an instruction set architecture that is different than a first instruction set architecture. For example, the DDN control circuitry 3100 may deactivate a first one of the plurality of processor cores, the first one of the plurality of processor cores associated with a first instruction set architecture (ISA) and activate a second one of the plurality of processor cores, the second one of the plurality of processor cores associated with a second ISA different than the first ISA. The instructions end.
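The core swap of blocks 3566 and 3568, deactivating a core of a first ISA and activating a core of a different ISA, can be sketched as follows. The class, core identifiers, and ISA labels are hypothetical, introduced only for illustration:

```python
class HeterogeneousCpu:
    """Sketch of blocks 3566 to 3568: deactivate a core associated with a
    first instruction set architecture and activate a core associated
    with a second, different ISA."""

    def __init__(self, cores):
        self.cores = cores  # {core_id: {"isa": str, "active": bool}}

    def swap_isa(self, deactivate_id, activate_id):
        # Guard against a no-op swap between cores with the same ISA.
        if self.cores[deactivate_id]["isa"] == self.cores[activate_id]["isa"]:
            raise ValueError("target core must use a different ISA")
        self.cores[deactivate_id]["active"] = False
        self.cores[activate_id]["active"] = True


cpu = HeterogeneousCpu({
    0: {"isa": "x86-64", "active": True},
    1: {"isa": "arm64", "active": False},
})
cpu.swap_isa(deactivate_id=0, activate_id=1)
```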
The IoT device 3650 may include processor circuitry in the form of, for example, a processor 3652, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, or other known processing elements. The processor 3652 may be a part of a system on a chip (SoC) in which the processor 3652 and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel. As an example, the processor 3652 may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or a microcontroller (MCU)-class processor, or another such processor available from Intel® Corporation, Santa Clara, Calif. However, any number of other processors may be used, such as a processor available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, Calif., a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, Calif., an ARM-based design licensed from ARM Holdings, Ltd. or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A14 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc.
The processor 3652 may communicate with a system memory 3654 over an interconnect 3656 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4). In various implementations the individual memory devices may be of any number of different package types such as single die package (SDP), dual die package (DDP) or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs.
To provide for persistent storage of information such as data, applications, operating systems and so forth, a storage 3658 may also couple to the processor 3652 via the interconnect 3656. In an example the storage 3658 may be implemented via a solid state disk drive (SSDD). Other devices that may be used for the storage 3658 include flash memory cards, such as SD cards, microSD cards, xD picture cards, and the like, and USB flash drives. In low power implementations, the storage 3658 may be on-die memory or registers associated with the processor 3652. However, in some examples, the storage 3658 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage 3658 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others.
The components may communicate over the interconnect 3656. The interconnect 3656 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 3656 may be a proprietary bus, for example, used in a SoC based system. Other bus systems may be included, such as an I2C interface, an SPI interface, point to point interfaces, and a power bus, among others.
Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components 3662, 3666, 3668, or 3670. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.
The interconnect 3656 may couple the processor 3652 to a mesh transceiver 3662, for communications with other mesh devices 3664. The mesh transceiver 3662 may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the mesh devices 3664. For example, a wireless local area network (WLAN) unit may be used to implement Wi-Fi™ communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless wide area network (WWAN) unit.
The mesh transceiver 3662 may communicate using multiple standards or radios for communications at different ranges. For example, the IoT device 3650 may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on BLE, or another low power radio, to save power. More distant mesh devices 3664, e.g., within about 50 meters, may be reached over ZigBee or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels, or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee.
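The range-based radio selection described above can be sketched as below. The approximate 10 m and 50 m figures come from the description; the function and radio labels are illustrative assumptions:

```python
def select_radio(distance_m):
    """Sketch of multi-radio selection: BLE for close devices (within
    about 10 m) to save power, ZigBee or another intermediate power radio
    out to about 50 m, and a wide-area transceiver beyond that. These
    ranges are the approximate figures above, not hard protocol limits."""
    if distance_m <= 10:
        return "ble"
    if distance_m <= 50:
        return "zigbee"
    return "wwan"


radio = select_radio(30)  # an intermediate-range mesh device
```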
A wireless network transceiver 3666 may be included to communicate with devices or services in the cloud 3600 via local or wide area network protocols. The wireless network transceiver 3666 may be an LPWA transceiver that follows the IEEE 802.15.4, or IEEE 802.15.4g standards, among others. The IoT device 3650 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies, but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used.
Any number of other radio communications and protocols may be used in addition to the systems mentioned for the mesh transceiver 3662 and wireless network transceiver 3666, as described herein. For example, the radio transceivers 3662 and 3666 may include an LTE or other cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications.
The radio transceivers 3662 and 3666 may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, notably Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Long Term Evolution-Advanced Pro (LTE-A Pro). It may be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any Cellular Wide Area radio communication technology, which may include, e.g., a 5th Generation (5G) communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, or a UMTS (Universal Mobile Telecommunications System) communication technology. In addition to the standards listed above, any number of satellite uplink technologies may be used for the wireless network transceiver 3666, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union), or the ETSI (European Telecommunications Standards Institute), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.
A network interface controller (NIC) 3668 may be included to provide a wired communication to the cloud 3600 or to other devices, such as the mesh devices 3664. The wired communication may provide an Ethernet connection, or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC 3668 may be included to allow connection to a second network, for example, a NIC 3668 providing communications to the cloud over Ethernet, and a second NIC 3668 providing communications to other devices over another type of network.
The interconnect 3656 may couple the processor 3652 to an external interface 3670 that is used to connect external devices or subsystems. The external devices may include sensors 3672, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global positioning system (GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The external interface 3670 further may be used to connect the IoT device 3650 to actuators 3674, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
In some optional examples, various input/output (I/O) devices may be present within, or connected to, the IoT device 3650. For example, a display or other output device 3684 may be included to show information, such as sensor readings or actuator position. An input device 3686, such as a touch screen or keypad may be included to accept input. An output device 3684 may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., LEDs) and multi-character visual outputs, or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the IoT device 3650.
A battery 3676 may power the IoT device 3650, although in examples in which the IoT device 3650 is mounted in a fixed location, it may have a power supply coupled to an electrical grid. The battery 3676 may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like.
A battery monitor/charger 3678 may be included in the IoT device 3650 to track the state of charge (SoCh) of the battery 3676. The battery monitor/charger 3678 may be used to monitor other parameters of the battery 3676 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 3676. The battery monitor/charger 3678 may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix Ariz., or an IC from the UCD90xxx family from Texas Instruments of Dallas, Tex. The battery monitor/charger 3678 may communicate the information on the battery 3676 to the processor 3652 over the interconnect 3656. The battery monitor/charger 3678 may also include an analog-to-digital converter (ADC) that allows the processor 3652 to directly monitor the voltage of the battery 3676 or the current flow from the battery 3676. The battery parameters may be used to determine actions that the IoT device 3650 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.
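Using battery parameters to throttle device behavior, such as transmission frequency, can be sketched as follows. The state-of-charge thresholds and intervals are illustrative assumptions only:

```python
def transmit_interval_s(state_of_charge):
    """Sketch of battery-aware duty cycling: a lower state of charge
    produces a longer interval between transmissions to conserve power.
    `state_of_charge` is normalized to the range 0.0 to 1.0."""
    if state_of_charge >= 0.8:
        return 10    # healthy battery: report every 10 seconds
    if state_of_charge >= 0.3:
        return 60    # conserve: report once a minute
    return 600       # critical: report every 10 minutes


# A battery at 25% charge falls into the critical band.
interval = transmit_interval_s(0.25)
```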
A power block 3680, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 3678 to charge the battery 3676. In some examples, the power block 3680 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the IoT device 3650. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, Calif., among others, may be included in the battery monitor/charger 3678. The specific charging circuit chosen depends on the size of the battery 3676 and, thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard, promulgated by the Alliance for Wireless Power, among others.
The storage 3658 may include instructions 3682 in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions 3682 are shown as code blocks included in the memory 3654 and the storage 3658, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application specific integrated circuit (ASIC).
In an example, the instructions 3682 provided via the memory 3654, the storage 3658, or the processor 3652 may be embodied as a non-transitory, machine readable medium 3660 including code to direct the processor 3652 to perform electronic operations in the IoT device 3650. The processor 3652 may access the non-transitory, machine readable medium 3660 over the interconnect 3656. For instance, the non-transitory, machine readable medium 3660 may be embodied by devices described for the storage 3658 of
Also in a specific example, the instructions 3682 on the processor 3652 (separately, or in combination with the instructions 3682 of the machine readable medium 3660) may configure execution or operation of a trusted execution environment (TEE) 3690. In an example, the TEE 3690 operates as a protected area accessible to the processor 3652 for secure execution of instructions and secure access to data. Various implementations of the TEE 3690, and an accompanying secure area in the processor 3652 or the memory 3654 may be provided, for instance, through use of Intel® Software Guard Extensions (SGX) or ARM® TrustZone® hardware security extensions, Intel® Management Engine (ME), or Intel® Converged Security Manageability Engine (CSME). Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the IoT device 3650 through the TEE 3690 and the processor 3652.
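The attest-then-execute pattern that a TEE such as the TEE 3690 enables can be modeled abstractly as follows. The class, method names, and measurement scheme below are purely illustrative assumptions; real TEEs such as SGX or TrustZone expose very different interfaces.

```python
# Conceptual sketch of trusted-execution gating: sensitive work is performed
# only after the protected area's code measurement is verified. All names
# here are hypothetical and do not correspond to any vendor TEE API.
import hashlib

class Enclave:
    def __init__(self, code: bytes):
        self.code = code
        # A hash of the loaded code stands in for an attestation measurement.
        self.measurement = hashlib.sha256(code).hexdigest()

    def execute(self, expected_measurement: str, data: bytes) -> bytes:
        # Refuse to run if the enclave contents do not match what was attested.
        if self.measurement != expected_measurement:
            raise PermissionError("attestation failed")
        # Stand-in for secure computation over protected data.
        return hashlib.sha256(self.code + data).digest()

enclave = Enclave(b"trusted-routine-v1")
result = enclave.execute(enclave.measurement, b"secret-sensor-data")
```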
The programmable circuitry platform 3700 of the illustrated example includes programmable circuitry 3712. The programmable circuitry 3712 of the illustrated example is hardware. For example, the programmable circuitry 3712 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The programmable circuitry 3712 may be implemented by one or more semiconductor based (e.g., silicon based) devices. In this example, the programmable circuitry 3712 implements the configuration determination circuitry 3120 (identified by CONFIG DETERM CIRCUITRY), the location determination circuitry 3130 (identified by LOC DETERM CIRCUITRY), the connection evaluation circuitry 3140 (identified by CXN EVALUATION CIRCUITRY), the machine learning circuitry 3150 (identified by ML CIRCUITRY), and the configuration control circuitry 3160 (identified by CONFIG CONTROL CIRCUITRY) of
The programmable circuitry 3712 of the illustrated example includes a local memory 3713 (e.g., a cache, registers, etc.). The programmable circuitry 3712 of the illustrated example is in communication with main memory 3714, 3716, which includes a volatile memory 3714 and a non-volatile memory 3716, by a bus 3718. The volatile memory 3714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory 3716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 3714, 3716 of the illustrated example is controlled by a memory controller 3717. In some examples, the memory controller 3717 may be implemented by one or more integrated circuits, logic circuits, microcontrollers from any desired family or manufacturer, or any other type of circuitry to manage the flow of data going to and from the main memory 3714, 3716.
The programmable circuitry platform 3700 of the illustrated example also includes interface circuitry 3720. The interface circuitry 3720 may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a Peripheral Component Interconnect (PCI) interface, and/or a Peripheral Component Interconnect Express (PCIe) interface.
In the illustrated example, one or more input devices 3722 are connected to the interface circuitry 3720. The input device(s) 3722 permit(s) a user (e.g., a human user, a machine user, etc.) to enter data and/or commands into the programmable circuitry 3712. The input device(s) 3722 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a trackpad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 3724 are also connected to the interface circuitry 3720 of the illustrated example. The output device(s) 3724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuitry 3720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
The interface circuitry 3720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network 3726. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a beyond-line-of-sight wireless system, a line-of-sight wireless system, a cellular telephone system, an optical connection, etc.
The programmable circuitry platform 3700 of the illustrated example also includes one or more mass storage devices 3728 to store software and/or data. In this example, the one or more mass storage devices 3728 implement the datastore 3170 of
The machine readable instructions 3732, which may be implemented by the machine readable instructions of
The programmable circuitry platform 3700 of the illustrated example of
The cores 3802 may communicate by a first example bus 3804. In some examples, the first bus 3804 may be implemented by a communication bus to effectuate communication associated with one(s) of the cores 3802. For example, the first bus 3804 may be implemented by at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the first bus 3804 may be implemented by any other type of computing or electrical bus. The cores 3802 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 3806. The cores 3802 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 3806. Although the cores 3802 of this example include example local memory 3820 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 3800 also includes example shared memory 3810 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 3810. The local memory 3820 of each of the cores 3802 and the shared memory 3810 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory 3714, 3716 of
Each core 3802 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 3802 includes control unit circuitry 3814, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU) 3816, a plurality of registers 3818, the local memory 3820, and a second example bus 3822. Other structures may be present. For example, each core 3802 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 3814 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 3802. The AL circuitry 3816 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 3802. The AL circuitry 3816 of some examples performs integer based operations. In other examples, the AL circuitry 3816 also performs floating-point operations. In yet other examples, the AL circuitry 3816 may include first AL circuitry that performs integer-based operations and second AL circuitry that performs floating-point operations. In some examples, the AL circuitry 3816 may be referred to as an Arithmetic Logic Unit (ALU).
The registers 3818 are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 3816 of the corresponding core 3802. For example, the registers 3818 may include vector register(s), SIMD register(s), general-purpose register(s), flag register(s), segment register(s), machine-specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 3818 may be arranged in a bank as shown in
Each core 3802 and/or, more generally, the microprocessor 3800 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 3800 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages.
The microprocessor 3800 may include and/or cooperate with one or more accelerators (e.g., acceleration circuitry, hardware accelerators, etc.). In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general-purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU, DSP and/or other programmable device can also be an accelerator. Accelerators may be on-board the microprocessor 3800, in the same chip package as the microprocessor 3800 and/or in one or more separate packages from the microprocessor 3800.
More specifically, in contrast to the microprocessor 3800 of
In the example of
In some examples, the binary file is compiled, generated, transformed, and/or otherwise output from a uniform software platform utilized to program FPGAs. For example, the uniform software platform may translate first instructions (e.g., code or a program) that correspond to one or more operations/functions in a high-level language (e.g., C, C++, Python, etc.) into second instructions that correspond to the one or more operations/functions in an HDL. In some such examples, the binary file is compiled, generated, and/or otherwise output from the uniform software platform based on the second instructions. In some examples, the FPGA circuitry 3900 of
The FPGA circuitry 3900 of
The FPGA circuitry 3900 also includes an array of example logic gate circuitry 3908, a plurality of example configurable interconnections 3910, and example storage circuitry 3912. The logic gate circuitry 3908 and the configurable interconnections 3910 are configurable to instantiate one or more operations/functions that may correspond to at least some of the machine readable instructions of
The configurable interconnections 3910 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry 3908 to program desired logic circuits.
The storage circuitry 3912 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 3912 may be implemented by registers or the like. In the illustrated example, the storage circuitry 3912 is distributed amongst the logic gate circuitry 3908 to facilitate access and increase execution speed.
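The truth-table behavior by which logic gate circuitry such as the logic gate circuitry 3908 is configured can be modeled in software as a lookup table (LUT). The sketch below is a simplified 4-input software analogue for illustration only; it does not represent any vendor's FPGA primitives or tooling.

```python
# Software model of a 4-input lookup table (LUT): a 16-bit "init" value
# encodes the truth table, and the four input bits form an index into it.
# This mirrors how programming an init value "instantiates" a logic function.

def lut4(init: int, a: int, b: int, c: int, d: int) -> int:
    index = (d << 3) | (c << 2) | (b << 1) | a
    return (init >> index) & 1

# Configuring init = 0x8000 makes the LUT behave as a 4-input AND gate:
# only index 15 (all four inputs high) selects a 1 bit.
AND4 = 0x8000
```

Changing only the init value, with no change to the surrounding "interconnect," reconfigures the same structure into a different gate, which is the essence of LUT-based programmable logic.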
The example FPGA circuitry 3900 of
Although
It should be understood that some or all of the circuitry of
In some examples, some or all of the circuitry of
In some examples, the programmable circuitry 3712 of
A block diagram illustrating an example software distribution platform 4005 to distribute software such as the example machine readable instructions 3732 of
From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed for data driven networking. Disclosed systems, methods, apparatus, and articles of manufacture collect network node environmental and multi-access usage telemetry in substantially real time based on real-world utilization. In some examples, that telemetry, along with the connection status and health of UE/gateways, is fed to AI/ML models, resulting in either a new or an existing DDN node profile with an associated DDN instance sufficient to address any network degradations. Disclosed systems, methods, apparatus, and articles of manufacture reconfigure the DDN control planes and/or DDN nodes to address constraints at the physical location of the network node.
Disclosed systems, methods, apparatus, and articles of manufacture improve the efficiency of using a computing device by achieving improved network utilization. Disclosed systems, methods, apparatus, and articles of manufacture improve the efficiency of using a computing device by moving or activating radios (e.g., 5G radios) based on environmental conditions to avoid service gaps caused by congestion or outage. Disclosed systems, methods, apparatus, and articles of manufacture are accordingly directed to one or more improvement(s) in the operation of a machine such as a computer or other electronic and/or mechanical device.
It is noted that this patent claims priority from International Patent Application Number PCT/CN2022/082979, which was filed on Mar. 25, 2022, and is hereby incorporated by reference in its entirety.
Example methods, apparatus, systems, and articles of manufacture for data driven networking are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes a method comprising obtaining telemetry data associated with an electronic device, and causing the electronic device to switch from a first communication network to a second communication network based on the telemetry data.
In Example 2, the subject matter of Example 1 can optionally include identifying wireless connection capabilities of the electronic device.
In Example 3, the subject matter of Examples 1-2 can optionally include configuring the electronic device to utilize the first communication network based on a strength and/or quality of the first communication network.
In Example 4, the subject matter of Examples 1-3 can optionally include storing a configuration of the electronic device, the configuration including an association of the electronic device and at least one of the first communication network or the second communication network.
In Example 5, the subject matter of Examples 1-4 can optionally include determining that the first communication network has at least one of a connection strength that is below a first threshold or a connection quality that is below a second threshold.
In Example 6, the subject matter of Examples 1-5 can optionally include, in response to determining that at least one of the first threshold or the second threshold is satisfied, instructing the electronic device to switch from the first communication network to the second communication network.
In Example 7, the subject matter of Examples 1-6 can optionally include that the second communication network has improved communication strength and quality with respect to the first communication network.
In Example 8, the subject matter of Examples 1-7 can optionally include that the first communication network is a fifth generation (5G) cellular network and the second communication network is a Wireless Fidelity (Wi-Fi) network.
In Example 9, the subject matter of Examples 1-8 can optionally include updating the configuration of the electronic device in response to the switch to the second communication network, the configuration to be stored in a datastore.
In Example 10, the subject matter of Examples 1-9 can optionally include determining an actual physical geographic location and/or an identifier of the electronic device based on the telemetry data.
In Example 11, the subject matter of Examples 1-10 can optionally include determining network environmental conditions associated with the electronic device.
In Example 12, the subject matter of Examples 1-11 can optionally include determining a communication signal strength and quality of one or more wireless gNodeBs in communication with the electronic device.
In Example 13, the subject matter of Examples 1-12 can optionally include determining a communication signal strength and quality of one or more wireless sNodeBs in communication with the electronic device.
In Example 14, the subject matter of Examples 1-13 can optionally include obtaining data and/or data quality of one or more sensors in communication with the electronic device.
In Example 15, the subject matter of Examples 1-14 can optionally include determining active and/or potentially active UE or gateways in communication with the electronic device.
In Example 16, the subject matter of Examples 1-15 can optionally include executing and/or instantiating a machine learning model to generate an output based on the telemetry data.
In Example 17, the subject matter of Examples 1-16 can optionally include that the output includes a recommendation or a determination for the electronic device to switch from the first to the second communication network.
In Example 18, the subject matter of Examples 1-17 can optionally include identifying a configuration of cores of multi-core processor circuitry.
In Example 19, the subject matter of Examples 1-18 can optionally include configuring ones of the cores of the multi-core processor circuitry to optimize and/or otherwise improve execution of workloads associated with the second communication network.
In Example 20, the subject matter of Examples 1-19 can optionally include outputting, with the machine learning model, a determination indicative of configuring the ones of the multi-core processor circuitry.
In Example 21, the subject matter of Examples 1-20 can optionally include that the configuring of the ones of the cores of the multi-core processor circuitry includes changing a clock frequency of the ones of the cores or a set of Instruction Set Architecture (ISA) instructions that the ones of the cores are to load.
In Example 21, the subject matter of Examples 1-20 can optionally include that the telemetry data includes a vendor identifier, an Internet Protocol (IP) address, a media access control (MAC) address, a serial number, a certificate, and/or Sounding Reference Signal (SRS) parameters associated with the electronic device.
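The threshold-driven network switch of Examples 1-9 can be sketched as follows for illustration. The threshold values, telemetry field names, and datastore shape below are hypothetical assumptions and are not prescribed by the disclosure.

```python
# Illustrative sketch: evaluate telemetry against strength and quality
# thresholds (Example 5) and, if either is violated, switch the device to
# its second network (Example 6) and persist the configuration (Examples 4
# and 9). Thresholds and field names are hypothetical.

STRENGTH_THRESHOLD = -85.0   # dBm, assumed
QUALITY_THRESHOLD = 0.6      # normalized link quality, assumed

def evaluate_and_switch(telemetry: dict, config: dict, datastore: dict) -> dict:
    weak = telemetry["signal_strength_dbm"] < STRENGTH_THRESHOLD
    poor = telemetry["link_quality"] < QUALITY_THRESHOLD
    if weak or poor:
        config = dict(config, active_network=config["second_network"])
    datastore[config["device_id"]] = config   # store updated configuration
    return config

config = {"device_id": "ue-1", "active_network": "5g", "second_network": "wifi"}
datastore = {}
updated = evaluate_and_switch(
    {"signal_strength_dbm": -92.0, "link_quality": 0.7}, config, datastore)
```

In Example 16's terms, the same decision could instead be produced as the output of a machine learning model fed with the telemetry; the hard thresholds here simply make the control flow concrete.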
Example 22 is at least one computer readable medium comprising instructions to perform the method of any of Examples 1-21.
Example 23 is edge server processor circuitry to perform the method of any of Examples 1-21.
Example 24 is edge cloud processor circuitry to perform the method of any of Examples 1-21.
Example 25 is edge node processor circuitry to perform the method of any of Examples 1-21.
Example 26 is location engine circuitry to perform the method of any of Examples 1-21.
Example 27 is an apparatus comprising processor circuitry to perform the method of any of Examples 1-21.
Example 28 is an apparatus comprising one or more edge gateways to perform the method of any of Examples 1-21.
Example 29 is an apparatus comprising one or more edge switches to perform the method of any of Examples 1-21.
Example 30 is an apparatus comprising at least one of one or more edge gateways or one or more edge switches to perform the method of any of Examples 1-21.
Example 31 is an apparatus comprising accelerator circuitry to perform the method of any of Examples 1-21.
Example 32 is an apparatus comprising one or more graphics processor units to perform the method of any of Examples 1-21.
Example 33 is an apparatus comprising one or more Artificial Intelligence processors to perform the method of any of Examples 1-21.
Example 34 is an apparatus comprising one or more machine learning processors to perform the method of any of Examples 1-21.
Example 35 is an apparatus comprising one or more neural network processors to perform the method of any of Examples 1-21.
Example 36 is an apparatus comprising one or more digital signal processors to perform the method of any of Examples 1-21.
Example 37 is an apparatus comprising one or more general purpose processors to perform the method of any of Examples 1-21.
Example 38 is an apparatus comprising network interface circuitry to perform the method of any of Examples 1-21.
Example 39 is an Infrastructure Processor Unit to perform the method of any of Examples 1-21.
Example 40 is hardware queue management circuitry to perform the method of any of Examples 1-21.
Example 41 is at least one of remote radio unit circuitry or radio access network circuitry to perform the method of any of Examples 1-21.
Example 42 is base station circuitry to perform the method of any of Examples 1-21.
Example 43 is user equipment circuitry to perform the method of any of Examples 1-21.
Example 44 is an Internet of Things device to perform the method of any of Examples 1-21.
Example 45 is a software distribution platform to distribute machine-readable instructions that, when executed by processor circuitry, cause the processor circuitry to perform the method of any of Examples 1-21.
Example 46 is edge cloud circuitry to perform the method of any of Examples 1-21.
Example 47 is distributed unit circuitry to perform the method of any of Examples 1-21.
Example 48 is control unit circuitry to perform the method of any of Examples 1-21.
Example 49 is core server circuitry to perform the method of any of Examples 1-21.
Example 50 is satellite circuitry to perform the method of any of Examples 1-21.
Example 51 is at least one of one or more GEO satellites or one or more LEO satellites to perform the method of any of Examples 1-21.
Example 52 includes an edge compute device comprising interface circuitry, machine readable instructions, and programmable circuitry to execute the machine readable instructions to configure compute resources of the edge compute device based on a first resource demand associated with a first location of the edge compute device, detect a change in location of the edge compute device to a second location, and in response to the detection of the change in location, reconfigure the compute resources of the edge compute device based on a second resource demand associated with the second location.
Example 53 includes the edge compute device of any of the previous examples, wherein the programmable circuitry is to configure network resources of the edge compute device based on a first spectrum availability associated with the first location of the edge compute device, and reconfigure the network resources of the edge compute device in response to the detection of the change in location.
Example 54 includes the edge compute device of any of the previous examples, wherein the edge compute device is a mobile edge compute device included in a network of edge compute devices, the network of edge compute devices including at least one stationary compute device.
Example 55 includes the edge compute device of any of the previous examples, wherein the programmable circuitry is to configure the compute resources of the edge compute device responsive to an input from another one of the edge compute devices, the input based on a third resource demand.
Example 56 includes the edge compute device of any of the previous examples, wherein the programmable circuitry is to reconfigure the compute resources based on an output of a machine learning model, the machine learning model to process input telemetry data, the input telemetry data including at least one of a vendor identifier, an Internet Protocol address, or a media access control address.
Example 57 includes the edge compute device of any of the previous examples, wherein the programmable circuitry is to execute a virtual machine to reconfigure the compute resources.
Example 58 includes the edge compute device of any of the previous examples, wherein the programmable circuitry is to collect telemetry data associated with the first resource demand, the telemetry data including a timestamp associated with the first resource demand, a number of compute cores assigned to the first resource demand, and network communication metrics associated with the first resource demand.
Example 59 includes the edge compute device of any of the previous examples, wherein the compute resources include a plurality of processor cores, and to reconfigure the compute resources, the programmable circuitry is to change a clock frequency of at least one of the plurality of processor cores.
Example 60 includes the edge compute device of any of the previous examples, wherein to reconfigure the compute resources based on the second resource demand, the programmable circuitry is to deactivate a first one of the plurality of processor cores, the first one of the plurality of processor cores associated with a first instruction set architecture (ISA), and activate a second one of the plurality of processor cores, the second one of the plurality of processor cores associated with a second ISA different than the first ISA.
Example 61 includes the edge compute device of any of the previous examples, wherein the programmable circuitry is to obtain telemetry data including a communication signal strength associated with an electronic device in communication with the edge compute device, and cause, based on the telemetry data, the electronic device to switch from a first communication network to a second communication network to communicate with the edge compute device.
Example 62 includes a machine readable storage medium comprising instructions to cause programmable circuitry to at least configure compute resources of an edge compute device based on a first resource demand associated with a first location of the edge compute device, detect a change in location of the edge compute device to a second location, and in response to detection of the change in location, reconfigure the compute resources of the edge compute device based on a second resource demand associated with the second location.
Example 63 includes the machine readable storage medium of any of the previous examples, wherein the instructions are to cause the programmable circuitry to configure network resources of the edge compute device based on a first spectrum availability associated with the first location of the edge compute device, and reconfigure the network resources of the edge compute device in response to the detection of the change in location.
Example 64 includes the machine readable storage medium of any of the previous examples, wherein the edge compute device is a mobile edge compute device included in a network of edge compute devices, the network of edge compute devices including at least one stationary compute device.
Example 65 includes the machine readable storage medium of any of the previous examples, wherein the instructions are to cause the programmable circuitry to configure the compute resources of the edge compute device responsive to an input from another one of the edge compute devices, the input based on a third resource demand.
Example 66 includes the machine readable storage medium of any of the previous examples, wherein the instructions are to cause the programmable circuitry to reconfigure the compute resources based on an output of a machine learning model, the machine learning model to process input telemetry data, the input telemetry data including at least one of a vendor identifier, an Internet Protocol address, or a media access control address.
Example 67 includes the machine readable storage medium of any of the previous examples, wherein the instructions are to cause the programmable circuitry to execute a virtual machine to reconfigure the compute resources.
Example 68 includes the machine readable storage medium of any of the previous examples, wherein the instructions are to cause the programmable circuitry to collect telemetry data associated with the first resource demand, the telemetry data including a timestamp associated with the first resource demand, a number of compute cores assigned to the first resource demand, and network communication metrics associated with the first resource demand.
Example 69 includes the machine readable storage medium of any of the previous examples, wherein the compute resources include a plurality of processor cores, and to reconfigure the compute resources, the instructions are to cause the programmable circuitry to change a clock frequency of at least one of the plurality of processor cores.
Example 70 includes the machine readable storage medium of any of the previous examples, wherein to reconfigure the compute resources based on the second resource demand, the instructions are to cause the programmable circuitry to deactivate a first one of the plurality of processor cores, the first one of the plurality of processor cores associated with a first instruction set architecture (ISA), and activate a second one of the plurality of processor cores, the second one of the plurality of processor cores associated with a second ISA different than the first ISA.
Example 71 includes the machine readable storage medium of any of the previous examples, wherein the instructions are to cause the programmable circuitry to obtain telemetry data including a communication signal strength associated with an electronic device in communication with the edge compute device, and cause, based on the telemetry data, the electronic device to switch from a first communication network to a second communication network to communicate with the edge compute device.
In any of the previous examples, the machine readable storage medium may be a non-transitory machine readable storage medium.
Example 72 includes a method comprising configuring, by executing an instruction with programmable circuitry, compute resources of an edge compute device based on a first resource demand associated with a first location of the edge compute device, detecting, by executing an instruction with the programmable circuitry, a change in location of the edge compute device to a second location, and reconfiguring, by executing an instruction with the programmable circuitry in response to detection of the change in location, the compute resources of the edge compute device based on a second resource demand associated with the second location.
Example 73 includes the method of any of the previous examples, further including configuring network resources of the edge compute device based on a first spectrum availability associated with the first location of the edge compute device, and reconfiguring the network resources of the edge compute device in response to the detection of the change in location.
Example 74 includes the method of any of the previous examples, wherein the edge compute device is a mobile edge compute device included in a network of edge compute devices, the network of edge compute devices including at least one stationary compute device.
Example 75 includes the method of any of the previous examples, further including configuring the compute resources of the edge compute device responsive to an input from another one of the edge compute devices, the input based on a third resource demand.
Example 76 includes the method of any of the previous examples, further including reconfiguring the compute resources based on an output of a machine learning model, the machine learning model to process input telemetry data, the input telemetry data including at least one of a vendor identifier, an Internet Protocol address, or a media access control address.
Example 77 includes the method of any of the previous examples, further including executing a virtual machine to reconfigure the compute resources.
Example 78 includes the method of any of the previous examples, further including collecting telemetry data associated with the first resource demand, the telemetry data including a timestamp associated with the first resource demand, a number of compute cores assigned to the first resource demand, and network communication metrics associated with the first resource demand.
Example 79 includes the method of any of the previous examples, wherein the compute resources include a plurality of processor cores, and the reconfiguring of the compute resources includes changing a clock frequency of at least one of the plurality of processor cores.
Example 80 includes the method of any of the previous examples, further including deactivating a first one of the plurality of processor cores, the first one of the plurality of processor cores associated with a first instruction set architecture (ISA), and activating a second one of the plurality of processor cores, the second one of the plurality of processor cores associated with a second ISA different than the first ISA.
Example 81 includes the method of any of the previous examples, further including obtaining telemetry data including a communication signal strength associated with an electronic device in communication with the edge compute device, and causing, based on the telemetry data, the electronic device to switch from a first communication network to a second communication network to communicate with the edge compute device.
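By way of illustration only, the location-driven reconfiguration recited in Example 72 (configuring compute resources for a first location, detecting a change to a second location, and reconfiguring for the second location's resource demand) might be sketched as follows. All identifiers, the demand table, and the core/frequency values are hypothetical and are not part of the claimed subject matter:

```python
# Illustrative sketch of the method of Example 72: configure compute
# resources per a location's resource demand, detect a location change,
# and reconfigure for the new location. The demand table is hypothetical.

RESOURCE_DEMAND = {
    # location -> (active processor cores, clock frequency in GHz)
    "depot": (2, 1.2),
    "downtown": (8, 2.4),
}

class EdgeComputeDevice:
    def __init__(self, location):
        self.location = location
        # Configure compute resources based on the first resource demand.
        self.active_cores, self.clock_ghz = RESOURCE_DEMAND[location]

    def detect_location_change(self, new_location):
        """Return True if the device has moved to a different location."""
        if new_location != self.location:
            self.location = new_location
            return True
        return False

    def reconfigure(self):
        """Reconfigure cores and clock per the current location's demand."""
        self.active_cores, self.clock_ghz = RESOURCE_DEMAND[self.location]

device = EdgeComputeDevice("depot")
if device.detect_location_change("downtown"):
    device.reconfigure()
print(device.active_cores, device.clock_ghz)
```

In this sketch the reconfiguration is a simple table lookup; in practice the second resource demand could instead be produced by a machine learning model processing telemetry data, as recited in Example 76.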
The following claims are hereby incorporated into this Detailed Description by this reference. Although certain example systems, apparatus, articles of manufacture, and methods have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, apparatus, articles of manufacture, and methods fairly falling within the scope of the claims of this patent.
Number | Date | Country | Kind
---|---|---|---
PCT/CN2022/082979 | Mar 2022 | WO | international
This patent claims priority to International Application No. PCT/CN2022/082979, which was filed on Mar. 25, 2022. International Patent Application No. PCT/CN2022/082979 is hereby incorporated herein by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2022/082979 | Mar 2022 | US
Child | 18189813 | | US