The present disclosure generally relates to remote analytics systems and more specifically to computing architectures providing low latency analytics and control of devices via edge nodes using edge communication links.
As network technologies continue to advance, both in terms of accessibility and connectivity, the utilization of networks has also expanded. As an example, mobile devices (e.g., cellular communication devices, tablet computing devices, or other handheld electronic devices) were initially limited to certain types of networks (e.g., cellular voice networks), but as cellular communication networks advanced, the capabilities of mobile devices also expanded to include data applications and other functionality. The expanded capabilities of cellular communication networks have become widely available in recent years in certain developed countries and continue to expand in other regions of the world, creating new ways to utilize various types of network technologies.
Despite the increases in data rates provided by cellular communication and traditional data networks (e.g., broadband, fiber, and Wi-Fi networks), computing resources remain a limiting factor with respect to certain types of processing and functionality. For example, despite increases in computing hardware capabilities, edge computing devices typically remain limited with respect to computing resources (e.g., processor computational capabilities, memory, etc.) as compared to traditional types of computing devices (e.g., servers, personal computing devices, laptop computing devices, and the like). As a result of the computing resource limitations of edge computing devices, edge computing functionality has remained limited, resulting in the use of more centralized, non-edge computing devices for many applications. While such computing devices and setups have benefited from the increases to speed and connectivity of existing networks, certain types of applications and functionality (e.g., computer vision-based applications) remain unacceptably slow due to latency and other factors associated with use of traditional network technologies despite the availability of powerful computing hardware.
The present disclosure provides a computing architecture that enables computer vision and other analytical techniques to be provided in a manner that provides for low latency/rapid response by leveraging edge computing devices. In an aspect, sensor devices (e.g., cameras, temperature sensors, motion sensors, etc.) may be disposed in an environment and may capture information that may be analyzed to evaluate a state of the environment or a state of one or more devices and/or persons within the environment. Information recorded by the sensor devices may be transmitted to an edge node using an edge communication link, such as a communication link provided over a next generation network, such as a 5th Generation (5G) communication network. The edge node may implement a computing architecture in accordance with the present disclosure that leverages multiple independent threads processing input data streams in parallel to perform analysis of the environment. The multiple independent threads may include threads executed by a central processing unit (CPU) of the edge node, such as to perform data reception and initial processing of the input data to prepare the input data streams for analysis via one or more machine learning models (e.g., computer vision models). Additionally, the multiple independent threads may include threads executed by a graphics processing unit (GPU) for evaluating model input data (i.e., the results of the pre-processing of the input data) against the one or more machine learning models. The one or more machine learning models may be configured to analyze the model input data according to one or more specific use cases (e.g., to determine whether a worker is wearing appropriate safety equipment or is operating machinery in an appropriate manner), and may generate model outputs for further analysis.
The model outputs may be evaluated using additional independent threads of the CPU and control logic configured to generate control data and outcome data. The control data may be used by one or more threads of a message broker service executing on the CPU to generate command messages for controlling remote devices or notifying users of situations within an environment (e.g., to slow or turn off a remote device or warn a user of unsafe conditions). The data utilized by the various analytics processes may be maintained locally at the edge node in cache memory to facilitate rapid access to the relevant data, and longer-term storage may be used to store analytics data for a period of time. The relevant data stored in the longer-term storage of the edge node may be used to present information in a graphical user interface and may be periodically transferred to an external system (e.g., a central server or other non-edge computing device).
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
For a more complete understanding of the disclosed methods and apparatuses, reference should be made to the embodiments illustrated in greater detail in the accompanying drawings, wherein:
It should be understood that the drawings are not necessarily to scale and that the disclosed embodiments are sometimes illustrated diagrammatically and in partial views. In certain instances, details which are not necessary for an understanding of the disclosed methods and apparatuses or which render other details difficult to perceive may have been omitted. It should be understood, of course, that this disclosure is not limited to the particular embodiments illustrated herein.
Embodiments of the present disclosure provide a computing architecture that facilitates rapid analysis and control of an environment via edge computing nodes. Input data streams may be received at an edge node and prepared for processing by one or more machine learning models. The machine learning models may be trained according to different use cases to facilitate a multi-faceted and comprehensive analysis of the input data. The input data may be evaluated against the machine learning models to produce model outputs that are then evaluated using control logic to produce a set of outcomes and control data. The control data may be utilized to generate one or more command messages or control signals that may be used to provide feedback to a remote device or user regarding a state of a monitored environment or other observed conditions. To improve the throughput of the analytics process, the evaluation of the input data against the machine learning models may be performed on a separate processor from that used for other computing processes. For example, the reception of the input data (and pre-processing of the input data for use with the machine learning models) may be performed using one or more threads running on a first processor (e.g., a central processing unit (CPU)) while independent threads running on a second processor (e.g., a graphics processing unit (GPU)) may be utilized for each of the machine learning models. Additionally, independent threads running on the first processor may also be utilized to evaluate the model outputs and produce the control and outcome data, as well as to facilitate generation of command messages. As described in more detail below, the disclosed computing architecture enables computer vision-type analytics and other analytical processes to be performed via edge computing nodes in a manner that is significantly faster than existing techniques.
Referring to
The memory 114 may include read only memory (ROM) devices, random access memory (RAM) devices, one or more hard disk drives (HDDs), flash memory devices, solid state drives (SSDs), other devices configured to store data in a persistent or non-persistent state, or a combination of different memory devices. The memory 114 may store instructions 116 that, when executed by the one or more processors 112, cause the one or more processors 112 to perform the operations described in connection with the edge device 110 with reference to
The one or more communication interfaces 124 may communicatively couple the edge node 110 to remote computing devices 140, 160 via one or more networks 130. In an aspect, the edge node 110 may be communicatively coupled to the computing devices 140, 160 via wired or wireless communication links according to one or more communication protocols or standards (e.g., an Ethernet protocol, a transmission control protocol/internet protocol (TCP/IP), an institute of electrical and electronics engineers (IEEE) 802.11 protocol, an IEEE 802.16 protocol, and the like). In addition to being communicatively coupled to the computing devices 140, 160 via the one or more networks 130, the one or more communication interfaces 124 may communicatively couple the edge node 110 to one or more sensor devices, such as sensor devices 150A-150C, or monitored devices, such as device 152. The edge node 110 may be communicatively coupled to the sensor devices 150A-150C and the device(s) 152 via an edge communication link (e.g., a communication link established according to a 4th Generation (4G)/long term evolution (LTE) communication standard or a 5th Generation (5G) communication standard).
As shown in
Sensor devices 150A-150C may include cameras (e.g., video cameras, imaging cameras, thermal cameras, etc.), temperature sensors, pressure sensors, acoustic sensors (e.g., ultrasound sensors, transducers, microphones, etc.), motion sensors (e.g., accelerometers, gyroscopes, etc.), or other types of devices capable of capturing and recording information associated with the device 152. For example, device 152 may be a drill press, a saw, or other type of equipment and the sensor devices 150A-150C may monitor the state of the device 152, the environment surrounding the device 152, or other factors. The sensor devices 150A-150C may capture information that may be provided to the edge node 110 for analysis to determine whether a hazard condition is present in the vicinity of the device 152 (e.g., a user has a body part too close to the saw, etc.). The edge node 110 may evaluate the information captured by the sensor devices 150A-150C using the modelling engine 120 and may determine whether to transmit commands to the device 152 based on the evaluating. For example, where a hazardous or dangerous condition is detected, the edge services 122 may transmit a command to the device 152 to cause the device 152 to turn off or modify one or more operating parameters, thereby creating a safer environment and reducing the likelihood of an accident. Exemplary techniques for analyzing the information captured by the sensor devices 150A-150C and for exchanging commands with the device 152 via the edge services 122 are described in more detail below with reference to
In addition to leveraging edge node 110 to facilitate rapid analysis of data captured by sensor devices 150A-150C and providing feedback or commands to the device 152 (or other devices), the system 100 may also enable users to remotely monitor the status of one or more devices (e.g., one or more devices 152) and environments where the devices are operating. For example, a user may utilize computing device 140 to access one or more graphical user interfaces supported by computing device 140. The one or more graphical user interfaces may be configured to present information about the environment(s) and device(s) within the environment(s) to the user. Exemplary aspects of the types of information that may be provided to the user via the graphical user interface(s) and other functionality provided via the graphical user interfaces are described in more detail below.
As shown in
As briefly described above, the edge node 110 is configured to receive information about a monitored environment, such as information captured by the sensor devices 150A-150C. The monitored environment may include one or more devices, such as the device 152, and the edge services 122 of the edge node 110 may be configured to analyze the information received from the sensor devices 150A-150C and determine whether to issue one or more commands to devices within the monitored environment. A computing architecture of the edge node 110 may be configured to enable rapid analysis of the received information and to enable the commands to be issued, where appropriate based on the analysis, to the devices of the monitored environment in real-time or near-real-time. For example, the computing architecture of the edge node 110 may enable the information to be received from the sensor devices 150A-150C, analyzed, and commands to be issued to and received at the device 152 within a threshold period of time. In an aspect, the threshold period of time may be less than 200 milliseconds (ms). In an additional or alternative aspect, the threshold period of time may be less than 100 ms. In some aspects, the threshold period of time may be between 30 ms and 80 ms (e.g., 30-35 ms, 30-40 ms, 40-50 ms, 40-60 ms, 50-60 ms, 60-80 ms, and the like). In some aspects, the threshold period of time may be approximately 50 ms.
Referring to
As shown in
As the various types of information are captured by the capture service 210, information associated with the captured data may be stored in a cache memory 220 (e.g., a cache memory of the memory 114 of
As shown in
Each of the models 232 may be configured to evaluate input data of a particular type (e.g., image or video frame data, etc.) according to a particular use case. Moreover, the models 232 configured to analyze image data may be trained using data captured from a particular viewing angle, such as the viewing angle associated with the video frame data 212 or the viewing angle associated with the video frame data 214. Using training data captured from different viewing angles may enable the models 232 to be trained to identify relevant use case scenarios in a more accurate manner. For example, where the use case involves monitoring safety of a worker utilizing a drill press, the models 232 may be configured to evaluate whether the worker is safely operating the drill press and detect when an unsafe operating condition occurs. Information from the video frame data 212 and video frame data 214 may be captured from different angles to more effectively monitor the safety of the environment where the worker and drill press are located. For example, the viewing angle associated with the video frame data 212 may show normal/safe operation of the drill press by the worker but the viewing angle associated with the video frame data 214 may show unsafe operation of the drill press by the worker. In such a situation, the model evaluating the video frame data 212′ may determine that normal operating conditions are occurring and the model evaluating the video frame data 214′ may determine that an unsafe operating condition is occurring. It is noted that the models may not be configured to actually evaluate whether the video frame data indicates “safe” or “unsafe” operating conditions and instead may simply classify the scene depicted in the video frame data. For example, the models 232 may be configured to classify the video frame data into one of a plurality of classifications, such as drill press off, worker not present, worker's hands away from drill press, worker's hand(s) near drill press but not on handles of drill press, worker's hand(s) near drill press but on handles of drill press, etc.
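For illustration only, a minimal sketch of this kind of per-frame classification is shown below. The model artifact, label names, and input shape are assumptions for the drill press example, not details taken from the disclosure:

    # Hypothetical sketch: classifying one pre-processed video frame with one
    # of the models 232. The model file, label set, and input shape are assumed.
    import torch

    CLASSES = [
        "drill_press_off",
        "worker_not_present",
        "hands_away_from_drill_press",
        "hands_near_press_not_on_handles",
        "hands_near_press_on_handles",
    ]

    model = torch.jit.load("drill_press_classifier_m1.pt")  # assumed artifact
    model.eval()

    def classify_frame(frame: torch.Tensor) -> str:
        # `frame` is assumed to already be normalized and shaped (1, 3, H, W)
        # by the capture service's pre-processing step.
        with torch.no_grad():
            logits = model(frame)
        return CLASSES[int(logits.argmax(dim=1))]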
It is noted that the models 232 of the GPU module 230 may include models configured to perform different types of analysis, which may include different types of analysis on a same dataset. For example, a set of video frame data may be analyzed by the GPU module 230 using two different models, each model trained to identify different scenario information (e.g., a worker's hand in an unsafe position with respect to a drill press and whether the worker is wearing appropriate safety gear, such as a hard hat, gloves, eyewear, etc.). Utilizing different models to analyze a same stream of video frame data may enable the models to be maintained in a more compact manner and provide for efficient processing of video frame data in a rapid fashion as compared to trying to use a single (larger) model to evaluate all potential types of information that may be derived from a particular set of video frame data. Accordingly, it should be understood that a single set of input video frame data (or another type of data) may be analyzed using a single model or multiple models depending on the particular configuration of the GPU module 230 and the use cases being considered.
As the cached data (e.g., the video frame data 212′, 214′) is evaluated against the models 232, outputs associated with the classifications derived from analysis of the cached data may be produced. For example, evaluation of video frame data 212′ by model M1 may produce a classification {A}, evaluation of video frame data 214′ by model M2 may produce a classification {B}, and so on. The classifications output by the GPU module 230 may be stored at the cache memory 220 as classifications 222. The classifications 222 may be evaluated by control logic 240 to determine a state of the monitored environment, such as whether the drill press in the above-described scenario is being operated safely. For example, the control logic 240 may be configured with various logic parameters 242 (e.g., L1, L2, . . . , Lz) configured to evaluate the classifications 222. In the example above, the control logic parameters 242 may be applied to or used to evaluate the classifications 222 (or other outputs of the GPU module 230) to produce control data. The control data generated by control logic 240 may include different sets of data, such as a first set of data providing control information and a second set of data corresponding to analysis outcomes. In
For example, in the above example involving a drill press, the command message may be configured for delivery to the drill press or a device coupled to the drill press (e.g., a network enabled device configured to provide control functionality for the drill press) and may include command data to control operations of the drill press. For example, where the control logic 240 determines, based on application of the logic parameters 242 to the classifications 222, that the drill press is being operated in an unsafe manner, the command message 252 may include commands to slow or stop the drill press, a command to generate an auditory alert to the drill press operator, or other types of operations to address the unsafe operating conditions detected by the control logic 240. The command message 252 may be transmitted to the device by the message broker service 250 via the edge communication link. The second set of data (e.g., “{A2}{B2}{C2}”) may be stored in a database 260, which may be one of the one or more databases 118 of
In the exemplary flow shown in
To further streamline the processing flow, multi-threaded processing may be utilized. For example, each incoming data stream (e.g., the data streams associated with the video frame data 212, 214, and the temperature information 216) may be handled by processes performed by the CPU and/or the GPU via a separate thread. Utilizing different threads in the CPU and GPU enables parallel execution of various processes for different data streams and analysis, allowing multiple use cases or perspectives (e.g., different viewing angles for computer vision processes, etc.) to be considered simultaneously. Additionally, the different threads executed in parallel produce outputs that are optimized for the next step of processing, such as pre-processing the video data to a form that is appropriate for the models 232, outputting data objects (e.g., classifications, etc.) via the GPU module 230 that are suitable for handling by the CPU and the logic parameters 242, and the like. Moreover, using the cache memory 220 to share data inputs and outputs between the different threads of the CPU and GPU enables rapid data transfer between the various stages of processing.
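The following sketch illustrates one possible arrangement of the threads described above, with standard queues standing in for the cache memory 220 and stub functions standing in for pre-processing, inference, and control logic. All names here are illustrative assumptions rather than the disclosed implementation:

    import queue
    import threading

    frame_cache = queue.Queue()           # stands in for cache memory 220
    classification_cache = queue.Queue()  # model outputs awaiting control logic

    def preprocess(frame):
        return frame                      # stub for resize/normalize steps

    def run_model(model, frame):
        return (model, "classification")  # stub for GPU inference

    def handle(classification):
        print(classification)             # stub for control logic / messaging

    def capture_worker(stream):
        # CPU thread: ingest one input data stream and pre-process it.
        for frame in stream:
            frame_cache.put(preprocess(frame))

    def model_worker(model):
        # Inference thread: evaluate cached frames against one model.
        while True:
            classification_cache.put(run_model(model, frame_cache.get()))

    def control_worker():
        # CPU thread: apply control logic to the cached classifications.
        while True:
            handle(classification_cache.get())

    input_streams = [["frame1", "frame2"]]   # stand-ins for the data streams
    models = ["M1", "M2"]                    # stand-ins for the models 232

    for stream in input_streams:             # one capture thread per stream
        threading.Thread(target=capture_worker, args=(stream,), daemon=True).start()
    for m in models:                         # one inference thread per model
        threading.Thread(target=model_worker, args=(m,), daemon=True).start()
    threading.Thread(target=control_worker, daemon=True).start()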
In addition to performance efficiencies provided by the computing architecture 200 described above, which enables edge nodes in accordance with the present disclosure to achieve low-latency control and messaging workflows, the computing architecture 200 of the present disclosure also leverages additional techniques to reduce latency and improve the flow and processing of data. For example, prioritization techniques may be utilized to allocate computing resources of the edge node 110 to workflows and processes in a manner that ensures sufficient computing resources (e.g., the CPU, GPU, cache memory, etc.) are allocated to critical workflows and capabilities so that those processes are not starved for computing resources by non-critical workflows and capabilities. To illustrate, 3 priority levels may be used, such as high, medium, and low. The high priority level may be associated with critical (e.g., in terms of latency or information) workflows and capabilities, such as data ingestion and model object detection and classification. The low priority level may be associated with workflows and capabilities that do not require or mandate real-time “ultra-low latency” operation, and the medium priority level may be associated with workflows and capabilities being used to process important workflows that do not require a lot of processing time (e.g., important micro tasks) and/or do not retain or hold control of computing resources for a relatively long time (e.g., seconds, minutes, etc.).
As an example of applying the different priority levels described above, the high priority level may be utilized for workflows and capabilities involving ingestion and conditioning of data for analysis by the models and evaluating the conditioned data using the models, as well as allocation of resources in the cache memory for storing data generated and/or used by those processes. The medium priority level may be applied to workflows and capabilities associated with the control logic 240, which may provide time sensitive functionality, such as determining whether to enable or disable devices (e.g., machinery, equipment, etc.) or other control functionality based on analysis of classifications output by the models 232. It is noted that while the ability to control devices based on analysis of the control logic 240 may be time sensitive in certain ways, such as turning off a saw or drill press if requirements for worker safety are not met, as may be determined by the control logic 240, using the medium priority for such tasks may be sufficient since evaluating the classifications output by the models may be performed quickly relative to the computational requirements and time requirements for ingesting, pre-processing, and analyzing the data streams using the models. Since the classifications resulting from the latter are inputs to the control logic 240, applying the higher priority level to the data ingestion and modelling processes ensures that the information relied on by the (medium priority) processes of the control logic 240 is up-to-date or real-time data. Furthermore, when the control logic 240 makes a decision, such as to enable a piece of equipment or machinery when a worker is wearing all safety gear or to disable the piece of equipment when the worker is not wearing all required safety gear, it is not critical that the control logic 240 make additional decisions in real-time and a few ms (e.g., 5-10 ms) may be sufficient to ensure that the control signals are provided to enable/disable the piece of equipment (e.g., because the worker is not likely to be able to remove a piece of safety equipment in such a small time frame). The low priority level may be applied to non-critical tasks, such as storing the control data and/or analysis outcomes in a database.
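One simple way to realize such a scheme, sketched here under the assumption that work items are dispatched from a shared queue, is to tag each task with a numeric priority so that ingestion and model-evaluation tasks always dequeue ahead of control-logic tasks, which in turn dequeue ahead of storage tasks:

    # Minimal sketch of three-level task prioritization; the task names and
    # the queue-based dispatch model are illustrative assumptions.
    import queue

    HIGH, MEDIUM, LOW = 0, 1, 2   # lower value dequeues first
    tasks = queue.PriorityQueue()
    seq = 0                       # tie-breaker keeps FIFO order within a level

    def submit(priority, name):
        global seq
        tasks.put((priority, seq, name))
        seq += 1

    submit(LOW, "store_outcomes_in_database")
    submit(MEDIUM, "apply_control_logic_to_classifications")
    submit(HIGH, "ingest_and_preprocess_frame")
    submit(HIGH, "evaluate_frame_against_models")

    while not tasks.empty():
        _, _, name = tasks.get()
        print(name)  # high-priority tasks run first, then medium, then low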
It is also noted that while in the description above high priority levels are allocated to functionality of the capture service 210, the cache memory 220, and the GPU module 230, medium priority levels are associated with the functionality of the control logic 240, and low priority levels are associated with the message broker service 250 and the database(s) 260, such priority level assignments have been provided by way of illustration, rather than by way of limitation. For example, certain input data streams and processing, as well as the models that analyze those data streams, may be assigned medium or low priority levels while other input data streams, processing, and associated models may be assigned the high priority level (e.g., worker safety models and associated processes may be the high priority level while models for evaluating performance of equipment may be assigned the medium or low priority level). Similarly, certain functionality provided by the control logic 240 and the message broker service 250 may be assigned the high priority level while other functionality of the control logic 240 and the message broker service 250 may be assigned low or medium priority levels (e.g., control logic for determining whether equipment should be enabled/disabled, as well as transmission of control signals to enable/disable the equipment may be assigned high or medium priority levels while other types of functionality by the control logic 240 and the message broker service 250 may be assigned low or medium priority levels).
It should be understood that the application and assignment of priority levels described above has been provided for purposes of illustration, rather than by way of limitation and that other combinations and configurations of the priority level assignments to the functionality of the edge node may be utilized. Moreover, it is noted that the priority levels may be assigned dynamically (i.e., change over time) depending on the state of the monitored environment. For example, in a worker safety use case involving machinery or equipment, models and control logic used to detect whether a worker is wearing required safety equipment may be assigned low or medium priority when a worker is not detected in the vicinity of the machinery or equipment, but may be assigned a higher priority level (e.g., high or medium) after a worker is detected in the vicinity of the machinery or equipment. Other functionality and processes of the computing architecture may similarly be assigned dynamic priority levels according to the particular use case and state of the environment or other target of the monitoring by the sensor devices, etc.
The various features described above enable the computing architecture 200 to compute, store, and share data in a rapid fashion. For example, the computing architecture 200 can complete a cycle of analysis (e.g., receive and process input data via the capture service 210, analyze the input data via the GPU module 230, evaluate the model outputs via the control logic 240, and transmit a message via the message broker service 250 that is received by the target device) within the above-described threshold period of time.
Referring back to
A user may monitor the environment where the device 152 is being operated via a graphical user interface provided by the computing device 140. For example, the graphical user interface may be configured to present information associated with monitored devices and environments. The user may select one of the devices or environments and the graphical user interface may display information associated with a current status of the selected device(s) and environment. Additionally, the graphical user interface may also display information associated with a history of the device 152 or monitored environment. For example, the history information may include information associated with historical events within the environment or associated with the device 152. The user can select events to view detailed information about the event, such as to view a clip of video content associated with the event, a time of the event, or other types of information. In some aspects, the graphical user interface may also provide functionality for recording notes associated with an event, such as to record whether an injury occurred, whether a cause of the event was resolved, or other types of information. In an aspect, the graphical user interface may present data from different data sources simultaneously. For example, a portion of the presented data may be obtained from the database(s) 118 of the edge node 110 (e.g., the database 260 of
As briefly described above, the edge services 122 may include a message broker service (e.g., the message broker service 250) that is configured to provide commands to devices, such as the device 152, based on analysis of input data provided by the sensor devices 150A-150C. The commands may include commands to change a mode of operation of the device 152, such as to slow down an operating speed of the device 152, increase the operating speed of the device 152, stop or turn off the device 152, or turn on the device 152. The commands may additionally or alternatively include other types of commands, such as commands configured to play an alarm or audible alert to notify an operator of the device 152 of a particular environmental condition (e.g., the worker is not wearing gloves, a hardhat, eye protection, etc.), display an alert on a computing device (e.g., the computing device 160), or other types of commands.
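As an illustration only (the disclosure does not specify a message format), a command message of the kind described above might carry a payload along the following lines, where every field name and value is a hypothetical example:

    # Hypothetical command message payload; the schema is assumed, not
    # taken from the disclosure.
    command_message = {
        "target_device": "device-152",
        "issued_at_ms": 1631790000000,         # timestamp for sequencing/audit
        "command": "set_operating_speed",      # e.g., slow down the device
        "parameters": {"speed_pct": 25},
        "reason": "unsafe_condition_detected"  # classification that triggered it
    }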
Referring to
At step 310, the method 300 includes receiving, via a capture service executable by a first processor, input data from one or more data sources. As described above with reference to
At step 320, the method 300 includes applying, by a modelling engine executable by a second processor, one or more machine learning models to at least a portion of the input data to produce model output data. In an aspect, the modelling engine may be the modelling engine 120 of
At step 330, the method 300 includes executing, by control logic executable by the first processor, logic parameters against the model output data to produce control data. In an aspect, the logic parameters (e.g., the logic parameters 242 of the control logic 240 of
At step 340, the method 300 includes generating, via a message broker service executable by the first processor, at least one control message based on the control data and at step 350, the method 300 includes transmitting, by the message broker service, the at least one control message to the remote device. In an aspect, the message broker service may be one of the edge services 122 of
As described above, the method 300 enables computer vision techniques to be leveraged from edge computing nodes, such as edge node 110 of
Moreover, it is to be understood that method 300 and the concepts described and illustrated with reference to
Table 1, below, highlights exemplary use cases and examples of the applications and capabilities that may be realized using the computing architectures and functionality disclosed herein. It is noted that the exemplary use cases shown in Table 1 are provided for purposes of illustration, rather than by way of limitation and that the computing architecture and processes described herein may be applied to other use cases where edge devices and computer vision or other modelling techniques and low latency processing are advantageous.
In the non-limiting and exemplary use cases shown above in Table 1, sensors and devices may be deployed in various types of environments to capture data that may be provided to one or more edge nodes, such as the edge node(s) 110 of
Referring to
As described above with reference to
To further illustrate the concepts of the system 400 described above, the edge node 110 may utilize a computing architecture in accordance with the concepts disclosed herein, such as the computing architecture 200 of
The model(s) may be used to evaluate the retrieved sensor data via a GPU module of the edge node 110 (e.g., the GPU module 230 of
The control data and analysis outcomes may be stored in the cache memory for subsequent processing by a message broker service (e.g., the message broker service 250 of
Additionally, the messages 420, 422 may also be used to store information at a remote database, such as to store information regarding the analysis outcomes (e.g., “{A2 B2 C2}”) and/or the sensor data (e.g., A1-An, B1-Bn, C1-Cn, etc., or portions thereof) at a remote database (e.g., a database maintained at the computing device 140 or the computing device 160). In some aspects, the sensor data may only be stored in the local and/or remote database when certain events occur, such as a state change with respect to the worker's safety equipment (e.g., one or more pieces of media content upon which a determination was made that the worker(s) is or is not wearing required safety equipment, a worker has been detected in the vicinity of the machinery 402, etc.). In this manner the volume of data stored at the remote or local database(s) may be minimized while retaining a record of the state of certain key features being monitored within an environment. Similarly, control data may also be stored in the database(s) based on key events, such as when the machinery 402 is enabled, disabled, slowed, etc. based on the state of workers and their safety equipment. The records stored at the database(s) may be timestamped to enable time sequencing of the data, such as to enable a piece of media content to be associated with a control signal transmitted to the controller 404, which may enable a user of the computing device 140 or the computing device 160 to review the control signals and associated media content from which the control signals were generated at a later time, such as during a safety or system audit.
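A minimal sketch of the event-gated, timestamped storage described above is shown below; the database file, schema, and event names are assumptions for illustration:

    # Sketch: persist records only on key events, timestamped so that media
    # content can later be associated with the control signals it triggered.
    import sqlite3
    import time

    db = sqlite3.connect("edge_events.db")  # assumed local database file
    db.execute("CREATE TABLE IF NOT EXISTS events (ts_ms INTEGER, kind TEXT, detail TEXT)")

    def record_event(kind, detail):
        # Called only when a state change or other key event occurs.
        db.execute("INSERT INTO events VALUES (?, ?, ?)",
                   (int(time.time() * 1000), kind, detail))
        db.commit()

    record_event("state_change", "worker detected without required gloves")
    record_event("control_signal", "disable transmitted to controller 404")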
In addition to the messages 420, 422, the message broker of the edge node 110 may also provide control signals 424 to the controller 404 to control the operational state (e.g., enable, disable, slow down, etc.) of the machinery 402 based on the analysis by the control logic 440. For example, in the above example involving a drill press, the edge node 110 may provide the control signals 424 to the controller 404 to control operations of the drill press. The control signals may be generated based on application of the logic parameters 442 of the control logic 440 to the classifications output by the model(s). The logic parameters 442 may be configured to determine whether the drill press is being operated in a safe or unsafe manner based on the outputs of the model(s), and the control signals 424 may include commands to slow or stop the drill press, a command to generate an auditory alert to the drill press operator, or other types of operations to address any unsafe operating conditions detected by the control logic 440. For example, logic parameters 442 are shown in
For example, a first set of the logic parameters 442 may be used to determine whether workers are present in the environment and that required pieces of safety equipment are being worn and a second set of the logic parameters 442 may then determine whether to generate control signals based on the outputs of the evaluation by the certain logic parameters. Exemplary pseudocode illustrating aspects of the first and second sets of logic parameters described above is shown below:
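The pseudocode block itself is not reproduced in this text; the following reconstruction is derived from the description in the next paragraph and is illustrative only:

    if worker_present({C}):
        if gloves_on({B}):
            if eye_protection_on({A}):
                if ear_protection_on({A}):
                    if helmet_on({A}):
                        control_signal = enable
                        output(control_signal)
                    else:
                        control_signal = disable
                        output(control_signal)
                else:
                    control_signal = disable
                    output(control_signal)
            else:
                control_signal = disable
                output(control_signal)
        else:
            control_signal = disable
            output(control_signal)
    else:
        control_signal = disable
        output(control_signal)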
In the exemplary pseudocode above, worker_present( ) represents a logic parameter that uses classifications {C} as an input to determine whether a worker is present in the monitored environment; gloves_on( ) represents a logic parameter that uses classifications {B} as an input to determine whether gloves are being worn; and eye_protection_on( ), ear_protection_on( ), and helmet_on( ) represent logic parameters that use classifications {A} as an input to determine whether eye protection, ear protection, and helmets are being worn. As can be appreciated from the pseudocode above, if no worker is present in the monitored environment (e.g., “if worker_present({C})” evaluates to no) the “else” statement is executed, which sets the “control_signal” variable to “disable” and outputs the “control_signal” variable (e.g., a control signal 424 is transmitted to controller 404 to disable the machinery 402). If a worker is present in the monitored environment (e.g., “if worker_present({C})” evaluates to yes), the nested “if” statements are executed to confirm that required safety equipment is being worn by the worker(s). If gloves_on( ), eye_protection_on( ), ear_protection_on( ), or helmet_on( ) evaluates to “no”, the “else” statement may be executed as described above. However, if gloves_on( ), eye_protection_on( ), ear_protection_on( ), and helmet_on( ) each evaluate to “yes” (i.e., all required safety equipment is being worn by the worker(s)), the “control_signal” variable is set to “enable” and output (e.g., a control signal 424 is transmitted to controller 404 to enable the machinery 402). In this manner, if a worker is not present or any piece of safety equipment is missing, a control signal 424 will be sent to the machinery 402 to disable operation of the machinery 402, and the machinery will only be enabled if a worker is present and all required safety equipment is detected.
To reduce the number of control signals transmitted by the edge node 110, the pseudocode could be modified to maintain state information and only send the control signal if the state of the machinery 402 is changing. For example:
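The modified pseudocode is likewise absent from this text; a reconstruction consistent with the description in the next paragraph follows, with the initial state an assumption:

    state = disabled    # assumed initial state of the machinery 402

    if worker_present({C}) and gloves_on({B}) and eye_protection_on({A}) and ear_protection_on({A}) and helmet_on({A}):
        if state = disabled:            # only signal on a state change
            state = enabled
            control_signal = enable
            output(control_signal)
    else:
        if state = enabled:             # only signal on a state change
            control_signal = disable
            state = disabled
            output(control_signal)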
Using the modified pseudocode above, which maintains state information, the state of the machinery 402 is checked and the control signals are only sent when there is a state change. For example, if a worker is present and all required safety equipment is being worn then the machinery 402 should be in the enabled state. The “if state=disabled” checks to see if the current state of the machinery 402 is disabled, and if disabled (e.g., “if state=disabled” is true), the state is set to enabled, the control_signal variable is set to enable, and the control_signal is output. Similarly, if a worker is not present or not all required safety equipment is being worn, the machinery 402 should be in the disabled state. In the “else” clause, the state is first checked to see if the machinery 402 is already in the enabled state, and if enabled, the control_signal variable is set to disable, the state variable is set to disabled, and the control_signal is transmitted to the controller 404. In this manner, the number of control signals transmitted by the edge node 110 may be reduced. It is noted that the exemplary pseudocode described above has been provided for purposes of illustration, rather than by way of limitation and that other techniques may be used to evaluate the classifications and generate control signals in accordance with the concepts disclosed herein. It is noted that the control signals 424 may be transmitted to the controller 404 by a message broker service of the edge node 110 via an edge communication link, as described above.
It is noted that while control logic 440 is shown in
As another example, suppose that the machinery 402 is intended to be operated by a worker that is not wearing gloves (e.g., to provide improved interaction with certain controls of the machinery 402 that may be impeded when the worker is wearing gloves). Suppose that the worker is operating the machinery 402 and then puts on a pair of gloves to pick up an item the worker is working on (e.g., a welded item) and reposition the item for further processing using the machinery 402 or to start working on a new item. The edge node 110 may detect that the user has put on gloves and may transmit a control signal to turn the machinery 402 off. When the worker finishes repositioning the item or has positioned the new item appropriately, the worker may then remove the gloves. The edge node 110 may detect the worker has removed the gloves and provide a control signal to the controller 404 that places the machinery 402 back in the operational state, thereby allowing the worker to continue using the machinery 402.
In addition to models for detecting whether the worker is wearing safety equipment, the models of the edge node 110 may also be configured to provide computer vision-based functionality for monitoring other aspects of worker safety. For example, the models of the edge node 110 may include models configured to detect whether the worker is using the machinery 402 in a safe manner, such as to detect whether a portion of the worker's body (e.g., hands, legs, arms, etc.) is close to one or more moving parts of the machinery 402 (e.g., a saw blade, a drill bit of a drill press, and the like). If the edge node 110 detects that the machinery 402 is being operated in an unsafe manner by the worker, the edge node 110 may provide a control signal to the controller 404 to turn off a particular portion of the machinery 402 (e.g., stop rotation or oscillation of a saw blade, etc.) or turn off the machinery 402 completely. In some aspects, the edge node 110 may provide control signals to the controller 404 that may be used to provide feedback to the worker regarding detection of unsafe operation of the machinery 402. For example, where the machinery 402 is a saw, a first control signal may be transmitted from the edge node 110 to the controller 404 to change a characteristic of the rotation or oscillation of the saw blade, such as to slow down the saw blade or to pulse the saw blade (e.g., speed up and slow down the saw blade multiple times). The changing of the characteristic of the rotation or oscillation of the saw blade may inform the worker of an unsafe operating condition, such as to indicate that the worker's hand(s) are approaching a position considered too close to the blade (e.g., once the worker's hand(s) reach the position deemed too close to the blade the saw may be turned off) or that another worker is present in the environment in the vicinity of the machinery 402.
As an additional example, the models of the edge node 110 may include a model configured to detect movement of workers in the environment where the machinery is located, and the control logic 440 may be configured to selectively turn off the machinery 402 based on detection of the worker. For example, in
In addition to monitoring worker safety, the exemplary configuration of the system 400 of
Additionally, the control logic 440 may provide a notification to the computing device 140 and/or the computing device 160 indicating the detection of a problem condition with respect to operation of the machinery 402. For example, the computing device 140 may be associated with maintenance personnel and the notification may indicate that a potential problem has been detected with respect to the machinery 402. The notification may include information associated with a predicted problem with the machinery 402, which may be predicted based on a classification of the sensor data by the one or more models. The maintenance personnel may subsequently inspect the machinery 402 to confirm the existence of a problem with the machinery 402 and make any necessary repairs. As described above, information associated with the analysis performed by the edge node 110 may also be stored in a database and presented to a user via a graphical user interface, such as a graphical user interface presented at a display device associated with the computing device 140 and/or a display device associated with the computing device 160. Presenting the information at the graphical user interface may facilitate real-time monitoring of the environment where the machinery 402 is located. The graphical user interface may also enable the user to view historic information associated with the environment where the machinery 402 is located, as described above.
In addition to utilizing the computing architectures disclosed herein to achieve low-latency control and messaging, the edge node 110 may utilize additional techniques to improve the flow and processing of data, which may further improve the low latency capabilities of the edge node 110. For example, prioritization techniques may be utilized to prioritize memory cache streams and control priority of computing and processing resources of the edge node 110. As explained above, the edge node 110 may provide functionality to support different workflows and capabilities, such as processes to condition sensor data for ingestion by the model(s), evaluating the conditioned sensor data by the model(s), evaluation of classifications generated by the model(s) by the control logic, and transmission of control signals and messages. The prioritization techniques may include multiple priority levels for different processing and data streams. For example, the priority levels may include 3 priority levels: high, medium, and low. High priority levels may be associated with critical (e.g., in terms of latency or information) workflows and capabilities, such as data ingestion and model object detection and classification. Medium priority levels may be associated with streams currently being used to process important workflows that do not require a lot of processing time (e.g., important micro tasks) and/or do not hold control of computing resources for a long time, such as applying control logic 440 to classification data to extract meaningful outcomes. Low priority levels may be associated with processes that do not require or mandate a real-time “ultra-low latency” action or processing. As a non-limiting example, the 3 priority levels may be applied in the above-described use case as follows: low priority may be assigned to processes and streams used to store data to a local and/or remote database, serve data to dashboards (e.g., provide data to GUIs or other devices via APIs, data syncs, etc.), or other tasks (e.g., workflows and processes related to analysis of sensor data related to performance of the machinery 402, which may be useful but lower priority than worker safety processes); medium priority may be assigned to processes for evaluating classification data for detection of worker safety issues; and high priority may be assigned to ingesting sensor data, pre-processing or conditioning the sensor data for analysis by the models, and evaluating the processed or conditioned data using the models. As explained above with reference to
As shown above, systems incorporating edge nodes configured in accordance with the computing architectures and techniques disclosed herein enable monitoring of environments via analysis of data streams provided by various sensors using one or more machine learning models. The machine learning models may characterize or classify events occurring within the monitored environment based on the information included in the data streams and control logic may evaluate the events occurring within the environment based on the outputs of the machine learning models to provide feedback to the monitored environment (e.g., to control operations of machinery or other devices in the monitored environment) and/or users associated with the monitored environment (e.g., workers within the environment, maintenance personnel, a supervisor, and the like). Due to the utilization of edge nodes for analysis of the data streams and the computing architectures of the present disclosure, the feedback (e.g., the messages 420, 422 and the control signals 424) may be provided in real-time or near-real-time (if desired), which may prevent injury to individuals within the environment (e.g., in a worker safety use case) and/or mitigate a likelihood of damage or failure of machinery and equipment within the environment (e.g., in a predictive maintenance and/or remote diagnostics use case).
It is noted that while
Referring to
The sensors 510-518 may be configured to monitor various portions of a production infrastructure 502. The production infrastructure 502 may include components or machinery to facilitate movement of items or products 506 in the direction shown by arrows 520, 522 (e.g., from left to right in
The model(s) of the edge node 110 and/or the control logic 540 may additionally or alternatively be configured to determine a cause of at least some of the defects identified by the edge node 110. For example, the production infrastructure 502 may involve heating and/or cooling processes and certain types of defects may be more prevalent when the heating and/or cooling processes occur too rapidly or result in temperatures that are too high or too low for current environmental conditions (e.g., ambient temperature, humidity, etc.). The sensors 510-518 may include devices that provide environmental data regarding the environment where the production infrastructure (or a portion thereof) is located, such as ambient temperature data, humidity data, temperature data associated with heating or cooling processes, temperature data associated with products moving through the production infrastructure, and the like. The environmental data may be analyzed by the model(s) and/or the control logic to predict causes of one or more types of defects. For example, if one or more of the models classify detected defects as cracks, another model and/or the control logic may evaluate the environmental data to determine whether a cooling process is occurring too rapidly or too slowly (e.g., due to a temperature of the cooling process being too cold or too hot or because a conveyor is moving the product(s) through the cooling process too slowly or too quickly). When a potential cause for the cracks is determined based on the environmental data, one or more of the messages 520, 522 may be provided to the computing devices 140, 160 to indicate the cause of identified defects. In an aspect, one or more of the messages 520, 522 transmitted by the edge node 110 may include other types of information, such as information that indicates a possible cause of the detected or predicted defects (e.g., the defect is being caused by one or more processes or functionality of the production infrastructure 502, other environmental conditions, and the like). As described above with reference to
In some aspects, control signals 524 may also be sent to one or more controller devices 504, which may be configured to control operations of the production infrastructure 502. To illustrate, a control signal 524 may be sent to a controller 504 configured to control a cooling temperature used by a cooling process of the production infrastructure 502 to modify the temperature (e.g., increase or decrease the temperature) of the cooling process. Additionally or alternatively, a control signal 524 may be provided to a controller 504 configured to control a rate or speed at which products are moved through the cooling process (e.g., to speed up or slow down the cooling process). Other types of control signals 524 may also be provided to controllers 504 of the production infrastructure to minimize further occurrences of defects detected by the edge node 110. In additional or alternative aspects, the messages 520, 522 transmitted to one or more of the computing devices 140, 160 may include recommended modifications to the operations of the production infrastructure 502 and the control signals 524 may be provided to the controller(s) 504 by the computing device(s) after review by a user, such as in response to inputs provided by the user to a graphical user interface (e.g., a dashboard or other application).
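As a sketch only, with classification names, adjustment values, and the controller interface all assumed for illustration, the defect-mitigation decisions described above might take the following form:

    # Illustrative defect-mitigation logic for the production infrastructure
    # 502; classification names, adjustment values, and the controller
    # interface are assumptions rather than the disclosed design.
    class ControllerClient:
        def send(self, command, **params):
            print(command, params)   # would transmit over the edge link

    def mitigate(classifications, controller):
        if classifications.get("defect") == "cracks":
            # Cracks correlated with over-rapid cooling in the example above.
            if classifications.get("cooling_rate") == "too_fast":
                controller.send("raise_cooling_temperature", degrees=2)
            if classifications.get("conveyor_speed") == "too_fast":
                controller.send("slow_conveyor", percent=10)

    mitigate({"defect": "cracks", "cooling_rate": "too_fast"}, ControllerClient())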
Using the system 500, users of the computing devices 140, 160 may monitor the production infrastructure 502 and receive information in real-time or near-real-time (e.g., less than 100 ms, less than 75 ms, less than 50 ms, approximately 30 ms, etc.) regarding defects or other abnormalities detected with respect to products moving through the production infrastructure 502. Moreover, the functionality provided by the edge node 110 of the system 500 may enable actions to mitigate detected defects and anomalies to be implemented automatically (e.g., via control signals provided from the edge node(s) 110 to the controller(s) 504) or recommendations regarding actions to mitigate detected defects and anomalies to be provided to the users of the computing devices 140, 160. Using such capabilities may enable the production infrastructure 502 to be controlled and operated in a more efficient manner and enable mitigation of defects or other issues to be addressed more quickly as compared to currently available production management solutions. Moreover, in some implementations the models of the edge node(s) 110 may be configured to predict the occurrence of defects or other production anomalies prior to the widespread occurrence of the defects based on information provided by one or more of the sensor devices 510-518, which may enable mitigation actions to be implemented (automatically or at the direction of the user(s)) in a pre-emptive, rather than reactive manner.
To further illustrate the concepts of the system 500 described above, the edge node 110 may utilize a computing architecture in accordance with the concepts disclosed herein, such as the computing architecture 200 of
As in the examples described above, the model(s) of the GPU module may be used to evaluate the retrieved sensor data, and one or more classifications may be output based on evaluation of the cached media content. In the example use case above where the system 500 is used to monitor the production infrastructure 502, the classifications may include classifications indicating whether defects are or are not detected, as well as other types of classifications associated with the processes of the production infrastructure 502, such as classifications associated with a speed at which products are moving through the production infrastructure 502, temperature classifications (e.g., classification of temperatures of cooling or heating processes, ambient environment temperatures, and the like), or other classifications. The classifications output by the model(s) may be stored in the cache memory and may be subsequently retrieved for analysis by control logic 540, which may be similar to the control logic 240 of
The control data and analysis outcomes may be subsequently retrieved from the cache memory for processing by a message broker service (e.g., the message broker service 250 of
Additionally, the messages 520, 522 may also be used to store information at a remote database, such as to store information regarding the analysis outcomes (e.g., “{A2 B2 C2}”) and/or the sensor data (e.g., A1-An, B1-Bn, C1-Cn, etc., or portions thereof) at a remote database (e.g., a database maintained at the computing device 140 or the computing device 160). In some aspects, the sensor data may only be stored in the local and/or remote database when certain events occur, such as to store one or more pieces of media content upon which a determination was made that a defect has occurred. In this manner, the volume of data stored at the remote or local database(s) may be minimized while retaining a record of the state of certain key features being monitored within an environment. Similarly, control data may also be stored in the database(s) based on key events, such as when defects are detected or operations of the production infrastructure 502 are outside of tolerable ranges. The records stored at the database(s) may be timestamped to enable time sequencing of the data, such as to enable a piece of sensor data to be associated with a control signal transmitted to the controller 504, which may enable a user of the computing device 140 or the computing device 160 to review the control signals and associated sensor data from which the control signals were generated at a later time, such as during a system or performance audit.
In addition to the messages 520, 522, the message broker of the edge node 110 may also provide control signals 524 to the controller 504 to control operations of the production infrastructure 502 based on the analysis by the control logic 540. To illustrate, in the above example involving monitoring the production infrastructure 502 for defects, the edge node 110 may provide the control signals 524 to the controller 504 to control operations of the production infrastructure 502. The control signals 524 may be generated based on application of the logic parameters 542 of the control logic 540 to the classifications output by the model(s). The logic parameters 542 may be configured to determine whether defects are present, whether operational parameters are within tolerable ranges, or other features related to the production infrastructure 502. For example, the logic parameters 542 are shown in
While the description of
In an aspect, sensor devices utilized by systems in accordance with the present disclosure may also be used to trigger analysis by the edge nodes. For example, in an asset tracking and warehouse management use case a sensor device (e.g., an RFID device) may detect items as they pass a certain location (e.g., an entry way to a warehouse, an aisle, a loading dock, etc.) and information associated with the detected items may be transmitted to an edge node(s). The edge node may then use media content received from other sensor devices (e.g., cameras) and models to track movement of the items to particular locations within the warehouse. Information associated with the locations of the items may then be stored in a database (e.g., a database stored at a memory of the computing device 140, the computing device 160, and/or another data storage device). It is noted that the description above where RFID devices are used as triggering events to detect movement of items has been provided by way of illustration, rather than by way of limitation and the asset tracking and warehouse management systems operating in accordance with the present disclosure may utilize different techniques to detect and track items.
Although the embodiments of the present disclosure and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
The present application claims the benefit of priority from U.S. Provisional Application No. 63/245,192 filed Sep. 16, 2021 and entitled “SYSTEMS AND METHODS FOR LOW LATENCY ANALYTICS AND CONTROL OF DEVICES VIA EDGE NODES AND NEXT GENERATION NETWORKS,” the disclosure of which is incorporated by reference herein in its entirety.