The present disclosure relates generally to implementing machine-learning on edge devices in a building management system.
The present disclosure relates generally to a building management system (BMS) that operates for a building, and automatic configuration techniques that may be utilized to configure various computing systems or equipment of a building.
Machine-learning models may utilize substantial computational resources to process data. Such machine-learning models often leverage high-performance computing components, such as graphics processing units (GPUs) or high-performance field programmable gate arrays (FPGAs). It is therefore challenging to execute such machine-learning models on edge devices, which may have limited processing capabilities or hardware.
At least one aspect of the present disclosure is directed to a building management system. The building management system can include one or more processors coupled to non-transitory memory. The building management system can receive optimization criteria to optimize a first machine-learning model for a target platform. The first machine-learning model can have one or more model parameters. The building management system can transform, based on the target platform and the optimization criteria, at least one datatype of the one or more model parameters of the first machine-learning model to generate a second machine-learning model. The building management system can determine, using a verification dataset, an accuracy of the second machine-learning model. The building management system can retrain the second machine-learning model using a training dataset responsive to the accuracy being less than a predetermined threshold.
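As an illustration only, a minimal sketch of this transform-verify-retrain loop might look as follows (the model interface, helper names, and threshold are assumptions, and numpy stands in for whatever runtime the target platform provides):

```python
import numpy as np

ACCURACY_THRESHOLD = 0.95  # hypothetical predetermined threshold

def transform_datatypes(params, target_dtype=np.float16):
    """Transform model parameter datatypes for the target platform."""
    return {name: value.astype(target_dtype) for name, value in params.items()}

def optimize_for_edge(model, verification_set, training_set):
    # Transform at least one datatype of the model parameters (e.g.,
    # float64/float32 -> float16) to generate the second model.
    model.params = transform_datatypes(model.params)
    # Determine the accuracy of the second model on the verification dataset.
    accuracy = model.evaluate(verification_set)
    # Retrain responsive to the accuracy being less than the threshold.
    if accuracy < ACCURACY_THRESHOLD:
        model.fit(training_set)
        accuracy = model.evaluate(verification_set)  # determine a second accuracy
    return model, accuracy
```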
In some implementations, the building management system can prune at least one parameter of the second machine-learning model. In some implementations, the building management system can retrain the second machine-learning model responsive to pruning the at least one parameter of the second machine-learning model. In some implementations, the building management system can select the at least one parameter of the second machine-learning model based on a layer type of a machine-learning layer including the at least one parameter.
In some implementations, the building management system can prune the at least one parameter of the second machine-learning model according to a transfer learning process. In some implementations, the building management system can modify a runtime of the second machine-learning model based on the target platform. In some implementations, at least one datatype is a 64-bit or a 32-bit floating point type, and the one or more model parameters of the second machine-learning model are transformed to include a 32-bit, 16-bit, or 8-bit floating point datatype. In some implementations, the building management system can determine, using the verification dataset, a second accuracy of the second machine-learning model responsive to retraining the second machine-learning model.
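A sketch of such a pruning pass, assuming a simple list-of-layers representation and magnitude-based selection (neither of which is mandated by the disclosure), might be:

```python
import numpy as np

PRUNABLE_LAYER_TYPES = {"dense", "conv"}  # hypothetical layer-type selection

def prune_parameters(layers, sparsity=0.5):
    """Zero the smallest-magnitude weights in layers selected by layer type."""
    for layer in layers:
        if layer["type"] not in PRUNABLE_LAYER_TYPES:
            continue
        weights = layer["weights"]
        threshold = np.quantile(np.abs(weights), sparsity)
        # Keep the largest-magnitude weights; zero the rest.
        layer["weights"] = np.where(np.abs(weights) < threshold, 0.0, weights)
    return layers
```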
At least one other aspect of the present disclosure is directed to a method. The method may be performed by one or more processors of a building management system. The method includes receiving optimization criteria to optimize a first machine-learning model for a target platform. The first machine-learning model has one or more model parameters. The method includes transforming, based on the target platform and the optimization criteria, at least one datatype of the one or more model parameters of the first machine-learning model to generate a second machine-learning model. The method includes determining, using a verification dataset, an accuracy of the second machine-learning model. The method includes retraining the second machine-learning model using a training dataset responsive to the accuracy being less than a predetermined threshold.
In some implementations, the method includes pruning at least one parameter of the second machine-learning model. In some implementations, the method includes retraining the second machine-learning model responsive to pruning the at least one parameter of the second machine-learning model. In some implementations, the method includes selecting the at least one parameter of the second machine-learning model based on a layer type of a machine-learning layer including the at least one parameter. In some implementations, the method includes pruning the at least one parameter of the second machine-learning model according to a transfer learning process.
In some implementations, the method includes modifying a runtime of the second machine-learning model based on the target platform. In some implementations, at least one datatype is a 64-bit or a 32-bit floating point type, and the one or more model parameters of the second machine-learning model are transformed to include a 32-bit, 16-bit, or 8-bit floating point datatype. In some implementations, the method includes determining, using the verification dataset, a second accuracy of the second machine-learning model responsive to retraining the second machine-learning model.
Yet another aspect of the present disclosure is directed to a non-transitory computer-readable medium with instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform one or more operations. The operations include receiving optimization criteria to optimize a first machine-learning model for a target platform. The first machine-learning model has one or more model parameters. The operations include transforming, based on the target platform and the optimization criteria, at least one datatype of the one or more model parameters of the first machine-learning model to generate a second machine-learning model. The operations include determining, using a verification dataset, an accuracy of the second machine-learning model. The operations include retraining the second machine-learning model using a training dataset responsive to the accuracy being less than a predetermined threshold.
In some implementations, the operations further include pruning at least one parameter of the second machine-learning model. In some implementations, the operations further include retraining the second machine-learning model responsive to pruning the at least one parameter of the second machine-learning model. In some implementations, the operations further include selecting the at least one parameter of the second machine-learning model based on a layer type of a machine-learning layer including the at least one parameter.
Various objects, aspects, features, and advantages of the disclosure will become more apparent and better understood by referring to the detailed description taken in conjunction with the accompanying drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.
Referring generally to the figures, systems and methods for a building management system (BMS) with an edge system are shown, according to various exemplary embodiments. The edge system may, in some embodiments, be a software service added to a network of a BMS that can run on one or multiple different nodes of the network. The software service can be made up of components, e.g., integration components, connector components, a building normalization component, software service components, endpoints, etc. The various components can be deployed on various nodes of the network to implement an edge platform that facilitates communication between a cloud or other off-premises platform and the local subsystems of the building. In some embodiments, the edge platform techniques described herein can be implemented for supporting off-premises platforms such as servers, computing clusters, computing systems located in a building other than the edge platform, or any other computing environment.
The nodes of the network could be servers, desktop computers, controllers, virtual machines, etc. In some implementations, the edge system can be deployed on multiple nodes of a network or multiple devices of a BMS with or without interfacing with a cloud or off-premises system. For example, in some implementations, the systems and methods of the present disclosure could be used to coordinate between multiple on-premises devices to perform functions of the BMS partially or wholly without interacting with a cloud or off-premises device (e.g., in a peer-to-peer manner between edge-based devices or in coordination with an on-premises server/gateway).
In some embodiments, the various components of the edge platform can be moved around various nodes of the BMS network as well as the cloud platform. The components may include software services, e.g., control applications, analytics applications, machine-learning models, artificial intelligence systems, user interface applications, etc. The software services may have requirements, e.g., a requirement that another software service be present or be in communication with the software service, a particular level of processing resource availability, a particular level of storage availability, etc. In some embodiments, the services of the edge platform can be moved around the nodes of the network based on available data, processing hardware, memory devices, etc. of the nodes. The various software services can be dynamically relocated around the nodes of the network based on the requirements for each software service. In some embodiments, an orchestrator running in a cloud platform, orchestrators distributed across the nodes of the network, and/or the software service itself can make determinations to dynamically relocate the software service around the nodes of the network and/or the cloud platform.
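As a sketch of such a placement decision, assuming a hypothetical dictionary schema for services and nodes:

```python
def place_service(service, nodes):
    """Return the first node that satisfies a service's declared requirements."""
    for node in nodes:
        if (node["free_cpu"] >= service["cpu_needed"]
                and node["free_storage"] >= service["storage_needed"]
                and all(peer in node["services"] for peer in service["required_peers"])):
            return node
    return None  # e.g., fall back to running the service in the cloud platform

# Example: a machine-learning service that requires a co-located database service.
nodes = [{"free_cpu": 2, "free_storage": 10, "services": {"database"}}]
service = {"cpu_needed": 1, "storage_needed": 4, "required_peers": ["database"]}
target = place_service(service, nodes)
```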
In some embodiments, the edge system can implement plug and play capabilities for connecting devices of a building and connecting the devices to the cloud platform. In some embodiments, the components of the edge system can automatically configure the connection for a new device. For example, when a new device is connected to the edge platform, a tagging and/or recognition process can be performed. This tagging and recognition could be performed in a first building. The result of the tagging and/or recognition may be a configuration indicating how the new device or subsystem should be connected, e.g., point mappings, point lists, communication protocols, necessary integrations, etc. The tagging and/or discovery can, in some embodiments, be performed in a cloud platform and/or twin platform, e.g., based on a digital twin. The resulting configuration can be distributed to every node of the edge system, e.g., to a building normalization component. In some embodiments, the configuration can be stored in a single system, e.g., the cloud platform, and the building normalization component can retrieve the configuration from the cloud platform.
When another device of the same type is installed in the building or another building, a building normalization component can store an indication of the configuration and/or retrieve the indication of the configuration from the cloud platform. The building normalization component can facilitate plug and play by loading and/or implementing the configuration for the device without requiring a tagging and/or discovery process. This can allow for the device to be installed and run without requiring any significant amount of setup.
In some embodiments, the building normalization component of one node may discover a device connected to the node. Responsive to detecting the new device, the building normalization component may search a device library and/or registry stored in the normalization component (or on another system) to identify a configuration for the new device. If the new device configuration is not present, the normalization component may send a broadcast to other nodes. For example, the broadcast could indicate an air handling unit (AHU) of a particular type, for a particular vendor, with particular points, etc. Other nodes could respond to the broadcast message with a configuration for the AHU. In some embodiments, a cloud platform could unify configurations for devices of multiple building sites and thus a configuration discovered at one building site could be used at another building site through the cloud platform. In some embodiments, the configurations for different devices could be stored in a digital twin. The digital twin could be used to perform auto configuration, in some embodiments.
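A minimal sketch of this lookup-then-broadcast flow, with hypothetical registry, peer, and cloud interfaces:

```python
def configure_new_device(device_type, local_registry, peer_nodes, cloud_platform):
    """Resolve a configuration for a newly discovered device."""
    # 1. Search the device library/registry stored with the normalization component.
    config = local_registry.get(device_type)
    # 2. Broadcast to other nodes (e.g., "AHU of type X with points Y") if absent.
    if config is None:
        for peer in peer_nodes:
            config = peer.lookup(device_type)
            if config is not None:
                break
    # 3. Fall back to the cloud platform, which unifies configurations across sites.
    if config is None:
        config = cloud_platform.lookup(device_type)
    return config
```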
In some embodiments, a digital twin of a building could be analyzed to identify how to configure a new device when the new device is connected to an edge device. For example, the digital twin could indicate the various points, communication protocols, functions, etc. of a device type of the new device (e.g., another instance of the device type). Based on the indication of the digital twin, a particular configuration for the new device could be deployed to the edge device that facilitates communication for the new device.
Referring now to
The building data platform 100 includes applications 110. The applications 110 can be various applications that operate to manage the building subsystems 122. The applications 110 can be remote or on-premises applications (or a hybrid of both) that run on various computing systems. The applications 110 can include an alarm application 168 configured to manage alarms for the building subsystems 122. The applications 110 include an assurance application 170 that implements assurance services for the building subsystems 122. In some embodiments, the applications 110 include an energy application 172 configured to manage the energy usage of the building subsystems 122. The applications 110 include a security application 174 configured to manage security systems of the building.
In some embodiments, the applications 110 and/or the cloud platform 106 interacts with a user device 176. In some embodiments, a component or an entire application of the applications 110 runs on the user device 176. The user device 176 may be a laptop computer, a desktop computer, a smartphone, a tablet, and/or any other device with an input interface (e.g., touch screen, mouse, keyboard, etc.) and an output interface (e.g., a speaker, a display, etc.).
The applications 110, the twin manager 108, the cloud platform 106, and the edge platform 102 can be implemented on one or more computing systems, e.g., on processors and/or memory devices. For example, the edge platform 102 includes processor(s) 118 and memories 120, the cloud platform 106 includes processor(s) 124 and memories 126, the applications 110 include processor(s) 164 and memories 166, and the twin manager 108 includes processor(s) 148 and memories 150.
The processors can be general purpose or specific purpose processors, application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), groups of processing components, or other suitable processing components. The processors may be configured to execute computer code and/or instructions stored in the memories or received from other computer readable media (e.g., CDROM, network storage, a remote server, etc.).
The memories can include one or more devices (e.g., memory units, memory devices, storage devices, etc.) for storing data and/or computer code for completing and/or facilitating the various processes described in the present disclosure. The memories can include random access memory (RAM), read-only memory (ROM), hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects and/or computer instructions. The memories can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. The memories can be communicably connected to the processors and can include computer code for executing (e.g., by the processors) one or more processes described herein.
The edge platform 102 can be configured to provide connection to the building subsystems 122. The edge platform 102 can receive messages from the building subsystems 122 and/or deliver messages to the building subsystems 122. The edge platform 102 includes one or multiple gateways, e.g., the gateways 112-116. The gateways 112-116 can act as a gateway between the cloud platform 106 and the building subsystems 122. The gateways 112-116 can be the gateways described in U.S. patent application Ser. No. 17/127,303, filed Dec. 18, 2020, the entirety of which is incorporated by reference herein. In some embodiments, the applications 110 can be deployed on the edge platform 102. In this regard, lower latency in management of the building subsystems 122 can be realized.
The edge platform 102 can be connected to the cloud platform 106 via a network 104. The network 104 can communicatively couple the devices and systems of building data platform 100. In some embodiments, the network 104 is at least one of and/or a combination of a Wi-Fi network, a wired Ethernet network, a ZigBee network, a Bluetooth network, and/or any other wireless network. The network 104 may be a local area network or a wide area network (e.g., the Internet, a building WAN, etc.) and may use a variety of communications protocols (e.g., BACnet, IP, LON, etc.). The network 104 may include routers, modems, servers, cell towers, satellites, and/or network switches. The network 104 may be a combination of wired and wireless networks.
The cloud platform 106 can be configured to facilitate communication and routing of messages between the applications 110, the twin manager 108, the edge platform 102, and/or any other system. The cloud platform 106 can include a platform manager 128, a messaging manager 140, a command processor 136, and an enrichment manager 138. In some embodiments, the cloud platform 106 can facilitate messaging among the components of the building data platform 100 via the network 104.
The messaging manager 140 can be configured to operate as a transport service that controls communication with the building subsystems 122 and/or any other system, e.g., managing commands to devices (C2D), commands to connectors (C2C) for external systems, commands from the device to the cloud (D2C), and/or notifications. The messaging manager 140 can receive different types of data from the applications 110, the twin manager 108, and/or the edge platform 102. The messaging manager 140 can receive change on value data 142, e.g., data that indicates that a value of a point has changed. The messaging manager 140 can receive timeseries data 144, e.g., a time correlated series of data entries each associated with a particular time stamp. Furthermore, the messaging manager 140 can receive command data 146. All of the messages handled by the cloud platform 106 can be handled as an event, e.g., the data 142-146 can each be packaged as an event with a data value occurring at a particular time (e.g., a temperature measurement made at a particular time).
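A minimal sketch of such an event envelope (the field names are illustrative assumptions, not a schema from the disclosure):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    kind: str            # "change_on_value", "timeseries", or "command"
    point: str           # identifier of the point the value belongs to
    value: float         # the data value
    timestamp: datetime  # the particular time at which the value occurred

event = Event("timeseries", "thermostat-1/zone-temp", 21.5,
              datetime.now(timezone.utc))
```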
The cloud platform 106 includes a command processor 136. The command processor 136 can be configured to receive commands to perform an action from the applications 110, the building subsystems 122, the user device 176, etc. The command processor 136 can manage the commands, determine whether the commanding system is authorized to perform the particular commands, and communicate the commands to the commanded system, e.g., the building subsystems 122 and/or the applications 110. The commands could be a command to change an operational setting that controls environmental conditions of a building, a command to run analytics, etc.
The cloud platform 106 includes an enrichment manager 138. The enrichment manager 138 can be configured to enrich the events received by the messaging manager 140. The enrichment manager 138 can be configured to add contextual information to the events. The enrichment manager 138 can communicate with the twin manager 108 to retrieve the contextual information. In some embodiments, the contextual information is an indication of information related to the event. For example, if the event is a timeseries temperature measurement of a thermostat, contextual information such as the location of the thermostat (e.g., what room), the equipment controlled by the thermostat (e.g., what VAV), etc. can be added to the event. In this regard, when a consuming application, e.g., one of the applications 110, receives the event, the consuming application can operate based on the data of the event, e.g., the temperature measurement, as well as the contextual information of the event.
The enrichment manager 138 can address a problem that arises when a device produces a significant amount of information containing simple data without context. An example might include the data generated when a user scans a badge at a badge scanner of the building subsystems 122. This physical event can generate an output event including such information as "DeviceBadgeScannerID," "BadgeID," and/or "Date/Time." However, if a system sends this data to consuming applications, e.g., a Consumer A and a Consumer B, each consumer may need to call the building data platform knowledge service to query information with queries such as, "What space, building, floor is that badge scanner in?" or "What user is associated with that badge?"
By performing enrichment on the data feed, a system can perform inferences on the data. A result of the enrichment may be transformation of the message "DeviceBadgeScannerId, BadgeId, Date/Time" to "Region, Building, Floor, Asset, DeviceId, BadgeId, UserName, EmployeeId, Date/Time Scanned." This can be a significant optimization: because the context is attached once at enrichment time instead of being queried separately by each consumer, the number of knowledge-service calls can be reduced by a factor of n, where n is the number of consumers of this data feed.
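A sketch of the enrichment step under these assumptions (a dictionary standing in for the knowledge service):

```python
def enrich(event, knowledge_service):
    """Attach contextual fields once, so each consumer need not query separately."""
    context = knowledge_service.get(event["DeviceBadgeScannerId"], {})
    # context might supply Region, Building, Floor, Asset, UserName, EmployeeId
    return {**event, **context}
```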
By using this enrichment, a system can also have the ability to filter out undesired events. If there are 100 buildings on a campus that receive 100,000 events per building each hour, but only 1 building is actually commissioned, only 1/100 of the events are enriched. By looking at which events are enriched and which events are not, a system can shape the traffic of forwarded events to reduce the cost of forwarding events that no consuming application wants or reads.
An example of an event received by the enrichment manager 138 may be:
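(The payload below is a hypothetical reconstruction from the badge-scanner fields named above; the disclosure does not fix an exact schema.)

```python
raw_event = {
    "DeviceBadgeScannerId": "scanner-001",
    "BadgeId": "badge-12345",
    "Date/Time": "2021-01-01T08:02:11Z",
}
```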
An example of an enriched event generated by the enrichment manager 138 may be:
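(Again a hypothetical reconstruction, following the enriched field list given above.)

```python
enriched_event = {
    "Region": "North", "Building": "HQ", "Floor": "3", "Asset": "Lobby Door",
    "DeviceId": "scanner-001", "BadgeId": "badge-12345",
    "UserName": "B. Smith", "EmployeeId": "E-789",
    "Date/Time Scanned": "2021-01-01T08:02:11Z",
}
```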
By receiving enriched events, an application of the applications 110 can populate and/or filter which events are associated with which areas. Furthermore, user interface generating applications can generate user interfaces that include the contextual information based on the enriched events.
The cloud platform 106 includes a platform manager 128. The platform manager 128 can be configured to manage the users and/or subscriptions of the cloud platform 106, e.g., track which subscribing buildings, users, and/or tenants utilize the cloud platform 106. The platform manager 128 includes a provisioning service 130 configured to provision the cloud platform 106, the edge platform 102, and the twin manager 108. The platform manager 128 includes a subscription service 132 configured to manage a subscription of the building, user, and/or tenant and an entitlement service 134 configured to track entitlements of the buildings, users, and/or tenants.
The twin manager 108 can be configured to manage and maintain a digital twin. The digital twin can be a digital representation of the physical environment, e.g., a building. The twin manager 108 can include a change feed generator 152, a schema and ontology 154, a projection manager 156, a policy manager 158, an entity, relationship, and event database 160, and a graph projection database 162.
The graph projection manager 156 can be configured to construct graph projections and store the graph projections in the graph projection database 162. Entities, relationships, and events can be stored in the database 160. The graph projection manager 156 can retrieve entities, relationships, and/or events from the database 160 and construct a graph projection based on the retrieved entities, relationships and/or events. In some embodiments, the database 160 includes an entity-relationship collection for multiple subscriptions.
In some embodiments, the graph projection manager 156 generates a graph projection for a particular user, application, subscription, and/or system. In this regard, the graph projection can be generated based on policies for the particular user, application, and/or system in addition to an ontology specific for that user, application, and/or system. In this regard, an entity could request a graph projection and the graph projection manager 156 can be configured to generate the graph projection for the entity based on policies and an ontology specific to the entity. The policies can indicate what entities, relationships, and/or events the entity has access to. The ontology can indicate what types of relationships between entities the requesting entity expects to see, e.g., floors within a building, devices within a floor, etc. Another requesting entity may have an ontology to see devices within a building and applications for the devices within the graph.
The graph projections generated by the graph projection manager 156 and stored in the graph projection database 162 can be knowledge graphs and can serve as integration points. For example, the graph projections can represent floor plans and systems associated with each floor. Furthermore, the graph projections can include events, e.g., telemetry data of the building subsystems 122. The graph projections can show application services as nodes and API calls between the services as edges in the graph. The graph projections can illustrate the capabilities of spaces, users, and/or devices. The graph projections can include indications of the building subsystems 122, e.g., thermostats, cameras, VAVs, etc. The graph projection database 162 can store graph projections that maintain a current state of a building.
The graph projections of the graph projection database 162 can be digital twins of a building. Digital twins can be digital replicas of physical entities that enable an in-depth analysis of data of the physical entities and provide the potential to monitor systems to mitigate risks, manage issues, and utilize simulations to test future solutions. Digital twins can play an important role in helping technicians find the root cause of issues and solve problems faster, in supporting safety and security protocols, and in supporting building managers in more efficient use of energy and other facilities resources. Digital twins can be used to enable and unify security systems, employee experience, facilities management, sustainability, etc.
In some embodiments, the enrichment manager 138 can use a graph projection of the graph projection database 162 to enrich events. In some embodiments, the enrichment manager 138 can identify nodes and relationships that are associated with, and are pertinent to, the device that generated the event. For example, the enrichment manager 138 could identify a thermostat generating a temperature measurement event within the graph. The enrichment manager 138 can identify relationships between the thermostat and spaces, e.g., a zone that the thermostat is located in. The enrichment manager 138 can add an indication of the zone to the event.
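A minimal sketch of this graph-based enrichment, assuming a simple triple representation of the projection:

```python
def zone_of(device_id, edges):
    """Walk the projection's edges to find the zone a device is located in."""
    for source, relation, target in edges:
        if source == device_id and relation == "located_in":
            return target
    return None

# Hypothetical slice of a graph projection and an event to enrich.
edges = [("thermostat-1", "located_in", "zone-2")]
event = {"device": "thermostat-1", "value": 21.5}
event["zone"] = zone_of(event["device"], edges)  # add an indication of the zone
```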
Furthermore, the command processor 136 can be configured to utilize the graph projections to command the building subsystems 122. The command processor 136 can identify a policy for a commanding entity within the graph projection to determine whether the commanding entity has the ability to make the command. For example, the command processor 136, before allowing a user to make a command, can determine, based on the graph projection database 162, that the user has a policy permitting the command.
In some embodiments, the policies can be condition-based policies. For example, the building data platform 100 can apply one or more conditional rules to determine whether a particular system has the ability to perform an action. In some embodiments, the rules analyze a behavior-based biometric. For example, a behavior-based biometric can indicate normal behavior and/or normal behavior rules for a system. In some embodiments, when the building data platform 100 determines, based on the one or more conditional rules, that an action requested by a system does not match a normal behavior, the building data platform 100 can deny the system the ability to perform the action and/or request approval from a higher level system.
For example, a behavior rule could indicate that a user has access to log into a system from a particular IP address between 8 A.M. and 5 P.M. However, if the user logs in to the system at 7 P.M., the building data platform 100 may contact an administrator to determine whether to give the user permission to log in.
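A sketch of such a rule check (the IP address and time window are illustrative assumptions):

```python
def login_permitted(hour_of_day, ip_address):
    """Hypothetical behavior rule: a known IP address during 8 A.M.-5 P.M."""
    return ip_address == "10.0.0.17" and 8 <= hour_of_day < 17

# A 7 P.M. login fails the rule, so the platform escalates for approval.
if not login_permitted(19, "10.0.0.17"):
    request_admin_approval = True
```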
The change feed generator 152 can be configured to generate a feed of events that indicate changes to the digital twin, e.g., to the graph. The change feed generator 152 can track changes to the entities, relationships, and/or events of the graph. For example, the change feed generator 152 can detect an addition, deletion, and/or modification of a node or edge of the graph, e.g., changing the entities, relationships, and/or events within the database 160. In response to detecting a change to the graph, the change feed generator 152 can generate an event summarizing the change. The event can indicate what nodes and/or edges have changed and how the nodes and edges have changed. The events can be posted to a topic by the change feed generator 152.
The change feed generator 152 can implement a change feed of a knowledge graph. The building data platform 100 can implement a subscription to changes in the knowledge graph. When the change feed generator 152 posts events in the change feed, subscribing systems or applications can receive the change feed event. By generating a record of all changes that have happened, a system can stage data in different ways, and then replay the data back in whatever order the system wishes. This can include running the changes sequentially one by one and/or by jumping from one major change to the next. For example, to generate a graph at a particular time, all change feed events up to the particular time can be used to construct the graph.
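A sketch of such a replay, assuming a hypothetical shape for change feed events:

```python
def replay(change_feed, until):
    """Reconstruct the graph at a time slice by replaying change events in order."""
    graph = {"nodes": {}, "edges": set()}
    for event in change_feed:
        if event["time"] > until:
            break  # change feed events are assumed ordered by time
        if event["op"] == "add_node":
            graph["nodes"][event["id"]] = event["attrs"]
        elif event["op"] == "delete_node":
            graph["nodes"].pop(event["id"], None)
        elif event["op"] == "add_edge":
            graph["edges"].add(event["edge"])
        elif event["op"] == "delete_edge":
            graph["edges"].discard(event["edge"])
    return graph
```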
The change feed can track the changes in each node in the graph and the relationships related to them, in some embodiments. If a user wants to subscribe to these changes and the user has proper access, the user can simply submit a web API call to have sequential notifications of each change that happens in the graph. A user and/or system can replay the changes one by one to reinstitute the graph at any given time slice. Even though the messages are “thin” and only include notification of change and the reference “id/seq id,” the change feed can keep a copy of every state of each node and/or relationship so that a user and/or system can retrieve those past states at any time for each node. Furthermore, a consumer of the change feed could also create dynamic “views” allowing different “snapshots” in time of what the graph looks like from a particular context. While the twin manager 108 may contain the history and the current state of the graph based upon schema evaluation, a consumer can retain a copy of that data, and thereby create dynamic views using the change feed.
The schema and ontology 154 can define the message schema and graph ontology of the twin manager 108. The message schema can define what format messages received by the messaging manager 140 should have, e.g., what parameters, what formats, etc. The ontology can define graph projections, e.g., the ontology that a user wishes to view. For example, various systems, applications, and/or users can be associated with a graph ontology. Accordingly, when the graph projection manager 156 generates a graph projection for a user, system, or subscription, the graph projection manager 156 can generate the graph projection according to the ontology specific to that user. For example, the ontology can define what types of entities are related in what order in a graph. For the ontology for a subscription of "Customer A," the graph projection manager 156 can create relationships for a graph projection based on the rule:
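(The source omits the literal rule; the line below is a hypothetical rule patterned on the floors-within-a-building, devices-within-a-floor ontology described above.)

```python
RULE_CUSTOMER_A = "Building hasFloor Floor; Floor hasDevice Device"
```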
For the ontology of a subscription of “Customer B,” the graph projection manager 156 can create relationships based on the rule:
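(Again hypothetical, patterned on the devices-within-a-building, applications-for-the-devices ontology mentioned above.)

```python
RULE_CUSTOMER_B = "Building hasDevice Device; Device hasApplication Application"
```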
The policy manager 158 can be configured to respond to requests from other applications and/or systems for policies. The policy manager 158 can consult a graph projection to determine what permissions different applications, users, and/or devices have. The graph projection can indicate various permissions that different types of entities have and the policy manager 158 can search the graph projection to identify the permissions of a particular entity. The policy manager 158 can facilitate fine-grained access control with user permissions. The policy manager 158 can apply permissions across a graph, e.g., if a user can view all data associated with floor 1, then the user sees all subsystem data for that floor, e.g., surveillance cameras, HVAC devices, fire detection and response devices, etc.
The twin manager 108 includes a query manager 165 and a twin function manager 167. The query manager 165 can be configured to handle queries received from a requesting system, e.g., the user device 176, the applications 110, and/or any other system. The query manager 165 can receive queries that include query parameters and context. The query manager 165 can query the graph projection database 162 with the query parameters to retrieve a result. The query manager 165 can then cause an event processor, e.g., a twin function, to operate based on the result and the context. In some embodiments, the query manager 165 can select the twin function based on the context and/or perform operations based on the context.
The twin function manager 167 can be configured to manage the execution of twin functions. The twin function manager 167 can receive an indication of a context query that identifies a particular data element and/or pattern in the graph projection database 162. Responsive to the particular data element and/or pattern occurring in the graph projection database 162 (e.g., based on a new data event added to the graph projection database 162 and/or a change to nodes or edges of the graph projection database 162), the twin function manager 167 can cause a particular twin function to execute. The twin function can execute based on an event, context, and/or rules. The event can be data that the twin function executes against. The context can be information that provides a contextual description of the data, e.g., what device the event is associated with, what control point should be updated based on the event, etc. The twin function manager 167 can be configured to perform the operations of the
Referring now to
The graph projection 200 includes a device hub 202 which may represent a software service that facilitates the communication of data and commands between the cloud platform 106 and a device of the building subsystems 122, e.g., door actuator 214. The device hub 202 is related to a connector 204, an external system 206, and a digital asset “Door Actuator” 208 by edge 250, edge 252, and edge 254.
The cloud platform 106 can be configured to identify the device hub 202, the connector 204, the external system 206 related to the door actuator 214 by searching the graph projection 200 and identifying the edges 250-254 and edge 258. The graph projection 200 includes a digital representation of the “Door Actuator,” node 208. The digital asset “Door Actuator” 208 includes a “DeviceNameSpace” represented by node 207 and related to the digital asset “Door Actuator” 208 by the “Property of Object” edge 256.
The “Door Actuator” 214 has points and timeseries. The “Door Actuator” 214 is related to “Point A” 216 by a “has_a” edge 260. The “Door Actuator” 214 is related to “Point B” 218 by a “has_A” edge 258. Furthermore, timeseries associated with the points A and B are represented by nodes “TS” 220 and “TS” 222. The timeseries are related to the points A and B by “has_a” edge 264 and “has_a” edge 262. The timeseries “TS” 220 has particular samples, sample 210 and 212 each related to “TS” 220 with edges 268 and 266 respectively. Each sample includes a time and a value. Each sample may be an event received from the door actuator that the cloud platform 106 ingests into the entity, relationship, and event database 160, e.g., ingests into the graph projection 200.
The graph projection 200 includes a building 234 representing a physical building. The building includes a floor represented by floor 232 related to the building 234 by the "has_a" edge from the building 234 to the floor 232. The floor has a space indicated by the "has_a" edge 270 between the floor 232 and the space 230. The space has particular capabilities, e.g., is a room that can be booked for a meeting, conference, private study time, etc. Furthermore, the booking can be canceled. The capabilities for the space 230 are represented by capabilities 228 related to space 230 by edge 280. The capabilities 228 are related to two different commands, command "book room" 224 and command "cancel booking" 226, related to capabilities 228 by edge 284 and edge 282 respectively.
If the cloud platform 106 receives a command to book the space represented by the node, space 230, the cloud platform 106 can search the graph projection 200 for the capabilities 228 related to the space 230 to determine whether the cloud platform 106 can book the room.
In some embodiments, the cloud platform 106 could receive a request to book a room in a particular building, e.g., the building 234. The cloud platform 106 could search the graph projection 200 to identify spaces that have the capabilities to be booked, e.g., identify the space 230 based on the capabilities 228 related to the space 230. The cloud platform 106 can reply to the request with an indication of the space and allow the requesting entity to book the space 230.
The graph projection 200 includes a policy 236 for the floor 232. The policy 236 is set for the floor 232 based on a "To Floor" edge 274 between the policy 236 and the floor 232. The policy 236 is related to different roles for the floor 232, read events 238 via edge 276 and send command 240 via edge 278. The policy 236 is set for the entity 203 based on a "has" edge 251 between the entity 203 and the policy 236.
The twin manager 108 can identify policies for particular entities, e.g., users, software applications, systems, devices, etc. based on the policy 236. For example, if the cloud platform 106 receives a command to book the space 230, the cloud platform 106 can communicate with the twin manager 108 to verify that the entity requesting to book the space 230 has a policy to book the space. The twin manager 108 can identify the entity requesting to book the space as the entity 203 by searching the graph projection 200. Furthermore, the twin manager 108 can further identify the "has" edge 251 between the entity 203 and the policy 236 and the edge between the policy 236 and the command 240.
Furthermore, the twin manager 108 can identify that the entity 203 has the ability to command the space 230 based on the edge between the policy 236 and the floor 232 and the edge 270 between the floor 232 and the space 230. In response to identifying that the entity 203 has the ability to book the space 230, the twin manager 108 can provide an indication to the cloud platform 106.
Furthermore, if the entity makes a request to read events for the space 230, e.g., the sample 210 and the sample 212, the twin manager 108 can identify the "has" edge 251 between the entity 203 and the policy 236, the edge between the policy 236 and the read events 238, the edge between the policy 236 and the floor 232, the "has_a" edge 270 between the floor 232 and the space 230, the edge 268 between the space 230 and the door actuator 214, the edge 260 between the door actuator 214 and the point A 216, the "has_a" edge 264 between the point A 216 and the TS 220, and the edges 268 and 266 between the TS 220 and the samples 210 and 212 respectively.
Referring now to
The connection broker 353 is related to an agent that optimizes a space 356 via edge 398b. The agent represented by the node 356 can book and cancel bookings for the space represented by the node 230 based on the edge 398b between the connection broker 353 and the node 356 and the edge 398a between the capabilities 228 and the connection broker 353.
The connection broker 353 is related to a cluster 308 by edge 398c. Cluster 308 is related to connector B 302 via edge 398e and connector A 306 via edge 398d. The connector A 306 is related to an external subscription service 304. A connection broker 310 is related to cluster 308 via an edge 311 representing a rest call that the connection broker represented by node 310 can make to the cluster represented by cluster 308.
The connection broker 310 is related to a virtual meeting platform 312 by an edge 354. The node 312 represents an external system that represents a virtual meeting platform. The connection broker represented by node 310 can represent a software component that facilitates a connection between the cloud platform 106 and the virtual meeting platform represented by node 312. When the cloud platform 106 needs to communicate with the virtual meeting platform represented by the node 312, the cloud platform 106 can identify the edge 354 between the connection broker 310 and the virtual meeting platform 312 and select the connection broker represented by the node 310 to facilitate communication with the virtual meeting platform represented by the node 312.
A capabilities node 318 can be connected to the connection broker 310 via edge 360. The capabilities 318 can be capabilities of the virtual meeting platform represented by the node 312 and can be related to the node 312 through the edge 360 to the connection broker 310 and the edge 354 between the connection broker 310 and the node 312. The capabilities 318 can define capabilities of the virtual meeting platform represented by the node 312. The node 320 is related to capabilities 318 via edge 362. The capabilities may be an invite bob command represented by node 316 and an email bob command represented by node 314. The capabilities 318 can be linked to a node 320 representing a user, Bob. The cloud platform 106 can facilitate email commands to send emails to the user Bob via the email service represented by the node 304. The node 304 is related to the connector A node 306 via edge 398f. Furthermore, the cloud platform 106 can facilitate sending an invite for a virtual meeting via the virtual meeting platform represented by the node 312 linked to the node 318 via the edge 358.
The node 320 for the user Bob can be associated with the policy 236 via the "has" edge 364. Furthermore, the node 320 can have a "check policy" edge 366 with a portal node 324. The device API node 328 has a "check policy" edge 370 to the policy node 236. The portal node 324 has an edge 368 to the policy node 236. The portal node 324 is related to a node 326 representing a user input manager (UIM) via an edge 323. The UIM node 326 has an edge 323 to a device API node 328. The UIM node 326 is related to the door actuator node 214 via edge 372. The door actuator node 214 has an edge 374 to the device API node 328. The door actuator 214 has an edge 335 to the connector virtual object 334. The device hub 332 is related to the connector virtual object via edge 380. The device API node 328 can be an API for the door actuator 214. The connector virtual object 334 is related to the device API node 328 via the edge 331.
The device API node 328 is related to a transport connection broker 330 via an edge 329. The transport connection broker 330 is related to a device hub 332 via an edge 378. The device hub represented by node 332 can be a software component that handles the communication of data and commands for the door actuator 214. The cloud platform 106 can identify where to store data within the graph projection 300 received from the door actuator by identifying the nodes and edges between the points 216 and 218 and the device hub node 332. Similarly, the cloud platform 106 can identify commands for the door actuator that can be facilitated by the device hub represented by the node 332, e.g., by identifying edges between the device hub node 332 and an open door node 352 and a lock door node 350. The door actuator 214 has an edge "has mapped an asset" 280 between the node 214 and a capabilities node 348. The capabilities node 348 and the nodes 352 and 350 are linked by edges 396 and 394.
The device hub 332 is linked to a cluster 336 via an edge 384. The cluster 336 is linked to connector A 340 and connector B 338 by edges 386 and 389. The connector A 340 and the connector B 338 are linked to an external system 344 via edges 388 and 390. The external system 344 is linked to a door actuator 342 via an edge 392.
Referring now to
A building node 404 represents a particular building that includes two floors. A floor 1 node 402 is linked to the building node 404 via edge 460 while a floor 2 node 406 is linked to the building node 404 via edge 462. The floor 2 includes a particular room represented by edge 464 between floor 2 node 406 and room node 408. Various pieces of equipment are included within the room. A light represented by light node 416, a bedside lamp node 414, a bedside lamp node 412, and a hallway light node 410 are related to room node 408 via edge 466, edge 472, edge 470, and edge 468.
The light represented by light node 416 is related to a light connector 426 via edge 484. The light connector 426 is related to multiple commands for the light represented by the light node 416 via edges 484, 486, and 488. The commands may be a brightness setpoint 424, an on command 425, and a hue setpoint 428. The cloud platform 106 can receive a request to identify commands for the light represented by the light node 416 and can identify the nodes 424-428 and provide an indication of the commands represented by the nodes 424-428 to the requesting entity. The requesting entity can then issue the commands represented by the nodes 424-428.
The bedside lamp node 414 is linked to a bedside lamp connector 481 via an edge 413. The connector 481 is related to commands for the bedside lamp represented by the bedside lamp node 414 via edges 492, 496, and 494. The command nodes are a brightness setpoint node 432, an on command node 434, and a color command 436. The hallway light 410 is related to a hallway light connector 446 via an edge 498d. The hallway light connector 446 is linked to multiple commands for the hallway light node 410 via edges 498g, 498f, and 498e. The commands are represented by an on command node 452, a hue setpoint node 450, and a light bulb activity node 448.
The graph projection 400 includes a name space node 422 related to a server A node 418 and a server B node 420 via edges 474 and 476. The name space node 422 is related to the bedside lamp connector 481, the bedside lamp connector 444, and the hallway light connector 446 via edges 482, 480, and 478. The bedside lamp connector 444 is related to commands, e.g., the color command node 440, the hue setpoint command 438, a brightness setpoint command 456, and an on command 454 via edges 498c, 498b, 498a, and 498.
Referring now to
The edge platform 102 can include a device hub 502, a connector 504, and/or an integration layer 512. The edge platform 102 can facilitate communication between the devices 514-518 and the cloud platform 106 and/or twin manager 108. The communication can be telemetry, commands, control data, etc. Examples of command and control via a building data platform is described in U.S. patent application Ser. No. 17/134,661 filed Dec. 28, 2020, the entirety of which is incorporated by reference herein.
The devices 514-518 can be building devices that communicate with the edge platform 102 via a variety of building protocols. For example, the protocol could be Open Platform Communications (OPC) Unified Architecture (UA), Modbus, BACnet, etc. The integration layer 512 can, in some embodiments, integrate the various devices 514-518 through the respective communication protocols of each of the devices 514-518. In some embodiments, the integration layer 512 can dynamically include various integration components based on the needs of the instance of the edge platform 102. For example, if a BACnet device is connected to the edge platform 102, the edge platform 102 may run a BACnet integration component. The connector 504 may be the core service of the edge platform 102. In some embodiments, every instance of the edge platform 102 can include the connector 504. In some embodiments, the edge platform 102 is a light version of a gateway.
In some embodiments, the connectivity manager 506 operates to connect the devices 514-518 with the cloud platform 106 and/or the twin manager 108. The connectivity manager 506 can allow a device running the connectivity manager 506 to connect with an ecosystem, the cloud platform 106, another device, another device which in turn connects the device to the cloud, a data center, a private on-premises cloud, etc. The connectivity manager 506 can facilitate communication northbound (with higher level networks), southbound (with lower level networks), and/or east/west (e.g., with peer networks). The connectivity manager 506 can implement communication via MQ Telemetry Transport (MQTT) and/or Sparkplug, in some embodiments. The operational abilities of the connectivity manager 506 can be extended via a software development kit (SDK) and/or an API. In some embodiments, the connectivity manager 506 can handle offline network states with various networks.
In some embodiments, the device manager 508 can be configured to manage updates and/or upgrades for the device that the device manager 508 is run on, the software for the edge platform 102 itself, and/or devices connected to the edge platform 102, e.g., the devices 514-518. The software updates could be new software components, e.g., services, new integrations, etc. The device manager 508 can be used to manage software for edge platforms for a site, e.g., make updates or changes on a large scale across multiple devices. In some embodiments, the device manager 508 can implement an upgrade campaign where one or more certain device types and/or pieces of software are all updated together. The update depth may be of any order, e.g., a single update to a device, an update to a device and a lower level device that the device communicates with, etc. In some embodiments, the software updates are delta updates, which are suitable for low-bandwidth devices. For example, instead of replacing an entire piece of software on the edge platform 102, only the portions of the piece of software that need to be updated may be updated, thus reducing the amount of data that needs to be downloaded to the edge platform 102 in order to complete the update.
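One way such a delta could be computed is sketched below (a hash-comparison approach; the manifest format is an illustrative assumption, not the disclosed mechanism):

```python
import hashlib

def delta_manifest(local_files, remote_manifest):
    """Return only the paths whose content hashes differ, so a low-bandwidth
    device downloads a delta rather than the entire piece of software."""
    stale = []
    for path, remote_hash in remote_manifest.items():
        local_hash = hashlib.sha256(local_files.get(path, b"")).hexdigest()
        if local_hash != remote_hash:
            stale.append(path)
    return stale
```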
The device identity manager 510 can implement authorization and authentication for the edge platform 102. For example, when the edge platform 102 connects with the cloud platform 106, the twin manager 108, and/or the devices 514-518, the device identity manager 510 can identify the edge platform 102 to the various platforms, managers, and/or devices. Regardless of the device that the edge platform 102 is implemented on, the device identity manager 510 can handle identification and uniquely identify the edge platform 102. The device identity manager 510 can handle certificate management, trust data, authentication, authorization, encryption keys, credentials, signatures, etc. Furthermore, the device identity manager 510 may implement various security features for the edge platform 102, e.g., antivirus software, firewalls, virtual private networks (VPNs), etc. Furthermore, the device identity manager 510 can manage commissioning and/or provisioning for the edge platform 102.
Referring now to
The edge platform 102 can include a protocol integration layer 610 that facilitates communication with the building subsystems 122 via one or more protocols. In some embodiments, the protocol integration layer 610 can be dynamically updated with a new protocol integration responsive to detecting that a new device is connected to the edge platform 102 and the new device requires the new protocol integration. In some embodiments, the protocol integration layer 610 can be customized through an SDK 612.
In some embodiments, the edge platform 102 can handle MQTT communication through an MQTT layer 608 and an MQTT connector 606. In some embodiments, the MQTT layer 608 and/or the MQTT connector 606 handles MQTT based communication and/or any other publication/subscription based communication where devices can subscribe to topics and publish to topics. In some embodiments, the MQTT connector 606 implements an MQTT broker configured to manage topics and facilitate publications to topics, subscriptions to topics, etc. to support communication between the building subsystems 122 and/or with the cloud platform 106. An example of devices of a building communicating via a publication/subscription method is shown in
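A minimal publish/subscribe sketch using the third-party paho-mqtt client (the broker address and topic names are illustrative assumptions):

```python
import paho.mqtt.client as mqtt  # common third-party MQTT client

def on_message(client, userdata, message):
    # Deliver each publication on a subscribed topic to the platform.
    print(message.topic, message.payload)

client = mqtt.Client()  # paho-mqtt v1-style constructor
client.on_message = on_message
client.connect("broker.building.local", 1883)  # hypothetical broker address
client.subscribe("building/floor1/thermostat/+/temperature")
client.publish("building/floor1/ahu/1/setpoint", "21.5")
client.loop_forever()
```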
The edge platform 102 includes a translations, rate-limiting, and routing layer 604. The layer 604 can handle translating data from one format to another format, e.g., from a first format used by the building subsystems 122 to a format that the cloud platform 106 expects, or vice versa. The layer 604 can further perform rate limiting to control the rate at which data is transmitted, requests are sent, requests are received, etc. The layer 604 can further perform message routing, in some embodiments. The cloud connector 602 may connect the edge platform 102 to the cloud platform 106, e.g., establish and/or communicate with one or more communication endpoints between the cloud platform 106 and the cloud connector 602.
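One way the layer 604 could implement rate limiting is a token bucket, sketched below as an illustration only:

```python
import time

class TokenBucket:
    """A simple token bucket, one way such a layer could rate-limit traffic."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate=10, capacity=20)  # ~10 messages/second, bursts of 20
forward = limiter.allow()
```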
Referring now to
In some embodiments, the device 662 and/or the device 664 implement gateway operations for connecting the devices of the building subsystems 122 with the cloud platform 106 and/or the twin manager 108. In some embodiments, the devices 662 and/or 664 can communicate with the building subsystems 122, collect data from the building subsystems 122, and communicate the data to the cloud platform 106 and/or the twin manager 108. In some embodiments, the device 662 and/or the device 664 can push commands from the cloud platform 106 and/or the twin manager 108 to the building subsystems 122.
The systems and devices 656-664 can each run an instance of the edge platform 102. In some embodiments, the systems and devices 656-664 run the connector 504 which may include, in some embodiments, the connectivity manager 506, the device manager 508, and/or the device identity manager 510. In some embodiments, the device manager 508 controls what services each of the systems and devices 656-664 run, e.g., what services from a service catalog 630 each of the systems and devices 656-664 run.
The service catalog 630 can be stored in the cloud platform 106, within a local server (e.g., in the server database 658 of the local server 656), on the computing system 660, on the device 662, on the device 664, etc. The various services of the service catalog 630 can be run on the systems and devices 656-664, in some embodiments. The services can further move around the systems and devices 656-664 based on the available computing resources, processing speeds, data availability, the locations of other services which produce data or perform operations required by the service, etc.
The service catalog 630 can include an analytics service 632 that generates analytics data based on building data of the building subsystems 122, a workflow service 634 that implements a workflow, and/or an activity service 636 that performs an activity. The service catalog 630 includes an integration service 638 that integrates a device with a particular subsystem (e.g., a BACnet integration, a Modbus integration, etc.), a digital twin service 640 that runs a digital twin, and/or a database service 642 that implements a database for storing building data. The service catalog 630 can include a control service 644 for operating the building subsystems 122, a scheduling service 646 that handles scheduling of areas (e.g., desks, conference rooms, etc.) of a building, and/or a monitoring service 648 that monitors a piece of equipment of the building subsystems 122. The service catalog 630 includes a command service 650 that implements operational commands for the building subsystems 122, an optimization service 652 that runs an optimization to identify operational parameters for the building subsystems 122, and/or an archive service 654 that archives settings, configurations, etc. for the building subsystems 122.
In some embodiments, the various systems 656, 660, 662, and 664 can realize technical advantages by implementing services of the service catalog 630 locally and/or storing the service catalog 630 locally. Because the services can be implemented locally, i.e., within a building, lower latency can be realized in making control decisions or deriving information since the communication time between the systems 656, 660, 662, and 664 and the cloud is not needed to run the services. Furthermore, because the systems 656, 660, 662, and 664 can run independently of the cloud (e.g., implement their services independently), even if the network 104 fails or encounters an error that prevents communication between the cloud and the systems 656, 660, 662, and 664, the systems can continue operation without interruption. Furthermore, by balancing computation between the cloud and the systems 656, 660, 662, and 664, power usage can be balanced more effectively. Furthermore, the system 629 can scale (e.g., grow or shrink) the functionality/services provided on edge devices based on the capabilities of the edge hardware onto which the edge system is being implemented.
Referring now to
The local server 702 can include a connector 704, services 706-710, a building normalization layer 712, and integrations 714-718. These components of the local server 702 can be deployed to the local server 702, e.g., from the cloud platform 106. These components may further be dynamically moved to various other devices of the building, in some embodiments. The connector 704 may be the connector described with reference to
The building normalization layer 712 can be a software component that runs the integrations 714-718 and/or the analytics 706-710. The building normalization layer 712 can be configured to allow for a variety of different integrations and/or analytics to be deployed to the local server 702. In some embodiments, the building normalization layer 712 could allow for any service of the service catalog 630 to run on the local server 702. Furthermore, the building normalization layer 712 can relocate, or allow for relocation of, services and/or integrations across the cloud platform 106, the local server 702, and/or the device/gateway 720. In some embodiments, the services 706-710 are relocatable based on the processing power of the local server 702, communication bandwidth, available data, etc. The services can be moved from one device to another in the system 700 such that the requirements for the service are met appropriately.
Furthermore, instances of the integrations 714-718 can be relocatable and/or deployable. The integrations 714-718 may be instantiated on devices of the system 700 based on the requirements of the devices, e.g., whether the local server 702 needs to communicate with a particular device (e.g., the Modbus integration 714 could be deployed to the local server 702 responsive to a detection that the local server 702 needs to communicate with a Modbus device). The locations of the integrations can be limited by the physical protocols that each device is capable of implementing and/or security limitations of each device.
In some embodiments, the deployment and/or movement of services and/or integrations can be done manually and/or in an automated manner. For example, when a building site is commissioned, a user could manually select, e.g., via a user interface on the user device 176, the devices of the system 700 where each service and/or integration should run. In some embodiments, instead of having a user select the locations, a system, e.g., the cloud platform 106, could deploy services and/or integrations to the devices of the system 700 automatically based on the ideal locations for each of multiple different services and/or integrations.
In some embodiments, an orchestrator (e.g., run on instances of the building normalization layer 712 or in the cloud platform 106) or a service and/or integration itself could determine that a particular service and/or integration should move from one device to another device after deployment. In some embodiments, as the devices of the system 700 change, e.g., more or fewer services are run, hard drives are filled with data, physical building devices are moved, installed, and/or uninstalled, the available data, bandwidth, computing resources, and/or memory resources may change. The services and/or integrations can be moved from a first device to a second, more appropriate device responsive to a detection that the first device is not meeting the requirements of the service and/or integration.
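A minimal sketch of the placement decision such an orchestrator might make is given below. The DeviceStatus and ServiceRequirements fields are hypothetical stand-ins for whatever resource and data-availability signals an actual deployment reports.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceStatus:
    device_id: str
    free_cpu: float       # fraction of CPU currently available
    free_disk_mb: int
    datasets: set         # data available locally on the device

@dataclass
class ServiceRequirements:
    min_cpu: float
    min_disk_mb: int
    needed_datasets: set

def meets(dev: DeviceStatus, req: ServiceRequirements) -> bool:
    return (dev.free_cpu >= req.min_cpu
            and dev.free_disk_mb >= req.min_disk_mb
            and req.needed_datasets <= dev.datasets)

def choose_host(current: DeviceStatus, candidates: list,
                req: ServiceRequirements) -> Optional[DeviceStatus]:
    """Keep the service where it is if requirements are still met; otherwise
    pick the first candidate device that satisfies them (None if nothing fits)."""
    if meets(current, req):
        return current
    return next((d for d in candidates if meets(d, req)), None)

req = ServiceRequirements(min_cpu=0.2, min_disk_mb=500, needed_datasets={"occupancy"})
gateway = DeviceStatus("gateway-1", free_cpu=0.05, free_disk_mb=2000, datasets={"occupancy"})
server = DeviceStatus("local-server", free_cpu=0.6, free_disk_mb=50_000, datasets={"occupancy"})
print(choose_host(gateway, [server], req).device_id)  # -> local-server
```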
As an example, an energy efficiency model service could be deployed to the system 700. For example, a user may request that an energy efficiency model service run in their building. Alternatively, a system may identify that an energy efficiency model service would improve the performance of the building and automatically deploy the service. The energy efficiency model service may have requirements. For example, the energy efficiency model may have a high data throughput requirement, a requirement for access to weather data, a high requirement for data storage to store historical data needed to make inferences, etc. In some embodiments, a rules engine could define whether services get relocated to other devices, whether a model goes back to the cloud for more training, whether an upgrade is needed to implement an increase in points, etc.
As another example, a historian service may manage a log of historical building data collected for a building, e.g., store a record of historical temperature measurements of a building, store a record of building occupant counts, store a record of operational control decisions (e.g., setpoints, static pressure setpoints, fan speeds, etc.), etc. One or more other services may depend on the historian, for example, the one or more other services may consume historical data recorded by the historian. In some embodiments, other services can be relocated along with the historian service such that the other services can operate on the historian data. For example, an occupancy prediction service may need a historical log of occupancy recorded by the historian service to run. In some embodiments, instead of having the occupancy prediction service and the historian run on the same physical device, a particular integration between the two devices on which the historian service and the occupancy prediction service run could be established such that occupancy data can be provided from the historian service to the occupancy prediction service.
This portability of services and/or integrations removes dependencies between hardware and software. Allowing services and/or integrations to move from one device to another device can keep services running continuously even if they run in a variety of locations. This decouples software from hardware.
In some embodiments, the building normalization layer 712 can facilitate auto discovery of devices and/or perform auto configuration. In some embodiments, the building normalization 726 of the cloud platform 106 performs the auto discovery. In some embodiments, responsive to detecting a new device connected to the local server 702, e.g., a new device of the building subsystems 122, the building normalization layer can identify points of the new device, e.g., identify measurement points, control points, etc. In some embodiments, the building normalization layer 712 performs a discovery process where strings, tags, or other metadata are analyzed to identify each point. In some embodiments, a discovery process as discussed in U.S. patent application Ser. No. 16/885,959 filed May 28, 2020, U.S. patent application Ser. No. 16/885,968 filed May 28, 2020, U.S. patent application Ser. No. 16/722,439 filed Dec. 20, 2019 (now U.S. Pat. No. 10,831,163), and U.S. patent application Ser. No. 16/663,623 filed Oct. 25, 2019, which are incorporated by reference herein in their entireties, can be performed.
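In a simple form, the string and tag analysis mentioned above could look like the following sketch. The regular-expression patterns and normalized point types are invented for illustration and are not taken from the referenced applications.

```python
import re

# Hypothetical tag patterns; a real discovery process would use a much richer
# dictionary of vendor naming conventions.
POINT_PATTERNS = [
    (re.compile(r"(ZN|ZONE)[-_ ]?T(EMP)?", re.I), "zone-air-temperature"),
    (re.compile(r"OCC(UPANCY)?", re.I), "occupancy"),
    (re.compile(r"SP|SETPOINT", re.I), "temperature-setpoint"),
]

def classify_point(raw_name: str) -> str:
    """Map a raw point string from a discovered device to a normalized point type."""
    for pattern, point_type in POINT_PATTERNS:
        if pattern.search(raw_name):
            return point_type
    return "unknown"

for raw in ["AHU1.ZN-T", "VAV-3 Occ Sensor", "RM101_SP", "MiscStatus"]:
    print(raw, "->", classify_point(raw))
```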
In some embodiments, the cloud platform 106 performs a site survey of all devices of a site or multiple sites. For example, the cloud platform 106 could identify all devices installed in the system 700. Furthermore, the cloud platform 106 could perform discovery for any devices that are not recognized. The result of the discovery of a device could be a configuration for the device, for example, indications of points to collect data from and/or send commands to. The cloud platform 106 can, in some embodiments, distribute a copy of the configuration for the device to all of the instances of the building normalization layer 712. In some embodiments, the copy of the configuration can be distributed to other buildings different from the building that the device was discovered at. In this regard, responsive to a similar device type being installed somewhere else, e.g., in the same building, in a different building, at a different campus, etc., the instance of the building normalization can select the copy of the device configuration and implement the device configuration for the device.
Similarly, if the instance of the building normalization detects a new device that is not recognized, the building normalization could perform a discovery process for the new device and distribute the configuration for the new device to other instances of the building normalization. In this regard, each building normalization instance can implement learning by discovering new devices and injecting device configurations into a device catalog stored and distributed across each building normalization instance.
In some embodiments, the device catalog can store names of every data point of every device. In some embodiments, the services that operate on the data points can consume the data points based on the indications of the data points in the device catalog. Furthermore, the integrations may collect data from data points and/or send actions to the data points based on the naming of the device catalog. In some embodiments, the various building normalization instances can synchronize the device catalogs they store. For example, changes to one device catalog can be distributed to other building normalizations. If a point name was changed for a device, this change could be distributed across all building normalizations through the device catalog synchronization such that there are no disruptions to the services that consume the point.
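A toy version of this catalog synchronization is sketched below; the version-based, last-writer-wins merge is an assumption made for brevity, and a production system would need more careful conflict handling.

```python
import json

class DeviceCatalog:
    """Toy device catalog keyed by device type; each entry maps raw point names
    to normalized point types. A version counter lets peers detect stale copies."""
    def __init__(self):
        self.entries = {}   # device_type -> {point_name: point_type}
        self.version = 0

    def upsert(self, device_type: str, points: dict):
        self.entries[device_type] = points
        self.version += 1

    def export_change(self, device_type: str) -> str:
        return json.dumps({"version": self.version,
                           "device_type": device_type,
                           "points": self.entries[device_type]})

    def apply_change(self, payload: str):
        change = json.loads(payload)
        # Last-writer-wins on version; real systems would need richer merge rules.
        if change["version"] > self.version:
            self.entries[change["device_type"]] = change["points"]
            self.version = change["version"]

site_a, site_b = DeviceCatalog(), DeviceCatalog()
site_a.upsert("smart-thermostat-x", {"ZN-T": "zone-air-temperature"})
site_b.apply_change(site_a.export_change("smart-thermostat-x"))
print(site_b.entries)  # the point-name change propagates to the second instance
```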
The analytics service 706 may be a service that generates one or more analytics based on building data received from a building device, e.g., directly from the building device or through a gateway that communicates with the building device, e.g., from the device/gateway 720. The analytics service 706 can be configured to generate analytics data based on the building data, such as a carbon emissions metric, an energy consumption metric, a comfort score, a health score, etc. The database service 708 can operate to store building data, e.g., building data collected from the device/gateway 720. In some embodiments, the analytics service 706 may operate against historical data stored in the database service 708. In some embodiments, the analytics service 706 may have a requirement that the analytics service 706 is implemented with access to the database service 708 that stores historical data. In this regard, the analytics service 706 can be deployed to, or relocated to, a device including an instantiation of the database service 708. In some embodiments, the database service 708 could be deployed to the local server 702 responsive to determining that the analytics service 706 requires the database service 708 to run.
The optimization service 710 can be a service that operates to implement an optimization of one or more variables based on one or more constraints. The optimization service 710 could, in some embodiments, implement optimization for allocating loads, making control decisions, improving energy usage and/or occupant comfort etc. The optimization performed by the optimization service 710 could be the optimization described in U.S. patent application Ser. No. 17/542,184 filed Dec. 3, 2021, which is incorporated by reference herein.
The Modbus integration 714 can be a software component that enables the local server 702 to collect building data for data points of building devices that operate with a Modbus protocol. Furthermore, the Modbus integration 714 can enable the local server 702 to communicate data, e.g., operating parameters, setpoints, load allocations, etc. to the building device. The communicated data may, in some embodiments, be control decisions determined by the optimization service 710.
Similarly, the BACnet integration 716 can enable the local server 702 to communicate with one or more BACnet based devices, e.g., send data to, or receive data from, the BACnet based devices. The endpoint 718 could be an endpoint for MQTT and/or Sparkplug. In some embodiments, the element 718 can be a software service including an endpoint and/or a layer for implementing MQTT and/or Sparkplug communication. In the system 700, the endpoint 718 can be used for communicating by the local server 702 with the device/gateway 720, in some embodiments.
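As a rough, non-authoritative illustration of an MQTT endpoint such as the endpoint 718, the sketch below publishes a collected point sample using the paho-mqtt package (assuming its v1.x client API). The broker address, topic layout, and payload shape are assumptions.

```python
import json
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x is installed

BROKER_HOST = "localhost"                 # assumed local broker
TOPIC = "building/local-server/points"    # hypothetical topic layout

client = mqtt.Client()
client.connect(BROKER_HOST, 1883)
client.loop_start()

# Publish a collected point value toward the device/gateway or cloud endpoint.
sample = {"pointId": "AHU-1/ZN-T", "value": 21.5}
client.publish(TOPIC, json.dumps(sample), qos=1)
```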
The cloud platform 106 can include an artificial intelligence (AI) service 721, an archive service 722, and/or a dashboard service 724. The AI service 721 can run one or more artificial intelligence operations, e.g., inferring information, performing autonomous control of the building, etc. The archive service 722 may archive building data received from the device/gateway 720 (e.g., collected point data). The archive service 722 may, in some embodiments, store control decisions made by another service, e.g., the AI service 721, the optimization service 710, etc. The dashboard service 724 can be configured to provide a user interface to a user with analytic results, e.g., generated by the analytics service 706, command interfaces, etc. The cloud platform 106 is further shown to include the building normalization 726, which may be an instance of the building normalization layer 712.
The cloud platform 106 further includes an endpoint 754 for communicating with the local server 702 and/or the device/gateway 720. The cloud platform 106 may include an integration 756, e.g., an MQTT integration supporting MQTT based communication with MQTT devices.
The device/gateway 720 can include a local server connector 732 and a cloud platform connector 734. The cloud platform connector 734 can connect the device/gateway 720 with the cloud platform 106. The local server connector 732 can connect the device/gateway 720 with the local server 702. The device/gateway 720 includes a commanding service 736 configured to implement commands for devices of the building subsystems 122 (e.g., the device/gateway 720 itself or another device connected to the device/gateway 720). The monitoring service 738 can be configured to monitor operation of the devices of the building subsystems 122, the scheduling service 740 can implement scheduling for a space or asset, the alarm/event service 742 can generate alarms and/or events when specific rules are tripped based on the device data, the control service 744 can implement a control algorithm and/or application for the devices of the building subsystems 122, and/or the activity service 746 can implement a particular activity for the devices of the building subsystems 122.
The device/gateway 720 further includes a building normalization 748. The building normalization 748 may be an instance of the building normalization layer 712, in some embodiments. The device/gateway 720 may further include integrations 750-752. The integration 750 may be a Modbus integration for communicating with a Modbus device. The integration 752 may be a BACnet integration for communicating with BACnet devices.
Referring now to
The local BMS server 804 may be a server that implements building applications and/or data collection. The building applications can be the various services discussed herein, e.g., the services of the service catalog 630. In some embodiments, the BMS server 804 can include data storage for storing historical data. In some embodiments, the local BMS server 804 can be the local server 656 and/or the local server 702. In some embodiments, the local BMS server 804 can implement user interfaces for viewing on a user device 176. The local BMS server 804 includes a BMS normalization API 810 for allowing external systems to communicate with the local BMS server 804. Furthermore, the local BMS server 804 includes BMS components 812. These components may implement the user interfaces, applications, data storage and/or logging, etc. Furthermore, the local BMS server 804 includes a BMS endpoint 814 for communicating with the network engine 816. The BMS endpoint 814 may also connect to other devices, for example, via a local or external network. The BMS endpoint 814 can connect to any type of device capable of communicating with the local BMS server 804.
The system 800 includes a network engine 816. The network engine 816 can be configured to handle network operations for networks of the building. For example, the engine integrations 824 of the network engine 816 can be configured to facilitate communication via BACnet, Modbus, CAN, N2, and/or any other protocol. In some embodiments, the network communication is non-IP based communication. In some embodiments, the network communication is IP based communication, e.g., Internet enabled smart devices, BACnet/IP, etc. In some embodiments, the network engine 816 can communicate data collected from the building subsystems 122 and pass the data to the local BMS server 804.
In some embodiments, the network engine 816 includes existing engine components 822. The engine components 822 can be configured to implement network features for managing the various building networks that the building subsystems 122 communicate with. The network engine 816 may further include a BMS normalization API 820 that implements integration with other external systems. The network engine 816 further includes a BMS connector 818 that facilitates a connection between the network engine 816 and a BMS endpoint 814. In some embodiments, the BMS connector 818 collects point data received from the building subsystems 122 via the engine integrations 824 and communicates the collected points to the BMS endpoint 814.
In the system 800, the local BMS server 804 can be adapted to facilitate communication between the local BMS server 804, the network engine 816, and/or the building subsystems 122 and the cloud platform 106. In some embodiments, the adaptation can be implemented by deploying an endpoint 802 to the cloud platform 106. The endpoint 802 can be an MQTT and/or Sparkplug endpoint, in some embodiments. Furthermore, a cloud platform connector 806 could be deployed to the local BMS server 804. The cloud platform connector 806 could facilitate communication between the local BMS server 804 and the cloud platform 106. Furthermore, a BMS API adapter service 808 can be deployed to the local BMS server 804 to implement an integration between the cloud platform connector 806 and the BMS normalization API 810. The BMS API adapter service 808 can form a bridge between the existing BMS components 812 and the cloud platform connector 806.
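The bridging role of the BMS API adapter service 808 can be pictured with the following sketch. The REST path, payload shapes, and class name are hypothetical, since the concrete interface of the BMS normalization API 810 is not specified here.

```python
import json
from urllib import request

class BmsApiAdapter:
    """Bridges a cloud platform connector to a local BMS normalization API.
    The '/points' path and message shapes are illustrative assumptions."""
    def __init__(self, bms_base_url: str):
        self.base = bms_base_url.rstrip("/")

    def read_points(self) -> list:
        # Pull point data from the (assumed) BMS normalization API.
        with request.urlopen(f"{self.base}/points") as resp:
            return json.loads(resp.read())

    def to_cloud_message(self, points: list) -> bytes:
        # Reshape BMS-format points into the message the cloud connector sends.
        return json.dumps({"samples": points}).encode()

adapter = BmsApiAdapter("http://bms-server.local/api")  # hypothetical address
# A cloud platform connector could then call:
# cloud_connector.send(adapter.to_cloud_message(adapter.read_points()))
```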
Referring now to
In the system 900, reusable cloud connector components and/or a reusable adapter service are deployed to the network engine 816 to enable the network engine 816 to communicate directly with the cloud platform 106 endpoint 802. In this regard, components of the edge platform 102 can be deployed to the network engine 816 itself allowing for plug and play on the engine such that gateway functions can be run on the network engine 816 itself.
In the system 900, a cloud platform connector 906 and a cloud platform connector 904 can be deployed to the network engine 816. The cloud platform connector 906 and/or the cloud platform connector 904 can be instances of the cloud platform connector 806. Furthermore, an endpoint 902 can be deployed to the local BMS server 804. The endpoint 902 can be a Sparkplug and/or MQTT endpoint. The cloud platform connector 906 can be configured to facilitate communication between the network engine 816 and the endpoint 902. In some embodiments, point data can be communicated between the building subsystems 122 and the endpoint 902. Furthermore, the cloud platform connector 904 can be configured to facilitate communication between the endpoint 802 and the network engine 816, in some embodiments. A BMS API adapter service 908 can integrate the cloud platform connector 906 and/or the cloud platform connector 904 with the BMS normalization API 820.
Referring now to
In some embodiments, the gateway 1004 can be deployed on a computing node of a building that hosts the gateway software, e.g., the components 1006-1014. In some embodiments, the gateway 1004 can be installed in a building as a new physical device. In some embodiments, gateway devices can be built on computing nodes of a network to communicate with legacy devices, e.g., the network engine 816 and/or the building subsystems 122. In some embodiments, the gateway 1004 can be deployed to a computing system to enable the network engine 816 to communicate with the cloud platform 106. In some embodiments, the gateway 1004 is a new physical device and/or is a modified existing gateway. In some embodiments, the cloud platform 106 can identify what physical devices are near and/or are connected to the network engine 816. The cloud platform 106 can deploy the gateway 1004 to the identified physical device. Some pieces of the software stack of the gateway may be legacy.
The gateway 1004 can include a cloud platform connector 1006 configured to facilitate communication between the endpoint 802 of the cloud platform 106 and the gateway 1004. The cloud platform connector 1006 can be an instance of the cloud platform connector 806 and/or the connector 504. The gateway 1004 can further include services 1008. The services 1008 can be the services described with reference to
In some embodiments, the gateway 1004, via the cloud platform connector 1006 and/or the BMS API adapter service 1012 can facilitate direct communication between the network engine 816 and the cloud platform 106. For example, data collected from the building subsystems 122 can be collected via the engine integrations 824 and communicated to the gateway 1004 via the BMS normalization API 820 and the BMS API adapter service 1012. The cloud platform connector 1006 can communicate the collected data points to the endpoint 802 of the cloud platform 106. The BMS API adapter service 1012 and the BMS API adapter service 808 can be common adapters which can make calls and/or responses to the BMS normalization API 810 and/or the BMS normalization API 820.
The gateway 1004 can allow for the addition of services (e.g., the services 1008) and/or integrations (e.g., the integrations endpoint 1014) to the system 1000 that may not be deployable to the local BMS server 804 and/or the network engine 816.
Referring now to
In some embodiments, the surveillance camera 1106 and/or the smart thermostat 1108 are themselves gateways. The gateways may be built in a portable language such as Rust and embedded within the surveillance camera 1106 and/or the smart thermostat 1108. In some embodiments, one or both of the surveillance camera 1106 and/or the smart thermostat 1108 can implement a building device broker 1105. In some embodiments, the building device broker 1105 can be implemented on a separate building gateway, e.g., the device/gateway 720 and/or the gateway 1004.
In some embodiments, the surveillance camera 1106 can perform motion detection, e.g., detect the presence of the user 1104. In some embodiments, responsive to detecting the user 1104, the surveillance camera 1106 can generate an occupancy trigger event. The occupancy trigger event can be published to a topic by the surveillance camera 1106. The building device broker 1105 can, in some embodiments, handle various topics, handle topic subscriptions, topic publishing, etc. In some embodiments, the smart thermostat 1108 may be subscribed to an occupancy topic for the zone 1102 that the surveillance camera 1106 publishes occupancy trigger events to. The smart thermostat 1108 may, in some embodiments, adjust a temperature setpoint responsive to receiving an occupancy trigger event being published to the topic.
In some embodiments, an IoT platform and/or other application is subscribed to the topic that the surveillance camera 1106 publishes to and commands the smart thermostat 1108 to adjust its temperature setpoint responsive to detecting the occupancy trigger event. In some embodiments, the events, topics, publishing, and/or subscriptions are MQTT-based messages. In some embodiments, the event communicated by the surveillance camera 1106 is an Open Network Video Interface Forum (ONVIF) event.
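A minimal subscriber-side sketch of this pattern, again assuming MQTT via the paho-mqtt package (v1.x API), is shown below. The topic name, payload shape, and setpoint values are illustrative; apply_setpoint() stands in for the thermostat's actual control logic.

```python
import json
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x is installed

OCCUPANCY_TOPIC = "building/zone-1102/occupancy"  # hypothetical topic name

def apply_setpoint(celsius: float):
    print(f"adjusting temperature setpoint to {celsius} C")  # placeholder logic

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # Tighten the setpoint when the zone is occupied; relax it when empty.
    apply_setpoint(21.0 if event.get("occupied") else 18.0)

thermostat = mqtt.Client()
thermostat.on_message = on_message
thermostat.connect("localhost", 1883)  # assumed building device broker address
thermostat.subscribe(OCCUPANCY_TOPIC)
thermostat.loop_forever()  # block and dispatch occupancy trigger events
```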
Referring now to
In some embodiments, such a gateway could include a mini personal computer (PC) with various software connectors that connect the gateway to the building subsystems 122, e.g., a BACnet connector, an OPC/UA connector, a Modbus connector, a Transmission Control Protocol/Internet Protocol (TCP/IP) connector, and/or connectors for various other protocols. In some embodiments, the mini PC runs an operating system that hosts various micro-services for the communication.
In some embodiments, hosting a mini PC in a building presents issues. For example, the operating system on the mini PC may need to be updated for security patches and/or operating system updates. This might impact the micro-services which the mini PC runs. Micro-services may stop, may be deleted, and/or may have to be updated to manage the changes in the operating system. Furthermore, the mini PC may need to be managed by a local building information technologies (IT) team. The mini PC may be impacted by the building network and/or IT policies on the network. The mini PC may need to be commissioned by a technician visit to a local site. Similarly, a site visit by the technician may be required for troubleshooting any time that the mini PC encounters issues. For an increase in demand for the services of the mini PC, a technician may need to visit the site to make physical and/or software updates to the mini PC, which may incur additional cost for field testing and/or certifying new hardware and/or software.
To solve one or more of these issues, the system 1200 could include a cluster gateway 1206. The cluster gateway 1206 could be a cluster including one or more micro-services in containers. For example, the cluster gateway 1206 could be a Kubernetes cluster with Docker instances of micro-services. For example, the cluster gateway 1206 could run a BACnet micro-service 1208, a Modbus micro-service 1210, and/or an OPC/UA micro-service 1212. The cluster gateway 1206 can replace the mini PC with a more generic hardware device with the capability to host one or more different and/or changing containers.
In some embodiments, software updates to the cluster gateway 1206 can be managed centrally by a gateway manager 1202. The gateway manager 1202 could push new micro-services, e.g., a BACnet micro-service, a Modbus micro-service 1210, and/or an OPC/UA micro-service, to the cluster gateway 1206. In this manner, software upgrades are not dependent on an IT infrastructure at a building. A building owner may manage the underlying hardware that the cluster gateway 1206 runs on while the cluster gateway 1206 may be managed by a separate development entity. In some embodiments, commissioning for the cluster gateway 1206 is managed remotely. Furthermore, the workload for the cluster gateway 1206 can be managed, in some embodiments. In some embodiments, the cluster gateway 1206 runs independently of the hardware on which it is hosted, and thus any underlying hardware upgrades do not require testing of the software tools and/or software stack of the cluster gateway 1206.
The gateway manager 1202 can be configured to install and/or upgrade the cluster gateway 1206. The gateway manager 1202 can make upgrades to the micro-services that the cluster gateway 1206 runs and/or make upgrades to the operating environment of the cluster gateway 1206. In some embodiments, upgrades, security patches, new software, etc. can be pushed by the gateway manager 1202 to the cluster gateway 1206 in an automated manner. In some embodiments, errors and/or issues of the cluster gateway 1206 can be managed remotely and users can receive notifications regarding the errors and/or issues. In some embodiments, commissioning for the cluster gateway 1206 can be automated and the cluster gateway 1206 can be set up to run on a variety of different hardware environments.
In some embodiments, the cluster gateway 1206 can provide telemetry data of the building subsystems 122 to the cloud applications 1204. Furthermore, the cloud applications 1204 can provide command and control data to the cluster gateway 1206 for controlling the building subsystems 122. In some embodiments, command and/or control operations can be handled by the cluster gateway 1206. This may provide the ability to manage the demand and/or bandwidth requirements of the site by commanding the various containers including the micro-services on the cluster gateway 1206. This may allow for the management of upgrades and/or testing. Furthermore, this may allow for the replication of development, testing, and/or production environments. The cloud applications 1204 could be energy management applications, optimization applications, etc. In some embodiments, the cloud applications 1204 are the applications 110. In some embodiments, the cloud applications 1204 are the cloud platform 106.
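A toy sketch of the centrally managed update flow between a gateway manager such as the gateway manager 1202 and a cluster gateway such as the cluster gateway 1206 follows. The class names and image tags are invented; a real deployment would delegate the rollout to the cluster's own rolling-update machinery (e.g., Kubernetes) rather than this stand-in.

```python
from dataclasses import dataclass

@dataclass
class ContainerSpec:
    name: str
    image: str  # e.g., "registry.example/bacnet-service:1.4.3" (hypothetical)

class ClusterGateway:
    """Stand-in for a container cluster running on generic gateway hardware."""
    def __init__(self):
        self.running = {}  # name -> ContainerSpec

    def apply(self, spec: ContainerSpec):
        # A real cluster would perform a rolling update of the container here.
        self.running[spec.name] = spec
        print(f"now running {spec.name} @ {spec.image}")

class GatewayManager:
    """Central manager that pushes micro-service updates to cluster gateways."""
    def __init__(self, gateways: list):
        self.gateways = gateways

    def push_update(self, spec: ContainerSpec):
        for gateway in self.gateways:
            gateway.apply(spec)

manager = GatewayManager([ClusterGateway()])
manager.push_update(ContainerSpec("bacnet", "registry.example/bacnet-service:1.4.3"))
```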
Referring to
At step 1305, the building system can store one or more gateway components on one or more storage devices of the building system. The building system may be located within, or located remote from, the building to which the building system corresponds. The gateway components stored on the storage devices of the building system can facilitate communication with a cloud platform (e.g., the cloud platform 106) and facilitate communication with a physical building device (e.g., the device/gateway 720, the building subsystems 122, etc.). The gateway components can be, for example, any of the connectors, building normalization layers, services, or integrations described herein, including but certainly not limited to the connector 704, the services 706-710, the building normalization layer 712, and the integrations 714-718, among other components, software, integrations, configuration settings, or any other software-related data described in connection with
At step 1310, the building system can identify a computing system of the building that is in communication with the physical building device, the physical building device storing one or more data samples. Identifying the computing system can include accessing a database or lookup table of computing systems or devices that are present within or otherwise associated with managing one or more aspects of the building. In some implementations, the building system can query a network of the building to which the building system is communicatively coupled, to identify one or more other computing systems on the network. The computing systems may be associated with respective identifiers, and may communicate with the building system via the network or another suitable communications interface, connector, or integration, as described herein. The computing system may be in communication with one or more physical building devices, as described herein. In some implementations, the building system can identify each of the computing systems of the building that are in communication with at least one physical building device.
At step 1315, the building system can deploy the one or more gateway components to the identified computing system responsive to identifying that the computing system is in communication with the physical building device(s). For example, the building system can utilize one or more communication channels, which may be established via a network of the building, to transmit the gateway components to each of the identified computing systems of the building. Deploying the one or more gateway components can include installing or otherwise configuring the gateway components to execute at the one or more identified computing systems. Generally, the gateway components can be executed to perform any of the operations described herein. Deploying the gateway components can include storing computer-executable instructions corresponding to the gateway components at the identified computing systems. In some implementations, the particular gateway components deployed at an identified computing system can be selected based on the type of the physical building device to which the identified computing system is connected. Likewise, in some embodiments, the particular gateway components deployed at an identified computing system can be selected to correspond to an operation, type, or processing capability of the identified computing system, among other factors as described herein. Deploying the gateway components may include storing the gateway components in one or more predetermined memory regions at the computing system (e.g., in a particular directory, executable memory region, etc.), and may include installing, configuring, or otherwise applying one or more configuration settings for the gateway components or for the operation of the computing system.
As described herein, the one or more gateway components can include any type of software component, hardware configuration settings, or combinations thereof. The gateway components may include processor-executable instructions, which can be executed by the computing system to which the gateway component(s) are deployed. The one or more gateway components can cause the computing system to communicate with the physical building device to receive the one or more data samples (e.g., via one or more networks or communication interfaces). Additionally, the one or more gateway components cause the computing system to communicate the one or more data samples to the cloud platform. For example, the gateway components can include one or more adapters or communication software APIs that facilitate communication between computing devices within, and external to, the building. The gateway components may include adapters that cause the computing system to communicate with one or more network engines. The gateway components can include instructions that, when executed by the computing system, cause the computing system to detect a new physical building device connected to the computing system (e.g., by searching through different connected devices by device identifier, etc.), and then search a device library for a configuration of the new physical building device. Using the configuration for the new physical device, the gateway components can cause the computing system to implement the configuration to facilitate communication with the new physical building device. The gateway components can also perform a discovery process to discover the configuration for the new physical building device and store the configuration in the device library, for example, if the device library did not include the configuration. The device library can be stored at the cloud platform or on the one or more gateway components themselves. In some implementations, the device library is distributed across one or more instances of the one or more gateway components in a plurality of different buildings, and may be retrieved, for example, by accessing one or more networks to communicate with the multiple instances of gateway components to retrieve portions of, or all of, the device library. The gateway components can receive one or more values for control points of the physical building device, for example, from the building system, from the cloud platform, or from another system or device described herein, and communicate the one or more values to the control points of the physical building device via the one or more gateway components.
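The detect-then-lookup flow described above, with discovery as a fallback on a library miss, might be sketched as follows. The library shape, the discover() placeholder, and the configuration keys are assumptions for illustration only.

```python
# Hypothetical device library: device_type -> configuration of points to
# collect data from and/or send commands to.
DEVICE_LIBRARY = {
    "smart-thermostat-x": {"collect": ["ZN-T"], "command": ["ZN-SP"]},
}

def discover(device_id: str) -> dict:
    """Placeholder for a full discovery process (tag analysis, point walk, etc.)."""
    print(f"running discovery for {device_id}")
    return {"collect": [], "command": []}

def configure_device(device_id: str, device_type: str) -> dict:
    config = DEVICE_LIBRARY.get(device_type)
    if config is None:
        # Library miss: discover the device, then store the result so other
        # gateway-component instances can reuse it.
        config = discover(device_id)
        DEVICE_LIBRARY[device_type] = config
    return config

print(configure_device("dev-42", "smart-thermostat-x"))  # library hit
print(configure_device("dev-43", "unknown-vfd"))         # falls back to discovery
```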
The one or more gateway components can include a building service that causes the computing system to generate data based on the one or more data samples, which may be analytics data or any other type of data described herein that may be based on or associated with the data samples. When deploying the gateway components, the building system can identify one or more requirements for the building service, or any other of the gateway components. The requirements may include required processing resources, storage resources, data availability, or a presence of another building service executing at the computing system. The building system can query the computing system to determine the current operating characteristics (e.g., processing resources, storage resources, data availability, or a presence of another building service executing at the computing system, etc.), to determine that the computing system meets the one or more requirements for the gateway component(s). If the computing system meets the requirements, the building system can deploy the corresponding gateway components to the computing system. If the requirements are not met, the building system may deploy the gateway components to another computing system. The building system can periodically query, or otherwise receive messages from, the computing system that indicate the current operating characteristics of the computing system. In doing so, the building system can identify whether the requirements for the building service (or other gateway components) are no longer met by the computing system. If the requirements are no longer met, the building system can move (e.g., terminate execution of the gateway components or remove the gateway components from the computing system, and re-deploy the gateway components) the gateway components (e.g., the building service) from the computing system to a different computing system that meets the one or more requirements of the building service or gateway component(s).
Referring to
At step 1405, the building system can store one or more gateway components on one or more storage devices of the building system. The building system may be located within, or located remote from, the building to which the building system corresponds. The gateway components stored on the storage devices of the building system can facilitate communication with a cloud platform (e.g., the cloud platform 106) and facilitate communication with a physical building device (e.g., the device/gateway 720, the building subsystems 122, etc.). The gateway components can be, for example, any of the connectors, building normalization layers, services, or integrations described herein, including but certainly not limited to the connector 704, the services 706-710, the building normalization layer 712, and the integrations 714-718, among other components, software, integrations, configuration settings, or any other software-related data described in connection with
At step 1410, the building system can deploy the one or more gateway components to a BMS server, which may be in communication with one or more building devices via one or more network engines, as shown in
As described herein, the one or more gateway components can include any type of software component, hardware configuration settings, or combinations thereof. The gateway components may include processor-executable instructions, which can be executed by the BMS server to which the gateway component(s) are deployed. The one or more gateway components can cause the BMS server to communicate with the physical building device to receive the one or more data samples (e.g., via one or more networks or communication interfaces). Additionally, the one or more gateway components cause the BMS server to communicate the one or more data samples to the cloud platform. For example, the gateway components can include one or more adapters or communication software APIs that facilitate communication between computing devices within, and external to, the building. The gateway components may include adapters that cause the BMS server to communicate with one or more network engines. The gateway components can include instructions that, when executed by the BMS server, cause the BMS server to detect a new physical building device connected to the BMS server (e.g., by searching through different connected devices by device identifier, etc.), and then search a device library for a configuration of the new physical building device. Using the configuration for the new physical device, the gateway components can cause the BMS server to implement the configuration to facilitate communication with the new physical building device. The gateway components can also perform a discovery process to discover the configuration for the new physical building device and store the configuration in the device library, for example, if the device library did not include the configuration. The device library can be stored at the cloud platform or on the one or more gateway components themselves. In some implementations, the device library is distributed across one or more instances of the one or more gateway components in a plurality of different buildings, and may be retrieved, for example, by accessing one or more networks to communicate with the multiple instances of gateway components to retrieve portions of, or all of, the device library. The gateway components can receive one or more values for control points of the physical building device, for example, from the building system, from the cloud platform, or from another system or device described herein, and communicate the one or more values to the control points of the physical building device via the one or more gateway components.
The one or more gateway components can include a building service that causes the BMS server to generate data based on the one or more data samples, which may be analytics data or any other type of data described herein that may be based on or associated with the data samples. When deploying the gateway components, the building system can identify one or more requirements for the building service, or any other of the gateway components. The requirements may include required processing resources, storage resources, data availability, or a presence of another building service executing at the BMS server. The building system can query the BMS server to determine the current operating characteristics (e.g., processing resources, storage resources, data availability, or a presence of another building service executing at the BMS server, etc.), to determine that the BMS server meets the one or more requirements for the gateway component(s). If the BMS server meets the requirements, the building system can deploy the corresponding gateway components to the BMS server. If the requirements are not met, the building system may deploy the gateway components to another BMS server. The building system can periodically query, or otherwise receive messages from, the BMS server that indicate the current operating characteristics of the BMS server. In doing so, the building system can identify whether the requirements for the building service (or other gateway components) are no longer met by the BMS server. If the requirements are no longer met, the building system can move (e.g., terminate execution of the gateway components or remove the gateway components from the BMS server, and re-deploy the gateway components) the gateway components (e.g., the building service) from the BMS server to a different computing system that meets the one or more requirements of the building service or gateway component(s). In some implementations, the building system can identify communication protocols corresponding to the physical building devices associated with the BMS server, and deploy one or more integration components (e.g., associated with the physical building devices) to the BMS server to communicate with the one or more physical building devices via the one or more communication protocols. The integration components can be part of the one or more gateway components.
Referring to
At step 1505, the building system can store one or more gateway components on one or more storage devices of the building system. The building system may be located within, or located remote from, the building to which the building system corresponds. The gateway components stored on the storage devices of the building system can facilitate communication with a cloud platform (e.g., the cloud platform 106) and facilitate communication with a physical building device (e.g., the device/gateway 720, the building subsystems 122, etc.). The gateway components can be, for example, any of the connectors, building normalization layers, services, or integrations described herein, including but certainly not limited to the connector 704, the services 706-710, the building normalization layer 712, and the integrations 714-718, among other components, software, integrations, configuration settings, or any other software-related data described in connection with
At step 1510, the building system can deploy the one or more gateway components to a network engine, which may implement one or more local communications networks for one or more building devices of the building and receive one or more data samples from the one or more building devices, as described herein. To deploy the gateway components, the building system can utilize one or more communication channels, which may be established via a network of the building, to transmit the gateway components to the network engine of the building. Deploying the one or more gateway components can include installing or otherwise configuring the gateway components to execute at the network engine. Generally, the gateway components can be executed to perform any of the operations described herein. Deploying the gateway components can include storing computer-executable instructions corresponding to the gateway components at the network engine. In some implementations, the particular gateway components deployed at the network engine can be selected based on the type of the physical building device(s) to which the network engine is connected (e.g., via one or more networks implemented by the network engine, etc.), or to other types of computing systems with which the network engine is in communication. Likewise, in some embodiments, the particular gateway components deployed at the network engine can be selected to correspond to an operation, type, or processing capability of the network engine, among other factors as described herein. Deploying the gateway components may include storing the gateway components in one or more predetermined memory regions at the network engine (e.g., in a particular directory, executable memory region, etc.), and may include installing, configuring, or otherwise applying one or more configuration settings for the gateway components or for the operation of the network engine.
As described herein, the one or more gateway components can include any type of software component, hardware configuration settings, or combinations thereof. The gateway components may include processor-executable instructions, which can be executed by the network engine to which the gateway component(s) are deployed. The one or more gateway components can cause the network engine to communicate with the physical building device to receive the one or more data samples (e.g., via one or more networks or communication interfaces). Additionally, the one or more gateway components cause the network engine to communicate the one or more data samples to the cloud platform. For example, the gateway components can include one or more adapters or communication software APIs that facilitate communication between computing devices within, and external to, the building. The gateway components may include adapters that cause the network engine to communicate with one or more other computing systems (e.g., a BMS server, other building subsystems, etc.). The gateway components can include instructions that, when executed by the network engine, cause the network engine to detect a new physical building device connected to the network engine (e.g., by searching through different connected devices by device identifier, etc.), and then search a device library for a configuration of the new physical building device. Using the configuration for the new physical device, the gateway components can cause the network engine to implement the configuration to facilitate communication with the new physical building device. The gateway components can also perform a discovery process to discover the configuration for the new physical building device and store the configuration in the device library, for example, if the device library did not include the configuration. The device library can be stored at the cloud platform or on the one or more gateway components themselves. In some implementations, the device library is distributed across one or more instances of the one or more gateway components in a plurality of different buildings, and may be retrieved, for example, by accessing one or more networks to communicate with the multiple instances of gateway components to retrieve portions of, or all of, the device library. The gateway components can receive one or more values for control points of the physical building device, for example, from the building system, from the cloud platform, or from another system or device described herein, and communicate the one or more values to the control points of the physical building device via the one or more gateway components.
The one or more gateway components can include a building service that causes the network engine to generate data based on the one or more data samples, which may be analytics data or any other type of data described herein that may be based on or associated with the data samples. When deploying the gateway components, the building system can identify one or more requirements for the building service, or any other of the gateway components. The requirements may include required processing resources, storage resources, data availability, or a presence of another building service executing at the network engine. The building system can query the network engine to determine the current operating characteristics (e.g., processing resources, storage resources, data availability, or a presence of another building service executing at the network engine, etc.), to determine that the network engine meets the one or more requirements for the gateway component(s). If the network engine meets the requirements, the building system can deploy the corresponding gateway components to the network engine. If the requirements are not met, the building system may deploy the gateway components to another network engine. The building system can periodically query, or otherwise receive messages from, the network engine that indicate the current operating characteristics of the network engine. In doing so, the building system can identify whether the requirements for the building service (or other gateway components) are no longer met by the network engine. If the requirements are no longer met, the building system can move (e.g., terminate execution of the gateway components or remove the gateway components from the network engine, and re-deploy the gateway components) the gateway components (e.g., the building service) from the network engine to a different computing system that meets the one or more requirements of the building service or gateway component(s). In some implementations, the building system can identify communication protocols corresponding to the physical building devices associated with the network engine, and deploy one or more integration components (e.g., associated with the physical building devices) to the network engine to communicate with the one or more physical building devices via the one or more communication protocols. The integration components can be part of the one or more gateway components.
Referring to
At step 1605, the building system can store one or more gateway components on one or more storage devices of the building system. The building system may be located within, or located remote from, the building to which the building system corresponds. The gateway components stored on the storage devices of the building system can facilitate communication with a cloud platform (e.g., the cloud platform 106) and facilitate communication with a physical building device (e.g., the device/gateway 720, the building subsystems 122, etc.). The gateway components can be, for example, any of the connectors, building normalization layers, services, or integrations described herein, including but certainly not limited to the connector 704, the services 706-710, the building normalization layer 712, and the integrations 714-718, among other components, software, integrations, configuration settings, or any other software-related data described in connection with
At step 1610, the building system can deploy the one or more gateway components to a physical gateway, which may communicate with, and receive data samples from, one or more physical building devices of the building, and provide the data samples to the cloud platform. To deploy the gateway components, the building system can utilize one or more communication channels, which may be established via a network of the building, to transmit the gateway components to the physical gateway of the building. Deploying the one or more gateway components can include installing or otherwise configuring the gateway components to execute at the physical gateway. Generally, the gateway components can be executed to perform any of the operations described herein. Deploying the gateway components can include storing computer-executable instructions corresponding to the gateway components at the physical gateway. In some implementations, the particular gateway components deployed at the physical gateway can be selected based on the type of the physical building device(s) to which the physical gateway is connected, or to other types of computing systems with which the physical gateway is in communication. Likewise, in some embodiments, the particular gateway components deployed at the physical gateway can be selected to correspond to an operation, type, or processing capability of the physical gateway, among other factors as described herein. Deploying the gateway components may include storing the gateway components in one or more predetermined memory regions at the physical gateway (e.g., in a particular directory, executable memory region, etc.), and may include installing, configuring, or otherwise applying one or more configuration settings for the gateway components or for the operation of the physical gateway.
As described herein, the one or more gateway components can include any type of software component, hardware configuration settings, or combinations thereof. The gateway components may include processor-executable instructions, which can be executed by the physical gateway to which the gateway component(s) are deployed. The one or more gateway components can cause the physical gateway to communicate with the physical building device to receive the one or more data samples (e.g., via one or more networks or communication interfaces). Additionally, the one or more gateway components cause the physical gateway to communicate the one or more data samples to the cloud platform. For example, the gateway components can include one or more adapters or communication software APIs that facilitate communication between computing devices within, and external to, the building. The gateway components may include adapters that cause the physical gateway to communicate with one or more other computing systems (e.g., a BMS server, other building subsystems, etc.). The gateway components can include instructions that, when executed by the physical gateway, cause the physical gateway to detect a new physical building device connected to the physical gateway (e.g., by searching through different connected devices by device identifier, etc.), and then search a device library for a configuration of the new physical building device. Using the configuration for the new physical device, the gateway components can cause the physical gateway to implement the configuration to facilitate communication with the new physical building device. The gateway components can also perform a discovery process to discover the configuration for the new physical building device and store the configuration in the device library, for example, if the device library did not include the configuration. The device library can be stored at the cloud platform or on the one or more gateway components themselves. In some implementations, the device library is distributed across one or more instances of the one or more gateway components in a plurality of different buildings, and may be retrieved, for example, by accessing one or more networks to communicate with the multiple instances of gateway components to retrieve portions of, or all of, the device library. The gateway components can receive one or more values for control points of the physical building device, for example, from the building system, from the cloud platform, or from another system or device described herein, and communicate the one or more values to the control points of the physical building device via the one or more gateway components.
At step 1615, the building system can identify a building device (e.g., via the gateway on which the gateway components are deployed) that is executing one or more building services but does not meet the requirements for executing the one or more building services. The building services, for example, may cause the building device to generate data based on the one or more data samples, which may be analytics data or any other type of data described herein that may be based on or associated with the data samples. The requirements may include required processing resources, storage resources, data availability, or a presence of another building service executing at the building device. The building system can query the building device to determine the current operating characteristics (e.g., processing resources, storage resources, data availability, or a presence of another building service executing at the building device, etc.), to determine whether the building device meets the one or more requirements for the building service(s). If the requirements are not met, the building system can perform step 1620. The building system may periodically query the building device to determine whether the building device meets the requirements for the building services.
At step 1620, the building system can cause (e.g., by transmitting computer-executable instructions to the building device and the gateway) the building services to be relocated to the gateway on which the gateway component(s) are deployed. To do so, the building system can move the building services from the building device to the gateway on which the gateway component(s) are deployed, for example, by terminating execution of the building services or removing the building services from the building device, and then re-deploying or copying the building services, including any application state information or configuration information, to the gateway.
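The requirements check and relocation flow of steps 1615-1620 could be sketched as follows; the threshold fields and helper names here are assumptions made for illustration rather than a defined BMS interface.

    from dataclasses import dataclass, field

    @dataclass
    class DeviceStatus:
        free_cpu_pct: float
        free_storage_mb: float
        running_services: set = field(default_factory=set)

    def meets_requirements(status: DeviceStatus, req: dict) -> bool:
        """Check processing, storage, and co-located service requirements."""
        return (
            status.free_cpu_pct >= req["min_cpu_pct"]
            and status.free_storage_mb >= req["min_storage_mb"]
            and req["required_services"] <= status.running_services
        )

    def placement_for_service(status: DeviceStatus, req: dict) -> str:
        # If the device cannot meet the requirements, the service (and its
        # state) is terminated on the device and re-deployed to the gateway.
        return "building_device" if meets_requirements(status, req) else "gateway"

    status = DeviceStatus(free_cpu_pct=5.0, free_storage_mb=2048.0)
    req = {"min_cpu_pct": 20.0, "min_storage_mb": 512.0,
           "required_services": set()}
    assert placement_for_service(status, req) == "gateway"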
Referring to
At step 1705, the building device can receive one or more gateway components and implement the one or more gateway components on the building device. The one or more gateway components can facilitate communication between a cloud platform and the building device. The gateway components can be, for example, any of the connectors, building normalization layers, services, or integrations described herein, including but not limited to the connector 704, the services 706-710, the building normalization layer 712, and the integrations 714-718, among other components, software, integrations, configuration settings, or any other software-related data described in connection with
At step 1710, the building device can identify a physical device connected to the building device based on the one or more gateway components. For example, the gateway components can include instructions that, when executed by the physical gateway, cause the physical gateway to detect a physical device connected to the physical gateway (e.g., by searching through different connected devices by device identifier, etc.). The gateway components can then receive one or more values for control points of the physical device, for example, from the building system, from the cloud platform, or from another system or device described herein, and communicate the one or more values to the control points of the physical device via the one or more gateway components.
At step 1715, the building device can search a library of configurations for a plurality of different physical devices using the identity of the physical device to identify a configuration for collecting data samples from the physical device connected to the building device, and can retrieve the configuration. The gateway components can also perform a discovery process to discover the configuration for the physical device and store the configuration in the device library, for example, if the device library did not include the configuration. The device library can be stored at the cloud platform or on the one or more gateway components themselves. In some implementations, the device library is distributed across one or more instances of the one or more gateway components in a plurality of different buildings, and may be retrieved, for example, by accessing one or more networks to communicate with the multiple instances of gateway components to retrieve portions of, or all of, the device library.
At step 1720, the building device can implement the configuration for the one or more gateway components. Using the configuration for the physical device, the gateway components can cause the physical gateway to implement the configuration to facilitate communication with the physical device. The configuration may include settings for communication hardware (e.g., wireless or wired communication interfaces, etc.) that configure the communication hardware to communicate with the physical device. The configuration can specify a communication protocol that can be used to communicate with the physical device, and may include computer-executable instructions that, when executed, cause the building device to execute an API that carries out the communication protocol to communicate with the physical device.
At step 1725, the building device can collect one or more data samples from the physical device based on the one or more gateway components and the configuration. For example, the gateway components or the configuration can include an API, or other computer-executable instructions, that the building device can utilize to communicate with and retrieve one or more data samples from the physical device. The data samples can be, for example, sensor data, operational data, configuration data, or any other data described herein. Additionally, the building device can utilize one or more of the gateway components to communicate the data samples to another computing system, such as the cloud platform, a BMS server, a network engine, or a physical gateway, among others.
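A minimal polling loop for step 1725 might look like the following, assuming the hypothetical configuration format from the sketch above; read_point() stands in for whatever protocol-specific API the configuration selects, and publish() stands in for transmission to the cloud platform or another computing system.

    import time

    def read_point(protocol: str, point: str) -> float:
        # Placeholder for a protocol-specific read (sensor value, status, etc.)
        return 21.5

    def collect_and_forward(config: dict, points: list, publish, cycles: int) -> None:
        """Poll each configured point and forward the samples upstream."""
        for _ in range(cycles):
            samples = {p: read_point(config["protocol"], p) for p in points}
            publish(samples)  # e.g., to the cloud platform or a BMS server
            time.sleep(config["poll_interval_s"])

    collect_and_forward(
        {"protocol": "bacnet", "poll_interval_s": 1},
        ["zone_temp", "fan_status"],
        publish=print,
        cycles=3,
    )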
Referring to
At step 1805, the building system can store one or more gateway components on one or more storage devices of the building system. The building system may be located within, or located remote from, the building to which the building system corresponds. The gateway components stored on the storage devices of the building system can facilitate communication with a cloud platform (e.g., the cloud platform 106) and facilitate communication with a physical building device (e.g., the device/gateway 720, the building subsystems 122, etc.). The gateway components can be, for example, any of the connectors, building normalization layers, services, or integrations described herein, including but not limited to the connector 704, the services 706-710, the building normalization layer 712, and the integrations 714-718, among other components, software, integrations, configuration settings, or any other software-related data described in connection with
At step 1810, the building system can deploy a first instance of the one or more gateway components to a first edge device and a second instance of the one or more gateway components to a second edge device. The first edge device can measure a first condition of the building and the second edge device can control the first condition or a second condition of the building. The first edge device (e.g., a building device) can be a surveillance camera, and the first condition can be a presence of a person in the building (e.g., within the field of view of the surveillance camera). The second edge device can be a smart thermostat, and the second condition can be a temperature setting of the building. However, it should be understood that the first edge device and the second edge device can be any type of building device capable of capturing data relating to the building or controlling one or more functions, conditions, or other controllable characteristics of the building. To deploy the gateway components, the building system can utilize one or more communication channels, which may be established via a network of the building, to transmit the gateway components to the first edge device and the second edge device of the building.
Deploying the one or more gateway components can include installing or otherwise configuring the gateway components to execute at the first edge device and the second edge device. Generally, the gateway components can be executed to perform any of the operations described herein. Deploying the gateway components can include storing computer-executable instructions corresponding to the gateway components at the first edge device and the second edge device. In some implementations, the particular gateway components deployed at the first edge device and the second edge device can be selected based on the operations, functionality, type, or processing capabilities of the first edge device and the second edge device, among other factors as described herein. Deploying the gateway components may include storing the gateway components in one or more predetermined memory regions at the first edge device and the second edge device (e.g., in a particular directory, executable memory region, etc.), and may include installing, configuring, or otherwise applying one or more configuration settings for the gateway components or for the operation of the first edge device and the second edge device. Gateway components can be deployed to the first edge device or the second edge device based on a communication protocol utilized by the first edge device or the second edge device. The building system can select gateway components to deploy to the first edge device or the second edge device that include computer-executable instructions that allow the first edge device and the second edge device to communicate with one another, and with other computing systems using various communication protocols.
As described herein, the one or more gateway components can include any type of software component, hardware configuration settings, or combinations thereof. The gateway components may include processor-executable instructions, which can be executed by the physical gateway to which the gateway component(s) are deployed. The one or more gateway components can cause the physical gateway to communicate with a building device broker (e.g., the building device broker 1105) to facilitate communication of data samples, conditions, operations, or signals between the first edge device and the second edge device. Additionally, the one or more gateway components can cause the first edge device or the second edge device to communicate data samples, operations, signals, or messages to the cloud platform. The gateway components may include adapters or integrations that facilitate communication with one or more other computing systems (e.g., a BMS server, other building subsystems, etc.). The gateway components can cause the first edge device to communicate an event (e.g., a person entering the building, entering a room, or any other detected event, etc.) to the second edge device based on a rule associated with the first condition being triggered. The rule can be, for example, to set certain climate control settings (e.g., temperature, etc.) when a person has been detected. However, it should be understood that any type of user-definable condition can be utilized. The second instance of the one or more gateway components executing at the second edge device can cause the second edge device to control the second condition (e.g., the temperature of the building, etc.) upon receiving the event from the first edge device (e.g., via the building device broker, via the cloud platform, via direct communication, etc.). The gateway components may include one or more building services that can generate additional analytics data based on detected events, conditions, or other information gathered or processed by the first edge device or the second edge device.
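The rule-triggered interaction between the two edge devices could be sketched as below: the camera-side component publishes an event when the first condition (a detected person) satisfies a rule, and the thermostat-side component reacts by controlling the second condition. The broker and thermostat interfaces are illustrative assumptions.

    OCCUPIED_SETPOINT_C = 21.0

    def on_camera_frame(person_detected: bool, broker) -> None:
        """First edge device: publish an event when the rule is triggered."""
        if person_detected:
            broker.publish("building/events", {"type": "person_entered"})

    def on_event(event: dict, thermostat) -> None:
        """Second edge device: control the second condition on the event."""
        if event.get("type") == "person_entered":
            thermostat.set_temperature(OCCUPIED_SETPOINT_C)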
The techniques described herein may be utilized to optimize and configure edge devices utilizing various computing systems described herein, including the cloud platform 106, the twin manager 108, the edge platform 102, the user device 176, the local server 656, the computing system 660, the local server 702, the local BMS server 804, the network engine 816, the gateway 1004, the building broker device 1105, the gateway manager 1206, the cluster gateway 1206, or the building subsystems 122, among others.
Cloud-based data processing has become more popular due to the decreased cost and increased scale and efficiency of cloud computing systems. Cloud computing is useful when attempting to process data gathered from devices, such as the various building devices described herein, that would otherwise lack the processing power or appropriately optimized software to process that data locally. However, the use of cloud computing platforms for processing large amounts of data from a large pool of edge devices becomes increasingly inefficient as the number of edge devices increases. The reduction in processing efficiency and the increased latency make certain types of processing, such as real-time or near real-time processing, impractical to perform using a cloud-processing system architecture.
To address these issues, the systems and methods described herein can be utilized to optimize software components, such as machine-learning models, to execute directly on edge devices. The optimization techniques described herein can be utilized to automatically modify, configure, or generate various components (e.g., gateway components, engine components, connectors, machine-learning models, APIs, etc.) such that the components are optimized for the particular edge device on which they will execute. The configuration of the components can be performed based on the architecture, processing capability, and processing demand of the edge device, among other factors as described herein. While various implementations described herein are configured to allow for processing to be performed at edge devices, it should be understood that, in various embodiments, processing may additionally or alternatively be performed both in edge devices and in other on-premises and/or off-premises devices, including cloud or other off-premises standalone or distributed computing systems, and all such embodiments are contemplated within the scope of the present disclosure.
Automatically optimizing and configuring components for edge devices, when those components would otherwise execute on a cloud computing system, improves the overall computational efficiency of the system. In particular, the use of edge processing enables a distributed processing platform that reduces the inherent latency in communicating and polling a cloud computing system, which enables real-time or near real-time processing of data captured by the edge device. Additionally, utilizing edge processing improves the efficiency and bandwidth of the networks on which the edge devices operate. In a cloud computing architecture, all edge devices would need to transmit all of the data points captured to the cloud computing system for processing (which is particularly burdensome for near real-time processing). By automatically optimizing components to execute on edge devices, the data points captured by the edge devices need not be transmitted en masse to the cloud computing system, which significantly reduces the amount of network resources required to execute certain components, and improves the overall efficiency of the system.
Additionally, the systems and methods described herein can be utilized to automatically configure (sometimes referred to herein as “autoconfigure” or performing “autoconfiguration”) edge devices by managing the components, connectors, operating system features, and other related data via a cloud computing system. The techniques described herein can be utilized to manage the operations of and coordinate the lifecycle of edge devices remotely, via a cloud computing system. The device management techniques described herein can be utilized to manage and execute commands that update software of edge devices, reboot edge devices, manage the configuration of edge devices, restore edge devices to their factory default settings or software configuration, and activate or deactivate edge devices, among other operations. The techniques described herein can be utilized to define and customize connector software, which can facilitate communications between two or more computing devices described herein. The connector software can be remotely defined and managed via user interfaces provided by a cloud computing system. The connector software can then be pushed to edge devices using the device management techniques described herein.
Various implementations of the present disclosure may utilize any feature or combination of features described in U.S. Patent Application Nos. 63/315,442, 63/315,452, 63/315,454, 63/315,459, and/or 63/315,463, each of which is incorporated herein by reference in its entirety and for all purposes. For example, in some such implementations, embodiments of the present disclosure may utilize a common data bus at the edge devices, be configured to ingest information from other on-premises/edge devices via one or more protocol agents or brokers, and/or may utilize various other features shown and described in the aforementioned patent applications. In some such implementations, the systems and methods of the present disclosure may incorporate one or more of the features shown and described, for example, with respect to
Referring to
As described herein, the cloud platform 106 can include one or more processors 124 and one or more memories 126. The processor(s) 124 can include general purpose or specific purpose processors, an ASIC, a GPU, one or more field programmable gate arrays (FPGAs), a group of processing components, or other suitable processing components. The processor(s) 124 may be configured to execute computer code and/or instructions stored in the memories 126 or received from other computer readable media (e.g., CDROM, network storage, a remote server, etc.). The processor(s) 124 may be part of multiple servers or computing systems that make up the cloud platform 106, for example, in a remote datacenter, server farm, or other type of distributed computing environment.
The memories 126 can include one or more devices (e.g., memory units, memory devices, storage devices, etc.) for storing data or computer code for completing or facilitating the various processes described in the present disclosure. The memories 126 can include RAM, ROM, hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects or computer instructions. The memories 126 can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. The memories 126 can be communicably connected to the processors and can include computer code for executing (e.g., by the processors 124) one or more processes described herein.
Although not necessarily pictured here, the configuration data 1932 and the components 1934 may be stored as part of the memories 126, or may be stored in external databases that are in communication with the cloud platform 106 (e.g., via one or more networks). The configuration data 1932 can include any of the data relating to configuring the edge devices 1902, as described herein. The configuration data can include software information of the edge devices 1902, operating system information of the edge devices 1902, status information (e.g., device up-time, service schedule, maintenance history, etc.), as well as metadata corresponding to the edge devices 1902, among other information. The configuration data 1932 can be created, updated, or modified by the cloud platform 106 based on the techniques described herein. In an embodiment, in response to corresponding requests from the user device 176, or in response to scheduled updates or changes, the cloud platform 106 can update a local configuration of a respective edge device 1902 based on the techniques described herein.
The configuration data 1932 can include data configured for a number of edge devices 1902, and for a wide variety of edge devices 1902 (e.g., network engines, device gateways, local servers, etc.). For example, the configuration data 1932 can include configuration data for any of the computing devices, systems, or platforms described herein. The configuration data 1932 can be managed, updated, or otherwise utilized by the configuration manager 1928, as described herein. The configuration data 1932 may also include connectivity data. The connectivity data may include information relating to which edge devices 1902 are connected to other devices in a network, one or more possible communication pathways (e.g., via routers, switches, gateways, etc.) to communicate with the edge devices 1902, and network topology information (e.g., of the network 1904, of networks to which the network 1904 is connected, etc.).
The components 1934 can include software that can be optimized using various techniques described herein. The components 1934 can include connectors, data processing applications, or other types of processor-executable instructions. The components 1934 may be executable by the cloud platform 106 to perform one or more data processing operations (e.g., analysis of sensor data, machine-learning operations, unsupervised clustering of data retrieved using various techniques described herein, etc.). As described in further detail herein, the optimization manager 1930 can optimize one or more of the components 1934 for one or more target edge devices 1902. In brief overview, the optimization manager 1930 can access the computational capabilities, architecture, status, and other information relating to the target edge device 1902, and can automatically modify one or more of the components to be optimized for the target edge device 1902.
Each of the configuration manager 1928 and the optimization manager 1930 may be hardware, software, or a combination of hardware and software of the cloud platform 106. The configuration manager 1928 and the optimization manager 1930 can execute on one or more computing devices or servers of the cloud platform 106 to perform the various operations described herein. In an embodiment, the configuration manager 1928 and the optimization manager 1930 can be stored as processor-executable instructions in the memories 126 that, when executed by the cloud platform 106, cause the cloud platform 106 to perform the various operations associated with each of the configuration manager 1928 and the optimization manager 1930.
The edge device 1902 may include any of the functionality of the edge device 102, or the components thereof. The edge device 1902 can communicate with the building subsystems 122, as described herein. The edge device 1902 can receive messages from the building subsystems 122 or deliver messages to the building subsystems 122. The edge device 1902 can include one or multiple optimized components, e.g., the optimized components 1912, 1914, and 1916. Additionally, the edge device 1902 can include a local configuration, which may include a software configuration or installation, an operating system configuration or installation, a driver configuration or installation, or any other type of component configuration described herein.
The optimized components 1912-1916 can include software that has been optimized by the optimization manager 1930 of the cloud platform 106 to execute on the edge device 1902, for example, to perform edge processing of data received by or retrieved from the building subsystems 122. Although not pictured here for visual clarity, the edge devices 1902 may include communication components, such as connectors or other communication software, hardware, or executable instructions as described herein, that can act as a gateway between the cloud platform 106 and the building subsystems 122. In some embodiments, the cloud platform 106 can deploy one or more of the optimized components 1912-1916 to the edge device 1902, using various techniques described herein. In this regard, lower latency in management of the building subsystems 122 can be realized.
The edge device 1902 can be connected to the cloud platform 106 via a network 1904. The network 1904 can communicatively couple the devices and systems of the system 1900. In some embodiments, the network 1904 is at least one of and/or a combination of a Wi-Fi network, a wired Ethernet network, a ZigBee network, a Bluetooth network, and/or any other wireless network. The network 1904 may be a local area network or a wide area network (e.g., the Internet, a building WAN, etc.) and may use a variety of communications protocols (e.g., BACnet, IP, LON, etc.). The network 1904 may include routers, modems, servers, cell towers, satellites, and/or network switches. The network 1904 may be a combination of wired and wireless networks. Although only one edge device 1902 is shown in the system 1900 for visual clarity and simplicity, it should be understood that any number of edge devices 1902 (corresponding to any number of buildings) can be included in the system 1900 and communicate with the cloud platform 106 as described herein.
The cloud platform 106 can be configured to facilitate communication and routing of messages between the user device 176 and the edge device 1902, and/or any other system. The cloud platform 106 can include any of the components described herein, and can implement any of the processing functionality of the devices described herein. In an embodiment, the cloud platform 106 can host a web-based service or website, via which the user device 176 can access one or more user interfaces to coordinate various functionality described herein. In some embodiments, the cloud platform 106 can facilitate communications between various computing systems described herein via the network 1904.
The user device 176 may be a laptop computer, a desktop computer, a smartphone, a tablet, and/or any other device with an input interface (e.g., touch screen, mouse, keyboard, etc.) and an output interface (e.g., a speaker, a display, etc.). The user device 176 can receive input via the input interface, and provide output via the output interface. For example, the user device 176 can receive user input (e.g., interactions such as mouse clicks, keyboard input, tap or touch gestures, etc.), which may correspond to interactions. The user device 176 can present one or more user interfaces described herein (e.g., the user interfaces provided by the cloud platform 106) via the output interface.
The user device 176 can be in communication with the cloud platform 106 via the network 1904. For example, the user device 176 can access one or more web-based user interfaces provided by the cloud platform 106 (e.g., by accessing a corresponding uniform resource locator (URL) or uniform resource identifier (URI), etc.). In response to corresponding interactions with the user interfaces, the user device 176 can transmit requests to the cloud platform 106 to perform one or more operations, including the operations described in connection with the configuration manager 1928 or the optimization manager 1930.
Referring now to the operations of the configuration manager 1928, the configuration manager 1928 can coordinate and facilitate management of edge devices 1902, including the creation and autoconfiguration of connector templates for one or more edge devices 1902, and providing device management functionality via the network 1904. The configuration manager 1928 can manage and execute commands that update software of edge devices, reboot edge devices, manage the configuration of edge devices 1902, restore edge devices 1902 to their factory default settings or software configuration, and activate or deactivate edge devices 1902, among other operations. The configuration manager 1928 may also monitor connectivity between edge devices, identify a connection failure between two edge devices, and determine a recommendation to address the connection failure.
The configuration manager 1928 can access and provide a list of edge devices 1902 with which the cloud platform 106 can communicate, for example, for display in a user interface. To generate and display the list, the configuration manager 1928 can access the configuration data 1932, which stores identifiers of the edge devices 1902, along with their corresponding status. The user interface can display various information about each edge device 1902, including a device name, a group name, an edge status, a platform name (e.g., processor architecture), an operating system version, a software package version (e.g., which may correspond to one or more components described herein), a hostname (shown here as an IP address), a gateway name of a gateway to which the edge device is connected (if any), and a date identifying the last software upgrade.
In the user interface, each item in the list of devices includes a button that, when interacted with, enables the user to issue one or more commands to the configuration manager 1928 to manage the respective device. Drop-down menus can provide a list of commands for each edge device, such as a command to reboot the respective edge device 1902, a reset to factory default command, a deactivation command, and an upgrade software command. To upgrade, update, or configure software, the configuration manager 1928 can transmit updated software to the respective edge device 1902, and cause the respective edge device 1902 to execute processor-executable instructions to install and configure the software according to the commands issued by the configuration manager 1928.
In an embodiment, when an upgrade software command is selected at the user interfaces provided by the configuration manager 1928, the configuration manager 1928 can provide another user interface to enable the user to select one or more software components, versions, or deployments to deploy to the respective edge device. In an embodiment, if a software version is already up-to-date (e.g., no upgrades available), the configuration manager 1928 can display a notification indicating that the software is up-to-date. The configuration manager 1928 can further provide graphical user interfaces (or other types of selectable user interface elements), or application programming interfaces, that can be utilized to specify which software components to deploy, upgrade, or otherwise provide to the edge device 1902.
The configuration manager 1928 can manage any type of software, component, connector, or other processor-executable instructions that can be provided to and executed by the edge device 1902 in a similar manner. When a software upgrade is selected or specified, the configuration manager 1928 can begin to deploy the selected software to the edge device 1902, and can execute one or more scripts or processor-executable instructions to install and configure the selected software at the edge device 1902. The configuration manager 1928 can transmit the data for the installation to the edge device 1902 via the network 1904.
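One plausible shape of the deploy-and-install sequence is sketched below; the transfer, script, and status-reporting calls are hypothetical stand-ins for whatever transport the configuration manager 1928 uses over the network 1904.

    import hashlib

    def deploy_software(package: bytes, edge_device, version: str) -> None:
        """Transmit a software package, then install and configure it."""
        checksum = hashlib.sha256(package).hexdigest()
        edge_device.transfer(package, checksum=checksum)   # verify integrity
        edge_device.run_script("install.sh", env={"VERSION": version})
        edge_device.report_status("Completed")             # shown in status UI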
As the selected components are being deployed, the configuration manager 1928 can maintain and display information indicating a status of the edge device 1902 and the status of the deployment. A historic listing of other operations performed by the configuration manager 1928 can also be maintained or displayed in a status interface. Each item in the listing can include a name of the action performed by the configuration manager 1928, a status of the respective item (e.g., “InProgress,” “Completed,” “Failed,” etc.), a date and timestamp corresponding to the operation, and a message (e.g., a status message, etc.) corresponding to the respective action. Any of the information presented on the user interfaces provided by the configuration manager 1928 can be stored as part of the configuration data 1932.
The configuration manager 1928 can provide user interfaces that enable an operator to configure one or more edge devices 1902, or the components deployed thereon. For example, the configuration manager 1928 can display a user interface that shows a list of configuration templates. The example that follows describes a configuration process for a chiller controller with a device name “VSExxx.” However, similar operations may be performed for any software on any number of edge devices, in order to configure one or more connectors, components, or other processor-executable instructions to facilitate communication between building devices.
The connectors implemented by the configuration manager 1928 can be utilized to connect with different sensors and devices at the edge (e.g., the building subsystems 122), retrieve and format data retrieved from the building subsystems 122, and provide said data in one or more data structures to the cloud platform 106. The connectors may be similar to, or may be or include, any of the connectors described herein. The configuration manager 1928 can provide user interfaces that enable a user to specify parameters for a template connector, which can then be generated by the configuration manager 1928 and provided to the edge device 1902 to retrieve data. In this example, a new connector for a VSExxx device has been defined.
Upon creating the connector template for the VSExxx device, the configuration manager 1928 can enable selection or specification of one or more parameters for the template connector, such as a name for the template, a direction for the data (e.g., inbound is receiving data, such as from a sensor, outbound is providing data, and bidirectional includes functionality for both inbound and outbound data), as well as whether to use sensor discovery (e.g., the device discovery functionality described herein). Additionally, the configuration manager 1928 can also enable selection or specification of one or more applications that execute on the edge device 1902 that implement the connector. In an embodiment, if an application is not selected, a default application may be selected based on, for example, other parameters specified for the connector, such as data types or server fields. The application can be developed by the operator for the specific edge device using a software development kit that invokes one or more APIs of the cloud platform 106 or the configuration manager 1928, thereby enabling the cloud platform 106 to communicate with the edge device 1902 via the APIs.
The configuration manager 1928 can enable selection or specification of one or more server parameters for the connector (e.g., parameters that coordinate data retrieval or provision, ports, addresses, device data, etc.). The configuration manager 1928 can enable selection or specification of one or more parameters for each field (e.g., field name, property name, value type (e.g., data type such as string, integer, floating-point value, etc.), default value, whether the parameter is a required parameter, and one or more guidance notes that may be accessed while working with the respective connector via the user device 176, etc.).
The configuration manager 1928 can enable selection of one or more sensor data parameters for the connector template. The sensor parameters can similarly be selected and added from the user interface elements (or via APIs) provided by the configuration manager 1928. The sensor parameters can include parameters of the sensors in communication with the edge device 1902 that are accessed using the connector template. Fields similar to those provided for the server parameters can be specified for each field of the sensor parameters, as shown. In this example, the edge device is in communication with a building subsystem 122 that gathers data from four vibration sensors, and therefore there are fields for sensor parameters that correspond to each of the four vibration sensors. In an embodiment, the device discovery functionality described herein can be utilized to identify one or more configurations or sensors, which can be provided to the configuration manager 1928 such that the template connector can be automatically populated.
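Putting the pieces together, a connector template of the kind described above might be represented as follows. The schema (key names, field layout) is an assumption for illustration; only the general shape (name, direction, server parameters, and one field per vibration sensor) follows the prose.

    VSE_TEMPLATE = {
        "name": "VSExxx-vibration",
        "direction": "inbound",        # data flows from the sensors inward
        "use_sensor_discovery": True,
        "application": "vibration-collector",
        "server": {"host": "192.0.2.10", "port": 47808},
        "sensor_fields": [
            {
                "field": f"vibration_{i}",   # one field per vibration sensor
                "value_type": "float",
                "default": 0.0,
                "required": True,
                "notes": "RMS vibration amplitude",
            }
            for i in range(1, 5)
        ],
    }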
The configuration manager 1928 can save the template in the configuration data 1932. When the operator wants to deploy the generated template to an edge device, the configuration manager 1928 can be utilized to deploy one or more connectors. The configuration manager 1928 can present a user interface that enables the operator to deploy one or more connectors to a selected edge device. In this example, there is one edge device listed, but it should be understood that any number of edge devices may be listed and managed by the configuration manager 1928. The configuration manager 1928 can allow selection of one or more generated connector templates (e.g., via a user interface or an API), which can then be deployed on the edge device 1902 using the techniques described herein.
Referring now to the operations of the optimization manager 1930, the optimization manager 1930 can optimize one or more of the components 1934 to execute on a target edge device 1902, by generating corresponding optimized components (e.g., the optimized components 1912-1916). As described herein, cloud-based computing is impractical or impossible for real-time or near real-time data processing, due to the inherent latency of cloud computing. To address these issues, the optimization manager 1930 can optimize and deploy one or more components 1934 for a target edge device 1902, such that the target edge device 1902 can execute the corresponding optimized component at the edge without necessarily performing cloud computing. Optimizing the one or more components 1934 may include reducing the memory storage requirements of the components 1934 or the number of computational operations required to execute the one or more components 1934, for example, to accommodate the reduced processing capabilities of the edge devices 1902.
The components 1934 may include machine-learning models that execute using data gathered from the building subsystems 122 as input. An example machine-learning workflow can include preprocessing, prediction (or executing another type of machine-learning operation), and post-processing. Constrained devices (e.g., the edge devices 1902) may generally have fewer resources to run machine-learning workflows than the cloud platform 106. This problem is compounded by the fact that typical machine-learning workflows are written in dynamic languages like Python. Although dynamic languages can accelerate deployment of machine-learning implementations, such languages are inefficient when it comes to resource usage and are not as computationally efficient as compiled languages. As such, machine-learning models are typically developed in a dynamic language and then executed on a large cluster of servers (e.g., the cloud platform 106). Additionally, the data is pre- and post-processed before and after machine-learning model prediction in a workflow by the cloud platform 106 (e.g., by another cluster of computing devices, etc.).
One approach to solving this problem is to combine machine-learning and stream processing using components (e.g., the optimized components 1912-1916) to be executed on an edge device 1902. To do so, the optimization manager 1930 can generate code that gets compiled into code specific to the machine-learning model and the target edge device 1902, thereby using the computational resources and memory of the edge device 1902 as efficiently as possible. In particular, the optimization manager 1930 can utilize two sets of APIs. One set of APIs is utilized for stream processing and the other set of APIs is used for machine-learning. The stream processing APIs can be used to read data, and perform pre-processing and post-processing. The machine-learning APIs can be executed on the edge device 1902 to load the model, bind the model inputs to the streams of data, and bind the outputs to streams that can be processed further.
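The division of labor between the two API sets might look like the following sketch. The stream_api and ml_api objects and their methods are hypothetical; the point is that pre-processing and post-processing run on streams, while the machine-learning API loads the model and binds its inputs and outputs to those streams.

    def preprocess(sample: dict) -> list:
        return [sample["x"], sample["y"], sample["z"]]

    def postprocess(score: float) -> dict:
        return {"anomaly": score > 0.5, "score": score}

    def run_inference_pipeline(stream_api, ml_api, model_path: str):
        model = ml_api.load_model(model_path)            # machine-learning API
        raw = stream_api.read("sensor/vibration")        # stream-processing API
        features = stream_api.map(raw, preprocess)       # pre-processing stage
        scores = ml_api.bind(model, features)            # inputs <- stream
        return stream_api.map(scores, postprocess)       # post-processing stage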
The optimization manager 1930 can support existing machine-learning libraries, as well as any new machine-learning libraries that may be developed, as part of the components 1934. Once an operator develops a machine-learning model in a framework of their choice, the operator can define all the pre-processing and post-processing of inputs and outputs using API bindings that invoke functionality of the optimization manager 1930. Once the code for the machine-learning model and the pre-processing and post-processing steps has been developed, the optimization manager 1930 can apply software optimization techniques and generate an optimized model and stream processing definitions (e.g., the optimized components 1912-1916) in a compiled language (e.g., C, C++, Rust, etc.). The optimization manager 1930 can then compile the generated code while targeting a native binary for the target edge device 1902, using a runtime that is already deployed on the target edge device 1902 (e.g., one or more software configurations, operating systems, hardware acceleration libraries, etc.).
One advantage of this approach is that operators who develop machine-learning models need not manually optimize the machine-learning models for any specific target edge device 1902. The optimization manager 1930 can automatically identify and apply optimizations to machine-learning models based on the respective type of model, input data, and other operator-specified (e.g., via one or more user interfaces) parameters of the machine-learning model. Some example optimizations include pruning model parameters and transforming parameter datatypes. The optimization manager 1930 can generate code for machine-learning models that can execute efficiently while using fewer computational resources and with faster inference times for a target edge device 1902. This enables efficient edge processing without tedious manual intervention or optimizations.
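As one example of such an optimization, magnitude-based pruning zeroes the smallest weights so the generated code can skip them. This is a generic sketch of the technique, not the optimization manager's specific algorithm.

    import numpy as np

    def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero the smallest-magnitude fraction of weights."""
        threshold = np.quantile(np.abs(weights), sparsity)
        return np.where(np.abs(weights) < threshold, 0.0, weights)

    layer = np.random.randn(128, 64).astype(np.float32)
    pruned = prune_by_magnitude(layer, sparsity=0.8)  # keep ~20% of weights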
Models that will be optimized by the optimization manager 1930 can be platform agnostic and may be developed using any suitable machine-learning library or framework. Once a model has been developed and tested locally using a framework implemented or utilized by the optimization manager 1930, the optimization manager 1930 can utilize input provided by a user to determine one or more model parameters. The model parameters can include, but are not limited to, model architecture type, number of layers, layer type, loss function type, layer architecture, or other types of machine-learning model architecture parameters. The optimization manager 1930 can also enable a user to specify target system information (e.g., architecture, computational resources, other constraints, etc.). Based on this data, the optimization manager 1930 can select an optimal runtime for the model, which can be used to compile the model while targeting the target edge device 1902.
In an example implementation, an operator may first define a machine-learning model using a library such as Tensorflow, which may utilize more computational resources than are practically available at a target edge device 1902. Because the model is specified in a dynamic language, the model is agnostic of a target platform, but may be implemented in a target runtime which could be different from the runtimes present at the target edge device 1902. The optimization manager 1930 can then perform one or more optimization techniques on the model, to optimize the model in various dimensions. For example, the optimization manager 1930 can detect the processor types present on the target edge device 1902 (e.g., via the configuration data 1932 or by communicating with the target edge device 1902 via the network 1904). Furthering this example, if the model can be targeted to run on one or more GPUs, and the target edge device 1902 includes a GPU that is available for machine-learning processing, the optimization manager 1930 can configure the model to utilize the GPU-accelerated runtimes of the target edge device. Likewise, if the model can be targeted to run on a general-purpose CPU, and the target edge device includes a general-purpose CPU that is available for machine-learning processing, the optimization manager 1930 can automatically transform the model to execute on a CPU runtime for the target edge device 1902 (e.g., OpenVINO, etc.). In another example, if the target edge device 1902 is a resource-constrained device, such as an ARM platform, the optimization manager 1930 can transform the model to utilize the tflite runtime, which is less computationally intensive and optimized for ARM devices. Additionally, the optimization manager 1930 may deploy tflite to the target edge device 1902, if not already installed. In addition, the optimization manager 1930 can further optimize the model to take advantage of vendor-specific libraries like armnn, for example, when targeting an ARM device.
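The runtime-selection decision in this example reduces to a small amount of logic, sketched below. The device-info fields are assumptions; the runtime names (a GPU-accelerated runtime, OpenVINO for general-purpose CPUs, tflite for ARM) follow the prose above.

    def select_runtime(device_info: dict) -> str:
        """Pick a runtime based on the detected processor types."""
        if device_info.get("gpu_available"):
            return "gpu-accelerated"   # use the device's GPU runtime
        if device_info.get("arch") == "arm":
            return "tflite"            # lighter runtime, optimized for ARM
        if device_info.get("cpu_available"):
            return "openvino"          # general-purpose CPU runtime
        raise ValueError("no supported runtime for target device")

    # e.g., an ARM gateway without a usable GPU selects tflite:
    assert select_runtime({"arch": "arm", "cpu_available": True}) == "tflite"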
Referring back to the functionality of the configuration manager 1928, the configuration manager 1928 can monitor and identify connection failures in the network 1904 or other networks to which the edge devices 1902 are connected. In particular, the configuration manager 1928 can monitor connectivity between edge devices, identify a connection failure between two edge devices, and determine a recommendation to address the connection failure. The configuration manager 1928 can perform these operations, for example, in response to a corresponding request from the user device 176. As described herein, the configuration manager 1928 can provide one or more web-based user interfaces that enable the user device 176 to provide requests relating to the connectivity functionality of the configuration manager 1928. The configuration manager 1928 can store connectivity data as part of the configuration data 1932. The connectivity data can include information relating to which edge devices 1902 are connected to other devices in a network, one or more possible communication pathways (e.g., via routers, switches, gateways, etc.) to communicate with the edge devices 1902, network topology information (e.g., of the network 1904, of networks to which the network 1904 is connected, etc.), and network state information, among other network features described herein.
The configuration manager 1928 can utilize a variety of techniques to diagnose connectivity problems on various networks (e.g., the network 1904, underlay networks, overlay networks, etc.). For example, the configuration manager 1928 can ping local devices to check the connectivity of local devices behind an Airwall gateway, check tunnels to determine whether communications can travel over a host identity protocol (HIP) tunnel (e.g., and create a tunnel between two Airwalls if one does not exist), ping an IP or hostname from an Airwall via an underlay or overlay network (e.g., both of which may be included in the network 1904), perform a traceroute to an IP or hostname from an Airwall from an overlay or underlay network, as well as check HIP connectivity to an Airwall relay (e.g., an Airwall that relays traffic between two other Airwalls when they cannot communicate directly on an underlay network due to potential network address translation (NAT) issues), among other functionality.
Based on requests from the user device 176 and based on network information in the configuration data 1932, the configuration manager 1928 can automatically select and execute operations to check and diagnose potential connectivity issues between at least two edge devices 1902 (or between an edge device 1902 and another computing system described herein, or between two other computing systems that communicate via the network 1904). Automatic detection and diagnosis of network connectivity issues is useful because operators may not have all of the information or resources to manually detect or rectify the connectivity issues without the present techniques. Some example network issues include Airwalls that need to be in a relay rule so they can communicate via relay because they do not have direct underlay connectivity, firewall rules inadvertently blocking a HIP port preventing connectivity, or broken underlay network connectivity due to a gateway and its local device(s) not having routes set up to communicate with remote devices, among others.
The configuration manager 1928 can detect network settings (e.g., portions of the configuration data 1932) that have been misconfigured and are causing connectivity issues between two or more devices. Some example network configuration issues can include disabled devices, disabled gateways, disabled networks or subnets, or rules that otherwise block traffic between two or more devices (e.g., blocked ports, blocked connectivity functionality, etc.). Using the user interfaces provided by the configuration manager 1928, the user device 176 can select two or more devices for which to check and diagnose connectivity. Based on the results of its analysis, the configuration manager 1928 can provide one or more suggestions in the web-based interface to address any detected connectivity issues.
Some example conditions in the network 1904 that the configuration manager 1928 can detect include connectivity rules (or lack thereof) in the underlay or overlay network that prevent device connectivity, port filtering that blocks internet control message protocol (ICMP) traffic, offline gateways (e.g., Airwalls), or lack of configuration to communicate with remote devices, among others. To detect these conditions, the configuration manager 1928 can identify and maintain various information about the status of the network in the configuration data 1932, including device group policies and blocks; the status (e.g., enabled, disabled) of devices, gateways (e.g., Airwalls), and overlay networks; relay rule data; local device ping; remote device ping on an overlay network; information from gateway underlay network pings and BEX (e.g., HIP tunnel handshake); gateway connectivity data (e.g., whether the gateway is connecting to other Airwalls successfully); relay probes; and relay diagnostic information; among other data.
One or more source devices (e.g., an edge device 1902, other computing systems described herein) and one or more destination devices (e.g., another edge device 1902, other computing systems described herein, etc.) can be selected (e.g., via a user interface or an API) or identified, in order to evaluate connectivity between the selected devices. A hostname or an IP address may be provided as the source or destination device. Upon selection of the devices, the configuration manager 1928 can access the network topology information in the configuration data 1932, and generate a graph indicating a communication pathway (e.g., via the network 1904, which may include one or more gateways) between the two devices.
The configuration manager 1928 can then present the generated graph showing the communication pathway on another user interface. To check the connectivity between the two selected devices, the configuration manager 1928 can begin executing the various connectivity checks described herein. In an embodiment, the configuration manager 1928 may execute one or more of the connectivity operations in parallel to improve computational efficiency. The configuration manager 1928 can then analyze the results of the diagnostic tests performed between the two devices to determine whether connectivity was successful.
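A parallel diagnostic run of the kind described above could be structured as follows; the individual check functions are placeholders for the ping, tunnel, and traceroute tests described herein.

    from concurrent.futures import ThreadPoolExecutor

    def run_diagnostics(checks: dict) -> dict:
        """Run independent connectivity checks in parallel, collect results."""
        with ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fn) for name, fn in checks.items()}
            return {name: ("passed" if f.result() else "failed")
                    for name, f in futures.items()}

    results = run_diagnostics({
        "local_device_ping": lambda: True,    # stand-in for a real ping test
        "hip_tunnel_check": lambda: False,    # stand-in for a tunnel check
    })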
When the configuration manager 1928 is performing the connectivity checks, the configuration manager 1928 can display another user interface that shows a status of the diagnostic operations. As each diagnostic test completes, the configuration manager 1928 can dynamically update the user interface to include each result of each diagnostic test. The user interface can be dynamically updated to display a list of each completed diagnostic test and its corresponding status (e.g., passed, failed, awaiting results, etc.). Once all of the diagnostic tests have been performed, the configuration manager 1928 can provide a list of recommendations to address any connectivity issues that are detected.
The configuration manager 1928 can detect or implement port filtering (e.g., including layer 4 rules), provide tunnel statistics, pass application traffic (e.g., RDP, HTTP/S, SSH, etc.), and inspect cloud routes and security groups, among other functionality. In some embodiments, the configuration manager 1928 can enable a user to select a network object and indicate an IP address within the network object. In addition to recommendations, the configuration manager 1928 may provide links that, when interacted with, cause the configuration manager 1928 to attempt to address the detected connectivity issues automatically. For example, the configuration manager 1928 may enable one or more devices, device groups, or overlay networks, add one or more gateways to a relay rule, or activate managed relay rules for an overlay network, among other operations.
Additional functionality of the configuration manager 1928 includes spoofing traffic from a local device so a gateway can directly ping or pass traffic to a remote device, to address limitations relating to initiating traffic on devices that are not under the control of the configuration manager 1928. The configuration manager 1928 can mine data from a policy builder that can indicate what the connectivity intention should be, as well as add the ability to detect device-to-device traffic on overlay networks. The configuration manager 1928 can provide a beacon server on an overlay network to detect whether the beacon server is accessible to a selected device. The configuration manager 1928 can test the basic connectivity of an overlay network by determining whether a selected device can communicate with another device on the network.
Edge devices, such as gateways, network devices, or other types of network-capable building equipment can be utilized to manage building subsystems that otherwise lack “smart” capabilities such as intelligent management or connectivity to cloud computing environments. Edge devices may be any type of device that executes software, including any of the computing devices described herein. Building device gateways, which may include any of the gateways, network devices, or edge devices described herein, may execute remote management, automatic configuration, and additional controls, and may implement such functionality in part using machine-learning models.
Edge devices in building environments may execute machine-learning models to process data in real-time, in near real-time, or upon detecting a processing condition or request. This edge processing reduces the amount of data that would otherwise be transmitted to external computing systems for off-site or cloud processing, thereby reducing overall network consumption and reducing the amount of processing performed by external systems. Transmitting data to external computing systems further increases the time between data capture and data processing, due to inherent transmission and processing delays, making real-time or near real-time processing challenging or impracticable.
The techniques described herein provide approaches for optimizing and implementing machine-learning models that can be executed directly by edge devices, reducing the amount of network traffic transmitted to external or offsite computing systems. The techniques described herein can be utilized to transform, prune, retrain, tune, or otherwise modify machine-learning models that may be optimized or tailored for traditional computing systems (e.g., distributed or cloud computing systems, etc.). The optimization techniques described herein can generate machine-learning models that are compatible with and optimized for execution on edge devices, which may have a limited subset of the processing capabilities and/or more limited computing resources of more robust systems. The techniques can be applied to any type of machine-learning model, including neural networks, convolutional neural networks, or other types of deep-learning models. Accordingly, various present techniques described herein may be used to add/implement different types of machine learning capabilities that otherwise would not be possible to implement on such devices in the absence of the present disclosure.
Referring to
The system 2000 is shown as including the optimization manager 1930, the input machine-learning model data 2002, the optimized machine-learning model 2014, training data with targets 2016, and the model optimization criteria 2020. The optimization manager 1930 is shown as including a hardware optimization component 2004, an intermediate model 2006, an accuracy checker 2008, a model pruner 2010, and a model retrainer 2012. In some embodiments, the optimization manager 1930 may be implemented in off-premises computing systems, e.g., outside a building. For example, the optimization manager 1930 may be implemented as part of the cloud platform 106. In some embodiments, the optimization manager 1930 may be implemented using an on-premises computing system, e.g., within the building. However, the components of the system 2000 can be implemented using any combination of on-premises and off-premises equipment or systems.
The optimization manager 1930 may include software, hardware, or any combination thereof. The optimization manager 1930 may include, or may be implemented, using one or more computing systems, such as the cloud platform 106. As described herein and with reference to
The memories 126 can include one or more devices (e.g., memory units, memory devices, storage devices, etc.) for storing data or computer code to complete or facilitate the various processes described in the present disclosure. The memories 126 can include RAM, ROM, hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects or computer instructions. The memories 126 can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. The memories 126 can be communicably connected to the processors 124 and can include computer code for executing (e.g., by the processors 124) one or more processes described herein. In embodiments where the optimization manager 1930 is itself a computing system, the optimization manager 1930 may include one or more processors 124 and one or more memories 126.
Referring back to
The input machine-learning model data 2002 can include any input data that may specify the architecture, weights or biases (e.g., any trainable parameter, etc.) and their corresponding datatypes, among other machine-learning model parameters. The architecture of a machine-learning model refers to an underlying structure and design of the machine-learning model that enables it to perform a specific task. The architecture may include the number and type of layers, the activation functions, the optimization algorithm, and other hyperparameters of the machine-learning model.
In a non-limiting example, the input machine-learning model data 2002 may define each layer of an input machine-learning model, including the input layer and an output layer of the model. The input layer may be the first layer in the model, and may receive input data (e.g., images, text, data points, etc.) for the model. The input machine-learning model data 2002 may also specify one or more pre-processing operations that are performed on input data prior to being provided to the input layer of the corresponding model. The output layer may be the last layer in the machine-learning model, and may produce an output based on the processing operations performed via each of the hidden layers of the machine-learning model.
The input machine-learning model data 2002 may specify the number, type, characteristics, and trained parameters of one or more hidden layers of the input machine-learning model. The hidden layers may include any layers between the input and output layers in the input machine-learning model. The hidden layers may include neurons, convolutional kernels, or other trainable parameters that can be utilized to perform computations on data received from the previous layer in the model. The input machine-learning model data 2002 may include default trainable parameters (e.g., parameters with default or untrained values), or may include trainable parameters that have already been trained (or partially trained) on one or more training datasets.
The input machine-learning model data 2002 may include one or more configuration files (e.g., a YAML file, a JavaScript Object Notation (JSON) file, an INI file, etc.) that specify the architecture of the model, the datatypes used for each trainable parameter of the model, the types or functional operations of each layer of the input machine-learning model, as well as the characteristics of input data received by the model and output data generated by the model, among other model parameters. The input machine-learning model data 2002 may specify activation functions, optimization algorithms, and other hyperparameters of the machine-learning model.
Machine-learning hyperparameters specified in the input machine-learning model data 2002 may include learning rate, regularization techniques, and training techniques and parameters used to train the machine-learning model. The learning rate can be utilized to determine the amount that the trainable parameters of the model are adjusted during each iteration of the optimization algorithm. Regularization may be utilized to prevent overfitting during training by adding a penalty term to the loss function.
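In a non-limiting, hypothetical illustration, the input machine-learning model data 2002 may resemble the following sketch, expressed here as a Python dictionary for readability. Every field name and value is an assumption chosen for illustration rather than a schema defined by the present disclosure.

```python
# Hypothetical sketch of input machine-learning model data, expressed as a
# Python dictionary. Every field name and value here is an illustrative
# assumption, not a schema defined by this disclosure.
input_model_data = {
    "architecture": {
        "input_layer": {"shape": [1, 224, 224, 3], "dtype": "float32"},
        "hidden_layers": [
            {"type": "conv2d", "filters": 32, "kernel": [3, 3], "activation": "relu"},
            {"type": "dense", "units": 128, "activation": "relu"},
        ],
        "output_layer": {"units": 10, "activation": "softmax", "dtype": "float32"},
    },
    "hyperparameters": {
        "learning_rate": 1e-3,                      # step size per update
        "regularization": {"type": "l2", "weight": 1e-4},
        "optimizer": "adam",
        "batch_size": 32,
        "epochs": 20,
    },
    "weights_uri": "models/input_model_weights.bin",  # trained parameters
}
```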
The input machine-learning model data 2002 may itself define the input machine-learning model, or may store a portion, or all, of the trainable parameters of the input machine-learning model. The input machine-learning model data 2002 may include, specify, or identify one or more sets of training data that were utilized to train the input machine-learning model, as well as any training parameters used when performing the training process to train the input machine-learning model (e.g., if the input machine-learning model has been trained prior to the optimization techniques described herein). The training parameters may include the number of training examples, the batch size, or the number of epochs, among others.
The input machine-learning model data 2002 may be included in the request to optimize the input machine-learning model, or may be identified in the request to optimize the input machine-learning model. The optimization manager 1930 may utilize the identifier of the input machine-learning model data 2002 to receive, retrieve, download, or otherwise access the various information included in the input machine-learning model data 2002. In some implementations, the input machine-learning model data 2002 may be retrieved from one or more external data sources via a network. In some implementations, the input machine-learning model data 2002 may be retrieved from memory of the optimization manager 1930, the cloud platform 106, a computing system executing the optimization manager 1930, or a database, for example. The input machine-learning model data 2002 may be accessed and processed in response to a request to optimize the input machine-learning model, in response to user input (e.g., via a human-machine interface in communication with the optimization manager 1930, via user input to a user device 176, etc.), or in response to receiving the input machine-learning model data 2002.
In some implementations, the input machine-learning model data 2002 may be stored as part of the components 1934 described in connection with
In addition to receiving, retrieving, or otherwise accessing the input machine-learning model data 2002, the optimization manager 1930 (or the components thereof) may receive, retrieve, or otherwise access the training data with targets 2016 and the model optimization criteria 2020. In some implementations, the optimization manager 1930 can receive the training data with targets 2016 and the model optimization criteria 2020 as part of the request to optimize the input machine-learning model. In some implementations, the optimization manager 1930 can receive identifier(s) of the training data with targets 2016 and the model optimization criteria 2020 in the request, and utilize the identifier(s) to retrieve or access the training data with targets 2016 and the model optimization criteria 2020.
The training data with targets 2016 may include any of the training data utilized to train the input machine-learning model (e.g., if the input machine-learning model was trained prior to performing the techniques described herein). The input machine-learning model may have been trained using a dataset that is publicly available or stored in one or more distributed computing systems, for example. In some implementations, the training data with targets 2016 may include that training data, or other training data that may be utilized for transfer learning. For example, the training data with targets 2016 may include a set of training data that may be utilized to retrain or tune the input machine-learning model as part of the optimization techniques described herein.
Transfer learning can be utilized to retrain or tune a trained machine-learning model to solve a new or related problem. In this example, the input machine-learning model may be a model that has been trained on a large training dataset for a specific task, such as image classification, natural language processing, or other types of data processing. Transfer learning is utilized to improve training times on new tasks, by initially training a machine-learning model on an initial task using a large, generic dataset, and then transferring and tuning the trained weights for a new task, which may have a different input domain or output label. Transfer learning may significantly reduce the amount of data and computing resources required to train a new model from scratch, and may improve the accuracy or other performance metrics of the model.
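A minimal sketch of this freeze-and-replace style of transfer learning is shown below, assuming PyTorch and torchvision as the framework (an assumption; the present disclosure is framework-agnostic).

```python
import torch.nn as nn
from torchvision import models

# A pretrained model's generic feature-extraction layers are frozen, and its
# output head is replaced for the new task; the 29-class example echoes the
# object-detection example discussed elsewhere in this disclosure.
model = models.resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():
    param.requires_grad = False  # freeze generically trained layers

model.fc = nn.Linear(model.fc.in_features, 29)  # new trainable output head
# During tuning, only the parameters of model.fc receive gradient updates.
```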
The training data with targets 2016 may include data that can be utilized to retrain the machine-learning model, from scratch or using transfer learning. The training data with targets 2016 may include a full, large dataset that may be utilized in one or more training processes, such as supervised learning, self-supervised learning, semi-supervised learning, or unsupervised learning. The training set can be utilized to retrain the machine-learning model (e.g., the intermediate model 2006) to generate outputs based on labeled input examples. The labels can correspond to a ground truth value, which may be compared to the output of the model to determine an error (e.g., a loss). The loss may be utilized to update the trainable parameters of the model, as described herein. The training set may include any type of data that can be provided as input to the machine-learning models described herein, including building-related data. Each example (e.g., item of training data) of the training set may include corresponding ground truth labels, which may be utilized in the various training processes described herein.
The training data with targets 2016 may include examples of training data that are used for validation (e.g., a validation set). The validation set may be a subset of the training dataset. The validation set may be used to evaluate the performance of the input machine-learning model (or the intermediate model 2006) during the various training processes described herein. The validation set can be used to prevent overfitting and to determine whether training termination conditions or optimization criteria 2020 have been met. In some implementations, the optimization manager 1930 may generate the validation set by randomly selecting a portion of the training data with targets 2016. The validation set may be stored separately from portions of the training data with targets 2016 that are used to train the models described herein. Rather than being used for training, the ground truth labels of the validation set may be utilized to evaluate the performance of the machine-learning models described herein.
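A minimal sketch of generating a validation set by randomly holding out a portion of the training data is shown below; the function name, split fraction, and seed are illustrative assumptions.

```python
import random

def split_training_data(examples, validation_fraction=0.2, seed=0):
    """Randomly hold out a validation subset from labeled training data.

    `examples` is assumed to be a list of (input, ground_truth_label)
    pairs; the fraction and seed are illustrative defaults.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_validation = int(len(shuffled) * validation_fraction)
    # Returns (training subset, validation subset), stored separately.
    return shuffled[n_validation:], shuffled[:n_validation]
```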
The model optimization criteria 2020 can include any data that may define how and to what degree the input machine-learning model data 2002 should be optimized to generate the optimized machine-learning model 2014. The model optimization criteria 2020 may include various optimization targets for the optimized machine-learning model 2014, including the maximum accuracy loss (e.g., 1%, 2%, etc.) representing the amount of accuracy loss that is tolerated when comparing the accuracy of the input machine-learning model (e.g., the accuracy of which may be specified in the input machine-learning model data 2002) and the optimized machine-learning model 2014. The model optimization criteria 2020 may define one or more target platforms and device information for the edge device(s) upon which the optimized machine-learning model 2014 will be deployed. For example, the optimization criteria 2020 may define compatible datatypes, processing architectures, memory constraints, and other constraints imposed by the target edge device platforms.
The model optimization criteria 2020 may specify whether the input machine-learning model should be optimized for speed or size. When optimizing for size, the optimization manager 1930 may perform the techniques described herein while minimizing the storage size of the optimized machine-learning model 2014 within specified performance thresholds. When optimizing for speed, the speed of the model is optimized within specified performance thresholds without regard to (or with less emphasis on) optimizing for storage size. Additional optimization targets may also be specified in the optimization criteria 2020, such as accuracy targets on particular verification datasets.
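In a non-limiting, hypothetical illustration, the model optimization criteria 2020 may be represented as structured data resembling the following sketch; all field names and values are assumptions chosen for illustration.

```python
# Hypothetical model optimization criteria; all field names and values are
# illustrative assumptions rather than a format defined by this disclosure.
model_optimization_criteria = {
    "max_accuracy_loss": 0.02,   # tolerate up to a 2% drop versus the input model
    "optimize_for": "size",      # alternatively "speed"
    "target_platform": {
        "architecture": "arm64",
        "supported_dtypes": ["float16", "int8"],
        "max_model_size_bytes": 8 * 1024 * 1024,  # storage budget on the edge device
        "runtime": "tflite",     # target machine-learning runtime, if any
    },
}
```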
Referring now to the operations of the optimization manager 1930, the optimization manager 1930 can receive optimization criteria 2020 to optimize the input machine-learning model for a target platform as specified in the input machine-learning model data 2002. As described herein, the input machine-learning model data 2002 can specify one or more trainable model parameters (e.g., weights, layers, etc.) that may be modified according to the techniques described herein to generate the optimized machine-learning model 2014.
Upon receiving, identifying, or otherwise accessing the input machine-learning model data 2002, the hardware optimization component 2004 can transform one or more datatypes of the input machine-learning model to generate an intermediate model 2006. Different processing architectures (e.g., Intel, ARM, etc.) may support different types of data. Data types may include 64-bit floating point numbers, 32-bit floating point numbers, 16-bit floating point numbers, or 8-bit floating point numbers, among others. If the input machine-learning model data 2002 indicates the input machine-learning model utilizes a datatype that is unsupported by, or not optimal for, the target edge device platform, the hardware optimization component 2004 can transform the trainable parameters that utilize said incompatible datatype into a datatype that is compatible with the target platform.
This transformation may be referred to as quantization. Quantization is used to reduce the memory and computational requirements of the input machine-learning model, and involves converting the weights, activations, or other trainable parameters of the input machine-learning model into a lower precision. This may include truncating (and in some implementations, rounding) the higher-precision values of the weights, activations, or other trainable parameters of the input machine-learning model to lower precision. The lower precision target size may be specified as part of the model optimization criteria 2020, or may be determined based on configuration data for the target platform specified in the optimization criteria 2020. In addition to conforming to the requirements of the target platform, the quantization process may be performed to optimize the model for a predetermined size. For example, reducing the precision from 64-bit floating point values to 16-bit floating point values may reduce the storage footprint of the input machine-learning model by about a factor of four.
The hardware optimization component 2004 may perform a variety of different quantization techniques. In a first example of quantization, a fixed precision reduction may be performed for one or more of the input layer, the output layer, the weights, the bias values, or other trainable parameters of the input machine-learning model. A second type of quantization may include dynamic range quantization, in which the weight values may be reduced to integer values (e.g., 8-bit integer values, etc.) rather than floating point values. Dynamic range quantization may be performed in combination with other quantization techniques (e.g., the fixed precision reduction) for the trainable parameters.
In some implementations, the hardware optimization component 2004 may perform full integer quantization of the input machine-learning model, in which all trainable parameters (e.g., the input layer, the output layer, the weights, the bias values, or other trainable parameters, etc.) are transformed into the integer datatype. The transformed data may have a predetermined resolution defined in the optimization criteria 2020, such as 8-bit integers, 16-bit integers, 32-bit integers, or 64-bit integers, among others. In some implementations, the hardware optimization component 2004 may perform float16 quantization, in which all trainable parameters (e.g., the input layer, the output layer, the weights, the bias values, or other trainable parameters, etc.) are transformed into the 16-bit floating point number datatype.
In some implementations, the hardware optimization component 2004 may perform combinations of different types of quantization, such that any trainable parameter may be transformed to have any datatype or precision. In a non-limiting example, the hardware optimization component 2004 may perform quantization such that the inputs and output layers are transformed into the 16-bit integer datatype, the weights are transformed into the 8-bit integer datatype, and the bias values are transformed into the 64-bit integer datatype. The output of the quantization process can be stored in the memory of the optimization manager 1930 as the intermediate model 2006.
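A minimal sketch of dynamic range quantization is shown below, assuming a simple per-tensor scale that maps 32-bit floating point weights onto 8-bit integers; production converters may use more elaborate calibration schemes.

```python
import numpy as np

def quantize_dynamic_range_int8(weights):
    """Map float32 weights onto 8-bit integers using a per-tensor scale."""
    scale = max(float(np.max(np.abs(weights))), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale  # the scale is retained to dequantize at inference

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# A float32 weight tensor shrinks to one quarter of its original storage.
w = np.random.randn(128, 64).astype(np.float32)
q, scale = quantize_dynamic_range_int8(w)
assert q.nbytes == w.nbytes // 4
```

Because the scale is retained, the same transform can be applied to training or validation examples so that their datatypes conform to the quantized input layer, as discussed below.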
Because the input and output layers of the model may expect different datatypes than the training data that was utilized to train the input machine-learning model (or candidate training data that may be utilized to train the machine-learning model in the training data with targets 2016), the hardware optimization component 2004 may perform similar quantization techniques to conform any training data or validation data used to train or evaluate the intermediate model 2006 to the respective datatypes of the input and output layers of the intermediate model 2006. For example, the hardware optimization component 2004 may transform each training example (e.g., item of training data) that would be provided as input to the intermediate model 2006 to the respective datatype of the input layer, and transform the ground truth label of the training example to the respective datatype of the output layer.
In some implementations, if the optimization techniques described herein do not result in the optimized machine-learning model 2014 conforming to the targets in the optimization criteria 2020, the hardware optimization component 2004 may automatically select and perform a different quantization approach to generate a second intermediate model, which can then be pruned, retrained, and evaluated as described herein to arrive at an optimized machine-learning model 2014 that satisfies the requirements specified in the optimization criteria 2020. Different quantization approaches may perform differently for different types of machine-learning models and different tasks. The iterative approach implemented by the hardware optimization component 2004 may be utilized to accommodate for these differences.
Once the hardware optimization component 2004 has generated the intermediate model 2006, the accuracy checker 2008 can determine an accuracy of the intermediate machine-learning model 2006 using a verification dataset. To do so, the accuracy checker 2008 can propagate items of the validation set (e.g., a subset of the training data with targets 2016 that is not utilized to retrain the intermediate model 2006) through the intermediate model 2006 to generate corresponding outputs, which can be compared with the ground-truth labels for the validation set to evaluate the accuracy of the intermediate model 2006. The accuracy of the intermediate model 2006 may be calculated by dividing the number of correct predictions by the total number of predictions. The accuracy may be expressed as a percentage or as a decimal value.
The accuracy checker 2008 can compare the calculated accuracy of the intermediate model 2006 to the accuracy of the input machine-learning model (e.g., trained using the same training data) to determine an amount by which the accuracy of the intermediate model 2006 has fallen due to the quantization process. In some implementations, the accuracy checker 2008 can evaluate additional performance metrics of the intermediate model 2006, such as the precision, recall, F1 score, and area-under-the-curve (AUC). Each of these additional performance metrics may be specified as part of the model optimization criteria 2020, and compared to corresponding metrics of the input machine-learning model.
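A minimal sketch of such an accuracy check is shown below; the `predict` callable, the tolerated loss value, and the return convention are illustrative assumptions.

```python
def check_accuracy(predict, validation_set, baseline_accuracy, max_loss=0.02):
    """Propagate validation items through the quantized model and compare
    the resulting accuracy against the input model's baseline."""
    correct = sum(1 for inputs, label in validation_set if predict(inputs) == label)
    accuracy = correct / len(validation_set)
    # The model passes if accuracy fell by no more than the tolerated loss.
    return accuracy, (baseline_accuracy - accuracy) <= max_loss
```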
The optimization criteria 2020 may specify one or more thresholds for the comparison (e.g., an amount by which a performance metric of the intermediate model 2006 can fall while still satisfying the optimization criteria 2020). If the optimization criteria 2020 are not satisfied, the intermediate model 2006 may be pruned, quantized, or retrained as described herein iteratively until the optimization criteria 2020 are satisfied. The final model output by the optimization manager 1930 can be stored as the optimized machine-learning model 2014, which may be optimized for one or more edge devices specified in the optimization criteria 2020, as described herein. As described in connection with
The model pruner 2010 can prune at least one parameter of the intermediate model 2006. In some implementations, the model pruner 2010 may prune the intermediate model 2006 to satisfy a storage size requirement or a speed requirement (e.g., by reducing the number of computational operations to execute the model to a predetermined threshold) specified in the optimization criteria 2020. In some implementations, the model pruner 2010 can perform at least one iteration of pruning regardless of the output of the accuracy checker 2008 or the constraints specified in the optimization criteria 2020. The model pruner 2010 may be utilized to remove weights, biases, or other trainable parameters from the intermediate model 2006 to reduce the size and number of computational operations required to execute the intermediate model 2006.
Weight pruning may include a process of analyzing which weights (or trainable parameters) are least important to generating useful output using the intermediate model 2006, and pruning those weights to reduce the number of computational operations required to execute the model, and in some implementations, reduce the storage size of the intermediate model 2006. Weights may be pruned from the intermediate model 2006 by regenerating one or more layers of the intermediate model 2006 without the respective weight value. The removal of such trainable parameters may therefore change the architecture of one or more layers of the intermediate model 2006, which may affect the accuracy or other performance metrics of the intermediate model 2006.
In some implementations, the model pruner 2010 can prune one or more weights or neurons from any type of layer of the intermediate model 2006, including convolutional layers, or groups of layers that form functional machine-learning operations within a model, such as pattern detectors or feature extraction layers. The model pruner 2010 can select a type of pruning based on the architecture of the intermediate model 2006. For example, the intermediate model 2006 (e.g., a neural network) may include many sub-systems, such as memory units, skip level residual connections, attention mechanisms, convolutional layers, fully-connected feedforward layers, among others. The model pruner 2010 can account for these variations in model architectures, and automatically select layers of the intermediate model 2006 to prune that would not result in a lack of model functionality or significant deviations in expected performance.
In some implementations, one or more layers may be entirely pruned from the intermediate model 2006. For example, if the input machine-learning model was trained on a variety of image subjects and includes classification layers that are unrelated to the classification task for which the optimized machine-learning model 2014 is being trained (e.g., via transfer learning), the model pruner 2010 can prune the unrelated classification layers from the intermediate model 2006. In some implementations, the model pruner 2010 can automatically replace pruned layers with application-specific layers for the task on which the model is being trained. The application-specific layers may be specified in the model optimization criteria 2020, the input machine-learning model data 2002, or the training data with targets 2016. In a non-limiting example using these techniques, the model pruner 2010 can convert an input machine-learning model that was trained to detect 5000 objects into a pruned intermediate model 2006 that is trained to detect 29 objects. This example pruning process may involve changing the architecture of, or replacing, the output layer of the intermediate model 2006.
In some implementations, the model pruner 2010 may avoid pruning certain layers or trainable parameters based on characteristics of the architecture of the intermediate model 2006. For example, in an image classification task, the model pruner 2010 can limit or completely avoid pruning the low-level convolution layers, which are tailored towards low-level features such as horizontal or vertical edge detection. Furthering this example, the model pruner 2010 can minimally prune, or restrict pruning of, layers that detect color or pattern features. Layers trained on such low-level features may produce useful results during transfer learning, and pruning them may therefore degrade model performance.
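A minimal sketch of magnitude-based weight pruning that skips designated low-level layers, consistent with the selection heuristics described above, is shown below; the layer representation and the `skip_types` labels are assumptions.

```python
import numpy as np

def magnitude_prune(layers, sparsity=0.5, skip_types=("low_level_conv",)):
    """Zero the smallest-magnitude weights in each prunable layer.

    `layers` is assumed to be a list of dicts with "type" and "weights"
    (a numpy array) entries; layers whose type appears in `skip_types`
    (e.g., low-level edge or color detectors) are left untouched.
    """
    for layer in layers:
        if layer["type"] in skip_types:
            continue  # preserve low-level feature extractors
        w = layer["weights"]
        threshold = np.quantile(np.abs(w), sparsity)
        layer["weights"] = w * (np.abs(w) >= threshold)  # zero small weights
    return layers
```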
Once the intermediate model 2006 has been generated (and in some implementations, pruned), the model retrainer 2012 can retrain the intermediate model 2006 using the training data with targets 2016. In some implementations, the accuracy checker 2008 can be executed to generate an accuracy of the intermediate model 2006 following quantization or pruning, and can compare the accuracy of the intermediate model 2006 to the accuracy of the input machine-learning model as described herein. In some implementations, the model retrainer 2012 can retrain the intermediate model 2006 in response to determining that the accuracy of the model has fallen below the accuracy of the input machine-learning model by an amount that exceeds a threshold specified in the model optimization criteria 2020.
In some implementations, the intermediate model 2006 can be retrained by the model retrainer 2012 in response to the intermediate model 2006 being pruned by the model pruner 2010. To retrain the intermediate model 2006, the model retrainer 2012 can perform an iterative training process, such as supervised learning, semi-supervised learning, self-supervised learning, or unsupervised learning, to update the trainable parameters of the intermediate model 2006. In the example of supervised learning, the model retrainer 2012 can propagate each training example of the training data with targets 2016 through each layer of the intermediate model 2006, starting with the input layer, while executing the computational processes of each layer and passing the output of each layer as input to the next layer or operation in accordance with the architecture of the intermediate model 2006, to generate an output. The model retrainer 2012 can then calculate a loss using the ground truth data for the input training example and the output of the model. The loss can be utilized to update the parameters of the intermediate model 2006, for example, using an optimization process such as gradient descent (or another suitable optimization algorithm) and backpropagation.
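A minimal sketch of such a supervised retraining loop is shown below, assuming PyTorch as the framework (an assumption; the present disclosure is framework-agnostic).

```python
import torch
from torch import nn

def retrain(model, train_loader, epochs=5, lr=1e-3):
    """Supervised retraining: forward pass, loss against ground truth,
    then backpropagation and a gradient descent update."""
    loss_fn = nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr)
    for _ in range(epochs):
        for inputs, targets in train_loader:  # training data with targets
            outputs = model(inputs)           # propagate through each layer
            loss = loss_fn(outputs, targets)  # compare to ground truth labels
            optimizer.zero_grad()
            loss.backward()                   # backpropagation
            optimizer.step()                  # gradient descent update
    return model
```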
Retraining the model may include retraining the model from scratch (e.g., by resetting the trainable parameters of the intermediate model 2006 to default values and retraining using full sets of the training data with targets 2016). In some implementations, retraining the intermediate model 2006 may include tuning the model without resetting the weight values. For example, the model retrainer 2012 can tune the intermediate model 2006 when implementing transfer learning, and in such implementations the training data with targets 2016 may include training data corresponding to the specific application for which the optimized machine-learning model 2014 has been trained.
After retraining the intermediate model 2006, the accuracy checker 2008 can determine an updated accuracy of the retrained intermediate machine-learning model using the verification dataset (which may be the same verification dataset previously described herein, or another verification dataset included in the training data with targets 2016). To generate the accuracy (or other performance metrics) of the intermediate model 2006, the accuracy checker 2008 can propagate items of the validation set through the intermediate model 2006 to generate corresponding outputs, which can be compared with the ground-truth labels for the validation set to evaluate the accuracy of the intermediate model 2006. The accuracy of the intermediate model 2006 may be calculated by dividing the number of correct predictions by the total number of predictions. The accuracy may be expressed as a percentage or as a decimal value.
The various optimization techniques (e.g., quantization, pruning, retraining) can be iteratively repeated until the optimization criteria 2020 (e.g., storage size, number of computational operations/speed, accuracy) for the target platform have been met. For example, after retraining the intermediate model 2006, if the size requirements of the model have not yet been met, further quantization or pruning may be performed. In another example, if the specified storage size and speed requirements have been met, but accuracy is less than required by the optimization criteria 2020, the model retrainer 2012 may further retrain the intermediate model 2006 using additional training epochs. In some implementations, the model retrainer 2012 may automatically execute one or more data augmentation techniques (e.g., translation, flipping, rotation, cropping, padding, jittering, normalization, time shifting, etc.) to artificially increase the size of the training dataset in the training data with targets 2016, which may improve model generalization and accuracy.
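A minimal sketch of the kind of data augmentation referenced above (horizontal flips plus small jitter applied to image data) is shown below; the (N, H, W, C) array layout and noise scale are assumptions.

```python
import numpy as np

def augment_images(images, labels, rng=None):
    """Enlarge a training set with horizontal flips and small jitter.

    Assumes an (N, H, W, C) array layout; the noise scale is illustrative.
    """
    rng = rng or np.random.default_rng(0)
    flipped = images[:, :, ::-1, :]                        # horizontal flip
    jittered = images + rng.normal(0.0, 0.01, images.shape)
    aug_images = np.concatenate([images, flipped, jittered])
    aug_labels = np.concatenate([labels, labels, labels])  # labels are unchanged
    return aug_images, aug_labels
```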
Once all of the optimization criteria 2020 have been met, the optimization manager 1930 may provide the retrained intermediate model 2006 as the optimized machine-learning model 2014. In some implementations, the optimization manager 1930 can modify a runtime of the second machine-learning model based on the target platform. For example, the target platform may be associated with a particular machine-learning runtime or set of libraries (e.g., a target runtime) that are compatible with the target platform. The optimization manager 1930 can extract the weights, biases, architecture, and trainable parameters of the intermediate model 2006 and convert the data into a format that is compatible with the target runtime, which may be stored as the optimized machine-learning model 2014.
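As one non-limiting possibility for this runtime-conversion step, the sketch below assumes a Keras model and TensorFlow Lite as the target runtime; neither is mandated by the present disclosure.

```python
import tensorflow as tf

def convert_for_target_runtime(keras_model, out_path="optimized_model.tflite"):
    """Convert a trained Keras model into a TensorFlow Lite artifact that an
    edge device's runtime can load."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g., quantization
    tflite_bytes = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)  # deployable artifact for the edge device
    return out_path
```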
As described in connection with
At step 2105, the building management system can receive optimization criteria to optimize a first machine-learning model for a target platform. The optimization criteria may be received in a request to optimize a first machine-learning model, as described herein. The request may include data relating to the machine-learning model to be optimized (e.g., the first machine-learning model data 2002), or may include an identifier that can be used to retrieve said data from memory or an external computing system via a network. The request may also include, specify, or otherwise identify the training data (e.g., the training data with targets 2016) and the optimization criteria (e.g., the model optimization criteria 2020). The request may be provided with particular optimization routes or materials, and may include one or more identifiers of edge devices (e.g., the building subsystems 122) to which the machine-learning model may be deployed once optimized.
At step 2110, the building management system can transform at least one datatype of one or more model parameters of the first machine-learning model to generate a second machine-learning model (e.g., the intermediate machine-learning model). This transformation may include quantization. Quantization is used to reduce the memory and computational requirements of the first machine-learning model, and involves converting the weights, activations, or other trainable parameters of the first machine-learning model to a lower precision. The lower precision target size may be specified as part of the model optimization criteria, or may be determined based on configuration data for the target platform specified in the optimization criteria. In addition to conforming to the requirements of the target platform, the quantization process may be performed to optimize the model for a predetermined size. For example, reducing the precision from 64-bit floating point values to 16-bit floating point values may reduce the storage footprint of the first machine-learning model by about a factor of four.
In a first example of quantization, a fixed precision reduction may be performed for one or more of the input layer, the output layer, the weights, the bias values, or other trainable parameters of the first machine-learning model. A second type of quantization may include dynamic range quantization, in which the weight values may be reduced to integer values (e.g., 8-bit integer values, etc.) rather than floating point values. Dynamic range quantization may be performed in combination with other quantization techniques (e.g., the fixed precision reduction) for the trainable parameters. Various additional quantization techniques are described herein in connection with
In some implementations, the building management system can prune at least one parameter of the second machine-learning model. The pruning process may be performed to satisfy a storage size requirement or a speed requirement (e.g., by reducing the number of computational operations to execute the model to a predetermined threshold) specified in the optimization criteria. The building management system can perform pruning to remove weights, biases, or other trainable parameters from the second machine-learning model, reducing the size of the model and the number of computational operations required to execute it, as described herein.
In some implementations, the building management system can prune one or more weights or neurons from any type of layer of the second machine-learning model, including convolutional layers, or groups of layers that form functional machine-learning operations within a model, such as pattern detectors or feature extraction layers. The building management system can select a type of pruning based on the architecture of the second machine-learning model. For example, the second machine-learning model (e.g., a neural network) may include many sub-systems, such as memory units, skip level residual connections, attention mechanisms, convolutional layers, fully-connected feedforward layers, among others. The building management system can account for these variations in model architectures, and automatically select layers of the second machine-learning model to prune that would not result in a lack of model functionality or significant deviations in expected performance.
In some implementations, one or more layers may be entirely pruned from the second machine learning model. For example, if the first machine-learning model was trained on a variety of image subjects that includes classification layers that are unrelated to the classification task for which the second machine-learning model is being trained (e.g., via transfer learning), the building management system can prune the unrelated classification layers from the second machine learning model. In some implementations, the building management system can automatically replace pruned layers with application-specific layers for the task on which the model is being trained.
At step 2115, the building management system can determine, using a verification dataset, an accuracy of the second machine-learning model. To do so, the building management system can propagate items of the validation set (e.g., a subset of the training data that is not utilized to retrain the second machine-learning model) through the second machine-learning model to generate corresponding outputs, which can be compared with the ground-truth labels for the validation set to evaluate the accuracy of the second machine-learning model. The accuracy of the second machine-learning model may be calculated by dividing the number of correct predictions by the total number of predictions. The accuracy may be expressed as a percentage or as a decimal value.
The building management system can compare the calculated accuracy of the second machine-learning model to the accuracy of the first machine-learning model (e.g., trained using the same training data) to determine an amount by which the accuracy of the second machine-learning model has fallen due to the quantization process. In some implementations, the building management system can evaluate additional performance metrics of the second machine-learning model, such as the precision, recall, F1 score, and AUC, among others. Each of these additional performance metrics may be specified as part of the model optimization criteria, and compared to corresponding metrics of the first machine-learning model.
At step 2120, the building management system can retrain the second machine-learning model using a training dataset responsive to the accuracy being less than a predetermined threshold. In some implementations, the building management system can retrain the second machine-learning model responsive to pruning the at least one parameter of the second machine-learning model. To retrain the second machine-learning model, the building management system can perform an iterative training process, such as supervised learning, semi-supervised learning, self-supervised learning, or unsupervised learning, to update the trainable parameters of the second machine-learning model. In the example of supervised learning, the building management system can propagate each training example of the training data with targets 2016 through each layer of the second machine-learning model, starting with the input layer, while executing the computational processes of each layer and passing the output of each layer as input to the next layer or operation in accordance with the architecture of the second machine-learning model, to generate an output.
The building management system can then calculate a loss using the ground truth data for the input training example and the output of the model. The loss can be utilized to update the parameters of the second machine-learning model, for example, using an optimization process such as gradient descent (or another suitable optimization algorithm) and backpropagation. Retraining the model may include retraining the model from scratch (e.g., by resetting the trainable parameters of the second machine-learning model to default values and retraining using full sets of the training data). In some implementations, retraining the second machine-learning model may include tuning the model without resetting the weight values. For example, the building management system can tune the second machine-learning model when implementing transfer learning, and in such implementations, the training data may correspond to the specific application for which the second machine-learning model has been trained.
The building management system can repeat the various optimization techniques (e.g., quantization, pruning, retraining) iteratively until the optimization criteria (e.g., storage size, number of computational operations/speed, accuracy) for the target platform have been met. For example, after retraining the second machine-learning model, if the size requirements of the model have not yet been met, further quantization or pruning may be performed. In another example, if the specified storage size and speed requirements have been met, but accuracy is less than required by the optimization criteria, the building management system may further retrain the second machine-learning model using additional training epochs.
The construction and arrangement of the systems and methods as shown in the various exemplary embodiments are illustrative only. Although only a few embodiments have been described in detail in this disclosure, many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.). For example, the position of elements may be reversed or otherwise varied and the nature or number of discrete elements or positions may be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. The order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure.
The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
Although the figures show a specific order of method steps, the order of the steps may differ from what is depicted. Also two or more steps may be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps.
References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms.
In various implementations, the steps and operations described herein may be performed on one processor or in a combination of two or more processors. For example, in some implementations, the various operations could be performed in a central server or set of central servers configured to receive data from one or more devices (e.g., edge computing devices/controllers) and perform the operations. In some implementations, the operations may be performed by one or more local controllers or computing devices (e.g., edge devices), such as controllers dedicated to and/or located within a particular building or portion of a building. In some implementations, the operations may be performed by a combination of one or more central or offsite computing devices/servers and one or more local controllers/computing devices. All such implementations are contemplated within the scope of the present disclosure. Further, unless otherwise indicated, when the present disclosure refers to one or more computer-readable storage media and/or one or more controllers, such computer-readable storage media and/or one or more controllers may be implemented as one or more central servers, one or more local controllers or computing devices (e.g., edge devices), any combination thereof, or any other combination of storage media and/or controllers regardless of the location of such devices.
This application claims the benefit of and priority to U.S. Provisional Patent Application No. 63/527,511, filed Jul. 18, 2023, the contents of which are incorporated by reference herein for all purposes.