BUILDING MANAGEMENT SYSTEM WITH EQUIPMENT SERVICE RECOMMENDATIONS AND ANALYTICS USING LARGE LANGUAGE MODEL FINE-TUNED WITH EQUIPMENT SERVICE RECORDS

Information

  • Patent Application
  • Publication Number
    20240403613
  • Date Filed
    May 31, 2024
  • Date Published
    December 05, 2024
  • CPC
    • G06N3/0475
    • G06N3/045
    • G06N3/0895
  • International Classifications
    • G06N3/0475
    • G06N3/045
    • G06N3/0895
Abstract
A method includes generating a plurality of data pairs by prompting a generative AI model to output, for each of a plurality of service records, a data pair comprising a question and an answer. The question relates to an equipment issue indicated in the service record and the answer relates to a service task indicated in the service record. The method also includes fine-tuning the generative AI model using the plurality of data pairs and providing a service recommendation using the generative AI model.
Description
BACKGROUND

This application relates generally to a building system of a building. This application relates more particularly to systems for managing and processing data of the building system.


Various interactions between building systems, components of building systems, users, technicians, and/or devices managed by users or technicians can rely on timely generation and presentation of data relating to the interactions, including for performing service operations. However, it can be difficult to generate the data elements to precisely identify proper response actions or sequences of response actions, as well as options for modified response actions, depending on various factors associated with items of equipment to be serviced, technical issues with the items of equipment, and the availability of timely, precise data to use for supporting the service operations.


SUMMARY

One or more aspects relate to building management systems and methods that facilitate building equipment servicing. For example, a method can include generating a plurality of data pairs by prompting a generative AI model to output, for each of a plurality of service records, a data pair comprising a question and an answer. The question relates to an equipment issue indicated in the service record and the answer relates to a service task indicated in the service record. The method can also include fine-tuning the generative AI model using the plurality of data pairs and providing a service recommendation using the generative AI model.
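
As an illustrative sketch only (not a required implementation), the data-pair generation described above could be expressed in Python roughly as follows, assuming a placeholder complete( ) callable that stands in for whichever generative AI model endpoint is used; the function names and prompt wording are hypothetical.

    import json
    from typing import Callable

    def make_qa_pair(complete: Callable[[str], str], service_record: str) -> dict:
        # Prompt the generative model to produce one question (the equipment issue)
        # and one answer (the service task) for a single service record.
        # Assumes the model returns valid JSON for the requested keys.
        prompt = (
            "From the service record below, write one question describing the "
            "equipment issue and one answer describing the service task performed. "
            "Return JSON with keys 'question' and 'answer'.\n\n"
            f"Service record:\n{service_record}"
        )
        return json.loads(complete(prompt))

    def build_finetuning_pairs(complete: Callable[[str], str], service_records: list) -> list:
        # One data pair per record; the aggregate list is the fine-tuning dataset.
        return [make_qa_pair(complete, record) for record in service_records]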


As another example, a method can include generating a plurality of data pairs by prompting a generative AI model to output, for each of a plurality of warranty records, a data pair comprising a cause and a solution indicated in the warranty record, fine-tuning the generative AI model using the plurality of data pairs, and generating, by the fine-tuned generative AI model and based on user input, at least one of a service summary, a labelling of services, or an investigative service report.


Another example implementation is a method that includes generating a fine-tuning dataset by prompting a generative AI model to isolate, from each of a plurality of service or warranty records, a problem, a cause, and a solution indicated in the service or warranty record, structuring, for the plurality of service or warranty records, the problem, the cause, and the solution as at least one question-and-answer pair, and aggregating the question-and-answer pairs for the plurality of service or warranty records as the fine-tuning dataset. The method can also include fine-tuning at least one of the generative AI model or a second AI model using the fine-tuning dataset.


In some embodiments, the method includes generating and executing a maintenance action using the at least one of the generative AI model or the second AI model after the fine-tuning of the generative AI model or the second AI model. The method can also include providing learning of the generative AI model based on exposure of the generative AI model to the plurality of service or warranty records. In some embodiments, structuring the problem, the cause, and the solution as the at least one question-and-answer pair includes inserting the problem and the cause into a first template question and the solution into a first template answer and inserting the problem into a second template question and the cause and the solution into a second template answer.
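
A minimal sketch of the two-template structuring described above, in Python; the exact template wording is hypothetical and only illustrates inserting the problem and cause into a first question with the solution as its answer, and the problem alone into a second question with the cause and solution as its answer.

    def structure_qa_pairs(problem: str, cause: str, solution: str) -> list:
        # First template: problem and cause in the question, solution in the answer.
        first = {
            "question": f"Problem: {problem}. Cause: {cause}. What service task resolves this?",
            "answer": solution,
        }
        # Second template: problem alone in the question, cause and solution in the answer.
        second = {
            "question": f"Problem: {problem}. What is the likely cause and how should it be serviced?",
            "answer": f"Cause: {cause}. Service: {solution}",
        }
        return [first, second]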


In some embodiments, the method includes automatically providing a service recommendation by receiving a freeform natural language input to a device from a user, providing the freeform natural language input as an input to the generative AI model, and generating the service recommendation as an output of the generative AI model and providing the service recommendation to the user via the device. The plurality of service or warranty records can include natural language data input by humans relating to warranty or service requests and completed service or warranty tasks.
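
A sketch of the recommendation flow described above, assuming the same placeholder complete( ) callable for the fine-tuned generative AI model; the prompt framing and example input string are illustrative only.

    def recommend_service(complete, freeform_input: str) -> str:
        # Pass the user's freeform natural language description to the fine-tuned
        # model and return its completion as the service recommendation.
        prompt = (
            "A user reports the following equipment issue. "
            "Recommend the service task to perform.\n\n" + freeform_input
        )
        return complete(prompt)

    # Example call (device input shown as a hypothetical string):
    # recommend_service(complete, "Chiller 2 trips on high discharge pressure after 20 minutes")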


In some embodiments, the method includes generating, responsive to an indication of an equipment problem and by the at least one of the generative AI model or a second AI model after fine-tuning, a description of at least one of an inferred cause or an inferred solution to the equipment problem. The description can be a service summary, a labelling of services, or an investigative service report. The fine-tuning dataset includes different question-and-answer pairs associated with different service or warranty records of the plurality of service or warranty records.


Another example implementation of the present disclosure is one or more non-transitory computer-readable media storing program instructions that, when executed by one or more processors, cause the one or more processors to perform operations. The operations include generating a fine-tuning dataset by prompting a generative AI model to isolate, from each of a plurality of service or warranty records, a problem, a cause, and a solution indicated in the service or warranty record, structuring, for the plurality of service or warranty records, the problem, the cause, and the solution as at least one question-and-answer pair, and aggregating the question-and-answer pairs for the plurality of service or warranty records as the fine-tuning dataset. The operations also include fine-tuning at least one of the generative AI model or a second AI model using the fine-tuning dataset.


In some embodiments, the operations include generating and executing a maintenance action using the at least one of the generative AI model or the second AI model after the fine-tuning of the generative AI model or the second AI model. The operations can also include providing learning of the generative AI model based on exposure of the generative AI model to the plurality of service or warranty records.


In some embodiments, structuring the problem, the cause, and the solution as the at least one question-and-answer pair includes inserting the problem and the cause into a first template question and the solution into a first template answer and inserting the problem into a second template question and the cause and the solution into a second template answer.


In some embodiments, the operations include automatically providing a service recommendation by receiving a freeform natural language input to a device from a user, providing the freeform natural language input as an input to the generative AI model, and generating the service recommendation as an output of the generative AI model and providing the service recommendation to the user via the device.


In some embodiments, the plurality of service or warranty records include natural language data input by humans relating to warranty or service requests and completed service or warranty tasks. In some embodiments, the operations include generating, responsive to an indication of an equipment problem and by the at least one of the generative AI model or a second AI model after fine-tuning, a description of at least one of an inferred cause or an inferred solution to the equipment problem. The description can be a service summary, a labelling of services, or an investigative service report. The fine-tuning dataset can include different question-and-answer pairs associated with different service or warranty records of the plurality of service or warranty records.





BRIEF DESCRIPTION OF THE DRAWINGS

Various objects, aspects, features, and advantages of the disclosure will become more apparent and better understood by referring to the detailed description taken in conjunction with the accompanying drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.



FIG. 1 is a block diagram of an example of a machine learning model-based system for equipment servicing applications.



FIG. 2 is a block diagram of an example of a language model-based system for equipment servicing applications.



FIG. 3 is a block diagram of an example of the system of FIG. 2 including user application session components.



FIG. 4 is a block diagram of an example of the system of FIG. 2 including feedback training components.



FIG. 5 is a block diagram of an example of the system of FIG. 2 including data filters.



FIG. 6 is a block diagram of an example of the system of FIG. 2 including data validation components.



FIG. 7 is a block diagram of an example of the system of FIG. 2 including expert review and intervention components.



FIG. 8 is a flow diagram of a method of managing equipment servicing responsive to fault detection using machine learning models.



FIG. 9 is a block diagram of a fine-tuning procedure for generative artificial intelligence using service records.



FIG. 10 is a flow diagram of a method of generating a service recommendation using generative artificial intelligence.



FIG. 11 is a block diagram of a fine-tuning procedure for generative artificial intelligence using warranty records.



FIG. 12 is a flow diagram of a method of providing service analytics using generative artificial intelligence.





DETAILED DESCRIPTION

Referring generally to the FIGURES, systems and methods in accordance with the present disclosure can implement various systems to precisely generate data relating to operations to be performed for managing building systems and components and/or items of equipment, including heating, ventilation, cooling, and/or refrigeration (HVAC-R) systems and components. For example, various systems described herein can be implemented to more precisely generate data for various applications including, for example and without limitation, virtual assistance for supporting technicians responding to service requests; generating technical reports corresponding to service requests; facilitating diagnostics and troubleshooting procedures; recommendations of services to be performed; and/or recommendations for products or tools to use or install as part of service operations. Various such applications can facilitate both asynchronous and real-time service operations, including by generating text data for such applications based on data from disparate data sources that may not have predefined database associations amongst the data sources, yet may be relevant at specific steps or points in time during service operations.


In some systems, service operations can be supported by text information, such as predefined text documents such as service, diagnostic, and/or troubleshooting guides. Various such text information may not be useful for specific service requests and/or technicians performing the service. For example, the text information may correspond to different items of equipment or versions of items of equipment to be serviced. The text information, being predefined, may not account for specific technical issues that may be present in the items of equipment to be serviced.


AI and/or machine learning (ML) systems, including but not limited to LLMs, can be used to generate text data and data of other modalities in a more responsive manner to real-time conditions, including generating strings of text data that may not be provided in the same manner in existing documents, yet may still meet criteria for useful text information, such as relevance, style, and coherence. For example, LLMs can predict text data based at least on inputted prompts and by being configured (e.g., trained, modified, updated, fine-tuned) according to training data representative of the text data to predict or otherwise generate.


However, various considerations may limit the ability of such systems to precisely generate appropriate data for specific conditions. For example, due to the predictive nature of the generated data, some LLMs may generate text data that is incorrect, imprecise, or not relevant to the specific conditions. Using the LLMs may require a user to manually vary the content and/or syntax of inputs provided to the LLMs (e.g., vary inputted prompts) until the output of the LLMs meets various objective or subjective criteria of the user. The LLMs can have token limits for sizes of inputted text during training and/or runtime/inference operations (and relaxing or increasing such limits may require increased computational processing, API calls to LLM services, and/or memory usage), limiting the ability of the LLMs to be effectively configured or operated using large amounts of raw data or otherwise unstructured data.


Systems and methods in accordance with the present disclosure can use machine learning models, including LLMs and other generative AI systems, to capture data, including but not limited to unstructured knowledge from various data sources, and process the data to accurately generate outputs, such as completions responsive to prompts, including in structured data formats for various applications and use cases. The system can implement various automated and/or expert-based thresholds and data quality management processes to improve the accuracy and quality of generated outputs and update training of the machine learning models accordingly. The system can enable real-time messaging and/or conversational interfaces for users to provide field data regarding equipment to the system (including presenting targeted queries to users that are expected to elicit relevant responses for efficiently receiving useful response information from users) and guide users, such as service technicians, through relevant service, diagnostic, troubleshooting, and/or repair processes.


This can include, for example, receiving data from technician service reports in various formats, including various modalities and/or multi-modal formats (e.g., text, speech, audio, image, and/or video). The system can facilitate automated, flexible customer report generation, such as by processing information received from service technicians and other users into a standardized format, which can reduce the constraints on how the user submits data while improving resulting reports. The system can couple unstructured service data to other input/output data sources and analytics, such as to relate unstructured data with outputs of timeseries data from equipment (e.g., sensor data; report logs) and/or outputs from models or algorithms of equipment operation, which can facilitate more accurate analytics, prediction services, diagnostics, and/or fault detection. The system can perform classification or other pattern recognition or trend detection operations to facilitate more timely assignment of technicians, scheduling of technicians based on expected times for jobs, and provisioning of trucks, tools, and/or parts. The system can perform root cause prediction by being trained using data that includes indications of root causes of faults or errors, where the indications are labels for or otherwise associated with (unstructured or structured) data such as service requests, service reports, service calls, etc. The system can receive, from a service technician in the field evaluating the issue with the equipment, feedback regarding the accuracy of the root cause predictions, as well as feedback regarding how the service technician evaluated information about the equipment (e.g., what data did they evaluate; what did they inspect; did the root cause prediction or instructions for finding the root cause accurately match the type of equipment, etc.), which can be used to update the root cause prediction model.


For example, the system can provide a platform for fault detection and servicing processes in which a machine learning model is configured based on connecting or relating unstructured data and/or semantic data, such as human feedback and written/spoken reports, with time-series product data regarding items of equipment, so that the machine learning model can more accurately detect causes of alarms or other events that may trigger service responses. For instance, responsive to an alarm for a chiller, the system can more accurately detect a cause of the alarm, and generate a prescription (e.g., for a service technician) for responding to the alarm; the system can request feedback from the service technician regarding the prescription, such as whether the prescription correctly identified the cause of the alarm and/or actions to perform to respond to the cause, as well as the information that the service technician used to evaluate the correctness or accuracy of the prescription; the system can use this feedback to modify the machine learning models, which can increase the accuracy of the machine learning models.


In some instances, significant computational resources (or human user resources) can be required to process data relating to equipment operation, such as time-series product data and/or sensor data, to detect or predict faults and/or causes of faults. In addition, it can be resource-intensive to label such data with identifiers of faults or causes of faults, which can make it difficult to generate machine learning training data from such data. Systems and methods in accordance with the present disclosure can leverage the efficiency of language models (e.g., GPT-based models or other pre-trained LLMs) in extracting semantic information (e.g., semantic information identifying faults, causes of faults, and other accurate expert knowledge regarding equipment servicing) from the unstructured data in order to use both the unstructured data and the data relating to equipment operation to generate more accurate outputs regarding equipment servicing. As such, by implementing language models using various operations and processes described herein, building management and equipment servicing systems can take advantage of the causal/semantic associations between the unstructured data and the data relating to equipment operation, and the language models can allow these systems to more efficiently extract these relationships in order to more accurately predict targeted, useful information for servicing applications at inference-time/runtime. While various implementations are described as being implemented using generative AI models such as transformers and/or GANs, in some embodiments, various features described herein can be implemented using non-generative AI models or even without using AI/machine learning, and all such modifications fall within the scope of the present disclosure.


The system can enable a generative AI-based service wizard interface. For example, the interface can include user interface and/or user experience features configured to provide a question/answer-based input/output format, such as a conversational interface, that directs users through providing targeted information for accurately generating predictions of root cause, presenting solutions, or presenting instructions for repairing or inspecting the equipment to identify information that the system can use to detect root causes or other issues. The system can use the interface to present information regarding parts and/or tools to service the equipment, as well as instructions for how to use the parts and/or tools to service the equipment.


In various implementations, the systems can include a plurality of machine learning models that may be configured using integrated or disparate data sources. This can facilitate more integrated user experiences or more specialized (and/or lower computational usage for) data processing and output generation. Outputs from one or more first systems, such as one or more first algorithms or machine learning models, can be provided at least as part of inputs to one or more second systems, such as one or more second algorithms or machine learning models. For example, a first language model can be configured to process unstructured inputs (e.g., text, speech, images, etc.) into a structured output format compatible for use by a second system, such as a root cause prediction algorithm or equipment configuration model.
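
For illustration, chaining a first language model into a second system might look roughly like the following Python sketch, where complete( ) stands in for the first language model and classify_root_cause( ) stands in for the second algorithm or model; both callables and the JSON field names are hypothetical.

    import json

    def predict_root_cause(complete, classify_root_cause, freeform_report: str):
        # Stage 1: the first language model converts the unstructured report into a
        # structured format usable by the second system.
        structured = json.loads(complete(
            "Summarize the report below as JSON with keys 'equipment_type', "
            "'symptom', and 'operating_conditions'.\n\n" + freeform_report
        ))
        # Stage 2: the second system consumes the structured fields and returns a
        # predicted root cause.
        return classify_root_cause(structured)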


The system can be used to automate interventions for equipment operation, servicing, fault detection and diagnostics (FDD), and alerting operations. For example, by being configured to perform operations such as root cause prediction, the system can monitor data regarding equipment to predict events associated with faults and trigger responses such as alerts, service scheduling, and initiating FDD or modifications to configuration of the equipment. The system can present to a technician or manager of the equipment a report regarding the intervention (e.g., action taken responsive to predicting a fault or root cause condition) and requesting feedback regarding the accuracy of the intervention, which can be used to update the machine learning models to more accurately generate interventions.


I. Machine Learning Models for Building Management and Equipment Servicing


FIG. 1 depicts an example of a system 100. The system 100 can implement various operations for configuring (e.g., training, updating, modifying, transfer learning, fine-tuning, etc.) and/or operating various AI and/or ML systems, such as neural networks of LLMs or other generative AI systems. The system 100 can be used to implement various generative AI-based building equipment servicing operations.


For example, the system 100 can be implemented for operations associated with any of a variety of building management systems (BMSs) or equipment or components thereof. A BMS can include a system of devices that can control, monitor, and manage equipment in or around a building or building area. The BMS can include, for example, an HVAC system, a security system, a lighting system, a fire alerting system, any other system that is capable of managing building functions or devices, or any combination thereof. The BMS can include or be coupled with items of equipment, for example and without limitation, such as heaters, chillers, boilers, air handling units, sensors, actuators, refrigeration systems, fans, blowers, heat exchangers, energy storage devices, condensers, valves, or various combinations thereof.


The items of equipment can operate in accordance with various qualitative and quantitative parameters, variables, setpoints, and/or thresholds or other criteria, for example. In some instances, the system 100 and/or the items of equipment can include or be coupled with one or more controllers for controlling parameters of the items of equipment, such as to receive control commands for controlling operation of the items of equipment via one or more wired, wireless, and/or user interfaces of the controller.


Various components of the system 100 or portions thereof can be implemented by one or more processors coupled with one or more memory devices (memory). The processors can be general purpose or specific purpose processors, application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), groups of processing components, or other suitable processing components. The processors may be configured to execute computer code and/or instructions stored in the memories or received from other computer readable media (e.g., CDROM, network storage, a remote server, etc.). The processors can be configured in various computer architectures, such as graphics processing units (GPUs), distributed computing architectures, cloud server architectures, client-server architectures, or various combinations thereof. One or more first processors can be implemented by a first device, such as an edge device, and one or more second processors can be implemented by a second device, such as a server or other device that is communicatively coupled with the first device and may have greater processor and/or memory resources.


The memories can include one or more devices (e.g., memory units, memory devices, storage devices, etc.) for storing data and/or computer code for completing and/or facilitating the various processes described in the present disclosure. The memories can include random access memory (RAM), read-only memory (ROM), hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects and/or computer instructions. The memories can include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. The memories can be communicably connected to the processors and can include computer code for executing (e.g., by the processors) one or more processes described herein.


Machine Learning Models

The system 100 can include or be coupled with one or more first models 104. The first model 104 can include one or more neural networks, including neural networks configured as generative models. For example, the first model 104 can predict or generate new data (e.g., artificial data; synthetic data; data not explicitly represented in data used for configuring the first model 104). The first model 104 can generate any of a variety of modalities of data, such as text, speech, audio, images, and/or video data. The neural network can include a plurality of nodes, which may be arranged in layers for providing outputs of one or more nodes of one layer as inputs to one or more nodes of another layer. The neural network can include one or more input layers, one or more hidden layers, and one or more output layers. Each node can include or be associated with parameters such as weights, biases, and/or thresholds, representing how the node can perform computations to process inputs to generate outputs. The parameters of the nodes can be configured by various learning or training operations, such as unsupervised learning, weakly supervised learning, semi-supervised learning, or supervised learning.


The first model 104 can include, for example and without limitation, one or more language models, LLMs, attention-based neural networks, transformer-based neural networks, generative pretrained transformer (GPT) models, bidirectional encoder representations from transformers (BERT) models, encoder/decoder models, sequence to sequence models, autoencoder models, generative adversarial networks (GANs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), diffusion models (e.g., denoising diffusion probabilistic models (DDPMs)), or various combinations thereof.


For example, the first model 104 can include at least one GPT model. The GPT model can receive an input sequence, and can parse the input sequence to determine a sequence of tokens (e.g., words or other semantic units of the input sequence, such as by using Byte Pair Encoding tokenization). The GPT model can include or be coupled with a vocabulary of tokens, which can be represented as a one-hot encoding vector, where each token of the vocabulary has a corresponding index in the encoding vector; as such, the GPT model can convert the input sequence into a modified input sequence, such as by applying an embedding matrix to the tokens of the input sequence (e.g., using a neural network embedding function), and/or applying positional encoding (e.g., sin-cosine positional encoding) to the tokens of the input sequence. The GPT model can process the modified input sequence to determine a next token in the sequence (e.g., to append to the end of the sequence), such as by determining probability scores indicating the likelihood of one or more candidate tokens being the next token, and selecting the next token according to the probability scores (e.g., selecting the candidate token having the highest probability score as the next token). For example, the GPT model can apply various attention and/or transformer based operations or networks to the modified input sequence to identify relationships between tokens for detecting the next token to form the output sequence.
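
As a toy illustration of the probability-scored next-token selection described above (greedy selection over a four-word vocabulary; not the disclosed model), a minimal sketch in Python:

    import numpy as np

    def select_next_token(logits: np.ndarray, vocab: list, temperature: float = 1.0) -> str:
        # Convert logits over the vocabulary into probability scores, then select
        # the candidate token with the highest score (greedy decoding).
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return vocab[int(np.argmax(probs))]

    vocab = ["replace", "clean", "inspect", "recalibrate"]
    print(select_next_token(np.array([2.1, 0.3, 1.4, -0.5]), vocab))  # -> "replace"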


The first model 104 can include at least one diffusion model, which can be used to generate image and/or video data. For example, the diffusion model can include a denoising neural network and/or a denoising diffusion probabilistic model neural network. The denoising neural network can be configured by applying noise to one or more training data elements (e.g., images, video frames) to generate noised data, providing the noised data as input to a candidate denoising neural network, causing the candidate denoising neural network to modify the noised data according to a denoising schedule, evaluating a convergence condition based on comparing the modified noised data with the training data instances, and modifying the candidate denoising neural network according to the convergence condition (e.g., modifying weights and/or biases of one or more layers of the neural network). In some implementations, the first model 104 includes a plurality of generative models, such as GPT and diffusion models, that can be trained separately or jointly to facilitate generating multi-modal outputs, such as technical documents (e.g., service guides) that include both text and image/video information.
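
A highly simplified PyTorch sketch of the denoising training step described above, for illustration only: a single convolution stands in for the denoising network, and a fixed noise level stands in for a full denoising schedule.

    import torch
    import torch.nn as nn

    denoiser = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # stand-in denoising network
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

    def training_step(clean_images: torch.Tensor, noise_level: float = 0.1) -> float:
        # Apply noise to training images, have the network reconstruct them, and
        # update the network parameters from the reconstruction error.
        noised = clean_images + noise_level * torch.randn_like(clean_images)
        loss = nn.functional.mse_loss(denoiser(noised), clean_images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    print(training_step(torch.rand(4, 3, 32, 32)))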


In some implementations, the first model 104 can be configured using various unsupervised and/or supervised training operations. The first model 104 can be configured using training data from various domain-agnostic and/or domain-specific data sources, including but not limited to various forms of text, speech, audio, image, and/or video data, or various combinations thereof. The training data can include a plurality of training data elements (e.g., training data instances). Each training data element can be arranged in structured or unstructured formats; for example, the training data element can include an example output mapped to an example input, such as a query representing a service request or one or more portions of a service request, and a response representing data provided responsive to the query. The training data can include data that is not separated into input and output subsets (e.g., for configuring the first model 104 to perform clustering, classification, or other unsupervised ML operations). The training data can include human-labeled information, including but not limited to feedback regarding outputs of the models 104, 116. This can allow the system 100 to generate more human-like outputs.


In some implementations, the training data includes data relating to building management systems. For example, the training data can include examples of HVAC-R data, such as operating manuals, technical data sheets, configuration settings, operating setpoints, diagnostic guides, troubleshooting guides, user reports, and technician reports. In some implementations, the training data used to configure the first model 104 includes at least some publicly accessible data, such as data retrievable via the Internet.


Referring further to FIG. 1, the system 100 can configure the first model 104 to determine one or more second models 116. For example, the system 100 can include a model updater 108 that configures (e.g., trains, updates, modifies, fine-tunes, etc.) the first model 104 to determine the one or more second models 116. In some implementations, the second model 116 can be used to provide application-specific outputs, such as outputs having greater precision, accuracy, or other metrics, relative to the first model, for targeted applications.


The second model 116 can be similar to the first model 104. For example, the second model 116 can have a similar or identical backbone or neural network architecture as the first model 104. In some implementations, the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models. The second model 116 can be configured using processes analogous to those described for configuring the first model 104.


In some implementations, the model updater 108 can perform operations on at least one of the first model 104 or the second model 116 via one or more interfaces, such as application programming interfaces (APIs). For example, the models 104, 116 can be operated and maintained by one or more systems separate from the system 100. The model updater 108 can provide training data to the first model 104, via the API, to determine the second model 116 based on the first model 104 and the training data. The model updater 108 can control various training parameters or hyperparameters (e.g., learning rates, etc.) by providing instructions via the API to manage configuring the second model 116 using the first model 104.


Data Sources

The model updater 108 can determine the second model 116 using data from one or more data sources 112. For example, the system 100 can determine the second model 116 by modifying the first model 104 using data from the one or more data sources 112. The data sources 112 can include or be coupled with any of a variety of integrated or disparate databases, data warehouses, digital twin data structures (e.g., digital twins of items of equipment or building management systems or portions thereof), data lakes, data repositories, documentation records, or various combinations thereof. In some implementations, the data sources 112 include HVAC-R data in any of text, speech, audio, image, or video data, or various combinations thereof, such as data associated with HVAC-R components and procedures including but not limited to installation, operation, configuration, repair, servicing, diagnostics, and/or troubleshooting of HVAC-R components and systems. Various data described below with reference to data sources 112 may be provided in the same or different data elements, and may be updated at various points. The data sources 112 can include or be coupled with items of equipment (e.g., where the items of equipment output data for the data sources 112, such as sensor data, etc.). The data sources 112 can include various online and/or social media sources, such as blog posts or data submitted to applications maintained by entities that manage the buildings. The system 100 can determine relations between data from different sources, such as by using timeseries information and identifiers of the sites or buildings at which items of equipment are present to detect relationships between various different data relating to the items of equipment (e.g., to train the models 104, 116 using both timeseries data (e.g., sensor data; outputs of algorithms or models, etc.) regarding a given item of equipment and freeform natural language reports regarding the given item of equipment).


The data sources 112 can include unstructured data or structured data (e.g., data that is labeled with or assigned to one or more predetermined fields or identifiers). For example, using the first model 104 and/or second model 116 to process the data can allow the system 100 to extract useful information from data in a variety of formats, including unstructured/freeform formats, which can allow service technicians to input information in less burdensome formats. The data can be of any of a plurality of formats (e.g., text, speech, audio, image, video, etc.), including multi-modal formats. For example, the data may be received from service technicians in forms such as text (e.g., laptop/desktop or mobile application text entry), audio, and/or video (e.g., dictating findings while capturing video).


The data sources 112 can include engineering data regarding one or more items of equipment. The engineering data can include manuals, such as installation manuals, instruction manuals, or operating procedure guides. The engineering data can include specifications or other information regarding operation of items of equipment. The engineering data can include engineering drawings, process flow diagrams, refrigeration cycle parameters (e.g., temperatures, pressures), or various other information relating to structures and functions of items of equipment.


In some implementations, the data sources 112 can include operational data regarding one or more items of equipment. The operational data can represent detected information regarding items of equipment, such as sensor data, logged data, user reports, or technician reports. The operational data can include, for example, service tickets generated responsive to requests for service, work orders, data from digital twin data structures maintained by an entity of the item of equipment, outputs or other information from equipment operation models (e.g., chiller vibration models), or various combinations thereof. Logged data, user reports, service tickets, billing records, time sheets, and various other such data can provide temporal information, such as how long service operations may take, or durations of time between service operations, which can allow the system 100 to predict resources to use for performing service as well as when to request service.


The data sources 112 can include, for instance, warranty data. The warranty data can include warranty documents or agreements that indicate conditions under which various entities associated with items of equipment are to provide service, repair, or other actions corresponding to items of equipment, such as actions corresponding to service requests. Warranty data can include warranty requests, conversational data from conversations between customers making warranty claims and warranty providers, descriptions of equipment issues (faults, etc.), descriptions of causes of equipment issues, solutions to such causes (e.g., service actions taken, replacement parts provided, etc.). Accordingly, warranty data can include a combination of structured data (fields for equipment model, equipment type, claim type, cost, etc.) and unstructured data (e.g., natural language input from customer claims, service technicians, and warranty claim processing staff). Warranty data can be provided from a database of warranty claims made to an equipment provider over multiple years (e.g., greater than 10 years), for a large number of claims (e.g., greater than 100,000 claims), for a large number of different equipment units (e.g., greater than 30,000 chillers), for a variety of different equipment types (e.g., greater than 30 chiller types), for a variety of different components/parts of such equipment (e.g., relating to greater than 200 components that can fail), resulting in a very large dataset in natural language word volume (e.g., greater than 75,000,000 words). As such, extracting meaning from such data using manual human reading and interpretation is unfeasible, and the computerized approaches herein using language models, AI models, and the like provide technical solutions to processing warranty data.


To provide one example of warranty claim information that can be available in a warranty database, one warranty entry may be as follows: p: Chiller tripping building breaker/C: compressor failure/S: order compressor & replace. Estimated cost is unknown at this time as Service Mgr has been working with Baltimore to get compressor component part numbers set up to be ordered. MUST LIST RMA #BEFORE CLAIM CAN BE PROCESSED. Returned (1) 015 04560 104 bitzer compressor via Con-Way, Pro #352-729871 This claim is complete, please process. WHAT IS YOUR RMA #? NO RECORD OF RECEIVING RMA—SEE RETURN INFOReturned (1) 015 04560 104 bitzer compressor via Con-Way, Pro #352-729871 NO RECORD OF RECEIVING RMA—SEE RETURN INFOReturned (1) 015 04560 104 bitzer compressor via Con-Way, Pro #352-729871 Took final oil samples on 9-22. Please advise if you need copies. Thank you ALL COMPRESSOR FAILURES IN WARRANTY REQUIRE AND RMA #. WHEN WAS COMPRESSOR SHIPPED AND TO WHERE? Compressor shipped 9/16/15 to JCI Warranty Center, 631 Richland Ave, York PA 17403, BOL #352-729871. Delivered 9/21/15 by G Lehr. WHY DIDN'T YOU GET AN RMA #? THIS CLAIM WILL BE SUBJECT FOR REJECTION WITHOUT THE RMA #! NO RESPONSE AFTER 40 DAYS—REJECTED—AUTO RESPONSE. This warranty was worked with the technical support staff, they determined to replace one compressor instead of all 3 tandems. PLEASE LIST WHY YOU DIDN'T GET AN RMA #OR CLAIM WILL BE REJECTED. This is a set of tandem compressors. BSM worked with tech support to determine if all 3 or just 1 of the compressors would be replaced. It was finally determined that only 1 compressor would be replaced. No RMA was issued during this process. IT IS YOUR RESPONSIBILITY TO GET AN RMA #. THIS PROCESS HAS BEEN AROUND FOR YEARS. THE NEXT COMPRESSOR WARRANTY SUBMITTED WITHOUT AND RMA #WILL BE REJECTED.


To provide another example, an entry in warranty claim data may show: Unit—5 year—Parts & Labor; Starter—5 year—Parts & Labor; Refrigerant—5 year—Parts P: Chiller 1 High Phase A Input Baseplate Temp fault message; C: Dark Blue Coolant fouled causing high baseplate temp; S: Disassemble drive and clean; VSD Model Number Hyp1278xhc30b-46a. Compressor Model Number M6c-331fac; Activities are done—please review. Thank you! SD Warranty Completed WHAT COOLANT ORIGINALLY SHIPPED WITH THIS UNIT? Checking on this—will respond when I have a confirmation. THANK YOU. PLEASE UPDATE WHEN ABLE. Coolant was dark blue. we worked directly with the factory engineering group on this issue and followed their instructions. Please process and accept this claim. thank you! THANK YOU. WHAT WAS THE COOLANT FOULED WITH? WAS THE SYSTEM ONLY FLUSHED, OR TAKEN APART AND CLEANED? IF IT WAS ONLY FLUSHED, HOW MANY TIMES WAS IT FLUSHED? NO RESPONSE AFTER 40 DAYS—REJECTED—AUTO RESPONSE I responded to this claim on 8-22 and was update on approval. This should not be rejected! Please review and advise what else is needed to approve this claim. Thank you. THAT COMMENT DID NOT POST TO MY SYSTEM. PLEASE REPOST THE COMMENT FROM 08/22. Please confirm you have received my notes on this claim. I am being told this is on the verge of rejection due to non response. Please advise asap. Thank you. I CAN SEE THE LAST COMMENT POSTED, BUT I STILL SEE NO NEW COMMENT WITH THE DETAILS I REQUESTED. PLEASE REPOST THE COMMENT FROM AUGUST THAT NEVER CAME THROUGH. THIS CLAIM IS NOT GOING TO BE REJECTED NO RESPONSE AFTER 40 DAYS—REJECTED—AUTO RESPONSE I have responded multiple times on this claim, via email and this space.—PTS has not gotten back to me. This claim SHOULD NOT be rejected. Please review my last responses. Thank you. AGAIN, PLEASE REPOST THE INFORMATION SUPPLIED ON 08/02 IN THE COMMENTS OF THIS CLAIM. TO DATE NO COMMENT HAS BEEN ADDED WITH THE REQUESTED DETAIL. Please confirm you can see my latest comment (prior to this one) I just reposted earlier today. thank you! See below—which was cut and pasted from 8-22. This has also been emailed directly to you previously. Please see tech notes on how this was handled. It was flushed several times—not sure what you mean when you ask ‘what the coolant was fouled with’—Below are the tech notes. 5/21—Chiller 1 tripping on high input temperature phase A. It is not consistently failing it will run some days with no issues. The drive has had conditioner procedure done to it. It tripped off today at 70% fla. The tower temperature is 66 degrees, last week temperatures was in 80 and unit stayed on line. But it also has tripped on tower temps in the 80's. 05/22/—Started. Disassembling power assembly to clean SADM445330 YMC2 Chiller 1 05/22/2019—Locked out chiller 1. Started disassembling the vsd. SADM445330 YMC2 Chiller 1 05/23/2019 Removed Phase A and disassembled the rectifier side. Found 1 igbt that cooling path was blocked. Cleaned flushed with distilled water and leak checked. 05/23/2019 Assisted Dan with removing the left power assembly. Removed & cleaned the IGBT's on the rectifier side. Reassembled & flushed power assembly several times. Flushed the other 2 power assemblies & the bottom portion of the cooling circuit. 05123/2019; 05124/2019 Flushed out the top portion of the cooling system of the vsd. Nate dropped off the new o-rings. I replaced the inverter side o-rings & leak checked the power module after i put it back together. Went to shop to pick up cleaner & new coolant. 
05/24/2019 Picked up parts and delivers to St Vincent for the drive repair. I had to leave for a service call 05/28/2019 Continued working on power module on chiller 1. 05/29/2019 Continued cleaning the IGBT's on the middle & right power modules. Then flushed them out. 05/29/2019 Removed igbt from phase b and C and cleaned them. Put them back together and connected lines back up. Flushed each phase out individually with water and leak tested. 05/30/2019 Reassembled power drives to the vsd. Ran conditioner & flushed a couple times. Left distilled water over night. 05/30/2019 We finished installing phase A and reinstalled bus bars. We turned power back on and began conditioning treatment. Drained it and flushed it 3 times. We will do the final flush tomorrow and check levels. 05/31/2019 Arrived on site and started flushing the vsd drive. Flushed it out with distilled water and then put the right coolant in. Cleaned up the work area before leaving. 05/31/2019 I drained the water and ran conditioner through it for 2 hours. I then flushed it 4 times with distilled water and tested conductivity levels they were at 317 us. I then flushed system with coolant light blue and filled back up with new coolant. I tested the PH, molybdenum, and sodium nitrite. Ph-8.38 Molybdenum-80 ppm Sodium nitrite-1400 ppm All acceptable We started the chiller back up and everything is running good.


Accordingly, warranty data can include a variety of technical information (e.g., describing equipment issues, causes of faults, solutions to equipment issues, etc.) as well as procedural issues, interpersonal communications, etc. with less relevance to generating information relating to services and replacement parts provided for particular equipment. A technical challenge thus exists in extracting relevant information from such warranty data.
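
One way (illustrative only) to frame the extraction task for records like the examples above is a prompt that asks the model to isolate the problem, cause, and solution while discarding claim-processing discussion; the prompt wording below is hypothetical.

    def build_extraction_prompt(warranty_entry: str) -> str:
        # Ask the model to keep only the technically relevant fields and ignore
        # procedural back-and-forth (RMA numbers, claim-processing comments, etc.).
        return (
            "From the warranty record below, extract three fields:\n"
            "Problem: the reported equipment issue\n"
            "Cause: the diagnosed cause\n"
            "Solution: the service performed or parts provided\n"
            "Ignore claim-processing, shipping, and RMA discussion.\n\n"
            "Warranty record:\n" + warranty_entry
        )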


The data sources 112 can include service data. The service data can include data from any of various service providers, such as service reports. The service data can indicate service procedures performed, including associating service procedures with initial service requests and/or with sensor data conditions that triggered service and/or sensor data measured during service processes. Service reports can include natural language descriptions of services performed by technicians, structured data indicating services performed (e.g., options selected from pre-set service options), invoices for services performed, etc. In some embodiments, the service data includes operating data of equipment that was serviced according to the service reports (e.g., showing operations before and after servicing). In some embodiments, the service data includes user guides, service manuals, training materials for service technicians, and/or other content describing equipment servicing. The following passages are examples of service reports that can be included as service data in the data sources 112:

    • Completed installing bolts and torquing on liquid refrigerant line sppol piece, ball valve and variable orifice valve. Removed O-ring fittings caps from oil cooler and fitting on the bottom of Evaporator. Removed O-rings from fittings on oil cooler and evaporator fitting. Cleaned fittings and installed new O-rings. Installed new line that was made. From oil cooler to the bottom of the Evaporator. Added R-22 trace gas to chiller. Pressurized Chiller #5 with 80 lbs of nitrogen on Chiller.
    • Went to shop to pickup part and delivered to the site. Once arrived on-site check in with security then completed PJC. Removed defective level control actuator from Chiller 1 and being installing new. Once installed manually opened and closed valve through the opti-view panel. Had operations start Chiller and monitored operation. Chiller is maintaining proper refrigerant level in the condenser and is running normal at this time. Updated customer of Chiller status and checkout with security before leaving site.


Accordingly, service data can include a variety of technical information (e.g., describing equipment issues, causes of faults, solutions to equipment issues, etc.) as well as procedural issues, travel information, etc. with less relevance to generating information relating to services provided for various equipment. A technical challenge thus exists in extracting relevant information from such service data.


In some implementations, the data sources 112 can include parts data, including but not limited to parts usage and sales data. For example, the data sources 112 can indicate various parts associated with installation or repair of items of equipment. The data sources 112 can indicate tools for performing service and/or installing parts.


The system 100 can include, with the data of the data sources 112, labels to facilitate cross-reference between items of data that may relate to common items of equipment, sites, service technicians, customers, or various combinations thereof. For example, data from disparate sources may be labeled with time data, which can allow the system 100 (e.g., by configuring the models 104, 116) to increase a likelihood of associating information from the disparate sources due to the information being detected or recorded (e.g., as service reports) at the same time or near in time.
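
As a hypothetical illustration of using time labels to associate disparate sources, the following pandas sketch aligns a freeform service report with the nearest earlier sensor reading for the same item of equipment; the column names and values are invented for the example.

    import pandas as pd

    sensor = pd.DataFrame({
        "timestamp": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 09:00"]),
        "equipment_id": ["chiller-1", "chiller-1"],
        "discharge_pressure_psi": [182.0, 221.5],
    }).sort_values("timestamp")

    reports = pd.DataFrame({
        "timestamp": pd.to_datetime(["2023-05-01 09:10"]),
        "equipment_id": ["chiller-1"],
        "report": ["High discharge pressure alarm; condenser tubes fouled."],
    }).sort_values("timestamp")

    # Each report is linked to the most recent earlier sensor reading for the same equipment.
    linked = pd.merge_asof(reports, sensor, on="timestamp", by="equipment_id")
    print(linked)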


For example, the data sources 112 can include data that can be particular to specific or similar items of equipment, buildings, equipment configurations, environmental states, or various combinations thereof. In some implementations, the data includes labels or identifiers of such information, such as to indicate locations, weather conditions, timing information, uses of the items of equipment or the buildings or sites at which the items of equipment are present, etc. This can enable the models 104, 116 to detect patterns of usage (e.g., spikes; troughs; seasonal or other temporal patterns) or other information that may be useful for determining causes of issues or causes of service requests, or predict future issues, such as to allow the models 104, 116 to be trained using information indicative of causes of issues across multiple items of equipment (which may have the same or similar causes even if the data regarding the items of equipment is not identical). For example, an item of equipment may be at a site that is a museum; by relating site usage or occupancy data with data regarding the item of equipment, such as sensor data and service reports, the system 100 can configure the models 104, 116 to determine a high likelihood of issues occurring before events associated with high usage (e.g., gala, major exhibit opening), and can generate recommendations to perform diagnostics or servicing prior to the events.


Model Configuration

Referring further to FIG. 1, the model updater 108 can perform various machine learning model configuration/training operations to determine the second models 116 using the data from the data sources 112. For example, the model updater 108 can perform various updating, optimization, retraining, reconfiguration, fine-tuning, or transfer learning operations, or various combinations thereof, to determine the second models 116. The model updater 108 can configure the second models 116, using the data sources 112, to generate outputs (e.g., completions) in response to receiving inputs (e.g., prompts), where the inputs and outputs can be analogous to data of the data sources 112.


For example, the model updater 108 can identify one or more parameters (e.g., weights and/or biases) of one or more layers of the first model 104, and maintain (e.g., freeze, maintain as the identified values while updating) the values of the one or more parameters of the one or more layers. In some implementations, the model updater 108 can modify the one or more layers, such as to add, remove, or change an output layer of the one or more layers, or to not maintain the values of the one or more parameters. The model updater 108 can select at least a subset of the identified one or more parameters to maintain according to various criteria, such as user input or other instructions indicative of an extent to which the first model 104 is to be modified to determine the second model 116. In some implementations, the model updater 108 can modify the first model 104 so that an output layer of the first model 104 corresponds to output to be determined for applications 120.


Responsive to selecting the one or more parameters to maintain, the model updater 108 can apply, as input to the second model 116 (e.g., to a candidate second model 116, such as the modified first model 104, such as the first model 104 having the identified parameters maintained as the identified values), training data from the data sources 112. For example, the model updater 108 can apply the training data as input to the second model 116 to cause the second model 116 to generate one or more candidate outputs.


The model updater 108 can evaluate a convergence condition to modify the candidate second model 116 based at least on the one or more candidate outputs and the training data applied as input to the candidate second model 116. For example, the model updater 108 can evaluate an objective function of the convergence condition, such as a loss function (e.g., L1 loss, L2 loss, root mean square error, cross-entropy or log loss, etc.) based on the one or more candidate outputs and the training data; this evaluation can indicate how closely the candidate outputs generated by the candidate second model 116 correspond to the ground truth represented by the training data. The model updater 108 can use any of a variety of optimization algorithms (e.g., gradient descent, stochastic gradient descent, Adam optimization, etc.) to modify one or more parameters (e.g., weights or biases of the layer(s) of the candidate second model 116 that are not frozen) of the candidate second model 116 according to the evaluation of the objective function. In some implementations, the model updater 108 can use various hyperparameters to evaluate the convergence condition and/or perform the configuration of the candidate second model 116 to determine the second model 116, including but not limited to hyperparameters such as learning rates, numbers of iterations or epochs of training, etc.
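
A minimal PyTorch sketch of the configuration loop described above: identified parameters of an earlier layer are maintained (frozen) while the remaining parameters are updated against an objective function. The tiny network and random training data are placeholders, not the disclosed models.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    for param in model[0].parameters():
        param.requires_grad = False            # maintain identified parameter values

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)   # learning-rate hyperparameter
    loss_fn = nn.CrossEntropyLoss()            # objective function of the convergence condition

    inputs, targets = torch.rand(8, 16), torch.randint(0, 4, (8,))
    for epoch in range(3):                     # number-of-epochs hyperparameter
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets) # compare candidate outputs with training data
        loss.backward()
        optimizer.step()                       # modify only the unfrozen parameters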


As described further herein with respect to applications 120, in some implementations, the model updater 108 can select the training data from the data of the data sources 112 to apply as the input based at least on a particular application of the plurality of applications 120 for which the second model 116 is to be used. For example, the model updater 108 can select data from the parts data source 112 for the product recommendation generator application 120, or select various combinations of data from the data sources 112 (e.g., engineering data, operational data, and service data) for the service recommendation generator application 120. The model updater 108 can apply various combinations of data from various data sources 112 to facilitate configuring the second model 116 for one or more applications 120.


In some implementations, the system 100 can perform at least one of conditioning, classifier-based guidance, or classifier-free guidance to configure the second model 116 using the data from the data sources 112. For example, the system 100 can use classifiers associated with the data, such as identifiers of the item of equipment, a type of the item of equipment, a type of entity operating the item of equipment, a site at which the item of equipment is provided, or a history of issues at the site, to condition the training of the second model 116. For example, the system 100 can combine (e.g., concatenate) various such classifiers with the data for inputting to the second model 116 during training, for at least a subset of the data used to configure the second model 116, which can enable the second model 116 to be responsive to analogous information for runtime/inference time operations.
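
As an illustrative sketch of concatenating classifiers with training data, the helper below prepends classifier metadata to a record so that the same contextual fields can be supplied at training time and at inference time; the field names and values are hypothetical.

    def condition_example(record_text: str, classifiers: dict) -> str:
        # Concatenate classifier metadata (equipment type, site, history, etc.)
        # with the raw record text.
        header = " | ".join(f"{key}: {value}" for key, value in classifiers.items())
        return f"[{header}]\n{record_text}"

    print(condition_example(
        "Chiller tripping on high input baseplate temperature, phase A.",
        {"equipment_type": "chiller", "site_type": "hospital", "issue_history": "VSD coolant fouling"},
    ))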


Applications

Referring further to FIG. 1, the system 100 can use outputs of the one or more second models 116 to implement one or more applications 120. For example, the second models 116, having been configured using data from the data sources 112, can be capable of precisely generating outputs that represent useful, timely, and/or real-time information for the applications 120. In some implementations, each application 120 is coupled with a corresponding second model 116 that is specifically configured to generate outputs for use by the application 120. Various applications 120 can be coupled with one another, such as to provide outputs from a first application 120 as inputs or portions of inputs to a second application 120.


The applications 120 can include any of a variety of desktop, web-based/browser-based, or mobile applications. For example, the applications 120 can be implemented by enterprise management software systems, employee or other user applications (e.g., applications that relate to BMS functionality such as temperature control, user preferences, conference room scheduling, etc.), equipment portals that provide data regarding items of equipment, or various combinations thereof. The applications 120 can include user interfaces, wizards, checklists, conversational interfaces, chatbots, configuration tools, or various combinations thereof. The applications 120 can receive an input, such as a prompt (e.g., from a user), provide the prompt to the second model 116 to cause the second model 116 to generate an output, such as a completion in response to the prompt, and present an indication of the output. The applications 120 can receive inputs and/or present outputs in any of a variety of presentation modalities, such as text, speech, audio, image, and/or video modalities. For example, the applications 120 can receive unstructured or freeform inputs from a user, such as a service technician, and generate reports in a standardized format, such as a customer-specific format. This can allow, for example, technicians to automatically, and flexibly, generate customer-ready reports after service visits without requiring strict input by the technician or manually sitting down and writing reports; to receive inputs as dictations in order to generate reports; to receive inputs in any form or a variety of forms, and use the second model 116 (which can be trained to cross-reference metadata in different portions of inputs and relate together data elements) to generate output reports (e.g., the second model 116, having been configured with data that includes time information, can use timestamps of input from dictation and timestamps of when an image is taken, and place the image in the report in a target position or label based on time correlation).
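
As a minimal sketch of the time-correlation idea described above, assuming dictation segments and captured images each carry timestamps, an image can be placed in the report section whose timestamp is closest; the data shapes are illustrative assumptions.

def place_images_by_time(dictation_segments, images):
    """Attach each image to the dictation segment with the nearest timestamp so the
    image lands in the corresponding report section. Inputs are lists of
    (datetime, text) and (datetime, image_path) tuples; shapes are assumed."""
    report = [{"time": t, "text": text, "images": []} for t, text in dictation_segments]
    for img_time, path in images:
        nearest = min(report, key=lambda s: abs((s["time"] - img_time).total_seconds()))
        nearest["images"].append(path)
    return report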


In some implementations, the applications 120 include at least one virtual assistant (e.g., virtual assistance for technician services) application 120. The virtual assistant application can provide various services to support technician operations, such as presenting information from service requests, receiving queries regarding actions to perform to service items of equipment, and presenting responses indicating actions to perform to service items of equipment. The virtual assistant application can receive information regarding an item of equipment to be serviced, such as sensor data, text descriptions, or camera images, and process the received information using the second model 116 to generate corresponding responses.


For example, the virtual assistant application 120 can be implemented in a UI/UX wizard configuration, such as to provide a sequence of requests for information from the user (the sequence may include requests that are at least one of predetermined or dynamically generated responsive to inputs from the user for previous requests). For example, the virtual assistant application 120 can provide one or more requests for users such as service technicians, facility managers, or other occupants, and provide the received responses to at least one of the second model 116 or a root cause detection function (e.g., algorithm, model, data structure mapping inputs to candidate causes, etc.) to determine a prediction of a cause of the issue of the item of equipment and/or solutions. The virtual assistant application 120 can use requests for information such as for unstructured text by which the user describes characteristics of the item of equipment relating to the issue; answers expected to correspond to different scenarios indicative of the issue; and/or image and/or video input (e.g., images of problems, equipment, spaces, etc. that can provide more context around the issue and/or configurations). For example, responsive to receiving a response via the virtual assistant application 120 indicating that the problem is with temperature in the space, the system 100 can request, via the virtual assistant application 120, information regarding HVAC-R equipment associated with the space, such as pictures of the space, an air handling unit, a chiller, or various combinations thereof.


The virtual assistant application 120 can include a plurality of applications 120 (e.g., variations of interfaces or customizations of interfaces) for a plurality of respective user types. For example, the virtual assistant application 120 can include a first application 120 for a customer user, and a second application 120 for a service technician user. The virtual assistant applications 120 can allow for updating and other communications between the first and second applications 120 as well as the second model 116. Using one or more of the first application 120 and the second application 120, the system 100 can manage continuous/real-time conversations for one or more users, and evaluate the users' engagement with the information provided (e.g., did the user, customer, service technician, etc., follow the provided steps for responding to the issue or performing service, did the user discontinue providing inputs to the virtual assistant application 120, etc.), such as to enable the system 100 to update the information generated by the second model 116 for the virtual assistant application 120 according to the engagement. In some implementations, the system 100 can use the second model 116 to detect sentiment of the user of the virtual assistant application 120, and update the second model 116 according to the detected sentiment, such as to improve the experience provided by the virtual assistant application 120.


In some embodiments, the virtual assistant application 120 generates service analytics or other content relating to services provided or to be provided (e.g., equipment maintenance and repair services). For example, the virtual assistant application 120 may generate a summarization of services provided to a particular unit of equipment, at a particular building, etc. As another example, the virtual assistant application 120 may generate a summarization of services typically provided to a type of equipment over time (e.g., to a component of a chiller after a certain number of years in use) and/or otherwise summarizing services that may be a best practice to provide for preventative maintenance, to improve efficiency, etc. In some embodiments, the virtual assistant application 120 can generate a labelling of services, for example classifying a freeform description of services provided into a service category (e.g., for further data analysis, for inclusion in invoices, for inclusion in equipment service records, etc.). In some embodiments, the virtual assistant application 120 can generate an investigative service report (e.g., an explanation of investigations, tests, troubleshooting, etc. conducted for building equipment, the findings of such steps, and any actions taken to provide services responsive to such findings).


The applications 120 can include at least one document writer application 120, such as a technical document writer. The document writer application 120 can facilitate preparing structured (e.g., form-based) and/or unstructured documentation, such as documentation associated with service requests. For example, the document writer application 120 can present a user interface corresponding to a template document to be prepared that is associated with at least one of a service request or the item of equipment for which the service request is generated, such as to present one or more predefined form sections or fields. The document writer application 120 can use inputs, such as prompts received from the users and/or technical data provided by the user regarding the item of equipment, such as sensor data, text descriptions, or camera images, to generate information to include in the documentation. For example, the document writer application 120 can provide the inputs to the second model 116 to cause the second model 116 to generate completions for text information to include in the fields of the documentation.


The applications 120 can include, in some implementations, at least one diagnostics and troubleshooting application 120. The diagnostics and troubleshooting application 120 can receive inputs including at least one of a service request or information regarding the item of equipment to be serviced, such as information identified by a service technician. The diagnostics and troubleshooting application 120 can provide the inputs to a corresponding second model 116 to cause the second model 116 to generate outputs such as indications of potential items to be checked regarding the item of equipment, modifications or fixes to make to perform the service, or values or ranges of values of parameters of the item of equipment that may be indicative of specific issues for the service technician to address or repair.


The applications 120 can include at least one service recommendation generator application 120. The service recommendation generator application 120 can receive inputs such as a service request or information regarding the item of equipment to be serviced, and provide the inputs to the second model 116 to cause the second model 116 to generate outputs for presenting service recommendations, such as actions to perform to address the service request.


In some implementations, the applications 120 can include a product recommendation generator application 120. The product recommendation generator application 120 can process inputs such as information regarding the item of equipment or the service request, using one or more second models 116 (e.g., models trained using parts data from the data sources 112), to determine a recommendation of a part or product to replace or otherwise use for repairing the item of equipment.


Feedback Training

Referring further to FIG. 1, the system 100 can include at least one feedback trainer 128 coupled with at least one feedback repository 124. The system 100 can use the feedback trainer 128 to increase the precision and/or accuracy of the outputs generated by the second models 116 according to feedback provided by users of the system 100 and/or the applications 120.


The feedback repository 124 can include feedback received from users regarding output presented by the applications 120. For example, for at least a subset of outputs presented by the applications 120, the applications 120 can present one or more user input elements for receiving feedback regarding the outputs. The user input elements can include, for example, indications of binary feedback regarding the outputs (e.g., good/bad feedback; feedback indicating the outputs do or do not meet the user's criteria, such as criteria regarding technical accuracy or precision); indications of multiple levels of feedback (e.g., scoring the outputs on a predetermined scale, such as a 1-5 scale or 1-10 scale); freeform feedback (e.g., text or audio feedback); or various combinations thereof.
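
One way to represent the feedback forms described above is a simple record type; the field names below are illustrative assumptions rather than the schema of the feedback repository 124.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FeedbackRecord:
    output_id: str                      # output for which the feedback was received
    model_id: str                       # second model 116 used to generate the output
    binary_ok: Optional[bool] = None    # good/bad feedback
    scale_score: Optional[int] = None   # e.g., rating on a 1-5 or 1-10 scale
    freeform: Optional[str] = None      # text or transcribed audio feedback
    input_context: dict = field(default_factory=dict)  # e.g., service request information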


The system 100 can store and/or maintain feedback in the feedback repository 124. In some implementations, the system 100 stores the feedback with one or more data elements associated with the feedback, including but not limited to the outputs for which the feedback was received, the second model(s) 116 used to generate the outputs, and/or input information used by the second models 116 to generate the outputs (e.g., service request information; information captured by the user regarding the item of equipment).


The feedback trainer 128 can update the one or more second models 116 using the feedback. The feedback trainer 128 can be similar to the model updater 108. In some implementations, the feedback trainer 128 is implemented by the model updater 108; for example, the model updater 108 can include or be coupled with the feedback trainer 128. The feedback trainer 128 can perform various configuration operations (e.g., retraining, fine-tuning, transfer learning, etc.) on the second models 116 using the feedback from the feedback repository 124. In some implementations, the feedback trainer 128 identifies one or more first parameters of the second model 116 to maintain as having predetermined values (e.g., freeze the weights and/or biases of one or more first layers of the second model 116), and performs a training process, such as a fine-tuning process, to configure one or more second parameters of the second model 116 using the feedback (e.g., parameters of one or more second layers of the second model 116, such as output layers or output heads of the second model 116).


In some implementations, the system 100 may not include and/or use the model updater 108 (or the feedback trainer 128) to determine the second models 116. For example, the system 100 can include or be coupled with an output processor (e.g., an output processor similar or identical to accuracy checker 316 described with reference to FIG. 3) that can evaluate and/or modify outputs from the first model 104 prior to operation of applications 120, including to perform any of various post-processing operations on the output from the first model 104. For example, the output processor can compare outputs of the first model 104 with data from data sources 112 to validate the outputs of the first model 104 and/or modify the outputs of the first model 104 (or output an error) responsive to the outputs not satisfying a validation condition.


Connected Machine Learning Models

Referring further to FIG. 1, the second model 116 can be coupled with one or more third models, functions, or algorithms for training/configuration and/or runtime operations. The third models can include, for example and without limitation, any of various models relating to items of equipment, such as energy usage models, sustainability models, carbon models, air quality models, or occupant comfort models. For example, the second model 116 can be used to process unstructured information regarding items of equipment into predefined template formats compatible with various third models, such that outputs of the second model 116 can be provided as inputs to the third models; this can allow more accurate training of the third models, more training data to be generated for the third models, and/or more data available for use by the third models. The second model 116 can receive inputs from one or more third models, which can provide additional data to the second model 116 for processing.


Automated Service Scheduling and Provisioning

The system 100 can be used to automate operations for scheduling, provisioning, and deploying service technicians and resources for service technicians to perform service operations. For example, the system 100 can use at least one of the first model 104 or the second model 116 to determine, based on processing information regarding service operations for items of equipment relative to completion criteria for the service operation, particular characteristics of service operations such as experience parameters of scheduled service technicians, identifiers of parts provided for the service operations, geographical data, types of customers, types of problems, or information content provided to the service technicians to facilitate the service operation, where such characteristics correspond to the completion criteria being satisfied (e.g., where such characteristics correspond to an increase in likelihood of the completion criteria being satisfied relative to other characteristics for service technicians, parts, information content, etc.). For example, the system 100 can determine, for a given item of equipment, particular parts to include on a truck to be sent to the site of the item of equipment. As such, the system 100, responsive to processing inputs at runtime such as service requests, can automatically and more accurately identify service technicians and parts to direct to the item of equipment for the service operations. The system 100 can use timing information to perform batch scheduling for multiple service operations and/or multiple technicians for the same or multiple service operations. The system 100 can perform batch scheduling for multiple trucks for multiple items of equipment, such as to schedule a first one or more parts having a greater likelihood for satisfying the completion criteria for a first item of equipment on a first truck, and a second one or more parts having a greater likelihood for satisfying the completion criteria for a second item of equipment on a second truck.


II. System Architectures for Generative AI Applications for Building Management System and Equipment Servicing


FIG. 2 depicts an example of a system 200. The system 200 can include one or more components or features of the system 100, such as any one or more of the first model 104, data sources 112, second model 116, applications 120, feedback repository 124, and/or feedback trainer 128. The system 200 can perform specific operations to enable generative AI applications for building management systems and equipment servicing, such as various manners of processing input data into training data (e.g., tokenizing input data; forming input data into prompts and/or completions), and managing training and other machine learning model configuration processes. Various components of the system 200 can be implemented using one or more computer systems, which may be provided on the same or different processors (e.g., processors communicatively coupled via wired and/or wireless connections).


The system 200 can include at least one data repository 204, which can be similar to the data sources 112 described with reference to FIG. 1. For example, the data repository 204 can include a transaction database 208, which can be similar or identical to one or more of warranty data or service data of data sources 112. For example, the transaction database 208 can include data such as parts used for service transactions; sales data indicating various service transactions or other transactions regarding items of equipment; warranty and/or claims data regarding items of equipment; and service data.


The data repository 204 can include a product database 212, which can be similar or identical to the parts data of the data sources 112. The product database 212 can include, for example, data regarding products available from various vendors, specifications or parameters regarding products, and indications of products used for various service operations. The product database 212 can include data such as events or alarms associated with products; logs of product operation; and/or time series data regarding product operation, such as longitudinal data values of operation of products and/or building equipment.


The data repository 204 can include an operations database 216, which can be similar or identical to the operations data of the data sources 112. For example, the operations database 216 can include data such as manuals regarding parts, products, and/or items of equipment; customer service data; and/or reports, such as operation or service logs.


In some implementations, the data repository 204 can include an output database 220, which can include data of outputs that may be generated by various machine learning models and/or algorithms. For example, the output database 220 can include values of pre-calculated predictions and/or insights, such as parameters regarding operation of items of equipment, such as setpoints, changes in setpoints, flow rates, control schemes, identifications of error conditions, or various combinations thereof.


As depicted in FIG. 2, the system 200 can include a prompt management system 228. The prompt management system 228 can include one or more rules, heuristics, logic, policies, algorithms, functions, machine learning models, neural networks, scripts, or various combinations thereof to perform operations including processing data from data repository 204 into training data for configuring various machine learning models. For example, the prompt management system 228 can retrieve and/or receive data from the data repository 204, and determine training data elements that include examples of inputs and outputs for generation by machine learning models, such as a training data element that includes a prompt and a completion corresponding to the prompt, based on the data from the data repository 204.


In some implementations, the prompt management system 228 includes a pre-processor 232. The pre-processor 232 can perform various operations to prepare the data from the data repository 204 for prompt generation. For example, the pre-processor 232 can perform any of various filtering, compression, tokenizing, or combining (e.g., combining data from various databases of the data repository 204) operations.


The prompt management system 228 can include a prompt generator 236. The prompt generator 236 can generate, from data of the data repository 204, one or more training data elements that include a prompt and a completion corresponding to the prompt. In some implementations, the prompt generator 236 receives user input indicative of prompt and completion portions of data. For example, the user input can indicate template portions representing prompts of structured data, such as predefined fields or forms of documents, and corresponding completions provided for the documents. The user input can assign prompts to unstructured data. In some implementations, the prompt generator 236 automatically determines prompts and completions from data of the data repository 204, such as by using any of various natural language processing algorithms to detect prompts and completions from data. In some implementations, the system 200 does not identify distinct prompts and completions from data of the data repository 204.
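
For illustration, a minimal sketch of forming a prompt-and-completion training data element from a structured service record follows; the field names and the prompt template are assumptions for the sketch, not the prompt generator 236 itself.

def to_training_element(service_record: dict) -> dict:
    """Form a prompt/completion pair from a service record. Assumes the record carries
    'issue' and 'resolution' fields; real records and templates may differ."""
    prompt = (
        f"Equipment: {service_record.get('equipment_type', 'unknown')}\n"
        f"Reported issue: {service_record['issue']}\n"
        "What service should be performed?"
    )
    return {"prompt": prompt, "completion": service_record["resolution"]}

element = to_training_element({
    "equipment_type": "rooftop unit",
    "issue": "Unit short-cycles and supply air temperature drifts high.",
    "resolution": "Replaced failed suction pressure transducer and verified refrigerant charge.",
})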


Referring further to FIG. 2, the system 200 can include a training management system 240. The training management system 240 can include one or more rules, heuristics, logic, policies, algorithms, functions, machine learning models, neural networks, scripts, or various combinations thereof to perform operations including controlling training of machine learning models, including performing fine tuning and/or transfer learning operations.


The training management system 240 can include a training manager 244. The training manager 244 can incorporate features of at least one of the model updater 108 or the feedback trainer 128 described with reference to FIG. 1. For example, the training manager 244 can provide training data including a plurality of training data elements (e.g., prompts and corresponding completions) to the model system 260 as described further herein to facilitate training machine learning models.


In some implementations, the training management system 240 includes a prompts database 248. For example, the training management system 240 can store one or more training data elements from the prompt management system 228, such as to facilitate asynchronous and/or batched training processes.


The training manager 244 can control the training of machine learning models using information or instructions maintained in a model tuning database 256. For example, the training manager 244 can store, in the model tuning database 256, various parameters or hyperparameters for models and/or model training.


In some implementations, the training manager 244 stores a record of training operations in a jobs database 252. For example, the training manager 244 can maintain data such as a queue of training jobs, parameters or hyperparameters to be used for training jobs, or information regarding performance of training.


Referring further to FIG. 2, the system 200 can include at least one model system 260 (e.g., one or more language model systems). The model system 260 can include one or more rules, heuristics, logic, policies, algorithms, functions, machine learning models, neural networks, scripts, or various combinations thereof to perform operations including configuring one or more machine learning models 268 based on instructions from the training management system 240. In some implementations, the training management system 240 implements the model system 260. In some implementations, the training management system 240 can access the model system 260 using one or more APIs, such as to provide training data and/or instructions for configuring machine learning models 268 via the one or more APIs. The model system 260 can operate as a service layer for configuring the machine learning models 268 responsive to instructions from the training management system 240. The machine learning models 268 can be or include the first model 104 and/or second model 116 described with reference to FIG. 1.


The model system 260 can include a model configuration processor 264. The model configuration processor 264 can incorporate features of the model updater 108 and/or the feedback trainer 128 described with reference to FIG. 1. For example, the model configuration processor 264 can apply training data (e.g., prompts 248 and corresponding completions) to the machine learning models 268 to configure (e.g., train, modify, update, fine-tune, etc.) the machine learning models 268. The training manager 244 can control training by the model configuration processor 264 based on model tuning parameters in the model tuning database 256, such as to control various hyperparameters for training. In various implementations, the system 200 can use the training management system 240 to configure the machine learning models 268 in a similar manner as described with reference to the second model 116 of FIG. 1, such as to train the machine learning models 268 using any of various data or combinations of data from the data repository 204.


Application Session Management


FIG. 3 depicts an example of the system 200, in which the system 200 can perform operations to implement at least one application session 308 for a client device 304. For example, responsive to configuring the machine learning models 268, the system 200 can generate data for presentation by the client device 304 (including generating data responsive to information received from the client device 304) using the at least one application session 308 and the one or more machine learning models 268.


The client device 304 can be a device of a user, such as a technician or building manager. The client device 304 can include any of various wireless or wired communication interfaces to communicate data with the model system 260, such as to provide requests to the model system 260 indicative of data for the machine learning models 268 to generate, and to receive outputs from the model system 260. The client device 304 can include various user input and output devices to facilitate receiving and presenting inputs and outputs.


In some implementations, the system 200 provides data to the client device 304 for the client device 304 to operate the at least one application session 308. The application session 308 can include a session corresponding to any of the applications 120 described with reference to FIG. 1. For example, the client device 304 can launch the application session 308 and provide an interface to request one or more prompts. Responsive to receiving the one or more prompts, the application session 308 can provide the one or more prompts as input to the machine learning model 268. The machine learning model 268 can process the input to generate a completion, and provide the completion to the application session 308 to present via the client device 304. In some implementations, the application session 308 can iteratively generate completions using the machine learning models 268. For example, the machine learning models 268 can receive a first prompt from the application session 308, determine a first completion based on the first prompt and provide the first completion to the application session 308, receive a second prompt from the application session 308, determine a second completion based on the second prompt (which may include at least one of the first prompt or the first completion concatenated to the second prompt), and provide the second completion to the application session 308.
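
A minimal sketch of the iterative prompt/completion exchange described above follows, with a placeholder generate() callable standing in for the machine learning model 268; the concatenation scheme is one simple assumption among many possible.

def run_session(generate, prompts):
    """Iteratively generate completions, carrying prior prompts and completions forward
    as context. generate(text) -> str is a placeholder for the model call."""
    context = ""
    transcript = []
    for prompt in prompts:
        model_input = context + prompt           # prior exchange concatenated to the new prompt
        completion = generate(model_input)
        transcript.append((prompt, completion))
        context = model_input + "\n" + completion + "\n"
    return transcript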


In some implementations, the model system 260 includes at least one sessions database 312. The sessions database 312 can maintain records of application sessions 308 implemented by client devices 304. For example, the sessions database 312 can include records of prompts provided to the machine learning models 268 and completions generated by the machine learning models 268. As described further with reference to FIG. 4, the system 200 can use the data in the sessions database 312 to fine-tune or otherwise update the machine learning models 268.


Completion Checking

In some implementations, the system 200 includes an accuracy checker 316. The accuracy checker 316 can include one or more rules, heuristics, logic, policies, algorithms, functions, machine learning models, neural networks, scripts, or various combinations thereof to perform operations including evaluating performance criteria regarding the completions determined by the model system 260. For example, the accuracy checker 316 can include at least one completion listener 320. The completion listener 320 can receive the completions determined by the model system 260 (e.g., responsive to the completions being generated by the machine learning model 268 and/or by retrieving the completions from the sessions database 312).


The accuracy checker 316 can include at least one completion evaluator 324. The completion evaluator 324 can evaluate the completions (e.g., as received or retrieved by the completion listener 320) according to various criteria. In some implementations, the completion evaluator 324 evaluates the completions by comparing the completions with corresponding data from the data repository 204. For example, the completion evaluator 324 can identify data of the data repository 204 having similar text as the prompts and/or completions (e.g., using any of various natural language processing algorithms), and determine whether the data of the completions is within a range of expected data represented by the data of the data repository 204.
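
As a hedged illustration of range-based completion checking, numeric values mentioned in a completion can be compared against an expected range derived from repository data; the extraction and range below are simplified assumptions, and a fuller evaluator would also match each value to the quantity it describes.

import re

def completion_within_range(completion: str, expected_min: float, expected_max: float) -> bool:
    """Return True if every numeric value in the completion falls inside the expected
    range derived from data of the data repository 204. Simplified sketch only."""
    values = [float(v) for v in re.findall(r"-?\d+(?:\.\d+)?", completion)]
    return all(expected_min <= v <= expected_max for v in values)

ok = completion_within_range("Set the chilled water setpoint to 44.5 degrees F.", 40.0, 50.0)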


In some implementations, the accuracy checker 316 can store an output from evaluating the completion (e.g., an indication of whether the completion satisfies the criteria) in an evaluation database 328. For example, the accuracy checker 316 can assign the output (which may indicate at least one of a binary indication of whether the completion satisfied the criteria or an indication of a portion of the completion that did not satisfy the criteria) to the completion for storage in the evaluation database 328, which can facilitate further training of the machine learning models 268 using the completions and output.


Feedback Training


FIG. 4 depicts an example of the system 200 that includes a feedback system 400, such as a feedback aggregator. The feedback system 400 can include one or more rules, heuristics, logic, policies, algorithms, functions, machine learning models, neural networks, scripts, or various combinations thereof to perform operations including preparing data for updating and/or updating the machine learning models 268 using feedback corresponding to the application sessions 308, such as feedback received as user input associated with outputs presented by the application sessions 308. The feedback system 400 can incorporate features of the feedback repository 124 and/or feedback trainer 128 described with reference to FIG. 1.


The feedback system 400 can receive feedback (e.g., from the client device 304) in various formats. For example, the feedback can include any of text, speech, audio, image, and/or video data. The feedback can be associated (e.g., in a data structure generated by the application session 308) with the outputs of the machine learning models 268 for which the feedback is provided. The feedback can be received or extracted from various forms of data, including external data sources such as manuals, service reports, or Wikipedia-type documentation.


In some implementations, the feedback system 400 includes a pre-processor 400. The pre-processor 400 can perform any of various operations to modify the feedback for further processing. For example, the pre-processor 400 can incorporate features of, or be implemented by, the pre-processor 232, such as to perform operations including filtering, compression, tokenizing, or translation operations (e.g., translation into a common language of the data of the data repository 204).


The feedback system 400 can include a bias checker 408. The bias checker 408 can evaluate the feedback using various bias criteria, and control inclusion of the feedback in a feedback database 416 (e.g., a feedback database 416 of the data repository 204 as depicted in FIG. 4) according to the evaluation. The bias criteria can include, for example and without limitation, criteria regarding qualitative and/or quantitative differences between a range or statistical measure of the feedback relative to actual, expected, or validated values.


The feedback system 400 can include a feedback encoder 412. The feedback encoder 412 can process the feedback (e.g., responsive to bias checking by the bias checker 408) for inclusion in the feedback database 416. For example, the feedback encoder 412 can encode the feedback as values corresponding to output scores determined by the model system 260 while generating completions (e.g., where the feedback indicates that the completion presented via the application session 308 was acceptable, the feedback encoder 412 can encode the feedback by associating the feedback with the completion and assigning a relatively high score to the completion).


As indicated by the dashed arrows in FIG. 4, the feedback can be used by the prompt management system 228 and training management system 240 to further update one or more machine learning models 268. For example, the prompt management system 228 can retrieve at least one feedback (and corresponding prompt and completion data) from the feedback database 416, and process the at least one feedback to determine a feedback prompt and feedback completion to provide to the training management system 240 (e.g., using pre-processor 232 and/or prompt generator 236, and assigning a score corresponding to the feedback to the feedback completion). The training manager 244 can provide instructions to the model system 260 to update the machine learning models 268 using the feedback prompt and the feedback completion, such as to perform a fine-tuning process using the feedback prompt and the feedback completion. In some implementations, the training management system 240 performs a batch process of feedback-based fine-tuning by using the prompt management system 228 to generate a plurality of feedback prompts and a plurality of feedback completions, and providing instructions to the model system 260 to perform the fine-tuning process using the plurality of feedback prompts and the plurality of feedback completions.


Data Filtering and Validation Systems


FIG. 5 depicts an example of the system 200, where the system 200 can include one or more data filters 500 (e.g., data validators). The data filters 500 can include any one or more rules, heuristics, logic, policies, algorithms, functions, machine learning models, neural networks, scripts, or various combinations thereof to perform operations including modifying data processed by the system 200 and/or triggering alerts responsive to the data not satisfying corresponding criteria, such as thresholds for values of data. Various data filtering processes described with reference to FIG. 5 (as well as FIGS. 6 and 7) can enable the system 200 to implement timely operations for improving the precision and/or accuracy of completions or other information generated by the system 200 (e.g., including improving the accuracy of feedback data used for fine-tuning the machine learning models 268). The data filters 500 can allow for interactions between various algorithms, models, and computational processes.


For example, the data filters 500 can be used to evaluate data relative to thresholds relating to, for example and without limitation, acceptable data ranges, setpoints, temperatures, pressures, flow rates (e.g., mass flow rates), or vibration rates for an item of equipment. The thresholds can include any of various thresholds, such as one or more of minimum, maximum, absolute, relative, fixed band, and/or floating band thresholds.


The data filters 500 can enable the system 200 to detect when data, such as prompts, completions, or other inputs and/or outputs of the system 200, collide with thresholds that represent realistic behavior or operation or other limits of items of equipment. For example, the thresholds of the data filters 500 can correspond to values of data that are within feasible or recommended operating ranges. In some implementations, the system 200 determines or receives the thresholds using models or simulations of items of equipment, such as plant or equipment simulators, chiller models, HVAC-R models, refrigeration cycle models, etc. The system 200 can receive the thresholds as user input (e.g., from experts, technicians, or other users). The thresholds of the data filters 500 can be based on information from various data sources. The thresholds can include, for example and without limitation, thresholds based on information such as equipment limitations, safety margins, physics, expert teaching, etc. For example, the data filters 500 can include thresholds determined from various models, functions, or data structures (e.g., tables) representing physical properties and processes, such as the physics of psychrometrics, thermodynamics, and/or fluid dynamics information.
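
The following sketch illustrates a few of the threshold forms named above (minimum/maximum, fixed band, and floating band); the numeric values in the example are placeholders rather than recommended operating limits.

def check_min_max(value, minimum, maximum):
    # Minimum/maximum threshold: the value must fall within absolute limits.
    return minimum <= value <= maximum

def check_fixed_band(value, center, half_width):
    # Fixed band: the value must stay within +/- half_width of a fixed center.
    return abs(value - center) <= half_width

def check_floating_band(value, reference, half_width):
    # Floating band: the center tracks a moving reference (e.g., a current setpoint).
    return abs(value - reference) <= half_width

# Placeholder example: evaluate a reported condenser pressure against limits.
passes = check_min_max(value=185.0, minimum=100.0, maximum=300.0)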


The system 200 can determine the thresholds using the feedback system 400 and/or the client device 304, such as by providing a request for feedback that includes a request for a corresponding threshold associated with the completion and/or prompt presented by the application session 308. For example, the system 200 can use the feedback to identify realistic thresholds, such as by using feedback regarding data generated by the machine learning models 268 for ranges, setpoints, and/or start-up or operating sequences regarding items of equipment (and which can thus be validated by human experts). In some implementations, the system 200 selectively requests feedback indicative of thresholds based on an identifier of a user of the application session 308, such as to selectively request feedback from users having predetermined levels of expertise and/or assign weights to feedback according to criteria such as levels of expertise.


In some implementations, one or more data filters 500 correspond to a given setup. For example, the setup can represent a configuration of a corresponding item of equipment (e.g., configuration of a chiller, etc.). The data filters 500 can represent various thresholds or conditions with respect to values for the configuration, such as feasible or recommended operating ranges for the values. In some implementations, one or more data filters 500 correspond to a given situation. For example, the situation can represent at least one of an operating mode or a condition of a corresponding item of equipment.



FIG. 5 depicts some examples of data (e.g., inputs, outputs, and/or data communicated between nodes of machine learning models 268) to which the data filters 500 can be applied to evaluate data processed by the system 200 including various inputs and outputs of the system 200 and components thereof. This can include, for example and without limitation, filtering data such as data communicated between one or more of the data repository 204, prompt management system 228, training management system 240, model system 260, client device 304, accuracy checker 316, and/or feedback system 400. For example, the data filters 500 (as well as validation system 600 described with reference to FIG. 6 and/or expert filter collision system 700 described with reference to FIG. 7) can receive data outputted from a source (e.g., source component) of the system 200 for receipt by a destination (e.g., destination component) of the system 200, and filter, modify, or otherwise process the outputted data prior to the system 200 providing the outputted data to the destination. The sources and destinations can include any of various combinations of components and systems of the system 200.


The system 200 can perform various actions responsive to the processing of data by the data filters 500. In some implementations, the system 200 can pass data to a destination without modifying the data (e.g., retaining a value of the data prior to evaluation by the data filter 500) responsive to the data satisfying the criteria of the respective data filter(s) 500. In some implementations, the system 200 can at least one of (i) modify the data or (ii) output an alert responsive to the data not satisfying the criteria of the respective data filter(s) 500. For example, the system 200 can modify the data by modifying one or more values of the data to be within the criteria of the data filters 500.


In some implementations, the system 200 modifies the data by causing the machine learning models 268 to regenerate the completion corresponding to the data (e.g., for up to a predetermined threshold number of regeneration attempts before triggering the alert). This can enable the data filters 500 and the system 200 to selectively trigger alerts responsive to determining that the data (e.g., the collision between the data and the thresholds of the data filters 500) may not be repairable by the machine learning model 268 aspects of the system 200.
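
A minimal sketch of the regenerate-or-alert behavior follows, with placeholder callables for completion generation and filter evaluation; the retry limit is an assumed hyperparameter.

def repair_or_alert(generate, passes_filters, prompt, max_attempts=3):
    """Regenerate a completion until it satisfies the data filters, up to a threshold
    number of attempts, then signal an alert. generate and passes_filters are
    placeholders for system components."""
    for _ in range(max_attempts):
        completion = generate(prompt)
        if passes_filters(completion):
            return completion, None
    return None, "alert: completion repeatedly collided with data filter thresholds"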


The system 200 can output the alert to the client device 304. The system 200 can assign a flag corresponding to the alert to at least one of the prompt (e.g., in prompts database 224) or the completion having the data that triggered the alert.



FIG. 6 depicts an example of the system 200, in which a validation system 600 is coupled with one or more components of the system 200, such as to process and/or modify data communicated between the components of the system 200. For example, the validation system 600 can provide a validation interface for human users (e.g., expert supervisors, checkers) and/or expert systems (e.g., data validation systems that can implement processes analogous to those described with reference to the data filters 500) to receive data of the system 200 and modify, validate, or otherwise process the data. For example, the validation system 600 can provide to human expert supervisors, human checkers, and/or expert systems various data of the system 200, receive responses to the provided data indicating requested modifications to the data or validations of the data, and modify (or validate) the provided data according to the responses.


For example, the validation system 600 can receive data such as data retrieved from the data repository 204, prompts outputted by the prompt management system 228, completions outputted by the model system 260, indications of accuracy outputted by the accuracy checker 316, etc., and provide the received data to at least one of an expert system or a user interface. In some implementations, the validation system 600 receives a given item of data prior to the given item of data being processed by the model system 260, such as to validate inputs to the machine learning models 268 prior to the inputs being processed by the machine learning models 268 to generate outputs, such as completions.


In some implementations, the validation system 600 validates data by at least one of (i) assigning a label (e.g., a flag, etc.) to the data indicating that the data is validated or (ii) passing the data to a destination without modifying the data. For example, responsive to receiving at least one of a user input (e.g., from a human validator/supervisor/expert) that the data is valid or an indication from an expert system that the data is valid, the validation system 600 can assign the label and/or provide the data to the destination.


The validation system 600 can selectively provide data from the system 200 to the validation interface responsive to operation of the data filters 500. This can enable the validation system 600 to trigger validation of the data responsive to collision of the data with the criteria of the data filters 500. For example, responsive to the data filters 500 determining that an item of data does not satisfy a corresponding criteria, the data filters 500 can provide the item of data to the validation system 600. The data filters 500 can assign various labels to the item of data, such as indications of the values of the thresholds that the data filters 500 used to determine that the item of data did not satisfy the thresholds. Responsive to receiving the item of data from the data filters 500, the validation system 600 can provide the item of data to the validation interface (e.g., to a user interface of client device 304 and/or application session 308; for comparison with a model, simulation, algorithm, or other operation of an expert system) for validation. In some implementations, the validation system 600 can receive an indication that the item of data is valid (e.g., even if the item of data did not satisfy the criteria of the data filters 500) and can provide the indication to the data filters 500 to cause the data filters 500 to at least partially modify the respective thresholds according to the indication.


In some implementations, the validation system 600 selectively retrieves data for validation where (i) the data is determined or outputted prior to use by the machine learning models 268, such as data from the data repository 204 or the prompt management system 228, or (ii) the data does not satisfy a respective data filter 500 that processes the data. This can enable the system 200, the data filters 500, and the validation system 600 to update the machine learning models 268 and other machine learning aspects (e.g., generative AI aspects) of the system 200 to more accurately generate data and completions (e.g., enabling the data filters 500 to generate alerts that are received by the human experts/expert systems that may be repairable by adjustments to one or more components of the system 200).



FIG. 7 depicts an example of the system 200, in which an expert filter collision system 700 (“expert system” 700) can facilitate providing feedback and providing more accurate and/or precise data and completions to a user via the application session 308. For example, the expert system 700 can interface with various points and/or data flows of the system 200, as depicted in FIG. 7, where the system 200 can provide data to the expert filter collision system 700, such as to transmit the data to a user interface and/or present the data via a user interface of the expert filter collision system 700 that can be accessed via an expert session 708 of a client device 704. For example, via the expert session 708, the expert system 700 can enable functions such as receiving inputs for a human expert to provide feedback to a user of the client device 304; a human expert to guide the user through the data (e.g., completions) provided to the client device 304, such as reports, insights, and action items; a human expert to review and/or provide feedback for revising insights, guidance, and recommendations before being presented by the application session 308; a human expert to adjust and/or validate insights or recommendations before they are viewed or used for actions by the user; or various combinations thereof. In some implementations, the expert system 700 can use feedback received via the expert session 708 as inputs to update the machine learning models 268 (e.g., to perform fine-tuning).


In some implementations, the expert system 700 retrieves data to be provided to the application session 308, such as completions generated by the machine learning models 268. The expert system 700 can present the data via the expert session 708, such as to request feedback regarding the data from the client device 704. For example, the expert system 700 can receive feedback regarding the data for modifying or validating the data (e.g., editing or validating completions). In some implementations, the expert system 700 requests at least one of an identifier or a credential of a user of the client device 704 prior to providing the data to the client device 704 and/or requesting feedback regarding the data from the expert session 708. For example, the expert system 700 can request the feedback responsive to determining that the at least one of the identifier or the credential satisfies a target value for the data. This can allow the expert system 700 to selectively identify experts to use for monitoring and validating the data.


In some implementations, the expert system 700 facilitates a communication session regarding the data between the application session 308 and the expert session 708. For example, the expert system 700, responsive to detecting presentation of the data via the application session 308, can request feedback regarding the data (e.g., user input via the application session 308 for feedback regarding the data), and provide the feedback to the client device 704 to present via the expert session 708. The expert session 708 can receive expert feedback regarding at least one of the data or the feedback from the user to provide to the application session 308. In some implementations, the expert system 700 can facilitate any of various real-time or asynchronous messaging protocols between the application session 308 and expert session 708 regarding the data, such as any of text, speech, audio, image, and/or video communications or combinations thereof. This can allow the expert system 700 to provide a platform for a user receiving the data (e.g., customer or field technician) to receive expert feedback from a user of the client device 704 (e.g., expert technician). In some implementations, the expert system 700 stores a record of one or more messages or other communications between the sessions 308, 708 in the data repository 204 to facilitate further configuration of the machine learning models 268 based on the interactions between the users of the sessions 308, 708.


Building Data Platforms and Digital Twin Architectures

Referring further to FIGS. 1-7, various systems and methods described herein can be executed by and/or communicate with building data platforms, including data platforms of building management systems. For example, the data repository 204 can include or be coupled with one or more building data platforms, such as to ingest data from building data platforms and/or digital twins. The client device 304 can communicate with the system 200 via the building data platform, and can provide feedback, reports, and other data to the building data platform. In some implementations, the data repository 204 maintains building data platform-specific databases, such as to enable the system 200 to configure the machine learning models 268 on a building data platform-specific basis (or on an entity-specific basis using data from one or more building data platforms maintained by the entity).


For example, in some implementations, various data discussed herein may be stored in, retrieved from, or processed in the context of building data platforms and/or digital twins; processed at (e.g., processed using models executed at) a cloud or other off-premises computing system/device or group of systems/devices, an edge or other on-premises system/device or group of systems/devices, or a hybrid thereof in which some processing occurs off-premises and some occurs on-premises; and/or implemented using one or more gateways for communication and data management amongst various such systems/devices. In some such implementations, the building data platforms and/or digital twins may be provided within an infrastructure such as those described in U.S. patent application Ser. No. 17/134,661 filed Dec. 28, 2020, Ser. No. 18/080,360, filed Dec. 13, 2022, Ser. No. 17/537,046 filed Nov. 29, 2021, and Ser. No. 18/096,965, filed Jan. 13, 2023, and Indian Patent Application No. 202341008712, filed Feb. 10, 2023, the disclosures of which are incorporated herein by reference in their entireties.


III. Generative AI-Based Systems and Methods for Equipment Servicing

As described above, systems and methods in accordance with the present disclosure can use machine learning models, including LLMs and other generative AI models, to ingest data regarding building management systems and equipment in various unstructured and structured formats, and generate completions and other outputs targeted to provide useful information to users. Various systems and methods described herein can use machine learning models to support applications for presenting data with high accuracy and relevance.


Although the systems and methods described herein are primarily described as using generative AI models, it is contemplated that one or more components of the disclosed systems and/or one or more of the features or steps of the disclosed methods may be implemented and/or executed using non-generative AI models, non-AI models, or other types of models or algorithms, alone or in combination with generative AI models. Accordingly, it should be understood that the generative AI models described throughout the present disclosure can be supplemented or replaced with other types of AI models (e.g., natural language processing models, rules-based models, heuristic models, various machine learning models or algorithms, etc.) regardless of whether the model is a generative AI model. Additionally, in some embodiments, the generative AI models can be replaced or supplemented with other types of models (e.g., analytical models, predictive models, regression models, parametric models, etc.), regardless of whether such models are AI models, generative AI models, or other types of models (e.g., non-generative AI or non-AI). Several examples of other types of AI models and non-AI models which can be used, as well as other functions which can be performed by the generative AI models and other types of models are described in detail in U.S. Provisional Patent Application No. 63/470,122 filed May 31, 2023, the entire disclosure of which is incorporated by reference herein. The systems and methods described herein may include and/or can be used in combination with any of the systems, components, methods, processes, features, or other items described in U.S. Provisional Patent Application No. 63/470,122.


Equipment Service Management Responsive to Fault Detection Using Machine Learning Models


FIG. 8 depicts an example of a method 800. The method 800 can be performed using various devices and systems described herein, including but not limited to the systems 100, 200 or one or more components thereof. Various aspects of the method 800 can be implemented using one or more devices or systems that are communicatively coupled with one another, including in client-server, cloud-based, or other networked architectures.


At 805, a fault condition of an item of equipment can be detected. The fault condition can be detected responsive to manual and/or automated monitoring of various data sources regarding the item of equipment. In some implementations, the fault condition is detected responsive to an alarm notification from an alarm of the equipment or coupled with the equipment. For example, sensor data of the equipment or from a sensor directed to the equipment can be monitored by the alarm, and evaluated according to one or more alarm conditions (e.g., threshold values) to trigger the alarm notification. The fault condition can be detected responsive to user input indicative of the fault condition, or images or other data received indicative of the fault condition.
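
As a simple sketch of threshold-based alarm evaluation over monitored sensor data, a fault condition can be flagged when a monitored value exceeds an alarm threshold for several consecutive samples; the values and the consecutive-sample rule are illustrative assumptions.

def evaluate_alarm(samples, threshold, min_consecutive=3):
    """Trigger an alarm notification when the monitored value exceeds the threshold for
    a number of consecutive samples (to reduce false alarms). Illustrative only."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= min_consecutive:
            return True
    return False

fault_detected = evaluate_alarm([101.0, 103.5, 104.2, 105.0], threshold=100.0)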


At 810, the fault condition can be validated. For example, the fault condition can be validated to determine whether the alarm notification corresponds to a false alarm. In some implementations, the fault condition can be validated by verifying the data used to detect the fault condition at a second point in time (e.g., subsequent to a first point in time at which the fault condition was initially detected), such as by evaluating the one or more alarm conditions using data regarding the equipment at the second point in time; this may include using the same or different data than the data used to initially detect the fault condition to validate the fault condition. The fault condition can be validated by providing the alarm notification to a device of a user, and requesting a confirmation (or indication of false alarm) from the user via the device. Responsive to the fault condition being identified as a false alarm, monitoring of the equipment can continue.
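For illustration only, the following sketch shows one way such detection and validation could be structured; the point-read function, threshold value, and re-check delay are assumptions rather than elements of the disclosed system.

```python
import time

# A minimal sketch (not the claimed implementation) of threshold-based fault
# detection followed by re-evaluation at a second point in time to screen out
# false alarms. The threshold and read function are hypothetical placeholders.
FAULT_THRESHOLD_C = 55.0  # hypothetical alarm condition on a supply temperature

def read_temperature(equipment_id: str) -> float:
    """Placeholder for a building management system point read."""
    raise NotImplementedError("connect to a real sensor or BMS point")

def detect_and_validate_fault(equipment_id: str, revalidate_after_s: float = 300.0) -> bool:
    """Return True only if the alarm condition holds at two points in time."""
    if read_temperature(equipment_id) <= FAULT_THRESHOLD_C:
        return False                      # no fault detected at the first point in time
    time.sleep(revalidate_after_s)        # wait, then evaluate the same alarm condition again
    return read_temperature(equipment_id) > FAULT_THRESHOLD_C
```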


At 815, a cause of the fault condition can be identified, such as by performing a root cause analysis. In some implementations, the cause is detected using a function that includes one or more algorithms, tables, simulations, or machine learning models described herein. For example, at least one of an identifier of the equipment, the fault condition, user text or speech identifying the fault condition (e.g., notes from any of a variety of entities, such as a facility manager, on-site technician, etc.), or data regarding the equipment used to detect the fault condition can be applied as input to the function to enable the function to determine an indication of a cause of the fault condition. For example, the function can include a table mapping various such inputs to one or more causes of fault conditions. The function can include a machine learning model configured using various forms of data described herein. For example, the machine learning model can include one or more classifiers, language models, or combinations thereof that are trained using data that includes information indicative of fault conditions and associated causes of fault conditions.
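As a non-limiting illustration of the table-based form of such a function, the sketch below maps hypothetical (equipment type, fault condition) pairs to candidate causes; the table contents are invented examples, and a trained classifier or language model could replace the lookup.

```python
# Illustrative lookup-table mapping from (equipment type, fault condition) to
# candidate causes; entries are hypothetical examples only.
CAUSE_TABLE: dict[tuple[str, str], list[str]] = {
    ("chiller", "low evaporator pressure"): ["low refrigerant charge", "fouled evaporator tubes"],
    ("chiller", "high condensing temperature"): ["dirty condenser", "condenser fan fault"],
    ("ahu", "supply air temperature above setpoint"): ["stuck cooling valve", "failed temperature sensor"],
}

def identify_causes(equipment_type: str, fault_condition: str) -> list[str]:
    """Return candidate causes for the fault, or an empty list if unmapped."""
    return CAUSE_TABLE.get((equipment_type.lower(), fault_condition.lower()), [])

print(identify_causes("Chiller", "Low evaporator pressure"))
```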


At 820, a prescription is generated based on the cause of the fault condition. For example, one or more of the cause of the fault condition, the fault condition, and an identifier of the equipment can be provided to a language model to cause the language model to generate the prescription. The prescription can have a natural language format. The prescription can indicate one or more actions for a service technician to perform to verify, service, and/or repair the fault condition, such as instructions for tools and/or parts to use for the item of equipment. The language model can include any of various models described herein that are configured using training data representative of prescriptions. The prescription can be generated for presentation using various output modalities, such as text, speech, audio, image, and/or video, including in real-time, conversational, or asynchronous formats.
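A hedged sketch of this step is shown below; the `llm_complete` callable stands in for whichever language-model interface is used and is not a specific vendor API, and the prompt wording is illustrative.

```python
# Sketch of prompting a language model for a natural-language prescription.
# `llm_complete` is any callable that takes a prompt string and returns text.
def build_prescription_prompt(equipment_id: str, fault: str, cause: str) -> str:
    return (
        f"Equipment: {equipment_id}\n"
        f"Fault condition: {fault}\n"
        f"Identified cause: {cause}\n"
        "Write a step-by-step service prescription for a technician, including "
        "the tools and parts likely to be needed."
    )

def generate_prescription(llm_complete, equipment_id: str, fault: str, cause: str) -> str:
    return llm_complete(build_prescription_prompt(equipment_id, fault, cause))
```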


In some implementations, generating the prescription includes conditioning or guiding the language model to generate the prescription based on a class of at least one of the service technician or the site at which the item of equipment is present. For example, the language model can have its configuration (e.g., training, etc.) modified according to labels of identifiers or classes of technicians, sites, types of equipment, or other characteristics relating to the item of equipment and/or the service technician, which can enable the prescription to be generated in a manner that is more accurate and/or relevant to the service to be performed.


At 825, a warranty is evaluated based on one or more items (e.g., the equipment, parts or tools for servicing the equipment) identified by the prescription. For example, the warranty can be retrieved from various sources, such as a contract database associated with the entity that maintains the site, according to an identifier of the type of equipment, from the service request, or various combinations thereof. The prescription (or the service request) can be parsed to identify one or more items, such as items of equipment, identified by the prescription. For example, the item of equipment for which the service request is generated can be identified from the prescription, and compared with the warranty (e.g., using natural language processing algorithms, etc.) to identify one or more warranty conditions assigned to the item of equipment. The warranty conditions can indicate, for example, timing criteria for authorizing and/or paying for servicing the item of equipment by a vendor or supplier of the item of equipment. Responsive to the warranty conditions being satisfied (e.g., a termination of the warranty not being met), various actions can be performed to trigger servicing of the item of equipment. In some implementations, one or more warranty conditions are evaluated prior to, during, and/or subsequent to generation of the prescription, such as to allow the prescription to be generated to incorporate one or more outputs of the evaluation of the warranty (or to avoid expending computational resources on generating the prescription responsive to the warranty conditions not being satisfied).
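The following sketch illustrates one simple form of such a check, assuming a hypothetical warranty record layout with an expiration date and a set of covered parts; real warranty conditions may be considerably richer.

```python
from datetime import date

# Hypothetical warranty records keyed by equipment identifier.
WARRANTIES = {
    "chiller-0042": {"expires": date(2026, 5, 31), "covered_parts": {"compressor", "flow switch"}},
}

def warranty_covers(equipment_id: str, part: str, service_date: date) -> bool:
    """Return True if the part is covered and the warranty has not terminated."""
    record = WARRANTIES.get(equipment_id)
    if record is None:
        return False
    return service_date <= record["expires"] and part in record["covered_parts"]

print(warranty_covers("chiller-0042", "flow switch", date(2025, 1, 15)))  # True for the sample data
```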


At 830, scheduling of deployment of at least one of a service technician or one or more parts identified by the prescription can be performed. In some implementations, the prescription can identify the service technician, such as to select the service technician from a plurality of candidate service technicians according to an expertise that the service technician is labeled with and which corresponds to the item of equipment. Scheduling deployment of the one or more parts can include identifying a provider of the one or more parts and assigning the one or more parts to a vehicle (e.g., a truck) for delivering the one or more parts to the site of the item of equipment. By using the language model to generate the prescription—which identifies the one or more parts—the one or more parts that are delivered to the site can be more accurately identified, which can reduce resource usage and/or wasted space or weight on the vehicle. In some implementations, scheduling deployment includes generating a service ticket indicative of the service to be performed, such as to identify the service technician, the parts, and/or the item of equipment.
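As a small illustration of the technician-selection aspect of this step, the sketch below filters candidate technicians by a labeled expertise matching the equipment type; the data and the simple first-match policy are assumptions.

```python
# Hypothetical candidate technicians with labeled areas of expertise.
TECHNICIANS = [
    {"name": "A. Rivera", "expertise": {"chiller", "cooling tower"}},
    {"name": "B. Chen", "expertise": {"ahu", "vav"}},
]

def select_technician(equipment_type: str) -> dict | None:
    """Pick the first technician whose expertise label matches the equipment type."""
    matches = [t for t in TECHNICIANS if equipment_type in t["expertise"]]
    return matches[0] if matches else None  # a real scheduler would also weigh availability and location

print(select_technician("chiller"))
```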


Depending on the determined prescription, the scheduling can include automated servicing of the item of equipment, such as to provide commands to adjust parameters of the item of equipment to a controller of the item of equipment. The scheduling can include providing instructions for performing remote service, such as to provide instructions to a service technician to use on-site tools and/or parts, or manual adjustment of the item of equipment, to service the item of equipment (e.g., to avoid a truck deployment or truck roll to the site).


At 835, an application session for a service operation corresponding to the service request (and the prescription) can be provided. In some implementations, the application session is provided via a device of the service technician. For example, the device can provide one or more credentials to access the application session (e.g., credentials that uniquely identify the service technician). The application session can present information to the service technician in any of various conversational, messaging, graphical, real-time, and/or asynchronous formats. The application session can receive one or more prompts from the device (e.g., from a user input device of the device), and provide the one or more prompts to the language model to cause the language model to provide corresponding completions responsive to the one or more prompts. For example, the device can receive text or image data (among other formats) as inputs provided by actions of the user (e.g., via an input interface of the device; by the user controlling a camera of the device), and provide the inputs as prompts to the language model. The application session can present the completions via the device to facilitate guiding the service technician through the actions to perform to service the item of equipment. In some implementations, the application session automatically (e.g., responsive to detecting a condition for escalating the guidance to a human expert) or manually (e.g., responsive to user input requesting guidance from a human expert) can establish a communication session between the device and a device of a human expert to provide further guidance to the service technician; the language model can provide various information such as the service request, prescription, and/or communications between the user and the language model via the application session to the device of the human expert, and can label various portions of the communications as potential causes of the escalation. The application session can be implemented as a virtual assistant, such as to provide information such as instruction manuals or technical reports regarding the item of equipment, responsive to requests from the service technician inputted at the device of the service technician.


At 840, operation of the item of equipment can be updated responsive to one or more actions performed by the service technician. For example, various parameters of operation of the item of equipment, such as setpoints, can be updated according to the one or more actions.


In some implementations, information from the service request, prescription, and application session processes can be used to perform analytics regarding entities that maintain sites and items of equipment (e.g., to evaluate customer churn). For example, information including unstructured data (e.g., service reports) regarding items of equipment and entity engagement or disengagement (e.g., deals) can be correlated to identify patterns regarding ways that service can be performed to maintain or increase the likelihood of improving performance of one or more items of equipment of the entity, of completing deals, or of maintaining engagement with the entity.


Referring now to FIG. 9, a diagram 900 of a training procedure for fine-tuning a generative artificial intelligence (AI) model to provide service recommendations is shown, according to some embodiments. The diagram 900 illustrates an architecture and procedure for exposing a generative AI model 906 to service records, using the generative AI model 906 to extract relevant data pairs from the service records, and then fine-tuning (or augmenting, updating, adapting, etc.) the generative AI model 906 using the extracted data pairs. The generative AI model 906 can thereby both generate a dataset for fine-tuning and then be trained using that dataset, in order to adapt the generative AI model 906 to provide service recommendations or other useful technical outputs to a user. The generative AI model 906 can be the model 116 and/or the model 104 described above, according to various embodiments.


As shown in FIG. 9, to initiate a fine-tuning procedure, an individual service record 902 and a request to provide a question/answer data pair 904 are provided as inputs to the generative AI model 906. The individual service record 902 can be a service record from service data of data sources 112 shown in FIG. 1 and described with reference thereto above, which can include a large number of different service records. The service record 902 can include natural language text, structured data, and/or other information relating to equipment serviced, service tasks completed, equipment issues resolved, etc. in a given service visit, for example.


The request 904 is a prompt for the generative AI model 906 to process the individual service record 902 and provide an output in the form of a question and an answer. For example, the request 904 can prompt the generative AI model 906 to provide a question asking about an equipment issue being addressed in the service record and an answer indicative of the service provided to address that issue. The request 904 can specify a form of the question and the answer, for example an instruction to include a phrase such as “What are the common causes and solutions for [issue] in [equipment type],” with the generative AI model 906 to fill in the [issue] and [equipment type] fields based on the individual service record 902. Other question structures or options can also be made available to the generative AI model 906 as part of the request 904 (e.g., “If [equipment issue] exists for [equipment type], what service should we provide?”). The request 904 can also specify that the answer generated by the generative AI model 906 respond to the generated question, based on the service described in the service record 902. For example, the request 904 can indicate that the answer should provide a set of service or troubleshooting steps that can be taken as a service recommendation based on the question presented. In some embodiments, the request 904 includes requirements on the outputs of the generative AI model 906 to facilitate further data processing and handling, for example to provide the question before the answer, to separate the question and answer with a fixed separator (e.g., “\n\n###\n\n”), to start the answer with a white space, to end the answer with a fixed stop sequence (e.g., “###”), etc.
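A minimal sketch of these structural requirements is shown below, using the separator and stop sequence quoted above; everything else (function name, record layout, sample text) is illustrative rather than part of the disclosed system.

```python
# Pack one generated question/answer pair into a prompt/completion record,
# following the structural requirements described above: question first, fixed
# separator, leading whitespace on the answer, and a fixed stop sequence.
SEPARATOR = "\n\n###\n\n"
STOP_SEQUENCE = "###"

def format_training_example(question: str, answer: str) -> dict:
    return {
        "prompt": question.strip() + SEPARATOR,
        "completion": " " + answer.strip() + STOP_SEQUENCE,
    }

example = format_training_example(
    "What are the common causes and solutions for bacteria in inhibitor in YMC2 chillers?",
    "Treat the water system, inspect and clean the heat exchangers, and retest inhibitor concentration.",
)
print(example)
```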


In the procedure of FIG. 9, the generative AI model 906 accordingly uses the individual service record 902 to generate, in accordance with the request 904, a data pair including a generated question 908 and a generated answer 910. The generated question 908 relates to an equipment issue indicated in the service record (e.g., asks for a solution to such an issue, asks for what service should be performed in view of that issue) and the generated answer 910 relates to a service task indicated in the service record. The generated question 908 and the generated answer 910 can be described as being extracted from the individual service record 902 by the generative AI model 906, and are in a semi-structured form which correlates with conversational interactions suitable for presentation via an application 120 such as a virtual assistant tool or service recommendation generator. Advantageously, the generated question 908 and generated answer 910 thus provide a data pair that includes detailed technical information from the service record 902, while being structured in a manner that does not include extraneous information, typographical errors, unhelpful formatting, etc. as may be present in the raw, individual service record 902 (see, e.g., examples provided above with reference to FIG. 1 and data sources 112).


The generated question 908 and the generated answer 910 thereby provide a data pair that can be added to a data set for use in further model training, fine-tuning, or augmentation, shown as fine-tuning dataset 912. The procedure of FIG. 9 includes providing the generated question 908 and the generated answer 910 to the fine-tuning dataset 912.


As shown in FIG. 9, a block 914 is illustrated for repeating, for a next service record from a dataset of service records, the process of providing an individual service record 902 and the request 904 to the generative AI model 906 to generate another data pair including a generated question 908 and a generated answer 910. Such steps can be iteratively executed for a set of available service records (e.g., hundreds of records, thousands of records, tens of thousands of records, hundreds of thousands of records, etc.). Because this procedure is performed for different service records (in which different servicing is described for addressing a variety of equipment issues) provided as inputs to the generative AI model 906, a variety of different data pairs including generated questions 908 and generated answers 910 are generated and added to the fine-tuning dataset 912. In some embodiments, any duplicates are deleted from the fine-tuning dataset 912.
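For illustration, the loop at block 914 might be sketched as below, with `ask_model` standing in for a call to the generative AI model 906 with the request 904 and one service record; the JSONL output layout is an assumption.

```python
import json

def ask_model(service_record: str) -> tuple[str, str]:
    """Placeholder returning (generated question, generated answer) for one record."""
    raise NotImplementedError("wire this to the generative AI model 906 with request 904")

def build_fine_tuning_dataset(service_records: list[str], out_path: str) -> int:
    """Write one JSON line per unique question/answer pair; return the count kept."""
    seen: set[tuple[str, str]] = set()
    kept = 0
    with open(out_path, "w", encoding="utf-8") as f:
        for record in service_records:
            question, answer = ask_model(record)
            key = (question.strip(), answer.strip())
            if key in seen:      # drop duplicate pairs
                continue
            seen.add(key)
            f.write(json.dumps({"prompt": question, "completion": answer}) + "\n")
            kept += 1
    return kept
```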


The fine-tuning dataset 912 can thereby be automatically populated with many pairs of generated questions and answers representing information extracted from the service records, thus providing a rich data set for fine-tuning (or augmenting, retraining, etc.) the generative AI model 906 such that the generative AI model 906 is able to generate answers (e.g., service recommendations) in response to user-input questions (e.g., requests for recommendations on steps to solve an equipment issue). Such use of the generative AI model 906 is shown in FIG. 10 and described with reference thereto below. In particular, fine-tuning the generative AI model 906 on data pairs that conceptually and structurally match the inputs and outputs expected for the generative AI model 906 during online use enables the generative AI model 906 to generate such outputs (e.g., answers, service recommendations) in response to such inputs (e.g., questions relating to service to perform) more accurately and reliably than before such fine-tuning.
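A vendor-neutral sketch of the resulting workflow is shown below; `FineTuningClient` is a hypothetical interface, not a specific product API, and the base-model name is a placeholder.

```python
# Hypothetical fine-tuning interface: submit the aggregated dataset, wait for
# the fine-tuned model, then query it with a service question.
from typing import Protocol

class FineTuningClient(Protocol):
    def create_job(self, training_file: str, base_model: str) -> str: ...
    def wait_for_model(self, job_id: str) -> str: ...
    def complete(self, model_id: str, prompt: str) -> str: ...

def fine_tune_and_query(client: FineTuningClient, dataset_path: str, question: str) -> str:
    job_id = client.create_job(training_file=dataset_path, base_model="base-generative-model")
    model_id = client.wait_for_model(job_id)    # blocks until fine-tuning completes
    return client.complete(model_id, question)  # e.g., a service question at inference time
```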


As illustrated at block 918 in FIG. 9, the generative AI model 906 can also be configured to self-learn from exposure to service records as many service records are provided to the generative AI model 906 to build the fine-tuning dataset 912. The generative AI model 906 can include reinforcement learning and other techniques used in large language models for self-learning from new content provided to such models. The procedure of FIG. 9 thereby facilitates improvements to the generative AI model 906 for use in providing outputs relating to servicing of building equipment by exposing the generative AI model 906 to large numbers of service records.


Referring now to FIG. 10, a flowchart of a method 1000 is shown, according to some embodiments. The method 1000 can be executed using the generative AI model 906 resulting from the teachings of FIG. 9, in some embodiments. The method 1000 can be performed using various devices and systems described herein, including but not limited to the systems 100, 200 or one or more components thereof. Various aspects of the method 1000 can be implemented using one or more devices or systems that are communicatively coupled with one another, including in client-server, cloud-based, or other networked architectures.


At step 1002, a question relating to an equipment issue is received from a user. The question can be received via a conversational interface, free-text entry field, audio input (microphone), etc. in various embodiments. The question received in step 1002 can be structurally similar (e.g., grammatically similar) to the generated questions 908 generated in the procedure of FIG. 9. In some embodiments, step 1002 includes providing a user interface configured to guide the user into asking an appropriately-formatted or scoped question, for example by providing example questions or providing conversational interactions with a user to prompt the user to provide or adjust inputs in order to obtain an appropriate question for input to the generative AI model. The question input by a user can relate to an equipment issue currently being experienced by the user (e.g., “If this YMC2 chiller has been running 7654 hours, and bacteria are found in inhibitor, what services should we provide?”) and/or an equipment issue of hypothetical or general interest to the user (e.g., “What are the risks of not using the variable speed drive proactive maintenance kit?”; “What are the common causes and solutions of faults in chiller model XYZ?”). In some embodiments, because the generative AI model has been trained on industry-specific, manufacturer-specific, or service-provider-specific service records including detailed technical information, the questions can include particular technical details that may not be understandable to a generic large language model. For example, the questions may ask about a chiller performance index (CPI) or other metric used by a particular vendor, enterprise management platform, building management system, etc. (e.g., “What should we do if the CPI for our chiller is 45?”).


At step 1004, an answer is generated using a fine-tuned generative AI model. The fine-tuned generative AI model can result from the procedure of FIG. 9, in some embodiments. The fine-tuned generative AI model uses the question from step 1002 as an input and generates an answer to the question. The fine-tuned generative AI model can operate according to the teachings above relating to artificial intelligence modeling, large language models, etc. Because the generative AI model resulting from the fine-tuning procedure of FIG. 9 is fine-tuned on data pairs extracted from the service records including substantial, detailed technical information about particular equipment types, particular services provided, particular replacement parts installed, etc., such a generative AI model is well-adapted to provide particularized, actionable, reliable answers to the questions provided in step 1002.


At step 1006, a service recommendation is provided based on the answer output from the generative AI model in step 1004. Providing the service recommendation can include providing the answer to a user via a graphical user interface. Step 1006 can include executing portions of process 800, for example steps 830-840 shown in FIG. 8 and described above. Method 1000 thereby provides a user with a service recommendation, tailored to the particular equipment of interest, in response to an easy-to-input (e.g., freeform, natural language) question. By using a fine-tuned generative AI model as results from the procedure of FIG. 9, method 1000 is able to efficiently and reliably provide actionable, understandable, reliable recommendations in a technical field for which a generic large language model without such fine-tuning would not be capable of understanding the relevant questions, much less providing quality answers and recommendations.
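A compact sketch of steps 1002-1006, assuming the fine-tuned model is available behind a simple completion callable, is shown below; the wrapper and its output formatting are illustrative only.

```python
# Steps 1002-1006 in miniature: take a freeform question, get an answer from
# the fine-tuned model, and surface it as a service recommendation.
def recommend_service(fine_tuned_complete, user_question: str) -> str:
    answer = fine_tuned_complete(user_question)        # step 1004: generate the answer
    return "Recommended service steps:\n" + answer     # step 1006: present as a recommendation

# Example usage with a stand-in for the fine-tuned model:
print(recommend_service(
    lambda q: "1. Treat the water system.\n2. Inspect and clean the heat exchangers.",
    "If a YMC2 chiller has been running 7654 hours and bacteria are found in inhibitor, "
    "what services should we provide?",
))
```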


To provide an example, an input provided by a user to method 1000 may be a query stating: “Query: If a YMC2 chiller has been running 7654 hours, and bacteria are found in inhibitor, what services should we provide?” An output of method 1000 can then be an answer and/or recommendation such as:

    • 1. Treat the water system: When bacteria are detected in the inhibitor, a thorough water treatment should be performed. This may include biocide treatments, flushing the system, and adjusting the water treatment program to effectively control bacterial growth.
    • 2. Inspect and clean the evaporator and condenser: Bacterial growth in the inhibitor can lead to fouling in the heat exchangers. Inspect both the evaporator and condenser for fouling or scaling, and clean them as necessary to maintain optimal heat transfer performance.
    • 3. Check and adjust inhibitor concentration: After treating the water system, retest the concentration of the inhibitor to ensure it is within the recommended range. Adjust the dosage as necessary to maintain proper protection against corrosion, scaling, and fouling.
    • 4. Monitor water quality: Regularly test the water quality parameters, such as pH, conductivity, and hardness. Keep a close eye on bacterial levels, and adjust the treatment program as needed to maintain water quality and prevent bacterial growth.


This example was generated experimentally by an implementation of the features of FIGS. 9-10 and other teachings herein. Without the teachings herein, a large language model may return an error such as “I am not familiar with ‘YMC2 chiller’” rather than providing actionable, particularized information as in the example given here.


Referring now to FIG. 11, a diagram 1100 of a training procedure for fine-tuning a generative AI model to provide service analytics (e.g., summaries, labeling, reports, etc.) is shown, according to some embodiments. The diagram 1100 illustrates an architecture and procedure for exposing a generative AI model 1106 to warranty data, using the generative AI model 1106 to extract relevant data trios from the warranty records, structuring the data trios into at least one question-and-answer pair, and then fine-tuning (or augmenting, updating, adapting, etc.) the generative AI model 1106 or another model using the extracted data pairs. The generative AI model 1106 can thereby both generate a dataset for fine-tuning and then be trained using that dataset, in order to adapt the generative AI model 1106 to provide service analytics or other useful technical outputs. The generative AI model 1106 can be or include the model 116, the model 104, and/or the generative AI model 906 described above, according to various embodiments.


As shown in FIG. 11, to initiate a fine-tuning procedure, an individual warranty record (or service record, etc. in various embodiments) 1102 and a request to provide a problem/cause/solution data trio 1104 are provided as inputs to the generative AI model 1106. The individual warranty record 1102 can be a warranty record from warranty data of data sources 112 shown in FIG. 1 and described with reference thereto above, which can include a large number of different warranty records. The warranty record 1102 can include natural language text, structured data, and/or other information relating to equipment failures, replaced parts, replaced equipment, equipment serviced, service tasks completed, equipment issues resolved, etc., corresponding to a warranty claim.


The request 1104 is a prompt for the generative AI model 1106 to process the individual warranty record 1102 and provide an output in the form of a data trio including a problem, a cause, and a solution reflected in the warranty record 1102. For example, the request 1104 can prompt the generative AI model 1106 to provide a description of a problem identified in the warranty record 1102 (e.g., overheating equipment, vibrating equipment, leaking equipment, inability of equipment to meet a setpoint, other equipment fault or degraded performance), a cause of the problem as identified in the warranty record 1102 (e.g., a particular broken part, a particular incorrect configuration or setting) and a solution provided to resolve the cause of the problem (e.g., replacement of the particular broken part, a change in configuration or setting). The request 1104 can specify a form of the output to be provided by the generative AI model 1106, for example specifying that the problem, cause, and solution should be presented as an answer to a question such as “What is a problem, cause, and solution described for the equipment type involved in the warranty record?” The form can include structural requirements to facilitate additional processing, for example separating parts of the outputs (e.g., a problem, a cause, and a solution as separate data entries, etc.) with a fixed separator (e.g., “\n\n###\n\n”), starting an output with a whitespace, ending an output with a fixed stop sequence (e.g., “###”), etc. The generative AI model 1106 can thereby be prompted to isolate a problem, cause, and solution from a particular warranty claim record (or other service record or the like).
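The sketch below illustrates such a request and the parsing of the resulting semi-structured output using the fixed separator noted above; the prompt wording is a paraphrase and not the exact request 1104.

```python
# Parse a model output of the form "<problem>SEPARATOR<cause>SEPARATOR<solution>".
SEPARATOR = "\n\n###\n\n"

EXTRACTION_PROMPT = (
    "From the warranty record below, isolate the problem, the cause, and the solution. "
    "Return them in that order, separated by the token '###' on its own line.\n\n"
    "Warranty record:\n{record}"
)

def parse_trio(model_output: str) -> dict[str, str]:
    parts = [p.strip() for p in model_output.split(SEPARATOR)]
    if len(parts) != 3:
        raise ValueError("expected a problem, a cause, and a solution")
    problem, cause, solution = parts
    return {"problem": problem, "cause": cause, "solution": solution}
```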


The request can thereby cause the generative AI model 1106 to provide a semi-structured output isolating a problem observed for equipment, a cause of the problem with the equipment as reflected in the warranty data, and a solution taken to resolve that problem and/or address the cause. FIG. 11 shows such outputs as a data trio including a generated problem 1107, a generated cause 1108, and a generated solution 1110, i.e., a problem, cause, and solution isolated by the generative AI model 1106 from the individual warranty claim record 1102. The generated problem 1107, the generated cause 1108, and the generated solution 1110 can be described as being extracted from, isolated from, summarized from, etc. the individual warranty claim record 1102.


The data trio (i.e., the generated problem 1107, the generated cause 1108, and the generated solution 1110) can then be structured as at least one question and answer pair 1120. Structuring the data trio as at least one question and answer pair 1120 can include providing one or more template question and answer pairs with fields to be filled in with the generated problem 1107, the generated cause 1108, and the generated solution 1110. For example, a template can indicate that a question will have the form “What is the solution if a {equipment type} has a {generated problem 1107} caused by {generated cause 1108}?” while a corresponding answer can have the form “The {generated problem 1107} can be addressed by {generated solution 1110}.” Various other examples are possible, such as “If {generated problem 1107} was solved by {generated solution 1110}, then what was the cause?” together with an answer indicating the generated cause 1108. Any and all such questions and answers that can be formulated using the generated problem 1107, the generated cause 1108, and the generated solution 1110 can be provided as question/answer pair(s) 1120. Accordingly, one or more question/answer pair(s) 1120 can be created for each data trio and for each warranty claim record 1102.
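For illustration, the template filling described above might look like the following sketch; the template wording tracks the examples in this paragraph, and the sample values are invented.

```python
# Turn one problem/cause/solution trio into question-and-answer pairs using
# simple templates; more templates can be added in the same pattern.
def structure_as_qa_pairs(equipment_type: str, problem: str, cause: str, solution: str) -> list[dict]:
    return [
        {
            "question": f"What is the solution if a {equipment_type} has {problem} caused by {cause}?",
            "answer": f"The {problem} can be addressed by {solution}.",
        },
        {
            "question": f"If {problem} in a {equipment_type} was solved by {solution}, what was the cause?",
            "answer": f"The cause was {cause}.",
        },
    ]

pairs = structure_as_qa_pairs(
    "YMC2 chiller",
    "a refrigerant leak",
    "a failed O-ring on the VGD cover",
    "replacing the O-ring and recharging the refrigerant",
)
print(pairs[0]["question"])
```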


The question/answer pair(s) 1120 can then be stored in the fine-tuning dataset 1111. Advantageously, as stored in the fine-tuning dataset 1111, the question/answer pairs 1120 provide focused, technical information without extraneous commentary, conversational inputs, non-technical warranty processing information, etc. present in the raw warranty claim record data and further structured in a manner which anticipates the types of inputs (questions) and outputs (answers) which a model fine-tuned using the fine-tuning dataset 1111 may be expected to handle at inference time.


As shown in FIG. 11, block 1112 is illustrated for repeating, for a next warranty record from a dataset of warranty records, the process of providing a warranty record to the generative AI model 1106 to generate another data trio including a generated problem 1107, a generated cause 1108, and a generated solution 1110. Such steps can be iteratively executed for a set of available warranty records (e.g., hundreds of records, thousands of records, tens of thousands of records, hundreds of thousands of records, etc.). When such steps are performed for different warranty records (in which different issues, causes, solutions, etc. are described) as inputs to the generative AI model 1106, a variety of different data trios are generated, structured as question/answer pair(s) 1120, and added to the fine-tuning dataset 1111. In some embodiments, any duplicates are deleted from the fine-tuning dataset 1111. In some embodiments, the fine-tuning dataset 1111 is provided with greater than 100 data pairs for each equipment type, each component, etc. to provide a robust set of data for use in fine-tuning.


The fine-tuning dataset 1111 can thereby be automatically populated with many question-and-answer pairs representing information extracted from the warranty data, thus providing a rich dataset for fine-tuning (or augmenting, retraining, etc.) the generative AI model 1106 such that the fine-tuned generative AI model 1106 is able to generate content relating to causes and solutions for equipment issues, faults, failures, inefficiencies, etc. FIG. 11 includes block 1114 indicating fine-tuning of the generative AI model 1106 using the fine-tuning dataset and (additionally or alternatively) block 1115 indicating fine-tuning of an additional model (e.g., an AI model other than the generative AI model 1106, such as a different generative AI model, a reliability model, a fault prediction model, a service recommendation model, a predictive maintenance model, etc.) using the fine-tuning dataset. The fine-tuning dataset 1111 can also or alternatively be used for initial training of a model in some embodiments. Because the fine-tuning dataset includes focused information that structurally corresponds to the content that may be of interest to users of the generative AI model 1106 or the additional model (e.g., relationships between causes and solutions for equipment servicing), the fine-tuning dataset generated as in FIG. 11 can facilitate enhanced training, fine-tuning, adaptation, etc. of the generative AI model 1106 and/or the additional model as compared to other approaches (e.g., as compared to attempting to fine-tune an AI model directly on raw warranty claim records).


As illustrated at block 1116 in FIG. 11, the generative AI model 1106 can also be configured to self-learn from exposure to warranty records as many warranty records are provided to the generative AI model 1106 to build the fine-tuning dataset 1111. The generative AI model 1106 can include reinforcement learning and other techniques used in large language models for self-learning from new content provided to such models. The procedure of FIG. 11 thereby facilitates improvements to the generative AI model 1106 for use in providing outputs relating to servicing of building equipment by exposing the generative AI model 1106 to large numbers of warranty records.


Referring now to FIG. 12, a flowchart of a method 1200 is shown, according to some embodiments. The method 1200 can be executed using the generative AI model 1106 resulting from the teachings of FIG. 11, in some embodiments. The method 1200 can be performed using various devices and systems described herein, including but not limited to the systems 100, 200 or one or more components thereof. Various aspects of the method 1200 can be implemented using one or more devices or systems that are communicatively coupled with one another, including in client-server, cloud-based, or other networked architectures.


At step 1202, a request (or other prompt, question, input, etc.) relating to an equipment issue is received from a user. The request can indicate an undesirable state of a particular unit of equipment or type of equipment and request information on possible causes and solutions. In other scenarios, the request may include information on service that was performed or parts that were replaced and request summarization or labelling of what root cause created the need for such service and/or what solution was provided. Other scenarios are possible in which a user requests various information relating to equipment servicing, warranty claims, or the like.


At step 1204, a description of one or more causes and/or solutions is generated using the fine-tuned generative AI model based on the request. The fine-tuned generative AI model used in step 1204 can be the generative AI model fine-tuned according to the procedure of FIG. 11. The fine-tuned generative AI model can operate according to the teachings above relating to artificial intelligence modeling, large language models, etc. Because the generative AI model resulting from the fine-tuning procedure of FIG. 11 is fine-tuned on data pairs extracted from warranty records including substantial, detailed technical information about particular equipment types, particular services provided, particular replacement parts installed, etc., such a generative AI model is well-adapted to provide the particularized, actionable, reliable cause and/or solution information requested in step 1202.


At step 1206, at least one of a service summary, a labelling of services, or an investigative service report is generated based on the description of the one or more causes and/or solutions. Providing a service summary includes generating a summary of service provided and/or to be provided based on the request (e.g., providing a summary can include providing a summary of common causes and solutions). For example, if the request received in step 1202 is “What are the common causes and solutions for Inhibitor in YMC2 chillers?,” step 1206 can output a summary such as:


Common Causes:

    • 1. Low refrigerant charge: Low refrigerant charge can cause the compressor to overheat, leading to the formation of inhibitors.
    • 2. Contaminated refrigerant: Contaminated refrigerant can cause the formation of inhibitors due to the presence of impurities.
    • 3. High condensing temperatures: High condensing temperatures can cause the formation of inhibitors due to the increased pressure in the system.
    • 4. Poor maintenance: Poor maintenance can lead to the formation of inhibitors due to the accumulation of dirt and debris in the system.

Solutions:

    • 1. Check and adjust the refrigerant charge: Check and adjust the refrigerant charge to ensure that it is at the correct level.
    • 2. Replace the contaminated refrigerant: Replace the contaminated refrigerant with clean, fresh refrigerant.
    • 3. Install a condenser fan: Install a condenser fan to reduce the condensing temperature and reduce the formation of inhibitors.
    • 4. Perform regular maintenance: Perform regular maintenance to ensure that the system is clean and free of dirt and debris.





In scenarios where step 1206 includes providing a labelling of services, step 1206 can include providing a solution label (e.g., describing a service provided) and a cause label (e.g., describing a root cause) to a service report provided with the request received in step 1202. In some such embodiments, the request includes a table of service reports and step 1206 includes applying the labels to the different service reports. The following provides an example of such labelling, with a service report in a first column, a solution label in a second column, and a cause label in a third column:
















Service Report | Solution Label | Cause Label
Chiller was leaking from solenoid off of filter dryer below condenser and leaking . . . | refrigerant removed | refrigerant leak
Went to the shop to pickup part and delivered to the site. Once arrived on-site check in with . . . | refrigerant unchanged | unspecified leak
York Chiller 2. Serial #SADM430770 Began recovering refrigerant from chiller. | refrigerant removed | refrigerant leak
Added 125# of new R134a to chiller. This was added because refrigerant was recovered. Cle . . . | refrigerant added | refrigerant leak
York Chiller 2 Checked vacuum. It was down to 500 microns. Added Refr. vapor to chiller. | refrigerant added | unspecified leak
York Chiller 2 Picked up nitrogen. Replaced oring on VGD cover. Did leak check. All ok. Started . . . | refrigerant unchanged | unspecified leak
Added oil to South Trane chiller and began comparing readings. Found purge was not showing . . . | refrigerant unchanged | refrigerant leak
Mar. 19, 2021 Responded to call for chiller tripped on evaporator transducer or temperature sensor . . . | refrigerant unchanged | no leak
Responded to call for chiller tripped on evaporator transducer or temperature sensor fault. | refrigerant unchanged | unspecified leak
Responded to call for excessive condenser refrigerant level . . . | refrigerant unchanged | unspecified leak
Showed Kirlin how to grease motor with 15 grams of polyrex em grease per bearing. Also sh . . . | refrigerant added | refrigerant leak
Started chiller 3 ran and failed on low evaporator pressure. Noticed condenser refrigerant level . . . | refrigerant unchanged | unspecified leak
Talked to Dave to see what was going on, he said chiller shut down on excessive surge, He said . . . | refrigerant unchanged | unspecified leak

In scenarios where an investigative service report is generated, a detailed report according to a desired format can be generated in step 1206, including various data such as the cause and/or solution from step 1204. An example of a report that can be generated is as follows:














Investigation Report

Date: 2022 Dec. 31
Customer: ABCD CORP, United States
YMC2 Chiller Serial Number: SACM143010
Run Hour: 14913 Hours
Failure Probability for Inhibitor: 98.55%
Failure Probability for Safety Flow Switch: 83.11%

 This investigation report is based on the YMC2 chiller serial number SACM143010 with 14913 hours of run time. The failure probability for the inhibitor is 98.55% and the failure probability for the safety flow switch is 83.11%.

Common Causes and Solutions for Inhibitor Issues:

 1. Low inhibitor concentration: Low inhibitor concentration can cause corrosion and scale formation in the chiller system. The solution is to increase the inhibitor concentration to the recommended level.
 2. Inadequate inhibitor type: The wrong type of inhibitor can cause corrosion and scale formation in the chiller system. The solution is to use the correct type of inhibitor for the system.
 3. Inadequate inhibitor dosage: Inadequate inhibitor dosage can cause corrosion and scale formation in the chiller system. The solution is to increase the inhibitor dosage to the recommended level.

Common Causes and Solutions for Safety Flow Switch Issues:

 1. Clogged filter: Clogged filters can cause the safety flow switch to malfunction. The solution is to clean or replace the filter.
 2. Faulty wiring: Faulty wiring can cause the safety flow switch to malfunction. The solution is to check and repair the wiring.
 3. Faulty switch: A faulty switch can cause the safety flow switch to malfunction. The solution is to replace the switch.

Conclusion:

 Based on the information provided, the failure probability for the inhibitor is 98.55% and the failure probability for the safety flow switch is 83.11%. The common causes and solutions for inhibitor and safety flow switch issues have been provided. It is recommended that the customer take the necessary steps to address the issues in order to ensure the proper functioning of the chiller system.










Such an example may be generated in response to a query received in step 1202 such as “Generate an investigation report for customer: investigation date: 2022-12-31; YMC2 chiller serial number: SACM143010, run hour: 14913 hours; customer: ABCD CORP, United States; the failure probability for inhibitor: 98.55%; the failure probability for Safety Flow Switch is 83.11%; including the common causes and solutions for inhibitor and safety flow switch issues based on all above descriptions.” The generative AI model can be used to generate common causes and solutions for inclusion in such reports, as well as particularized causes and solutions for a given scenario expressed in such a query.


Accordingly, process 1200 can generate useful, reliable, actionable content relating to equipment issues, servicing, etc. with technical details and specific information enabled by model fine-tuning according to the teachings of FIG. 11.


In some embodiments, the teachings of FIGS. 9-12 can be implemented as features of a building system. For example, a building system can include building equipment configured to heat, cool, or ventilate a building (e.g., heaters, boilers, chillers, air handling units, variable air volume units, cooling towers, heat pumps, air conditioners, fans, variable refrigerant flow systems, rooftop units, etc.). The building system can also include a computer system (e.g., local or cloud-based controller or combination thereof) programmed to generate a fine-tuning dataset by prompting a generative AI model to isolate, from each of a plurality of service or warranty records, a problem, a cause, and a solution indicated in the service or warranty record, structuring, for the plurality of service or warranty records, the problem, the cause, and the solution as at least one question-and-answer pair, and aggregating the question-and-answer pairs for the plurality of service or warranty records as the fine-tuning dataset. The computer system can also be programmed to generate a fine-tuned model by fine-tuning at least one of the generative AI model or a second AI model using the fine-tuning dataset, and apply the fine-tuned model to affect operations of the building equipment. In some embodiments, the computer system is programmed to apply the fine-tuned model to affect the operations of the building equipment by applying an indication of an actual problem relating to the building equipment as part of an input to the fine-tuned model, generating, by the fine-tuned model, an output comprising an inferred cause of the problem and an inferred solution to the problem, and causing implementation of the inferred solution to the problem (e.g., operating the building equipment automatically with a different setting or configuration, causing the building equipment to execute a troubleshooting routine or other test or operation to self-repair or self-resolve a problem, causing a specific maintenance task to be performed for the building equipment to provide the inferred solution, etc., as may be consistent with different inferred solutions in different scenarios).
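A hedged sketch of that closed loop is given below; the keyword routing between an automated command and a work order is a simplifying assumption, and the callables are placeholders for the building system's own interfaces.

```python
# Give a detected problem to the fine-tuned model, then dispatch either an
# automated command or a work order based on the inferred solution.
def apply_fine_tuned_model(fine_tuned_complete, send_command, create_work_order,
                           equipment_id: str, problem: str) -> str:
    output = fine_tuned_complete(
        f"Equipment {equipment_id} is reporting: {problem}. "
        "What is the likely cause, and what solution should be implemented?"
    )
    if "setpoint" in output.lower() or "setting" in output.lower():
        send_command(equipment_id, "apply_recommended_settings", output)   # automated adjustment
    else:
        create_work_order(equipment_id, output)                            # dispatch a technician
    return output
```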


The construction and arrangement of the systems and methods as shown in the various exemplary embodiments are illustrative only. Although only a few embodiments have been described in detail in this disclosure, many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.). For example, the position of elements may be reversed or otherwise varied and the nature or number of discrete elements or positions may be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. The order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure.


The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.


Although the figures show a specific order of method steps, the order of the steps may differ from what is depicted. Also two or more steps may be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps.


In various implementations, the steps and operations described herein may be performed on one processor or in a combination of two or more processors. For example, in some implementations, the various operations could be performed in a central server or set of central servers configured to receive data from one or more devices (e.g., edge computing devices/controllers) and perform the operations. In some implementations, the operations may be performed by one or more local controllers or computing devices (e.g., edge devices), such as controllers dedicated to and/or located within a particular building or portion of a building. In some implementations, the operations may be performed by a combination of one or more central or offsite computing devices/servers and one or more local controllers/computing devices. All such implementations are contemplated within the scope of the present disclosure. Further, unless otherwise indicated, when the present disclosure refers to one or more computer-readable storage media and/or one or more controllers, such computer-readable storage media and/or one or more controllers may be implemented as one or more central servers, one or more local controllers or computing devices (e.g., edge devices), any combination thereof, or any other combination of storage media and/or controllers regardless of the location of such devices.

Claims
  • 1. A method, comprising: generating a fine-tuning dataset by: prompting a generative AI model to isolate, from a plurality of service or warranty records, a problem, a cause, and a solution indicated in the service or warranty record; structuring, for the plurality of service or warranty records, the problem, the cause, and the solution as at least one question-and-answer pair; and aggregating the question-and-answer pairs for the plurality of service or warranty records as the fine-tuning dataset; and fine-tuning at least one of the generative AI model or a second AI model using the fine-tuning dataset.
  • 2. The method of claim 1, further comprising generating and executing a maintenance action using the at least one of the generative AI model or the second AI model after the fine-tuning of the generative AI model or the second AI model.
  • 3. The method of claim 1, further comprising providing learning of the generative AI model based on exposure of the generative AI model to the plurality of service or warranty records.
  • 4. The method of claim 1, wherein structuring the problem, the cause, and the solution as the at least one question-and-answer pair comprises: inserting the problem and the cause into a first template question and the solution into a first template answer; and inserting the problem into a second template question and the cause and the solution into a second template answer.
  • 5. The method of claim 1, comprising automatically providing a service recommendation by: receiving a freeform natural language input to a device from a user; providing the freeform natural language input as an input to the generative AI model; and generating the service recommendation as an output of the generative AI model and providing the service recommendation to the user via the device.
  • 6. The method of claim 1, wherein the plurality of service or warranty records comprises natural language data input by humans relating to warranty or service requests and completed service or warranty tasks.
  • 7. The method of claim 1, further comprising generating, responsive to an indication of an equipment problem and by the at least one of the generative AI model or a second AI model after fine-tuning, a description of at least one of an inferred cause or an inferred solution to the equipment problem.
  • 8. The method of claim 7, wherein the description is a service summary, a labelling of services, or an investigative service report.
  • 9. The method of claim 1, wherein the fine-tuning dataset comprises different question-and-answer pairs associated with different service or warranty records of the plurality of service or warranty records.
  • 10. One or more non-transitory computer-readable media storing program instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: generating a fine-tuning dataset by: prompting a generative AI model to isolate, from a plurality of service or warranty records, a problem, a cause, and a solution indicated in the service or warranty record; structuring, for the plurality of service or warranty records, the problem, the cause, and the solution as at least one question-and-answer pair; and aggregating the question-and-answer pairs for the plurality of service or warranty records as the fine-tuning dataset; and fine-tuning at least one of the generative AI model or a second AI model using the fine-tuning dataset.
  • 11. The one or more non-transitory computer-readable media of claim 10, the operations further comprising generating and executing a maintenance action using the at least one of the generative AI model or the second AI model after the fine-tuning of the generative AI model or the second AI model.
  • 12. The one or more non-transitory computer-readable media of claim 10, the operations further comprising providing learning of the generative AI model based on exposure of the generative AI model to the plurality of service or warranty records.
  • 13. The one or more non-transitory computer-readable media of claim 10, wherein structuring the problem, the cause, and the solution as the at least one question-and-answer pair comprises: inserting the problem and the cause into a first template question and the solution into a first template answer; and inserting the problem into a second template question and the cause and the solution into a second template answer.
  • 14. The one or more non-transitory computer-readable media of claim 10, the operations comprising automatically providing a service recommendation by: receiving a freeform natural language input to a device from a user; providing the freeform natural language input as an input to the generative AI model; and generating the service recommendation as an output of the generative AI model and providing the service recommendation to the user via the device.
  • 15. The one or more non-transitory computer-readable media of claim 10, wherein the plurality of service or warranty records comprises natural language data input by humans relating to warranty or service requests and completed service or warranty tasks.
  • 16. The one or more non-transitory computer-readable media of claim 10, the operations further comprising generating, responsive to an indication of an equipment problem and by the at least one of the generative AI model or a second AI model after fine-tuning, a description of at least one of an inferred cause or an inferred solution to the equipment problem.
  • 17. The one or more non-transitory computer-readable media of claim 16, wherein the description is a service summary, a labelling of services, or an investigative service report.
  • 18. The one or more non-transitory computer-readable media of claim 10, wherein the fine-tuning dataset comprises different question-and-answer pairs associated with different service or warranty records of the plurality of service or warranty records.
  • 19. A building system, comprising: building equipment configured to heat, cool, or ventilate a building; a computer system programmed to: generate a fine-tuning dataset by: prompting a generative AI model to isolate, from a plurality of service or warranty records, a problem, a cause, and a solution indicated in the service or warranty record; structuring, for the plurality of service or warranty records, the problem, the cause, and the solution as at least one question-and-answer pair; and aggregating the question-and-answer pairs for the plurality of service or warranty records as the fine-tuning dataset; and generate a fine-tuned model by fine-tuning at least one of the generative AI model or a second AI model using the fine-tuning dataset; apply the fine-tuned model to affect operations of the building equipment.
  • 20. The building system of claim 19, wherein the computer system is programmed to apply the fine-tuned model to affect the operations of the building equipment by: applying an indication of an actual problem relating to the building equipment as part of an input to the fine-tuned model; generating, by the fine-tuned model, an output comprising an inferred cause of the problem and an inferred solution to the problem; and causing implementation of the inferred solution to the problem.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of and priority to U.S. Provisional Patent Application No. 63/470,754 filed Jun. 2, 2023, the entire disclosure of which is incorporated by reference herein.

Provisional Applications (1)
Number Date Country
63470754 Jun 2023 US