System and Method For Generating Improved Prescriptors

Information

  • Patent Application
  • 20230025388
  • Publication Number
    20230025388
  • Date Filed
    June 08, 2022
  • Date Published
    January 26, 2023
Abstract
A system and method of combining and improving sets of diverse prescriptors for an Evolutionary Surrogate-assisted Prescription (ESP) model is described. The prescriptors are distilled into neural networks and evolved further using ESP. The system and method can handle diverse sets of prescriptors in that it makes no assumptions about the form of the input (i.e., contexts) of the initial prescriptors; it relies only on the prescriptions made in order to distill each prescriptor to a neural network with a fixed form. The resulting set of high-performing prescriptors provides a practical way for ESP to incorporate external human and machine knowledge and generate a more accurate and fitting set of solutions.
Description
BACKGROUND
Field of the Embodiments

The subject matter described herein, in general, relates to a system and method for generating improved prescriptor models, and, in particular, to a system and method for combining and improving sets of diverse prescriptors by distilling and injecting them into Evolutionary Surrogate-assisted Prescription (ESP).


Description of Related Art

Solving societal problems on a global scale requires the collection and processing of ideas and methods from diverse sets of international experts. As the number and diversity of human experts/teams increase, so does the likelihood that some combinations and refinements of this collected knowledge will reveal improved policy opportunities. However, the difficulty in effectively extracting, combining, and refining complementary information in an increasingly large and diverse knowledge base presents a challenge.


Building predictive models for strategic decision making has an underlying limitation: the optimal outcome is not specified. Since the optimal decision-making outcome remains unknown, with domains being only partially observable and decision variables interacting in a non-linear fashion, conventional machine learning approaches such as gradient descent, linear programming, or other traditional optimization approaches may not be suitable.


A superior and sophisticated decision-making strategy should provide an option to choose from multiple strategies based on their merits. Accordingly, given the availability of historical data on past decisions along with corresponding outcomes, a surrogate predictive model can be utilized to search for, evaluate, and discover the most suitable strategy. However, even with a previously proposed ESP solution, the initial population fed to the model consists only of neural networks with randomly generated weights, so low-quality random solutions are generated, a problem which the present disclosure addresses.


SUMMARY OF THE EMBODIMENTS

In a first non-limiting exemplary embodiment, a computer-implemented method for generating optimized prescriptor models for optimal decision making, includes: generating a set of prescriptor models having a context space and an action space; and distilling each of the prescriptor models into a functional form evolvable with an evolutionary algorithm framework over multiple generations.


In a second non-limiting exemplary embodiment, a method for developing optimized prescriptor models for determining optimal decision policy outcomes includes: building a predictor surrogate model using historical training data to predict an outcome; receiving multiple known model candidates for determining decision policy outcomes, wherein the multiple known models are in one or more formats incompatible with an evolutionary algorithm framework; distilling the multiple known model candidates into a functional architecture that is compatible with the evolutionary algorithm framework; and feeding the predictor surrogate model into the evolutionary algorithm framework to train a prescriptor model using evolution over multiple generations, wherein an initial population of candidate prescriptor models includes the distilled multiple known model candidates, and further wherein subsequent generations are evolved based on results of prior generations until a set of optimized prescriptor models is determined.


In a third non-limiting exemplary embodiment, a method for automatic discovery of intervention policies (IP) to optimize one or more objectives related to an epidemiological event includes: training a predictor model, Pd (C, A)=O, implemented on a processor, the predictor model being configured to receive input historical training data sets (C, A, O) including context information (C), actions (A) performed in a given context, and outcomes (O) resulting from actions performed in the given context; establishing an initial population of candidate prescriptor models, said establishing including receiving multiple known model candidates for determining intervention policies, wherein the multiple known models are in one or more formats incompatible with an evolutionary algorithm framework; distilling the multiple known model candidates into a functional architecture that is compatible with the evolutionary algorithm framework; evolving prescriptor models, Ps (C)=A, implemented on a processor, wherein the prescriptor models are evolved over multiple generations using the trained predictor model as a surrogate to evolve a subset of the candidate prescriptor models in the initial population, the evolved prescriptor models being configured to receive context information as input data, wherein the context information includes epidemiological event data; and outputting actions that optimize the one or more objectives as outcomes corresponding to the received context information, wherein the output actions include implementation of intervention policies.





BRIEF DESCRIPTION OF THE FIGURES

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIGS. 1a and 1b depict an exemplary evolutionary framework incorporating a D&E method therein, in accordance with the embodiments herein;



FIGS. 2a, 2b, 2c depict visualizations of comparative solutions, e.g., initialization, evolution from scratch, distillation and D&E, in accordance with embodiments herein;



FIGS. 3a, 3b, 3c, 3d, 3e, 3f, 3g, 3h, 3i, 3j, 3k, 3l are plots showing trends of how stringency tends to increase as the overall cost of the policy increases, on a per-policy basis, in accordance with embodiments herein;



FIGS. 4a, 4b, 4c provide visualizations of the policy complexities of FIGS. 3a, 3b, 3c, in accordance with embodiments herein;



FIGS. 5a, 5b, 5c, 5d, 5e, 5f, 5g, 5h plot policies for the highest-complexity D&E prescriptor model on a daily cycle for select geographic regions/countries, in accordance with embodiments herein;



FIGS. 6a, 6b, 6c, 6d, 6e, 6f, 6g, 6h plot policies for the highest-complexity model submitted as part of the XPRIZE challenge on a daily cycle for select geographic regions/countries, in accordance with embodiments herein;



FIGS. 7a, 7b, 7c, 7d, 7e, 7f, 7g, 7h plot policies for the highest-complexity D&E prescriptor model on a weekly cycle for select geographic regions/countries, in accordance with embodiments herein;



FIGS. 8a, 8b, 8c, 8d, 8e, 8f, 8g, 8h plot real-world policies that were implemented for select geographic regions/countries, in accordance with embodiments herein;



FIGS. 9a, 9b provide visualizations of contributions of individual distilled models from the XPRIZE challenge to the population of D&E models, in accordance with embodiments herein; and



FIGS. 10a, 10b, 10c, 10d provide exemplary visualizations of crossover and ancestries for select D&E models, in accordance with embodiments herein.





DETAILED DESCRIPTION OF THE EMBODIMENTS

In describing the preferred and alternate embodiments of the present disclosure, specific terminology is employed for the sake of clarity. The disclosure, however, is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner to accomplish similar functions. The disclosed embodiments are merely exemplary methods of the invention, which may be embodied in various forms.


The evolutionary AI framework described herein with respect to the embodiments meets a set of unique requirements. It incorporates expertise from diverse sources with disparate forms. It is multi-objective since conflicting policy goals must be balanced. And the origins of final solutions generated by the framework are traceable, such that credit can be distributed back to humans based on their contributions. The framework is implemented in accordance with the following general and high-level steps. First, define the problem in a manner formal enough so that solutions from diverse experts can be compared and combined in a standardized manner. Second, solicit and gather solutions from a diverse set of experts, wherein solicitation can take the form of an open call or a direct appeal to known experts. Third, convert or “distill” the internal structure of each gathered solution into a canonical form using machine learning. Finally, evolve the distilled solutions through combination and adaptation using an AI system to discover innovations that realize the complementary potential of the expert-developed solutions.


In one significant aspect of the present disclosure, a system and method of generating high-performing prescriptors is provided, capable of achieving high-quality solutions quickly. In one specific embodiment, the prescriptors are distilled into neural networks and further evolved using Evolutionary Surrogate-assisted Prescription (ESP). The Learning Evolutionary Algorithm Framework (LEAF) AI-enables manual decision processes, using prescriptors to create and iteratively enhance recommendations, improving processes and achieving business goals in a principled AI-based manner.


Initially, ESP is described in detail in the co-owned patent applications listed herein, which are incorporated by reference. Briefly, the embodiment herein comprises a system and method of developing models that predict or provide recommendations with enhanced accuracy. The first phase of predictor development provides accurate, localized predictions based on historical data. Next, during the second phase, prescriptor models are developed for determining optimal outcomes. Here, prescriptors are not evaluated with live data; instead, the recommendations are evaluated using the predictor model of the first phase. Accordingly, the predictor model is fed to an evolutionary algorithm framework to train a prescriptor model using evolution over multiple generations, wherein subsequent generations are evolved based on results of prior generations until an optimized prescriptor model is determined.


In one general embodiment, a process for developing an optimized prescriptor model comprises building a predictor surrogate model on historical training data to predict an outcome, the historical training data including context information (C), actions (A) performed in a given context, and outcomes (O), i.e., historical (C, A, O) data sets. A prescriptor model is then evolved with the trained predictor model over a number of generations until a predetermined convergence metric is met, to discover the optimal prescriptor models.


In one preferred embodiment of the present disclosure, these prescriptor models are improved by distilling them into neural networks and evolving them further using Evolutionary Surrogate-assisted Prescription (ESP). The embodied method handles diverse sets of prescriptors in that it makes no assumptions about the form of the input (i.e., contexts) of the initial prescriptors; it relies only on the prescriptions made in order to distill each prescriptor to a neural network with a fixed form.


An overall exemplary evolutionary system with distillation is shown in the schematic of FIG. 1a and includes a surrogate predictor model generation subsystem 10 which includes, at base, a database 20 for storing (C, A, O) training datasets, and at least one module for implementing the selected machine learning algorithm 25, which outputs a trained surrogate predictor model 30. The trained surrogate predictor model is used to evaluate fitness of evolved prescriptor model candidates as part of the evolution process implemented by the prescriptor evolution subsystem 40. The prescriptor evolution subsystem 40 includes a prescriptor model candidate generator 45 and a prescriptor candidate population database 50 which can be continually updated in accordance with evolving candidates. The prescriptor model candidate generator 45 further includes a distillation module 48 which operates to convert known model candidates into a predetermined fixed neural network format suitable for evolution. In a preferred embodiment, prior to distillation, the known model candidates are previously developed models for providing a solution in a shared action space, but not necessarily a shared context space. As discussed in the detailed example below, these known model candidates 46 may include models previously developed by human and/or machine experts and may be in any implementation format.


The prescriptor candidates are evaluated for fitness against the surrogate predictor model 30 by testing module 60 and ranked or otherwise filtered and compared to one another in accordance with the requirements of a competition module 65. Elite prescriptor model(s) 70 are selected for application to real world scenarios by the real world application subsystem 80. A procreation module 55 is used to re-seed and update the prescriptor candidate population database 50 in accordance with known procreation processes. Finally, the outcomes from application of the elite prescriptor model 70 actions to real world scenarios are stored in outcome database 85 and shared with database 20 to update the (C, A, O) training data.


As is appreciated by those skilled in the art, additional modules and processors, servers and databases may be incorporated to perform different or additional tasks, including data processing/filtering/translation as required for different domains and data sources. Further, aspects of the overall system or subsystems may be performed by different entities. For example, the surrogate predictor model generation subsystem 10 and the prescriptor evolution subsystem 40 may be operated by a service provider and provided as a SaaS product, while the real world application subsystem 80 may be operated exclusively by a customer, thus protecting confidential business and other data. The following co-owned patent applications are incorporated herein by reference herein: U.S. patent application Ser. No. 16/424,686 entitled Systems And Methods For Providing Secure Evolution As A Service and U.S. patent application Ser. No. 16/502,439 entitled Systems And Methods For Providing Data-Driven Evolution Of Arbitrary Data Structures.


For distilling these known model candidates into neural networks, distillation module 48 aims to integrate these prescriptors into a run of ESP in order to improve both the set of prescriptors and ESP itself. Referring to FIG. 1b, consider a fixed set of prescriptors π1, . . . , πN. Each πi is a prescriptor in the sense of ESP, that is, it is a function that maps contexts ci∈Ci to actions ai∈Ai. For the present method, prescriptors are not required to have a shared context space; they are only required to have a shared action space, i.e., Ci≠Cj is permitted for i≠j, but Ai=Aj ∀i, j is required. No further assumptions are made as to the functional form of any πi, e.g., it could be a neural network, a complicated hand-coded program, or some sort of stochastic search algorithm. This is the sense in which the set of prescriptors can be diverse: they can have any context space and any underlying implementation.


Now, in order to evolve prescriptors in ESP, they all need to have the same context space C*, and a functional form that ESP can evolve. For the present method, ESP prescriptors are represented as differentiable neural networks with a fixed architecture having a context space C*, and with action space A*=A1= . . . =AN. For each πi, suppose ai1, . . . , aiK is the set of actions prescribed by πi in K scenarios with corresponding contexts c1, . . . , cK∈C*.


Now each πi is distilled into a prescriptor of the form evolvable with ESP by training the ESP neural network to mimic the prescriptions of πi. This distillation is framed as a standard supervised learning problem, where the neural network π̂i is trained to fit the input-output pairs {(ck, aik)}k=1K. In one working embodiment, this training may be performed by gradient descent, e.g., using Keras, and including standard methods to avoid overfitting, i.e., early stopping and use of validation and test sets. The result of training is a neural network π̂i≈πi, which has a form that ESP can evolve.
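A minimal Python sketch of the data-collection side of this step is shown below; the callable `black_box_prescriptor` and the list `scenario_contexts` are illustrative names, not part of the disclosure:

```python
import numpy as np

def collect_distillation_data(black_box_prescriptor, scenario_contexts):
    """Query the original prescriptor pi_i on K scenario contexts to build
    the (context, action) pairs used to fit the distilled network."""
    contexts, actions = [], []
    for c_k in scenario_contexts:          # contexts c_1, ..., c_K in C*
        a_k = black_box_prescriptor(c_k)   # action prescribed by pi_i
        contexts.append(c_k)
        actions.append(a_k)
    return np.asarray(contexts), np.asarray(actions)
```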


The distillation above results in evolvable neural networks π̂1, . . . , π̂N which approximate π1, . . . , πN, respectively. These distilled models can then be placed into the initial population of a run of ESP, whose goal is to optimize actions a∈A* given contexts c∈C*. In standard ESP, the initial population (i.e., before any evolution takes place) consists only of neural networks with randomly generated weights. By replacing random neural networks with the distilled neural networks, ESP starts from diverse high-quality solutions, instead of low-quality random solutions. ESP can then be run as usual from this starting point. Throughout this description, the process of distillation and evolution is referenced as D&E.


Some noteworthy advantages of replacing random neural networks with distilled neural networks in accordance with the description herein include:


Improved efficiency of ESP: Evolution does not have to start from scratch, so it achieves high-performing solutions more quickly.


Diversity in ESP: The distilled models can behave quite differently from solutions ESP would discover on its own. Evolving using this raw material allows ESP to discover innovations that would not be discovered otherwise.


Quantitative improvement of ESP solutions: The above two advantages combine to enable ESP to generate higher-performing solutions (e.g., expanded Pareto front of cost vs. impact) compared to running from scratch (random initial solutions) given the same amount of computation.


Quantitative improvement of initial prescriptor set: From the perspective of the initial set of fixed prescriptors, merit is seen in the way ESP combines and builds upon the raw material in these prescriptors to find even better sets of solutions (again, e.g., it discovers an expanded Pareto front).


One skilled in the art will appreciate that the system and process described herein are applicable to many scenarios. In one timely example, the application of distilled prescriptor models to generate responses to pandemic challenges is described in detail herein. In particular, distilled prescriptor models are used in ESP to optimize the tradeoff between new cases of infection and policy intervention cost for managing the COVID-19 pandemic.


The pandemic response generation system and method, according to one exemplary embodiment, develop models that predict local outbreaks with more accuracy, along with prescriptive intervention and mitigation approaches that minimize infection cases and economic costs. Further, it improves the human-developed solutions and the evolutionary optimization system, by integrating the human-developed solutions into the evolutionary process by a system of distillation. It therefore allows for a more tightly coupled human-AI collaboration. A secondary advantage of the approach is that it supports such improvements even in the absence of direct access to the original human-developed models. That is, the process relies only on a data set of sample input-output pairs collected from these models. In essence, the distilled and evolved prescriptors foster an ecosystem that makes it easier to implement accurate and rapid prescriptions and enable ongoing improvements to the model as new interventions, treatments, and vaccinations become available.


The open platform for experimentation enabled increased and higher-quality data, accurate predictions, stronger regional intervention plans, and continual improvement as new interventions such as vaccinations and treatments become available. It provides a platform for shared human and machine creativity and problem-solving that fosters innovation and provides for evidence-based decision making to combat future emergencies, ultimately functioning as a tool for future humanitarian crises.


The experiment initiates with gathering input-output pairs for each model, wherein the input does not need to be the input the model used when producing the output; the input must only be of a fixed form across the various models and consist of information the model could have used to produce its output. Now, for each model, a neural network of a fixed given form is trained to mimic that model's behavior in a supervised fashion on the input-output pairs, i.e., the model is distilled into a neural network. All of these trained neural networks are then placed into the initial population to optimize the same application the initial human-developed models were developed to solve.


In accordance with one illustrative embodiment, the distilled models are trained using Keras and TensorFlow, which are APIs well-known to those skilled in the art for building and training models. They are trained with a batch size of 32, for up to 100 epochs, with early stopping if the validation loss does not improve after 5 epochs. The validation loss used is ordinal MAE (mean absolute error), i.e., MAE of outputs rounded to the nearest integer, since in this application outputs are required to be integral in each range. A key requirement of the process is that all human-developed models adhere to the same prescriptor API. This adherence allows distillation of all human-developed models into neural networks with equivalent structure. The distilled models are trained directly in a supervised fashion without access to the human-developed models themselves. Thus, they use only the input-output examples from human-developed models, and not the auxiliary information that is used by alternative distillation methods.
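A hedged sketch of this training configuration follows; the `model` argument stands in for any distilled network, and the ordinal-MAE metric is shown on the integer stringency scale, which is an assumption about how the rounding is applied:

```python
import tensorflow as tf

def ordinal_mae(y_true, y_pred):
    # MAE after rounding outputs to the nearest integer stringency level.
    return tf.reduce_mean(tf.abs(y_true - tf.round(y_pred)))

def train_distilled_model(model, x_train, y_train, x_val, y_val):
    model.compile(optimizer="adam", loss="mae", metrics=[ordinal_mae])
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_ordinal_mae", patience=5, restore_best_weights=True)
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              batch_size=32, epochs=100,   # values stated in the text
              callbacks=[early_stop])
    return model
```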


To illustrate this idea, a phasic predictor model is developed. In phase one, the goal is to devise a predictor model that provides accurate, localized predictions of COVID-19 transmission based on local data, unique intervention strategies, community resilience characteristics, and mitigation policies and practices. Specifically, the history of cases and interventions in a country or region is used as input to predict the number of cases likely in the future. A sample dataset comprised of case and intervention plan data, along with example predictors (not region-specific), is utilized in phase one to develop predictors. In one exemplary embodiment, these example predictors may include a linear regressor and a long short-term memory (LSTM) based predictor network. Further, intervention plans may include school and workplace closure policies, travel restrictions, testing, and contact tracing. Furthermore, data from a plurality of data sources can be retrieved from government organizations, demographic or economic data, data on healthcare factors, social distancing, adherence to policies, and more to create a unique dataset. Next, a predictor is generated based on the novel and exhaustive dataset derived from the above, which is utilized in phase two for prescriptor development.
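As one hedged illustration, an LSTM-based predictor of the kind mentioned above might be sketched as follows; the layer sizes and the feature layout (daily cases plus the twelve IP settings) are assumptions for illustration, not specifics of the disclosure:

```python
import tensorflow as tf

def build_lstm_predictor(lookback_days=21, n_features=13):
    # n_features assumes daily case counts plus twelve IP stringency settings.
    inputs = tf.keras.Input(shape=(lookback_days, n_features))
    x = tf.keras.layers.LSTM(32)(inputs)    # summarize the recent history
    outputs = tf.keras.layers.Dense(1)(x)   # predicted new cases
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")
    return model
```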


In one example embodiment, generality of a predictor model is assessed, wherein the predictor takes as input the active and historical intervention plans for each region and outputs a prediction for all regions. Performance on specialty regions is evaluated based on the output for those regions. A predictor submission can consist of multiple models, for example those specializing in different regions, which can be accessed through the same call. Thus, a predictor can estimate the number of future cases for a given region(s), considering the local intervention plans in effect from a created dataset over a given time. In one preferred embodiment, the predictor outputs include optional fields, such as confidence intervals, death rates, hospitalization rates, ventilators needed, and other outputs.


At the conclusion of phase one of the predictor development, as indicated above, the generated predictor is evaluated against live data for a predetermined evaluation period on all regions and then separately on the specialty regions. Once its quantitative accuracy has been validated over the long term for the specific regions for which intervention plans are produced, development transitions to the next phase, prescriptor development, in phase two.


In the second phase, prescriptors are developed, encompassing rapid creation of custom, non-pharmaceutical and other intervention plan prescriptions and mitigation models to help decision-makers minimize COVID-19 infection cases while lessening economic and other negative implications of the virus. For example, machine-generated prescriptions may provide policymakers and public health officials with actionable locally based, customized, and least restrictive intervention recommendations, such as mandatory masks and reduced restaurant capacity.


During phase two, prescriptor development involves use of machine learning to make more accurate recommendations to stakeholders. Here, intervention plans are prescribed by the model that simultaneously minimizes the number of future cases as well as the stringency (i.e., economic and quality-of-life cost) of the recommended interventions. Thus, based on a time sequence of the number of cases in a region and the past intervention plans in place, prescriptor models (for any region) are developed that generate useful intervention plans that policy makers can implement for their region. Each prescriptor balances a tradeoff between two objectives: minimizing the number of daily COVID-19 cases while minimizing the stringency of the recommended interventions (as a proxy to their economic and quality-of-life cost).


As understood, intervention plan costs can differ across regions. For example, closing public transportation may be much costlier in London than it is in Los Angeles. Such preferences are expressed as weights associated with each intervention plan dimension, given to the prescriptor as input for each region. The prescriptor recommendations along the stringency objective are evaluated according to these weights, so the prescriptor model should consider them in making recommendations. This is a significant aspect for two reasons: (1) such prescriptors can be more readily customized to a particular region for future live site testing that may occur, making it easier to adopt them, and (2) this is a new technical challenge beyond the current state of the art, promoting scientific advances in machine learning. Prescriptors are developed and evaluated separately both in the base case of equal weights and in the more advanced case where the weights are chosen randomly.


Also during phase two, instead of being evaluated against a stream of live data (real-world recommendations), the prescriptors are evaluated using a standard predictor model from phase one and a collection of neural networks representing different trade-offs between COVID-19 cases and the stringency of the intervention plan. The prescriptor models are general and not specific to any region. The aim is to develop improved prescriptors that are either general or region-specific, based on selection of any of machine learning or other methods.


Thus, prescriptions may be generated through a variety of approaches. A possible approach may involve the following: a prescription is generated for each day, and a predictor is asked to predict the number of cases for the next day. The generated intervention plans (“IPs”) and the predicted cases then become input to the prescriptor and the predictor for the next day. In this manner, the prescriptions can be rolled out day-by-day indefinitely into the future.


Another possible prescriptor generation approach involves a schedule of intervention plans generated over several days, weeks, or months based on the case and intervention plan history up to that point, consulting the predictor only occasionally. Overall, the attempt is to create models that predict the course of the COVID-19 pandemic and prescribe non-pharmaceutical interventions (NPIs) that would help with mitigation for all regions. Evaluation on specialty regions is based on the output for those regions. Note that the prescriptor submission can comprise multiple models, such as those specializing in different regions, which can be accessed through the same call. Here again the prescriptor is evaluated in the same manner as the predictor, except the evaluation is now based on a much longer period of unseen/live data.


The prescriptors are evaluated based on the estimates that the predictor model makes on all regions and separately on specialty regions. For a given simulation period, e.g., 60-90 days, the prescriptor is called with the date and weights, obtaining prescriptions for each region. Two primary quantities are then computed: an estimate of the number of cases for each region; and the total intervention plan stringency for each region, calculated with the specified weights for the region. The weights for each region are drawn from a uniform distribution within [0 . . . 1] and normalized to sum up to one. The prescriptor's performance in this region is then calculated as the number of other prescriptors it Pareto-dominates (i.e., is better along both objectives) in this space.
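A minimal sketch of this scoring rule, assuming each solution is summarized as a (cases, stringency) pair with both objectives minimized:

```python
def dominates(s1, s2):
    """True if s1 = (cases, stringency) Pareto-dominates s2."""
    return (all(x <= y for x, y in zip(s1, s2))
            and any(x < y for x, y in zip(s1, s2)))

def domination_score(candidate, others):
    # Performance = number of other prescriptors the candidate dominates.
    return sum(dominates(candidate, other) for other in others)
```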


Next, a second level of quantitative assessment is based on how well it may serve as a stepping stone in creating improved prescriptors through further collaborative machine learning—i.e., a population-based search—in the following process:

    • 1. The prescriptor is distilled into an equivalent neural network via supervised learning of the prescriptions it made, as specified above in the first assessment. The prescriptor is queried with a “syllabus” of situations (case and IP history and stringency evaluation weights) to obtain a training set, and a neural network similar to the evolved prescriptor samples is then trained with it.
    • 2. The network is then inserted into the prescriptor population along with all other submissions for that region.
    • 3. The prescriptors are evolved further, optimizing the two objectives specified in first assessment.
    • 4. In the final Pareto front, evaluation is based on the number of descendants of the prescriptor. The greater the number of descendants, the higher the evaluation score.


      Thus, the present method is used to combine and improve the prescriptor submissions, and then the results are evaluated.


Here, the models are made usable in real-world settings, providing interactivity and actionability in a visual and well-communicated format. The model may also take into consideration vulnerable groups that may include the unemployed, working poor, unhoused individuals, children, the elderly, people with disabilities, ethnic minorities, and other marginalized groups. Hence, the given prescriptor model enables prediction of local outbreaks with more accuracy, along with prescriptive intervention based on the above-discussed predictor-prescriptor model approach. The analysis of the experimental runs also shows that ESP achieves a systematic mixing of the prescriptor models which is consistent across multiple runs, suggesting the method is able to reliably take advantage of the initial prescriptors, leading to new innovative solutions.


In the following particular example, the detailed general description above of using the D&E framework to search for optimized policies for curbing pandemic impacts is applied to the global challenge of determining optimal responses to the COVID-19 pandemic. Specifically, in this example, the framework is applied to automate the development of further solutions to the COVID-19 pandemic using multiple, disparate and diverse solutions initially developed by human experts.


As a starting point, the initial prescriptor solution set is comprised of solutions submitted as part of the XPRIZE Pandemic Response Challenge. By way of background, the XPRIZE challenge was run over a period from October 2020 through March 2021. The goal of the challenge was to motivate experts around the world to develop automated tools to help policy-makers make better, more informed decisions about how to respond to a fast-moving pandemic. Compared to human-only decision-making, such tools could better take advantage of the broad swaths of data that were being made available at the time. More than 100 teams from 38 countries participated in the challenge, submitting high-performing models with highly diverse implementations. The XPRIZE consisted of the two development phases described above. That is, in Phase 1, teams were tasked with developing predictors to predict the change in new COVID-19 cases in approximately 200 geographic regions (hereafter “Geos”) around the world given schedules of planned IPs for each Geo. The top teams then moved on to Phase 2, in which they were tasked with developing prescriptors to prescribe schedules of policies for governments to trade off between the number of predicted new cases (computed w.r.t. a predictor) and the cost of implementing the policies (e.g., economic, social, or political cost). The XPRIZE formal problem definition, requirements, API, and code utilities for the challenge are publicly available. The following introductory document describes the challenge at a high level and is incorporated herein by reference in its entirety: XPRIZE Challenge Guidelines, Jan. 25, 2020.


IPs are defined by levels of stringency of various policy areas, including restrictions on schools, workplaces, the size of gatherings, and international travel, along with approaches such as public information campaigns and mask requirements. These IPs, along with the meanings of their integer stringency levels, are defined in a data set collected by Oxford University (hereafter “Oxford data set”). The following working document provides information regarding data collection and location of data sets and is incorporated herein by reference: Hale, Thomas, et al., “Variation in Government Responses to COVID-19” Version 13.0. Blavatnik School of Government Working Paper. 11 Mar. 2022. The Oxford data set contains collected values for each of these IPs for over 200 Geos (e.g., country, U.S. state, . . . ) since the beginning of the pandemic, i.e., January 2020. Ground truth data on new cases across these same Geos is also provided in this data set. For the challenge, there were twelve categories of IPs (a subset of the total set of policies in the Oxford data set), each of which could take on up to five values, which can be ordered in terms of their stringency, and are assigned integer values from 0 to 4. The challenge IP set is shown below in Table 1.


TABLE 1

| NPI Name | Level 0 | Level 1 | Level 2 | Level 3 | Level 4 |
|---|---|---|---|---|---|
| C1_School closing | no measures | recommend closing or all schools open with alterations resulting in significant differences compared to non-Covid-19 operations | require closing (only some levels or categories, e.g. just high school, or just public schools) | require closing all levels | no data |
| C2_Workplace closing | no measures | recommend closing (or recommend work from home) or all businesses open with alterations resulting in significant differences compared to non-Covid-19 operation | require closing (or work from home) for some sectors or categories of workers | require closing (or work from home) for all-but-essential workplaces (e.g. grocery stores, doctors) | no data |
| C3_Cancel public events | no measures | recommend cancelling | require cancelling | no data | no data |
| C4_Restrictions on gatherings | no restrictions | restrictions on very large gatherings (the limit is above 1000 people) | restrictions on gatherings between 101-1000 people | restrictions on gatherings between 11-100 people | restrictions on gatherings of 10 people or less |
| C5_Close public transport | no measures | recommend closing (or significantly reduce volume/route/means of transport available) | require closing (or prohibit most citizens from using it) | no data | no data |
| C6_Stay at home requirements | no measures | recommend not leaving house | require not leaving house with exceptions for daily exercise, grocery shopping, and ′essential′ trips | require not leaving house with minimal exceptions (e.g., allowed to leave once a week, or only one person can leave at a time, etc.) | no data |
| C7_Restrictions on internal movement | no measures | recommend not to travel between regions/cities | internal movement restrictions in place | no data | no data |
| C8_International travel controls | no restrictions | screening arrivals | quarantine arrivals from some or all regions | ban arrivals from some regions | ban on all regions or total border closure |
| H1_Public information campaigns | no Covid-19 public information campaign | public officials urging caution about Covid-19 | coordinated public information campaign (e.g. across traditional and social media) | no data | no data |
| H2_Testing policy | no testing policy | only those who both (a) have symptoms AND (b) meet specific criteria (e.g. key workers, admitted to hospital, came into contact with a known case, returned from overseas) | testing of anyone showing Covid-19 symptoms | open public testing (e.g. ″drive through″ testing available to asymptomatic people) | no data |
| H3_Contact tracing | no contact tracing | limited contact tracing; not done for all cases | comprehensive contact tracing; done for all identified cases | no data | no data |
| H6_Facial Coverings | no policy | recommended | required in some specified shared/public spaces outside the home with other people present, or some situations when social distancing not possible | required in all shared/public spaces outside the home with other people present or all situations when social distancing not possible | required outside the home at all times regardless of location or presence of other people |

In Phase 1, submitted predictors would take as arguments a set of Geos and a range of dates to predict for, along with the future settings (i.e., policy prescriptions) for each of the twelve IPs at those dates. The prediction date range could be up to 90 days. In Phase 2, the teams were presented with a reference predictor ϕ, and developed prescriptors to generate future policy schedules, depending on any historical context deemed necessary, e.g., past cases and past policies. These schedules could then be fed into ϕ to produce estimates of new cases in a particular Geo. Formally, each prescriptor program π takes as its argument a query q, consisting of the Geo and date range to predict for, and produces a matrix of actions A∈{0, . . . , 4}^{T×12}, where T is the length of the date range in days (up to 90 days). That is, A consists of the setting of each IP for each of T future days.


These prescriptors are evaluated in a two-dimensional objective space. Their goal is to minimize the number of new cases, while simultaneously minimizing the cost of implementing their prescribed policies. The aggregated metric for the number of new cases was simply the sum or mean over the date range. The cost is more challenging to aggregate, in that different Geos at different times may have different relative social or economic costs of implementing each IP. For the challenge, the official judges developed various cost settings for each IP, which were fed as input to prescriptors to evaluate their ability to adapt to different relative costs. As a neutral benchmark, prescriptors were also evaluated with uniform costs, i.e., the IP settings were simply summed across IPs and averaged over time. This uniform weighting makes evaluation simpler and more interpretable, so that the different methodologies of different prescriptors can be usefully compared. For clarity of analysis, and to avoid incorporating additional highly uncertain and variable information into the work, it is this uniform cost setting that is considered in this description. In this setting, the cost of a particular policy setting falls in a range from cmin=0 (no IPs used) to cmax=34 (all IPs set to their most stringent settings). Since there are competing objectives, teams were allowed to submit multiple prescriptors to cover the space of tradeoffs between reducing the cost of IPs and reducing cases. Ideally, the set of prescriptors submitted by a team would result in a Pareto front of solutions, which would give a policy-maker a clear space of tradeoffs from which to select a solution.
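Under the uniform setting, the cost aggregation reduces to a one-line computation; a sketch, assuming `actions` is the T×12 matrix of integer IP settings:

```python
import numpy as np

def uniform_policy_cost(actions):
    """Average daily cost: IP settings summed across the 12 IPs and
    averaged over time; ranges from c_min = 0 to c_max = 34."""
    return float(np.mean(np.sum(actions, axis=1)))
```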


All in all, 169 prescriptors (solutions) were submitted. These solutions spanned the entire tradeoff space, and dramatically outperformed the competition baselines. A broad array of different approaches was used across different teams, including hand-coded rules, machine learning approaches like random forests and neural networks, epidemiological models, hybrid approaches of all of these, and evolutionary algorithms.


In this particular example, these 169 prescriptors are the starting point for application of the D&E process of the framework shown in the diagram of FIG. 1b. In the D&E process, first, queries q1, . . . , qnq are each fed into each of a set of preexisting black-box decision-making programs π1, . . . , πnπ, yielding a dataset of actions Ai for each program πi. With the help of the environment predictor model ϕ, a corresponding dataset of contexts Ci is derived, which has a fixed form suitable for input to a neural network. The training dataset (Ai, Ci) is then used to train a neural network π̂i that approximates πi, i.e., π̂i is a distilled version of πi. The distilled models π̂1=π1^0, . . . , π̂nπ=πnπ^0 are then placed in the initial population of an evolutionary algorithm (EA), along with randomly-generated solutions πnπ+1^0, . . . , πnp^0. The EA uses ϕ to evaluate the behavior of π1^0, . . . , πnp^0, and then generates the next population π1^1, . . . , πnp^1. This evaluate-generate process is repeated ng times, yielding the set of final evolved solutions π1^ng, . . . , πnp^ng. Through this process, evolution discovers improvements to the initial set of black-box programs, and, by bootstrapping the prior knowledge stored in these programs, evolution discovers solutions it would not have discovered from scratch.


In distillation, the goal is to fit a model with a fixed functional form to capture the behavior of each initial solution, by solving the following minimization problem:











$$\theta_i^{*}=\arg\min_{\theta_i}\int_{Q}p(q)\,\bigl\lVert\pi_i(q)-\hat{\pi}_i\bigl(\kappa(\pi_i(q),\phi);\theta_i\bigr)\bigr\rVert_1\,dq\;\approx\;\arg\min_{\theta_i}\frac{1}{n_q}\sum_{j=1}^{n_q}\bigl\lVert\pi_i(q_j)-\hat{\pi}_i\bigl(\kappa(\pi_i(q_j),\phi);\theta_i\bigr)\bigr\rVert_1,\tag{1}$$







where q∈Q is a query, πi is the initial solution, π̂i is the distilled model with learnable parameters θi, and κ is a function that maps queries (which may be specified via a high-level API) to input data with a canonical form that can be used to train π̂i. In practice, π̂i is trained by optimizing Eq. (1) with stochastic gradient descent using data derived from the nq queries for which data is available. Beyond the standard assumptions required for generalization in supervised learning, the key assumption required for distillation to be effective is that there exists θi* such that π̂i(κ(πi(q),ϕ); θi*)≈πi(q). This assumption is met as long as κ is expressive enough to yield contexts that (approximately) uniquely identify the state of the world that πi uses to generate its actions πi(q), and π̂i is expressive enough to (approximately) capture the functionality of πi. This distillation procedure is captured on the left side of FIG. 1b.


In the specific example described herein, the choices of κ and π̂(·; θ) enable distillation to sufficiently capture the behavior of the initial existing solutions, by choosing κ to generate real-valued time-series data, and letting the π̂i be neural networks.


Next, once each of the nπ human-developed models πi has been distilled via Eq. (1) into its respective approximation π̂i, the {π̂i}i=1^nπ can be placed into the initial population of an evolutionary process, so that they can be recombined to discover further solutions. Say the evolutionary algorithm (EA) has population size np≥nπ. In standard evolution, the population is initialized with all random solutions. Here, the population is instead initialized with solutions π1^0=π̂1, . . . , πnπ^0=π̂nπ, πnπ+1^0, . . . , πnp^0, where πnπ+1^0, . . . , πnp^0 are generated randomly. Along with their technically required role of filling out the initial population, these random solutions can provide an additional reservoir of exploratory resources for the algorithm. The algorithm then iterates over the following steps:






$$f_i^j=\mathrm{Evaluate}(\pi_i^j,\phi)\quad\forall\,i\in 1,\dots,n_p.\tag{2}$$

$$\{(\pi_k^j,f_k^j)\}_{k=1}^{K}=\mathrm{Refine}\bigl(\{(\pi_i^j,f_i^j)\}_{i=1}^{n_p}\bigr),\;\text{where }\{(\pi_k^j,f_k^j)\}_{k=1}^{K}\subset\{(\pi_i^j,f_i^j)\}_{i=1}^{n_p}.\tag{3}$$

$$\{\pi_i^{j+1}\}_{i=1}^{n_p}=\mathrm{Generate}\bigl(\{(\pi_k^j,f_k^j)\}_{k=1}^{K}\bigr).\tag{4}$$


In the Evaluate step, objective (or fitness) values f (a vector when there are multiple objectives) are computed for each solution in the current population, using the environment predictor ϕ. In the Refine step, based on these objective values, the population is refined to only include the most promising solutions. In the Generate step, new solutions are generated by combining and perturbing solutions in this refined set, so that there are again np solutions in the population. One iteration through these steps is termed a generation. The process terminates after ng generations. This process is depicted on the right of FIG. 1b.
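A compact sketch of this loop, assuming `evaluate`, `refine`, and `generate` callables that behave as Eqs. (2)-(4) describe:

```python
def run_evolution(distilled, random_solutions, phi,
                  evaluate, refine, generate, n_generations):
    # Initial population: distilled models plus random solutions.
    population = distilled + random_solutions
    for _ in range(n_generations):
        # Evaluate step, Eq. (2): fitness of each solution via predictor phi.
        fitnesses = [evaluate(p, phi) for p in population]
        # Refine step, Eq. (3): keep only the most promising solutions.
        elites = refine(list(zip(population, fitnesses)))
        # Generate step, Eq. (4): refill the population from the elites.
        population = generate(elites, len(population))
    return population
```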


The Evaluate and Refine steps can generally be implemented independently of model representation. In this example, since neural networks are used to represent the {circumflex over (π)}i, there is a plethora of possible methods to choose from to implement the Generate step. An established method is used which immediately supports the use of a predictor in evaluation, and which was previously used to evolve prescriptors for IP prescription from scratch, i.e., without taking advantage of distilled models. However, one skilled in the art will appreciate that due to the inherent flexibility of evolutionary algorithms, for any canonical form chosen for the distillation step, it is possible to devise appropriate implementations of Generate in the evolution step.


In this particular example, these 169 prescriptors were distilled into an evolvable neural network architecture equivalent to one previously used to evolve prescriptors from scratch in this domain, as described in commonly owned U.S. patent application Ser. No. 17/355,971 (hereafter “'971 application”), which is incorporated herein by reference in its entirety. Each distilled prescriptor is a multilayer perceptron (“MLP”) with a single hidden layer of 32 units with tanh activation and orthogonal weight initialization. The MLP has one output for each IP, which also uses tanh activation, and which is then discretized to yield an integer stringency setting. In addition to the 8 Containment and closure IPs referenced in the '971 application, 4 additional IPs from the Health systems IPs listed in an updated Oxford data set were used in the XPRIZE Pandemic Response Challenge, as shown in Table 1. The input to the neural network is COVID-19 case data for the previous 21 days.
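A hedged Keras sketch of this architecture follows; the `discretize` helper is an illustrative assumption, since the disclosure states only that the tanh outputs are discretized to integer settings:

```python
import numpy as np
import tensorflow as tf

def build_prescriptor_mlp(n_inputs=21, n_ips=12):
    # Single hidden layer of 32 tanh units with orthogonal initialization,
    # and one tanh output per IP, as described above.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(32, activation="tanh",
                              kernel_initializer="orthogonal"),
        tf.keras.layers.Dense(n_ips, activation="tanh",
                              kernel_initializer="orthogonal"),
    ])

def discretize(outputs, max_levels):
    # Map tanh outputs in [-1, 1] to integer stringency settings in
    # [0, max_level] for each IP (illustrative choice of mapping).
    scaled = (np.asarray(outputs) + 1.0) / 2.0 * np.asarray(max_levels)
    return np.clip(np.round(scaled), 0, max_levels).astype(int)
```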


The case data was presented as cases per 100K residents. This input was found to allow distilled models to fit the training data much more closely than the modified growth rate used in previous work. This improved training is due to the fact that cases per 100K gives a more complete picture of the state of the pandemic; the epidemiological-model-inspired ratio used in prior work explicitly captures the rate of change in cases, but makes it difficult to deduce how bad an outbreak is at any particular moment. Since many diverse submitted prescriptors took absolute case numbers into account, including this information in the distillation process allows the distilled prescriptors to align more closely with their sources.


The output of the prescriptor neural network gives the prescribed IP settings for the next single day. Longer prescribed schedules are generated by autoregressively feeding the output of the prescriptor back into the predictor in a loop. Although it is possible to simplify prescriptions by limiting changes to less frequent periods than one day, here one day is used in order to accommodate the diverse policies of submitted prescriptors in the challenge, which were unconstrained.


The neural network π̂i is trained to generate IPs that match those of πi for day t, given cases for the previous 21 days t−21, . . . , t−1. The model can then generate multi-day rollouts by autoregressively feeding the generated IPs into the predictor ϕ to get predicted new cases for day t, which are used to update the input to π̂i.
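A sketch of this rollout, assuming a predictor callable `phi` that returns predicted new cases for day t given the 21-day case window and that day's IP settings (a simplification of the actual predictor interface):

```python
import numpy as np

def rollout(prescriptor, phi, case_history, horizon_days):
    """Autoregressively generate an IP schedule for `horizon_days` days."""
    history = list(case_history)            # daily cases up to day t-1
    schedule = []
    for _ in range(horizon_days):
        window = np.asarray(history[-21:])  # previous 21 days of cases
        ips = prescriptor(window)           # IP settings for day t
        schedule.append(ips)
        new_cases = phi(window, ips)        # predicted cases for day t
        history.append(new_cases)           # becomes input for day t+1
    return np.asarray(schedule)
```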


Data for training π̂i was gathered by collecting the prescriptions made by πi in the XPRIZE Pandemic Response Challenge. Data was gathered for all prescriptions made with uniform IP weights. This consisted of five date ranges, each of length 90 days, and 197 Geos, resulting in ≈100K training samples for each prescriptor; a random 20% of these was used as a validation set for early stopping.


More formally, each (date range, Geo) pair defines a query q, with πi(q)∈{0, . . . , 4}^{90×12} the policy generated by πi for this Geo and date range. The predicted daily new cases for this Geo and date range given this policy is ϕ(πi(q))∈ℝ^{90}. Let h be the vector of daily historical new cases for this Geo up until the start of the date range. This query leads to 90 training samples for π̂i: for each day t, the target is the prescribed actions of the original prescriptor πi(q)t, and the input is the prior 21 days of cases (normalized per 100K residents), taken from h for days before the start of the date range and from ϕ(πi(q)) for days in the date range.


These models were implemented and trained in Keras using the Adam optimizer. Mean absolute error (MAE) was used as the training loss (since policy actions were on an ordinal scale, with targets normalized to the range [0, 1]).


The method was implemented inside of the LEAF ESP framework, which was previously used to evolve prescriptors for IP prescription from scratch, i.e., without taking advantage of distilled models, as described in U.S. patent application Ser. No. 16/831,550 and the '971 application, which are incorporated herein by reference. The distillation above results in evolvable neural networks π̂1, . . . , π̂nπ which approximate π1, . . . , πnπ, respectively. These distilled models were then placed into the initial population of a run of ESP, whose goal is to optimize actions given contexts. In standard ESP, the initial population (i.e., before any evolution takes place) consists only of neural networks with randomly generated weights. By replacing random neural networks with the distilled neural networks, ESP starts from diverse high-quality solutions, instead of low-quality random solutions. ESP can then be run as usual from this starting point.


In order to give distilled models a fair chance to reproduce, the population removal percentage was set to 0%, so that solutions could only be replaced once better ones were generated. Also, since the experiments were run as a quantitative evaluation of teams in the XPRIZE competition, distilled models were selected for reproduction with probability inversely proportional to the number of submitted prescriptors for that team. This inverse-proportional sampling creates fair sampling at the team level.


A baseline experiment of running evolution from scratch, with a randomly initialized population instead of distilled models, was also run. Ten independent evolutionary runs of 100 generations each were run for both the distill & evolve and evolutionary baseline settings.


The task for evolution was to prescribe for 90 days starting on Jan. 10, 2021 for the 20 regions with the most total deaths. Internally, ESP uses the Pareto-based selection mechanism from NSGA-II to handle multiple objectives.


There are many ways to evaluate multi-objective optimization methods. In this description, we compare Pareto fronts. Quantifying performance in this manner is believed to be most useful to a real-world decision maker, because, ideally, the metrics are interpretable and have immediate implications for which method would be preferred in practice.


Each solution generated by each method m in the set of considered methods M yields a policy with a particular average daily cost c∈[0,34] and a corresponding number of predicted new cases a≥0. Each method returns a set of solutions which yield a set of objective pairs Sm={(ci, ai)}i=1^Nm. Following the standard definition, one solution s1=(c1, a1) is said to dominate another s2=(c2, a2) if and only if





(c1<c2∧a1≤a2)∨(c1≤c2∧a1<a2),


i.e., it is at least as good on each metric and better on at least one. If s1 dominates s2, we write s1≥s2. The Pareto front Fm of method m is the subset of all si=(ci, ai)∈Sm that are not dominated by any sj=(cj, aj)∈Sm. The following metrics are considered and discussed briefly below: hypervolume (HV); hypervolume improvement (HVI); domination rate (DR); maximum case reduction (MCR); tradeoff coverage rate (TCR) and posterior tradeoff coverage rate (PTCR).
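Before turning to the metrics, a direct sketch of extracting the Pareto front Fm from a solution set Sm of (cost, cases) pairs using this definition:

```python
def pareto_front(solutions):
    """Return the subset of (cost, cases) pairs not dominated by any other."""
    def dominates(s1, s2):
        (c1, a1), (c2, a2) = s1, s2
        return (c1 < c2 and a1 <= a2) or (c1 <= c2 and a1 < a2)
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```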


Dominated hypervolume is the most common general-purpose metric used for evaluating multi-objective optimization methods. Given a reference point in the objective space, the hypervolume is the amount of dominated area between the Pareto front and the reference point. The reference point is generally chosen to be a “worst-possible” solution, so the natural choice here is the point with maximum IP cost and number of cases reached when all IPs are set to 0. Call this reference point so=(co, ao). Formally, the hypervolume is given by





HV(m)=∫1[∃s*∈Fm:s*≥s∧s≥so]ds,  (5)


where 1 is the indicator function. Note that HV can be computed in time linear in the cardinality of Fm. The remaining metrics are relative, in the sense that they are computed with respect to the solutions generated by alternative methods.
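For the two-objective case here, the hypervolume of Eq. (5) reduces to summing rectangles along the sorted front; a sketch, with the reference point so=(co, ao) passed as a (cost, cases) pair (linear in |Fm| after the initial sort):

```python
def hypervolume(front, reference):
    """2-D dominated hypervolume of a (cost, cases) Pareto front."""
    c_ref, a_ref = reference
    hv, prev_a = 0.0, a_ref
    for c, a in sorted(front):              # ascending cost, descending cases
        hv += (c_ref - c) * (prev_a - a)    # rectangle dominated by (c, a)
        prev_a = a
    return hv
```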


HVI is simply the improvement in hypervolume compared to the Pareto front Fmo of a reference method mo:





HVI(m)=HV(m)−HV(mo).  (6)


The point of this metric is to normalize for the fact that the raw hypervolume metric is often dominated by empty unreachable solution space.


DR goes by other names such as “Two-set Coverage.” It is the proportion of solutions in a reference front Fmo that are dominated:










$$\mathrm{DR}(m)=\frac{1}{\lvert F_{m_o}\rvert}\cdot\Bigl\lvert\bigl\{\,s^{o}\in F_{m_o}:\exists\,s\in F_m:s\ge s^{o}\,\bigr\}\Bigr\rvert.\tag{7}$$







The above generic multi-objective metrics can be difficult to interpret from a policy-implementation perspective, since, e.g., hypervolume is in units of cost times cases, and the domination rate can be heavily biased by where solutions on the reference Pareto front tend to cluster.


The following two metrics are more interpretable, and thus more directly usable by users of such a system. MCR is the maximum reduction in number of cases that a solution on a Pareto front gives over the reference front:





MCR(m) = max{ao − a* : so = (co, ao) ∈ Fmo, s* = (c*, a*) ∈ Fm, s* ≥ so}.  (8)


This means there is a solution in Fmo whose number of cases could be reduced by MCR(m) with no increase in cost. If MCR is high, then there are solutions on the reference front that can be dramatically improved. TCR captures how often a decision-maker would prefer solutions from one particular Pareto front among many. Say a decision-maker has a particular cost they are willing to pay when selecting a policy. The tradeoff coverage rate for a method is the proportion of costs whose nearest solution on the combined Pareto front F* (the Pareto front computed from the union of all Fm, m ∈ M) belongs to that method's front Fm:











TCR(m) = (1/(cmax − cmin)) ∫[cmin, cmax] 1[arg min s*∈F* ∥c − c*∥ ∈ Fm] dc,  (9)







where s* = (c*, a*). Here, cmin = 0 and cmax = 34, since 34 is the sum of the maximum settings across all IPs. Note that TCR can be computed in time linear in the cardinality of F*. TCR gives a complete picture of the preferability of each method's Pareto front, but is agnostic as to the real preferences of decision-makers. In other words, it assumes a uniform distribution over cost preferences. The final metric adjusts for empirical estimates of such preferences, so that the result is more indicative of real-world value.
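A sketch of this computation, discretizing the integral over the cost range (the grid resolution and data layout are assumptions):

    import numpy as np

    def tcr(front_m, combined_front, c_min=0.0, c_max=34.0, steps=1000):
        """Approximate tradeoff coverage rate: the fraction of costs in
        [c_min, c_max] whose nearest solution (by cost) on the combined
        Pareto front F* belongs to method m's front."""
        costs_star = np.array([c for c, _ in combined_front])
        from_m = np.array([s in front_m for s in combined_front])
        grid = np.linspace(c_min, c_max, steps)
        nearest = np.abs(grid[:, None] - costs_star[None, :]).argmin(axis=1)
        return from_m[nearest].mean()

    # Hypothetical fronts: the combined front has one solution not from m.
    f_star = [(0.0, 900), (10.0, 400), (20.0, 300)]
    print(tcr(front_m=[(10.0, 400), (20.0, 300)], combined_front=f_star))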


PTCR adjusts the TCR by the real-world distribution of cost preferences, estimated by their empirical probabilities p̂(c) at the same date across all geographies of interest:





PTCR(m) = ∫[cmin, cmax] p̂(c) · 1[arg min s*∈F* ∥c − c*∥ ∈ Fm] dc.  (10)


Note that TCR and PTCR are particular instantiations of the R1 metric, which is abstractly defined as the probability of selecting solutions from one set versus another given a distribution over decision-maker utility functions. In other words, PTCR estimates the percentage of time a decision-maker with a fixed stringency budget would choose a prescriptor from a given approach among those from all approaches. For D&E, PTCR is nearly 100%.
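A sketch of this preference-weighted variant, re-using the same nearest-solution assignment as the TCR sketch above; the preference function p_hat is a placeholder assumption:

    import numpy as np

    def ptcr(front_m, combined_front, p_hat, c_min=0.0, c_max=34.0, steps=1000):
        """Approximate PTCR: TCR weighted by empirical cost preferences."""
        costs_star = np.array([c for c, _ in combined_front])
        from_m = np.array([s in front_m for s in combined_front])
        grid = np.linspace(c_min, c_max, steps)
        nearest = np.abs(grid[:, None] - costs_star[None, :]).argmin(axis=1)
        w = np.array([p_hat(c) for c in grid])        # preference weights
        return float((w * from_m[nearest]).sum() / w.sum())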


First, a visualization of where the solutions of the different methods fall in the objective space is shown in FIGS. 2a, 2b, and 2c. FIG. 2a shows that D&E smoothly covers the tradeoff space. Some of the distilled models are high-performing, but most are significantly dominated by D&E. Raw evolution does reasonably well with high-cost models, but has difficulty with low-cost ones. These three methods all substantially outperform the random models at initialization, which form a dominated clump. FIG. 2b shows the Pareto front of each method: distillation pushes clearly out from evolution and initialization, while D&E pushes even further. FIG. 2c shows the solutions on the combined Pareto front, i.e., computed over the union of all methods. From this plot it is clear that D&E yields the preferred solutions over the vast majority of the tradeoff space. Overall, the distilled models substantially outperform both initialization and evolution, confirming the efficacy of the distillation process, and D&E does even better, validating the motivation for initializing evolution with the distilled models.


The results are measured quantitatively in Table 2; results are averaged over ten independent evolutionary runs.


TABLE 2

Method            TCR (%)    MCR      DR (%)    HVI
Initialization      0.00     −3458     0.00     −3.57e+6
Evolution           0.93      1795     6.06     −2.30e+6
Distillation        2.25         0     0.00      0
D&E                97.26     32452    73.94      1.39e+5
For metrics in the table that require a reference Pareto front to measure performance against (HVI, DR, and MCR), Distillation is used as this reference, since Distillation represents the human-developed solutions, and the goal is to compare the performance of Human+AI (D&E) to human alone. D&E provides quantitative improvements over Distillation across the board. Most strikingly, the TCR of D&E is nearly 100%, meaning that a user, selecting a solution from the Pareto front based on the cost they are willing to pay, will prefer D&E nearly all the time. The D&E models also strongly outperform models evolved from scratch. In fact, the distilled models alone clearly dominate evolution from scratch, showing how evolution strongly benefits from the knowledge of the human-developed solutions. By bootstrapping this knowledge from the distilled models, evolution is able to discover policies that it would not discover on its own.


For each of the twelve IPs, FIGS. 3a-3l show how an IP's stringency tends to increase as the overall cost of the policy increases, comparing the trend for the original submitted prescriptors with that of the prescriptors discovered with D&E. For some of the IPs, e.g., Restrictions on Internal Movement and International Travel Controls, the trends are very similar, but for others they are notably different. For example, both submitted and D&E models capture the fact that School Closing and Workplace Closing are high-impact interventions, in that their stringency increases quickly as cost moves from 0 to 5, but the preference of D&E models for these IPs is even stronger, almost unanimously maxing out their stringency by cost 10. On the flip side, the submitted models steadily increase the use of Facial Covering interventions, while the D&E models only use them once the stringency of other interventions is sufficiently high, reflecting real-world evidence that mask-wearing has the biggest relative impact once the pandemic has been sufficiently suppressed through other policies. Overall, these plots show that D&E is not simply “hacking” the predictor by making small trivial changes to distilled models, but is producing prescriptors that generate fundamentally different policies.


At the high level visible in FIGS. 2a, 2b, and 2c, it is clear that D&E models provide a more complete coverage of the overall cases vs. cost trade-off space. However, a user may have a more specific set of constraints to work within. For example, they may have strong preferences about which IPs they would like to reduce the stringency of. FIGS. 3a-3l show that across the Pareto front, the D&E models also provide a more flexible set of choices than the distilled models alone. For example, if the user wants to reduce Facial Coverings, there is a choice for them on the Pareto front. Notice that this flexibility is a side-effect of D&E, but such constraints could also be encoded explicitly in the evolutionary process.


Now that it is clear that D&E provides benefits over the distilled models, we can look at how D&E achieves this result. We first look at the suite of models at a high behavioral level, defining a simple notion of behavioral complexity. Many measures are possible here, but we choose one that is simple and interpretable: the number of times the prescribed policy changes over the prescribed time period, summed over all Geos and IPs. Formally, it is defined empirically as





Complexity(π) := Σg∈G Σi∈I Σt=1…T−1 1(π(q)git ≠ π(q)gi(t−1)),  (11)


where G is the set of Geos, I is the set of IPs, and T is the length of the prescriptions in days, with t = 0 indicating the first day of prescriptions and π(q)git denoting the prescribed setting for IP i in Geo g on day t.
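A sketch of this measure, assuming the prescribed schedule is stored as a mapping (geo, ip) -> list of daily stringency settings (the data layout is an assumption); the offset parameter k anticipates the generalization in Equation (12) below, with k = 1 recovering Equation (11):

    def complexity(schedule, k=1):
        """Number of policy changes relative to k days earlier, summed over
        all Geos and IPs. schedule: dict (geo, ip) -> list of T settings."""
        return sum(
            settings[t] != settings[t - k]
            for settings in schedule.values()
            for t in range(k, len(settings))
        )

    # Toy schedule: one Geo, one IP, weekend closures over two weeks.
    sched = {("Portugal", "C7"): [0, 0, 0, 0, 0, 3, 3] * 2}
    print(complexity(sched))   # -> 3 day-to-day changes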



FIGS. 4a-4c show the complexities of policies on the various Pareto fronts. FIG. 4a is a scatterplot showing the empirical behavioral complexity of policies on the Pareto front of each method, i.e., scratch, submitted, and evolved. On the right side of the plot there is firework-like behavior, where the D&E models have exploding complexity as restraints (IPs) are lifted. Overall, the evolved policies tend to have higher complexities than the distilled ones that are based on human-developed systems, but the D&E policies provide a mediating trade-off between the high structure of the human-developed policies and the high entropy of evolution from scratch. This plot highlights a key reason why evolution is able to discover policies that go beyond those prescribed by human-developed models: it explores in a space of higher complexity. FIG. 4b shows the distribution of real-world policy complexities in 90-day windows; these complexities are generally much lower than those of the policies suggested by D&E. FIG. 4c shows the most complex real-world policy for a single IP. It captures a weekly change in the C7 (Restrictions on Internal Movement) IP, showing that weekly policy changes are implementable in the real world. This observation motivates the version of D&E that prescribes policy only weekly, without sacrificing Pareto efficiency.



FIGS. 5a-5h are plots showing the full policies for 8 Geos of the D&E prescriptor that generates the most complex policies (darker colors indicate increased stringency). The complexities of the policies vary substantially by Geo, with some IPs changing frequently, while Iran (FIG. 5e) has fixed policies the entire time (possibly because official misreporting of case data in Iran causes IPs to look like noise). Although the more complex policies appear challenging to implement for policy-makers today, in an ideal society they would be implementable, helped by the fact that the policies encode a level of weekly periodicity, so each day of the week may have a particular IP setting assigned. Indeed, some real-world Geos have incorporated weekly periodicity into their policies (FIG. 4c). However, for implementation today, changing policies on a weekly basis (FIGS. 7a-7h) may be more practical, with marginal reduction in Pareto performance, and in line with the complexity of real-world policies (FIGS. 8a-8h). In other words, although one of the ways D&E produces innovation is through its discovery of highly complex policies, such complexity is not required for the approach to work.


Looking at the most complex real-world policy, we notice that it contains weekly periodicity: during this period, internal movement in Portugal was limited on weekends, with the restrictions relaxed during the week. So, we check whether some of the complexity in the evolved D&E policies is due to an inherent weekly periodicity, and find that this is indeed the case, i.e., evolution settles into this periodicity. To detect and measure such periodicity, we generalize the Complexity metric defined above to any temporal offset, so that it measures how often the policy changes from what it was k days before, instead of just one day before. Formally,





Complexity(π, k) := Σg∈G Σi∈I Σt=k…T−1 1(π(q)git ≠ π(q)gi(t−k)).  (12)


In other words, assuming a periodicity of k days, how often does the policy change? Across the board, the evolved policies have their complexity minimized at k = 7, indicating that their natural periodicity is one week. It is notable that evolution arrives at such a structured periodicity, one that has been found useful in the real world, captures some inherent structure of human organization, and could potentially be exploited further in real-world implementations.
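Using the generalized metric, the natural period of a schedule can be detected by scanning offsets and taking the k that minimizes complexity; a sketch re-using complexity and sched from the example above:

    def natural_period(schedule, max_k=14):
        """Offset k in [1, max_k] at which Complexity(pi, k) is minimized,
        i.e., the schedule's dominant periodicity in days."""
        return min(range(1, max_k + 1), key=lambda k: complexity(schedule, k))

    print(natural_period(sched))   # -> 7 for the weekly toy schedule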


That said, we can also generate more easily-implementable strategies from evolved D&E models by allowing them to modify their prescriptions only every k days. We took the same models evolved for daily prescription and allowed them to make prescriptions only every 7 days. FIGS. 7a-7h illustrate the full policies for the highest-complexity D&E model, but with weekly prescription (as compared to the daily prescriptions in FIGS. 5a-5h). The generated policies are much more in line with what is seen in the real world. The results show that the evolved models generalize to a constrained, more practical setting without further training or evolution, demonstrating the robustness of the evolved policies.


For comparison, FIGS. 6a-6h are plots showing the full policies for the same 8 Geos for the highest-complexity model submitted to the challenge. This model exhibits some interesting behavior: some of the IPs for some Geos change rapidly and are eventually turned off (darker colors indicate increased stringency), but the overall complexity is less than that of the most complex D&E models (FIGS. 5a-5h).


Next, we look at how consistent the results are over multiple runs. To measure the contribution of individual models, we analyze the ancestry of individuals on the final Pareto front of D&E. For each distilled model π̂i, we compute the number of final-Pareto-front individuals that have π̂i as an ancestor, and the percentage of genetic material on the final Pareto front that originally comes from π̂i. Formally, these two metrics are computed recursively. Let Parents(π) be the parent set of π in the evolutionary tree. Individuals in the initial population have an empty parent set; individuals in later generations usually have two parents, but may have only one if the same parent is selected twice during the weighted random selection step. Let F be the set of all individuals on the final Pareto front. Then,










Ancestors(π) =
    ∅,  if Parents(π) = ∅;
    (∪π′∈Parents(π) Ancestors(π′)) ∪ Parents(π),  otherwise,  (13)







with





ParetoContributionCount(π) = |{π′ ∈ F : π ∈ Ancestors(π′)}|,  (14)


and the percentage of ancestry of π due to π′ is











AncestryPercentage_π′(π) =
    0,  if Parents(π) = ∅ and π ≠ π′;
    1,  if Parents(π) = ∅ and π = π′;
    (1/|Parents(π)|) Σπ″∈Parents(π) AncestryPercentage_π′(π″),  otherwise,  (15)








with









ParetoContributionPercentage(π′) = (1/|F|) Σπ∈F AncestryPercentage_π′(π).  (16)







It turns out that ParetoContributionCount and ParetoContributionPercentage are highly correlated (Spearman correlation of <correlation> over all distilled models), which is an encouraging indication that they are measuring the underlying contribution of the initial models.
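A sketch of these ancestry computations, assuming each individual records its parent tuple (an assumed data layout); memoization keeps the recursion linear in the size of the evolutionary tree:

    from functools import lru_cache

    def make_ancestry_metrics(parents):
        """parents: dict mapping individual id -> tuple of parent ids
        (empty tuple for the initial population)."""

        @lru_cache(maxsize=None)
        def ancestors(pi):                        # Eq. (13)
            found = set(parents[pi])
            for p in parents[pi]:
                found |= ancestors(p)
            return frozenset(found)

        @lru_cache(maxsize=None)
        def ancestry_pct(source, pi):             # Eq. (15)
            if not parents[pi]:
                return 1.0 if pi == source else 0.0
            return sum(ancestry_pct(source, p)
                       for p in parents[pi]) / len(parents[pi])

        def contribution_count(source, front):    # Eq. (14)
            return sum(source in ancestors(pi) for pi in front)

        def contribution_pct(source, front):      # Eq. (16)
            return sum(ancestry_pct(source, pi) for pi in front) / len(front)

        return contribution_count, contribution_pct

    # Toy tree: d1, d2 distilled; c is their child; g is a child of c and d1.
    count, pct = make_ancestry_metrics(
        {"d1": (), "d2": (), "c": ("d1", "d2"), "g": ("c", "d1")})
    print(count("d1", ["g"]), pct("d1", ["g"]))   # -> 1 0.75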



FIGS. 9a and 9b illustrate the consistency of contribution. By both contribution metrics, the contributions follow a long-tailed distribution, with the top twenty models having an outsized impact on the final Pareto front. However, every distilled model makes at least some contribution to a Pareto front, showing that although some models contribute more, evolution is able to exploit some knowledge from every human-developed solution, unifying them all into a single innovation system. We conclude that both the overall performance of the method and the contributions of individual distilled models to that performance are consistent across independent runs, demonstrating that the approach is a reliable way to unify and expand the contributions of human-developed models.


Finally, we analyze the process of evolution itself. One may wonder whether distilling prescriptors into neural networks with gradient descent results in sets of models that can be meaningfully recombined with weight-level recombination operators. It turns out that, despite no explicit bias towards evolvability in the distillation process, the distilled models indeed recombine in ways that generally preserve locality and match the intuition for how their phenotypes should manifest. FIGS. 10a-10d illustrate exemplary crossovers and ancestries of D&E models. FIG. 10a is a heatmap showing that the crossover operator is coherent. Overall, the evolutionary process behaves as expected, in a highly structured manner, with child models reliably falling between their parents along the case-stringency tradeoff. Several complete ancestries of models on the D&E Pareto front are shown in FIGS. 10b-10d. In FIG. 10b, a high-stringency lineage and a lower-stringency lineage develop separately before finally producing a novel, balanced final policy. The ancestry of FIG. 10c starts with some of the least stringent distilled models, combining with high-stringency ones towards the end to fill another hole in the Pareto front. The ancestry of FIG. 10d has substantial overlap with that of FIG. 10c, but the stochasticity of the procreation module leads to a distinct innovative solution.


The ancestries vary in complexity and are generally sensible, showing how evolution can discover novel behavior throughout the trade-off space by combining previous models. Although there is a correlation between the performance of teams of expert models and their contribution to the final front, some teams with unimpressive quantitative performance in their submissions end up making outsized contributions through the evolutionary process. This result highlights the value of soliciting a broad diversity of expertise, even if some of it does not have immediately obvious practical utility. AI can play a role in realizing this latent potential.


The heat map of FIG. 10a supports the observation that evolution matches our intuition of how behavior should evolve, even though evolution operates indirectly in the space of neural network weights, not in the space of IP schedules. In other words, evolution naturally picks up the knowledge of the distilled models, and pushes them further in performance and into unexplored regions in a consistent and coherent way.


Accordingly, not only does D&E yield high-performing models, but it continues the process of innovation in a meaningful and intuitively useful way from where the humans left it.


The foregoing description is a specific embodiment of the present disclosure. It should be appreciated that this embodiment is described for purpose of illustration only, and that those skilled in the art may practice numerous alterations and modifications without departing from the spirit and scope of the invention. It is intended that all such modifications and alterations be included insofar as they come within the scope of the invention as claimed or the equivalents thereof.


Though the specific application of the D&E approach described herein addresses the current COVID-19 pandemic response policy, this is but one exemplary application. The specific example of the technology for the COVID-19 pandemic should make it faster/easier to apply the framework to future pandemics should they arise. More generally, such technology should be applicable to any policy decision-making problem where the objectives (costs and benefits) can be effectively measured, and the space of possible policies can be effectively enumerated. The most immediate generalization may be to other applications in public health, but applying such methods to other global-scale problems such as industrial climate policy (where there are economic costs in some areas and economic benefits in others, not to mention the environmental benefits) is also considered.


Another direction of generalization would be to allow users to explicitly specify constraints as part of the context to the prescriptor, leading to more controllable exploration of practical possibilities. In global-scale problems, it becomes extremely difficult for humans to make fully-informed decisions without relying on some form of artificial intelligence (“AI”) to extract useful information from vast data sources. Methods like D&E can help bridge the gap between human-only decision making and AI-from-data-only approaches, so that global policy makers can start adopting such powerful methods sooner, and take advantage of the powerful possibilities that such technologies illuminate, leading to a more appealing future.


Finally, it is to be noted that though the presently described method has been applied to extend the ESP platform, such distillation, followed by injection into the initial population, could in principle be used to initialize the population of any evolution-based method that evolves functions.


In addition to the solution benefits discussed above, the present embodiments have the benefit of being relatively environmentally friendly compared to other large-scale AI approaches. Any proposed addition to a policymaker's toolkit must be considered for its environmental impact, a concern that is currently top of mind for policy-makers. Fortunately, D&E, as implemented in the present embodiments, has very small energy usage compared to the average real-world deep learning application. This is because D&E does not require gigantic models (with respect to number of parameters) and thus can be run efficiently in parallel on CPUs, avoiding costly GPU consumption. Thus, if energy consumption becomes an even more significant concern for AI methods, approaches like D&E, and ESP more generally, may be one sustainable way forward.


Further, one major barrier to the adoption of AI technologies by policy-makers is trust. How can a policy-maker who is not an AI expert trust a seemingly black-box system? D&E provides an advantage here: if the initial human-developed models are explainable, e.g., derived from human-developed decision rules, interpretable epidemiological models, or simpler machine learning methods, then a policy-maker can trust that the results of D&E are grounded in something sensible, and that the system is not simply finding strange patterns in noisy data. Further, trust can be built by finding rules that explain the actions suggested by the prescriptors. Since the prescriptor NNs are relatively small and shallow, a variety of techniques would be effective here. This is another advantage of the D&E NNs being smaller and shallower than many current deep learning models: they can be effectively audited, a critical property for AI systems maintained by government organizations.

Claims
  • 1. A computer-implemented method for generating optimized prescriptor models for optimal decision making, comprising: generating a set of prescriptor models having a context space and an action space; and distilling each of the prescriptor models into a functional form evolvable with an evolutionary algorithm framework over multiple generations.
  • 2. The computer-implemented method for generating optimized prescriptor models for optimal decision making, as claimed in claim 1, wherein the prescriptor models are differentiable neural networks that are distilled to fit a given context space and action space based on a supervised learning approach.
  • 3. The computer-implemented method for generating optimized prescriptor models for optimal decision making, as claimed in claim 1, further comprising injecting the distilled prescriptor models into the evolutionary algorithm framework to initialize an initial training set.
  • 4. A computer-implemented process for developing optimized prescriptor models for determining optimal decision policy outcomes comprising: building a predictor surrogate model using historical training data to predict an outcome; receiving multiple known model candidates for determining decision policy outcomes, wherein the multiple known models are in one or more formats incompatible with an evolutionary algorithm framework; distilling the multiple known model candidates into a functional architecture that is compatible with the evolutionary algorithm framework; feeding the predictor surrogate model in an evolutionary algorithm framework to train a prescriptor model using evolution over multiple generations, wherein an initial population of candidate prescriptor models includes the distilled multiple known model candidates, and further wherein subsequent generations are evolved based on results of prior generations until a set of optimized prescriptor models are determined.
  • 5. The computer-implemented method of claim 4, wherein the functional architecture is a differentiable neural network.
  • 6. The computer-implemented method of claim 4, wherein the historical training data includes context information (C), actions (A) performed in an included context, and historical C, A, outcome (O) data sets (C, A, O).
  • 7. The computer-implemented method of claim 4, wherein the predictor surrogate model is a machine learning model trained with supervised methods.
  • 8. The computer-implemented method of claim 7, wherein the predictor surrogate model is a neural network.
  • 9. The computer-implemented method of claim 6, wherein one or more elite prescriptor models are iteratively selected and applied to a known policy problem to generate new decision policy data in the format (C, A, O) and the new decision policy data (C, A, O) is iteratively supplied as input to the predictor surrogate model.
  • 10. The computer-implemented method of claim 5, wherein the one or more elite prescriptor models are selected based on a fitness evaluation.
  • 11. The computer-implemented method of claim 4, wherein the candidate prescriptor models are evolved to optimize at least two objectives.
  • 12. A computer-implemented method for automatic discovery of intervention policies (IP) to optimize one or more objectives related to an epidemiological event, comprising: training a predictor model, Pd(C, A) = O, implemented on a processor, the predictor model being configured to receive input training data, the input historical training data sets (C, A, O) including context information (C), actions (A) performed in a given context, and outcomes (O) resulting from actions performed in the given context; establishing an initial population of candidate prescriptor models, said establishing including receiving multiple known model candidates for determining intervention policies, wherein the multiple known models are in one or more formats incompatible with an evolutionary algorithm framework; distilling the multiple known model candidates into a functional architecture that is compatible with the evolutionary algorithm framework; evolving prescriptor models, Ps(C) = A, implemented on a processor, wherein the prescriptor models are evolved over multiple generations using the trained predictor model as a surrogate to evolve a subset of the candidate prescriptor models in the initial population, the evolved prescriptor models being configured to receive context information as input data, wherein the context information includes epidemiological event data; and output actions that optimize the one or more objectives as outcomes corresponding to the received context information, wherein the output actions include implementation of intervention policies.
  • 13. The computer-implemented method of claim 12, wherein the predictor model is a machine learning model trained with supervised methods.
  • 14. The computer-implemented method of claim 13, wherein the predictor model is a neural network.
  • 15. The computer-implemented method of claim 12, wherein the functional architecture is a neural network.
  • 16. The computer-implemented method of claim 12, wherein the one or more objectives include minimizing a number of future epidemiological events and a level of stringency of one or more interventions.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims the benefit of priority to U.S. Provisional Patent Application No. 63/208,277, “System and Method For Generating Improved Prescriptors,” which was filed on Jun. 8, 2021 and which is incorporated herein by reference in its entirety. The following documents are also incorporated herein by reference: U.S. application Ser. No. 16/424,686 entitled SYSTEMS AND METHODS FOR PROVIDING SECURE EVOLUTION AS A SERVICE which was filed on May 29, 2019; U.S. patent application Ser. No. 16/831,550 entitled PROCESS AND SYSTEM INCLUDING AN OPTIMIZATION ENGINE WITH EVOLUTIONARY SURROGATE-ASSISTED PRESCRIPTIONS filed Mar. 26, 2020; U.S. application Ser. No. 16/902,013 entitled PROCESS AND SYSTEM INCLUDING EXPLAINABLE PRESCRIPTIONS THROUGH SURROGATE-ASSISTED EVOLUTION; U.S. patent application Ser. No. 17/355,971 entitled AI BASED OPTIMIZED DECISION MAKING FOR EPIDEMIOLOGICAL MODELING filed Jun. 23, 2021; and Miikkulainen et al., From Prediction to Prescription: Evolutionary Optimization of Nonpharmaceutical Interventions in the COVID-19 Pandemic, IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 25, NO. 2, APRIL 2021. Additionally, one skilled in the art appreciates the scope of the existing art, which is assumed to be part of the present disclosure for purposes of supporting various concepts underlying the embodiments described herein. By way of particular example only, prior publications, including academic papers, patents, and published patent applications listing one or more of the inventors herein are considered to be within the skill of the art and constitute supporting documentation for the embodiments discussed herein.

Provisional Applications (1)
Number Date Country
63208277 Jun 2021 US