Embodiments described herein relate generally to systems and methods for measuring risk associated with a portfolio, and in particular, to systems and methods for compound risk factor sampling with integrated market and credit risk for use in determining a portfolio loss distribution.
Financial institutions, resource-based corporations, trading organizations, governments, and others may employ risk management systems and methods to measure risk associated with portfolios comprising credit-risky instruments, such as for example, the trading book of a bank. Accurately evaluating the risk associated with a portfolio of instruments may assist in the management of the portfolio. For example, it may allow opportunities for changing the composition of the portfolio in order to reduce the overall risk or to achieve an acceptable level of risk to be identified.
Evaluating the risk associated with a portfolio is a non-trivial task, as instruments (e.g. securities, loans, corporate bonds, credit derivatives, etc.) in the portfolio can be of varying complexity, and may be subject to different types of risk. An instrument may lose value due to adverse changes in market risk factors, for example. An instrument may also lose value due to changes in the credit state (e.g. a downgrade) of the counterparty associated with the instrument, for example. Consider, by way of illustration, that the price of a bond generally declines as interest rates rise. Interest rates are examples of market risk factors. Further examples of market risk factors may include equity indices, foreign exchange rates, and commodity prices.
Also consider, by way of illustration, that an AA-rated counterparty associated with an instrument of the portfolio may transition to a credit state with a lower rating (e.g., B) or one with a higher rating (e.g., AAA), resulting in an accompanying decrease or increase, respectively, in the values of its financial obligations. These changes may, in turn, affect the values of the associated instrument. In an extreme case, a counterparty may default, typically leaving creditors able to recover only some fraction of the value of their instruments with the counterparty.
Credit state migrations (e.g. transitions to different credit states) may be determined by evaluating movements of a creditworthiness index calculated for a specific counterparty. The creditworthiness index may be based on values of a number of systemic credit drivers that generally affect all counterparties and of an idiosyncratic credit risk factor associated with the specific counterparty.
The systemic credit drivers may comprise macroeconomic variables or indices, such as for example, gross domestic product (GDP), inflation rates, and country/industry indices. Accordingly, these systemic credit drivers generally provide a credit correlation between different counterparty names in a portfolio. In contrast, each idiosyncratic credit risk factor is a latent variable independently associated with a specific counterparty name in the portfolio. Accordingly, these idiosyncratic credit risk factors may also be referred to as counterparty-specific credit risk factors herein.
In general, changes to market risk factors and systemic credit drivers tend to be correlated (i.e. in statistical terms, the market risk factors and systemic credit drivers are co-dependent, not independent). Accordingly, many modern risk management systems and methods may be expected to employ methodologies that integrate market and credit risk (e.g. by ensuring that such co-dependence is reflected in the computation of risk measures associated with a portfolio) in order to more accurately assess the financial risks associated with portfolios of interest. The importance of approaches that integrate market and credit risk has been further underscored by the advent of the Basel II international banking regulations.
To evaluate risk associated with a portfolio, at least some risk management systems and methods perform simulations in which a portfolio of instruments evolves under a set of scenarios (e.g. a set of possible future outcomes, each of which may have an associated probability of occurrence) over some specified time horizon. The losses (or gains) that a portfolio of interest may incur over all possible scenarios might be represented by a loss distribution. With knowledge of the loss distribution associated with the portfolio, it is possible to compute a risk measure for the portfolio of interest.
However, as it is not possible to determine the exact loss distribution analytically, it may be approximated by an empirical distribution. By way of simulation, under each scenario, an individual loss sample may be generated. The scenario used to generate a given loss sample may represent a certain specific set of market and credit conditions, identified by particular sampled values of market risk factors, systemic credit drivers and/or idiosyncratic credit risk factors defined for the respective scenario.
The loss samples generated under a plurality of scenarios may be used to generate the empirical distribution that approximates the actual loss distribution. Accordingly, it will be understood that the larger the number of scenarios used in the simulation and thus the larger the number of loss samples generated, the more accurate the approximation of the actual loss distribution will be.
Estimates of risk measures associated with the portfolio may then be computed based on the empirical distribution that approximates the actual loss distribution. In this regard, the quality of the estimated measurement of risk will also depend on the number of loss samples generated. It will be understood that the individual loss samples may also be referred to collectively as a “loss sample”, and the number of individual loss samples may be referred to as the size of the “loss sample”.
Some known risk management systems generate loss samples according to a methodology that may be classified as a “simple sampling” approach. In accordance with a “simple sampling” approach, to generate a given loss sample, a corresponding market risk factor sample, systemic credit driver sample, and idiosyncratic credit risk factor sample are generated. In order to integrate market and credit risk, the market risk factors and systemic credit drivers are assumed to evolve in accordance with a pre-specified co-dependence structure. It will be understood that in order to obtain N loss samples using this approach, N market risk factor samples, N systemic credit driver samples, and N idiosyncratic credit risk factor samples will be generated in the simulation for a portfolio of interest. Accordingly, the “simple sampling” approach may be considered to be an example of a “brute force” approach to generating loss samples for the portfolio in the simulation.
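By way of a rough, hypothetical illustration only (the dimensions, covariance values, and placeholder loss function below are invented for exposition and do not reflect any particular embodiment), the “simple sampling” approach might be sketched as follows, with one fresh joint sample of all risk factors per loss sample:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                    # number of loss samples (hypothetical size)
n_x, n_y, n_z = 3, 2, 2     # market risk factors, systemic credit drivers, counterparties

# Pre-specified co-dependence structure between market risk factors and
# systemic credit drivers (here, a single off-diagonal correlation for illustration).
cov = np.eye(n_x + n_y)
cov[0, n_x] = cov[n_x, 0] = 0.4

losses = np.empty(N)
for n in range(N):
    # One joint sample of (X, Y) per loss sample -- the "brute force" part.
    xy = rng.multivariate_normal(np.zeros(n_x + n_y), cov)
    x, y = xy[:n_x], xy[n_x:]
    # Idiosyncratic credit risk factors are independent of (X, Y).
    z = rng.standard_normal(n_z)
    # Placeholder standing in for the costly pricing and credit-transition work.
    losses[n] = x.sum() + y.sum() + z.sum()
```

Note that the expensive step (stubbed out here) runs once per loss sample, so all N joint samples require N full portfolio evaluations.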
Some other known risk management systems generate loss samples according to a methodology that may be classified as a “two-tier” approach. In accordance with a “two-tier” approach, a joint sample of market risk factors and systemic credit drivers is combined with multiple samples of idiosyncratic credit risk factor values to obtain multiple loss samples. In order to integrate market and credit risk, the market risk factors and systemic credit drivers are assumed to evolve in accordance with a pre-specified co-dependence structure. The “two-tier” approach attempts to reduce the number of market risk factor and systemic credit driver samples needed to obtain N loss samples. However, it will be understood that if joint samples of market risk factors and systemic credit drivers are employed, where there is a need to consider a larger number of samples of one type of risk factor (e.g. systemic credit drivers), then a larger number of samples of the other type of risk factor (e.g. market risk factors) will be required.
Yet other known risk management systems do not attempt to integrate market and credit risk when evaluating risk associated with a portfolio. For example, some known risk management systems may derive a loss distribution analytically, ignoring the correlation between changes in market risk factors and systemic credit drivers that exists in reality.
In one broad aspect, there is provided a computer-implemented method for generating an integrated market and credit loss distribution for the purpose of calculating one or more risk measures associated with a portfolio of instruments by performing a simulation, the method comprising at least the acts of: generating N scenarios, said N scenarios defined by N sets of X, Y, and Z values (Xm,Yms,Zmsi) for all m from 1 to M, for all s from 1 to S, and for all i from 1 to I, wherein X, Y and Z comprise a market risk factor process, a systemic credit driver process, and an idiosyncratic credit risk factor process, respectively; and computing N simulated loss samples by simulating the portfolio over the N scenarios over a first time horizon to produce the integrated market and credit loss distribution over the first time horizon; wherein said act of generating N scenarios comprises: for each m from 1 to M, generating a sample, having index m, of a vector Ξ of normal random variables; for each m from 1 to M and for each s from 1 to S, generating a random sample, having index ms, of ΔY from a conditional distribution of ΔY derived from the sample of the vector Ξ having index m and from a co-variance matrix, ΔY being an increment of Y; for each m from 1 to M and for each s from 1 to S and for each i from 1 to I, independently generating a random sample, having index msi, of ΔZ, ΔZ being an increment of Z; and computing said N sets of X, Y, and Z values (Xm,Yms,Zmsi) for all m from 1 to M, for all s from 1 to S, and for all i from 1 to I, wherein Xm is calculated as a value of X at the first time horizon based on a previous value of Xm, at least one function associated with X, and the sample having index m of the vector Ξ, wherein Yms is calculated as a value of Y at the first time horizon based on a previous value of Yms, a function associated with Y, and the random sample having index ms of ΔY, and wherein Zmsi is calculated as a value of Z at the first time horizon based on a previous value of Zmsi, a function associated with Z, and the random sample having index msi of ΔZ.
In another broad aspect, there is provided a computer-implemented method for generating an integrated market and credit loss distribution for the purpose of calculating one or more risk measures associated with a portfolio of instruments by performing a simulation, the method comprising at least the acts of: generating MS scenarios defined by MS sets of X and Y values (Xm,Yms) for all m from 1 to M, and for all s from 1 to S, wherein X and Y comprise a market risk factor process and a systemic credit driver process, respectively; for each of the MS scenarios, analytically deriving a conditional loss distribution FX
Other aspects, embodiments, and features are also disclosed herein.
For a better understanding of the various embodiments described herein and to show more clearly how they may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings in which:
Specific details are set forth herein, in order to facilitate understanding of various embodiments. However, it will be understood by those of ordinary skill in the art that some embodiments may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Furthermore, details of the embodiments described herein, which are provided by way of example, are not to be considered as limiting the scope of the appended claims.
Embodiments described herein relate generally to risk management systems and methods for evaluating risk associated with a portfolio of instruments. Generally, the system (and modules) described herein may be implemented in computer hardware and/or software. The acts described herein are performed on a computer, which comprises at least one processor and at least one memory, as well as other components as will be understood by persons skilled in the art. Accordingly, one or more modules may be configured to perform acts described herein when executed on the computer (e.g. by the at least one processor). Modules and associated data (e.g. instructions, input data, output data, intermediate results generated which may be permanently or temporarily stored) may be stored in the at least one memory, which may comprise one or more known memory or storage devices. The acts performed in respect of a method in accordance with an embodiment described herein may be provided as instructions, executable on a computer, on a computer-readable storage medium. In some embodiments, the computer-readable storage medium may comprise transmission-type media.
It will also be understood that although reference may be made to a “computer” herein, the “computer” may comprise multiple computing devices, which may be communicatively coupled by one or more network connections. In particular, one or more modules may be distributed across multiple computing devices. It will also be understood that certain functions depicted in the example embodiments described herein as being performed by a given module may instead be performed by one or more different modules or otherwise integrated in the functions performed by one or more different modules.
Risk management systems and methods typically evaluate risk associated with a portfolio of instruments by computing one or more risk measures derived from characteristics of a loss distribution F associated with the portfolio. For example, these characteristics of F may comprise the mean of the loss distribution, the variance of the loss distribution and/or a specified quantile value of the loss distribution. Some regulations (e.g. Basel II) may require that a bank hold sufficient capital to offset a maximum loss that can occur with a given probability p, consistent with the bank's desired credit rating. This loss, known as the Value-at-Risk (VaR), equals the p-th quantile lp of the portfolio loss distribution F, where lp=F−1(p).
Due to the complex relationships among, for example, asset prices, exposures, and credit state migrations that affect the instruments of a portfolio, the exact distribution F cannot generally be derived analytically. Rather, it may be approximated by an empirical loss distribution {circumflex over (F)}, which may be obtained by simulating the portfolio under a set of possible future outcomes, or scenarios, to obtain a set of N loss samples to derive the empirical loss distribution. Risk measures may then be computed based on the empirical loss distribution {circumflex over (F)}, which approximates the actual distribution F.
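As a minimal, hypothetical sketch of this estimation step (randomly generated placeholder losses stand in for an actual portfolio simulation), the VaR estimate is simply the p-th quantile of the empirical loss distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder loss samples standing in for N simulated portfolio losses.
loss_samples = rng.standard_normal(100_000)

p = 0.99
# Empirical VaR: the p-th quantile of the empirical loss distribution F-hat.
var_p = np.quantile(loss_samples, p)
```

With standard-normal placeholder losses and this many samples, the estimate lands close to the theoretical 99% quantile of roughly 2.33; with fewer samples, the quantile estimate in the tail becomes markedly noisier, which is the sample-size effect discussed below.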
Referring now to
The degree to which {circumflex over (F)} approximates F, and thus the quality of the associated risk estimates, typically depends on the number of loss samples N (also referred to herein generally as the “sample size”).
Referring now to
The effect of the sample size is especially pronounced when estimating quantiles for p close to 1, which is typical for credit portfolios. The quantiles for p close to 1 lie in the extreme right tail of the loss histograms 20, 22 of
Referring now to
The loss sample computation module 24 may comprise a pricing module 36, a credit transition module 40, and a portfolio aggregation module 46.
Pricing module 36 may be configured to apply one or more pricing functions to the sampled values of market risk factors Xn 30 received as input, and to compute the prices of the financial instruments in the portfolio. The market risk factors jointly determine the prices of all financial instruments in the portfolio. Given the prices, the pricing module 36 may compute a simulated exposure table 38 for each counterparty named in the portfolio. Each simulated exposure table 38 indicates the amounts that would be lost or gained if the respective counterparty transitioned to any one of a number of possible credit states. The pricing module 36 can determine the data for each simulated exposure table 38 either stochastically and/or deterministically. Data for each simulated exposure table 38 can be stored in one or more computer memories or storage devices.
A credit transition module 40 may be configured to receive as input sampled values of systemic credit drivers Yn 32 and sampled values of idiosyncratic credit risk factors Zn 34, and to apply a credit transition model to compute a simulated credit state for each counterparty named in the portfolio. The eventual credit state of a counterparty depends on (a) the values of a subset of systemic credit drivers that are common to all counterparties (e.g. sampled values of systemic credit drivers Yn 32), and on (b) the value of a single credit risk factor unique to that counterparty (e.g. selected from the sampled values of idiosyncratic credit risk factors Zn 34).
The credit transition module 40 may also be configured to compute a numerical creditworthiness index for each counterparty as a weighted sum of the sampled values of systemic credit drivers Yn 32 and one of the sampled values of idiosyncratic credit risk factors Zn 34. For example, a vector of creditworthiness indices W=βY+σZ may be computed, where β is a matrix of factor loadings and σ is a diagonal matrix of residual specific risk volatilities, with Y being a vector comprising sampled values of systemic credit drivers and Z being a vector comprising sampled values of idiosyncratic credit risk factors.
Then each counterparty's simulated credit state may be determined by comparing its associated creditworthiness index to a set of threshold values as determined from a specified matrix of credit transition probabilities 42. In particular, a default for a given counterparty may be deemed to occur when its component value of W falls below a certain pre-determined threshold value, as determined from the matrix of credit transition probabilities 42. Data used to populate the specified matrix of credit transition probabilities 42 may be determined based on historical data. Accordingly, the credit transition module 40 outputs a table of simulated credit states 44 for each counterparty, from which a credit state for each counterparty named in the portfolio can be determined. Data for each table of simulated credit states 44, one per YZ pair, can be stored in one or more computer memories or storage devices.
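The creditworthiness computation W=βY+σZ and the threshold comparison described above might be sketched as follows (the factor loadings, residual volatilities, and threshold values below are all hypothetical, chosen only to illustrate the mechanics):

```python
import numpy as np

rng = np.random.default_rng(2)

n_drivers, n_counterparties = 3, 4
beta = rng.uniform(0.1, 0.5, size=(n_counterparties, n_drivers))   # factor loadings
# Residual specific-risk volatilities chosen so each index has unit variance.
sigma = np.diag(np.sqrt(1.0 - (beta**2).sum(axis=1)))

y = rng.standard_normal(n_drivers)          # sampled systemic credit drivers
z = rng.standard_normal(n_counterparties)   # sampled idiosyncratic factors

w = beta @ y + sigma @ z                    # vector of creditworthiness indices W

# Thresholds derived (hypothetically) from a credit transition probability matrix;
# an index below the lowest threshold corresponds to default (state 0).
thresholds = np.array([-2.33, -1.64, -0.67, 0.67])
credit_states = np.searchsorted(thresholds, w)   # 0 = default, higher = better rating
```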
For each counterparty in the portfolio, a portfolio aggregation module 46 determines a sampled loss from instruments with the specific counterparty. The portfolio aggregation module obtains these counterparty losses using the associated table of simulated credit states 44 (which provides the simulated credit state for each counterparty) in conjunction with the associated simulated exposure table 38 (which indicates the amount that would be lost or gained if a specific counterparty transitioned to any one of a number of possible credit states). In this example, given the credit state of a counterparty, the sample loss from instruments with the counterparty may be looked up in its associated exposure table. The portfolio aggregation module 46 is configured to then compute the aggregate portfolio loss sample Ln 48 as the sum of the losses from counterparties. Generated loss samples can be stored in one or more computer memories or storage devices.
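A minimal sketch of this lookup-and-aggregate step follows (the exposure table and simulated credit states below are invented placeholders, not output of any actual pricing or transition model):

```python
import numpy as np

rng = np.random.default_rng(3)

n_counterparties, n_states = 4, 5

# Simulated exposure table: loss incurred if counterparty c transitions to state s.
# In this hypothetical setup, state 0 (default) carries the largest loss.
exposure = np.sort(rng.uniform(0, 100, size=(n_counterparties, n_states)), axis=1)[:, ::-1]

# One simulated credit state per counterparty (placeholder values).
simulated_states = np.array([0, 2, 4, 1])

# Look up each counterparty's loss for its simulated state, then aggregate.
counterparty_losses = exposure[np.arange(n_counterparties), simulated_states]
portfolio_loss = counterparty_losses.sum()
```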
The inventors recognized that the computational resources (e.g., time and/or memory) required to implement each of the modules shown in
For example, consider that a particular counterparty's credit state may depend on multiple systemic credit drivers, but on only one idiosyncratic credit risk factor. When the credit transition module 40 computes a creditworthiness index for a given counterparty, processing the set of sampled values of systemic credit drivers Yn 32 generally comprises a greater portion of the computational work relative to that required to generate the sampled value of the one idiosyncratic credit risk factor from the set of sampled values of idiosyncratic credit risk factors Zn 34.
More significantly, computing simulated exposure tables 38 from the sampled values of market risk factors Xn 30 requires the pricing module 36 to price all financial instruments in the portfolio. Since the number of instruments of a portfolio of interest may be very large and will typically far exceed the number of counterparties named in the portfolio, and given that pricing is a mathematically intensive procedure (e.g. especially for derivatives), the act of computing simulated exposure tables 38 by pricing module 36 is generally far more computationally expensive than the computing of simulated credit state tables 44 by the credit transition module 40.
Referring now to
It will be understood that the evolution of each risk factor is governed by an appropriate mathematical model. In this example, specific risk factor models 26 govern the evolution of each risk factor 52, 54, 56 over a predetermined time horizon (or in some instances, multiple time horizons). That is, the risk factor models 26 govern how the risk factor sampling module 50 generates samples of risk factor values 28 for each of the risk factors 52, 54, 56 at each time step of the time horizon. By way of example,
Referring now to
Applying the idiosyncratic credit risk factor model 58 results in the generation of an increment value ΔZ(t) 62 from a sample 60 having a normal distribution with mean zero and variance Δt. Increment value ΔZ(t) 62 is added to the risk factor sample Z(t) 64 previously generated for the time step ending at time t to obtain the newly simulated risk factor sample Z(t+Δt) 68. This process is repeated until t+Δt equals the time horizon of the simulation, yielding a “sample path” of risk factor sample values over the time horizon.
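This repeated-increment construction corresponds to a standard Brownian sample path. A minimal sketch (the step size and horizon are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

dt, horizon = 0.25, 1.0
n_steps = int(horizon / dt)

z = 0.0          # Z at the start of the simulation
path = [z]
for _ in range(n_steps):
    dz = rng.normal(0.0, np.sqrt(dt))   # increment dZ(t) ~ N(0, dt)
    z = z + dz                          # Z(t + dt) = Z(t) + dZ(t)
    path.append(z)
```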
Referring now to
The known “simple sampling” approach generally involves generating one sample for each risk factor at each time step (i.e. an evolution from time t to t+Δt). The risk factor model module 92 that implements the “simple sampling” approach attempts to integrate market and credit risk. Market risk factors X(t) and systemic credit drivers Y(t) evolve in a correlated manner as specified by a pre-specified co-variance matrix Σ 70.
As shown at 72, a joint sample of an increment value ΔX(t) 74 and increment value ΔY(t) 76 is generated according to the pre-specified co-variance matrix Σ 70 from the joint distribution of increment value ΔX(t) 74 and ΔY(t) 76, where ΔY(t) 76 is represented by a centered normal distribution. Subsequently, these values are added to the risk factor samples X(t) 78 and Y(t) 80 previously generated at the time step ending at time t, to obtain newly simulated risk factor samples, X(t+Δt) 88 and Y(t+Δt) 90 respectively.
Idiosyncratic credit risk factors are, by definition, independent and therefore they are unaffected by the co-dependence structure Σ 70. The risk factor model module 92 may generate samples of the idiosyncratic credit risk factors, as was described with reference to the risk factor model 58 of
This risk factor model module 92 repeats this process until all required risk factor samples are generated for all (of one or more) time steps, i.e. when t+Δt equals the time horizon for the simulation.
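The joint increment step can be sketched as follows (dimensions and the correlation value are hypothetical; the centered joint normal with co-variance Σ couples the market-factor and credit-driver increments):

```python
import numpy as np

rng = np.random.default_rng(5)

n_x, n_y = 2, 2
dim = n_x + n_y

# Pre-specified co-variance matrix Sigma coupling the increments dX and dY.
corr = 0.5
cov = np.eye(dim)
cov[0, n_x] = cov[n_x, 0] = corr

x = np.zeros(n_x)   # X(t), previous market risk factor sample
y = np.zeros(n_y)   # Y(t), previous systemic credit driver sample

# One joint draw of (dX(t), dY(t)) from the centered normal with covariance Sigma.
dxy = rng.multivariate_normal(np.zeros(dim), cov)
x_next = x + dxy[:n_x]       # X(t + dt)
y_next = y + dxy[n_x:]       # Y(t + dt)
```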
Referring now to
It will be understood that in order to produce N=12 loss samples using a simple sampling approach, N=12 distinct risk factor sampled values for each risk factor 52, 54, 56 are produced by risk factor model module 92. For example, the market risk factor X1 is sampled N=12 times. Then, each of the sampled values for the given market risk factor (i.e. each of the N=12 values for X1) is used only once, along with the other corresponding sampled risk factors (e.g. one of the N=12 values produced for each of X2, X3, Y1, Y2, Z1, Z2 in the example of
Since each risk factor is sampled N=12 times, the N=12 sampled losses (L1 to L12) are independent. Generally, it will also be understood that since N samples of each market risk factor 52 are generated, the loss sample computation module 24 (
By way of example, referring back to
This illustrates that with a “simple sampling” approach, although the joint samples are taken for the samples of market risk factors and systemic credit drivers in accordance with a pre-specified co-dependence structure used in an attempt to integrate credit and market risk, N samples of each risk factor must be generated. This may result in computational and resource inefficiencies, particularly since N sets of simulated exposure tables will need to be generated in the simulation under this approach, and in use, N may be very large.
Typically, the number of loss samples N that can be generated in practice is limited by the availability of computing resources (e.g. time and/or memory). Thus resource and/or time intensive processes act as constraints on the number of loss samples N that may be simulated.
In the development of a second known approach to generating risk factor samples as described below, it was recognized that since the idiosyncratic credit risk factors Zn are independent of the market risk factors Xn and systemic credit drivers Yn, any sample of an idiosyncratic credit risk factor Zk can be combined with a given joint sample of market risk factors and systemic credit drivers (Xn, Yn), while still preserving the required co-dependence structure for market risk factors Xn and systemic credit drivers Yn. It was also recognized that processing the sampled idiosyncratic credit risk factor values Zn to compute creditworthiness indices is generally computationally inexpensive, relative to other processing acts performed when computing loss samples.
Referring now to
In the example of
The multiple idiosyncratic credit risk factor samples Z(t+Δt) 93 may be used with one joint market risk factor sample X(t+Δt) 88 and systemic credit driver sample Y(t+Δt) 90 to obtain multiple loss samples, one for each Z(t+Δt) 93. Although the resultant loss samples are no longer independent as multiple loss samples are generated from the same market risk factor sample X(t+Δt) 88 and systemic credit driver sample Y(t+Δt) 90, they do nevertheless satisfy the weaker technical condition known as m-dependence.
Referring now to
It can be observed that, using this “two-tiered” approach, N=12 portfolio loss samples can be obtained by combining I=4 (118) idiosyncratic credit risk factor samples with each of B=3 (116) joint samples of market risk factors and systemic credit drivers. In this example, four idiosyncratic credit risk factor samples Z(t+Δt) 93 are used with each given market risk factor sample X(t+Δt) 88 and each systemic credit driver sample Y(t+Δt) 90 (
In this example, three groups 102, 104, 106 of sets of risk factor samples are generated. For each group, only one sample of a given market risk factor (e.g. X1 94) and systemic credit driver is generated, and is re-used in combination with each of four samples of the idiosyncratic credit risk factors, to generate four different sets of risk factor samples per group, in this example. Each set of risk factor samples can be used to calculate a loss sample, and accordingly, N=BI=12 loss samples can be generated by this approach in the example as shown.
More specifically, four loss samples (L1 to L4) are generated by, for example, loss sample computation module 24 (
Referring back to
Accordingly, use of the “two-tiered” approach typically results in a reduction in the number of distinct samples of market risk factors and systemic credit drivers required relative to the “simple sampling” approach (e.g. see
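The nesting in the “two-tiered” approach might be sketched as follows (all dimensions, the covariance structure, and the placeholder loss function are hypothetical): only B costly joint samples of (X, Y) are drawn, and each is re-used with I cheap idiosyncratic samples to obtain N=BI loss samples.

```python
import numpy as np

rng = np.random.default_rng(6)

B, I = 3, 4                 # joint (X, Y) samples; idiosyncratic samples per joint sample
n_x, n_y, n_z = 2, 2, 2

# Co-dependence structure of market risk factors and systemic credit drivers.
cov = np.eye(n_x + n_y)

losses = []
for b in range(B):
    # One costly joint sample of (X, Y), re-used I times below.
    xy = rng.multivariate_normal(np.zeros(n_x + n_y), cov)
    x, y = xy[:n_x], xy[n_x:]
    for i in range(I):
        # Fresh, cheap idiosyncratic sample for each loss sample.
        z = rng.standard_normal(n_z)
        # Placeholder for pricing, credit transition, and aggregation.
        losses.append(x.sum() + y.sum() + z.sum())
```

This yields N=BI=12 loss samples from only B=3 joint samples, mirroring the reduction in simulated exposure computations described above.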
Referring now to
The inventors realized, however, that although the “two-tiered” approach provides certain advantages over the “simple sampling” approach, a number of practical limitations may arise with the former approach in certain applications. For example, there may be a limit on the number of idiosyncratic credit risk factor samples that may be employed, i.e. the size of I. It may be observed that beyond a certain point, simply generating more idiosyncratic credit risk factor samples for each joint sample in the “two-tiered” approach as described above is no longer effective for improving the approximation of the loss distribution {circumflex over (F)} for a given portfolio. In particular, if certain counterparties incur significant systemic credit risk (i.e., their eventual credit states depend largely on the systemic credit drivers), then a large number of samples of systemic credit drivers Y would be required in order to accurately approximate the right tail of the computed loss distribution (e.g. see
The inventors also observed that the known “two-tiered” approach does not provide guidance on how a given selection of B and I might impact the quality of risk estimates calculated from a generated loss distribution. In practice, implementations of the “two-tiered” approach typically require B and I to be determined through trial and error.
Compound Risk Factor Sampling and Optimized Sampling Scheme
In accordance with at least one embodiment, a compound risk factor sampling approach is employed in systems and methods described herein. In one broad aspect, compound risk factor sampling is performed that generally comprises conditionally generating multiple samples of systemic credit driver Y for each sample of market risk factor X generated, at each time step of a time horizon for a simulation.
This approach may reduce the number of costly simulated exposure calculations (e.g. generated simulated exposure tables 38 of
Compound Risk Factor Sampling
In at least one embodiment described herein, a compound risk factor sampling approach as described herein is used to generate an integrated market and credit loss distribution for the purpose of calculating one or more risk measures associated with a portfolio of instruments by performing a simulation.
A market risk factor process is denoted as X(t), a systemic credit driver process as Y(t), and an idiosyncratic credit risk factor process as Z(t). In at least one embodiment, each of the processes are vector-valued, with X(t) and Y(t) indexed by the individual scalar risk factors and Z(t) indexed by the counterparty names in the portfolio. The simulation is performed for at least one time horizon, wherein the time horizon comprises at least one time step. Let t and t+Δt be two consecutive simulation times.
For a compound risk factor sampling approach, the following assumptions are made:
Note that Σ will generally depend on t, Δt, X(t), and Y(t), even though this dependence is suppressed in the notation.
Referring now to
In this simplified example, it may be observed that the risk factor model module 142 implements a “two-tiered” approach, since at the end of each time step t+Δt, a single market risk factor sample X(t+Δt) 88 and a single systemic credit driver sample Y(t+Δt) 90 are generated, along with multiple idiosyncratic credit risk factor samples Z(t+Δt) 93.
Referring now to
As shown in
Generally, the risk factor model module 144 conditionally generates risk factor samples for the time step ending at time t+Δt, (i.e. samples X(t+Δt) 88, Y(t+Δt)s 136, and Z(t+Δt)s 93) by generating the increment values Ξ(t) 120, ΔY(t)s 132, and ΔZ(t)s 82 respectively, using the relations derived from the above assumptions:
X(t+Δt)=G−1(G(X(t))+Ht,Δt(Ξ(t)))
Y(t+Δt)=Y(t)+ΔY(t)(for each Y(t))
Z(t+Δt)=Z(t)+ΔZ(t)(for each Z(t)).
The risk factor model module 144 is provided with the predetermined co-variance matrix Σ 124 that defines the joint evolution of market risk factors and systemic credit drivers over time. In at least one embodiment, Σ 124 is a covariance matrix of a random vector (Ξ(t), ΔY(t)) that is conditional on X(t) and Y(t) and is jointly normally distributed.
The risk factor model module 144 generates a sample of a vector Ξ(t) 120 (as defined above) of normal random variables with a distribution N(0,Σ11). This vector Ξ(t) 120 is used to obtain the market risk factor sample X(t+Δt) 88 and conditionally generate the systemic credit driver samples Y(t+Δt)s 136.
Specifically, the risk factor model module 144 obtains a market risk factor sample X(t+Δt) 88 by transforming the random vector Ξ(t) 120 via the above-defined bijective function H_{t,Δt}, conditional on the previously obtained (i.e. at the end of time step t) market risk factor sample X(t) 78. This results in the increment value ΔG(X(t)) 122 (i.e. H(Ξ(t); X(t))), where ΔG(X(t))≡G(X(t+Δt))−G(X(t)). A transformation module 140 may be configured to use the increment value ΔG(X(t)) 122 to obtain X(t+Δt) 88, since X(t+Δt)=G⁻¹(G(X(t))+H(Ξ(t); X(t))). The specific functions used for G and H_{t,Δt} may depend on how the market risk factor process X is modeled. In other words, the market risk factor sample X(t+Δt) 88 is generated based on the sample of the vector Ξ(t) 120 of normal random variables, the model for the market risk factor process X, and the previous market risk factor sample X(t) 78 generated at the end of time step t.
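As an illustrative sketch only (not the embodiment's actual transformation module), suppose X is a scalar Geometric Brownian Motion, so that G = ln and the increment of G is H_{t,Δt}(ξ) = (ν − σ²/2)Δt + ξ with ξ ~ N(0, σ²Δt); the parameter names ν and σ are assumptions for this example:

```python
import numpy as np

def gbm_market_step(x_t, xi, nu, sigma, dt):
    """One conditional update X(t+dt) = G^{-1}(G(X(t)) + H(xi)) for a
    Geometric Brownian Motion, where G = ln and the increment of G is
    H(xi) = (nu - sigma^2/2)*dt + xi, with xi ~ N(0, sigma^2 * dt)."""
    delta_g = (nu - 0.5 * sigma**2) * dt + xi   # H_{t,dt}(xi), i.e. dG(X(t))
    return x_t * np.exp(delta_g)                # G^{-1}(ln X(t) + dG(X(t)))

# Draw one normal increment and advance the sample one time step.
rng = np.random.default_rng(0)
sigma, dt = 0.2, 0.25
xi = sigma * np.sqrt(dt) * rng.standard_normal()
x_next = gbm_market_step(100.0, xi, nu=0.05, sigma=sigma, dt=dt)
```

With a zero increment the update reduces to deterministic drift, which makes the transform easy to sanity-check.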
The risk factor model module 144 generates systemic credit driver samples Y(t+Δt)s 136 conditionally on X(t), X(t+Δt), and Y(t), (or equivalently on X(t), Ξ(t), and Y(t)), by implementing a conditional parameters module 126 and a CBM model 138.
Given the random vector Ξ(t) 120 and the covariance matrix Σ 124, a conditional parameters module 126 computes a conditional mean μ(Ξ(t)) and conditional covariance matrix Σ̃ 128, where:
μ(Ξ(t)) = Σ21Σ11⁻¹Ξ(t)
Σ̃ = Σ22 − Σ21Σ11⁻¹Σ12.
If Σ11 is not invertible, the conditional parameters module 126 may instead use, for example, the Moore-Penrose generalized inverse Σ11⁺ in place of Σ11⁻¹.
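These conditional-normal formulas can be sketched directly; the 3×3 covariance matrix below (two market-factor components and one credit-driver component) is a hypothetical example, and the pseudoinverse coincides with the ordinary inverse whenever Σ11 is invertible:

```python
import numpy as np

def conditional_params(Sigma, k, xi):
    """Split Sigma into blocks for (Xi, dY), with Xi of dimension k, and
    return the conditional mean mu(xi) = S21 S11^{-1} xi and conditional
    covariance S~ = S22 - S21 S11^{-1} S12. A Moore-Penrose pseudoinverse
    is used, which equals S11^{-1} when S11 is invertible."""
    S11, S12 = Sigma[:k, :k], Sigma[:k, k:]
    S21, S22 = Sigma[k:, :k], Sigma[k:, k:]
    S11_inv = np.linalg.pinv(S11)
    mu = S21 @ S11_inv @ xi
    S_tilde = S22 - S21 @ S11_inv @ S12
    return mu, S_tilde

# Hypothetical covariance for (Xi_1, Xi_2, dY).
Sigma = np.array([[1.0, 0.2, 0.5],
                  [0.2, 1.0, 0.3],
                  [0.5, 0.3, 1.0]])
mu, S_tilde = conditional_params(Sigma, k=2, xi=np.array([1.0, 0.0]))

# Draw several conditional increment samples dY ~ N(mu, S~).
rng = np.random.default_rng(1)
dY = rng.multivariate_normal(mu, S_tilde, size=5)
```

The same routine supports drawing many ΔY samples conditional on a single Ξ sample, which is the heart of the two-tiered scheme.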
The conditional parameters 128 (μ(Ξ(t)) and Σ̃) are provided to the CBM model 138, which defines the multi-sample conditional distribution 130 used to generate the multiple increment values ΔY(t)s 132. Specifically, the increment values ΔY(t)s 132 are generated as a multi-sample from the conditional normal distribution N(μ(Ξ(t)), Σ̃).
These increment values ΔY(t)s 132 are combined with the multiple systemic credit driver samples Y(t)s 134 previously generated at the time step ending at time t. This results in multiple systemic credit driver samples Y(t+Δt)s 136, each generated conditionally on the market risk factor sample X(t+Δt) 88.
In addition, the risk factor model module 144 may independently generate multiple idiosyncratic credit risk factor samples Z(t+Δt)s 93, as generally described in relation to
The risk factor model module 144 repeats this process until the steps have been performed for the time step t+Δt that is the last time step of the time horizon. Although only one market risk factor sample is shown being generated in this example, multiple market risk factor samples (M) may be generated at the end of each time step t+Δt, with the systemic credit driver samples generated conditionally on each of the market risk factor samples, as will be explained herein.
The risk factor model module 144 will be further illustrated with a simple example consisting of three risk factors: two market risk factors—an equity value Xe, following a Geometric Brownian Motion, and a mean-reverting interest rate Xr—and a single systemic credit driver Y, following a Brownian Motion:
dXe = vXe dt + σ1Xe dB1
dXr = a[θ − Xr]dt + σ2 dB2
dY = dB3
where v is a constant growth rate, a is the speed of mean reversion, θ is the mean reversion level, σ1 and σ2 are volatilities, and B1, B2, and B3 are correlated standard Brownian motions, with ρij denoting the correlation between Bi and Bj.
The solutions to these stochastic differential equations are given as
Moreover, the increments are given by
Thus we can set (with transposition of matrices denoted by a superscript, “'”, and vectors represented as columns):
Indeed, (Ξ1(t),Ξ2(t),ΔY(t))′ is normally distributed with mean (0,0,0)′ because we can write it in the form
∫_t^{t+Δt} A(s) (dB1(s), dB2(s), dB3(s))′
for a deterministic matrix function A, where A(s) = diag(1, exp(−a[t+Δt−s]), 1).
Using the well-known result
E[∫_t^{t+Δt} φ(s)dBi(s) · ∫_t^{t+Δt} ψ(s)dBj(s)] = ρij ∫_t^{t+Δt} φ(s)ψ(s)ds
for deterministic integrands φ and ψ, the covariance matrix Σ 124 of (Ξ1(t), Ξ2(t), ΔY(t))′ is found to be
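As a numerical check of this construction, the entries Σij = ρij ∫ φi(s)φj(s)ds can be computed by quadrature and compared against their closed forms; the integrands below are read off the diagonal of A(s) (with volatilities normalized to one), and all parameter values are illustrative assumptions:

```python
import numpy as np

# Hypothetical parameters: mean reversion speed a, step length dt,
# and the correlation matrix of B1, B2, B3.
a, dt = 0.5, 1.0
rho = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])

# Integrands from A(s) = diag(1, exp(-a*[t+dt-s]), 1), written in terms
# of u = t + dt - s, which runs over [0, dt].
u = np.linspace(0.0, dt, 20001)
phi = [np.ones_like(u), np.exp(-a * u), np.ones_like(u)]

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Sigma_ij = rho_ij * integral of phi_i * phi_j over the step.
Sigma = np.array([[rho[i, j] * trapezoid(phi[i] * phi[j], u)
                   for j in range(3)] for i in range(3)])

# Closed-form entries for comparison.
e1 = (1.0 - np.exp(-a * dt)) / a
e2 = (1.0 - np.exp(-2.0 * a * dt)) / (2.0 * a)
Sigma_exact = np.array([[dt,             rho[0, 1] * e1, rho[0, 2] * dt],
                        [rho[0, 1] * e1, e2,             rho[1, 2] * e1],
                        [rho[0, 2] * dt, rho[1, 2] * e1, dt]])
```

Agreement between the quadrature and the closed forms confirms the isometry calculation for this toy parameterization.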
Using these illustrative example results, the generation of risk factor samples at each given time step t+Δt (e.g. by risk factor model module 144) reduces incrementally to that for Ξ, ΔY, and ΔZ, as described above in relation to
Referring now to
For illustrative purposes, the example compound risk factor sampling module 200 receives as input three market risk factor processes 202 (X1, X2, X3), two systemic credit driver processes 204 (Y1, Y2), and two idiosyncratic credit risk factor processes 206 (Z1, Z2). The compound risk factor sampling module 200 also receives covariance matrix Σ 124 (
The risk factor model module 144 implements at least one market risk factor model 208 for generating samples for at least one specified market risk factor. In this example, the risk factor model module 144 implements a risk factor model for each of the three market risk factors 202. The models for the market risk factor process may be any of the models described above, such as for example, Brownian motions (with or without drift); Ornstein-Uhlenbeck processes; Hull-White processes; Geometric Brownian motions; and Black-Karasinski processes.
An example market risk factor is X3, and a market risk factor sample Xn3 is generated by the risk factor model module 144.
The risk factor model module 144 further implements CBM models (e.g. CBM model 212) for generating systemic credit driver samples of the systemic credit driver processes 204, as is described in relation to CBM model 138 of
Idiosyncratic credit risk factors 206 are modeled as Brownian motions. For example, samples for idiosyncratic credit risk factor Z1 evolve as is described in relation to
The compound risk factor sampling module 200 further receives a sampling scheme, or a set of parameter values for M 216, S 218 and (optionally) I 220. These parameter values indicate the number M of market risk factor samples, the number S of systemic credit driver samples for each of the M market risk factor samples, and the number I of idiosyncratic credit risk factor samples for each of the S systemic credit driver samples, that are to be generated at each time step of the simulation. Details of how these sample size values M, S, I may be optimally determined will be described herein in accordance with at least one embodiment.
The compound risk factor sampling module 200 uses the resulting set of risk factor samples in defining risk factor scenarios.
Referring now to
At 305, at least a first time horizon for performing the simulation is identified. The time horizon comprises at least one time step, and may comprise a plurality of time steps. Furthermore, a simulation may be performed for multiple time horizons by repeatedly performing 320 to 365 in order to generate risk measures for each time horizon.
At 310, data identifying a market risk factor process X, a systemic credit driver process Y, and an idiosyncratic credit risk factor process Z is received as input. The market risk factor process X is a vector-valued process indexed by individual scalar risk factors, the systemic credit driver process Y is a vector-valued process indexed by individual scalar risk factors, and the idiosyncratic credit risk factor process Z is a vector-valued process indexed by counterparty names in the portfolio of instruments.
The data identifying processes X, Y, and Z comprises, for each process X, Y and Z, a start value or initial value, at least one function representing a model (e.g. Brownian Motions (with or without drift), Ornstein-Uhlenbeck processes, Hull-White processes, Geometric Brownian Motions, or Black-Karasinski processes), and zero or more parameters for the model associated with the respective process.
In addition, at 310, data comprising one or more covariance matrices (e.g. Σ 124) is received. As described above, the one or more covariance matrices define the joint evolution of X and Y over the first time horizon. If the time horizon comprises multiple time steps, one of the one or more covariance matrices is associated with each of the time steps and, accordingly, defines the joint evolution of X and Y over the respective time step.
At 315, a first parameter M, a second parameter S, and a third parameter I are identified. These parameter values define a compound risk factor sampling scheme. Specifically, M defines a desired number of market risk factor samples, S defines a desired number of systemic credit driver samples that are to be generated for each of M market risk factor samples, and I defines a desired number of idiosyncratic credit risk factor samples to be generated for each of S systemic credit driver samples. Accordingly, the sampling scheme will define the desired number of risk factor samples for the time horizon. More particularly, M is a value greater than 0, S is a value greater than 1, and I is a value greater than 0, in at least one embodiment. As shown in
Generally, acts 320 to 350 relate to the generation of N=MSI risk factor scenarios for the time horizon. However, if the time horizon contains multiple time steps, then acts 320 to 345 are repeated until the end of the given time step is also the end of the time horizon identified at 305. In one example embodiment, the time horizon has two time steps, such that acts 320 to 345 will be performed twice in generating the N scenarios for the time horizon.
For ease of reference, the following indexing scheme will be used to refer to particular risk factor samples:
The N=MSI scenarios are defined by N sets of X, Y, and Z values (Xm, Yms, Zmsi) for all m from 1 to M, for all s from 1 to S, and for all i from 1 to I. In one example embodiment, these N scenarios for the time horizon will be generated after performing acts 320 to 345 twice, once for each time step. Acts 320 to 345 will be described generally with reference to a given time step.
At 320, for each m from 1 to M, a sample, having index m, of a vector Ξ(t) (e.g. Ξ(t) 120 of
At 325, for each m from 1 to M and for each s from 1 to S, a random sample, having index ms, of ΔY(t) from a conditional distribution N(μ(Ξ(t)), Σ̃) is generated. The conditional distribution is derived from the sample of the vector Ξ(t) having index m, and from the one or more covariance matrices received at 310. Again, if the time horizon contains multiple time steps, then the covariance matrix used is the one associated with the given time step. As shown in
At 330, for each m from 1 to M and for each s from 1 to S and for each i from 1 to I, a random sample, having index msi, of an increment of Z (ΔZ) is independently generated. The generation of the samples for ΔZ is generally as is described in relation to
At 335, for each of the M samples of the vector Ξ(t), a market risk factor sample Xm, m ∈ {1, 2, . . . , M}, is calculated for a given time step using the sample having the index m for the vector Ξ(t). The market risk factor sample Xm is calculated as is generally described in relation to
At 340, for each of the MS samples of ΔY(t), a systemic credit driver sample Yms, m ∈ {1, 2, . . . , M} and s ∈ {1, 2, . . . , S}, is calculated for a given time step using the ms-th sample of ΔY(t). The systemic credit driver sample Yms is calculated as is generally described in relation to
At 345, for each of the MSI samples for ΔZ, an idiosyncratic credit risk factor sample Zmsi, m ∈ {1, 2, . . . , M}, s ∈ {1, 2, . . . , S}, and i ∈ {1, 2, . . . , I}, is calculated for a given time step using the msi-th sample of ΔZ. The idiosyncratic credit risk factor sample Zmsi is calculated as is generally described in relation to
If the end of the given time step is not the end of the time horizon, then steps 320 to 345 are repeated for the next time step. This may result in the generation of intermediary market risk factor samples, systemic credit driver samples, and idiosyncratic credit risk factor samples, which may be stored in at least one memory and/or at least one storage device.
At 350, N=MSI risk factor scenarios are generated for the time horizon. The N scenarios are defined by N sets of X, Y, and Z values (Xm,Yms,Zmsi) for all m from 1 to M, for all s from 1 to S, and for all i from 1 to I. Note that the values (Xm,Yms,Zmsi) are the samples for a given time step, with the end of the given time step equal to the end of the time horizon. Put another way, the scenarios generated at 350 in at least one embodiment are a result of a simulation performed over the time horizon.
Referring now to
Then, for each market risk factor sample Xm, where m ∈ {1, 2, . . . , M} (e.g. node 410), there are S conditional samples of systemic credit driver Y generated (such as e.g. set 404). This results in a total set of systemic credit driver samples of size MS, or (Ym1, . . . , YmS) for each m ∈ {1, 2, . . . , M} (i.e. S samples of Y per sample of X), and is shown as the second level of the tree.
For each of the market risk factor samples m ∈ {1, 2, . . . , M} and a corresponding systemic credit driver sample s ∈ {1, 2, . . . , S}, there are I idiosyncratic credit risk factor samples generated (such as e.g. set 406). This results in a total set of idiosyncratic credit risk factor samples of size MSI (i.e. I samples for each of the MS market risk factor—systemic credit driver sample pairs), and is shown as the third level of the tree.
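The three-level tree can be sketched with array shapes alone; the linear dependence of Y on X below is only a stand-in for the actual conditional sampling from N(μ(Ξ(t)), Σ̃), and all sizes and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, S, I = 2, 2, 3   # samples per tier (illustrative)

# Tier 1: M market risk factor samples (scalar X for simplicity).
X = rng.standard_normal(M)                          # shape (M,)
# Tier 2: S systemic credit driver samples per market sample
# (the 0.5 coupling is a toy surrogate for conditional sampling).
Y = 0.5 * X[:, None] + rng.standard_normal((M, S))  # shape (M, S)
# Tier 3: I idiosyncratic samples per (market, credit driver) pair.
Z = rng.standard_normal((M, S, I))                  # shape (M, S, I)

# Scenario msi is the triple (X[m], Y[m, s], Z[m, s, i]); N = M*S*I total.
scenarios = [(X[m], Y[m, s], Z[m, s, i])
             for m in range(M) for s in range(S) for i in range(I)]
```

Note how only M distinct market samples appear even though N = MSI scenarios are produced, which is the computational point of the compound scheme.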
Referring back to
Each of the N=MSI loss samples may be denoted as L(Xm, Yms, Zmsi), for given m, s, and i. Using the N=MSI loss samples, the empirical unconditional loss distribution function F̂ may be obtained. The distribution may also be stored. For any loss value l, F̂(l) is the proportion of the simulated loss samples that are less than or equal to l:
F̂(l) = (1/N) Σm Σs Σi 1{L(Xm, Yms, Zmsi) ≤ l}
where 1{ . . . } is the indicator of the event in braces, taking the value 1 if the event occurs, or 0 if the event does not occur.
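This proportion-of-samples definition can be sketched in a few lines (the loss values below are arbitrary illustrative numbers):

```python
import numpy as np

def empirical_cdf(losses, l):
    """F-hat(l): proportion of simulated loss samples <= l."""
    losses = np.asarray(losses, dtype=float)
    return float(np.mean(losses <= l))  # mean of the 0/1 indicators

# Eight hypothetical simulated loss samples.
losses = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
```

For example, five of the eight samples are at most 4.0, so F̂(4.0) = 5/8.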
The empirical unconditional loss distribution function F̂ may then be used to calculate one or more risk measures, which may be used for evaluating risk associated with the portfolio.
Accordingly, at 360, at least one risk measure for the portfolio is calculated from one or more characteristics of the empirical unconditional loss distribution F̂. For example, a risk measure may be one of: a mean, a variance, a value at risk equaling a specified p-quantile, an unexpected loss equaling a specified p-quantile less a mean, and an expected shortfall equaling an expected value of losses that exceed a specified p-quantile, as previously defined.
At 365, the at least one risk measure calculated at 360 is stored and/or output for use in evaluating the risk associated with the portfolio.
In the “two-tiered” approach, joint samples of market risk factors and systemic credit driver samples are taken in a manner that accounts for the correlation between changes in market risk factors and systemic credit drivers. For a desired number of distinct systemic credit driver samples (e.g. an increased number relative to other risk factors may be desired to accurately approximate the loss distribution for certain portfolios), generation of joint samples will require that a corresponding market risk factor sample be generated for each systemic credit driver sample. This also holds for a “simple sampling” approach.
Accordingly, when it is considered necessary to generate a large number of distinct systemic credit driver samples, a correspondingly large number M of distinct market risk factor samples must also be generated when computing sample losses. Computing sample losses for an increased number of distinct market risk factor samples is typically far more computationally expensive than increasing the number of distinct systemic credit driver samples and/or the number of distinct idiosyncratic credit risk factor samples. This may be due in part, for example, to the number of derivative positions of a portfolio that must be valued for each of the distinct market risk factor samples generated.
In contrast, with a compound risk factor sampling approach, it becomes possible to sample market risk factors and systemic credit drivers in a manner that allows the number of distinct market risk factor samples (i.e. M) and the number of distinct systemic credit driver samples (i.e. MS) in generated scenarios to be different. Accordingly, an increase in the number of distinct systemic credit driver samples does not require a corresponding increase in the number of distinct market risk factor samples.
At least one embodiment described herein, as described with reference to
Embodiments of the method 300 described with reference to
The above formula for the empirical unconditional loss distribution F̂ may be rearranged to:
F̂(l) = (1/MS) Σm Σs F̂Xm,Yms(l), with F̂Xm,Yms(l) = (1/I) Σi 1{L(Xm, Yms, Zmsi) ≤ l},
where 1{ . . . } is the indicator of the event in braces, taking the value 1 if the event occurs, or 0 if the event does not occur.
In a variant embodiment, an analytic valuation or approximation for FX
Referring now to
Act 305 is generally as is described in relation to
Acts 320 to 351 are similar to acts 320 to 350 of
At act 352, for each of the MS scenarios defined by MS sets of X and Y values (Xm,Yms) for all m from 1 to M, and for all s from 1 to S, a conditional loss distribution FX
As previously noted, in at least one variant embodiment, an analytic valuation or approximation for FX
Alternatively, by the same independence property, the conditional loss distributions FX
Accordingly, by way of example, the following methods may be employed to calculate the conditional loss distributions, FX
At 353, the unconditional loss distribution F̂ is calculated as a mixture (e.g. the mean) of the MS conditional loss distributions, such that:
F̂(l) = (1/MS) Σm Σs FXm,Yms(l).
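This mixture step can be sketched on a grid of loss values; the two normal conditional distributions below are hypothetical stand-ins for whatever conditional loss distributions are produced analytically or empirically:

```python
import numpy as np
from math import erf, sqrt

def mixture_cdf(cond_cdfs, grid):
    """Unconditional F-hat on a grid of loss values: the equally
    weighted mean of the conditional loss distributions."""
    return np.mean([F(grid) for F in cond_cdfs], axis=0)

def normal_cdf(mu, sd):
    """A conditional loss distribution stand-in: the N(mu, sd^2) CDF."""
    def F(x):
        x = np.asarray(x, dtype=float)
        return 0.5 * (1.0 + np.vectorize(erf)((x - mu) / (sd * sqrt(2.0))))
    return F

# Two hypothetical conditional distributions (one per (m, s) pair).
grid = np.linspace(-5.0, 5.0, 11)
F_hat = mixture_cdf([normal_cdf(-1.0, 1.0), normal_cdf(1.0, 1.0)], grid)
```

By symmetry of the two components, the mixture equals 0.5 at zero loss, and the result is itself a valid (nondecreasing) distribution function.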
Finally, acts 360 and 365 are generally as described with reference to
Referring now to
Embodiments of method 300 as described in
In
At 312, the portfolio of interest is partitioned into a first sub-portfolio and a second sub-portfolio. Only two sub-portfolios are shown for ease of explanation; however, it will be understood that the portfolio may be partitioned into more than two non-overlapping groups in variant embodiments. Generally, for each of the sub-portfolios, MS empirical conditional loss distributions are calculated using any of the previously identified methods, including, for example, MC, LLN, CLT, and convolution via FFT. By way of illustration,
At 313, for the first sub-portfolio, MSI risk factor scenarios for the time horizon are generated. The MSI risk factor scenarios may be generated by, for example, performing the acts 315 to 350 as described with reference to
At 314, MSI simulated loss samples for the first sub-portfolio are computed by simulating the first sub-portfolio over the MSI risk factor scenarios. The simulated loss samples may be generally computed as described with reference to
At 317, for each m ∈ {1, 2, . . . , M} and s ∈ {1, 2, . . . , S}, an empirical conditional loss distribution function, FX
where 1{ . . . } is the indicator of the event in braces, taking the value 1 if the event occurs, or 0 if the event does not occur.
This results in MS conditional loss distribution functions FX
At 319, the risk factor samples obtained in relation to the first sub-portfolio are re-used in the processing of the second sub-portfolio to produce MS risk factor scenarios for the second sub-portfolio. Specifically, MS risk factor scenarios for the second sub-portfolio are defined by MS sets of X and Y values (Xm,Yms) for all m from 1 to M, and for all s from 1 to S obtained for the first sub-portfolio.
At 321, the act performed at 352 as generally described with reference to
At 323, the MS conditional loss distributions FX
At act 354, the unconditional loss distribution F̂ for the portfolio is calculated as a mixture (e.g. a mean) of the MS conditional loss distributions, such that:
Acts 360 and 365 are performed as generally described with reference to
Sample Size Determination
In another broad aspect, systems and methods to facilitate the selection of appropriate risk factor sample size values (e.g. M, S and optionally I) are provided. In at least one embodiment, appropriate values can be automatically selected given a set of performance requirements.
For example, in the context of embodiments described herein with reference to
The primary performance criterion is the variability of the resulting estimates of the one or more risk measures obtained from the empirical loss distribution F̂. Examples of risk measures may include, without limitation: a mean, a variance, a value at risk equaling a specified p-quantile, an unexpected loss comprising a value at risk equaling a specified p-quantile less a mean, and an expected shortfall comprising an expected value of losses that exceed a specified p-quantile, as previously defined.
The VaR lp (the p-th quantile) of the loss distribution F̂ can be estimated from N loss samples by the empirical p-quantile l̂p, which is defined as:
l̂p = L(⌊Np⌋+1)
where L(k) is the k-th order statistic, i.e., the k-th smallest value of the N loss samples.
For example, if N=100, then the 97.5th percentile (p=0.975) is estimated by the k-th order statistic L(k), where k=⌊Np⌋+1=⌊97.5⌋+1=98. In this example, the 97.5th percentile is estimated by the third largest loss of the N loss samples.
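The order-statistic estimator can be sketched as follows (using illustrative loss samples 1 through 100, so the quantiles are easy to read off):

```python
import numpy as np

def empirical_quantile(losses, p):
    """l-hat_p = L_(floor(N*p) + 1): the (floor(N*p)+1)-th order
    statistic, i.e. the k-th smallest of the N loss samples."""
    L = np.sort(np.asarray(losses, dtype=float))
    k = int(np.floor(len(L) * p)) + 1
    return float(L[k - 1])   # k is 1-based; arrays are 0-based

# 100 hypothetical loss samples: 1, 2, ..., 100.
losses = np.arange(1.0, 101.0)
var_975 = empirical_quantile(losses, 0.975)   # k = 98, third largest loss
```

With these losses the 97.5th-percentile VaR estimate is the 98th smallest loss, 98.0, matching the worked example above.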
As the size N of loss samples becomes large, the sample quantile l̂p of an m-dependent sequence has variance Var(l̂p) defined as follows:
where f is the probability density of the loss distribution.
Using the Law of Total (Conditional) Variance, it can be shown that:
for appropriate coefficients v10, v20 and v30.
Defining Var(F̂(lp)) ≡ ρ², the following variance decomposition result is obtained.
Proposition 1. There are nonnegative constants v10, v20, v30, which do not depend on M, S, I, such that
ρ² = v10/M + v20/(MS) + v30/(MSI).
It will be understood that the last term is absent for embodiments applying a pure analytic technique (see e.g.
v10 = Var(E[FX,Y(lp)|X]),  v20 = E[Var(FX,Y(lp)|X)],
with the expression for v30 depending on the particular technique.
For a pure MC method, the term v30 is defined as:
v30 = p − E[{FX,Y(lp)}²].
The term v30 is not applicable for the pure analytic method.
For an analytic-MC hybrid method, let FX,Y^A denote the conditional loss distribution for the part of the portfolio using analytic methods and let FX,Y^MC denote the conditional loss distribution for the part of the portfolio using the MC method. Thus FX,Y = FX,Y^A * FX,Y^MC, where * is the convolution of cumulative distribution functions such that (FX,Y^A * FX,Y^MC)(l) = ∫ FX,Y^A(l−l′) dFX,Y^MC(l′).
Then, in the hybrid case, the term v30 is defined as:
v30 = E[((FX,Y^A)² * FX,Y^MC)(lp)] − E[{FX,Y(lp)}²]
where (FX,Y^A)² is treated as a cumulative distribution function and * again denotes the convolution of cumulative distribution functions.
Formally, the analytic case is just the MC case with I set to ∞.
Therefore, the variance of the estimated p-quantile (i.e. the estimated VaR) is related to the risk factor sample sizes as follows:
Var(l̂p) ≈ (v10/M + v20/(MS) + v30/(MSI))/f(lp)².  (Equation 2a)
In practice, the values of the coefficients v10, v20, v30 and the density f(lp) are estimated from an initial pilot simulation with M, S and I chosen to be large.
Once these values have been obtained (e.g. by a pilot simulation module 545 of
In summary, determining a desired sampling scheme generally involves identifying an acceptable variance level for a risk measure and computing the variance of estimates of the selected risk measure. Finally, M, S and I are determined such that the variance is within the acceptable variance level.
For example, if the risk estimate is the VaR, then the variance of that particular risk measure may be computed using Equation 2a. Then M, S and I are determined such that the variance of the estimated VaR is within an acceptable tolerance level.
As a further example, the mean of the loss distribution can be estimated from N=MSI sampled losses by the sample mean
Similar to the estimated p-quantile, the variance of the sample mean can be expressed as:
Var(μ̂) = v10/M + v20/(MS) + v30/(MSI)  (Equation 2b)
for appropriate coefficients v10, v20 and v30. In this case, the coefficients are given by
v10 = Var(E[L(X,Y,Z)|X]),
v20 = E[Var(Λ(X,Y)|X)], where Λ(X,Y) = E[L(X,Y,Z)|X,Y],
and
v30 = E[Var(L(X,Y,Z)|X,Y)].
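These coefficient formulas can be illustrated with a toy pilot simulation. For a hypothetical loss L(X,Y,Z) = X + Y + Z with independent standard normal factors, v10 = Var(X) = 1, v20 = Var(Y) = 1, and v30 = Var(Z) = 1, and nested sampling recovers them approximately (the loss function and all sample sizes are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)
M, S, I = 4000, 50, 20   # nested pilot sample sizes (illustrative)

# Toy loss L = X + Y + Z; broadcasting builds the (M, S, I) nested tree.
X = rng.standard_normal((M, 1, 1))
Y = rng.standard_normal((M, S, 1))
Z = rng.standard_normal((M, S, I))
L = X + Y + Z                                    # shape (M, S, I)

# v10 = Var(E[L|X]): variance across m of the per-m average loss.
v10_hat = L.mean(axis=(1, 2)).var(ddof=1)
# v20 = E[Var(Lambda(X,Y)|X)] with Lambda(X,Y) = E[L|X,Y]: average
# across m of the variance across s of the per-(m,s) mean.
v20_hat = L.mean(axis=2).var(axis=1, ddof=1).mean()
# v30 = E[Var(L|X,Y)]: average per-(m,s) variance across i.
v30_hat = L.var(axis=2, ddof=1).mean()
```

The inner averages are noisy estimates of the true conditional expectations, so v10_hat and v20_hat carry a small upward bias of order 1/S and 1/I respectively; a real pilot run would likewise use large sizes to suppress it.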
If the MS conditional loss distributions FX
where μ̂ms ≡ Λ(X,Y) using the notation above. In this case, the values of v10 and v20 are the same as for the sample mean, while v30=0.
As noted previously, in practice the number of risk factor samples that can be generated may be limited by computational resource and/or time constraints. For example, since banks typically assess risk on a daily basis, there may be an 8-hour window for completing the simulation. It is possible to use an expression for the variance of the desired estimator (e.g. Equation 2a or 2b, for risk measure VaR and mean respectively) in conjunction with such constraints to obtain an optimal sampling scheme (e.g. a set of sample sizes M, S and I) that minimizes the variability of risk estimates while satisfying constraints on resources and/or time.
Suppose that a time window of length T is available for the simulation and that the processing times for the various types of risk factor samples are cM per market risk factor sample, cS per systemic credit driver sample, and cI per idiosyncratic credit risk factor sample.
These processing times may be received as input (e.g. via input module 540) and/or obtained or computed otherwise prior to determining the sampling scheme.
The optimal sampling scheme may be obtained by solving the following optimization problem:
minimize v10/M + v20/(MS) + v30/(MSI) subject to cM·M + cS·MS + cI·MSI ≤ T.  (Equation 3a)
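A brute-force sketch of one plausible form of this problem (minimizing the decomposed variance subject to the time budget, with per-sample costs cM, cS, cI as above; the exact objective used by the embodiment is an assumption here) is:

```python
import itertools

def optimal_scheme(v10, v20, v30, cM, cS, cI, T, max_n=100):
    """Search (M, S, I) minimizing the variance proxy
    v10/M + v20/(M*S) + v30/(M*S*I) subject to the time budget
    cM*M + cS*(M*S) + cI*(M*S*I) <= T."""
    best, best_var = None, float("inf")
    for M, S, I in itertools.product(range(1, max_n + 1), repeat=3):
        # Total cost of generating M + MS + MSI samples of each type.
        if cM * M + cS * M * S + cI * M * S * I > T:
            continue
        var = v10 / M + v20 / (M * S) + v30 / (M * S * I)
        if var < best_var:
            best, best_var = (M, S, I), var
    return best, best_var

# Illustrative pilot-estimated coefficients and per-sample costs.
scheme, est_var = optimal_scheme(v10=1.0, v20=1.0, v30=1.0,
                                 cM=10.0, cS=1.0, cI=0.1, T=1000.0)
```

In practice a solver or the problem's closed-form structure would replace the grid search; the sketch simply shows how the budget trades expensive market samples against cheap credit and idiosyncratic samples.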
If no sampling of Z is performed, as is the case with analytic methods (see e.g.
Referring now to
Risk factor simulation system 500 generally comprises input data modules 540 to support the loading and managing of large amounts of information obtained from various data sources as input (e.g. internal applications, internal data sources, external data sources, market sources, instrument sources). Input data modules may receive data identifying a market risk factor process X, a systemic credit driver process Y, and an idiosyncratic credit risk factor process Z, for example. Again, X may be a vector-valued process indexed by individual scalar risk factors, Y may be a vector-valued process indexed by individual scalar risk factors, and Z may be a vector-valued process indexed by counterparty names in the portfolio of instruments. The data identifying processes X, Y, and Z comprises, for each process X, Y and Z, a start value, at least one function representing a model, and zero or more parameters for the model associated with the respective process.
Input data modules 540 may also receive data comprising one or more co-variance matrices that define the joint evolution of X and Y over the first time horizon, or over a given time step in the event the time horizon comprises multiple time steps.
Input data modules 540 may also receive data indicating a predetermined time period T over which to perform the risk factor simulation (e.g. time T 516 of
The data received by input data modules 540 may be stored in, for example, a database 550 (internal or external), which may be implemented using one or more memories and/or storage devices, for access by other system 500 modules. In addition, other data generated and/or utilized by the system 500 modules may be stored in database 550 for subsequent retrieval and use.
The risk factor simulation system 500 further comprises an initial pilot simulation module 545 for estimating values for the coefficients v10, v20, v30 and the probability density of the loss distribution f(lp) 512. The initial pilot simulation module 545 selects large values for M, S and I and runs an initial pilot simulation using the system 500 to obtain the pilot simulation loss distribution F̂. The coefficients v10, v20, v30 510 and the density f(lp) 512 are then estimated from the pilot simulation loss distribution F̂.
The main components of risk factor simulation system 500 (
The optimized sampling scheme module 502 receives the initially estimated coefficients v10, v20, v30 510 and the density f(lp) 512 from initial pilot simulation module 545. The optimized sampling scheme module 502 may also receive additional data, for example, from database 550 or input module 540, such as the time T 516 available for performing the simulation and the processing times cM, cS, cI 514 for generating each of the risk factor samples.
The optimized sampling scheme module 502 is configured to solve one or more predefined optimization problems, such as e.g. Equation 3a, to compute parameters for the optimal sampling scheme (M, S, I) 508. Other optimization problems relating M, S, and (optionally) I to the variability of the selected risk measure(s) may alternatively be implemented in variant embodiments.
For example, in the event that an analytic technique is used to derive the unconditional loss distribution, such as is described with reference to
In addition, the optimization module 502 may receive other performance related data, such as a performance level parameter indicating a required maximum level of variability for one or more risk measures. The optimization module 502 may use such data to identify a maximum acceptable variance level for at least a selected one risk measure.
The optimization module 502 is configured to compute a variance of estimates of the selected one risk measure, as described herein. Finally, the optimization module 502 determines values for M, S and, optionally, I, such that the variance is within the acceptable variance level (e.g. by evaluating Equation 2a and/or 2b).
Further, the optimization module 502 may be configured to evaluate Equations 2a and/or 2b in conjunction with solving an optimization problem (e.g. Equation 3a and/or 3b) to obtain an optimal sampling scheme 508 that provides an acceptable level of variability as indicated by a specified performance level. For example, for p=0.999, Var(l̂p), the variance of the p-quantile (or VaR) provided by Equation 2a, may be required to be no greater than the specified performance level considered acceptable.
For illustration purposes, in this example, optimized sampling scheme module 502 (
The optimized sampling scheme module 502 provides data identifying the optimal sampling scheme 508 to the compound risk factor sampling module 200. The compound risk factor sampling module 200 generally implements, for example, acts 320 to 350 of
Referring to
Comparing the risk factor samples 504 illustrated in detail in
Referring to
Specifically, the three risk factor subset of the compound risk factor sample 504 is illustrated as a three level tree (as in
Referring back to
The MSI=N=12 simulated loss samples 506 may then be provided to a loss distribution module 528. The loss distribution module 528 may be configured to determine an empirical unconditional loss distribution F̂ based on the simulated loss samples 506, as may be generally described with reference to act 355 of
Alternatively, the compound sampling module 200 may provide MS risk factor scenarios (defined by the set of risk factor samples) directly to the loss distribution module 528. The loss distribution module 528 may be configured to perform acts 352 and 353 of
Further, in the event the portfolio is partitioned into two sub-portfolios for example (as is described in relation to
Finally, a risk measure module 530 is configured to determine at least one risk measure using at least one characteristic of the approximate loss distribution. Example risk measures may include, without limitation: the mean, the variance, the VaR (the p-quantile), unexpected loss, and expected shortfall. The one or more computed risk measures may be used to evaluate risk associated with the portfolio of interest, which integrates credit and market risk. The risk measure may be stored (in e.g. database 550) and/or output by the risk factor simulation system 500, for further use.
The compound risk factor sampling scheme described herein may be extended to encompass other portfolio risk model variations, in variant embodiments.
What has been described herein is merely illustrative of a number of example embodiments. Other configurations, variations, and arrangements of the systems and methods may be implemented by those skilled in the art without departing from the spirit and scope of the embodiments described herein as defined in the appended claims.
This application is a divisional of prior U.S. patent application Ser. No. 12/026,781, filed on Feb. 6, 2008, the entirety of which is hereby incorporated by reference.
References Cited: U.S. Patent Application Publication No. 2009/0198629 A1, De Prisco et al., August 2009.
Publication: US 2011/0119204 A1, May 2011, US.
Related U.S. Application Data: Parent U.S. application Ser. No. 12/026,781, filed February 2008; child U.S. application Ser. No. 13/011,553.