Operational due diligence relates to various aspects of assessing the running of a business to mitigate risk to clients as well as members of the organization in the area of operations. For investment entities, such as investment funds, private equity funds, infrastructure funds, and hedge funds, operational due diligence aspects can include an assessment of an investment vehicle manager's practices in the general areas of governance, technology and cyber security, vendor management, trade settlement, and back office functions.
Traditionally, investment vehicle managers have been presented with periodic (e.g., annual) due diligence questionnaires, such as a paper form or electronic document including a series of questions related to the different due diligence aspects of the manager's practice. Upon return of the questionnaire, which may take a matter of weeks, an initial review of the questionnaire is conducted to identify any areas requiring clarification or expansion of the provided responses. Once the questionnaire is deemed complete, a reviewer reads through the provided responses, often in sentence format, and identifies areas of risk, generating a summary of the reviewer's findings and an overall assessment, often including a rating. This individualized process is time consuming, expensive, and highly subjective. For example, for thousands of dollars and a matter of months' lead time, a client may receive information regarding an identified manager. However, most clients' portfolios involve many managers, compounding the expense and drawing the time lag out even further. To reduce costs, clients have opted to rotate through the various managers in their portfolio or to skip some managers rather than conducting full periodic reviews.
Conversely, managers may be requested to fill out a number of questionnaires provided by different clients, the vast majority of each questionnaire including duplicate or overlapping questions, because no standardized mechanism exists for conducting operational assessments of investment vehicle managers. Because the investment vehicle managers employ a number of individuals, different surveys may be filled in differently simply based upon who is filling out which questionnaire, since fill-in-the-blank questions leave much room for interpretation and breadth/specificity of answer. Thus, each client may obtain a somewhat different view of potential risk from a same manager.
The inventors recognized a need for a faster, less expensive, and more objective system for assessing investment vehicle managers' operations.
In one aspect of the present disclosure, systems and methods for conducting automated or semi-automated operational due diligence (ODD) reviews of investment vehicle management organizations provide a data-driven approach to present objective comparisons between the investment vehicle management organizations. The objective comparisons may allow for more consistent decision making and optimized resource allocation. Further, the data-driven automated approach should increase efficiency, thereby decreasing cost and increasing speed of ODD reviews through improved data collection and automated reporting capabilities.
In some embodiments, survey questions presented to investment vehicle management organizations and corresponding answer options provided to the investment vehicle management organizations for responding to the survey questions are organized in a data format designed to streamline the collection and report writing aspects of the ODD review process. Since the answer data including the answer option selections is collected electronically, responses provided by the various investment vehicle management organizations can be analyzed and compared to develop market intelligence and benchmarking information across a range of operational risk factors.
In some embodiments, the objectivity of the analyzed results lies in part in presenting the information without weighting, ranking, or otherwise subjectively sorting the risk factors. For example, if a particular investment vehicle manager answers a question in a manner not conforming to what is considered a “best practice” in risk mitigation, the risk factor corresponding to the question may be highlighted. In a subjective analysis, it was difficult to derive the severity of risk associated with any particular risk factor in comparison to other risk factors, leading potentially to poor decision making based upon familiarity with a particular risk factor or past experience with the particular risk factor causing a subjective weighting in the mind of an evaluating organization and/or with the reviewer of an ODD report. When, instead, comparisons are made between fixed response answers of a large group of managers, industry trends are uncovered, identifying which best practices are adopted by a majority of investment vehicle managers and which best practices, while being best practices in an academic sense, have not gained traction industry-wide. As an illustrative example, when a particular investment vehicle manager responds that it does not require multi-factor authentication for remote access to its computing systems, that response can be compared across potentially hundreds of other managers to determine the commonality of that particular response. This results in a fact (e.g., percentage industry adoption) rather than a subjective opinion (e.g., investment vehicle managers ought to require multi-factor authentication). If there is a lack of industry adoption, there may be an underlying reason for this discrepancy (e.g., common investment vehicle manager software platforms are not designed to support multi-factor authentication). Conversely, if there is widespread adoption of a certain practice, the factual data enabled by the systems and methods described herein can be used as an impetus to direct the non-conforming managers to update their risk mitigation practices. Thus, the systems and methods described herein provide a technical solution to the lack of survey participant visibility into the feasibility and/or importance of applying certain risk mitigation corresponding to a risk discovered by a prior art operational risk due diligence survey.
The survey questions, in some embodiments, represent a risk inventory of a variety of types of risks. Certain portions of the risk inventory relate to how the investment vehicle manager applies best practices to firm management, such as technology practices, accounting practices, and human resources practices. Other portions of the risk inventory may be applicable to a particular investment vehicle manager depending upon the type(s) of investment strategies offered by the investment vehicle manager and/or the structure of the investment vehicle. As additional risk topics are added to the risk inventory, the automated methods and systems described herein are designed to scale and accommodate topic expansion as well as, if applicable, audience expansion to additional types of investment vehicle managers. In illustration, although ODD began primarily as a hedge fund due diligence effort, over time ODD review has migrated into traditional strategies and, most recently, into private market strategies like real estate and venture/private equity. Thus, the systems and methods described herein, although largely illustrated in relation to public market strategies, are equally applicable to private market strategies. Accordingly, the survey structure and architecture provides a technical solution to the problem of easily updating risk surveys to comport with changes in best practices while providing continuity in trend analysis among participants.
The systems and methods described herein are additionally designed to provide more frequent analysis of investment vehicle managers. Through increases in efficiency afforded through the data-driven, automated answer collection process and automated analysis thereof, investment vehicle managers may be periodically monitored to confirm, after initial investment by a client with the investment vehicle manager, that the manager has kept pace with a changing technology environment. Further, prior responses from a particular manager may be maintained and reviewed to assess whether a particular investment vehicle manager has ceased to exhibit previously applied best practices. These reassessments may take place, in some examples, annually, semi-annually, or quarterly.
In one aspect, systems and methods described herein establish consistent and objective analysis appropriate to audit support in a manner not before available. The analysis results, for example, may be shared with regulators or internal audit functions for consistent and comprehensive analysis of risk behaviors of investment vehicle managers.
Systems and methods for objectively assessing operational due diligence (ODD) of an investment vehicle manager include providing, to each of a population of managers, an electronically-fillable questionnaire including a number of questions regarding risk factors, each risk factor belonging to one of a number of practice aspects, each question requiring selection from a number of standardized answer options. The answers collected from the number of managers may be combined to identify a propensity for exhibiting each of the number of risk factors across portions of the manager population. Each manager may be benchmarked against the propensity of manager population(s) and/or a peer group thereof to provide an objective assessment of manager performance and, in combination, portfolio performance in relation to real world common practices. Results of analysis and benchmarking may be provided in an interactive report for review.
The foregoing general description of the illustrative implementations and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure, and are not restrictive.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate one or more embodiments and, together with the description, explain these embodiments. The accompanying drawings have not necessarily been drawn to scale. Any values or dimensions illustrated in the accompanying graphs and figures are for illustration purposes only and may or may not represent actual or preferred values or dimensions. Where applicable, some or all features may not be illustrated to assist in the description of underlying features. In the drawings:
The description set forth below in connection with the appended drawings is intended to be a description of various, illustrative embodiments of the disclosed subject matter. Specific features and functionalities are described in connection with each illustrative embodiment; however, it will be apparent to those skilled in the art that the disclosed embodiments may be practiced without each of those specific features and functionalities.
Reference throughout the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with an embodiment is included in at least one embodiment of the subject matter disclosed. Thus, the appearance of the phrases “in one embodiment” or “in an embodiment” in various places throughout the specification is not necessarily referring to the same embodiment. Further, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. Further, it is intended that embodiments of the disclosed subject matter cover modifications and variations thereof.
It must be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context expressly dictates otherwise. That is, unless expressly specified otherwise, as used herein the words “a,” “an,” “the,” and the like carry the meaning of “one or more.” Additionally, it is to be understood that terms such as “left,” “right,” “top,” “bottom,” “front,” “rear,” “side,” “height,” “length,” “width,” “upper,” “lower,” “interior,” “exterior,” “inner,” “outer,” and the like that may be used herein merely describe points of reference and do not necessarily limit embodiments of the present disclosure to any particular orientation or configuration. Furthermore, terms such as “first,” “second,” “third,” etc., merely identify one of a number of portions, components, steps, operations, functions, and/or points of reference as disclosed herein, and likewise do not necessarily limit embodiments of the present disclosure to any particular configuration or orientation.
Furthermore, the terms “approximately,” “about,” “proximate,” “minor variation,” and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10% or preferably 5% in certain embodiments, and any values therebetween.
All of the functionalities described in connection with one embodiment are intended to be applicable to the additional embodiments described below except where expressly stated or where the feature or function is incompatible with the additional embodiments. For example, where a given feature or function is expressly described in connection with one embodiment but not expressly mentioned in connection with an alternative embodiment, it should be understood that the inventors intend that that feature or function may be deployed, utilized or implemented in connection with the alternative embodiment unless the feature or function is incompatible with the alternative embodiment.
In some implementations, systems and methods described herein assist in identifying and quantifying operational risks within investment managers or specific investment products. The systems and methods rely upon structured survey data including questions linked to bounded and/or limited choice answers to support comparison to other survey takers. The questions may be directed to a variety of risk factors, each risk factor corresponding to one or more policies, procedures, and/or capabilities across an entity's organizational and/or operational structure. Conversely, the limited and/or bounded range of answers for each question may be characterized, using a set of rules, to identify the respondent's selection as being either a preferred (e.g., supportive of best practices) or a non-preferred (e.g., an exception to best practices) response.
In some implementations, the responses to the structured survey data collected from a population of respondents are analyzed to determine the commonality among respondents to fail to follow a best practice (e.g., a propensity to indicate an exception to best practice in response to any given question of the survey questions). In this manner, the systems and methods described herein provide an additional layer of knowledge to participants in the structured survey, assisting participants in recognizing not only divergence from best practices but also deviations from standard market practices. Thus, certain exceptions to best practices as defined in the survey data may, in actual practice, fail to comport with common marketplace risk mitigation practices. Therefore, a participant may utilize both the identification of exceptions to best practices as well as deviations from standard market practices in making internal decisions regarding risk tolerance or underwriting standards.
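By way of a non-limiting illustration, the propensity described above may be computed as the simple proportion of answered responses that indicate an exception to a best practice. The following minimal sketch assumes hypothetical question identifiers and classification labels rather than any particular survey schema:

```python
from collections import Counter

# Hypothetical per-question response classifications across a population of respondents;
# None represents a question left unanswered (no data).
responses_by_question = {
    "Q-GOV-01": ["best_practice", "exception", "best_practice", None, "exception"],
    "Q-CYB-07": ["exception", "exception", "exception", "best_practice", None],
}

def exception_propensity(responses):
    """Return the share of answered responses indicating an exception to best practice."""
    answered = [r for r in responses if r is not None]
    if not answered:
        return None  # no data collected for this question
    return Counter(answered)["exception"] / len(answered)

for question_id, responses in responses_by_question.items():
    print(question_id, exception_propensity(responses))
```

A participant whose answer matches a high-propensity exception may thereby learn that its divergence from the stated best practice is, in fact, common market practice.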
In some implementations, the survey outcome is presented in a report format. The report may be a printed report or an interactive online report providing the user the ability to drill down, sort, and/or filter the information to glean insights. Different report formats may be provided depending upon the participant industry, the end user audience, the participant geography, or other factors. The aforementioned factors, in some embodiments, are used to limit marketplace comparisons from the entire participant population (e.g., the “universe”) to a peer pool of participants similar to a target participant (e.g., in industry, size, geographical region, etc.).
The questions and/or rules may vary over time. For example, as additional cyber security mechanisms are released, previous best practices (e.g., 8-character passwords) may be viewed instead as risk factors in need of upgrading to the latest best practices (e.g., two-level authentication). As more questions and corresponding rules are added, in some implementations, comparisons can be made between participants, and trends can be analyzed across time for a given participant, by accessing partial survey data in a manner that supports an apples-to-apples comparison. For example, the questions, rules, and answer options may be linked within a database or data network structure to maintain associations while the survey data adjusts and expands over time.
Risk analysis surveys are commonly left at least partially blank, with a number of questions unanswered. In some implementations, the systems and methods described herein support comparisons between participants while also identifying sections of missing information. Further, the completeness of survey data of a given participant may be benchmarked against survey completeness across a peer group and/or the universe of participants of the platform. In supporting these comparisons across a marketplace, individual participants may recognize areas needing improvement. Conversely, participants may find competitive advantage in being able to demonstrate high conformance to best practices, risk mitigation that exceeds peer standards, and commitment to survey diligence that exceeds market practice.
Turning to
In some implementations, a survey presentation engine 120 enables automated presentation of an operational due diligence questionnaire to each of the managers 106 to collect information regarding operational risk management applied by the managers 106 in areas of both investment strategy and firm management strategy. In some examples, management-level questions may relate to the due diligence (risk) aspects of governance, technology and cyber security, and back office functions. Conversely, investment strategy-level questions may include a variety of questions relating to a number of investment strategies managed by the manager such as, in some examples, a fixed-income strategy, an equity strategy, and a hedge fund strategy. The questions, for example, may include the risk aspects of vendor management and trade settlement. While the survey presentation engine 120 may present the same questions repeatedly to obtain information regarding each investment strategy, the management-level questions need only be presented once.
The survey presentation engine 120, in some implementations, presents discrete answer options related to each question. To provide for the ability to conduct comparisons between the strategies and behaviors of various managers 106, for example, each manager is provided limited standardized answer options related to each question (e.g., a yes/no, drop-down menu, or numeric answer, etc.). Further, in some embodiments, for at least a portion of the survey questions, the manager 106 may be provided the opportunity to qualify the selection of the standardized answer option with a brief comment. The brief comments, for example, may be reviewed by evaluators 108 in refining an automated evaluation generated by a survey analysis engine 122.
The managers 106, in an illustrative embodiment, may be invited to log into the operational assessment platform 102 to answer survey questions presented by the survey presentation engine 120 via a portal or web site interface. Manager data 142 may guide the survey presentation engine 120 in which sets of questions to present (e.g., which investment strategies to cover). Alternatively, the managers 106 may be requested to identify and provide information for each investment strategy area offered by the manager. The survey presentation engine 120 may include alternate branches based upon answers provided to certain questions. For example, after identifying whether the manager is using a managed account or a commingled fund, follow-on questions related to the particular type of accounting used may be presented. The survey presentation engine 120, in another example, may include alternate branches based upon the manager's practice (e.g., firm size, practice type, etc.).
Upon submitting answers, each answer may be stored in the data repository 112 as survey data 144. The survey data 144 may be assigned a date to identify the recency of data collection. For example, questions may adapt as best practices change (e.g., technological advances, shifts in human resource requirements, etc.). Thus, the date (timestamp) may be keyed to a particular set, or version, of questions. Further, the survey data 144 may include multiple sets of responses for various managers 106 to track trends in individual managers 106 over time (e.g., movement away or toward best practices compliance).
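As a minimal storage sketch only (the table and column names are hypothetical and do not correspond to any particular schema of the survey data 144), each stored answer may be keyed to both a collection timestamp and a question version so that later comparisons reference the correct set, or version, of questions:

```python
import sqlite3
from datetime import datetime, timezone

# An in-memory database stands in for the data repository; the schema is illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE question_version (
    question_id  TEXT NOT NULL,
    version      INTEGER NOT NULL,
    text         TEXT NOT NULL,
    PRIMARY KEY (question_id, version)
);
CREATE TABLE survey_answer (
    manager_id   TEXT NOT NULL,
    question_id  TEXT NOT NULL,
    version      INTEGER NOT NULL,
    answer       TEXT,          -- standardized answer selection (NULL if unanswered)
    comment      TEXT,          -- optional free-text qualification
    collected_at TEXT NOT NULL, -- timestamp identifying recency of collection
    FOREIGN KEY (question_id, version) REFERENCES question_version (question_id, version)
);
""")

# Each answer is linked to the question version in effect when it was collected, so trend
# analysis over multiple response sets compares answers against the correct question wording.
db.execute("INSERT INTO question_version VALUES (?, ?, ?)",
           ("Q-CYB-07", 2, "Is multi-factor authentication required for remote access?"))
db.execute("INSERT INTO survey_answer VALUES (?, ?, ?, ?, ?, ?)",
           ("manager-106a", "Q-CYB-07", 2, "no", None, datetime.now(timezone.utc).isoformat()))
db.commit()
```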
In some implementations, the managers 106 are invited to take surveys by the operational assessment platform 102 on a regular schedule. The schedule may depend, in part, on the type of manager. For example, a large institutional manager running an equity long-only strategy may be invited to respond on a less frequent schedule (e.g., every other year, every third year, etc.), while a small hedge fund manager may be invited to respond on a more frequent schedule (e.g., every 6 months, every year, etc.). Frequency of collection of survey information may depend, in part, on requirements placed by regulators or auditors 114, expectations or demands of clients 104, or the outcome of analysis of an individual manager's responses as determined by a benchmark analysis engine 124. For example, if a particular manager demonstrated a significantly larger number of risk areas than the typical manager 106, the manager may be approached regarding adoption of certain risk management practices and a follow-on survey may be provided by the survey presentation engine 120 to determine whether improvements have been made. In some embodiments, survey data collection may be triggered by certain risk factors identified through regulatory data analysis via the regulatory data analysis engine 139, described below. In another example, the frequency of survey may be increased based on certain risk factors identified through regulatory data analysis. Further, in some implementations, a full survey may be presented less frequently, while targeted surveys directed to more sensitive risk areas, such as cyber security, may be presented more often.
Regardless of how the survey data is collected, the most recent survey data 144 collected from a manager 106 by the survey presentation engine 120 may be used by the survey analysis engine 122 to identify areas of potential risk in the manager's practices. The survey analysis engine 122, for example, may identify a number of answers provided by the manager 106 indicative of risk. In some embodiments, the survey analysis engine 122 applies rules data 152 to flag certain answers as being indicative of risk. The rules data 152 may include various analysis factors in identifying risk, such as binary factors (e.g., answer “no” to question #3 is indicative of risk), range factors (e.g., if the numeric value of the answer to question #56 is less than 5, etc.), and/or combination factors (e.g., if the answer to question #41 is “no” and the answer to question #5 is greater than 1,000 it is indicative of risk, etc.). The survey analysis engine 122 may output risk data 148 identifying areas of risk exhibited in the answers provided by the manager 106 through the survey presentation engine 120. Examples of risks identified in a number of risk aspects 210 are illustrated in a risk profile summary 204 of
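In one non-limiting illustration, the binary, range, and combination factors of the rules data 152 may be represented as simple predicate functions over a manager's standardized answers; the question numbers and thresholds below mirror the examples above but are otherwise hypothetical:

```python
# Illustrative rules; risk factor names, question numbers, and thresholds are hypothetical.
def binary_factor(answers):
    # Binary factor: answering "no" to question 3 is indicative of risk.
    return answers.get("Q3") == "no"

def range_factor(answers):
    # Range factor: a numeric answer to question 56 of less than 5 is indicative of risk.
    value = answers.get("Q56")
    return value is not None and value < 5

def combination_factor(answers):
    # Combination factor: "no" to question 41 together with an answer greater than 1,000 to question 5.
    return answers.get("Q41") == "no" and (answers.get("Q5") or 0) > 1000

RULES = {
    "key_person_risk": binary_factor,
    "staffing_depth": range_factor,
    "manual_process_scale": combination_factor,
}

def flag_risks(answers):
    """Apply each rule to a manager's standardized answers and return the flagged risk factors."""
    return [name for name, rule in RULES.items() if rule(answers)]

# Example manager response set (hypothetical).
print(flag_risks({"Q3": "no", "Q56": 7, "Q41": "no", "Q5": 2500}))
# -> ['key_person_risk', 'manual_process_scale']
```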
In some implementations, in addition to survey data, regulatory data is imported from one or more regulatory data sources (e.g., from regulators and/or auditors 114) and formatted for use as a portion of the risk data 148. For example, a regulatory data analysis engine 139 may import Securities and Exchange Commission (SEC) form ADV information, such as information regarding criminal actions, regulatory actions, and/or civil judicial actions, and/or data from other regulatory authorities. The regulatory data analysis engine 139, similar to the survey analysis engine 122, applies rules data 152 to flag certain data derived from the imported regulatory data as being indicative of risk. The rules data 152 may include various analysis factors in identifying risk, such as binary factors (e.g., existence of an identified criminal action in the ADV disclosure information), range factors (e.g., categories of civil monetary penalties, etc.), and/or combination factors (e.g., a regulatory action related to violation of a statute in combination with a cease and desist, etc.). The regulatory data analysis engine 139 may output risk data 148 identifying areas of risk exhibited in the information obtained from one or more regulatory data sources.
In some implementations, the risk data 148 generated by the survey analysis engine 122 and/or the regulatory data analysis engine 139 is provided to a benchmark analysis engine 124 for benchmarking against other managers 106. The benchmark analysis engine 124 may combine risk data 148 from groupings of managers 106 to identify propensity among the groupings of managers 106 for exhibiting the same risk factor(s) as the evaluated manager. This allows the operational assessment platform 102 to consider industry norms in addition to simply presenting non-compliance with various practices identified, in some examples, by regulators and auditors 114, clients 104, or representatives of industry leaders in the managers 106 as best practices for risk mitigation. Non-compliance with individual practices, in some examples, may relate to expense of applying the practice, difficulty in obtaining internal compliance with the practice, and/or incremental technological advances required in advance of being capable of complying with the practice (e.g., adoption of the practice by software platforms used by the various managers, etc.). Thus, non-compliance may be common throughout the managers 106 or portions thereof.
The groupings of managers 106, in some examples, can include all managers 106 for which data is available (referred to herein as "the universe"), managers 106 in the same type of industry (e.g., public, private, sub-categories thereof), managers 106 of investment vehicles held within the portfolio of a requesting client 104 (referred to herein as "the portfolio"), or similar managers to a manager under evaluation (referred to herein as "peers"). In evaluating a manager against the managers' peers, one or more characteristics of the evaluated manager may be used to filter the universe of the managers 106 to only those managers 106 having matching characteristics to the evaluated manager. The characteristics may include, in some examples, similarity in investment vehicles (e.g., matching investment strategies), geographic region of the managers, size of the managers, and/or length of time in business (e.g., manager maturity). In some embodiments, users of the operational assessment platform 102, such as the clients 104 and regulators/auditors 114, may select characteristics for identifying peer sets of the managers 106. The peers, in part, may depend upon a threshold number of managers 106 exhibiting the selected characteristics (e.g., at least 20, at least 50, etc.) so that valuable trend analysis is provided and, conversely, behaviors of particular managers are not discoverable through narrow characteristic selections. The benchmark analysis engine 124, in some implementations, accesses population data to identify managers 106 sharing similar characteristics. Alternatively, the benchmark analysis engine 124 may access manager data 142 to filter on the various characteristics to identify similar managers to the evaluated manager 106.
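A minimal sketch of such peer selection follows, assuming hypothetical characteristic names; the fallback to the full universe when the filtered pool is below the threshold is one possible handling and is not required:

```python
MIN_PEER_GROUP_SIZE = 20  # illustrative threshold guarding against overly narrow peer selections

def peer_group(universe, evaluated, characteristics, min_size=MIN_PEER_GROUP_SIZE):
    """Filter the universe to managers matching the evaluated manager on the selected characteristics."""
    peers = [
        m for m in universe
        if m["id"] != evaluated["id"]
        and all(m.get(c) == evaluated.get(c) for c in characteristics)
    ]
    # Fall back to the full universe when the peer pool is too small to yield meaningful
    # trends (and to keep an individual manager's behavior from being discoverable).
    return peers if len(peers) >= min_size else universe

# Hypothetical manager records with illustrative characteristics.
managers = [
    {"id": "m1", "strategy": "equity", "region": "EU", "aum_band": "large"},
    {"id": "m2", "strategy": "equity", "region": "EU", "aum_band": "large"},
    # ... remainder of the universe ...
]
peers = peer_group(managers, managers[0], ["strategy", "region"])
```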
The benchmark analysis engine 124, in some embodiments, obtains data during a recent timeframe, for example to avoid a false analysis based upon movements within the industry toward risk compliance in various areas. In one example, the recency may be set at a one-year period. In other examples, the recency may be set at eighteen months, two years, or three years. Recency, in some embodiments, may be based in part upon intended audience. For example, the regulators and auditors 114 may have specific desired timeframes, while an evaluation for use in presenting to managers 106 or clients 104 may have a different desired timeframe.
The benchmark analysis engine 124, in some implementations, analyzes risk data 148 of the grouping of managers to identify a portion of the managers 106 within the selected grouping of managers 106 that responded similarly to the evaluated manager 106 for each risk factor identified by the survey analysis engine 122. In some embodiments, the benchmark analysis engine 124 accesses benchmark classifications 158 to determine a quantile classification to apply to the selected grouping of managers 106 in determining deviance or similarity of the response of the evaluated manager 106 to the typical response of the selected grouping of managers 106. The quantile classification, in some examples, can include a tercile classification, a quartile classification, a decile classification, a percentile classification, or other quantile classification. In other embodiments, the quantile classification may depend in part upon a requestor of the comparative analysis. For example, one of the clients 104 may wish to review tercile classifications of the managers 106 in the client's portfolio (e.g., as identified via portfolio data 138 of the data repository 112), while a financial services organization 110 may wish to review quantile classifications of a grouping of the managers 106.
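By way of illustration only, the quantile classification may be implemented as simple threshold tests over the share of the selected grouping that responded the same way as the evaluated manager; the cut points below follow standard quartile and tercile boundaries and the labels are hypothetical:

```python
def quartile_label(share_matching):
    """Classify how common the evaluated manager's response is within the selected grouping.

    `share_matching` is the fraction (0.0-1.0) of the grouping that responded the same way.
    """
    if share_matching >= 0.75:
        return "top quartile"      # the response is common practice in the grouping
    if share_matching >= 0.25:
        return "middle quartiles"  # the response is somewhat common
    return "bottom quartile"       # the response is unusual within the grouping

def tercile_label(share_matching):
    if share_matching >= 2 / 3:
        return "top tercile"
    if share_matching >= 1 / 3:
        return "middle tercile"
    return "bottom tercile"

# A client reviewing its portfolio might request tercile labels, while another audience
# might request quartile labels for the same underlying benchmark data.
print(quartile_label(0.61), tercile_label(0.61))  # middle quartiles middle tercile
```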
In some implementations, a trend assessment engine 130 obtains risk metrics 154 from the benchmark analysis engine 124 and generates trend metrics 150 regarding trends in manager application of various risk mitigation practices. The trend assessment engine 130, for example, may compare historic risk metrics 154 to present risk metrics 154 to identify movement in adoption of the various risk mitigation practices covered within the survey questions presented by the survey presentation engine 120. The trend metrics 150 identified by the trend assessment engine 130, for example, may be used to educate the managers 106 on movement within the industry toward or away from certain risk mitigation practices. In some embodiments, similar to the risk metrics 154, the trend metrics 150 may be developed for different peer groupings of managers 106 as well as for different quantile classifications.
In some implementations, a user (e.g., client 104, regulator/auditor 114, financial service organization 110, or manager 106) accesses the operational assessment platform 102 to obtain a report on one or more managers. A manager report generation engine 126, for example, may be used to generate information regarding a certain manager 106, based on the survey data 144 and/or regulatory data collected regarding the manager 106. The manager report generation engine 126, in addition to accessing and formatting survey data 144 related to a requested manager 106, may execute the benchmark analysis engine 124 in real time to obtain a statistical analysis of the manager's performance in relation to other managers 106 in the operational assessment environment 100 at the time of the request. Further, the manager report generation engine 126 may execute the trend assessment engine 130 in real time (e.g., in the circumstance of a report request targeting a manager 106 audience or a regulator/auditor 114 audience) to enable comparisons between the manager's performance and current movements in practices of sets of managers 106.
In some implementations, the manager report generation engine 126, after gathering automated analysis via the operational assessment platform 102, causes execution of an evaluator commentary engine 128 to obtain manual review and commentary prepared by one of the evaluators 108. The evaluator commentary engine 128, for example, may assign one of the evaluators 108 to review the automatically generated report data prepared by the manager report generation engine 126 and to add evaluator data 160 that the manager report generation engine 126 can use in formatting a final report structure. The evaluators 108, for example, may be provided a graphical user interface by a portal report presentation engine 118 to review information and to add comments thereto.
In addition to reviewing the automatically generated report data, in some embodiments, the evaluators 108 conduct interviews with personnel of each manager 106 being evaluated to clarify brief written responses or to obtain additional information regarding the manager 106. The interviews, in some implementations, extend beyond the managers 106 themselves to key partnerships, such as service providers, vendors, or contractors having relationships with the manager 106 which can expose the manager 106 to risk. In some embodiments, answers to one or more questions regarding risk factors involving these key partnerships may be filled in by the evaluators 108 rather than by the managers 106.
The manager report generation engine 126, in some implementations, generates a formatted report for review by the requesting entity (e.g., client 104, manager 106, or regulator/auditor 114). The report, in some examples, may be provided in a document format (e.g., Word document, PDF, etc.) or as interactive content available to review online via a portal report presentation engine 118. For example, the requesting entity may log into the operational assessment platform to review report information. The client management engine 116 or regulators/auditors engine 137, in some examples, may enable access to the operational assessment platform for report generation requests and for report review.
In an illustrative example, the manager report prepared by the manager report generation engine 126 may include formatted information as presented in a series of example screen shots of
The summary review section 202 additionally provides a quartile analysis key 206 and quartile analysis example graphics 208 illustrating a color-coded quartile circle graphic. The percent of exceptions above the 75th percentile is color-coded green (e.g., the lack of risk mitigation for this survey response practice is common in the universe of managers as illustrated in graphic 208b, in the client's portfolio as illustrated in graphic 208a, or among the manager's peers as illustrated in graphic 208c). The percent of exceptions between the 25th and the 75th percentile is color-coded yellow (e.g., the lack of risk mitigation for this survey response practice is somewhat common but not widely adopted in the universe of managers as illustrated in graphic 208b, in the client's portfolio as illustrated in graphic 208a, or among the manager's peers as illustrated in graphic 208c). The percent of exceptions below the 25th percentile is color-coded red (e.g., the lack of risk mitigation for this survey response practice is uncommon in the universe of managers as illustrated in graphic 208b, in the client's portfolio as illustrated in graphic 208a, or among the manager's peers as illustrated in graphic 208c). In other embodiments, the graphics may differ (e.g., bar graphs vs. circle graphs or pie charts) and/or the quantiles may differ based upon desired output.
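A minimal sketch of this color coding follows, assuming the percentile value already expresses how common the exception is within the relevant comparison group (universe, portfolio, or peers); the function name is hypothetical:

```python
def exception_color(percentile):
    """Map the commonality of an exception within a comparison group to the report color code."""
    if percentile > 75:
        return "green"   # lack of this risk mitigation is common in the comparison group
    if percentile >= 25:
        return "yellow"  # somewhat common, but not widely adopted
    return "red"         # uncommon; the exception is unusual relative to the group

# e.g., coloring the universe, portfolio, and peer graphics for one risk factor
print([exception_color(p) for p in (82, 40, 11)])  # ['green', 'yellow', 'red']
```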
Turning to the risk profile summary section 204, on the left hand side, risk aspects 210 of the firm's practice are listed: corporate governance and organizational structure; compliance, regulatory, legal, and controls testing; technology and business continuity planning (BCP) oversight; key external service provider selection and monitoring; trade/transaction execution; middle/back office, valuation, and cash controls; investment and counterparty oversight; and fund governance, structure, and administration. On the right hand side, particular risk identifiers 212 are listed for each risk aspect. The risk identifiers, for example, may represent a question presented in the manager survey or the outcome of a combination of questions. Regarding the risk aspect of fund governance, structure, and administration 210h, the corresponding risk identifier 212h reads “no material risks identified”, demonstrating that the manager 204a is fully in compliance regarding the risk aspect 210h. In relation to privately traded investment vehicles, in some embodiments, the firm risk aspects may include, in some examples, corporate governance and organizational structure; regulation, compliance, and audit; investment and counterparty oversight; technology and BCP oversight; key external service provider selection and monitoring; trade/transaction execution; valuation and cash controls; and fund governance and administration.
In
Regarding an historical employee turnover risk factor 214b, a manager response 216 explains this discrepancy in risk mitigation by informing the audience that the organization has reduced accounting staff by half, perhaps due to efficiencies derived through automation. In some embodiments, one of the evaluators 108 may selectively include manager comments where useful or not confidential through the evaluator commentary engine 128 (e.g., as evaluation data 156). In other embodiments, manager comments collected by the survey presentation engine 120 along with certain standardized answer selections may be automatically included in the report by the manager report generation engine 126.
Regarding a succession planning risk factor 214c, although according to a brief description 218c the "market practice is for a firm to formally document a succession plan", both a portfolio quartile analysis graphic 220c and a universe quartile analysis graphic 222c show that a majority of managers, both in the client's portfolio and among the managers 106 evaluated by the operational assessment platform 102, do not formally document a succession plan. This advises the client that this risk mitigation practice is less common in the marketplace as of the time of the report. Conversely, regarding the concentrated investor base risk factor 214a, the risk factor is extremely unusual in the universe of managers according to a universe graphic 222a (e.g., 0% or less than 1%), and the evaluated manager is most likely the only manager in the client's portfolio exhibiting this behavior according to a portfolio graphic 220a (e.g., 11%).
In some embodiments, where the report is instead generated for the benefit of one of the managers 106 rather than for one of the clients 104 of
Turning to
Turning to
Returning to
In an illustrative example, the portfolio report prepared by the portfolio report generation engine 132 may include formatted information as presented in a series of screen shots of
Turning to
The portfolio risk summary screen shot 300 includes an overall breakdown circle graph 308 illustrating that, of 5,325 questions assessed across the fifty-seven managers 302, 65% demonstrated managers 302 conforming to best practices, 22% identified exceptions from best practice behaviors, and 13% of the questions contained no data (e.g., unanswered, irrelevant to one or more managers 302, etc.). A breakdown of responses by survey categories bar graph 310 illustrates exceptions, best practices, and no data related to both firm related questions 316a and strategy related questions 316b. As illustrated in the bar graph 310, the "no data" category is higher for firm related questions 316a than for strategy related questions 316b, resulting in larger percentages of exceptions and larger percentages of best practices in strategy-related questions. This may be because some managers 302 may be less inclined to answer questions regarding the firm's management, viewing some questions as covering confidential information. In other presentations, the data completion itself may be assessed. For example, the data completion may be separated into quantiles (e.g., excellent, very good, above average, good, average, below average, poor, very poor, etc.) based upon absolute numbers (e.g., a 95%+ completion rate is excellent, etc.) and/or in comparison to the universe and/or peer firms' completion data. Turning to
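As a non-limiting sketch of how such an overall breakdown might be aggregated from per-question risk data (the dictionary shapes and classification labels below are hypothetical):

```python
from collections import Counter

def portfolio_breakdown(risk_data):
    """Aggregate per-question classifications across all managers in a portfolio.

    `risk_data` maps manager id -> {question id: "best_practice", "exception", or None (no data)}.
    Returns the percentage of best practices, exceptions, and "no data" across all questions.
    """
    counts = Counter()
    for answers in risk_data.values():
        for classification in answers.values():
            counts[classification if classification is not None else "no_data"] += 1
    total = sum(counts.values()) or 1
    return {key: round(100 * counts[key] / total)
            for key in ("best_practice", "exception", "no_data")}

# Hypothetical two-manager portfolio.
print(portfolio_breakdown({
    "m1": {"Q1": "best_practice", "Q2": "exception", "Q3": None},
    "m2": {"Q1": "best_practice", "Q2": "best_practice", "Q3": "exception"},
}))  # -> {'best_practice': 50, 'exception': 33, 'no_data': 17}
```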
Returning to
Turning to
A distribution of risk areas within portfolio circle graph 406 identifies that three percent of the questions answered as exceptions were highest quartile answers (e.g., matching answers of 75% or above of the managers 302), twenty-six percent of the questions answered with exceptions were middle quartile answers (e.g., matching answers of 25-75% of the managers 302), and seventy-one percent of the questions answered with exceptions were lowest quartile answers (e.g., matching the answers of less than 25% of the managers 302). These exceptions are further broken down below, in a listing of the top five common firm level risks 408 (e.g., green color coded risks where the 75%+ majority of the managers reported an exception) and a list of the top five unique firm level risks 410 (e.g., red color coded risks where the <25% minority of managers reported an exception).
In a bottom pane, a summary of the risk categories 412 (e.g., risk aspects) along with top risk factors 414 in each risk category are presented. The percentage exceptions 416 in each, as well as the percentage of “no data” 418 in each, are further displayed. This synopsis is broken down further in the report, or further details may be accessed, in an online report, by selecting particular risk categories and/or risk areas.
Turning to
Turning to
The screen shot 420 further presents a table 426 of top ten managers 302 ranked by percentage exceptions in the firm risk categories 424 (e.g., percentage firm level exceptions 428, percentage best practice 430, and percentage no data 432). As with
Similar to the portfolio risk summary at firm level screen shot 400 of
Additionally, in some implementations, the screen shot 420 may include comments provided by an evaluator 108 of
Similar to the circle graph 406 of
In a bottom pane, a summary of the top five risk categories 512 (e.g., risk aspects) along with top risk factors 514 in each risk category 512 are presented. The percentage exceptions 516 in each, as well as the percentage of “no data” 518 in each, are further displayed. This synopsis is broken down further in the report, or further details may be accessed, in an online report, by selecting particular risk categories and/or risk areas.
Further, turning to
The screen shot 520 further presents a table 526 of top ten strategies ranked by percentage exceptions in the strategy risk categories 524. In an interactive portfolio-level report presented to a representative of the client via a browser or web portal interface, the individual strategies listed in a strategy column 528 may be user-selectable to obtain a greater level of detail regarding exceptions discovered during analysis of each strategy. Further, the presentation of the strategies 528 may be rearrangeable, in an interactive report format, to organize, in some examples, by best-to-worst strategies ranked by strategy level exceptions 530, by percentage best practice 532, or by percentage no data 534.
Additionally, in some implementations, the screen shot 520 may include comments provided by an evaluator 108 of
As shown in
The screen shot includes a corporate governance and organization structure risk category graph 612a, a compliance, regulatory, legal & controls testing risk category graph 612b, and an investment and counterparty oversight risk category graph 612c. Each risk category graph 612 includes a number of risk factors, each risk factor presented as a bar of a bar chart having an x-axis of 0 to 100%. Each bar represents the managers' answers related to the particular risk factor categorized as exception, best practice, or no data. For example, a "succession planning" risk factor bar 614a illustrates that 60% of managers' answers corresponded to an exception from best practice, 26% of managers' answers corresponded to meeting the best practice, and 14% of managers did not provide answers related to succession planning. In some implementations, questions may be categorized as sets according to rules (e.g., the rules data 152 of
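In one non-limiting sketch, the percentages underlying each risk factor bar may be derived by grouping question-level classifications into risk factor sets; the mapping, identifiers, and labels below are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical mapping of questions to risk factor sets and per-(manager, question) classifications.
question_to_risk_factor = {
    "Q10": "succession_planning",
    "Q11": "succession_planning",
    "Q20": "board_oversight",
}
classified_answers = [
    ("m1", "Q10", "exception"), ("m1", "Q11", "best_practice"), ("m1", "Q20", "no_data"),
    ("m2", "Q10", "exception"), ("m2", "Q11", "exception"), ("m2", "Q20", "best_practice"),
]

def risk_factor_bars(answers, mapping):
    """For each risk factor, compute the share of answers classified as exception, best
    practice, or no data -- the values drawn as one bar of a risk category graph."""
    tallies = defaultdict(Counter)
    for _manager, question_id, classification in answers:
        tallies[mapping[question_id]][classification] += 1
    bars = {}
    for factor, tally in tallies.items():
        total = sum(tally.values())
        bars[factor] = {k: round(100 * tally[k] / total)
                        for k in ("exception", "best_practice", "no_data")}
    return bars

print(risk_factor_bars(classified_answers, question_to_risk_factor))
```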
As illustrated in
Similar to
In the strategy level risk category graph 702, risk factors of the trade/transaction execution strategy risk category are each presented as a bar of a bar chart having an x-axis of 0 to 100%. Each bar represents the managers' answers related to the particular risk factor categorized as exception, best practice, or no data. For example, a "front office manual processes" risk factor bar 704a illustrates that 95% of managers' answers corresponded to an exception from best practice, and 5% of managers' answers corresponded to meeting the best practice. Unlike the remaining bars of the strategy level risk category graph 702, the "front office manual processes" risk factor bar 704a lacks a "no data" responses section, meaning that all managers answered the question(s) related to front office manual processes. As discussed previously, in some implementations, questions may be categorized as sets according to rules (e.g., the rules data 152 of
Similar to
For each strategy in a strategy column 712, a percentage of fund level exceptions 714, a percentage of best practices 716, and a percentage of no data 718 are listed. Further, a risk component column 720 provides a listing of factors corresponding to the percentage of strategy level exceptions 714.
As illustrated in
In some implementations, turning to
Returning to
In some implementations, the evaluation data sharing engine 136 provides reports and/or portions of survey data 144, risk data 148, trend metrics 150, risk metrics 154, and/or evaluation data 156 to underwriters to support insurance underwriting on behalf of the managers 106 and/or to other entities or internal reviewers (e.g., supervisors, developers, and/or managers of the platform 102). For example, a risk underwriter may be provided information obtained and/or generated by the operational assessment platform 102 for use in increasing the efficiency and confidence in insurance underwriting. In another example, a platform sponsor of the operational assessment platform 102 may access metrics generated and compiled by the operational assessment platform to efficiently assess the range of outcomes among the investment products reviewed by the platform 102.
In some implementations, the process 800 begins with retrieving (810), by the survey presentation engine 804, a firm management questionnaire format 810 from the data store 806. The firm management questionnaire format 810, for example, may include an electronic document format including selectable answers, such as an Excel document. In another example, the firm management questionnaire format 810 may include formatting files, such as style sheets (e.g., CSS), web mark-up language documents (e.g., XML, HTML, etc.), and content files for creating an interactive online survey for presentation to the manager 802. The particular firm management questionnaire format retrieved, in some embodiments, depends in part on the type of manager 802 and/or the type of survey desired. For example, various levels of survey (e.g., a full survey presented on a first schedule vs. a partial but more frequently scheduled survey) may be available for presentation to the manager 802. Further, retrieving the questionnaire format may include retrieving a number of formatting documents, each directed to a separate questionnaire section. In some examples, the sections may include a firm information section and a number of risk aspect sections.
In some implementations, the survey presentation engine 804 presents (812) the firm management portion of the survey to the manager 802 using the firm management questionnaire format. The survey presentation engine 120 of
In some implementations, the questions presented are standardized questions presented to a group of managers, and the answers include standardized user-selectable answers. The standardized answers, in some examples, may include yes/no selections, single selection from a set number of options (e.g., via a drop-down menu or list), multiple selection from a set number of options (e.g., via a list), and/or numeric entry.
Further, in some implementations, at least a portion of the questions presented include, in addition to standardized answer options, a data entry field (e.g., text field) for supplying a customized response, such as a detailed explanation regarding a selected answer.
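A minimal data model sketch for such standardized questions follows; the class name, field names, and answer kinds are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class StandardizedQuestion:
    question_id: str
    text: str
    answer_kind: str                                  # "yes_no", "single_select", "multi_select", or "numeric"
    options: list[str] = field(default_factory=list)  # the fixed options for select-type questions
    allows_comment: bool = False                      # whether a free-text qualification may accompany the answer

# Illustrative question with a yes/no standardized answer and an optional comment field.
question = StandardizedQuestion(
    question_id="Q-CYB-07",
    text="Is multi-factor authentication required for remote access?",
    answer_kind="yes_no",
    allows_comment=True,
)
```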
In some implementations, the survey presentation engine 804 receives (814) answers to standardized firm management survey questions from the manager 802. Further, if the manager 802 was provided the opportunity to enter a text comment related to some of the questions, the survey presentation engine 804 may receive custom information related to one or more survey questions. In the circumstance of a user-fillable electronic document, receiving the answers may include receiving a completed version of the electronic document. Conversely, in the circumstance of an online interactive survey, receiving the answers may include receiving submission of at least a portion of the survey. For example, the manager 802 may fill in portions of the survey, submitting the answers to the survey presentation engine 804 in a piecemeal fashion until the manager 802 indicates completion of the survey. Completion, in some embodiments, includes a number of unanswered questions. For example, the manager 802 may elect to leave a portion of the questions blank.
In some implementations, the survey presentation engine 804 stores (816) the standardized answers in the data store 806. The standardized answers, for example, may be stored in a database format for later retrieval. The standardized answers may be linked to the survey questions, such that, as standardized survey questions change (e.g., increase in number, alter in wording, etc.) comparisons are made between responses and the appropriate set of standardized survey questions maintained in the data store 806. In some embodiments, the standardized answers are timestamped for comparison with other standardized answers submitted by the manager 802 at a different time. For example, the standardized answers may be stored as survey data 144 in the data repository 112 of
In some implementations, the survey presentation engine 804 stores (818) comments related to one or more standardized questions in the data store 806. Comments may be submitted, on a question-by-question basis, either in addition to or in lieu of selection of a standardized answer. For example, for one or more questions that the manager 802 felt were not adequately addressed by one of the standardized answers, the manager 802 may instead opt to submit a comment related to the standardized question. Any comments may be stored in the data store 806 keyed to the standardized question and/or the corresponding standardized answer. For example, the comments may be stored as survey data 144 in the data repository 112 of
In some implementations, at some point in the future, the survey analysis engine 808 retrieves (820) the standardized answers for the firm management portion from the data store 806. The survey analysis engine 808, for example, may retrieve standardized answers related to the entire firm management questionnaire or to one or more portions (e.g., risk aspects) of the questionnaire presented to the manager 802. The survey analysis engine 808, for example, may be configured to access and analyze questions on a periodic basis, regardless of whether the manager 802 has completed the entire questionnaire, as long as a portion of finalized answers has been submitted. In other embodiments, the survey analysis engine 808 may be configured to retrieve the standardized answers based on a trigger (e.g., indication of completion of the survey by the manager 802, receipt of a request for a manager report or portfolio report involving the manager 802, etc.). For example, the survey analysis engine 122 may retrieve the survey data 144 from the data repository 112 of
The survey analysis engine 808, in some implementations, retrieves (822) analysis rules from the data store 806. The analysis rules may include various analysis factors in identifying risk exceptions within the standardized answers provided by the manager 802. The analysis rules, in some embodiments, differ based upon characteristics of the investment manager 802. For example, best practice expectations for a large mature firm may differ from best practice expectations for a young, small firm. Further, best practice expectations may differ based upon investment strategies offered by the manager 802. Hedge fund managers, for example, may have different legal requirements and expectations than real estate fund investment managers. Although described as one set of analysis rules, the analysis rules may be separated into the various risk aspects covered within the firm management questionnaire. For example, the survey analysis engine 808 may access separate analysis rules for each risk aspect being analyzed (e.g., firm governance, technology and cyber security, vendor management, trade settlement, and/or back office functions, etc.). In some embodiments, the survey analysis engine 122 may retrieve the rules data 152 from the data repository 112 of
In some implementations, the survey analysis engine 808 translates (824) the standardized answers into risk data according to the analysis rules. As discussed above, the standardized answers may be categorized as exceptions to best practices or as best practices in accordance with the analysis rules. The analysis rules may include, in some examples, binary factors (e.g., answer "no" to question #3 is indicative of a risk exception), range factors (e.g., if the numeric value of the answer to question #56 is less than 5, this is indicative of a risk exception), and/or combination factors (e.g., if the answer to question #41 is "no" and the answer to question #5 is greater than 1,000 it is indicative of a risk exception). Thus, the risk data may include fewer independent values than the number of standardized answers analyzed. Although described as being a binary decision (e.g., best practice or exception from best practice), in other embodiments, the survey analysis engine 808 may classify the standardized answers into three or more categories, such as best practice, exception to best practice, and exception to required practice (e.g., in the event that one or more best practices are requirements placed by legal or certification bodies, etc.). Additionally, if one or more questions were left unanswered or answered only using a comment option, the survey analysis engine 808 may enter a "no data available" value for those questions into the risk data. In an illustrative example, the survey analysis engine 122 of
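By way of a non-limiting sketch of this translation step (the rule structure and question identifiers are hypothetical), each standardized answer may be mapped to a best practice, exception, or "no data available" value:

```python
def classify_answer(question_id, answer, rules):
    """Classify one standardized answer as a best practice, an exception, or no data."""
    if answer is None:
        return "no_data"            # unanswered or comment-only response
    rule = rules.get(question_id)
    if rule is None:
        return "best_practice"      # no rule flags this question as an exception
    return "exception" if rule(answer) else "best_practice"

# Illustrative rules mirroring the binary and range factors described above.
rules = {
    "Q3": lambda a: a == "no",
    "Q56": lambda a: a < 5,
}

answers = {"Q3": "no", "Q56": 7, "Q12": None}
risk_data = {qid: classify_answer(qid, ans, rules) for qid, ans in answers.items()}
print(risk_data)  # {'Q3': 'exception', 'Q56': 'best_practice', 'Q12': 'no_data'}
```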
In some implementations, the survey analysis engine 808 stores (826) the risk data in the data store 806. The risk data, for example, may be stored in a database format for later retrieval. The risk data may be linked to the survey questions, such that, as standardized survey questions change (e.g., increase in number, alter in wording, etc.) comparisons are made between responses and the appropriate set (version) of standardized survey questions maintained in the data store 806. In some embodiments, the risk data are timestamped. For example, the risk data may be stored as risk data 148 in the data repository 112 of
Returning to obtaining information from the manager 802, in some implementations, the survey presentation engine 804 retrieves (828) one or more strategies managed by the investment vehicle manager 802 from the data store 806. The one or more strategies, for example, may be identified within the standardized answers collected by the survey presentation engine 804 via the firm management questionnaire or another initial questionnaire (e.g., firm information questionnaire). In another example, the one or more strategies may be retrieved from the portfolios of one or more clients, such as a requesting client. Turning to
Using the one or more strategies, in some implementations, the survey presentation engine 804 retrieves (830), from the data store 806, a strategy management questionnaire format for a first strategy of the one or more strategies. Similar to the firm management questionnaire format discussed above in relation to step 810, the strategy management questionnaire format may include an electronic document format including selectable answers or formatting files and content files for creating an interactive online survey for presentation to the manager 802. The strategy management questionnaire format retrieved, in some embodiments, depends in part on the type of manager 802 and/or the type of survey desired. For example, various levels of strategy survey (e.g., a full survey presented on a first schedule vs. a partial but more frequently scheduled survey) may be available for presentation to the manager 802. Further, retrieving the questionnaire format may include retrieving a number of formatting documents, each directed to a separate questionnaire section. The sections may include a number of risk aspect sections.
In some implementations, the survey presentation engine 804 presents (832) the first strategy management portion of the survey to the manager 802 using the strategy management questionnaire format. The survey presentation engine 120 of
In some implementations, the questions presented are standardized questions presented to a group of managers, and the answers include standardized user-selectable answers. The standardized answers, in some examples, may include yes/no selections, single selection from a set number of options (e.g., via a drop-down menu or list), multiple selection from a set number of options (e.g., via a list), and/or numeric entry. Further, in some implementations, at least a portion of the questions presented include, in addition to standardized answer options, a data entry field (e.g., text field) for supplying a customized response, such as a detailed explanation regarding a selected answer.
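As a non-limiting illustration of how such standardized questions and answer options might be represented electronically, the following Python sketch defines a hypothetical question record; the field names and answer-type labels are assumptions rather than the platform's schema.

```python
# Hypothetical question record; field names and answer-type labels are
# illustrative assumptions, not the disclosed survey data format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StandardizedQuestion:
    question_id: str
    text: str
    answer_type: str                                  # "yes_no", "single_select", "multi_select", or "numeric"
    options: List[str] = field(default_factory=list)  # populated for the select answer types
    allow_comment: bool = False                       # optional free-text explanation field

question = StandardizedQuestion(
    question_id="Q41",
    text="Does the strategy use an independent third-party administrator?",
    answer_type="yes_no",
    allow_comment=True,
)
```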
In some implementations, the survey presentation engine 804 receives (834) answers to standardized strategy management survey questions from the manager 802. Further, if the manager 802 was provided the opportunity to enter a text comment related to some of the questions, the survey presentation engine 804 may receive custom information related to one or more survey questions. In the circumstance of a user-fillable electronic document, receiving the answers may include receiving a completed version of the electronic document. Conversely, in the circumstance of an online interactive survey, receiving the answers may include receiving submission of at least a portion of the survey. For example, the manager 802 may fill in portions of the survey, submitting the answers to the survey presentation engine 804 in a piecemeal fashion until the manager 802 indicates completion of the survey. Completion, in some embodiments, includes a number of unanswered questions. For example, the manager 802 may elect to leave a portion of the questions blank.
In some implementations, the survey presentation engine 804 stores (836) the standardized answers in the data store 806. The standardized answers, for example, may be stored in a database format for later retrieval. The standardized answers may be linked to the survey questions, such that, as standardized survey questions change (e.g., increase in number, alter in wording, etc.), comparisons are made between responses and the appropriate set of standardized survey questions maintained in the data store 806. In some embodiments, the standardized answers are timestamped for comparison with other standardized answers submitted by the manager 802 at a different time. For example, the standardized answers may be stored as survey data 144 in the data repository 112 of
In some implementations, the survey presentation engine 804 stores (838) comments related to one or more standardized questions in the data store 806. Comments may be submitted, on a question-by-question basis, either in addition to or in lieu of selection of a standardized answer. For example, for one or more questions that the manager 802 felt were not adequately addressed by one of the standardized answers, the manager 802 may instead opt to submit a comment related to the standardized question. Any comments may be stored in the data store 806 keyed to the standardized question and/or the corresponding standardized answer. For example, the comments may be stored as survey data 144 in the data repository 112 of
Turning to
Meanwhile, at some point in the future, the survey analysis engine 808, in some implementations, retrieves (842) the standardized answers for the first strategy management portion from the data store 806. The survey analysis engine 808, for example, may retrieve standardized answers related to the entire strategy management questionnaire or to one or more portions (e.g., risk aspects) of the questionnaire presented to the manager 802. The survey analysis engine 808, for example, may be configured to access and analyze answers on a periodic basis, regardless of whether the manager 802 has completed the entire questionnaire, as long as a portion of finalized answers has been submitted. In other embodiments, the survey analysis engine 808 may be configured to retrieve the standardized answers based on a trigger (e.g., indication of completion of at least the first strategy management questionnaire of the strategy management survey by the manager 802, receipt of a request for a manager report or portfolio report involving the manager 802, etc.). For example, the survey analysis engine 122 may retrieve the survey data 144 from the data repository 112 of
The survey analysis engine 808, in some implementations, retrieves (844) analysis rules from the data store 806. The analysis rules may include various analysis factors for identifying risk exceptions within the standardized answers provided by the manager 802. The analysis rules, in some embodiments, differ based upon characteristics of the investment manager 802. For example, best practice expectations for a large mature firm may differ from best practice expectations for a young, small firm. Further, best practice expectations may differ based upon investment strategies offered by the manager 802. Hedge fund managers, for example, may have different legal requirements and expectations than real estate fund investment managers. Although described as one set of analysis rules, the analysis rules may be separated into the various risk aspects covered within the strategy management questionnaire. For example, the survey analysis engine 808 may access separate analysis rules for each risk aspect being analyzed (e.g., a trade/transaction execution aspect, a middle-back office, valuation, and cash controls aspect, and/or a fund governance, structure, and administration aspect, etc.). In some embodiments, the survey analysis engine 122 may retrieve the rules data 152 from the data repository 112 of
In some implementations, the survey analysis engine 808 translates (846) the standardized answers into risk data according to the analysis rules. As discussed above, the standardized answers may be categorized as exceptions to best practices or as best practices in accordance with the analysis rules. The analysis rules may include, in some examples, binary factors (e.g., answer “no” to question #3 is indicative of a risk exception), range factors (e.g., if the numeric value of the answer to question #56 is less than 5, this is indicative of a risk exception), and/or combination factors (e.g., if the answer to question #41 is “no” and the answer to question #5 is greater than 1,000, this is indicative of a risk exception). Thus, the risk data may include fewer independent values than the number of standardized answers analyzed. Although described as being a binary decision (e.g., best practice or exception from best practice), in other embodiments, the survey analysis engine 808 may classify the standardized answers into three or more categories, such as best practice, exception to best practice, and exception to required practice (e.g., in the event that one or more best practices are requirements placed by legal or certification bodies, etc.). Additionally, if one or more questions were left unanswered or answered only using a comment option, the survey analysis engine 808 may enter a “no data available” value for those questions into the risk data. In an illustrative example, the survey analysis engine 122 or
In some implementations, the survey analysis engine 808 stores (848) the risk data in the data store 806. The risk data, for example, may be stored in a database format for later retrieval. The risk data may be linked to the survey questions, such that, as standardized survey questions change (e.g., increase in number, alter in wording, etc.), comparisons are made between responses and the appropriate set of standardized survey questions maintained in the data store 806. In some embodiments, the risk data are timestamped. For example, the risk data may be stored as risk data 148 in the data repository 112 of
If additional strategies were retrieved at step 828 (840), in some implementations, steps 842, 844, 846, and 848 are repeated for each strategy. Conversely, multiple (or all) of the strategies and corresponding analysis rules, in other implementations, may be retrieved (842, 844) at once by the survey analysis engine 808 for translation according to analysis rules (846) and storage as risk data (848).
Despite being illustrated as a particular flow of operations, in other implementations, more or fewer operations may exist. Additionally, some operations may be performed in a different order than illustrated in
Although illustrated as the single data store 806, in other implementations, the data store 806 may include a number of data storage regions or devices, including local, remote, and/or cloud storage on various types of storage devices. For example, the questionnaire format(s) may be maintained separately from a database including the standardized answers received from the manager 802. Further, some information may be relocated. In illustration, standardized answers may be initially stored in a fast access memory region, then transferred to a long-term storage region at a later time.
While the survey analysis engine 808 is illustrated as analyzing (824) the standardized answers after the standardized questions have all been answered by the investment vehicle manager 802, in other embodiments, the survey analysis engine 808 may retrieve answers once submitted in relation to any firm management risk aspect, regardless of the manager's progress related to other portions of the firm management questionnaire.
Other modifications to the process 800 may be made while remaining within the scope and spirit of the disclosure.
Turning to
The benchmark classifications, in some embodiments, are retrieved from a storage area. For example, the benchmark classifications may be associated with a particular report type (e.g., manager report, portfolio report, trend analysis report, etc.), a particular assessment type (e.g., strategy management risk assessment, firm management risk assessment, etc.), or a particular client. For example, one or more of the clients 104 may designate customized parameters for report generation in the operational assessment platform, for example stored as client data 146. In another example, the benchmark classification scheme may be a system default (e.g., the benchmark classifications 158). The benchmark classifications (e.g., client-specific classifications, report-specific classifications, or default benchmark classifications 158), for example, may be accessed by the benchmark analysis engine 124 of
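One possible way to resolve which classification scheme applies is sketched below in Python; the precedence order (client-specific, then report-specific, then the system default) and the dictionary shapes are illustrative assumptions rather than a description of the platform.

```python
# Hypothetical resolution of the benchmark classification scheme; the fallback
# order and data shapes are assumptions made for illustration only.
def resolve_classifications(client_id, report_type, client_schemes, report_schemes, default_scheme):
    if client_id in client_schemes:        # client-designated parameters take precedence
        return client_schemes[client_id]
    if report_type in report_schemes:      # otherwise use a report-type-specific scheme
        return report_schemes[report_type]
    return default_scheme                  # otherwise fall back to the system default

default_scheme = {"labels": ["first quartile", "second quartile", "third quartile", "fourth quartile"]}
print(resolve_classifications("client-7", "manager report", {}, {}, default_scheme))
```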
In other embodiments, the benchmark classifications are designated along with a report request. For example, upon submission of a request for a report, a user (e.g., client 104, regulator/auditor 114, etc.) may designate a particular benchmark classification scheme to use in the report.
In some implementations, risk data generated from answers provided by the group of investment managers for a firm management survey are retrieved (904). The risk data may represent a portion of firm management risk aspects or all firm management risk aspects, depending upon the desired output from the method 900. In retrieving the risk data, in some embodiments, the most recent risk data from multiple sets of firm data is retrieved for each investment manager of the group of investment managers. For example, the risk data 148 may be retrieved by the benchmark analysis engine 124 from the data repository 112 of
The risk data for a particular investment manager of the group, in certain embodiments, may be excluded based upon a time stamp associated with the particular manager's risk data. For example, if the particular manager has not completed a firm management survey, at least in part, within a threshold amount of time (e.g., one year, two years, etc.), any risk data retained in relation to the manager may be left out of the analysis performed by the method 900 as being stale.
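A minimal Python sketch of such a staleness filter follows; the one-year cutoff and record layout are assumptions for illustration only.

```python
# Hypothetical staleness filter; the cutoff and record layout are illustrative.
from datetime import datetime, timedelta

def filter_fresh(risk_records, max_age=timedelta(days=365), now=None):
    """Keep only risk data whose timestamp falls within the threshold period."""
    now = now or datetime.utcnow()
    return [r for r in risk_records if now - r["timestamp"] <= max_age]

records = [
    {"manager": "A", "timestamp": datetime(2021, 3, 1)},
    {"manager": "B", "timestamp": datetime(2018, 6, 1)},   # stale, excluded from the analysis
]
print([r["manager"] for r in filter_fresh(records, now=datetime(2021, 9, 1))])   # ['A']
```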
For each risk factor of the risk data, in some implementations, a propensity within the group of investment managers for exhibiting an exception to best practice corresponding to the risk factor is calculated (906). As explained above, each risk factor corresponds to one or more questions presented to the managers of the group in a standardized questionnaire regarding the particular risk factor. Each risk factor may be categorized under a risk aspect (e.g., firm management aspect or category). In illustration, as shown in
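In one possible formulation, sketched below in Python under hypothetical data shapes, the propensity for a risk factor is the share of managers in the group whose risk data shows an exception to best practice for that factor, with “no data available” entries excluded from the denominator.

```python
# Hypothetical propensity calculation; data shapes and category labels are
# illustrative assumptions, not the platform's stored format.
from collections import defaultdict

def exception_propensities(risk_data_by_manager):
    """risk_data_by_manager: {manager: {risk_factor: category}} -> {risk_factor: propensity}."""
    exceptions, totals = defaultdict(int), defaultdict(int)
    for factor_map in risk_data_by_manager.values():
        for factor, category in factor_map.items():
            if category == "no data available":
                continue                                # unanswered factors do not count toward the denominator
            totals[factor] += 1
            if category == "exception to best practice":
                exceptions[factor] += 1
    return {factor: exceptions[factor] / totals[factor] for factor in totals}

group = {
    "mgr1": {"board_oversight": "best practice", "cyber_testing": "exception to best practice"},
    "mgr2": {"board_oversight": "exception to best practice", "cyber_testing": "best practice"},
    "mgr3": {"board_oversight": "best practice", "cyber_testing": "no data available"},
}
print(exception_propensities(group))
# {'board_oversight': 0.3333333333333333, 'cyber_testing': 0.5}
```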
In some implementations, benchmark metrics regarding performance of the group of investment managers in meeting best practices are calculated using the propensities (908). The benchmark analysis engine 124 of
The benchmark metrics, in some embodiments, include aggregation metrics combining all firm management risk factors within the population group. In illustration,
The benchmark metrics, in some embodiments, include aggregation metrics combining all firm management risk factors within each firm management risk aspect for the population group. For example,
In some embodiments, the benchmark metrics include aggregation metrics combining all firm management risk factors for each individual manager. For example,
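The three aggregation levels described above (population-wide, per risk aspect, and per manager) might be computed as in the following Python sketch; the “best practice rate” metric and the data shapes are assumptions chosen for illustration rather than the platform's defined metrics.

```python
# Hypothetical aggregation of risk factor categories at three levels; the
# metric (share of factors meeting best practice) is an illustrative choice.
def best_practice_rate(categories):
    scored = [c for c in categories if c != "no data available"]
    return sum(c == "best practice" for c in scored) / len(scored) if scored else None

def aggregate(risk_data_by_manager, aspect_of_factor):
    all_categories, by_aspect, by_manager = [], {}, {}
    for manager, factor_map in risk_data_by_manager.items():
        by_manager[manager] = best_practice_rate(factor_map.values())               # per manager
        for factor, category in factor_map.items():
            all_categories.append(category)
            by_aspect.setdefault(aspect_of_factor[factor], []).append(category)
    return (
        best_practice_rate(all_categories),                                          # population-wide
        {aspect: best_practice_rate(cats) for aspect, cats in by_aspect.items()},    # per risk aspect
        by_manager,
    )

group = {
    "mgr1": {"board_oversight": "best practice", "cyber_testing": "exception to best practice"},
    "mgr2": {"board_oversight": "best practice", "cyber_testing": "best practice"},
}
aspects = {"board_oversight": "fund governance", "cyber_testing": "technology and cyber security"}
print(aggregate(group, aspects))
# (0.75, {'fund governance': 1.0, 'technology and cyber security': 0.5}, {'mgr1': 0.5, 'mgr2': 1.0})
```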
In some implementations, each benchmark metric is augmented according to the benchmark classifications (910). The benchmark classifications, for example, may include visual augmentation identifiers for augmenting the benchmark metrics. In an example involving quartile graphic illustrations, as illustrated in
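As a non-limiting illustration of quartile-based augmentation, the following Python sketch assigns a benchmark metric a quartile label relative to the population's distribution; the boundary convention and label names are assumptions.

```python
# Hypothetical quartile augmentation; which end is labeled "first quartile"
# and the boundary convention are illustrative assumptions.
def quartile_label(metric, population_metrics):
    ranked = sorted(population_metrics)
    position = sum(m <= metric for m in ranked) / len(ranked)   # percentile rank within the group
    if position <= 0.25:
        return "fourth quartile"
    if position <= 0.50:
        return "third quartile"
    if position <= 0.75:
        return "second quartile"
    return "first quartile"

population = [0.55, 0.62, 0.70, 0.81, 0.88, 0.93]
print(quartile_label(0.88, population))   # 'first quartile'
```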
In some implementations, for each population group of investment managers identified, steps 904, 906, 908, and 910 are repeated (912). The population groups as illustrated in
In some implementations, a report is generated presenting the classified benchmark metrics for review by a user (914). Example excerpts from a firm management report are illustrated and described in relation to
Although the method 900 is illustrated in
Similar to the method 900 illustrated in
In some implementations, the method 950 begins with identifying benchmark classifications for classifying propensity for answers within the group of investment managers (952). Benchmark classifications are discussed in detail above in relation to step 902 of
In some implementations, risk data generated from answers provided by the group of investment managers for a first strategy management survey are retrieved (954). The risk data may represent a portion of strategy management risk aspects or all strategy management risk aspects associated with the first strategy, depending upon the desired output from the method 950. In retrieving the risk data, in some embodiments, the most recent risk data from multiple sets of strategy data is retrieved for each investment manager of the group of investment managers. For example, the risk data 148 may be retrieved by the benchmark analysis engine 124 from the data repository 112 of
The risk data for a particular investment manager of the group, in certain embodiments, may be excluded based upon a time stamp associated with the particular manager's risk data. For example, if the particular manager has not completed a strategy management survey related to the first strategy, at least in part, within a threshold amount of time (e.g., one year, two years, etc.), any risk data retained in relation to the manager may be left out of the analysis performed by the method 950 as being stale.
For each risk factor of the risk data, in some implementations, a propensity within the group of investment managers for exhibiting an exception to best practice corresponding to the risk factor is calculated (956). As explained above, each risk factor corresponds to one or more questions presented to the managers of the group in a standardized questionnaire regarding the particular risk factor. Each risk factor may be categorized under a risk aspect (e.g., strategy management aspect or category). In illustration, as shown in
In some implementations, benchmark metrics regarding performance of the group of investment managers in meeting best practices are calculated using the propensities (958). The benchmark analysis engine 124 of
The benchmark metrics, in some embodiments, include aggregation metrics combining all strategy management risk factors of the first strategy within the population group. For example,
The benchmark metrics, in some embodiments, include aggregation metrics combining all strategy management risk factors within each strategy management risk aspect for the population group. For example,
In some embodiments, the benchmark metrics include aggregation metrics combining all strategy management risk factors for each individual manager in the population group.
In some implementations, each benchmark metric is augmented according to the benchmark classifications (960). The augmentation, for example, can be accomplished as described in relation to step 910 of
In some implementations, if analysis of an additional strategy is desired (962), the risk data generated from answers provided by the group of investment managers for a next strategy management survey is retrieved (964). Note that different managers will supply different strategies, either in general or in reference to the reviewed portfolio in the circumstance of a portfolio review. Thus, each time the steps 956, 958, and 960 are repeated, a different sub-population of an overall target population (e.g., managers within the universe, managers within the portfolio, etc.) may be analyzed together.
In some implementations, after all desired strategies have been analyzed (962), if multiple strategies were analyzed (966), benchmark metrics regarding group performance in meeting best practices across all analyzed strategies are calculated (968). In illustration,
In some implementations, for each population group of investment managers identified, steps 954, 956, 958, 960, 962, 964, 966, and 968 are repeated (970). The population groups as illustrated in
In some implementations, a report is generated presenting the classified benchmark metrics for review by a user (972). Example excerpts from a strategy management report are illustrated and described in relation to
Although the method 950 is illustrated in
In some implementations, the process 1000 begins with a portfolio report generation engine 1002 receiving a client identifier 1024 that identifies a client having an investment vehicle portfolio. The client identifier 1024, in some examples, may identify a particular client 104 or portfolio of the portfolio data 138 of
Responsive to receipt of the client identifier, in some implementations, the portfolio report generation engine 1002 retrieves portfolio data 1006 related to the client's portfolio, for example from a storage medium. In one example, the portfolio data may be portfolio data 138 retrieved by the portfolio report generation engine 132 of
In some implementations, the portfolio report generation engine 1002 retrieves manager data 1008 related to one or more managers included in the client's investment vehicle portfolio, for example from a same or different storage medium. In illustration, the portfolio data 1006 for a set of portfolios and the manager data 1008 for a population of managers may be maintained in a database, and the client identifier 1024 may be used as a key to access portions of the database. The manager data 1008, for example, may be the manager data 142 of
In some implementations, the portfolio report generation engine 1002 extracts, from the manager data 1008 and the portfolio data 1006, a set of investment vehicle strategies 1012 included in the client's portfolio as well as a set of manager identifiers 1014 included in the client's portfolio. Each portfolio strategy 1012 may be provided by one or more of the managers 1014 such that managers 1014 and strategies 1012 are likely to have instances of one-to-many correlations.
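A minimal Python sketch of this extraction follows, assuming a hypothetical holding-record layout; it also builds the one-to-many mapping from each manager to the strategies that manager provides within the portfolio.

```python
# Hypothetical extraction of manager and strategy identifiers from portfolio
# holdings; the holding record layout is an illustrative assumption.
def extract_managers_and_strategies(holdings):
    managers, strategies, strategies_by_manager = set(), set(), {}
    for holding in holdings:
        managers.add(holding["manager_id"])
        strategies.add(holding["strategy_id"])
        strategies_by_manager.setdefault(holding["manager_id"], set()).add(holding["strategy_id"])
    return managers, strategies, strategies_by_manager

holdings = [
    {"manager_id": "M1", "strategy_id": "long-short equity"},
    {"manager_id": "M1", "strategy_id": "global macro"},
    {"manager_id": "M2", "strategy_id": "global macro"},
]
print(extract_managers_and_strategies(holdings)[2])
# {'M1': {'long-short equity', 'global macro'}, 'M2': {'global macro'}} (set ordering may vary)
```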
In some implementations, the portfolio report generation engine 1002 provides an indication of report type(s) 1026 as well as the investment vehicle strategies 1012 and the manager identifiers 1014 to a manager report generation engine 1022. The report type(s), in some implementations, include both a firm management report and a strategy management report. In addition, portions of each of the firm management report and the strategy management report may be identified. For example, the client may wish to review the managers 1014 at the granularity of cybersecurity handling within the firm management risk categories. Further, the report type(s) may indicate an end audience (e.g., a client in the circumstance of a portfolio report). If the strategy management report is not selected within the report type(s), the portfolio strategies 1012 may still be useful in identifying appropriate peers to the various managers 1014. In other embodiments, if only a firm management report is desired, the portfolio report generation engine 1002 may not provide the portfolio strategies 1012.
The manager report generation engine 1022, in some implementations, automatically generates report data 1028, including risk factor metrics and population benchmark metrics 1020, related to each of the managers 1014 of the portfolio. The manager report generation engine 1022 may supply manager identifiers 1014a-x and strategy identifiers 1012a-x covering each of the N managers 1014 and M strategies 1012 provided by the portfolio report generation engine 1002 to a benchmark analysis engine 1004 for metrics generation.
In some implementations, the benchmark analysis engine 1004 obtains risk data 1016 regarding risk factors identified through analysis of survey data supplied by the managers 1014. The risk data 1016, for example, may be obtained from a data repository 1010 such as the data store 112 of
In some implementations, the benchmark analysis engine 1004 also obtains benchmark classifications 1018, such as the benchmark classifications 158 of
In some implementations, the benchmark analysis engine 1004 applies the benchmark classifications 1018 and the risk data 1016 to generate benchmark metrics and risk factor propensities 1020. The benchmark analysis engine 1004, for example, may perform the operations described in the method 900 of
In some implementations, the benchmark analysis engine 1004 stores the benchmark metrics and risk factor propensities 1020 in the data repository 1010. The benchmark metrics and risk factor propensities, for example, may be generated by the benchmark analysis engine 124 of
In some implementations, the benchmark analysis engine 1004 provides the benchmark metrics and risk factor propensities 1020 to the manager report generation engine 1022. Conversely, the manager report generation engine 1022 may access the benchmark metrics and risk factor propensities 1020 from the data repository 1010 (e.g., upon receiving a signal from the benchmark analysis engine that it has completed processing the portfolio strategies 1012 and manager identifiers 1014).
In some implementations, the manager report generation engine 1022 generates report data 1028 using the benchmark metrics and risk factor propensities 1020. The manager report generation engine 1022 may append the benchmark metrics and risk factor propensities 1020 with additional information, such as information regarding the managers 1014 (e.g., demographics, characteristics, etc.), information regarding the risk aspects, and/or information regarding the risk factors. The manager report generation engine 1022 may retrieve this information from the data repository 1010 and/or the manager data 1008 (which may be included in the data repository 1010 in some embodiments).
The manager report generation engine 1022, in some implementations, generates graphic content representing various benchmark metrics and risk factor propensities 1020. For example, turning to
In some implementations, the manager report generation engine 1022 combines the benchmark metrics and risk factor propensities 1020 with additional information, such as a title of the corresponding risk factor or a brief explanation of the best practice associated with the risk factor. For example, turning to
The manager report generation engine 1022, in some implementations, analyzes the benchmark metrics and risk factor propensities 1020 to rank managers according to behaviors. For example, turning to
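One straightforward ranking, sketched below in Python, orders the managers by their count of best-practice exceptions; the data shapes are hypothetical, and other ranking behaviors could be substituted.

```python
# Hypothetical ranking of managers by exception count; fewest exceptions first.
def rank_managers_by_exceptions(risk_data_by_manager):
    counts = {
        manager: sum(category == "exception to best practice" for category in factor_map.values())
        for manager, factor_map in risk_data_by_manager.items()
    }
    return sorted(counts.items(), key=lambda item: item[1])

print(rank_managers_by_exceptions({
    "mgr1": {"f1": "exception to best practice", "f2": "best practice"},
    "mgr2": {"f1": "best practice", "f2": "best practice"},
}))
# [('mgr2', 0), ('mgr1', 1)]
```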
In some implementations, the manager report generation engine 1022 provides the report data 1028 to the portfolio report generation engine 1002. In other embodiments, the manager report generation engine 1022 may store report data in the data repository 1010 or provide the report data 1028 to another engine for further processing. For example, turning to
The portfolio report generation engine 1002, in some implementations, accesses the manager report data 1028 and generates portfolio report data 1030. The portfolio report generation engine 1002 may augment the manager report data 1028 with additional information, such as information regarding the client (e.g., demographics, characteristics, etc.) and/or the client's portfolio. The portfolio report generation engine 1002 may retrieve this information from the data repository 1010 or the portfolio data 1006 (which may be included in the data repository 1010 in some embodiments).
The portfolio report generation engine 1002, in some implementations, generates graphic content representing various benchmark metrics and risk factor propensities 1020. For example, turning to
In some implementations, the portfolio report generation engine 1002 combines the benchmark metrics and risk factor propensities 1020 with additional information, such as a title of the corresponding risk factor or a brief explanation of the best practice associated with the risk factor. For example, turning to
The portfolio report generation engine 1002, in some implementations, analyzes the benchmark metrics and risk factor propensities 1020 to rank risk factors, investment strategies, and/or manager-strategies. For example, as illustrated in
Although described in relation to a particular sequence of operations (illustrated as A through I), in other implementations, more or fewer operations may be included, as well as more or fewer engines, data sources, and/or outputs. For example, in other embodiments, the portfolio report generation engine 1002 repeatedly issues requests to the manager report generation engine 1022, once for each manager 1014 or combination of manager-strategy (e.g., 1014 and 1012). In this manner, the portfolio report generation engine 1002 can obtain statistical information regarding each individual manager and/or manager-strategy. In other embodiments, the portfolio report generation engine 1002 may submit a single request to the manager report generation engine 1022 involving all managers 1014 and portfolio strategies 1012. The outcome of the request to the manager report generation engine 1022 may differ depending upon the scope of report generated by the manager report generation engine 1022. For example, if the benchmark metrics and risk factor propensities 1020 are only generated in view of a particular manager 1014 or manager-strategy 1014, 1012, additional benchmark metrics may need to be generated by the portfolio report generation engine 1002 (e.g., by directly issuing one or more requests to the benchmark analysis engine 1004).
Additionally, in other implementations, portions of the process 1000 may be performed in a different order or one or more of the steps may be performed in parallel. Other modifications to the process 1000 are possible while remaining in the scope and spirit of the disclosure.
In some implementations, the process 1030 begins with the evaluator commentary engine 1032 receiving portfolio report data 1030 and/or manager report data 1028. The portfolio report generation engine 1002 and/or the manager report generation engine 1022, for example, may leave hooks in the respective generated report data 1030, 1028 for inclusion of customized comments added manually by an evaluator 1036. The evaluator commentary engine 1032, for example, may be the evaluator commentary engine 128 of
In some implementations, the evaluator commentary engine 1032 presents, in an interactive display, evaluation information, including portions of the portfolio report data 1030 and/or the manager report data 1028, to an evaluator at a computing device 1048. The evaluator may review report information provided by the evaluator commentary engine 1032 and submit manual additions to the automatically generated report for review by an end recipient of the report.
In response to presenting the evaluation information, in some implementations, the evaluator commentary engine receives user interactions 1036 from the evaluator at the computing device 1048. The user interactions 1036, for example, may include selections of some of the manager comments provided in the survey responses from the managers (e.g., the data entry fields provided along with standardized answers as discussed in relation to the survey presentation engine 120 of
The evaluator commentary engine, in some implementations, repeatedly supplies additional evaluation information 1038 and receives additional user interactions 1036 until the evaluator has completed evaluating all of the relevant portfolio report data 1030 and/or manager report data 1028. The evaluator, for example, may indicate an approval or final submission of entries captured in the user interactions 1036. Although described as a routine involving a single evaluator, in some embodiments, multiple evaluators may review portfolio report data 1030 and/or manager report data 1028 via the evaluator commentary engine 1032 and provide manually added information.
In some implementations, the evaluator commentary engine combines the finalized user interactions 1036 into evaluation data 1040 for incorporation into a finalized report. The evaluation data 1040 may be stored in a data repository 1010 (e.g., as evaluation data 156 of
In some implementations, the portfolio report generation engine 1002 obtains the evaluation data 1040 and the portfolio report data 1030 and combines the information into finalized report data 1042. The portfolio report generation engine 1002, for example, may format the evaluation data 1040 to seamlessly incorporate it into the automated information in the report data 1042, ready for presentation to an end recipient.
A report presentation engine 1034 such as the portal report presentation engine 118 of
In some implementations, the recipient submits user interactions 1046, for example, to browse between screen shots and to drill deeper into report information provided by the report presentation engine 1034.
In some implementations, the method begins with identifying a manager population and a time period for review (1102). The manager population, in some examples, may include the “universe” of managers, managers providing one or more particular strategies, or managers sharing certain characteristics (e.g., geography, size, maturity, etc.). In another example, a particular manager may be identified, for example to confirm that the manager demonstrates application of a greater number of best practices over time. The manager population may be submitted by a requesting user.
In some implementations, if a portion of the risk factors is desired (1104), risk factor data and/or metrics are retrieved for the desired risk factors (1106a). For example, certain firm management risk aspects, certain strategies, or certain strategy risk aspects may be identified. In other implementations, risk factor data and/or metrics for all risk factors, covering multiple reviews of the population over the time period, are retrieved (1106b). The benchmark metrics may cover multiple reviews of each manager within the population over the time period.
For each risk factor, in some implementations, a change in corresponding benchmark metric within the manager population over the time period is calculated as a respective benchmark trend metric (1108). The changes can include both increases and decreases in application of best practices. The trend metrics, for example, may be the trend metrics 150 of
In some implementations, a subset of metrics exhibiting change exceeding a threshold over the time period is identified (1110). For example, adoption of certain best practices within a population of managers may be tracked through reviewing trends within multiple survey requests over time to identify clear demonstration of a trend toward (or away from) adoption of each best practice. To positively identify movement as a trend, the threshold may be set to, in some examples, at least 10%, over 20%, or between 20% and 30%.
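A minimal Python sketch of this trend identification follows, using a hypothetical 20% threshold (one of the example values above) and assumed data shapes.

```python
# Hypothetical trend detection; the threshold and data shapes are illustrative.
def trending_factors(earliest, latest, threshold=0.20):
    """earliest/latest: {risk_factor: best-practice adoption rate at that review}."""
    changes = {f: latest[f] - earliest[f] for f in earliest if f in latest}
    return {f: change for f, change in changes.items() if abs(change) >= threshold}

earliest = {"independent_valuation": 0.40, "cyber_testing": 0.70}
latest = {"independent_valuation": 0.72, "cyber_testing": 0.75}
print(trending_factors(earliest, latest))   # flags 'independent_valuation' (change of roughly +0.32)
```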
The method 1100 may be repeated (1112) for each population group of managers identified (1102). Once all population groups have been reviewed, in some implementations, a report is generated presenting the subset(s) of benchmark trend metrics for review by a user (1114), such as the requester. The report may be in document form or in online interactive form, as discussed in relation to the portfolio and manager reports above.
Although the method 1100 is illustrated as having a particular flow of operations, in other implementations, more or fewer steps may exist. The steps of the method 1100 may depend in part upon the end audience for the information. For example, rather than generating a report, the trend metrics may be supplied directly to a financial services organization for combining with the organization's internal data. The financial services organization engine 134 of
Next, a hardware description of the computing device, mobile computing device, or server according to exemplary embodiments is described with reference to
Further, a portion of the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1200 and an operating system such as Microsoft Windows, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
CPU 1200 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 1200 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1200 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
The computing device, mobile computing device, or server in
The computing device, mobile computing device, or server further includes a display controller 1208, such as an NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America for interfacing with display 1210, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface 1212 interfaces with a keyboard and/or mouse 1214 as well as a touch screen panel 1216 on or separate from display 1210. The general purpose I/O interface 1212 also connects to a variety of peripherals 1218 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard. The display controller 1208 and display 1210 may enable presentation of the user interfaces illustrated, in some examples, in
A sound controller 1220 is also provided in the computing device, mobile computing device, or server, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1222 thereby providing sounds and/or music.
The general purpose storage controller 1224 connects the storage medium disk 1204 with communication bus 1226, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the computing device, mobile computing device, or server. A description of the general features and functionality of the display 1210, keyboard and/or mouse 1214, as well as the display controller 1208, storage controller 1224, network controller 1206, sound controller 1220, and general purpose I/O interface 1212 is omitted herein for brevity as these features are known.
One or more processors can be utilized to implement various functions and/or algorithms described herein, unless explicitly stated otherwise. Additionally, any functions and/or algorithms described herein, unless explicitly stated otherwise, can be performed upon one or more virtual processors, for example on one or more physical computing systems such as a computer farm or a cloud drive.
Reference has been made to flowchart illustrations and block diagrams of methods, systems and computer program products according to implementations of this disclosure. Aspects thereof are implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Moreover, the present disclosure is not limited to the specific circuit elements described herein, nor is the present disclosure limited to the specific sizing and classification of these elements. For example, the skilled artisan will appreciate that the circuitry described herein may be adapted based on changes on battery sizing and chemistry or based on the requirements of the intended back-up load to be powered.
The functions and features described herein may also be executed by various distributed components of a system. For example, one or more processors may execute these system functions, wherein the processors are distributed across multiple components communicating in a network. The distributed components may include one or more client and server machines, which may share processing, as shown on
In some implementations, the systems described herein may interface with a cloud computing environment 1330, such as Google Cloud Platform™, to perform at least portions of methods or algorithms detailed above. The processes associated with the methods described herein can be executed on a computation processor, such as the Google Compute Engine, by data center 1334. The data center 1334, for example, can also include an application processor, such as the Google App Engine, that can be used as the interface with the systems described herein to receive data and output corresponding information. The cloud computing environment 1330 may also include one or more databases 1338 or other data storage, such as cloud storage and a query database. In some implementations, the cloud storage database 1338, such as the Google Cloud Storage, may store processed and unprocessed data supplied by systems described herein. For example, the portfolio data 138, population data 140, manager data 142, survey data 144, client data 146, risk data 148, trend metrics 150, rules data 152, risk metrics 154, evaluation data 156, benchmark classifications 158, and/or evaluator data 160 of the operational assessment platform 102 of
The systems described herein may communicate with the cloud computing environment 1330 through a secure gateway 1332. In some implementations, the secure gateway 1332 includes a database querying interface, such as the Google BigQuery platform. The data querying interface, for example, may support access by the operational assessment platform to data stored on the data repository 112 or to data maintained by any one of the clients 104, evaluators 108, financial services organizations 110, regulators/auditors 114, or managers 106.
The cloud computing environment 1330 may include a provisioning tool 1340 for resource management. The provisioning tool 1340 may be connected to the computing devices of a data center 1334 to facilitate the provision of computing resources of the data center 1334. The provisioning tool 1340 may receive a request for a computing resource via the secure gateway 1332 or a cloud controller 1336. The provisioning tool 1340 may facilitate a connection to a particular computing device of the data center 1334.
A network 1302 represents one or more networks, such as the Internet, connecting the cloud environment 1330 to a number of client devices such as, in some examples, a cellular telephone 1310, a tablet computer 1312, a mobile computing device 1314, and a desktop computing device 1316. The network 1302 can also communicate via wireless networks using a variety of mobile network services 1320 such as Wi-Fi, Bluetooth, cellular networks including EDGE, 3G, 4G, and 5G wireless cellular systems, or any other wireless form of communication that is known. In some examples, the wireless network services 1320 may include central processors 1322, servers 1324, and databases 1326. In some embodiments, the network 1302 is agnostic to local interfaces and networks associated with the client devices to allow for integration of the local interfaces and networks configured to perform the processes described herein. Additionally, external devices such as the cellular telephone 1310, tablet computer 1312, and mobile computing device 1314 may communicate with the mobile network services 1320 via a base station 1356, access point 1354, and/or satellite 1352.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the present disclosures. Indeed, the novel methods, apparatuses and systems described herein can be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods, apparatuses and systems described herein can be made without departing from the spirit of the present disclosures. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the present disclosures.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/905,605, entitled “Systems and Methods for Automating Operational Due Diligence Analysis to Objectively Quantify Risk Factors,” filed Sep. 25, 2019, and to U.S. Provisional Patent Application Ser. No. 62/923,686, entitled “Systems and Methods for Automating Operational Due Diligence Analysis to Objectively Quantify Risk Factors,” filed Oct. 21, 2019. All above identified applications are hereby incorporated by reference in their entireties.
Number | Date | Country
62/905,605 | Sep. 25, 2019 | US
62/923,686 | Oct. 21, 2019 | US