Predictive analysis of availability of systems and/or system components

Information

  • Patent Application
  • Publication Number: 20060095247
  • Date Filed: December 15, 2005
  • Date Published: May 04, 2006
Abstract
A method of modeling a mission system. The system is represented as a plurality of architectural components. At least some of the architectural components are configured with availability characteristics to obtain a model of the system. The model is implemented to assess availability in the mission system. The model may be implemented to perform tradeoff decisions for each individual component and interrelated components. Availability can be assessed for the system, given all of the tradeoffs.
Description
FIELD

The present disclosure relates generally to the modeling of systems and more particularly (but not exclusively) to modeling and analysis of availability in systems.


BACKGROUND

Mission systems typically include hardware components (e.g., computers, network components, sensors, storage and communications components) and numerous embedded software components. Historically, availability prediction for large mission systems has been essentially an educated mix of (a) hardware failure predictions based on well-understood hardware failure rates and (b) software failure predictions based on empirical, historical or “gut feel” data that generally has little or no solid analytical foundation or basis. Accordingly, in availability predictions typical for large-scale mission systems, heavy weighting frequently has been placed upon the more facts-based and better-understood hardware failure predictions while less weighting has been placed on the more speculative software failure predictions. In many cases, mission availability predictions have consisted solely of hardware availability predictions. Hardware, however, is becoming more stable over time, while requirements and expectations for software are becoming more complex.


SUMMARY

The present disclosure, in one aspect, is directed to a method of modeling a mission system. The system is represented as a plurality of architectural components. At least some of the architectural components are configured with availability characteristics to obtain a model of the system. The model is implemented to assess availability in the mission system.


Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.




DRAWINGS

The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.



FIG. 1 is a block diagram of a system that can be modeled according to some implementations of the disclosure;



FIG. 2 is a block diagram of a system architecture representation according to some implementations of the disclosure;



FIG. 3 is a block diagram illustrating how a server node may be modeled as a component of generic structure according to some implementations of the disclosure;



FIG. 4 is a block diagram illustrating how a server process may be modeled as a component of generic structure according to some implementations of the disclosure;



FIG. 5 is a block diagram illustrating how one or more local area networks (LANs) and/or system buses may be modeled as a component of generic structure according to some implementations of the disclosure;



FIG. 6 is a conceptual diagram of dynamic modeling according to some implementations of the disclosure;



FIG. 7 is a block diagram illustrating how various availability characteristics may be modeled for a system according to some implementations of the disclosure;



FIG. 8 is a block diagram of a runtime configuration of a system architecture representation according to some implementations of the disclosure;



FIG. 9 is a diagram of spreadsheet inputs according to some implementations of the disclosure;



FIG. 10A is a diagram of an availability analysis component representing a hardware component according to some implementations of the disclosure;



FIG. 10B is a diagram of a subcomponent of the component shown in FIG. 10A;



FIG. 11 is a diagram of availability analysis subcomponents of a software architectural component according to some implementations of the disclosure;



FIG. 12 is a diagram of system management control components according to some implementations of the disclosure;



FIG. 13 is a diagram of system management status components according to some implementations of the disclosure;



FIG. 14 is a diagram of redundancy components according to some implementations of the disclosure; and



FIG. 15 is a diagram of cascade components according to some implementations of the disclosure.




DETAILED DESCRIPTION

The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.


In various implementations of a method of modeling a mission system, a plurality of components of generic structure (COGSs) are used to represent the mission system as a plurality of architectural components. The architectural components may include, for example, networks, switches, computer nodes, backplanes, busses, antennas, software components residing in any of the foregoing components, satellite transponders, and/or receivers/transmitters. The COGSs are configured with availability characteristics to obtain a model of the system. The model may be implemented to assess availability in the mission system. The model may be used to analyze reliability in the mission system as well as reliability of hardware, network and/or software components of the system.


Component-based modeling environments in accordance with the present disclosure can provide for modeling of system hardware and software architecture to generate predictive analysis data. An exemplary system architecture that can be modeled in accordance with one implementation is indicated generally in FIG. 1 by reference number 20. The system 20 may be modeled using COGSs in combination with a COTS tool to allow modeling of specific attributes of the system.


A build view of a system architecture in accordance with one implementation is indicated generally in FIG. 2 by reference number 50. The architecture 50 includes a plurality of architectural components 54, a plurality of performance components 60, and a plurality of availability components 68. The availability components 68 may be used to generate one or more reports 72. One or more libraries 76 may include configuration research libraries that allow individual component research to be input to modeling. Libraries 76 also may optionally be used to provide platform-routable components for modeling system controls, overrides, queues, reports, performance, discrete events, and availability. Such components can be routed and controlled using a platform routing spreadsheet as further described below.


The architectural components 54 include components of generic structure (COGSs). COGSs are described in co-pending U.S. patent application Ser. No. 11/124,947, entitled “Integrated System-Of-Systems Modeling Environment and Related Methods”, filed May 9, 2005, the disclosure of which is incorporated herein by reference. As described in the foregoing application, reusable, configurable COGSs may be combined with a commercial off-the-shelf (COTS) tool such as Extend™ to model architecture performance.


The COGSs 54 are used to represent generic system elements. Thus a COGS 54 may represent, for example, a resource (e.g., a CPU, LAN, server, HMI or storage device), a resource scheduler, a subsystem process, a transport component such as a bus or network, or an I/O channel sensor or other I/O device. It should be noted that the foregoing system elements are exemplary only, and other or additional hardware, software and/or network components and/or subcomponents could be represented using COGSs.


The performance components 60 include library components that may be used to configure the COGSs 54 for performance of predictive performance analysis as described in U.S. application Ser. No. 11/124,947. The availability components 68 include library components that may be used to configure the COGSs 54 for performance of predictive availability analysis as further described below. In the present exemplary configuration, inputs to COGSs 54 include spreadsheet inputs to Extend™ which, for example, can be modified at modeling runtime. The COGSs 54 are library components that may be programmed to associate with appropriate spreadsheets based on row number. Exemplary spreadsheet inputs are shown in Table 1. The inputs shown in Table 1 may be used, for example, in performing predictive performance analysis as described in U.S. application Ser. No. 11/124,947. The spreadsheets in Table 1 also may include additional fields and/or uses not described in Table 1. Other or additional spreadsheet inputs also could be used in performing predictive performance analysis. In implementations in which another COTS tool is used, inputs to the COGSs 54 may be in a form different from the present exemplary Extend™ input spreadsheets.

TABLE 1

PlatformRouting
Use: Allows routing of most messages to change without changing the model, only this spreadsheet. Associates worksheets with process and IOSensor blocks.
Used in: Primarily used in processes and I/O-Sensors.

resourceCap
Use: Includes list of resources, with capacities. E.g., includes MIPS strength of a CPU on a node, or kb/sec capacity of a LAN, SAN or NAS. Field list includes resource name, number of resources, capacity units and comments. Process programming is facilitated where an exceedRow is the same as a resource node column.
Used in: Hardware models, e.g., server nodes, HMI nodes, disks and transport LAN strength.

processDepl
Use: Includes the mapping of processes to resources. E.g., a track-ident process can be mapped onto a server node. Field list includes process name, node it is deployed on, process number, ms between cycles and comments.
Used in: Process models and transport.

processNeeds
Use: Includes an amount of resource that a process/thread needs to complete its task. This can be done on a per record basis, or a per task basis. It can be applied to CPU, LAN or storage resources. E.g., it can state that a track-id process needs 0.01 MIPS per report to perform id. Field list includes process/thread name, MIPS needs, on/off, ms between cycles and comments.
Used in: Process models.

msgTypes
Use: Used to describe additional processing and routing to be done by processes upon receipt of messages.
Used in: Process models.
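
For readers who prefer a concrete picture, the following minimal Python sketch shows one way the Table 1 spreadsheet rows might be represented and looked up by row number when configuring a COGS. The dataclass names, field choices and sample values are illustrative assumptions, not the patent's actual schema or the Extend™ interface.

```python
# Illustrative sketch only: plain-Python stand-ins for the Table 1 input
# spreadsheets.  Row and field names are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class ResourceCapRow:          # one row of the resourceCap spreadsheet
    resource_name: str
    number_of_resources: int
    capacity: float            # e.g. MIPS for a CPU, kb/sec for a LAN
    capacity_units: str
    comments: str = ""

@dataclass
class ProcessDeplRow:          # one row of the processDepl spreadsheet
    process_name: str
    deployed_on: str           # node the process is mapped onto
    process_number: int
    ms_between_cycles: float
    comments: str = ""

@dataclass
class ProcessNeedsRow:         # one row of the processNeeds spreadsheet
    process_name: str
    mips_needs: float          # e.g. 0.01 MIPS per report
    on: bool
    ms_between_cycles: float
    comments: str = ""

# A COGS is "programmed to associate with appropriate spreadsheets based on
# row number", so a model component only needs its row index to find its data.
resource_cap = [
    ResourceCapRow("serverNode1.CPU", 4, 1200.0, "MIPS"),
    ResourceCapRow("LAN_A", 1, 12500.0, "kb/sec"),
]
process_depl = [ProcessDeplRow("track-ident", "serverNode1", 0, 100.0)]
process_needs = [ProcessNeedsRow("track-ident", 0.01, True, 100.0)]

def configure_cogs(row_number: int) -> dict:
    """Gather the spreadsheet rows a process COGS would read at model runtime."""
    deployment = process_depl[row_number]
    return {
        "deployment": deployment,
        "needs": process_needs[row_number],
        "host_capacity": next(r for r in resource_cap
                              if r.resource_name.startswith(deployment.deployed_on)),
    }

print(configure_cogs(0))
```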


Several exemplary COGSs 54 shall now be described with reference to various aspects of predictive performance analysis as described in U.S. application Ser. No. 11/124,947. A block diagram illustrating how an exemplary server node may be modeled as a COGS is indicated generally in FIG. 3 by reference number 100. In the present example, a server node COGS 112 is modeled as having a plurality of CPUs 114, a memory 116, a local disk 118, a system bus 120, a SAN card 122, a plurality of network interface cards (NICs) 124 and a cache 126. The CPU(s) 114 are used by processes 128 based, e.g., on messages, rates and MIPS loading. Cache hit rate and cache miss cost may be modeled when a process 128 runs. Cache hit rate and cache miss cost may be input by the spreadsheet resourceCap. Also modeled is usage of the system bus 120 when a cache hit or miss occurs. System bus usage, e.g., in terms of bytes transferred, also is modeled when a SAN 130 is accessed. SAN usage is modeled based on total bytes transferred. System bus usage, e.g., in terms of bytes transferred, is modeled when data travels between the server node 112 and an external LAN 132 and/or when the local disk 118 is accessed. System bus usage actually implemented relative to the SAN, the local disk 118 and LAN transfers is modeled in another COGS, i.e., a COGS for transport as further described below. HMI nodes 134 and I/O sensors, communication and storage 136 also are modeled in COGSs other than the server COGS 112.


A block diagram illustrating how an exemplary server process may be modeled as a COGS is indicated generally in FIG. 4 by reference number 200. In the present example is modeled a generic process 204 used to sink messages, react to messages, and create messages. The process 204 is modeled to use an appropriate amount of computing resources. A message includes attributes necessary to send the message through a LAN 132 to a destination. Generic processes may be programmed primarily using the processDepl and processNeeds spreadsheets. Generic process models may also include models for translators, generators, routers and sinks. NAS or localStorage 208 may be modeled as a node, with Kbyte bandwidth defined using the spreadsheet resourceCap. Processes “read” 212 and “write” 216 are modeled to utilize bandwidth on a designated node. Generally, generic process models may be replicated and modified as appropriate to build a system model.


A block diagram illustrating how one or more LANs and/or system buses may be modeled as a transport COGS is indicated generally in FIG. 5 by reference number 250. In the present example, a plurality of LANs 254 are modeled, each having a different Kbytes-per-second bandwidth capacity. A model 258 represents any shared bus and/or any dedicated bus. A model 260 represents broadcast. The model 260 uses broadcast groups, then replicates messages for a single LAN 254 and forwards the messages to the proper LAN 254. The transport COGS 258 implements usage of system buses 262 for server nodes 112 and/or HMI nodes 134. Bus usage is implemented based on destination and source node attributes. The transport COGS also implements LAN 254 usage behavior, such as load balancing across LANs and/or use by a single LAN. After LAN and system bus resources are modeled as having been used, the transport COGS 258 routes messages to appropriate places, e.g., to processes 128, HMI nodes 134 or I/O sensors 136.


In one implementation, to build a model describing a mission system, static models first are created and analyzed. Such models may include models for key system components, deployment architectural views, process and data flow views, key performance and/or other parameters, assumptions, constraints, and system architect inputs. The architectural view, static models, system architect predictions and modeling tools are used to create initial dynamic models. Additional inputs, e.g., from prototypes, tests, vendors, and additional architectural decisions may be used to refine the dynamic models and obtain further inputs to the system architecture. Documentation may be produced that includes a performance and/or other profile, assumptions used in creating the models, static model and associated performance and/or other analysis, dynamic model and associated performance and/or other analysis, risks associated with the system architecture, suggested architectural changes based on the analysis, and suggestions as to how to instrument the mission system to provide “real” inputs to the model.


A conceptual diagram of one implementation of dynamic modeling is indicated generally in FIG. 6 by reference number 300. The modeling is performed in a computing environment 302 including a processor and memory. An Extend™ discrete event simulator engine 304 is used to perform discrete event simulation of hardware and software under analysis. Reusable resource COGSs 308 and process and I/O sensor COGSs 312 are used to model nodes, networks, buses, processes, communications and sensors. Spreadsheet controls 316 are applied via one or more spreadsheets 320 to the COGSs 308 and 312. A HMI 324 is used to run the model and show reports. Exemplary deployment changes and how they may be performed are shown in Table 2.

TABLE 2

Deployment change: Change the deployment of a server process from one server node to another.
How it is done: Change processDepl node cell value in process row.

Deployment change: Change the deployment of an HMI process from one node to another.
How it is done: Change hmiDepl process row from one row to another. Change appropriate node and process id cells. In process model, change appropriate row identifiers to new process row.

Deployment change: Change all server processes to 3 node configuration.
How it is done: Change all processDepl cells to desired 3 server nodes.

Deployment change: Change storage from NAS to SAN.
How it is done: Change all associated messageDest field from NAS to SAN destination, in inputs to process block.

Deployment change: Load balance across multiple networks.
How it is done: Add appropriate processDepl lines for traffic destination. Change transport view to use these LANs.

Deployment change: Change strength of LAN.
How it is done: Change resourceCap spreadsheet, appropriate row and cell to new strength.
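
The dynamic modeling of FIG. 6 couples the spreadsheet-controlled COGSs to the Extend™ discrete event simulator engine. The sketch below is not Extend™ or its API; it is only a generic, standard-library discrete-event loop offered to illustrate the idea of a single simulated clock dispatching recurring component events such as the cyclic heartbeats described later.

```python
# Minimal discrete-event simulation loop, sketched with the standard library.
# The patent drives its COGSs with the Extend(TM) engine; this generic
# heap-based loop only illustrates one simulated clock dispatching events.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._events = []        # (time, seq, callback)
        self._seq = 0            # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._events)
            callback()

sim = Simulator()

def heartbeat(component="serverNode1", period=250.0):
    print(f"t={sim.now:8.1f}  heartbeat from {component}")
    sim.schedule(period, heartbeat)      # cyclic, like a heartbeat generator

sim.schedule(0.0, heartbeat)
sim.run(until=1000.0)
```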


Implementations of the present modeling framework allow component configurations to be modified at runtime. Such configurations can include but are not limited to number of CPUs, strength of CPUs, LAN configuration, LAN utilization, system and I/O buses, graphics configuration, disk bandwidth, I/O configuration, process deployment, thread deployment, message number, size and frequency, and hardware and software component availability. As further described below, system and/or SoS availability can be modeled to predict, for example, server availability and impact on performance, network availability and impact on performance, fault tolerance and how it impacts system function, redundancy and where redundancy is most effective, load balancing, and how loss of load balance impacts system performance. Modeling can be performed that incorporates system latency, and that represents message routing based on a destination process (as opposed to being based on a fixed destination). Impact of a system event on other systems and/or a wide-area network can be analyzed within a single system model environment.


Referring again to FIG. 2, in various implementations, predictive availability analysis is performed using the COGSs 54, availability components 68 and availability spreadsheet input. Various availability characteristics may be modeled for a system, for example, as indicated generally in FIG. 7 by reference number 400. Failure rate, redundant nodes, coldstart time and switchover time may be modeled for server nodes 112, LAN and system busses 132, I/O sensors, comm. and storage 136, and HMI nodes 134. For processes 128, failure rate, coldstart time, restart time, switchover time, cascading failures, minimum acceptable level (i.e., a minimum number of components for a system to be considered available) and redundancy are modeled for all software. For system management software, heartbeat rate and system control messages are modeled. Processes of HMI nodes 134 are modeled like processes of server nodes 112.


Exemplary spreadsheet inputs for availability analysis are shown in Table 3. The spreadsheets in Table 3 also may include additional fields and/or uses not described in Table 3. Other or additional spreadsheet inputs also could be used in performing predictive availability analysis. In implementations in which another COTS tool is used, inputs to the COGSs 54 may be in a form different from the present exemplary Extend™ input spreadsheets.

TABLE 3

Avl-processNeeds
Use: For each component, inputs for component failure rates, transient times, failure algorithm, etc.
Component usage: Used in hardware availability components and in software availability sub-components.

Avl-heartbeat
Use: For heartbeat control, and heartbeat analysis for each component. Plus, control of actual heartbeat generators.
Component usage: Used in heartbeat generators and heartbeat analysis components.

Avl-redundancy
Use: For mapping of each individual component to other components considered redundant. Also, holds data used in redundancy calculations for availability.
Component usage: Used in availability analysis components.

Avl-cascade
Use: For mapping each component to the components that it can cause failure to, due to its own failure.
Component usage: Used in availability and cascade analysis components.

Avl-control
Use: For system control components, to decide rates, types of messages, routing, etc.
Component usage: Used in system control components.


Predictive performance analysis also may optionally be performed in conjunction with availability analysis. In such case, performance components 60 and performance spreadsheet inputs may be optionally included in availability analysis modeling.


An exemplary runtime configuration of the architecture representation 50 is indicated schematically in FIG. 8 by reference number 420. Various spreadsheet inputs are shown in FIG. 9. The configuration 420 may be implemented using one or more processors and memories, for example, in a manner the same as or similar to the dynamic modeling shown in FIG. 6. As shown in FIG. 8, runtime architectural components 424 have been configured via availability components 68 for availability analysis. The architectural components 424 also have been configured via performance components 60 (shown in FIG. 2) for performance analysis. The configuration 420 thus may be used for performing availability analysis and/or performance analysis.


The configuration 420 includes a plurality of runtime availability analysis components 428, including software components 432, hardware components 436, system management control components 440, system management status components 444, and redundancy and cascade components 448 and 452. Spreadsheets 456 may be used to configure a plurality of characteristics of the runtime availability analysis components 428. The runtime configuration 420 also includes spreadsheets 458 which include data for use in performing availability analysis, as further described below, and also may include data for performing predictive performance analysis.


Various spreadsheet inputs are indicated generally in FIG. 9 by reference number 460. PlatformRouting spreadsheet 464 points to several other worksheets to configure and control hardware and/or software components that represent the architectural components 424. Avl-processNeeds spreadsheet 468 is used to perform individual component configuration and initial reporting. Avl-Heartbeat spreadsheet 472 provides for heartbeat control and status. Avl-control spreadsheet 476 provides ways to have system management send controls to any component. Avl-redundancy spreadsheet 480 provides lists of components that are redundant to others. Avl-cascade spreadsheet 484 provides lists of components that cascade failures to other components.


The runtime availability analysis components 428 shown in FIG. 8 shall now be described in greater detail. In various implementations of the disclosure, each architectural hardware or software component 424 is associated with a runtime availability hardware or software component 436 or 432. It should be noted that there are various ways in which an architectural component 424 could be associated with a runtime availability component. For example, in some implementations a hardware architectural component 424 is associated with a corresponding platform-routable hardware component 436. In some implementations a software architectural component 424 includes one or more availability software components 432 as subcomponents, as described below. Each hardware availability component 436 and/or software availability component 432 has its own reliability value(s), its own restart time(s), etc. Reliability values and algorithm types that use the reliability values for computation may be spreadsheet-provided as further described below. Typically, a plurality of spreadsheets may be used to configure each component. Routable components 436 are used to represent each piece of hardware. The components 436 may be used to “fail” hardware components 424 based upon failure rate or upon system control. Routable hardware components 436 also are used to control availability of hardware based upon AvailAttr (availability attribute) messages used, for example, to coldstart or switchover a hardware component 424. A capacity multiplier may be used, e.g., set to zero, to fail resources, making them unable to do any requests, so messages queue up.


Hardware Availability Component


One configuration of a platform-routable hardware component 436 is shown in FIG. 10A. During availability analysis, it is assumed that the platform-routable hardware component 436 represents the corresponding architectural hardware component 424. The component 436 includes a HWAvailAccess subcomponent 500, a Random Generator subcomponent 504 and an Exit subcomponent 508. The Random Generator 504 may generate a failure of the component 436 at a random time. In such case, a failure message is generated which fails the component 436 and is routed to the Exit subcomponent 508.


The HWAvailAccess subcomponent 500 is shown in greater detail in FIG. 10B. The subcomponent 500 queues up a heartbeat signal 512 to a processor architectural component 428 to get processed, by priority, like most other processes. The routable component 436 returns a heartbeat to a final destination using a routeOutputs subcomponent, further described below.


Each routable hardware component 436 corresponds to a line in a processNeeds spreadsheet (shown in Table 1) and a line in a PlatformRouting spreadsheet 464.


Another spreadsheet, Avl-processNeeds, is used to provide input parameters to the model for use in performing various failure algorithms. By varying data in the Avl-processNeeds spreadsheet, a system user can vary statistical failure rates, e.g., normal distributions, and calculation input parameters used by the model. The Avl-processNeeds spreadsheet provides failure times and a probability distribution. A TimeV1 field is used to provide a mean, and a TimeV2 field is used to provide a standard deviation, for hardware failure. The Avl-processNeeds spreadsheet is the same as or similar in form to the processNeeds spreadsheet. The Avl-processNeeds spreadsheet provides the following parameters. If a routable component 436 is “on”, the “available” value is 1 while the component is available and 0 while it is not; if the component 436 is “off”, “available” remains 1. “On” and “off” are provided for allowing a component to fail. Coldstart, restart, switchover and isolation times are also provided by the Avl-processNeeds spreadsheet.
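
As a rough illustration of the random-failure inputs just described (TimeV1 as a mean, TimeV2 as a standard deviation, plus a distribution choice), the following sketch draws a failure delay from those parameters. The function name, the supported distributions and the sample values are assumptions made for illustration only.

```python
# Sketch of self-generated random failures: an Avl-processNeeds-style row
# supplies TimeV1 (mean), TimeV2 (standard deviation) and a distribution name,
# and the random generator draws the time until the next failure from it.
import random

def next_failure_time(time_v1, time_v2, distribution="normal"):
    """Draw the delay until the next self-generated failure for a component."""
    if distribution == "normal":
        # Clamp at zero so a wide distribution cannot yield a negative delay.
        return max(0.0, random.normalvariate(time_v1, time_v2))
    if distribution == "exponential":
        return random.expovariate(1.0 / time_v1)
    raise ValueError(f"unsupported distribution: {distribution}")

# e.g. a hardware row with a 40,000 s mean time to failure and 5,000 s sigma
print(next_failure_time(40_000.0, 5_000.0))
```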


A mechanism is provided to fail hardware and all software components deployed on that hardware. For example, if a routable hardware component 436 fails, it writes to capacityMultiplier in the resourceCap spreadsheet, setting it to 0. When “unfailed”, it writes capacityMultiplier back to 1. The “available” field for that component is also set. If, e.g., a coldstart or restart message is received, the hardware component 436 sets capacityMultiplier to 0, waits for a time indicated on input spreadsheet avl-processNeeds 468, resets capacityMultiplier to 1, and sinks the message. The “available” field for that component 436 is also set.


A mechanism is provided to restore a hardware component and all software components deployed on that hardware component. For example, when a routable hardware component 436 restarts itself, the HWavailAccess subcomponent generates a restart message, sets “available” to 0 and delays for a restartTime (included in the Avl-processNeeds spreadsheet 468) so that no more messages go through the component 436. The restart message causes all messages to be held for the spreadsheet time, while messages to that component, which typically are heartbeat messages, start backing up. Then the component 436 sets “available” back to 1 and terminates the delay, making the component 436 again available. The messages queued up then flow through the system.
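
A minimal sketch of the fail-and-restore behavior described above follows, assuming a simple in-memory message queue and explicit time arguments in place of the model's actual routable components and spreadsheet writes; the class and field names are illustrative, not the patent's interfaces.

```python
# Sketch: failing a routable hardware component writes capacityMultiplier = 0
# (the resource can do no work, so messages queue up); restoring it writes the
# multiplier back to 1 and releases the queued messages.
class RoutableHardwareComponent:
    def __init__(self, name, restart_time):
        self.name = name
        self.restart_time = restart_time   # from an Avl-processNeeds-style row
        self.capacity_multiplier = 1       # value written into resourceCap
        self.available = 1                 # the "available" field
        self.restore_at = None
        self.queue = []                    # messages held while failed

    def handle(self, now, message):
        self._maybe_restore(now)
        if self.capacity_multiplier == 0:
            self.queue.append(message)     # heartbeats etc. back up here
            return []
        return [message]                   # passed through normally

    def restart(self, now):
        """Coldstart/restart: fail the resource for restart_time."""
        self.capacity_multiplier = 0
        self.available = 0
        self.restore_at = now + self.restart_time

    def _maybe_restore(self, now):
        if self.restore_at is not None and now >= self.restore_at:
            self.capacity_multiplier = 1
            self.available = 1
            self.restore_at = None
            released, self.queue = self.queue, []
            for m in released:             # queued messages flow again
                print(f"t={now}: releasing queued message {m!r}")

node = RoutableHardwareComponent("serverNode1", restart_time=30)
node.restart(now=100)
node.handle(110, "heartbeat-1")          # queued: component is down
node.handle(120, "heartbeat-2")          # queued
print(node.handle(140, "heartbeat-3"))   # restored first, then passes through
```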


Component failure may be detected in at least two ways, e.g., via heartbeatAnalyzer and/or availabilityAnalyzer components. HeartbeatAnalyzer uses the lack of a heartbeat message to determine whether too long a time has passed since the last heartbeat. Failure is also detected by availabilityAnalyzer via the avl-processNeeds worksheet. Two values are provided: the actual available time, and the time that the system knows, via heartbeat, that the component 436 is unavailable.


AvailabilityAnalyzer periodically goes through the avl-processNeeds spreadsheet and populates a componentAvailability column with cumulative availability for each component. If a component is required and unavailable, AvailabilityAnalyzer marks the system as unavailable.
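
The following sketch approximates the availabilityAnalyzer pass just described, assuming the avl-processNeeds rows are represented as simple dictionaries; the field names mirror the description but the row layout itself is an assumption.

```python
# Sketch of availabilityAnalyzer: periodically walk the avl-processNeeds rows,
# accumulate per-component availability, and mark the system unavailable when
# a required, monitored component is down.
def analyze(rows, elapsed_since_last_pass):
    """rows: dicts with 'on', 'required', 'available', 'componentAvailability'."""
    system_available = True
    for row in rows:
        if not row["on"]:
            continue                       # "off" components are not monitored
        if row["available"]:
            row["componentAvailability"] += elapsed_since_last_pass
        elif row["required"]:
            system_available = False       # a required component is down
    return system_available

rows = [
    {"on": True,  "required": True,  "available": 1, "componentAvailability": 0.0},
    {"on": True,  "required": False, "available": 0, "componentAvailability": 0.0},
    {"on": False, "required": True,  "available": 0, "componentAvailability": 0.0},
]
# True: the only failed, monitored component is not required.
print(analyze(rows, elapsed_since_last_pass=5.0))
```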


Software Availability Component


One configuration of a software availability component 432 is indicated generally in FIG. 11. An AvailAccess component 600 is included as a subcomponent of the corresponding software process architectural component 424. The subcomponent 600 corresponds to a line in the processNeeds spreadsheet and also corresponds to a line in the PlatformRouting spreadsheet 464. The subcomponent 600 includes a Random Generator component 604.


If availAttr is set in a message 608 and the signaled attribute is “coldstart”, “restart”, etc., then: (a) an “available” field for the process is set to zero; (b) time and delays are selected; (c) any additional messages are queued; (d) when the time expires, the “available” field is set to 1; and (e) all messages are allowed to flow again. If the signaled attribute is “heartbeat”, CPUCost is set appropriately, and the heartbeat is sent on and routed to its final destination, i.e., a heartbeat receiver component further described below. If availAttr is not set in the message 608, the message is assumed to have been generated, for example, by performance analysis components. Accordingly, the message is sent through (and queues if a receiving software component is down).
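
A hedged sketch of the message handling just described follows, distinguishing control attributes, heartbeat forwarding and pass-through traffic; the message fields, times and component dictionary are illustrative assumptions rather than the model's actual interfaces, and the queueing of traffic while the component is down is omitted.

```python
# Sketch of availAccess dispatch: control messages make the component
# unavailable for a configured time, heartbeats are forwarded toward the
# heartbeat receiver, everything else is ordinary (e.g. performance) traffic.
CONTROL_ATTRS = {"coldstart", "restart", "switchover", "isolation"}

def avail_access(component, message, now):
    attr = message.get("availAttr")
    if attr in CONTROL_ATTRS:
        component["available"] = 0
        component["restore_at"] = now + component[f"{attr}_time"]
        return ("hold_all_messages", None)        # queue until the time expires
    if attr == "heartbeat":
        message["CPUCost"] = component["heartbeat_cpu_cost"]
        message["finalMsgDest"] = "heartbeatAnalyzer"
        return ("route", message)                 # sent on to its destination
    # availAttr not set: ordinary message, e.g. from performance components
    return ("route", message)

comp = {"available": 1, "coldstart_time": 45.0, "heartbeat_cpu_cost": 0.001}
print(avail_access(comp, {"availAttr": "heartbeat"}, now=100.0))
print(avail_access(comp, {"availAttr": "coldstart"}, now=101.0))
```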


A second availability subcomponent 612, routeOutputs, of the corresponding software process architectural component 424 routes a heartbeat signal 616 to a final destination, i.e., a heartbeat receiver component further described below. As previously mentioned, a routeOutputs subcomponent is also included in hardware availability components 436, for which it performs the same or a similar function. If availType equals “heartbeat”, the RouteOutputs subcomponent 612 overrides the route to the final destination, i.e., a heartbeat receiver component further described below.


Avl-processNeeds Spreadsheet


The Avl-processNeeds spreadsheet is used to provide input parameters to the model for use in performing various software failure algorithms. By varying data in the Avl-processNeeds spreadsheet, a system user can vary statistical failure rates, e.g., normal distributions, and calculation input parameters used by the model. The Avl-processNeeds spreadsheet includes the same or similar fields as the processNeeds spreadsheet and may use the same names for components. The Avl-processNeeds spreadsheet can provide failure times and distribution (TimeV1, TimeV2, Distribution fields) for a component self-generated random failure. These fields can be used for time-related failure or for failure by number of messages or by number of bytes. The Avl-processNeeds spreadsheet also provides the following.


An “On/off” field is provided for allowing a component to fail and is set at the beginning of a model run. An “available” field is set during a model run. If “on” is set, a component is monitored for availability. If “off” is set, a component is not monitored. A “required” field may be used to indicate whether the component is required by the system. Avl-processNeeds may provide coldstart time, restart time, switchover time and/or isolation time. The foregoing times are used when the corresponding types of failures occur. The times are wait times before a component is restored from a fail condition. A cumulative availability field is used to hold a cumulative time that a component was available for the run. This field may be filled in by the availabilityAnalyzer component.


If a software component fails (through coldstart, restart, etc.), the component sets the “available” field to 0, waits an appropriate time, then sets the “available” field to 1.


When a software component coldstarts itself, the availAccess subcomponent generates a “coldstart” message, sets the “available” field in the component to 0, and delays coldstartTime so that no more messages go through. The coldstart message holds all messages for a spreadsheet time while messages to that component start backing up. Then the availAccess subcomponent sets “available” back to 1 and ends the delay, making the component again available. The messages queued up then can flow through the system, typically yielding an overload condition until worked off.


A software component failure may be detected in at least two ways, e.g., by heartbeatAnalyzer and by availabilityAnalyzer, in the same or similar manner as in hardware component failures as previously described. It should be noted that a software component failure has no direct effect on other software or hardware components. Any downstream components would be only indirectly affected, since they would not receive their messages until coldstart is over. Note that a component also could be coldstarted from an external source, e.g., system control, and the same mechanisms would be used.


System Management Control Components


Control components 440 may be used to impose coldstart, restarts, etc. on individual part(s) of the system. Various system management control components 440 are shown in greater detail in FIG. 12. A prControl component 700 uses input from the Avl-control spreadsheet 476 to generate a control message toward a single hardware or software component 428. Attributes of a prControl component 700 include creation time and availType=control. A hardware or software component 436 or 432 receiving a control message checks whether availType=coldstart, restart, switchover, etc. and may delay accordingly. An exit component 438 is used for completed messages.


System Management Status Components


Various system management status components 444 are shown in greater detail in FIG. 13. A prHeartbeat component 750 sets availAttr to “heartbeat” and sets finalMsgDest to heartbeatAnalyzer. A component prHeartbeatRcvr 754 receives heartbeats from anywhere in the system. The component 754 uses an originalmsgSource field in the avl-heartbeat spreadsheet 472 as the identifier of a spreadsheet row that sent a heartbeat and fills in a heartbeat receive time in the avl-heartbeat spreadsheet 472. A prHeartbeatAnalyzer component 758 wakes up in accordance with a cyclic rate in the prProcessNeeds spreadsheet 468. The component 758 gets the current time, analyzes the avl-heartbeat spreadsheet 472 for missing and/or late heartbeats, and fills in a “heartbeat-failed” field if a component is “on” and “required” and a time threshold has passed. An availability reporter component 766 uses results of the prHeartbeatAnalyzer component 758 to report on overall system availability. A component prHeartbeatAll 762 generates heartbeats towards a list of hardware or software components 436 or 432.


The avl-heartbeat spreadsheet 472 operates in the same or a similar manner as the processNeeds spreadsheet and uses the same names for components. Fields of the spreadsheet 472 may be used to manipulate availability characteristics of component(s) and also may be used to calculate heartbeat-determined failures. “On/off” is set for a hardware or software component 436 or 432 in the avl-heartbeat spreadsheet 472 at the beginning of a model run. If “on”, the component is monitored for heartbeat. If “off”, the component is not monitored. “Required” is set for a hardware or software component 436 or 432 in the avl-heartbeat spreadsheet 472 at the beginning of a model run. If “required” is “0”, the component is not required. If “required” is “1”, the component is required. If “required” is “2”, additional algorithms are needed to determine whether the component is required. A heartbeat failure time “failureThreshold” in the avl-heartbeat spreadsheet 472 provides a threshold for determining heartbeat failure and provides individual component control. A heartbeat receive time “receiveTime” in the avl-heartbeat spreadsheet 472 is set for a hardware or software component during a model run. The heartbeat receive time is set to a last time a heartbeat was received for that component. A “heartbeat failed” field is set to “0” if there are no heartbeat failures or to “1” if a heartbeat failure occurs. A heartbeat failure is determined to have occurred when:

currentTime − receiveTime > failureThreshold


A software or hardware component failure may be detected in the following manner. Heartbeat messages may be generated by prHeartbeatAll 762 and may be sent to ranges of components. A failed component queues its heartbeat message. The component prHeartbeatAnalyzer 758 wakes up periodically and looks at the current time, on/off, required, heartbeat failure time and heartbeat receive time to determine whether the component being analyzed has passed its time threshold. The component prHeartbeatAnalyzer 758 writes its result to the availability reporter component 766. Such result(s) may include accumulation(s) of availability by heartbeat. It should be noted that components which are not marked “required” have no effect on overall system availability.
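
The heartbeat test above can be pictured with the short sketch below, which walks illustrative avl-heartbeat rows and applies currentTime − receiveTime > failureThreshold to monitored components; the row layout is an assumption for illustration.

```python
# Sketch of the heartbeat-failure test applied across avl-heartbeat-style rows:
# a monitored component has failed its heartbeat when
# currentTime - receiveTime > failureThreshold.
def analyze_heartbeats(rows, current_time):
    """Set 'heartbeatFailed' per row; return failed components marked required."""
    failed_required = []
    for row in rows:
        if not row["on"]:
            continue                                     # not monitored
        late = current_time - row["receiveTime"] > row["failureThreshold"]
        row["heartbeatFailed"] = 1 if late else 0
        if late and row["required"]:
            failed_required.append(row["component"])
    return failed_required

rows = [
    {"component": "serverNode1", "on": True, "required": 1,
     "receiveTime": 940.0, "failureThreshold": 50.0, "heartbeatFailed": 0},
    {"component": "trackIdent",  "on": True, "required": 1,
     "receiveTime": 990.0, "failureThreshold": 50.0, "heartbeatFailed": 0},
]
print(analyze_heartbeats(rows, current_time=1000.0))   # ['serverNode1']
```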


Redundancy Management


Various redundancy components 448 are shown in greater detail in FIG. 14. Redundancy can come into play when a fault occurs or is detected. In such event, a conclusion that a component has failed and/or the system is not available is postponed and a prRedundancy component 780 is executed. The prRedundancy component 780 may use one or more lists of redundant components and current availability of each of those components to determine an end-result availability. Lists of redundant components are configurable to trade off redundancy decisions. The availability reporter component 766 uses results of prRedundancy 780 to report on overall system availability.
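
As a sketch of the redundancy evaluation described above (and only that), the following function treats a failed component as still effectively available when any of its configured redundant peers is up; the data layout and names are illustrative assumptions.

```python
# Sketch of prRedundancy-style evaluation: before concluding the system is
# unavailable, consult the redundancy mapping and treat the function as still
# available if any redundant peer is up.
def effective_availability(component, availability, redundancy_map):
    """availability: component -> 0/1; redundancy_map: component -> peer list."""
    if availability.get(component, 0):
        return 1
    peers = redundancy_map.get(component, [])
    return 1 if any(availability.get(p, 0) for p in peers) else 0

availability = {"LAN_A": 0, "LAN_B": 1, "serverNode1": 0}
redundancy_map = {"LAN_A": ["LAN_B"], "serverNode1": []}

print(effective_availability("LAN_A", availability, redundancy_map))       # 1: LAN_B covers it
print(effective_availability("serverNode1", availability, redundancy_map)) # 0: no redundant peer
```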


Cascade


Various cascade components 452 are shown in greater detail in FIG. 15. Cascading failures can come into play when a fault occurs or is detected. In such event, a conclusion that a component has failed and/or the system is not available is postponed and a prCascade component 800 is executed. The prCascade component 800 assesses availability using current component availability and one or more lists of cascading relationships between or among components. The prCascade component 800 uses one or more lists of cascaded failures and fails other components due to the cascading. Lists of cascading components are configurable to trade off cascading decisions. The availability reporter 766 uses results of prCascade 800 to report on overall system availability.
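
The cascade evaluation can be sketched as a walk over the configured cascade lists, as below. Whether the prCascade component follows cascades transitively is not stated in the disclosure, so the transitive walk here is an assumption made for illustration.

```python
# Sketch of cascade analysis: when a component fails, fail every component
# reachable through the cascade mapping, then assess availability over the
# enlarged failure set.
def cascade_failures(initially_failed, cascade_map):
    """Return the full set of failed components, following cascades transitively."""
    failed = set(initially_failed)
    frontier = list(initially_failed)
    while frontier:
        component = frontier.pop()
        for victim in cascade_map.get(component, []):
            if victim not in failed:
                failed.add(victim)
                frontier.append(victim)
    return failed

cascade_map = {
    "systemBus1": ["serverNode1"],              # bus failure takes the node down
    "serverNode1": ["trackIdent", "trackMgr"],  # node failure fails its processes
}
print(sorted(cascade_failures({"systemBus1"}, cascade_map)))
# ['serverNode1', 'systemBus1', 'trackIdent', 'trackMgr']
```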


Various functions that may be implemented using various models of the present disclosure are described in Table 4.

TABLE 4

Function: Provide the quantitative availability prediction for the system under analysis.
How performed: Available when heartbeat says it is, minus a small amount of time allocated to heartbeat cycle time. Total time is the time of the simulation. Available time is from heartbeat analyzer or availability analyzer.

Function: Provide quantitative predictions of downtime as a result of a variety of failures (hardware, software, system management).
How performed: Downtime starts at component failure, using a time tag. Downtime ends when the heartbeat message from that component is returned. Also may perform static analysis using the spreadsheet.

Function: Provide a means to tradeoff hardware availability decisions such as redundancy, load balancing.
How performed: Load balancing can be turned on or off for tradeoff analysis. When on, availability does not get affected by redundant component. When off, downtime may exist.

Function: Provide a means to trade off software component management and decisions against program needs (such as amount of redundancy, use of checkpoint, bundling, priorities, QoS, etc.).
How performed: Can turn on/off checkpoint, along with coldstart and restart capability to tradeoff timelines with and without. Plus, can tradeoff in presence of performance considerations. Can bundle software together, such that larger bundles have larger failure rates, perhaps affecting overall system availability. Can bust apart critical and non-critical functions. Can change deployment options.

Function: Provide a means to trade off system management decisions against program needs (such as use of clustering, period of heartbeat, prioritization, QoS and others).
How performed: Can change period of heartbeat, change deployment of software to cluster node after failure, change restart policy, change deployment options.

Function: Provide quantitative predictions of times and timelines for hardware/software fault detection and reconfiguration.
How performed: Heartbeat and failure mechanism provide times for individual component detection and reconfiguration. Sum of times, plus message transmission, plus resource contention, plus performance data in the way, provide timelines.

Function: Provide startup time prediction.
How performed: Start system with hardware coldstart messages, then software coldstart messages, in sequence.

Function: Provide a means to do the analysis in the presence of performance and operational data, or without these data types.
How performed: Use availability components in addition to performance components. Have both components compete for resources.

Function: Provide a means to drive availability requirements down to software component(s).
How performed: Do analysis with model to determine combinations of software reliability that yields required availability. Allocate resultant software reliability numbers down to groups of components.


Various scenarios that may be implemented using various models of the present disclosure are described in Table 5.

TABLE 5

Scenario: Random hardware failure is detected, messages stop flowing and system is deemed unavailable until MTTR time is over.
How performed: Model fails node using random rate. System management heartbeat detects the failure, and coldstarts node. Latency and availability are measured. Routable hardware component to represent each hardware resource, each one tied to avl-processNeeds spreadsheet. Random generator sets capacity multiplier to 0. Messages that need resource pile up. Heartbeat analyzer and availability analyzer cumulate failure time.

Scenario: Random software failure is detected, then software component cold started. System is unavailable for detect + coldstart time + messaging time.
How performed: Model fails software component. System management heartbeat detects the failure, and coldstarts (or restarts if checkpointed) process. Latency and availability are measured. Software availability component goes in front of all software components in useS10Resources and uses avl-processNeeds spreadsheet for configuration. Random fail of the resource just stops all messages from leaving availability components for the coldstart, restart or switchover time.

Scenario: System control forced hardware or software failure or state change, then component changes state. Detection and availability as above.
How performed: Routable system control component wakes up, sends fail, or coldstart, or other message to hardware or software routable component. Receiving component changes state, changes availability state, analyzers cumulate down time.

Scenario: System control forces a list of hardware/software to fail, coldstart, etc., then list of components change state.
How performed: Routable system control component wakes up, sends fail or coldstart or other to list of components. List of destinations in avl-forward spreadsheet. Each receiving component behaves as previous row.

Scenario: Cascading software failures result in changed availability. Such as, one component fails, requiring numerous components to be restarted. Detection, availability as above.
How performed: Fault tree containing necessary components for the system to be considered available. Or perhaps, list containing those that can fail and still have the system available.

Scenario: One of the redundant hardware components fails, the system remains available.
How performed: Availability analysis component uses available result worksheet, including avl-redundancy, and deems the system available. Analysis also shows time that system is available without the redundancy.


In some implementations, static analysis of availability can be performed in which input spreadsheets are used to approximate availability of a system. Reliability values for hardware and software components may be added to obtain overall reliability value(s). For each of any transient failure(s), value(s) representing probability*(detection time+reconfiguration time) may be combined to obtain an average downtime. Software reliability value(s) may be adjusted accordingly.
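
A worked sketch of this static approximation is shown below. Combining the summed failure rates and probability-weighted transient downtimes into a single availability figure via MTBF / (MTBF + average downtime) is an assumption made for illustration; the disclosure describes combining the values but does not prescribe this exact formula, and the sample rates and times are hypothetical.

```python
# Sketch of a static, spreadsheet-only availability approximation: sum the
# per-component failure rates into a system rate, weight each transient
# failure's (detection time + reconfiguration time) by its probability to get
# an average downtime per failure, then form a steady-state availability.
def static_availability(failure_rates_per_hour, transients):
    """transients: list of (probability, detection_hours, reconfiguration_hours)."""
    system_rate = sum(failure_rates_per_hour)          # failures per hour, summed
    mtbf = 1.0 / system_rate                           # mean time between failures
    avg_downtime = sum(p * (detect + reconf) for p, detect, reconf in transients)
    return mtbf / (mtbf + avg_downtime)                # assumed combination formula

rates = [1e-4, 2e-4, 5e-4]                             # hw + sw component rates
transients = [(0.7, 0.01, 0.05), (0.3, 0.02, 0.20)]    # probability-weighted outages
print(f"predicted availability: {static_availability(rates, transients):.6f}")
```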


In contrast to existing predictive systems and methods, the foregoing system and methods use a model of the hardware and software architecture. Thus the foregoing systems and methods are in contrast to existing software availability predictive methods which are not performed in the context of any specific hardware mission system configuration. None of the existing software availability predictive tools or methods directly address software-intensive mission systems, nor do they provide easy trade-off mechanisms between fault detection, redundancy and other architectural mechanisms.


Implementations in accordance with the present disclosure can provide configurability and support for many programs and domains with common components. It can be possible to easily perform a large number of “what if” analyses. Redundancy, system management, varied reliability, and other system characteristics can be modeled. Because implementations of the foregoing modeling methods make it possible to quickly perform initial analysis, modeling can be less costly than when current methods are used. Ready-made model components can be available to address various types of systems and problems. Various implementations of the foregoing modeling methods make it possible to optimize availability designs and to justify availability decisions. Availability analysis modeling can be performed standalone or in the presence of performance data and analysis.


Various implementations of the present disclosure provide mathematically correct, provable and traceable results and can be integrated with other tools and/or techniques. Other tools and/or techniques can be allowed to provide inputs for analyzing hardware or software component reliability. Various aspects of availability can be modeled. Standalone availability analysis (i.e., with no other data in a system) can be performed for hardware and/or software. Hardware-only availability and software-only availability can be modeled. Various implementations allow tradeoff analysis to be performed for hardware/software component reliability and for system management designs. In various implementations, a core tool with platform-routable components is provided that can be used to obtain a very quick analysis of whole system availability and very quick “what if” analysis.


Apparatus of the present disclosure can be used to provide “top down” reliability allocation to hardware and software components to achieve system availability. Additionally, “bottom up” analysis using hardware/software components and system management designs can be performed, yielding overall system availability. Individual software component failure rates based upon time, or size or number of messages can be analyzed. Software failures based upon predictions or empirical data, variable software transient failure times also can be analyzed. Various implementations provide for availability analysis with transient firmware failures and analysis of queueing with effects during and after transient failures.


Cascade analysis can be performed in which cascade failures and their effect on availability, and cascade failure limiters and their effects on availability, may be analyzed. Availability using hardware and/or software redundancy also may be analyzed. Analysis of aspects of system management, e.g., variable system status techniques, rates, and side effects also may be performed.


Implementations of the disclosure may be used to quantitatively predict availability of systems and subsystems in the presence of many unknowns, e.g., hardware failures, software failures, variable hardware and software system management architectures and varied redundancy. Various implementations make it possible to quantitatively predict fault detection, fault isolation and reconfiguration characteristics and timelines.


A ready-made discrete event simulation model can be provided that produces quick, reliable results to the foregoing types of problems. Results can be obtained on overall system availability, and length of downtime predictions in the presence of different types of failures. Using models of the present disclosure, virtually any software/system architect can assess and tradeoff the above characteristics. Such analysis, if desired, can be performed in the presence of performance analysis data and performance model competition for system resources. Numerous components, types of systems, platforms, and systems of systems can be supported. When both performance analysis and availability analysis are being performed together, performance analysis components can affect availability analysis components, and availability analysis components can affect performance analysis components.


A repeatable process is provided that may be used to better understand transient failures and their effect on availability and to assess availability quantitatively. Software component failures may be accounted for in the presence of the mission system components. Availability analysis can be performed in which hardware and/or software components of the mission system are depicted as individual, yet interdependent runtime components. In various implementations of apparatus for predictively analyzing mission system availability, trade-off mechanisms are included for fault detection, fault isolation and reconfiguration characteristics for mission systems that may be software-intensive.


While various preferred embodiments have been described, those skilled in the art will recognize modifications or variations which might be made without departing from the inventive concept. The examples illustrate the invention and are not intended to limit it. Therefore, the description and claims should be interpreted liberally with only such limitation as is necessary in view of the pertinent prior art.

Claims
  • 1. A method of modeling a mission system comprising: representing the system as a plurality of architectural components; configuring at least some of the architectural components with availability characteristics to obtain a model of the system; and implementing the model to assess availability in the mission system.
  • 2. The method of claim 1, wherein representing the system as a plurality of architectural components comprises using a plurality of components of generic structure (COGSs).
  • 3. The method of claim 1, wherein the architectural components include at least one hardware component and at least one software component.
  • 4. The method of claim 3, wherein configuring at least some of the architectural components with availability characteristics comprises associating one or more availability analysis components with one of the hardware and software components.
  • 5. The method of claim 1, wherein implementing the model comprises: providing one or more spreadsheet inputs to the model; and performing discrete event simulation using the inputs.
  • 6. The method of claim 1, wherein the availability characteristics comprise at least one of the following: reliability, failure rate, coldstart time, restart time, switchover time, cascading characteristics, redundancy, queuing characteristics, and heartbeat.
  • 7. The method of claim 1, wherein configuring the architectural components with availability characteristics comprises including the availability characteristics in one or more spreadsheets, and implementing the model comprises using the spreadsheets to approximate reliability of the mission system.
  • 8. The method of claim 7, wherein using the spreadsheets to approximate reliability comprises: combining reliability values in the spreadsheets for the architectural components to obtain an overall reliability value; and adjusting the overall reliability value by any transient failure downtimes included in the spreadsheets.
  • 9. An apparatus for analyzing availability in a mission system, the apparatus comprising at least one processor and at least one memory configured to: represent the system as a plurality of architectural components; configure at least some of the architectural components with availability characteristics to provide a model of the system; and execute the model.
  • 10. The apparatus of claim 9, wherein the at least one processor and at least one memory are further configured to assess availability in the presence of operational data, performance data, a changing system environment, and a changing system scenario.
  • 11. The apparatus of claim 10, wherein the at least one processor and at least one memory are further configured to use a plurality of components of generic structure (COGSs) to assess availability.
  • 12. The apparatus of claim 9, wherein the apparatus is further configured to receive at least one spreadsheet input to the model.
  • 13. The apparatus of claim 9, wherein the availability characteristics comprise at least one of the following: reliability, failure rate, coldstart time, restart time, switchover time, cascading characteristics, redundancy, queuing characteristics, and heartbeat.
  • 14. The apparatus of claim 9, wherein the at least one processor and memory are further configured to: selectively apply software component reliability algorithms and calculation input parameters to model one or more software components; assess an impact, if any, of the one or more modeled software components on overall reliability of the mission system; and perform a component reliability rate calculation selectable from a plurality of statistical calculation types.
  • 15. The apparatus of claim 9, wherein the at least one processor and memory are further configured to select a reliability rate calculation for a component.
  • 16. The apparatus of claim 15, wherein the at least one processor and memory are further configured to select a statistical rate calculation for a component.
  • 17. The apparatus of claim 15, wherein the at least one processor and memory are further configured to vary a selected statistical rate calculation for a component.
  • 18. A method of modeling a system comprising: using a plurality of components of generic structure (COGSs) to represent hardware components and software components of the system; associating each of at least some of the COGSs with one or more availability analysis components to obtain a model representing availability characteristics of the system; and inputting one or more spreadsheets to a discrete event simulation tool to implement the model.
  • 19. The method of claim 18, further comprising using the spreadsheet input to the availability analysis components to control component availability in the model.
  • 20. The method of claim 18, further comprising performing availability analysis and performance analysis in a single implementation of the model.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. patent application Ser. No. 11/124,947 filed on May 9, 2005, which is a continuation in part of U.S. patent application Ser. No. 10/277,455 filed on Oct. 22, 2002. The disclosures of the foregoing applications are incorporated herein by reference.

Continuation in Parts (2)
Number Date Country
Parent 11124947 May 2005 US
Child 11304925 Dec 2005 US
Parent 10277455 Oct 2002 US
Child 11124947 May 2005 US