Computer networks are under constant threat from malicious parties seeking unauthorized access to the systems hosted thereon. The tactics used by malicious parties to attack networks, and the tactics used by network administrators to defend against attacks, are constantly evolving: new exploits are added to the arsenal of malicious parties and ineffective exploits are dropped. Implementing countermeasures, however, is often reactive, in that network administrators must wait to identify the newest exploit before deploying a countermeasure, and must then determine when to stop deploying that countermeasure once the corresponding exploit is no longer used. Correctly anticipating, identifying, and blocking new exploits is crucial to maintaining the security of a network.
It is with respect to these considerations and others that the disclosure made herein is presented.
Malicious users are constantly trying to bypass security solutions to compromise computing resources. Attackers, for example, may attempt to bypass security solutions by creating new variants and implementations of well-known techniques. The threats are constantly increasing, and automation and new measures are needed to identify detection gaps before attackers do. There are a variety of techniques that are available to attackers, and new techniques are being generated. A systematic approach to track new threats and to leverage accumulated knowledge is needed.
Attackers typically differ in two aspects: their targets, and the attack techniques that are used. Relatively few attackers develop entirely new approaches, while the vast majority use well-known techniques with some minor modifications. The present disclosure describes a framework for defending against attackers by building and managing end-to-end attack vectors based on these assumptions so that defenses can be verified against the attack vectors. The framework allows users to specify a particular attack vector based on tags or individual techniques and generate atomic payloads for testing. In some embodiments, variants of each attack vector may be automatically generated based on existing implementations.
Some embodiments describe technologies for fuzzing known techniques to generate new attack scenarios and identify gaps in threat coverage. Additionally, the described techniques enable the attack vectors to be focused on a particular known adversary by using tags to define and test possible techniques per region or per advanced persistent threat (APT) group, enabling users to build a customized security suite. The techniques can allow networks and data centers to provide improved security, more effectively adhere to operational objectives, and improve operating efficiencies.
In one embodiment, generated attack scenarios may be represented as an atomic binary that is limited to the tested logic without posing actual risk. Additionally, the attack scenarios may be extendable by applying fuzzing in multiple dimensions to generate a larger set of potential scenarios. Interfaces may be provided to allow users to generate their own attack scenarios without the need for deep knowledge of threat scenarios. By providing a way to quickly and easily generate attack scenarios, security functions may be tested by generating a large number of attack vectors against various products, identifying vulnerabilities, and grouping missed threats into categories. The disclosed techniques may also be used to compare and rank security products by comparing success rates and weaknesses.
The disclosed techniques provide a single framework for creating a set of predefined attack vectors and payloads. The framework provides an orchestrator that can mix and match a variety of techniques to generate vectors. By providing such a mechanism for generating attack vectors and identifying potential threats, loss of data and services may be avoided or mitigated, reducing downtime and impact to end users and providing for improved security and operational efficiency for computing networks and service providers.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The Detailed Description is described with reference to the accompanying figures. In the description detailed herein, references are made to the accompanying drawings that form a part hereof, and that show, by way of illustration, specific embodiments or examples. The drawings herein are not drawn to scale. Like numerals represent like elements throughout the several figures.
The present disclosure provides systems and techniques for generating attack vectors that can be increasingly complex without the need for increased resources, infrastructure, and setup time to test the vectors against a target system. The disclosure describes a framework configured to build and manage end-to-end attack vectors that can include generation of the payloads and the victim machine as well as the delivery of the payload (e.g., macro, zip file, email, etc.). A user may specify an attack vector based on tags or individual techniques to generate an atomic payload, as well as automatically generate variants of each attack vector based on existing implementations. The system for managing and generating the attack vectors may be referred to herein as a cyber vector infrastructure, attack vector infrastructure, or vector infrastructure.
The attack vectors may be generated based on high level and user-readable tags or labels that identify one or more properties for generating samples or attack simulations. The labels can be specific APT groups, techniques, or software that can be associated with any program or target to be tested. The vector infrastructure may further provide templates to automatically generate different implementation combinations.
The vector infrastructure may generate a pre-compiled, single execution file with an adjustable execution workflow that can be externally injected using selected delivery methods and other selectable parameters, such as which module to run and in which order, for example. The vector infrastructure may further provide a user interface configured to facilitate selection of vector parameters. The user interface may provide a graphical tool for selecting pre-defined inputs and templates for various attack scenarios. By providing a pipeline of tools that facilitate selection of parameters, a user can quickly and easily chain together a number of parameters to generate a specific attack scenario.
The vector infrastructure may implement tags to facilitate selection of parameters. A tag may be a label which can be attached to one or more templates and can map techniques into buckets (e.g., adversary name, attack phase, etc.). To illustrate using an example, a user may select a series of templates to generate an MS Office application file, select a delivery payload, and create a macro dropper. The generated macro dropper can be injected into the Office application. In some embodiments, a template may represent a specified attack vector which can be assigned to a specific adversary or APT group.
The vector infrastructure may include a number of tools that can include one or more programs or scripts. The vector infrastructure may further include libraries to facilitate tool development (e.g., templates, formats). A template can define a chain of payload tools which, after processing by the vector infrastructure, may become a single binary file.
In some embodiments, a template may be described using JavaScript Object Notation (JSON) which can be used by the vector infrastructure to generate the final payload. JSON may be used to specify which tools to be run and which parameters should be tested. For example, a user may specify a Windows 10 operating system environment and an evasive dropper payload. The vector infrastructure may generate an email message as an .eml file that has a compressed zip file that can be used for the simulated attack.
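The following is a minimal, hypothetical sketch of such a JSON template, expressed and serialized from Python; the tool names, parameter names, and values are illustrative assumptions rather than an actual schema:

```python
import json

# Hypothetical template describing a chain of payload tools and their
# parameters; tool names, fields, and values are illustrative assumptions.
template = {
    "name": "evasive_dropper_email",
    "tags": ["functional:dropper", "group:example-apt"],
    "target_environment": {"os": "Windows 10"},
    "chain": [
        {"tool": "build_payload", "params": {"type": "evasive_dropper"}},
        {"tool": "compress", "params": {"format": "zip"}},
        {"tool": "wrap_email", "params": {"output": "payload.eml"}},
    ],
}

# The vector infrastructure could consume the serialized template to decide
# which tools to run, in which order, and with which parameters.
print(json.dumps(template, indent=2))
```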
Tags can be divided into two categories. A group tag may be mapped to one or more other tags, e.g., to specific APT groups or to checks for specific techniques. A functional tag may be mapped to one or more templates, where each template may be represented as JSON and can represent an attack vector. Tagging enables users to indicate to the vector infrastructure what is to be tested and to select templates, samples, and scenarios.
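As a minimal sketch of how this two-level tag resolution might be organized, the following assumes that group tags expand to functional tags and that functional tags select template files; all tag and template names are hypothetical:

```python
# Hypothetical two-level tag resolution: group tags expand to functional tags,
# and functional tags select template files. All names are illustrative.
GROUP_TAGS = {
    "group:example-apt": ["functional:macro_dropper", "functional:zip_delivery"],
}

FUNCTIONAL_TAGS = {
    "functional:macro_dropper": ["template_office_macro.json"],
    "functional:zip_delivery": ["template_zip_email.json"],
}


def resolve_templates(tag):
    """Expand a group or functional tag into the template files it selects."""
    if tag in GROUP_TAGS:
        templates = []
        for functional in GROUP_TAGS[tag]:
            templates.extend(FUNCTIONAL_TAGS.get(functional, []))
        return templates
    return FUNCTIONAL_TAGS.get(tag, [])


# Selecting a group tag yields every template associated with that adversary.
print(resolve_templates("group:example-apt"))
```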
The output of the vector infrastructure can be a single binary that implements the select attack scenario. The output can also comprise multiple binaries when enabling fuzzing features. In some embodiments, the vector infrastructure may implement template fuzzing which uses a given template to generate multiple variants of the attack vector. In one embodiment, this may be achieved by automatically modifying the template at three levels.
The first level may be the tool parameters. The vector infrastructure may automatically change the parameters for a tool (e.g., change a buffer size). For example, for a batch file dropper, the vector infrastructure may change the size of the buffer used in each iteration, and the like. The second level may be the implementation. Different tools in the vector infrastructure may have the same input and output and differ only in their implementation. Different kinds of input may also be selected, such as a payload delivered as a VBScript file, or a different tool with the same input and output may be substituted.
A template can also be modified at a third level that can include additional layers. For example, new tools may be added to the chain which do not change the overall flow. For instance, a packer input/output that can be defined as EXE->EXE can be added after a tool that generates an independent EXE file.
It should be understood that in some embodiments, additional layers can be added to increase the number of adjustable parameters. For example, various degrees of variability may be implemented for parameters, implementations, and layers, and the degree and type of variations may be adjustable.
To generate the variations in the implemented levels, a random generator may be implemented to determine the specific parameters for each fuzzed payload. For example, for each tag, variants may be randomly generated. In other embodiments, for each set of selected inputs and/or outputs, the adjustable parameters may be selected using a deterministic selection method.
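A minimal sketch of this fuzzing process is shown below, assuming a simple template structure and using a seeded random generator to vary tool parameters (level one), swap equivalent implementations (level two), and optionally append a flow-preserving layer such as an EXE->EXE packer (level three); all tool names, parameter ranges, and equivalence groups are assumptions:

```python
import copy
import random

# Alternative implementations sharing the same input/output contract (level two)
# and optional flow-preserving layers (level three); all entries are assumptions.
IMPLEMENTATIONS = {"dropper": ["dropper_batch", "dropper_vbscript"]}
OPTIONAL_LAYERS = [{"tool": "packer", "params": {"io": "EXE->EXE"}}]


def fuzz_template(template, count, seed=0):
    """Derive template variants by varying parameters, implementations, and layers."""
    rng = random.Random(seed)
    variants = []
    for _ in range(count):
        variant = copy.deepcopy(template)
        for step in variant["chain"]:
            # Level one: vary numeric tool parameters such as a buffer size.
            if "buffer_size" in step["params"]:
                step["params"]["buffer_size"] = rng.choice([256, 1024, 4096])
            # Level two: swap in an equivalent implementation of the same tool.
            if step["tool"] in IMPLEMENTATIONS:
                step["tool"] = rng.choice(IMPLEMENTATIONS[step["tool"]])
        # Level three: optionally append a layer that does not change the flow.
        if rng.random() < 0.5:
            variant["chain"].append(rng.choice(OPTIONAL_LAYERS))
        variants.append(variant)
    return variants


base = {"chain": [{"tool": "dropper", "params": {"buffer_size": 1024}},
                  {"tool": "wrap_email", "params": {"output": "payload.eml"}}]}
print(len(fuzz_template(base, count=5)))
```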
In some embodiments, the vector infrastructure may include an atomic payload builder. The vector infrastructure may be configured to implement native code implementations for the target system. A precompiled binary may be generated for the target system. The generation of binary payloads may include the insertion of dynamic-link libraries (DLLs) that are injected into the binary as well as the workflow that defines how the DLLs should be run, which parameters to include, and in which order.
The framework configured to build the payload may be referred to herein as an atomic payload builder. In some embodiments, the atomic payload builder may include payload functionality modules that are implemented as DLLs. The DLLs may be embedded as resources in the final payload. The execution order and parameters may be embedded using a JSON configuration file contained in the executable. The main logic in the payload may execute the workflow.
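The following is a minimal sketch of such workflow-driven execution, with Python callables standing in for the DLL functionality modules and a JSON configuration listing the modules, parameters, and execution order; the module names and configuration keys are illustrative assumptions, and no actual payload behavior is implemented:

```python
import json

# Python callables stand in for the DLL functionality modules; module names,
# configuration keys, and values are illustrative assumptions only.
MODULES = {
    "stage_file": lambda params: print("staging", params.get("path")),
    "open_doc": lambda params: print("simulating open with", params.get("autorun")),
}

# A JSON configuration of this general shape could be embedded in the
# executable to define the execution order and parameters.
EMBEDDED_CONFIG = json.dumps({
    "workflow": [
        {"module": "stage_file", "params": {"path": "sample.docm"}},
        {"module": "open_doc", "params": {"autorun": "Document_Open"}},
    ]
})


def run_workflow(config_json):
    """Execute each configured module in order with its parameters."""
    config = json.loads(config_json)
    for step in config["workflow"]:
        MODULES[step["module"]](step["params"])


run_workflow(EMBEDDED_CONFIG)
```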
For example, if a macro dropper payload is desired, the vector infrastructure may link together at least three tools to generate the payload. The process may begin with a skeleton input binary into which macro dropper code may be inserted. Arguments may be added for a particular target system. For example, for Office, a “Document_Open” parameter may be selected for the autorun function name in the delivery tool. Additionally, the “document” template may be selected for the inject macro tool. The resulting payload may be an email message “payload.eml.”
The described techniques may be used to verify target system defenses against various attack vectors. The described techniques may further be used to detect and identify gaps in threat coverage for the target system.
In some embodiments, a machine learning model may be implemented to generate payloads for testing a target system. In some configurations, the machine learning model may be configured to utilize supervised, unsupervised, or reinforcement learning techniques to generate payloads. For example, the machine learning model may utilize supervised machine learning techniques by training on the collected threat data. In some embodiments, the machine learning model may also, or alternatively, utilize unsupervised machine learning techniques to determine correlations including, but not limited to, a clustering-based model, a forecasting-based model, a smoothing-based model, or another type of unsupervised machine learning model. In some embodiments, the machine learning model may also, or alternatively, utilize reinforcement learning techniques to generate results. For example, the model may be trained using the input data and, based on feedback, the model may be rewarded based on its output.
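As a hedged illustration of such a feedback loop, the following sketch uses a simple epsilon-greedy selection over hypothetical template families, rewarding a family when a placeholder evaluation reports that the defense under test missed the generated variant; the family names and the evaluation function are stand-in assumptions rather than any actual model or detection logic:

```python
import random

# Hypothetical template families the model can choose to vary next.
FAMILIES = ["macro_dropper", "script_dropper", "archive_delivery"]


def evaluate(family):
    """Placeholder reward: 1.0 when the simulated defense misses the variant."""
    return 1.0 if random.random() < 0.2 else 0.0


def explore(rounds=100, epsilon=0.1):
    """Epsilon-greedy loop that estimates which family exposes the most gaps."""
    value = {f: 0.0 for f in FAMILIES}
    counts = {f: 0 for f in FAMILIES}
    for _ in range(rounds):
        if random.random() < epsilon:
            family = random.choice(FAMILIES)                    # explore
        else:
            family = max(FAMILIES, key=lambda f: value[f])      # exploit
        reward = evaluate(family)
        counts[family] += 1
        # Incremental mean update of the estimated gap rate for this family.
        value[family] += (reward - value[family]) / counts[family]
    return value


print(explore())
```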
Referring to the appended drawings, in which like numerals represent like elements throughout the several FIGURES, aspects of various technologies for generating and verifying attack vectors will be described. In the following detailed description, references are made to the accompanying drawings that form a part hereof, and which show, by way of illustration, specific configurations or examples.
The authentication server 130 may be configured to handle the authorization or rejection of login attempts carried in authentication traffic. Although not illustrated, one of skill in the art will appreciate that various servers and intermediaries in a distributed network may be implemented between the devices 110 and the gateway 120 to route a message between the user and the network 170. As will also be appreciated, although some components of the example environment 100 are illustrated singly, in various aspects multiple copies of those components may be deployed, for example, for load balancing purposes, redundancy, or offering multiple services.
In some embodiments, updating or creation of payloads may be enabled within a contextual environment of an application, such as a word processing application for creating or editing payloads. In other embodiments, the updating or creation of payloads may be enabled using a separate user interface application. Either embodiment may be illustrated by application 141 in this example. A user can interact with an application 141 to create and edit payloads, and to view, add, or edit content. The application 141 may be configured to display a tool/template pane 191 on a UI 190. The tool/template pane 191 may be used to view available tools, templates, or other parameters for selection and insertion into a payload.
The content provided using the tool/template pane 191 can be used to generate inputs for generating a payload by vector infrastructure 180. In some configurations, the inputs to the vector infrastructure 180 can be in the form of text strings, files, or any other suitable format. Although vector infrastructure 180 is shown as a separate platform, vector infrastructure 180 may be implemented as a shared platform with other aspects of network 170.
The devices 110 are illustrative of various computing systems including, without limitation, desktop computer systems, wired and wireless computing systems, mobile computing systems (e.g., mobile telephones, netbooks, tablet or slate type computers, notebook computers, and laptop computers), hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, printers, and mainframe computers. The hardware of these computing systems is discussed in greater detail in regard to
The devices 110 may be accessed locally and/or by a network, which may include the Internet, a Local Area Network (LAN), a private distributed network for an entity (e.g., a company, a university, a government agency), a wireless ad hoc network, a Virtual Private Network (VPN) or other direct data link (e.g., Bluetooth connection, a direct wired link). For example, a malicious party may attempt to obtain a certificate for accessing restricted resources which may be done without the knowledge or consent of the devices' owners.
The gateway 120 may be a hardware device, such as a network switch, or a software service that links the devices 110 from the external network (e.g., the Internet) to the authentication server 130 over the network 170 (e.g., an intranet). In various aspects, the gateway device 120 may provide a firewall and may regulate the flow of communications traffic into and out of the local network 170. The gateway 120 may be configured to forward messages to the authentication server 130 from the devices 110 (as well as other devices on the internal network).
The authentication server 130 may receive authorization requests from the devices 110 and determine whether to grant access to accounts served by the network 170. The authentication server 130 may be a physical machine or a virtual machine that handles the authentication requests for the network 170 and acts as a domain controller. The authentication server 130 may use various authentication protocols including, but not limited to, PAP (Password Authentication Protocol), CHAP (Challenge-Handshake Authentication Protocol), EAP (Extensible Authentication Protocol), Kerberos, or an AAA (Authentication, Authorization, Accounting) architecture protocol, to allow a user access to one or more systems within a network 170. Depending on the standards used, the number of protected systems in the network 170 and user account settings, the successful presentation of authentication parameters will grant the devices 110 access to one or more systems safeguarded by the authentication server 130 and at an appropriate permissions level for the associated user.
Referring to
Referring to
Referring to
Referring to
Data center 700 may include servers 716a, 716b, and 716c (which may be referred to herein singularly as “a server 716” or in the plural as “the servers 716”) that provide computing resources available as virtual machines 718a and 718b (which may be referred to herein singularly as “a virtual machine 718” or in the plural as “the virtual machines 718”). The virtual machines 718 may be configured to execute applications such as Web servers, application servers, media servers, database servers, and the like. Other resources that may be provided include data storage resources (not shown on
Referring to
Communications network 730 may provide access to computers 702. Computers 702 may be computers utilized by users 700. Computer 702a, 702b, or 702c may be a server, a desktop or laptop personal computer, a tablet computer, a smartphone, a set-top box, or any other computing device capable of accessing data center 700. User computer 702a or 702b may connect directly to the Internet (e.g., via a cable modem). User computer 702c may be internal to the data center 700 and may connect directly to the resources in the data center 700 via internal networks. Although only three user computers 702a, 702b, and 702c are depicted, it should be appreciated that there may be multiple user computers.
Computers 702 may also be utilized to configure aspects of the computing resources provided by data center 700. For example, data center 700 may provide a Web interface through which aspects of its operation may be configured through the use of a Web browser application program executing on user computer 702. Alternatively, a stand-alone application program executing on user computer 702 may be used to access an application programming interface (API) exposed by data center 700 for performing the configuration operations.
Servers 716 may be configured to provide the computing resources described above. One or more of the servers 716 may be configured to execute a manager 720a or 720b (which may be referred to herein singularly as “a manager 720” or in the plural as “the managers 720”) configured to execute the virtual machines. The managers 720 may be a virtual machine monitor (VMM), fabric controller, or another type of program configured to enable the execution of virtual machines 718 on servers 716, for example.
It should be appreciated that although the embodiments disclosed above are discussed in the context of virtual machines, other types of implementations can be utilized with the concepts and technologies disclosed herein.
In the example data center 700 shown in
It should be appreciated that the network topology illustrated in
It should also be appreciated that data center 700 described in
Turning now to
It should be understood by those of ordinary skill in the art that the operations of the methods disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations have been presented in the demonstrated order for ease of description and illustration. Operations may be added, omitted, performed together, and/or performed simultaneously, without departing from the scope of the appended claims.
It should also be understood that the illustrated methods can end at any time and need not be performed in their entireties. Some or all operations of the methods, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on a computer-storage media, as defined herein. The term “computer-readable instructions,” and variants thereof, as used in the description and claims, is used expansively herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based, programmable consumer electronics, combinations thereof, and the like. Although the example routine described below is operating on a computing device, it can be appreciated that this routine can be performed on any computing system which may include a number of computers working in concert to perform the operations disclosed herein.
Thus, it should be appreciated that the logical operations described herein are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system such as those described herein and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof.
Referring to
Operation 801 may be followed by operation 803. Operation 803 illustrates receiving, via the user interface, a selection of a threat type.
Operation 803 may be followed by operation 805. Operation 805 illustrates receiving, via the user interface, a selection of one or more selectable parameters for delivery of the threat type to the target system.
Operation 805 may be followed by operation 807. Operation 807 illustrates communicating, by the user interface to the attack vector infrastructure, data indicative of the selected threat type and the selected parameters.
Operation 807 may be followed by operation 809. Operation 809 illustrates, in response to receiving the data, accessing a base binary executable and a library comprising functions for generating attack vectors.
Operation 809 may be followed by operation 811. Operation 811 illustrates adding, to the base binary executable, one or more functions from the library based on the selected threat type and the selected parameters.
Operation 811 may be followed by operation 813. Operation 813 illustrates generating a payload that implements the selected threat type and the selected parameters in a delivery format based on the selected parameters.
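A minimal sketch of operations 809 through 813, with operations 801 through 807 represented by the function arguments, might look as follows; the in-memory base binary, function library, and packaging step are hypothetical stand-ins rather than an actual implementation:

```python
# Hypothetical in-memory stand-ins for the base binary executable and the
# library of attack-vector functions; the names and contents are illustrative.
FUNCTION_LIBRARY = {
    "macro_dropper": b"<dropper logic placeholder>",
    "zip_delivery": b"<zip packaging placeholder>",
}


def build_attack_vector(threat_type, parameters):
    """Compose a payload from the selections received in operations 801-807."""
    base_binary = bytearray(b"<base executable skeleton>")       # operation 809
    for name in [threat_type, *parameters.get("delivery", [])]:
        base_binary += FUNCTION_LIBRARY.get(name, b"")           # operation 811
    # Package in the selected delivery format (operation 813), e.g., an .eml file.
    return bytes(base_binary)


payload = build_attack_vector("macro_dropper", {"delivery": ["zip_delivery"]})
print(len(payload))
```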
In an embodiment, the selected threat type and the selected parameters are defined using JavaScript Object Notation (JSON).
In an embodiment, the selectable parameters comprise templates defining predetermined attack scenarios.
In an embodiment, the method further comprises generating fuzzed payloads that are variants of the generated payload.
In an embodiment, the fuzzed payloads are generated by randomly varying the selectable parameters.
In an embodiment, the fuzzed payloads are generated by deterministically varying the selectable parameters.
In an embodiment, the fuzzed payloads are generated based on machine learning.
Turning now to
Operation 901 may be followed by operation 903. Operation 903 illustrates receiving, via the user interface, a selection of one or more selectable parameters for delivery of the threat type to the target system.
Operation 903 may be followed by operation 905. Operation 905 illustrates in response to selection of the threat type and the selected parameters, accessing a base binary executable and a library comprising functions for generating attack vectors.
Operation 905 may be followed by operation 907. Operation 907 illustrates adding, to the base binary executable, one or more functions from the library based on the selected threat type and the selected parameters.
Operation 907 may be followed by operation 909. Operation 909 illustrates generating a payload that implements the selected threat type and the selected parameters in a delivery format based on the selected parameters.
In an embodiment, the user interface is a graphical user interface comprising an interactive area configured to enable selection of the selectable parameters.
In an embodiment, the selectable parameters comprise tags or labels that identify one or more properties for generating samples or attack simulations.
In an embodiment, the delivery format comprises one or more of a macro, zip file, or email.
In an embodiment, the selectable parameters comprise templates defining predetermined attack scenarios.
In an embodiment, the acts comprise generating fuzzed payloads that are variants of the generated payload.
In an embodiment, the fuzzed payloads are generated by randomly varying the selectable parameters.
In an embodiment, the fuzzed payloads are generated by deterministically varying the selectable parameters.
In an embodiment, the fuzzed payloads are generated based on machine learning.
The various aspects of the disclosure are described herein with regard to certain examples and embodiments, which are intended to illustrate but not to limit the disclosure. It should be appreciated that the subject matter presented herein may be implemented as a computer process, a computer-controlled apparatus, or a computing system or an article of manufacture, such as a computer-readable storage medium. While the subject matter described herein is presented in the general context of program modules that execute on one or more computing devices, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures and other types of structures that perform particular tasks or implement particular abstract data types.
Those skilled in the art will also appreciate that the subject matter described herein may be practiced on or in conjunction with other computer system configurations beyond those described herein, including multiprocessor systems. The embodiments described herein may also be practiced in distributed computing environments, where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Networks established by or on behalf of a user to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to a distributed set of clients may be referred to as a service provider. Such a network may include one or more data centers such as data center 300 illustrated in
In some embodiments, a computing device that implements a portion or all of one or more of the technologies described herein, including the techniques to verify a target system against one or more security threats may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.
In various embodiments, computing device 1000 may be a uniprocessor system including one processor 1010 or a multiprocessor system including several processors 1010 (e.g., two, four, eight, or another suitable number). Processors 1010 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 1010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 1010 may commonly, but not necessarily, implement the same ISA.
System memory 1020 may be configured to store instructions and data accessible by processor(s) 1010. In various embodiments, system memory 1020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques and data described above, are shown stored within system memory 1020 as code 1025 and data 1026.
In one embodiment, I/O interface 1030 may be configured to coordinate I/O traffic between the processor 1010, system memory 1020, and any peripheral devices in the device, including network interface 1040 or other peripheral interfaces. In some embodiments, I/O interface 1030 may perform any necessary protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory 1020) into a format suitable for use by another component (e.g., processor 1010). In some embodiments, I/O interface 1030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 1030 may be split into two or more separate components. Also, in some embodiments some or all of the functionality of I/O interface 1030, such as an interface to system memory 1020, may be incorporated directly into processor 1010.
Network interface 1040 may be configured to allow data to be exchanged between computing device 1000 and other device or devices 1080 attached to a network or network(s) 1050, such as other computer systems or devices as illustrated in
In some embodiments, system memory 1020 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above for
Various storage devices and their associated computer-readable media provide non-volatile storage for the computing devices described herein. Computer-readable media as discussed herein may refer to a mass storage device, such as a solid-state drive, a hard disk or CD-ROM drive. However, it should be appreciated by those skilled in the art that computer-readable media can be any available computer storage media that can be accessed by a computing device.
By way of example, and not limitation, computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing devices discussed herein. For purposes of the claims, the phrase “computer storage medium,” “computer-readable storage medium” and variations thereof, does not include waves, signals, and/or other transitory and/or intangible communication media, per se.
Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like. For example, if the computer-readable media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon.
As another example, the computer-readable media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.
In light of the above, it should be appreciated that many types of physical transformations take place in the disclosed computing devices in order to store and execute the software components and/or functionality presented herein. It is also contemplated that the disclosed computing devices may not include all of the illustrated components shown in
Although the various configurations have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein.
It should be appreciated that any reference to “first,” “second,” etc. items and/or abstract concepts within the description is not intended to and should not be construed to necessarily correspond to any reference of “first,” “second,” etc. elements of the claims. In particular, within this Summary and/or the following Detailed Description, items and/or abstract concepts such as, for example, individual computing devices and/or operational states of the computing cluster may be distinguished by numerical designations without such designations corresponding to the claims or even other paragraphs of the Summary and/or Detailed Description. For example, any designation of a “first operational state” and “second operational state” of the computing cluster within a paragraph of this disclosure is used solely to distinguish two different operational states of the computing cluster within that specific paragraph—not any other paragraph and particularly not the claims.
In closing, although the various techniques have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.