A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
When interactions are coordinated and/or carried out with various users, there can be a need to identify users and/or aspects or parameters thereof (e.g., users that are fully or partially publicly-owned companies), as certain interactions with particular types of users may be subject to additional constraints (e.g., legal constraints, regulatory constraints, etc.) and/or risks. Accordingly, a need exists to proactively identify such users. However, conventional user interaction management approaches are resource-intensive and error-prone, often leading to increased security risks.
Illustrative embodiments of the disclosure provide techniques for security-related risk detection using artificial intelligence techniques.
An exemplary computer-implemented method includes obtaining data related to at least one user associated with at least one interaction, and identifying information pertaining to the at least one user by processing at least a portion of the obtained data using one or more artificial intelligence techniques. The method additionally includes determining, based at least in part on at least a portion of the identified information, one or more security risks associated with the at least one user within a context of the at least one interaction. Further, the method also includes performing one or more automated actions based at least in part on the one or more determined security risks.
Illustrative embodiments can provide significant advantages relative to conventional user interaction management approaches. For example, problems associated with resource-intensive and error-prone techniques are overcome in one or more embodiments through automatically detecting security-related risk information pertaining to one or more users by leveraging one or more artificial intelligence techniques.
These and other illustrative embodiments described herein include, without limitation, methods, apparatus, systems, and computer program products comprising processor-readable storage media.
Illustrative embodiments will be described herein with reference to exemplary computer networks and associated computers, servers, network devices or other types of processing devices. It is to be appreciated, however, that these and other embodiments are not restricted to use with the particular illustrative network and device configurations shown. Accordingly, the term “computer network” as used herein is intended to be broadly construed, so as to encompass, for example, any system comprising multiple networked processing devices.
The user devices 102 may comprise, for example, mobile telephones, laptop computers, tablet computers, desktop computers or other types of computing devices. Such devices are examples of what are more generally referred to herein as “processing devices.” Some of these processing devices are also generally referred to herein as “computers.”
The user devices 102 in some embodiments comprise respective computers associated with a particular company, organization or other enterprise. In addition, at least portions of the computer network 100 may also be referred to herein as collectively comprising an “enterprise network.” Numerous other operating scenarios involving a wide variety of different types and arrangements of processing devices and networks are possible, as will be appreciated by those skilled in the art.
Also, it is to be appreciated that the term “user” in this context and elsewhere herein is intended to be broadly construed so as to encompass, for example, human, hardware, software or firmware entities, as well as various combinations of such entities.
The network 104 is assumed to comprise a portion of a global computer network such as the Internet, although other types of networks can be part of the computer network 100, including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a Wi-Fi or WiMAX network, or various portions or combinations of these and other types of networks. The computer network 100 in some embodiments therefore comprises combinations of multiple different types of networks, each comprising processing devices configured to communicate using internet protocol (IP) or other related communication protocols.
Additionally, automated security risk detection system 105 can have an associated user-related risk information database 106 configured to store data pertaining to one or more users (e.g., identifying information, geographic information, historical user activity data, etc.), data pertaining to one or more security risks (e.g., historical security risk identification data pertaining to users, enterprises and/or companies, geographies, etc.), and/or risk-related action data (e.g., risk remediation data, notification data, etc.).
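By way of non-limiting illustration, the following Python sketch shows one hypothetical way that such records might be organized; the class and field names are assumptions introduced here for illustration only and are not mandated by user-related risk information database 106.

    # Hypothetical record layouts for a user-related risk information database.
    # All class and field names are illustrative assumptions, not a required schema.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class UserRecord:
        user_id: str
        name: str                                   # identifying information
        country_code: str                           # geographic information
        activity_history: list = field(default_factory=list)  # historical activity

    @dataclass
    class SecurityRiskRecord:
        risk_id: str
        user_id: str
        risk_type: str                              # e.g., "public_company", "geography"
        detected_at: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class RiskActionRecord:
        action_id: str
        risk_id: str
        action_type: str                            # e.g., "remediation", "notification"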
The user-related risk information database 106 in the present embodiment is implemented using one or more storage systems associated with automated security risk detection system 105. Such storage systems can comprise any of a variety of different types of storage including network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage.
Also associated with automated security risk detection system 105 are one or more input-output devices, which illustratively comprise keyboards, displays or other types of input-output devices in any combination. Such input-output devices can be used, for example, to support one or more user interfaces to automated security risk detection system 105, as well as to support communication between automated security risk detection system 105 and other related systems and devices not explicitly shown.
Additionally, automated security risk detection system 105 in the FIG. 1 embodiment is assumed to be implemented using at least one processing device. Each such processing device generally comprises at least one processor and an associated memory, and implements one or more functional modules for controlling certain features of automated security risk detection system 105.
More particularly, automated security risk detection system 105 in this embodiment can comprise a processor coupled to a memory and a network interface.
In one or more embodiments, the processor can illustratively comprise a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other type of processing circuitry, as well as portions and/or combinations of such circuitry elements.
The memory can illustratively comprise random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. The memory and other memories disclosed herein may be viewed as examples of what are more generally referred to as “processor-readable storage media” storing executable computer program code or other types of software programs.
One or more embodiments include articles of manufacture, such as, for example, computer-readable storage media. Examples of an article of manufacture include, without limitation, a storage device such as a storage disk, a storage array or an integrated circuit containing memory, as well as a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. These and other references to “disks” herein are intended to refer generally to storage devices, including solid-state drives (SSDs), and should therefore not be viewed as limited in any way to spinning magnetic media.
The network interface allows automated security risk detection system 105 to communicate over the network 104 with the user devices 102, and illustratively comprises one or more conventional transceivers.
The automated security risk detection system 105 further comprises artificial intelligence-based user identification engine 112, security risk determination engine 114, and automated action generator 116.
It is to be appreciated that this particular arrangement of elements 112, 114 and 116 illustrated in the automated security risk detection system 105 of the FIG. 1 embodiment is presented by way of example only, and alternative arrangements can be used in other embodiments.
At least portions of elements 112, 114 and 116 may be implemented at least in part in the form of software that is stored in memory and executed by a processor.
It is to be understood that the particular set of elements shown in FIG. 1 for automatically detecting user-related security risks is presented by way of illustrative example only, and in other embodiments additional or alternative elements may be used.
An exemplary process utilizing elements 112, 114 and 116 of an example automated security risk detection system 105 in computer network 100 will be described in more detail with reference to the flow diagram of FIG. 6.
Accordingly, at least one embodiment includes automatically detecting one or more user-related and/or user-specific security-related risks using one or more artificial intelligence techniques. As detailed herein, in various instances and/or contexts, needs can exist in identifying users and/or aspects or parameters thereof ahead of and/or during one or more interactions. In an example use case, there can be a need to identify one or more risks associated with a given user while preparing quotes for the user in connection with a given transaction. To help identify such risks, an example embodiment can include adding pricing details (e.g., maximum pricing details) to the quotes provided to the user (e.g., a distributor) and informing the user of rights to inspect the pricing provided to one or more additional users associated with the given transaction (e.g., one or more partners of the user). In such an example embodiment, adding maximum pricing details can include providing a directional indication to the end user about setting at least one upper limit on the price at which the given product(s) can be sold, and incorporating language indicating the same into the quote(s).
As detailed in connection with, for example, FIG. 1, at least one embodiment includes obtaining and processing data pertaining to a given user (e.g., user name information, user address information, user billing information, and/or user contact information) to identify whether the given user meets one or more predetermined and/or predefined risk-related qualifiers and/or parameters.
By way of example, in one or more embodiments identifying whether a given user meets one or more predetermined and/or predefined risk-related qualifiers and/or parameters can include processing data pertaining to the given user, as noted above, using one or more artificial intelligence techniques. Such artificial intelligence techniques can include clustering algorithms implemented in connection with at least one density model, at least one hierarchical model, at least one centroid model, and/or at least one distribution model. The above-noted clustering algorithms can include, for example, at least one K-means clustering algorithm, at least one mean-shift algorithm, at least one density-based spatial clustering of applications with noise (DBSCAN) algorithm, at least one balanced iterative reducing and clustering using hierarchies (BIRCH) algorithm, at least one affinity propagation algorithm, at least one ordering points to identify cluster structure (OPTICS) algorithm, at least one divisive hierarchical algorithm, at least one agglomerative hierarchical algorithm, and/or at least one expectation-maximization clustering using Gaussian mixture models algorithm. In one or more embodiments, such a clustering algorithm can be implemented to identify one or more anomalies in the data and flag any discrepancy in one or more predefined parameters.
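By way of non-limiting illustration, the following Python sketch shows how one of the above-noted clustering algorithms (here, DBSCAN as provided by scikit-learn) could be applied to flag anomalous user-related records; the feature construction and the eps and min_samples values are assumptions for illustration only.

    # Minimal sketch: flagging anomalous user-related records with DBSCAN.
    # Feature construction and the eps/min_samples values are assumptions.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    def flag_anomalies(user_features: np.ndarray) -> np.ndarray:
        """Return a boolean mask marking records that fall outside all clusters."""
        scaled = StandardScaler().fit_transform(user_features)
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(scaled)
        return labels == -1  # DBSCAN labels noise points (potential anomalies) as -1

    # Example usage: rows are users, columns are numeric risk-related parameters.
    rng = np.random.default_rng(0)
    features = rng.random((100, 4))
    print(flag_anomalies(features).sum(), "records flagged for review")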
Additionally, at least one embodiment can include generating one or more flags and/or notifications in connection with any determined risks, and outputting such flags and/or notifications to the one or more additional users related to the given transaction and/or the given user involved in the given transaction. A flag, in connection with one or more embodiments, can include any information about the given transaction that highlights one or more potential risks (e.g., risks determined and/or detected using one or more artificial intelligence techniques, such as further detailed herein). For example, such risk-related information can include the amount of revenue associated with the given transaction, the customer address associated with the given transaction, etc.
One or more embodiments can also include scaling techniques used in identifying whether a given user meets one or more predetermined and/or predefined risk-related qualifiers and/or parameters to fit different demands and/or constraints associated with a given user interaction. In such an embodiment, scaling refers to enabling at least one custom configuration of techniques that can be implemented to detect a variety of transaction-related risks and/or security-related risks associated with one or more given transactions. Additionally (such as described in connection with FIG. 3, for example), such techniques can be implemented in connection with one or more additional systems and/or data sources.
By way merely of example, consider an example use case involving a quoting system and a sales team, implemented in connection with a given interaction with a given user. In one or more embodiments, in such an example use case, utilizing an automated security risk detection system can include implementing messaging and/or flagging capability in connection with the quoting system to educate a sales team while dealing with the given user, which has been identified as a public company and/or an agent thereof. Such a user identification can result in additional security risks being added to the given interaction, and as such, the sales representative involved in preparing the quote can be made aware, via the automated security risk detection system, of the user's status such that the sales representative can choose to proceed in accordance with one or more precautions and/or modifications to the interaction (e.g., flagging a record associated with the interaction as a public company interaction). Additionally or alternatively, the automated security risk detection system can initiate one or more automated actions in response to the user identification, such as, e.g., automatically flagging a record associated with the interaction as a public company interaction, generating and outputting one or more notifications to additional and/or interaction-related users, updating one or more pricing parameters related to one or more risks associated with this particular user identification, etc.
In at least one embodiment, identifying whether the given user meets one or more predetermined and/or predefined risk-related qualifiers and/or parameters can include implementing at least one configurable rule engine. Such a rule engine can be configured, for example, to determine, in part by leveraging one or more artificial intelligence techniques, one or more geography-based parameters in connection with identifying one or more state-owned entities, wherein different geographies can be associated with different risks and/or different risk levels. Such artificial intelligence techniques can include, for example, one or more clustering algorithms (such as detailed above and herein) and/or one or more other types of unsupervised learning techniques (e.g., principal component analysis, independent component analysis, Apriori algorithm, etc.).
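A minimal sketch of such a configurable rule engine appears below; the rule definitions, geography names, and risk levels are hypothetical and are shown only to illustrate the configurability described above.

    # Minimal sketch of a configurable rule engine; the geography-to-risk-level
    # mapping, rule names and risk levels are hypothetical assumptions.
    GEOGRAPHY_RISK_LEVELS = {"REGION_A": "high", "REGION_B": "medium"}

    RULES = [
        lambda user: ("geography", GEOGRAPHY_RISK_LEVELS.get(user.get("country_code"))),
        lambda user: ("state_owned", "high" if user.get("state_owned_flag") else None),
    ]

    def evaluate(user: dict) -> dict:
        """Apply each configured rule; keep only the triggered risk indicators."""
        results = (rule(user) for rule in RULES)
        return {name: level for name, level in results if level is not None}

    print(evaluate({"country_code": "REGION_A", "state_owned_flag": True}))
    # -> {'geography': 'high', 'state_owned': 'high'}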
Additionally, in one or more embodiments, an automated security risk detection system monitors data streaming through and/or from one or more additional systems (e.g., one or more seller systems, one or more web-based e-commerce systems, one or more accounting systems, etc.), wherein such monitoring can include establishing one or more risk indicators by flagging corresponding interactions (or data related thereto) and determining one or more automated actions to be carried out and/or initiated in connection with such indicators.
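The following sketch illustrates one possible shape of such monitoring; the event fields and the callback interfaces are assumptions introduced for illustration.

    # Minimal sketch of monitoring interaction data streamed from additional
    # systems; the event fields and callback interfaces are assumptions.
    def monitor(events, detect_risks, dispatch_action):
        """Flag each streamed interaction that carries one or more risk indicators."""
        for event in events:
            indicators = detect_risks(event)        # e.g., a rule engine as sketched above
            if indicators:
                event["risk_flags"] = indicators    # establish risk indicator(s)
                dispatch_action(event)              # initiate automated action(s)

    # Example usage with hypothetical inline callbacks.
    monitor(
        [{"id": 1, "country_code": "REGION_A"}],
        detect_risks=lambda e: {"geography": "high"} if e.get("country_code") == "REGION_A" else {},
        dispatch_action=print,
    )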
In an example embodiment such as depicted in FIG. 3, automated security risk detection system 305 obtains and processes quote information generated by one or more price quoting systems 310-1 to determine whether the deal associated with a given quote meets one or more predefined criteria (e.g., whether the deal involves a transaction valued over a given amount and/or is to be carried out in accordance with one or more predetermined geographic conditions).
In a scenario wherein automated security risk detection system 305 determines that the criteria are met (e.g., by processing at least a portion of the quote information provided by price quoting system(s) 310-1), automated security risk detection system 305 can then query account repository system(s) 310-2 for information pertaining to the user(s) involved in the deal associated with the quote. Such information can then be processed by automated security risk detection system 305 to detect one or more risk parameters associated with the user(s).
In determining, for example, whether a given user associated with the quote is a public company or agent thereof, automated security risk detection system 305 can seek, from account repository system(s) 310-2, a <Company Number> attributed to the given user. If available, the automated security risk detection system 305 can compare the received <Company Number> against at least one set of known and/or established <Company Number> values associated with public companies.
Additionally, in such an example embodiment, the automated security risk detection system 305 can further check and/or determine if the at least one set of known and/or established <Company Number> values associated with public companies has been updated within a given temporal period (e.g., in the past six months). If yes (that is, the at least one set of known and/or established <Company Number> values associated with public companies has been updated within the given temporal period), then a determination is made, via comparison of the received <Company Number> against the at least one set of known and/or established <Company Number> values associated with public companies, as to whether the given user is a public company or agent thereof, and if so, then a flag is generated and associated with the given user (e.g., associated with the received <Company Number> in the account repository system(s) 310-2).
If no (that is, the at least one set of known and/or established <Company Number> values associated with public companies has not been updated within the given temporal period), automated security risk detection system 305 can then request that account repository system(s) 310-2 update the at least one set of known and/or established <Company Number> values associated with public companies. Also, the automated security risk detection system 305 can place the deal on hold until the update has been carried out, and subsequent to the update, a determination can be made, as detailed above, as to whether the given user is a public company or agent thereof.
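A hedged sketch of this check-and-update flow follows; the repository interface, the deal-hold mechanism, and the six-month freshness window are assumptions introduced for illustration, not a required implementation.

    # Minimal sketch of the company-number check described above; the repository
    # and deal interfaces and the six-month window are hypothetical assumptions.
    from datetime import datetime, timedelta

    def check_public_company(company_number, repository, deal):
        """Return True (and flag the user) if the number matches a public company."""
        if repository.last_updated() < datetime.utcnow() - timedelta(days=180):
            deal.place_on_hold()            # pause the deal until the set is refreshed
            repository.request_update()     # ask the account repository to update
        if company_number in repository.public_company_numbers():
            repository.flag_as_public_company(company_number)
            return True
        return False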
Step 436 includes encapsulating a flag packet on the quote system, wherein the flag packet is generated based at least in part on the processing of the price quote by the automated security risk detection system. The annotated price quote, including the encapsulated flag packet, can then be interpreted by a quote system (which can, for example, proceed with the order in accordance with any conditions included as part of the flag packet). Also, step 438 includes sending the annotated price quote to at least one team and/or automated system for further processing and/or inspection. For example, price quotes involving users identified as public entities can be subject to additional inspection and/or validation.
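One hypothetical shape for such an encapsulated flag packet, carried with the annotated price quote, is sketched below; all key names and example values are assumptions for illustration only.

    # Hypothetical shape of a flag packet encapsulated with an annotated price
    # quote; all key names and example values are illustrative assumptions.
    import json

    flag_packet = {
        "quote_id": "Q-001",
        "risk_types": ["public_company"],                  # determined risk(s)
        "conditions": ["additional inspection required"],  # conditions for proceeding
    }
    annotated_quote = {"quote_id": "Q-001", "line_items": [], "flag_packet": flag_packet}
    print(json.dumps(annotated_quote, indent=2))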
Step 440 includes at least one representative from the at least one team and/or the automated system determining whether or not to proceed with the annotated price quote (including the flag packet). Step 442 includes rerouting at least a portion of the price quote workflow, if needed. In one or more embodiments, workflow rerouting can include arranging one or more additional meetings and/or interactions with one or more members of a reviewing team and/or leadership to assess the transaction and determine whether to proceed to the next step of the process.
Step 444 includes creating an order, which can include converting the annotated price quote into an order. By way of illustration, in one or more example embodiments, when a quote is converted into an order, such a conversion implies that a corresponding transaction has been fulfilled and paid for by the customer, signifying that the customer agrees to the terms and conditions and that the enterprise has received or will receive corresponding payment. Step 446 includes storing the order and/or the annotated price quote in at least one database. Further, step 448 includes performing analysis on at least a portion of the data stored in the database. In at least one example embodiment, such analysis performed in step 448 can include partitioning and/or otherwise processing at least a portion of the data stored in the database to identify one or more underlying trends and/or patterns (e.g., one or more user buying patterns) across geographies, products, customers, etc.
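The following sketch illustrates one way that the step 448 analysis could partition stored order data to surface such trends; the column names and example rows are assumptions for illustration.

    # Minimal sketch of the step 448 analysis: partitioning stored order data to
    # surface buying patterns across geographies and products; column names and
    # example rows are illustrative assumptions.
    import pandas as pd

    orders = pd.DataFrame([
        {"geography": "REGION_A", "product": "P1", "customer": "C1", "revenue": 120.0},
        {"geography": "REGION_A", "product": "P2", "customer": "C2", "revenue": 80.0},
        {"geography": "REGION_B", "product": "P1", "customer": "C3", "revenue": 200.0},
    ])

    # Partition by geography and product to expose underlying buying patterns.
    trends = orders.groupby(["geography", "product"])["revenue"].agg(["count", "sum"])
    print(trends)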
Additionally, in one or more embodiments, an automated security risk detection system can provision a response received from an account repository service and store such a response in a database (e.g., a database associated with the automated security risk detection system such as user-related risk information database 106 in FIG. 1).
The example pseudocode 500 illustrates a list of inputs, which defines variables as strings, in the given format shown, to be processed by one or more artificial intelligence techniques (e.g., processed by artificial intelligence-based user identification engine 112 in the example embodiment depicted in FIG. 1).
It is to be appreciated that this particular example pseudocode shows just one example implementation of example inputs to be utilized and/or processed in connection with automatically detecting user-related security risks, and alternative implementations can be used in other embodiments.
The example pseudocode 501 illustrates inputs to be processed by one or more artificial intelligence techniques to identify at least one user, and/or information associated therewith, related to a given interaction. Specifically, example pseudocode 501 shows inputs (provided and/or obtained from, for example, an account repository service and/or system) which include a JavaScript Object Notation (JSON) request containing contact information including name information, address line information, zip code information, city information, state information, country information, and phone number information. Additionally, example pseudocode 501 shows a JSON response which includes corresponding account details including account name information, identification number information, enterprise customer number information, address identifier information, country code information, state-owned enterprise flag information, and country information.
It is to be appreciated that this particular example pseudocode shows just one example implementation of example inputs to be utilized and/or processed in connection with automatically detecting user-related security risks, and alternative implementations can be used in other embodiments.
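By way of non-limiting illustration, the following sketch approximates the request and response shapes described above for example pseudocode 501; the exact key names are assumptions inferred from the fields listed, not the actual pseudocode.

    # Hedged approximation of the JSON shapes described for example pseudocode 501;
    # all key names and example values are assumptions based on the fields above.
    import json

    request = {"contact": {
        "name": "Example Corp", "address_line_1": "1 Example Way",
        "zip_code": "00000", "city": "Example City", "state": "EX",
        "country": "US", "phone_number": "555-0100"}}

    response = {"account": {
        "account_name": "Example Corp", "identification_number": "12345",
        "enterprise_customer_number": "ECN-0001", "address_identifier": "ADDR-1",
        "country_code": "US", "state_owned_enterprise_flag": False,
        "country": "United States"}}

    print(json.dumps(request, indent=2))
    print(json.dumps(response, indent=2))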
It is to be appreciated that one or more embodiments described herein utilize one or more artificial intelligence models. The term “model,” as used herein, is intended to be broadly construed and may comprise, for example, a set of executable instructions for generating computer-implemented recommendations and/or predictions. For example, one or more of the artificial intelligence models described herein may be trained to generate recommendations and/or predictions based on user-specific information and corresponding risk parameter information, and such recommendations and/or predictions can be used to initiate one or more automated actions (e.g., automatically adjusting related user interactions, automatically training one or more artificial intelligence techniques using at least a portion of the recommendations and/or predictions, etc.).
In this embodiment, the process includes steps 600 through 606. These steps are assumed to be performed by automated security risk detection system 105 utilizing elements 112, 114 and 116.
Step 600 includes obtaining data related to at least one user associated with at least one interaction. In at least one embodiment, obtaining data related to the at least one user associated with the at least one interaction includes interfacing with one or more systems utilized in connection with executing at least a portion of the at least one interaction.
Step 602 includes identifying information pertaining to the at least one user by processing at least a portion of the obtained data using one or more artificial intelligence techniques. In one or more embodiments, identifying information pertaining to the at least one user includes processing, using one or more natural language processing techniques, data related to one or more of user name information, user address information, user billing information, and user contact information.
Additionally or alternatively, identifying information pertaining to the at least one user can include identifying at least one of one or more geography-related parameters associated with the at least one user, one or more enterprise ownership parameters associated with the at least one user, and one or more alphanumeric identifiers associated with the at least one user. In such an embodiment, determining one or more security risks associated with the at least one user within a context of the at least one interaction (such as detailed in connection with step 604) can include determining one or more risk levels associated with each of the one or more geography-related parameters associated with the at least one user, the one or more enterprise ownership parameters associated with the at least one user, and the one or more alphanumeric identifiers associated with the at least one user.
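A minimal sketch of such per-parameter risk-level determination follows; the parameter names, the example geography, and the known-identifier set are hypothetical and serve only to illustrate mapping identified parameters to risk levels.

    # Minimal sketch of per-parameter risk-level determination; the parameter
    # names, example geography, and known-identifier set are hypothetical.
    KNOWN_PUBLIC_COMPANY_IDS = {"12345"}        # hypothetical known identifiers
    HIGH_RISK_GEOGRAPHIES = {"REGION_A"}        # hypothetical geography list

    def determine_risk_levels(identified: dict) -> dict:
        """Map each identified user parameter to a risk level."""
        levels = {}
        if identified.get("geography") in HIGH_RISK_GEOGRAPHIES:
            levels["geography"] = "high"
        if identified.get("enterprise_ownership") == "state_owned":
            levels["enterprise_ownership"] = "high"
        if identified.get("identifier") in KNOWN_PUBLIC_COMPANY_IDS:
            levels["identifier"] = "high"
        return levels

    print(determine_risk_levels({"geography": "REGION_A", "identifier": "12345"}))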
Step 604 includes determining, based at least in part on at least a portion of the identified information, one or more security risks associated with the at least one user within a context of the at least one interaction. In at least one embodiment, determining one or more security risks associated with the at least one user within a context of the at least one interaction includes identifying one or more features of the at least one interaction associated with security risk assessment. In such an embodiment, identifying one or more features of the at least one interaction associated with security risk assessment can include at least one of determining that the at least one interaction involves a transaction valued over a given amount, and that the at least one interaction is to be carried out in accordance with one or more predetermined geographic conditions.
Step 606 includes performing one or more automated actions based at least in part on the one or more determined security risks. In one or more embodiments, performing the one or more automated actions includes automatically annotating data associated with the at least one interaction to indicate that the at least one user is associated with at least one of the one or more determined security risks. Further, in at least one example embodiment, performing the one or more automated actions can include automatically generating and outputting, to one or more additional users associated with the at least one interaction, at least one notification that the at least one user is associated with at least one of the one or more determined security risks. Additionally or alternatively, performing the one or more automated actions can include automatically training at least a portion of the one or more artificial intelligence techniques using feedback related to at least one of the one or more determined security risks and at least a portion of the identified information.
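The following sketch illustrates one possible composition of such step 606 automated actions; the annotation format, the notification callback, and the retraining hook are assumptions introduced for illustration.

    # Minimal sketch of the step 606 automated actions; the annotation format,
    # notification callback, and retraining hook are hypothetical assumptions.
    def perform_automated_actions(interaction, determined_risks, notify, model):
        # Automatically annotate the interaction record with the determined risks.
        interaction["annotations"] = {"security_risks": determined_risks}
        # Notify additional users associated with the interaction.
        for user in interaction.get("related_users", []):
            notify(user, f"Interaction {interaction['id']} flagged: {determined_risks}")
        # Feed the determined risks back to the model as training feedback.
        model.update(interaction, determined_risks)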
Accordingly, the particular processing operations and other functionality described in conjunction with the flow diagram of FIG. 6 are presented by way of illustrative example only, and should not be construed as limiting the scope of the disclosure in any way. For example, the ordering of the process steps may be varied in other embodiments, or certain steps may be performed concurrently with one another rather than serially.
The above-described illustrative embodiments provide significant advantages relative to conventional approaches. For example, some embodiments are configured to automatically detect security-related items (e.g., user-specific risks) using artificial intelligence techniques. These and other embodiments can effectively overcome problems associated with resource-intensive and error-prone techniques.
It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. Also, the particular types of information processing system features and functionality as illustrated in the drawings and described above are exemplary only, and numerous other arrangements may be used in other embodiments.
As mentioned previously, at least portions of the information processing system 100 can be implemented using one or more processing platforms. A given processing platform comprises at least one processing device comprising a processor coupled to a memory. The processor and memory in some embodiments comprise respective processor and memory elements of a virtual machine or container provided using one or more underlying physical machines. The term “processing device” as used herein is intended to be broadly construed so as to encompass a wide variety of different arrangements of physical processors, memories and other device components as well as virtual instances of such components. For example, a “processing device” in some embodiments can comprise or be executed across one or more virtual processors. Processing devices can therefore be physical or virtual and can be executed across one or more physical or virtual processors. It should also be noted that a given virtual device can be mapped to a portion of a physical one.
Some illustrative embodiments of a processing platform used to implement at least a portion of an information processing system comprise cloud infrastructure including virtual machines implemented using a hypervisor that runs on physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system.
These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components, or portions thereof, are illustratively implemented for use by tenants of such a multi-tenant environment.
As mentioned previously, cloud infrastructure as disclosed herein can include cloud-based systems. Virtual machines provided in such systems can be used to implement at least portions of a computer system in illustrative embodiments.
In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices. By way of example, as detailed herein, a given container of cloud infrastructure can illustratively comprise a Docker container and/or other type of Linux Container (LXC). The containers are run on virtual machines in a multi-tenant environment, although other arrangements are possible. Additionally, the containers are utilized to implement a variety of different types of functionality within the system 100. For example, such containers can be used to implement respective processing devices providing compute and/or storage services of a cloud-based system. Again, containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor.
Illustrative embodiments of processing platforms will now be described in greater detail with reference to FIGS. 7 and 8. As shown in FIG. 7, one such processing platform comprises cloud infrastructure 700, which includes virtual machines (VMs) and/or container sets 702-1, 702-2, . . . 702-L implemented using virtualization infrastructure 704.
The cloud infrastructure 700 further comprises sets of applications 710-1, 710-2, . . . 710-L running on respective ones of the VMs/container sets 702-1, 702-2, . . . 702-L under the control of the virtualization infrastructure 704. The VMs/container sets 702 comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs. In some implementations of the FIG. 7 embodiment, the VMs/container sets 702 comprise respective VMs implemented using virtualization infrastructure 704 that comprises at least one hypervisor.
A hypervisor platform may be used to implement a hypervisor within the virtualization infrastructure 704, wherein the hypervisor platform has an associated virtual infrastructure management system. The underlying physical machines comprise one or more information processing platforms that include one or more storage systems.
In other implementations of the FIG. 7 embodiment, the VMs/container sets 702 comprise respective containers implemented using virtualization infrastructure 704 that provides operating system level virtualization functionality, such as support for Docker containers running on bare metal hosts, or Docker containers running on VMs.
As is apparent from the above, one or more of the processing modules or other components of system 100 may each run on a computer, server, storage device or other processing platform element. A given such element is viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure 700 shown in FIG. 7 may represent at least a portion of one processing platform. Another example of such a processing platform is processing platform 800 shown in FIG. 8.
The processing platform 800 in this embodiment comprises a portion of system 100 and includes a plurality of processing devices, denoted 802-1, 802-2, 802-3, . . . 802-K, which communicate with one another over a network 804.
The network 804 comprises any type of network, including by way of example a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a Wi-Fi or WiMAX network, or various portions or combinations of these and other types of networks.
The processing device 802-1 in the processing platform 800 comprises a processor 810 coupled to a memory 812.
The processor 810 comprises a microprocessor, a CPU, a GPU, a TPU, a microcontroller, an ASIC, an FPGA or other type of processing circuitry, as well as portions or combinations of such circuitry elements.
The memory 812 can comprise random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. The memory 812 and other memories disclosed herein should be viewed as illustrative examples of what are more generally referred to as “processor-readable storage media” storing executable program code of one or more software programs.
Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. A given such article of manufacture comprises, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used.
Also included in the processing device 802-1 is network interface circuitry 814, which is used to interface the processing device with the network 804 and other system components, and may comprise conventional transceivers.
The other processing devices 802 of the processing platform 800 are assumed to be configured in a manner similar to that shown for processing device 802-1 in the figure.
Again, the particular processing platform 800 shown in the figure is presented by way of example only, and system 100 may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices.
For example, other processing platforms used to implement illustrative embodiments can comprise different types of virtualization infrastructure, in place of or in addition to virtualization infrastructure comprising virtual machines. Such virtualization infrastructure illustratively includes container-based virtualization infrastructure configured to provide Docker containers or other types of LXCs.
As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure.
It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform.
Also, numerous other arrangements of computers, servers, storage products or devices, or other components are possible in the information processing system 100. Such components can communicate with other elements of the information processing system 100 over any type of network or other communication media.
By way of example, particular types of storage products that can be used in implementing a given storage system of an information processing system in an illustrative embodiment can include all-flash and hybrid flash storage arrays, scale-out all-flash storage arrays, scale-out NAS clusters, and/or other types of storage arrays. Combinations of multiple ones of these and other storage products can also be used in implementing a given storage system in an illustrative embodiment.
It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Thus, for example, the particular types of processing devices, modules, systems and resources deployed in a given embodiment and their respective configurations may be varied. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.