Enterprise networking refers to the physical, virtual, and/or logical design of a network, and how the various software, hardware, and protocols work together to transmit data. Enterprise networks may include, for example, routers, switches, access points, and different stations. Protocols for designing enterprise architectures can utilize a blueprint for the enterprise network that is based on the type of enterprise network. These blueprints are static in nature, however, and therefore may become outdated as architecture designs improve or technology advances.
The accompanying drawings are incorporated herein and form a part of the specification.
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all example embodiments as contemplated by the inventor(s), and thus, are not intended to limit the appended claims in any way.
The engines described herein may be implemented as cloud-based engines. For example, a cloud-based engine may be an engine that can run applications and/or functionalities using a cloud-based computing system. All or portions of the applications and/or functionalities may be distributed across multiple computing devices, and need not be restricted to only one computing device. In some embodiments, the cloud-based engines can execute functionalities and/or modules that end users access through a web browser or container application without having the functionalities and/or modules installed locally on the end-users' computing devices.
In some embodiments, datastores may include repositories having any applicable organization of data, including tables, comma-separated values (CSV) files, databases (e.g., SQL), or other applicable known organizational formats. Datastores may be implemented, for example, as software embodied in a physical computer-readable medium on a general- or specific-purpose machine, in firmware, in hardware, in a combination thereof, or in an applicable known device or system. Datastore-associated components, such as database interfaces, may be considered part of a datastore, part of some other system component, or a combination thereof.
Datastores can include data structures. In some embodiments, a data structure may be associated with a particular way of storing and organizing data in a computer so that it may be used efficiently within a given context. Data structures may be based on the ability of a computer to fetch and store data at any place in its memory. Thus, some data structures may be based on computing the addresses of data items with arithmetic operations, while other data structures may be based on storing addresses of data items within the structure itself. Many data structures use both principles. The implementation of a data structure can entail writing a set of procedures that create and manipulate instances of that structure. The datastores described herein may be cloud-based datastores that are compatible with cloud-based computing systems and engines.
The server 120 may include a server device (e.g., a host server, a web server, an application server, etc.), a data center device, or a similar device, capable of communicating with the plurality of enterprise networks 104 via the network 125. The server 120 may include a machine learning model 130.
In some embodiments, the machine learning model 130 may be trained using supervised machine learning algorithms, unsupervised machine learning algorithms, or a combination of both, to categorize each of the plurality of enterprise networks 104. For example, the machine learning model 130 may be trained using a clustering technique such as, but not limited to, a K-means clustering algorithm or a support-vector clustering algorithm, to cluster each of the plurality of enterprise networks 104. As one example, the clustering technique may cluster the plurality of enterprise networks 104 based on the number of client devices per access point for each different type of enterprise network, e.g., academic institutions, corporations, etc.
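By way of a non-limiting illustration, a minimal sketch of such a clustering step might resemble the following; the feature choice (average client devices per access point and access point count), the cluster count, the sample values, and the use of scikit-learn are assumptions for demonstration only.

```python
# Illustrative sketch only: cluster enterprise networks by client density.
# The features, cluster count, and sample values are assumptions, not the
# claimed design; in practice the features would typically be scaled.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one enterprise network:
# [average client devices per access point, number of access points]
features = np.array([
    [12.0,  40],   # small office
    [14.5,  55],
    [45.0, 200],   # mid-size corporate campus
    [48.7, 180],
    [85.0, 600],   # large academic institution
    [92.3, 750],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)
print(labels)                    # cluster assignment per enterprise network
print(kmeans.cluster_centers_)   # representative profile of each cluster
```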
Based on the clustered enterprise networks, the machine learning model 130 may be trained to associate the clustered enterprise networks with different enterprise architectures. For example, the machine learning model 130 may be trained using an association algorithm, such as, but not limited to, an apriori algorithm, eclat algorithm, or a frequent-pattern growth (FP-growth) algorithm to determine a correlation between the different categories of enterprises and their respective enterprise architectures.
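For illustration only, the association step could be approximated by simple co-occurrence counting between enterprise categories and architecture features, as in the sketch below; a full apriori or FP-growth implementation generalizes this to arbitrary itemsets, and the categories, features, and threshold shown are assumptions.

```python
# Minimal association sketch: how often does an architecture feature co-occur
# with an enterprise category, and with what confidence (category -> feature)?
# The records and the minimum-confidence threshold are invented for illustration.
from collections import Counter

records = [
    {"category": "university", "features": {"dense_wifi", "guest_vlan", "edge_cache"}},
    {"category": "university", "features": {"dense_wifi", "guest_vlan"}},
    {"category": "corporate",  "features": {"vpn_gateway", "guest_vlan"}},
    {"category": "corporate",  "features": {"vpn_gateway", "edge_cache"}},
]

category_counts = Counter(r["category"] for r in records)
pair_counts = Counter((r["category"], f) for r in records for f in r["features"])

for (category, feature), count in sorted(pair_counts.items()):
    confidence = count / category_counts[category]
    if confidence >= 0.5:  # assumed minimum-confidence threshold
        print(f"{category} -> {feature}: confidence {confidence:.2f}")
```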
In some embodiments, the machine learning model 130 may be further trained using a sequence modeling algorithm. For example, the machine learning model 130 may be trained using data collected from the plurality of enterprise networks 104 using a sequence generation algorithm. In some embodiments, the data collected from the plurality of enterprise networks 104 may be used as a training data set to enable the machine learning model 130 to generate enterprise architectures similar to those of the training data.
In some embodiments, the machine learning model 130 may be further trained using a statistical inference algorithm. For example, the machine learning model 130 may be trained using data collected from the plurality of enterprise networks 104 to enable the machine learning model 130 to generate enterprise architectures based on statistical analyses of the plurality of enterprise networks 104. Using the number of devices per access point as an example, the machine learning model 130 may be trained to analyze the number of devices per access point and then recommend, based on an average number of devices per access point of similar enterprise networks, the number of devices per access point that provides the best performance. Continuing with this example, the machine learning model 130 may also generate the recommendation based on a standard deviation of the number of devices per access point.
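A minimal sketch of this statistical step, with invented sample values, might look like the following.

```python
# Sketch: recommend a devices-per-access-point target from the mean observed
# across similar enterprise networks, and use the standard deviation to flag
# outliers. The sample values are invented for illustration.
import statistics

devices_per_ap = [22, 25, 19, 28, 24, 21, 26]  # similar enterprise networks

mean = statistics.mean(devices_per_ap)
stdev = statistics.stdev(devices_per_ap)

recommended = round(mean)
overload_threshold = mean + stdev  # more than one standard deviation above average
print(f"recommended devices per access point: {recommended}")
print(f"investigate access points serving more than {overload_threshold:.1f} devices")
```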
In some embodiments, the machine learning model 130 may be further trained using a collective inference algorithm. For example, the machine learning model 130 may be trained using the collective inference algorithm in order to make statistical analyses about the enterprise architectures of the plurality of enterprise networks 104 and to simultaneously classify and label the plurality of enterprise networks 104 based on their respective architectures.
The network 125 may include one or more wired and/or wireless networks. For example, the network 125 may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, another type of next generation network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks.
Referring to FIG. 2, in some embodiments, the stations 212 may be client devices, such as wired or wireless devices connected to the network 125. In some embodiments, the stations 212 may be, for example, a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a handheld computer, a gaming device, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, etc.), a desktop computer, a laptop computer, a tablet computer, or a similar type of device. For example, in some embodiments, the stations 212 may be wireless devices, such as a thin client device or an ultra-thin client device that includes a wireless network interface, through which the wireless device can receive data wirelessly through a wireless communication channel. The wireless network interface may be used to send data generated by the wireless device to remote or local systems, servers, engines, or datastores through the network 125. The stations 212 may be referred to as being “on” a wireless network of the enterprise network 104, but may not be the property of the enterprise network 104. For example, the stations 212 may be privately owned devices that access services through a guest or other network of the enterprise network 104, or IoT devices owned by the enterprise network 104 that are on the wireless network.
The network devices 210 may be, for example, routers, switches, access points, gateways, including wireless gateways, repeaters, or any combinations thereof, as should be understood by those of ordinary skill in the art.
The capacity-based service client engine 214 may be an engine that enables a user or artificial agents of each of the plurality of enterprise networks 104 to provide information about the enterprise network 104 to the server 120 and to receive recommendations for an enterprise architecture from the server 120. In some embodiments, the service parameters datastore 208 may be implemented as a shared database that may be updated by more than one party. For example, a party other than the enterprise could access traffic either via a mirror port within a private network of the enterprise or from traffic that is transmitted into or out of the private network on a medium to which that party has access.
In some embodiments, each of the plurality of enterprise networks 104 may store information related to the enterprise architecture in the service parameters datastore 208 of FIG. 2. This information may include, for example, network traffic and performance information, energy performance information, data center performance information, resource deployment performance information, power management performance information, and network security performance information.
In some embodiments, the network traffic and performance information may include, for example, bandwidth, throughput, latency, jitter, and error rate of the devices operating on the enterprise architecture. The network traffic and performance information may also include information such as the number of devices per access point and a corresponding quality of service of the access point. In some embodiments, the energy performance information may include product longevity, data center design, resource deployment, power management, materials recycling, cloud computing, edge computing, telecommuting, or the like. In some embodiments, the data center performance information may include information technology (IT) system parameters, environmental conditions, air management, cooling system parameters, electrical system parameters, and the like. In some embodiments, the resource deployment performance information may include algorithmic efficiency, resource allocation, virtualization, terminal servers, or the like. In some embodiments, the power management performance information may include operating system support, power supply, storage, video card usage, display characteristics, or the like. In some embodiments, the network security performance information may include firewalls, email security, anti-virus/anti-malware software, network segmentation, access control, application security, behavioral analytics, data loss prevention, intrusion prevention, mobile device security, virtual private network (VPN) security, web security, wireless security, or the like.
As shown in FIG. 3, the server 120 may include an enterprise networks datastore 316, an enterprise network resource analysis engine 318, an enterprise network comparison engine 320, an enterprise network needs prediction engine 322, a service capacity recommendation engine 324, and a capacity-based service recommendation server engine 326.
The enterprise networks datastore 316 may store information related to real-world resources of each of the plurality of enterprise networks 104. This information may be implementation- and/or configuration-specific, but for illustrative purposes, may include knowledge of licenses, network capabilities, green initiatives, or the like. In some embodiments, the enterprise networks datastore 316 may store information received from the service parameters datastore 208 of each of the plurality of enterprise networks 104. In some embodiments, the enterprise networks datastore 316 may also store data from third party analytics from government databases, business databases, news sources, social media, or the like. The data can also be obtained from monitoring network traffic, device utilization, localized human activity, or the like.
In some embodiments, the enterprise network resource analysis engine 318 may analyze resources of each of the plurality of enterprise networks 104 represented in the enterprise networks datastore 316. The enterprise network resource analysis engine 318 may store analytics obtained from analyzing each of the plurality of enterprise networks 104 in the enterprise networks datastore 316. In some embodiments, the enterprise network resource analysis engine 318 may use information about the enterprise networks 104 to generate a health score for each of the plurality of enterprise networks 104. As one example, the enterprise network resource analysis engine 318 may determine a health score based on the network performance of each of the plurality of enterprise networks 104.
In some embodiments, the enterprise network comparison engine 320 may be an engine that compares enterprise network parameters for one of the enterprise networks 104 with those of another of the enterprise networks 104 using information in the enterprise networks datastore 316. In some embodiments, the enterprise network comparison engine 320 may compare one of the enterprise networks 104 with other similar enterprises, such as by business sector, enterprise type (e.g., educational institutions, office buildings, corporate campuses, public shopping centers, public parks), employee count, revenue, or the like. The comparison may be useful in order to generate enterprise architectures that closely match the enterprise architectures of enterprises having a similar profile.
In some embodiments, the enterprise network needs prediction engine 322 may determine a resource utilization plan that is appropriate for enterprise needs and goals based on available resources, resource utilization data and analytics, and business plans. This can include reducing the number or capacity of licenses if they are being underused, turning off or putting into sleep mode devices that are being underutilized, directing traffic paths through underutilized network devices, controlling lighting or HVAC in accordance with human activity in locations, preparing service orders for devices that appear to be faulty, and reconfiguring devices to match apparent needs, to name several possibilities. This may also include predicting needs based on peak and off-peak periods based on the individual needs of each of the plurality of enterprise networks 104. Using educational institutions as one example, demands for network resources may be reduced during, for example, summer and winter recesses (e.g., off-peak periods), whereas demands for network resources may surge when classes are in session (e.g., a peak period). This may be achieved using a modeling pipeline that may be based on a combination of one or more techniques, such as a pattern mining technique, a recursive feature elimination technique, a gradient boosting technique, and/or the like. The pattern mining technique may be, for example, a sequential pattern mining technique (e.g., a sequential pattern discovery using equivalence classes (SPADE) technique, a frequent closed sequential pattern mining technique, a vertical mining of maximal sequential patterns (VMSP) technique, and/or the like). In further embodiments, the modeling pipeline may be based on one or more data mining techniques, such as tracking patterns, classification, association, or clustering.
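As one hedged sketch of such a pipeline, a gradient-boosting regressor could be fit to month-of-year features to anticipate peak and off-peak demand; the demand figures and the single-feature design below are assumptions for demonstration.

```python
# Sketch: predict seasonal bandwidth demand for an academic network from the
# month of the year using gradient boosting. Demand values (Gbps) are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

months = np.arange(1, 13).reshape(-1, 1)            # Jan..Dec as the only feature
demand_gbps = np.array([8, 9, 9, 9, 8, 3, 2, 3,     # summer recess dip
                        9, 10, 9, 4])               # winter recess dip

model = GradientBoostingRegressor(random_state=0)
model.fit(months, demand_gbps)

for month in (7, 10):  # July (off-peak) vs. October (peak)
    predicted = model.predict([[month]])[0]
    print(f"month {month}: predicted demand {predicted:.1f} Gbps")
```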
In some embodiments, the service capacity recommendation engine 324 creates recommendations regarding resource utilization for existing enterprise networks, e.g., the plurality of enterprise networks 104, or when developing a new enterprise network. The recommendations may emphasize cost reductions, energy efficiency, infrastructure build-out, and disaster recovery preparedness. It should be understood that these are merely examples, and that other recommendations are further contemplated in accordance with aspects of the present disclosure.
In some embodiments, the capacity-based service recommendation server engine 326 may act as a server to the capacity-based service client engine 214 of FIG. 2. Communications from the plurality of enterprise networks 104 may be characterized as passing through the capacity-based service recommendation server engine 326, including traffic, traffic analytics, energy consumption, or the like, that may be detected automatically with appropriately configured devices, as well as resource parameters, green initiative goals, security goals, or the like, that may be provided by relevant agents of the enterprise networks 104. Such data is assumed to be stored in the enterprise networks datastore 316.
At 1102, the method 1100 may include receiving, at a server, e.g., the server 120 of FIG. 1, historical information from a plurality of enterprise networks, e.g., the plurality of enterprise networks 104 of FIG. 1.
At 1104, the method 1100 may include analyzing, by the server 120, the historical information from the plurality of enterprise networks to generate a network health score for each of the plurality of enterprise networks. For example, the server 120 may be configured to calculate the health score for the enterprise architecture of each of the plurality of enterprise networks 104. This may be achieved using an enterprise network resource analysis engine 318, as shown in FIG. 3.
In some embodiments, the health score may be, for example, based on a scale from zero (0) to one hundred (100), with higher health scores indicating better performance of the enterprise architecture of the enterprise network 104. In some embodiments, generating the network health score for each of the plurality of enterprise networks 104 may include generating an overall network health score for each of the plurality of enterprise networks based on a plurality of subcomponent health scores. For example, the plurality of subcomponents may include, but are not limited to, a device score, a security score, a service score (e.g., domain name system (DNS)/dynamic host configuration protocol (DHCP)), an applications services score, a Wi-Fi score, a network services score (e.g., a round-trip time to an outside network), and/or a client score. It should be understood by those of ordinary skill in the art that these are merely examples of subcomponents, and that more or fewer subcomponents may be used to determine the overall network health score. In some embodiments, the health score may be an average of the plurality of subcomponent scores. In some embodiments, the plurality of subcomponents may be given different weights when determining the health score. In some embodiments, the weight assigned to any given subcomponent may vary from one type of enterprise to another based on the priorities of the enterprise. For example, some enterprises may emphasize providing the best wireless connection possible to users, such that the Wi-Fi score may be given more weight than any of the other subcomponents.
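For illustration, a weighted overall health score could be computed as in the sketch below; the subcomponent names follow the examples above, while the individual scores and weights are assumptions (here favoring Wi-Fi quality).

```python
# Sketch: overall health score as a weighted average of subcomponent scores.
# Scores and weights are invented; the weights sum to 1.0 and favor Wi-Fi.
subcomponent_scores = {
    "device": 88, "security": 92, "service": 75, "application_services": 80,
    "wifi": 64, "network_services": 90, "client": 85,
}
weights = {
    "device": 0.10, "security": 0.15, "service": 0.10, "application_services": 0.10,
    "wifi": 0.30, "network_services": 0.15, "client": 0.10,
}

overall = sum(subcomponent_scores[name] * weights[name] for name in weights)
print(f"overall network health score: {overall:.1f}")  # on a 0-100 scale
```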
At 1106, the method 1100 may also include training a machine learning model, e.g., the machine learning model 130 of FIG. 1, based on the historical information and the network health score for each of the plurality of enterprise networks 104.
At 1108, the method 1100 may further include generating, using the machine learning model 130, an enterprise architecture for a first enterprise network. In some embodiments, the first enterprise network may be a new enterprise network or an existing enterprise network from among the plurality of enterprise networks 104. In some embodiments, generating the enterprise architecture for the first enterprise network may include identifying, using the machine learning model 130, a subset of enterprise networks from among the plurality of enterprise networks 104 having a same category as the first enterprise network; comparing the first enterprise network to the subset of enterprise networks to identify at least one enterprise network, with the comparison being based on one or more parameters for generating the enterprise architecture for the first enterprise network; and generating the enterprise architecture for the first enterprise network based on the enterprise architecture of the identified at least one enterprise network.
That is, by aggregating and analyzing the information of each enterprise network of the plurality of enterprise networks 104 and classifying each of the plurality of enterprise networks 104, the server 120, using the machine learning model 130, may provide recommendations for enterprises of a similar type. For example, the server 120 may receive a request to generate an enterprise architecture for a new enterprise network, and the server 120 may use the machine learning model 130 to identify enterprise networks that match a profile of the requesting enterprise network and retrieve enterprise architecture information for the identified enterprise networks. For example, the request may be from an enterprise, such as a school, and the server 120, using the machine learning model 130, may identify other enterprise networks having a similar profile, e.g., other schools having a similar size, location, number of users, number of connected devices, etc.
In some embodiments, the request may include a request to prioritize one of the plurality of health score components. In some embodiments, the request may also include one or more parameters. For example, the one or more parameters may include a budget parameter (e.g., a projected budget for the enterprise architecture), a priority parameter (e.g., a request to prioritize one of the plurality of health score components), a geographic parameter (e.g., a size and location of the enterprise), and a complexity parameter (e.g., a request to limit a complexity of the enterprise architecture for simplified implementation, or a request for multiple sub-architectures within the enterprise architecture, such as a first sub-architecture for less dense locations within the enterprise, such as administrative buildings, academic buildings, and student housing of a university, and a second sub-architecture for more dense locations of the university, such as stadiums and arenas). It should be understood by those of ordinary skill in the art that these are merely example parameters and that other parameters are further contemplated in accordance with aspects of the present disclosure. In response, the machine learning model 130 may identify an enterprise architecture for a similar enterprise having the highest score for the specified health score component and/or matching parameters. Once similar enterprise networks have been identified, the machine learning model 130 may generate an enterprise architecture for the requesting enterprise network based on the enterprise architectures of the identified enterprise networks.
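A minimal sketch of this matching step is shown below: filter known networks to the requested category, then select the closest match on a few normalized parameters. The network records, parameter names, and distance metric are assumptions for illustration.

```python
# Sketch: pick a base architecture for a new enterprise network by filtering on
# category and taking the nearest neighbor on normalized size parameters.
# All records and scale factors are invented for illustration.
import math

known_networks = [
    {"id": "u1", "category": "university", "users": 20000, "access_points": 900,
     "wifi_score": 91, "architecture": "dense-wifi-v2"},
    {"id": "u2", "category": "university", "users": 8000, "access_points": 350,
     "wifi_score": 84, "architecture": "dense-wifi-v1"},
    {"id": "c1", "category": "corporate", "users": 5000, "access_points": 200,
     "wifi_score": 88, "architecture": "branch-vpn-v3"},
]

request = {"category": "university", "users": 18000, "access_points": 800}

candidates = [n for n in known_networks if n["category"] == request["category"]]
best = min(
    candidates,
    key=lambda n: math.dist(
        (n["users"] / 1000, n["access_points"] / 100),
        (request["users"] / 1000, request["access_points"] / 100),
    ),
)
print(f"base the generated architecture on {best['id']} ({best['architecture']})")
```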
In some embodiments, the server 120 may also be configured to continuously receive the historical information from each of the plurality of enterprise networks 104, and update the network health score for each of the plurality of enterprise networks 104 based on the continuously received historical information. In some embodiments, the machine learning model 130 may be continuously trained based on the continuously received historical information and the updated network health scores. That is, the server 120 may continuously monitor each of the plurality of enterprise networks 104, and how changes in the enterprise architecture affect each of the plurality of subcomponents of the health score and the overall health score of the enterprise. For example, in some embodiments, the server 120 may monitor the number of stations 212 connected to an access point of the enterprise and how this affects the Wi-Fi component of the health score, as well as the overall health score of the enterprise, e.g., at which point the number of stations 212 reduces the quality of the wireless connection provided by the access point below a threshold level. As a result, the machine learning model 130 may continuously learn how different changes affect enterprise architectures and apply that knowledge to provide recommendations to similar enterprises. For example, with respect to existing enterprises, the machine learning model 130 may learn how certain changes will affect the overall health score, e.g., improve or degrade the health score, of the enterprise architecture, and the machine learning model 130 may thus provide recommendations accordingly. In some embodiments, for existing enterprises, the recommendations may be based on a combination of knowledge learned from other enterprises of a similar type, as well as the current enterprise.
In some embodiments, the server 120 may also monitor a performance of the first enterprise network, calculate a change in the health score for the first enterprise network based on the monitored performance, determine a cause of the change in the health score, and generate one or more recommendations for updating the enterprise architecture for the first enterprise network to address the cause of the change in the health score. That is, in some embodiments, the server 120 may continuously monitor a performance of each of the plurality of enterprise networks 104 and calculate a health score for each of the plurality of enterprise networks 104 based on the performance. Additionally, the machine learning model 130 may analyze the updated health score of each of the plurality of enterprise networks 104 in order to provide updated recommendations as improvements to the enterprise architecture are identified. This may be achieved as the machine learning model 130 is continuously learning from changes made to the plurality of enterprise networks 104 and updating their health scores accordingly, such that the recommendations are tailored specifically to each individual enterprise network based on the most up-to-date information available to the machine learning model 130.
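As an illustrative sketch, determining the cause of a health score change could reduce to comparing subcomponent scores before and after the change and reporting the largest regression; the scores below are invented.

```python
# Sketch: attribute a drop in the overall health score to the subcomponent
# with the largest regression between two monitoring snapshots.
previous = {"device": 90, "security": 92, "wifi": 82, "client": 85}
current = {"device": 89, "security": 92, "wifi": 61, "client": 84}

deltas = {name: current[name] - previous[name] for name in previous}
cause, change = min(deltas.items(), key=lambda item: item[1])

if change < 0:
    print(f"largest regression: {cause} ({change} points)")
    print("recommendation: review recent architecture changes affecting this subcomponent")
```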
In some embodiments, the recommendations may be dynamically updated based on the specific needs of the enterprise network at a particular time. For example, some enterprise networks may experience surges in network demands on a seasonal basis, e.g., shopping centers during holiday or back-to-school seasons, or amusement parks during the summer, while other enterprise networks may experience fluctuations in network demands, e.g., academic institutions may experience fluctuations in network demands throughout the academic school year. To address these changes, the machine learning model 130 may provide dynamic recommendations to the enterprise networks that enable the enterprise networks to change the enterprise architectures as needed based on the network demands at that time. To achieve this, the machine learning model 130 may be trained on historical data capturing such fluctuations in demand and provide recommendations based on predicted network demands, such that administrators may implement any changes in a timely manner.
At 402, the method 400 includes operating an enterprise network, e.g., one of the plurality of enterprise networks 104 of FIG. 1.
At 404, the method 400 may also include providing the service parameters, traffic, traffic analytics, and other enterprise-specific data to a server, e.g., the server 120 of FIG. 1.
At 406, the method 400 may include analyzing, using the machine learning model 130 of the server 120, the service parameters to obtain a resource consumption model. For example, the machine learning model 130, using the enterprise network resource analysis engine 318 of FIG. 3, may analyze the service parameters to generate the resource consumption model.
At 408, the method 400 may further include comparing, using the enterprise network comparison engine 320 of FIG. 3, the enterprise network with other similar enterprise networks.
At 410, the method 400 may further include predicting, using the enterprise network needs prediction engine 322 of FIG. 3, resource needs of the enterprise network.
At 412, the method 400 may include making, using the service capacity recommendation engine 324 of FIG. 3, one or more recommendations regarding resource utilization for the enterprise network.
In some embodiments, the capacity computation engine 502 may determine a capacity for an enterprise network, e.g., the enterprise network 104. For example, in some embodiments, the capacity computation engine 502 may use licenses information and licensing limitations of the enterprise network 104 to determine licensing usage of the enterprise network 104. In some embodiments, the licenses information may include a number of available licenses and a number of licenses currently being used. The licenses information may be obtained from the enterprise network 104 itself, from a provider of the license, from a third party, or derived from third-party data. The licensing limitations of the enterprise network 104 may stem from hardware limitations, software limitations, or self-induced limitations, such as green initiatives, expense caps (e.g., limiting an amount spent on annual licenses), security initiatives, or the like.
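A minimal sketch of such a licensing-usage computation is shown below; the field names, the treatment of a self-imposed cap, and the sample numbers are assumptions.

```python
# Sketch: licensing utilization as a fraction of the effective capacity, where
# the effective capacity is the purchased count or a self-imposed cap,
# whichever is lower. Values are invented for illustration.
from typing import Optional

def license_utilization(available: int, in_use: int,
                        self_imposed_cap: Optional[int] = None) -> float:
    """Return license utilization as a fraction of effective capacity."""
    effective = min(available, self_imposed_cap) if self_imposed_cap else available
    return in_use / effective

print(f"{license_utilization(available=500, in_use=430):.0%}")                        # 86%
print(f"{license_utilization(available=500, in_use=430, self_imposed_cap=450):.0%}")  # 96%
```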
In some embodiments, the enterprise allocations datastore 504 may be a datastore that indicates how capacity is allocated within an enterprise network 104. For example, capacity may be allocated according to users, groups, divisions, locations, or the like. In some embodiments, understanding how the capacity is allocated may be useful for determining how capacity may be reallocated. In some embodiments, the capacity parameters datastore 506 may store information associated with the capacity allocations throughout the enterprise network 104, e.g., a capacity (e.g., a software license, a network license, a limitation, or the like) and a capacity allocation to enterprise network employees, offices, user groups, or the like in accordance with current licensed and limited parameters.
In some embodiments, the network topology datastore 508 may store information associated with network devices, software resources, and users within the enterprise network 104. The capacity allocations may be specific to particular branches (e.g., between network devices), VLANs, users, or the like, of the network topology. In some embodiments, the capacity modeling engine 510 may create a capacity model using data structures of the capacity parameters datastore 506 and the network topology datastore 508. Advantageously, the models may be used to graphically represent the capacity and capacity allocations within the enterprise network 104. In some embodiments, the capacity model datastore 512 may store information associated with components of the enterprise network and the capacity allocations associated with those components. In some embodiments, the capacity models may further illustrate the capacity with different colors, shapes, or sizes to represent different capacities in association with a component or between components.
In some embodiments, the resource utilization datastore 514 may store traffic parameters, hardware utilization, software utilization, or the like, and the consumption computation engine 516 may compute resource utilization using data from the resource utilization datastore 514. In some embodiments, the consumption parameters datastore 518 may store information related to resource utilization throughout the enterprise network 104. For example, the information may include utilized seats of a software license, computing resources expended, traffic parameters between network nodes, or the like. The consumption parameters may have time-space parameters indicative of where the resource is consumed (e.g., by device) and when the resource is utilized. In some embodiments, the consumption modeling engine 520 may apply a capacity model from the capacity model datastore 512 to the consumption parameters from the consumption parameters datastore 518. Because the capacity model includes network topology and resource capacity allocations, the consumption parameters may be matched to the model at the relevant network nodes in association with the relevant capacity allocations. Advantageously, in some embodiments, the models may be used to graphically represent capacity and capacity allocations within an enterprise network with an overlay of actual resource utilization.
In some embodiments, the consumption model datastore 522 may store information related to the components of the enterprise network 104 and capacity allocations associated with those components with an overlay of resource utilization. For example, the consumption models may be represented graphically, with consumption being associated with different colors, shapes, or sizes to represent different utilizations of network resources. In some embodiments, an under-utilized resource may be represented in green, while an over-utilized resource may be represented in red, with potentially thicker lines between network nodes to indicate the degree of under- or over-utilization. In some embodiments, a filter may be applied to the model to emphasize cost allocations, quality of service, energy consumption, or other aspects of utilization that are of interest to an administrator of the enterprise.
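For illustration, a consumption model can be sketched as a capacity model keyed by network component with a utilization overlay and a simple over/under classification that could drive the color coding described above; the component names, thresholds, and figures are assumptions.

```python
# Sketch: overlay observed consumption onto a capacity model and classify each
# component as over-, under-, or normally utilized. Names, numbers, and the
# 30%/90% thresholds are invented for illustration.
capacity_model = {
    "core-switch-1": {"capacity_gbps": 40.0},
    "ap-floor-3":    {"capacity_gbps": 1.0},
    "vpn-gateway":   {"capacity_gbps": 10.0},
}
consumption_parameters = {
    "core-switch-1": {"traffic_gbps": 31.0},
    "ap-floor-3":    {"traffic_gbps": 0.95},
    "vpn-gateway":   {"traffic_gbps": 1.2},
}

consumption_model = {}
for node, capacity in capacity_model.items():
    utilization = consumption_parameters[node]["traffic_gbps"] / capacity["capacity_gbps"]
    status = "over" if utilization > 0.9 else "under" if utilization < 0.3 else "normal"
    consumption_model[node] = {**capacity, "utilization": utilization, "status": status}

for node, entry in consumption_model.items():
    print(node, f"{entry['utilization']:.0%}", entry["status"])  # e.g., red for "over"
```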
At 602, the method 600 includes determining, using the capacity computation engine 502 of FIG. 5, a capacity for the enterprise network 104.
At 604, the method 600 may include creating, using the capacity modeling engine 510 of FIG. 5, a capacity model for the enterprise network 104.
At 606, the method 600 may include determining, using the consumption computation engine 516 of FIG. 5, resource utilization of the enterprise network 104.
At 608, the method 600 may include creating, using the consumption modeling engine 520 of FIG. 5, a consumption model for the enterprise network 104.
The comparison parameter set selection engine 702 may receive one or more enterprise parameters from the enterprise network 104 to which other enterprises are to be compared. In some embodiments, the enterprise parameters may be determined automatically by attempting to match enterprises in the same industry, of the same size, in the same geographic area, or the like. Alternatively, the enterprise parameters may be selected in accordance with a growth plan (or reduction in force) or for some other reason. The enterprise parameters may also be limited to specific aspects of enterprises, such as network device allocation or capabilities, software license costs, or the like.
In some embodiments, the selection parameters datastore 704 may store a set of parameters for matching to parameters of enterprise networks to which a comparison is desired. In some embodiments, the real-world models 706 may be consumption models for enterprise networks other than the enterprise network 104 to which they are to be compared. In some embodiments, the real-world models 706 may include a consumption model of the enterprise network 104 as well. In some embodiments, the real-world models 706 may be similar to the consumption models described with reference to FIG. 5.
In some embodiments, the composite model creation engine 708 may use the real-world models 706 that match a selection parameter of the selection parameters datastore 704. In some embodiments, the composite model creation engine 708 may consider a hypothetical model, instead of or in addition to the real-world models 706, that matches the selection parameter. In some embodiments, the composite model can include an average or some other statistical representation of the real-world models 706, and may incorporate knowledge about, for example, device capabilities to provide alternative models that account for differentiations between two or more of the real-world models 706.
In some embodiments, the composite model datastore 710 may store information associated with a composite representation of the real-world models 706, which may be referred to as a composite model. The composite model may take into account available real-world models 706 that match the selection parameter. In some embodiments, the composite model may be similar to the consumption models of the consumption model datastore 522 described with reference to FIG. 5.
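A minimal sketch of composite-model creation, using a per-component average across the matching real-world models, appears below; the component names and utilization figures are invented, and a real composite could instead use medians, percentiles, or other statistics.

```python
# Sketch: composite model as the per-component mean utilization across the
# real-world models that matched the selection parameters. Values are invented.
from statistics import mean

matching_models = [
    {"core-switch": 0.62, "wifi": 0.81, "vpn-gateway": 0.40},
    {"core-switch": 0.55, "wifi": 0.77, "vpn-gateway": 0.52},
    {"core-switch": 0.70, "wifi": 0.90, "vpn-gateway": 0.35},
]

composite_model = {
    component: mean(model[component] for model in matching_models)
    for component in matching_models[0]
}
print(composite_model)  # anonymized average utilization per component
```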
In some embodiments, the consumption model datastore 712 may store consumption models that represent components of the enterprise network 104 and capacity allocations associated with those components with an overlay of resource utilization. In some embodiments, the consumption models may be similar to those of the consumption model datastore 522 described with reference to FIG. 5.
In some embodiments, the real-world comparison engine 714 may compare a consumption model of the consumption model datastore 712 to a composite model of the composite model datastore 710, which may yield a comparison model that is useful for illustrating variance between the enterprise network and similar (or as selected) enterprise networks. The comparison model datastore 716 may store the comparison models. Advantageously, the consumption model of an enterprise network may be discernable to an administrator of the enterprise network, while the composite model anonymizes data associated with the enterprise networks to which the enterprise network is being compared.
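Continuing the composite sketch above, the comparison step could subtract the composite (peer) utilization from the enterprise's own consumption model to show where it deviates from similar networks; the figures remain invented.

```python
# Sketch: comparison model as the per-component difference between the
# enterprise's consumption model and the anonymized composite model.
enterprise_consumption = {"core-switch": 0.91, "wifi": 0.60, "vpn-gateway": 0.45}
composite_model = {"core-switch": 0.62, "wifi": 0.83, "vpn-gateway": 0.42}

comparison_model = {
    component: enterprise_consumption[component] - composite_model[component]
    for component in composite_model
}
for component, variance in comparison_model.items():
    direction = "above" if variance > 0 else "below"
    print(f"{component}: {abs(variance):.0%} {direction} similar enterprises")
```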
At 802, the method 800 may include selecting, using the comparison parameter set selection engine 702 of FIG. 7, a set of comparison parameters for the enterprise network 104.
At 804, the method 800 may include creating, using the composite model creation engine 708 of FIG. 7, a composite model from the real-world models 706 that match the selected comparison parameters.
At 806, the method 800 may include creating, using the real-world comparison engine 714 of FIG. 7, a comparison model by comparing a consumption model of the enterprise network 104 with the composite model.
In some embodiments, the comparison model datastore 902 stores comparison models that represent components of an enterprise network and capacity allocations associated with those components with an overlay of resource utilization and similar enterprise utilizations, when applicable. In some embodiments, the comparison models may be similar to those of the comparison model datastore 716 described with reference to FIG. 7.
In some embodiments, the initiative parameters datastore 904 may store expected capacity parameters in accordance with initiatives of the enterprise network. In some embodiments, the expected capacity parameters may include self-imposed limitations of the enterprise network, including green initiative requirements, infrastructure building, cost-cutting measures, or the like. In some embodiments, the expected enterprise allocations may be used to generate expected capacity parameters by an engine similar to the capacity computation engine 502 described with reference to FIG. 5.
In some embodiments, the restructuring parameters datastore 906 may store expected changes to the enterprise network, such as remodeling, moving divisions within an existing structure, moving to a new structure, or the like. In some embodiments, when applicable, the restructuring parameters may include a new network topology, which may be used, along with the expected capacity parameters, to generate an expected capacity model that incorporates the new network topology. In some embodiments, the needs integration engine 908 may include functionality similar to that of the capacity modeling engine 510 described with reference to FIG. 5.
In some embodiments, the needs integration engine 908 may use the comparison model datastore 902, the initiative parameters datastore 904, and the restructuring parameters datastore 906 to generate an expected capacity model. In some embodiments, the comparison model may include a consumption model of the enterprise network and a composite model of similar real-world networks. In some embodiments, the comparison model may be a consumption model of the enterprise network, which may be compared with models that incorporate expected changes to the enterprise network. The expected capacity model may incorporate information in the initiative parameters datastore 904 regarding desired changes to various aspects of the enterprise network, which can impact capacity, and information in the restructuring parameters datastore 906 regarding organizational or structural changes, which can impact capacity at particular space-time coordinates within the enterprise network. In some embodiments, the expected capacity model datastore 910 may store the expected capacity model generated by the needs integration engine 908.
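As a hedged sketch of the needs-integration step, the example below applies one initiative (a power-reduction target) and one restructuring change (a new site) to a current capacity model to produce an expected capacity model; all site names and parameters are assumptions.

```python
# Sketch: derive an expected capacity model by applying initiative and
# restructuring parameters to the current capacity model. Values are invented.
current_capacity = {
    "headquarters": {"power_kw": 120, "capacity_gbps": 40},
    "branch-east":  {"power_kw": 30,  "capacity_gbps": 10},
}
initiative_parameters = {"power_reduction_pct": 0.15}  # green-initiative target
restructuring_parameters = {
    "new_sites": {"branch-west": {"power_kw": 25, "capacity_gbps": 10}},
}

all_sites = {**current_capacity, **restructuring_parameters["new_sites"]}
expected_capacity_model = {
    site: {
        "power_kw": params["power_kw"] * (1 - initiative_parameters["power_reduction_pct"]),
        "capacity_gbps": params["capacity_gbps"],
    }
    for site, params in all_sites.items()
}
print(expected_capacity_model)
```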
In some embodiments, the resource options datastore 912 may include data about hardware options available to the enterprise network. In some embodiments, the hardware options can include specifications for hardware that is on the market or will be available at a future date. The hardware options may or may not include hardware that is already available at the enterprise network, such as hardware that may be eliminated pursuant to changes brought on by initiatives or restructuring, or that are warehoused and not in use, any of which may be treated as now available after generating the expected capacity model.
In some embodiments, the labor options datastore 914 may include data about the time and costs associated with moving from a current model to a future model. In some embodiments, the labor options may include technicians, engineers, and other professionals who offer their services on the market. In some embodiments, the labor options may or may not include in-house talent capable of carrying out expected implementations.
In some embodiments, the implementation scheduling engine 916 may use data stored in the resource options datastore 912 and the labor options datastore 914 to generate an implementation schedule, complete with costs and time requirements, to convert a current capacity model to the expected capacity model of the expected capacity model datastore 910. In some embodiments, the implementation schedule datastore 918 may store the implementation schedule generated by the implementation scheduling engine 916.
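A minimal sketch of schedule generation is shown below: total the hardware and labor cost and the labor hours for the work items needed to move from the current capacity model to the expected one. The catalog entries, rates, and quantities are invented for illustration.

```python
# Sketch: estimate the cost and labor of converting a current capacity model to
# the expected capacity model from hardware and labor option catalogs.
resource_options = {"access_point": {"unit_cost": 450}, "switch": {"unit_cost": 3200}}
labor_options = {"install_ap": {"hours": 2, "rate": 95},
                 "install_switch": {"hours": 6, "rate": 110}}

work_items = [
    {"resource": "access_point", "labor": "install_ap", "quantity": 20},
    {"resource": "switch", "labor": "install_switch", "quantity": 2},
]

total_cost = sum(
    item["quantity"] * (
        resource_options[item["resource"]]["unit_cost"]
        + labor_options[item["labor"]]["hours"] * labor_options[item["labor"]]["rate"]
    )
    for item in work_items
)
total_hours = sum(item["quantity"] * labor_options[item["labor"]]["hours"] for item in work_items)
print(f"estimated cost: ${total_cost:,}, estimated labor: {total_hours} hours")
```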
At 1002, the method 1000 may include integrating, using the needs integration engine 908 of FIG. 9, initiative parameters and restructuring parameters with a comparison model to generate an expected capacity model.
At 1004, the method 1000 may include generating, using the implementation scheduling engine 916 of FIG. 9, an implementation schedule for converting a current capacity model to the expected capacity model.
Various embodiments can be implemented, for example, using one or more well-known computer systems, such as the computer system 1200 shown in FIG. 12.
Computer system 1200 includes one or more processors (also called central processing units, or CPUs), such as a processor 1204. Processor 1204 is connected to a communication infrastructure or bus 1206. Processor 1204 may be a graphics processing unit (GPU). In some embodiments, a GPU may be a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc.
Computer system 1200 also includes user input/output device(s) 1203, such as monitors, keyboards, pointing devices, etc., which communicate with communication infrastructure 1206 through user input/output interface(s) 1202.
Computer system 1200 also includes a main or primary memory 1208, such as random access memory (RAM). Main memory 1208 may include one or more levels of cache. Main memory 1208 has stored therein control logic (e.g., computer software) and/or data.
Computer system 1200 may also include one or more secondary storage devices or memory 1210. Secondary memory 1210 may include, for example, a hard disk drive 1212 and/or a removable storage device or drive 1214. Removable storage drive 1214 may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup device, and/or any other storage device/drive.
Removable storage drive 1214 may interact with a removable storage unit 1218. Removable storage unit 1218 may include a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit 1218 may be a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Removable storage drive 1214 may read from and/or write to removable storage unit 1218.
Secondary memory 1210 may include other means, devices, components, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 1200. Such means, devices, components, instrumentalities or other approaches may include, for example, a removable storage unit 1222 and an interface 1220. Examples of the removable storage unit 1222 and the interface 1220 may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface.
Computer system 1200 may further include a communication or network interface 1224. Communication interface 1224 may enable computer system 1200 to communicate and interact with any combination of external devices, external networks, external entities, etc. (individually and collectively referenced by reference number 1228). For example, communication interface 1224 may allow computer system 1200 to communicate with external or remote devices 1228 over communications path 1226, which may be wired and/or wireless (or a combination thereof), and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system 1200 via communication path 1226.
Computer system 1200 may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smart phone, smart watch or other wearable, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof.
Computer system 1200 may be a client or server, accessing or hosting any applications and/or data through any delivery paradigm, including but not limited to remote or distributed cloud computing solutions; local or on-premises software (“on-premise” cloud-based solutions); “as a service” models (e.g., content as a service (CaaS), digital content as a service (DCaaS), software as a service (SaaS), managed software as a service (MSaaS), platform as a service (PaaS), desktop as a service (DaaS), framework as a service (FaaS), backend as a service (BaaS), mobile backend as a service (MBaaS), infrastructure as a service (IaaS), etc.); and/or a hybrid model including any combination of the foregoing examples or other services or delivery paradigms.
Any applicable data structures, file formats, and schemas in computer system 1200 may be derived from standards including but not limited to JavaScript Object Notation (JSON), Extensible Markup Language (XML), Yet Another Markup Language (YAML), Extensible Hypertext Markup Language (XHTML), Wireless Markup Language (WML), MessagePack, XML User Interface Language (XUL), or any other functionally similar representations alone or in combination. Alternatively, proprietary data structures, formats or schemas may be used, either exclusively or in combination with known or open standards.
In some embodiments, a tangible, non-transitory apparatus or article of manufacture including a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 1200, main memory 1208, secondary memory 1210, and removable storage units 1218 and 1222, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system 1200), may cause such data processing devices to operate as described herein.
Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.
Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown in FIG. 12.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments that others may, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present embodiments. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present embodiments should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
The following application is incorporated herein by reference in its entirety: U.S. provisional application 62/858,303, filed Jun. 6, 2019, and entitled “Capacity-Based Service Provisioning.”
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/036659 | 6/8/2020 | WO | 00
Number | Date | Country
---|---|---
62/858,303 | Jun. 6, 2019 | US