The present disclosure relates generally to distributed processing in network monitoring systems and, more specifically, to distribution of both transaction-level and session-level processing.
Network monitoring systems may utilize distributed processing to extract metadata from protocol data units or packets obtained from the monitored network. However, such distributed processing can conflict with the inherent transaction ordering of protocols employed by the networks monitored. Moreover, in at least some instances, the desired metadata may not be extractable from single, atomic transactions between network nodes or endpoints, but may instead require context that can only be ascertained from the complete series of transactions forming a session between the nodes and/or endpoints.
Transaction and session processing of packets within a network monitoring system may be distributed among tightly-coupled processing elements by marking each received packet with a time-ordering sequence reference. The marked packets are distributed among processing elements by any suitable process for transaction processing by the respective processing element to produce transaction metadata. Where a session-owning one of the processing elements has indicated ownership of the session to the remaining processing elements, the transaction-processed packet and transaction metadata are forwarded to the session owner. The session owner aggregates transaction-processed packets for the session, time-orders the aggregated packets, and performs session processing on the aggregated, time-ordered transaction-processed packets to generate session metadata with the benefit of context information. Where the session owner for a transaction-processed packet has not previously been indicated, the transaction-processed packet and transaction metadata are forwarded to an ordering authority of last resort, which assigns ownership of the session.
Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning “and/or”; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; “circuits” refers to physical electrical and/or electronic circuits that are physically configured in full or both physically configured in part and programmably configured in part to perform a corresponding operation or function; “module,” in the context of software, refers to physical processing resources programmably configured by software to perform a corresponding operation or function; and the term “controller” means any device, system or part thereof that controls at least one operation, where such a device, system or part may be implemented in hardware that is programmable by firmware and/or software. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art should understand that in many, if not most instances, such definitions apply to prior, as well as future, uses of such defined words and phrases.
For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:
In some implementations, endpoints 102a and 102b may represent, for example, computers, mobile devices, user equipment (UE), client applications, server applications, or the like. Meanwhile, nodes 101a and 101b may be components in an intranet, Internet, or public data network, such as a router, gateway, base station or access point. Nodes 101a and 101b may also be components in a 3G or 4G wireless network, such as: a Serving GPRS Support Node (SGSN), Gateway GPRS Support Node (GGSN) or Border Gateway in a General Packet Radio Service (GPRS) network; a Packet Data Serving Node (PDSN) in a CDMA2000 network; a Mobility Management Entity (MME) in a Long Term Evolution/System Architecture Evolution (LTE/SAE) network; or any other core network node or router that transfers data packets or messages between endpoints 102a and 102b. Examples of these, and other elements, are discussed in more detail below with respect to
Still referring to
Network monitoring system 103 may be used to monitor the performance of network 100. Particularly, monitoring system 103 captures duplicates of packets that are transported across links 104 or similar interfaces between nodes 101a-101b, endpoints 102a-102b, and/or any other network links or connections (not shown). In some embodiments, packet capture devices may be non-intrusively coupled to network links 104 to capture substantially all of the packets transmitted across the links. Although only three links 104 are shown in
Monitoring system 103 may include one or more processors running one or more software applications that collect, correlate and/or analyze media and signaling data packets from network 100. Monitoring system 103 may incorporate protocol analyzer, session analyzer, and/or traffic analyzer functionality that provides OSI (Open Systems Interconnection) Layer 2 to Layer 7 troubleshooting by characterizing IP traffic by links, nodes, applications and servers on network 100. In some embodiments, these operations may be provided, for example, by the IRIS toolset available from TEKTRONIX, INC., although other suitable tools may exist or be later developed. The packet capture devices coupling network monitoring system 103 to links 104 may be high-speed, high-density 10 Gigabit Ethernet (10 GE) probes that are optimized to handle high bandwidth IP traffic, such as the GEOPROBE G10 product, also available from TEKTRONIX, INC., although other suitable tools may exist or be later developed. A service provider or network operator may access data from monitoring system 103 via user interface station 105 having a display or graphical user interface 106, such as the IRISVIEW configurable software framework that provides a single, integrated platform for several applications, including feeds to customer experience management systems and operation support system (OSS) and business support system (BSS) applications, which is also available from TEKTRONIX, INC., although other suitable tools may exist or be later developed.
Monitoring system 103 may further comprise internal or external memory 107 for storing captured data packets, user session data, and configuration information. Monitoring system 103 may capture and correlate the packets associated with specific data sessions on links 104. In some embodiments, related packets can be correlated and combined into a record for a particular flow, session or call on network 100. These data packets or messages may be captured in capture files. A call trace application may be used to categorize messages into calls and to create Call Detail Records (CDRs). These calls may belong to scenarios that are based on or defined by the underlying network. In an illustrative, non-limiting example, related packets can be correlated using a 5-tuple association mechanism. Such a 5-tuple association process may use an IP correlation key that includes 5 parts: server IP address, client IP address, source port, destination port, and Layer 4 Protocol (Transmission Control Protocol (TCP), User Datagram Protocol (UDP) or Stream Control Transmission Protocol (SCTP)).
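By way of a purely illustrative sketch (not drawn from the IRIS or GEOPROBE products), the 5-tuple association described above may be implemented by deriving a correlation key from each captured packet and grouping packets that share the key; the field names below are hypothetical:

    from collections import defaultdict

    def correlation_key(pkt):
        # Build the 5-part IP correlation key described above; the dict field
        # names are illustrative assumptions, not a product API.
        return (pkt["server_ip"], pkt["client_ip"],
                pkt["src_port"], pkt["dst_port"],
                pkt["l4_proto"])  # "TCP", "UDP", or "SCTP"

    # Toy example: two duplicates of the same flow collapse into one record.
    captured = [
        {"server_ip": "10.0.0.1", "client_ip": "192.168.1.5",
         "src_port": 52811, "dst_port": 443, "l4_proto": "TCP"},
        {"server_ip": "10.0.0.1", "client_ip": "192.168.1.5",
         "src_port": 52811, "dst_port": 443, "l4_proto": "TCP"},
    ]
    records = defaultdict(list)
    for pkt in captured:
        records[correlation_key(pkt)].append(pkt)
    assert len(records) == 1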
Accordingly, network monitoring system 103 may be configured to sample (e.g., unobtrusively through duplicates) related data packets for a communication session in order to track the same set of user experience information for each session and each client without regard to the protocol (e.g., HTTP, RTMP, RTP, etc.) used to support the session. For example, monitoring system 103 may be capable of identifying certain information about each user's experience, as described in more detail below. A service provider may use this information, for instance, to adjust network services available to endpoints 102a-102b, such as the bandwidth assigned to each user, and the routing of data packets through network 100.
As the capability of network 100 increases toward 10 GE and beyond (e.g., 100 GE), each link 104 may support more user flows and sessions. Thus, in some embodiments, link 104 may be a single 10 GE link, a collection of 10 GE links, or one or more 100 GE links supporting thousands or tens of thousands of users or subscribers. Many of the subscribers may have multiple active sessions, which may result in an astronomical number of active flows on link 104 at any time, where each flow includes many packets.
Generally speaking, front-end devices 205a-205b may passively tap into network 100 and monitor all or substantially all of its data. For example, one or more of front-end devices 205a-205b may be coupled to one or more links 104 of network 100 shown in
In some embodiments, front-end devices 205a-205b may be configured to monitor all of the network traffic (e.g., 10 GE, 100 GE, etc.) through the links to which the respective front-end device 205a or 205b is connected. Front-end devices 205a-205b may also be configured to intelligently distribute traffic based on a user session level. Additionally or alternatively, front-end devices 205a-205b may distribute traffic based on a transport layer level. In some cases, each front-end device 205a-205b may analyze traffic intelligently to distinguish high-value traffic from low-value traffic based on a set of heuristics. Examples of such heuristics may include, but are not limited to, use of parameters such as the IMEI (International Mobile Equipment Identity) Type Allocation Code (TAC) and Software Version Number (SVN), as well as a User Agent Profile (UAProf) and/or User Agent (UA), a customer list (e.g., international mobile subscriber identifiers (IMSI), phone numbers, etc.), traffic content, or any combination thereof. Therefore, in some implementations, front-end devices 205a-205b may feed higher-valued traffic to a more sophisticated one of analyzers 210a-210b and lower-valued traffic to a less sophisticated one of analyzers 210a-210b (to provide at least some rudimentary information).
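A minimal sketch of such heuristic classification follows, assuming hypothetical watch lists and metadata field names; actual front-end devices 205a-205b may apply different or additional rules:

    HIGH_VALUE_IMSIS = {"310150123456789"}   # hypothetical customer list entry
    HIGH_VALUE_TACS = {"35332811"}           # hypothetical IMEI Type Allocation Codes

    def classify(pkt_meta):
        # Return "high" or "low" value for a packet's extracted metadata.
        if pkt_meta.get("imsi") in HIGH_VALUE_IMSIS:
            return "high"
        if pkt_meta.get("imei", "")[:8] in HIGH_VALUE_TACS:
            return "high"
        if "video" in pkt_meta.get("content_type", ""):
            return "high"                    # content-based heuristic
        return "low"

    # High-value traffic would be fed to the more sophisticated analyzer.
    target = {"high": "analyzer_210a", "low": "analyzer_210b"}[classify({"imsi": "310150123456789"})]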
Front-end devices 205a-205b may also be configured to aggregate data to enable backhauling, to generate netflows and certain Key Performance Indicator (KPI) calculations, to time stamp and port stamp data, to filter out unwanted data, and to perform protocol classification and deep packet inspection (DPI) analysis. In addition, front-end devices 205a-205b may be configured to distribute data to the back-end monitoring tools (e.g., analyzer devices 210a-210b and/or intelligence engine 215) in a variety of ways, which may include flow-based or user session-based balancing. Front-end devices 205a-205b may also receive dynamic load information, such as central processing unit (CPU) and memory utilization information, from each of analyzer devices 210a-210b to enable intelligent distribution of data.
Analyzer devices 210a-210b may be configured to passively monitor a subset of the traffic that has been forwarded to them by the front-end device(s) 205a-205b. Analyzer devices 210a-210b may also be configured to perform stateful analysis of data, extraction of key parameters for call correlation and generation of call data records (CDRs), application-specific processing, computation of application-specific KPIs, and communication with intelligence engine 215 for retrieval of KPIs (e.g., in real-time and/or historical mode). In addition, analyzer devices 210a-210b may be configured to notify front-end device(s) 205a-205b regarding their CPU and/or memory utilization so that front-end device(s) 205a-205b can utilize that information to intelligently distribute traffic.
Intelligence engine 215 may follow a distributed and scalable architecture. In some embodiments, EPC module 220 may receive events and may correlate information from front-end devices 205a-205b and analyzer devices 210a-210b. OAM module 230 may be used to configure and/or control front-end device(s) 205a and/or 205b and analyzer device(s) 210a and/or 210b, distribute software or firmware upgrades, etc. Presentation layer 235 may be configured to present event and other relevant information to the end-users. Analytics store 225 may include a storage or database for the storage of analytics data or the like.
In some implementations, analyzer devices 210a-210b and/or intelligence engine 215 may be hosted at an offsite location (i.e., at a different physical location remote from front-end devices 205a-205b). Additionally or alternatively, analyzer devices 210a-210b and/or intelligence engine 215 may be hosted in a cloud environment.
In some implementations, each front-end probe or device 205 may be configured to receive traffic from network 100, for example, at a given data rate (e.g., 10 Gb/s, 100 Gb/s, etc.), and to transmit selected portions of that traffic to one or more analyzers 210a and/or 210b, for example, at a different data rate. Classification engine 310 may identify user sessions, types of content, transport protocols, etc. (e.g., using DPI module 315) and transfer user plane (UP) packets to flow tracking module 320 and control plane (CP) packets to context tracking module 325. In some cases, classification engine 310 may implement one or more rules to allow it to distinguish high-value traffic from low-value traffic and to label processed packets accordingly. Routing/distribution control engine 330 may implement one or more load balancing or distribution operations, for example, to transfer high-value traffic to a first analyzer and low-value traffic to a second analyzer. Moreover, KPI module 340 may perform basic KPI operations to obtain metrics such as, for example, bandwidth statistics (e.g., per port), physical frame/packet errors, protocol distribution, etc.
The OAM module 345 of each front-end device 205 may be coupled to OAM module 230 of intelligence engine 215 and may receive control and administration commands, such as, for example, rules that allow classification engine 310 to identify particular types of traffic. For instance, based on these rules, classification engine 310 may be configured to identify and/or parse traffic by user session parameter (e.g., IMEI, IP address, phone number, etc.). In some cases, classification engine 310 may be session context aware (e.g., web browsing, protocol specific, etc.). Further, front-end device 205 may be SCTP connection aware to ensure, for example, that all packets from a single connection are routed to the same one of analyzers 210a and 210b.
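One way to keep an SCTP connection pinned to a single analyzer is to hash a stable association key to an analyzer index, as in the following sketch; the key layout and analyzer names are assumptions for illustration only:

    import hashlib

    ANALYZERS = ["analyzer_210a", "analyzer_210b"]

    def route_connection(assoc_key):
        # Hash an SCTP association (or user-session) key to a fixed analyzer so
        # that every packet of that connection lands on the same back end.
        digest = hashlib.sha1(repr(assoc_key).encode()).digest()
        return ANALYZERS[int.from_bytes(digest[:4], "big") % len(ANALYZERS)]

    # All packets of a given association route identically.
    key = ("10.0.0.1", "10.0.0.2", 36412, 36412)   # hypothetical endpoints and ports
    assert route_connection(key) == route_connection(key)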
In various embodiments, the components depicted for each front-end device 205 may represent sets of software routines and/or logic functions executed on physical processing resources, optionally with associated data structures stored in physical memories, and configured to perform specified operations. Although certain operations may be shown as distinct logical blocks, in some embodiments at least some of these operations may be combined into fewer blocks. Conversely, any given one of the blocks shown in
Generally speaking, eNB 402 may include hardware configured to communicate with UE 401. MME 403 may serve as a control-node for the access portion of network 400, responsible for tracking and paging UE 401, coordinating retransmissions, performing bearer activation/deactivation processes, etc. MME 403 may also be responsible for authenticating a user (e.g., by interacting with HSS 404). HSS 404 may include a database that contains user-related and subscription-related information to enable mobility management, call and session establishment support, user authentication and access authorization, etc. PDG 405 may be configured to secure data transmissions when UE 401 is connected to the core portion of network 400 via an untrusted access. SGW 406 may route and forward user data packets, and PDW 407 may provide connectivity from UE 401 to external packet data networks, such as, for example, Internet 408.
In operation, one or more of elements 402-407 may perform one or more Authentication, Authorization and Accounting (AAA) operation(s), or may otherwise execute one or more AAA application(s). For example, typical AAA operations may allow one or more of elements 402-407 to intelligently control access to network resources, enforce policies, audit usage, and/or provide information necessary to bill a user for the network's services.
In particular, “authentication” provides one way of identifying a user. An AAA server (e.g., HSS 404) compares a user's authentication credentials with other user credentials stored in a database and, if the credentials match, may grant access to the network. Then, a user may gain “authorization” for performing certain tasks (e.g., to issue predetermined commands), access certain resources or services, etc., and an authorization process determines whether the user has the authority to do so. Finally, an “accounting” process may be configured to measure resources that a user actually consumes during a session (e.g., the amount of time or data sent/received) for billing, trend analysis, resource utilization, and/or planning purposes. These various AAA services are often provided by a dedicated AAA server and/or by HSS 404. A standard protocol may allow elements 402, 403, and/or 405-407 to interface with HSS 404, such as the Diameter protocol that provides an AAA framework for applications such as network access or IP mobility and is intended to work in both local AAA and roaming situations. Certain Internet standards that specify the message format, transport, error reporting, accounting, and security services may be used by the standard protocol.
Although
In order to execute AAA application(s) or perform AAA operation(s), client 502 may exchange one or more messages with server 503 via routing core 501 using the standard protocol. Particularly, each call may include at least four messages: first or ingress request 506, second or egress request 507, first or egress response 508, and second or ingress response 509. The header portion of these messages may be altered by routing core 501 during the communication process, thus making it challenging for a monitoring solution to correlate these various messages or otherwise determine that those messages correspond to a single call.
In some embodiments, however, the systems and methods described herein enable correlation of messages exchanged over ingress hops 504 and egress hops 505. For example, ingress and egress hops 504 and 505 of routing core 501 may be correlated by monitoring system 103, thus alleviating the otherwise costly need for correlation of downstream applications.
In some implementations, monitoring system 103 may be configured to receive (duplicates of) first request 506, second request 507, first response 508, and second response 509. Monitoring system 103 may correlate first request 506 with second response 509 into a first transaction and may also correlate second request 507 with first response 508 into a second transaction. Both transactions may then be correlated as a single call and provided in an External Data Representation (XDR) or the like. This process may allow downstream applications to construct an end-to-end view of the call and provide KPIs between LTE endpoints.
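The two-stage correlation described above can be sketched as follows; matching the ingress pair (506/509) and the egress pair (507/508) on a per-hop identifier and then joining the pairs on a session-level identifier is an illustrative assumption rather than a description of the actual monitoring code:

    def correlate_call(msgs):
        # Stage 1: pair request/response on each hop; stage 2: join pairs into a call.
        txns = {}
        for m in msgs:
            txns.setdefault((m["hop"], m["hop_by_hop_id"]), []).append(m)
        calls = {}
        for txn in txns.values():
            calls.setdefault(txn[0]["session_id"], []).append(txn)
        return calls   # each value: the transactions forming one end-to-end call

    msgs = [
        {"hop": "ingress", "hop_by_hop_id": 1, "session_id": "s1", "kind": "request 506"},
        {"hop": "egress",  "hop_by_hop_id": 2, "session_id": "s1", "kind": "request 507"},
        {"hop": "egress",  "hop_by_hop_id": 2, "session_id": "s1", "kind": "response 508"},
        {"hop": "ingress", "hop_by_hop_id": 1, "session_id": "s1", "kind": "response 509"},
    ]
    assert len(correlate_call(msgs)["s1"]) == 2   # two transactions, one call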
Also, in some implementations, Intelligent Delta Monitoring may be employed, which may involve processing ingress packets fully but processing only a “delta” in the egress packets. Particularly, the routing core 501 may only modify a few specific Attribute-Value Pairs (AVPs) of the ingress packet's header, such as the IP Header, Origin-Host, Origin-Realm, and Destination-Host. Routing core 501 may also add a Route-Record AVP to egress request messages. Accordingly, in some cases, only the modified AVPs may be extracted without performing full decoding, transaction tracking, and session tracking of egress packets. Consequently, a monitoring probe with a capacity of 200,000 Packets Per Second (PPS) may obtain an increase in processing capacity to 300,000 PPS or more (that is, a 50% performance improvement) by only delta processing egress packets. Such an improvement is important when one considers that a typical implementation may have several probes monitoring a single DCA, and several DCAs may be in the same routing core 501. For ease of explanation, routing core 501 of
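A minimal sketch of such delta processing is shown below; it assumes the decoded ingress record is cached and that only the AVPs listed above are read from the egress packet, with the dictionary layout being an illustrative assumption:

    DELTA_AVPS = ("Origin-Host", "Origin-Realm", "Destination-Host", "Route-Record")

    def delta_process(ingress_record, egress_avps):
        # Merge only the AVPs rewritten by the routing core into the cached
        # ingress decode, avoiding a full decode of the egress packet.
        record = dict(ingress_record)
        for name in DELTA_AVPS:
            if name in egress_avps:
                record[name] = egress_avps[name]
        return record

    ingress = {"Session-Id": "s1", "Origin-Host": "client.example", "Command": "CCR"}
    egress = {"Origin-Host": "mp-blade-510a.core", "Route-Record": "dra1.core"}
    merged = delta_process(ingress, egress)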
Additionally or alternatively, the load distribution within routing core 501 may be measured and managed. Each routing core 501 may include a plurality of message processing (MP) blades and/or interface cards 510a, 510b, . . . , 510n, each of which may be associated with its own unique Origin-Host AVP. In some cases, using the Origin-Host AVP in the egress request message as a key may enable measurement of the load distribution within routing core 501 and may help in troubleshooting. As illustrated, multiplexer module 511 within routing core 501 may be configured to receive and transmit traffic from and to client 502 and server 503. Load balancing module 512 may receive traffic from multiplexer 511, and may allocate that traffic across various MP blades 510a-510n and even to specific processing elements on a given MP blade in order to optimize or improve operation of core 501.
For example, each of MP blades 510a-510n may perform one or more operations upon packets received via multiplexer 511, and may then send the packets to a particular destination, also via multiplexer 511. In that process, each of MP blades 510a-510n may alter one or more AVPs contained in these packets, as well as add new AVPs to the packets (typically to the header). Different fields in the header of request and response messages 506-509 may enable network monitoring system 103 to correlate the corresponding transactions and calls while reducing or minimizing the number of operations required to perform such correlations.
The monitoring model employed includes a plurality of tightly-coupled processing elements 601, 602 and 603 on, for example, an MP blade 510 within the MP blades 510a-510n depicted in
The protocol data units (PDUs) 610-617 shown in
A goal in monitoring a network is to create meaningful metadata that describes the state of the network. As noted, the PDUs belong to flows, where each flow is a brief exchange of signaling (e.g., request-response), and a set of flows rolls up into a session. Processing elements 601-603 in the network monitoring system 103 each manage a set of sessions, where a session is a set of flows between a pair of monitored network elements (e.g., endpoints 102a-102b in
The model described above assumes that PDUs, while not necessarily balanced by time order, are marked according to time order. The PDUs may then be scattered across processing elements 601, 602, and 603 by some means—say, randomly or by a well-distributed protocol sequence number. Additionally, processing of the PDUs is staged so that metadata is created for both the PDUs themselves and for the endpoints, at a protocol flow, transaction, and session level. Transaction or flow metadata may include, for example, the number of bytes in the messages forming a transaction. Session metadata may include, for example, a number of transactions forming a session or a type of data (audio, video, HTML, etc.) exchanged in the session.
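The staged metadata described above can be represented with data structures along the following lines; the field names and types are assumptions for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class PDU:
        seq_ref: int            # time-ordering sequence reference stamped at capture
        flow_id: tuple          # e.g., the 5-tuple identifying the flow
        payload: bytes

    @dataclass
    class FlowMetadata:
        byte_count: int = 0     # e.g., number of bytes in the messages of a transaction

    @dataclass
    class SessionMetadata:
        transaction_count: int = 0
        content_types: set = field(default_factory=set)   # audio, video, HTML, ...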
The flow or transaction processing and reorder functionality 701, 702 for PE 1 601 and PE 2 602, respectively, in
Within the process of ordering PDUs at a processing element 601-603, flow or transaction work or processing on a particular PDU may occur at a processing element PE 2 602 that also monitors the session to which the PDU belongs. Thus, for example, packet 1 610 may be directed (by load balancing module 512, for example) by message 706 or similar work assignment indication to PE 2 flow processing and reorder functionality 702 for transaction (flow) processing of PDU 610. In such a case, the work for transaction processing PDU 610 is inserted into a priority queue for transaction processing and reorder functionality 702 by time order. Because accommodation is made for work that may be under transaction or flow processing on a related PDU belonging to the same session at some remote processing element, the work spends some time in the queue before being removed. This allows time for the remote work to arrive and be ordered correctly. Accordingly, the time spent in the queue should be greater than the expected latency for work to be distributed across the network monitoring system's processing elements plus the latency for the PDU itself to be flow-processed. Once transaction processing on PDU 610 is complete, the transaction-processed PDU and associated transaction metadata are forwarded by message 707 or similar work transfer mechanism to PE 2 session processing functionality 703.
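A hedged sketch of such a reorder queue is given below: work items are held, ordered by the PDU's time-ordering sequence reference, for at least a hold interval chosen to exceed the expected cross-element distribution and flow-processing latency; the timing constant and structure are assumptions, not the actual implementation:

    import heapq, itertools, time

    HOLD_SECONDS = 0.050          # assumed to exceed distribution + flow-processing latency
    _tie = itertools.count()

    class ReorderQueue:
        def __init__(self):
            self._heap = []
        def put(self, seq_ref, work):
            # Order by sequence reference; record when the item may be released.
            heapq.heappush(self._heap, (seq_ref, next(_tie), time.monotonic() + HOLD_SECONDS, work))
        def pop_ready(self):
            # Yield work in time order, but only after its hold interval expires,
            # so late-arriving remote work with a smaller sequence reference can
            # still be slotted in ahead of it.
            now = time.monotonic()
            while self._heap and self._heap[0][2] <= now:
                _, _, _, work = heapq.heappop(self._heap)
                yield work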
Flow work at a processing element that does not own the flow's session occurs normally. In the example of
As with
As illustrated in
The ordering authority of last resort functionality 801 will assign session ownership for the orphan PDU/session to one of the processing elements 601, 602 or 603. The transaction-processed PDU and associated transaction metadata are forwarded by a work transfer message 807 to the session processing functionality of the processing element assigned ownership of the session, which is session processing functionality 804 of processing element PE 2 602 in the example shown. In the alternative embodiment mentioned above, the message 807 is only an indication of assignment of session ownership to the processing element PE 2 602, and does not include the transaction-processed PDU and associated transaction metadata. The selection of one of processing elements 601, 602 and 603 by the ordering authority of last resort functionality 801 for assignment of ownership of an orphan session may be performed in any of a variety of manners: by round-robin selection, by random assignment, by taking into account load balancing considerations, etc. In the alternative embodiment described above, in which the transaction-processed PDU and associated transaction metadata were not forwarded with ownership request message 806 from transaction processing and reorder functionality 803 to the ordering authority of last resort functionality 801, ownership of the session may simply be assigned to the processing element PE 3 603 that performed the transaction processing of the PDU. Assignment to the processing element requesting indication of session ownership may be conditioned on whether other PDUs for that session have been received and transaction-processed by other processing elements, or on the current loading at the requesting processing element (processing element PE 3 603 in the example described).
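The selection policies listed above might be realized along the following lines, assuming a simple shared ownership table and a per-element load metric; this is an illustrative sketch rather than the actual ordering authority:

    import itertools, random

    ELEMENTS = ["PE1", "PE2", "PE3"]
    _round_robin = itertools.cycle(ELEMENTS)
    ownership = {}   # session key -> owning processing element

    def assign_owner(session_key, requester=None, load=None, policy="round_robin"):
        if session_key in ownership:          # ownership already assigned
            return ownership[session_key]
        if policy == "round_robin":
            owner = next(_round_robin)
        elif policy == "random":
            owner = random.choice(ELEMENTS)
        elif policy == "least_loaded":
            owner = min(ELEMENTS, key=lambda pe: load[pe])
        else:                                 # favor the requesting element
            owner = requester
        ownership[session_key] = owner
        return owner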
Upon receiving the work transfer message 807, the session processing functionality 804 of processing element PE 2 602, having been assigned ownership of the session, publishes or advertises one or more ownership indication(s) 808, 809 to the remaining processing elements PE 1 601 and PE 3 603 among which the work is distributed. In the alternative embodiment described above, in which the transaction-processed PDU and associated transaction metadata were not forwarded with ownership request message 806 from transaction processing and reorder functionality 803 to the ordering authority of last resort functionality 801, the transaction processing and reorder functionality 803 may forward the transaction-processed PDU and associated transaction metadata to the now-published session owner, processing element PE 2 602.
As with
The process 1100 includes receiving one or more PDUs relating to a session on a monitored network at one or more of the processing elements 601, 602 and 603 (step 1102). In practice, some PDUs for the session may be received by each of the processing elements 601, 602 and 603, although in some cases only two of the processing elements 602 and 603 might receive PDUs for the session. Different ones of the processing elements 601, 602 and 603 may receive different numbers or proportions of the PDUs for the session based on, for instance, load balancing or other considerations. Each PDU for the session that is received by one of the processing elements 601, 602 and 603 is marked with a time-ordering sequence reference. Such marking may be performed, for example, by front-end devices 205a-205b. Each processing element 601, 602 and 603 receiving at least one PDU for the session performs transaction processing on the received PDUs to generate transaction metadata based upon the received PDUs (step 1103).
Depending on whether the session owner for the session was previously indicated to processing elements within the distributed processing system (step 1104), processing elements 602 and 603 may forward transaction-processed PDUs and associated transaction metadata to the session owner processing element 601 (step 1105), with the session-owning processing element 601 concurrently aggregating and time-ordering the transaction-processed PDUs relating to the session (step 1106). Where the session-owning processing element 601 transaction-processed one or more PDUs relating to the session, the transaction-processed PDUs and transaction metadata are simply forwarded from the transaction processing and reorder functionality of the processing element 601 to the session processing functionality of the processing element 601. Moreover, the session-owning processing element 601 aggregates and time-orders the transaction-processed PDUs relating to the session even if the processing element 601 received no PDUs from the session for transaction processing. The session owner processing element 601 session processes the aggregated, time-ordered, transaction-processed PDUs to generate session metadata (step 1107).
When none of the processing elements 601, 602, or 603 has previously indicated ownership of the session (i.e., step 1101 did not occur at the start of the process 1100), the transaction-processed PDUs are forwarded to a processing element 601 designated as the ordering authority of last resort (step 1108), which assigns ownership of the session to one of the processing elements 601, 602 or 603 and forwards the received transaction-processed PDUs and associated transaction metadata to the new owner for the session (step 1109). The session owner then proceeds with aggregating, time-ordering, and session processing the transaction-processed PDUs to produce session metadata.
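A condensed, self-contained sketch of process 1100 at a single processing element is given below, with step numbers from the description; the data structures and helper logic are illustrative placeholders rather than an actual implementation:

    ownership = {}     # session key -> owning element (populated at step 1101, if any)
    aggregated = {}    # session key -> time-ordered transaction-processed PDUs

    def transaction_process(pdu):
        # Step 1103: generate transaction metadata for the received PDU.
        return {"bytes": len(pdu["payload"])}

    def handle_pdu(pdu, self_id, authority_pick=lambda session: "PE1"):
        meta = transaction_process(pdu)
        owner = ownership.get(pdu["session"])                                   # step 1104
        if owner is None:
            owner = ownership[pdu["session"]] = authority_pick(pdu["session"])  # steps 1108-1109
        if owner != self_id:
            return ("forward", owner, pdu, meta)                                # step 1105
        aggregated.setdefault(pdu["session"], []).append((pdu["seq_ref"], meta))
        aggregated[pdu["session"]].sort()                                       # step 1106: time-order
        return {"transactions": len(aggregated[pdu["session"]])}                # step 1107: session metadata

    handle_pdu({"session": "s1", "seq_ref": 2, "payload": b"rsp"}, self_id="PE1")
    handle_pdu({"session": "s1", "seq_ref": 1, "payload": b"req"}, self_id="PE1")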
The present disclosure provides a novel architecture for network monitoring devices. Previous solutions required a single processing element to produce metadata for all messages/flows within a session. The processing work may now be distributed across multiple processing elements. Additionally, the solutions of the present disclosure may be easily abstracted to a virtual environment, since processing elements are readily implemented as virtual network entities. Such abstraction would allow the solutions to scale up or down with available resources. The solutions of the present disclosure enable performance of a monitoring function using a cluster of processors, with the load on a set of processors scaling linearly with the volume of monitored data to produce both flow and session metadata.
The solutions of the present disclosure allow monitoring of protocols that are not readily load-balanced with respect to time using a tightly-coupled multiprocessor system. This satisfies the need to evenly utilize processing elements, allows higher monitoring capacity, and accurately creates metadata regarding the state of monitored data. Value is created by allowing a greater hardware density that will monitor large volumes of data, providing for an economy of scale.
Aspects of network monitoring system 103 and other systems depicted in the preceding figures may be implemented or executed by one or more computer systems. One such computer system is illustrated in
As illustrated, computer system 1200 includes one or more processors 1210a-1210n coupled to a system memory 1220 via a memory/data storage and I/O interface 1230. Computer system 1200 further includes a network interface 1240 coupled to memory/data storage and interface 1230, and in some implementations also includes an I/O device interface 1250 (e.g., providing physical connections) for one or more input/output devices, such as cursor control device 1260, keyboard 1270, and display(s) 1280. In some embodiments, a given entity (e.g., network monitoring system 103) may be implemented using a single instance of computer system 1200, while in other embodiments the entity is implemented using multiple such systems, or multiple nodes making up computer system 1200, where each computer system 1200 may be configured to host different portions or instances of the multi-system embodiments. For example, in an embodiment some elements may be implemented via one or more nodes of computer system 1200 that are distinct from those nodes implementing other elements (e.g., a first computer system may implement classification engine 310 while another computer system may implement routing/distribution control module 330).
In various embodiments, computer system 1200 may be a single-processor system including only one processor 1210a, or a multi-processor system including two or more processors 1210a-1210n (e.g., two, four, eight, or another suitable number). Processor(s) 1210a-1210n may be any processor(s) capable of executing program instructions. For example, in various embodiments, processor(s) 1210a-1210n may each be a general-purpose or embedded processor implementing any of a variety of instruction set architectures (ISAs), such as the x86, POWERPC, ARM, SPARC, or MIPS ISAs, or any other suitable ISA. In multi-processor systems, each of processor(s) 1210a-1210n may commonly, but not necessarily, implement the same ISA. Also, in some embodiments, at least one of processor(s) 1210a-1210n may be a graphics processing unit (GPU) or other dedicated graphics-rendering device.
System memory 1220 may be configured to store program instructions 1225 and/or data (within data storage 1235) accessible by processor(s) 1210a-1210n. In various embodiments, system memory 1220 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, solid state disk (SSD) memory, hard drives, optical storage, or any other type of memory, including combinations of different types of memory. As illustrated, program instructions and data implementing certain operations, such as, for example, those described herein, may be stored within system memory 1220 as program instructions 1225 and data storage 1235, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 1220 or computer system 1200. Generally speaking, a computer-accessible medium may include any tangible, non-transitory storage media or memory media, such as magnetic or optical media (e.g., disk, compact disk (CD), digital versatile disk (DVD), or DVD-ROM), coupled to computer system 1200 via interface 1230.
In an embodiment, interface 1230 may be configured to coordinate I/O traffic between processor 1210, system memory 1220, and any peripheral devices in the device, including network interface 1240 or other peripheral interfaces, such as input/output devices 1250. In some embodiments, interface 1230 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1220) into a format suitable for use by another component (e.g., processor(s) 1210a-1210n). In some embodiments, interface 1230 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of interface 1230 may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the functionality of interface 1230, such as an interface to system memory 1220, may be incorporated directly into processor(s) 1210a-1210n.
Network interface 1240 may be configured to allow data to be exchanged between computer system 1200 and other devices attached to network 100, such as other computer systems, or between nodes of computer system 1200. In various embodiments, network interface 1240 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel storage area networks (SANs); or via any other suitable type of network and/or protocol.
Input/output devices 1250 may, in some embodiments, include one or more display terminals, keyboards, keypads, touch screens, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 1200. Multiple input/output devices 1260, 1270, 1280 may be present in computer system 1200 or may be distributed on various nodes of computer system 1200. In some embodiments, similar input/output devices may be separate from computer system 1200 and may interact with one or more nodes of computer system 1200 through a wired or wireless connection, such as over network interface 1240.
As shown in
A person of ordinary skill in the art will appreciate that computer system 1200 is merely illustrative and is not intended to limit the scope of the disclosure described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated operations. In addition, the operations performed by the illustrated components may, in some embodiments, be performed by fewer components or distributed across additional components. Similarly, in other embodiments, the operations of some of the illustrated components may not be performed and/or other additional operations may be available. Accordingly, systems and methods described herein may be implemented or executed with other computer system configurations in which elements of different embodiments described herein can be combined, elements can be omitted, and steps can be performed in a different order, sequentially, or concurrently.
The various techniques described herein may be implemented in hardware or a combination of hardware and software/firmware. The order in which each operation of a given method is performed may be changed, and various elements of the systems illustrated herein may be added, reordered, combined, omitted, modified, etc. It will be understood that various operations discussed herein may be executed simultaneously and/or sequentially. It will be further understood that each operation may be performed in any order and may be performed once or repetitiously. Various modifications and changes may be made as would be clear to a person of ordinary skill in the art having the benefit of this specification. It is intended that the subject matter(s) described herein embrace all such modifications and changes and, accordingly, the above description should be regarded in an illustrative rather than a restrictive sense. Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.