The present invention relates generally to communication networks and, more particularly, to the virtualization of control software for communication devices.
The proliferation of hardware devices in computer networks has resulted in the virtualization of network devices such as firewalls, route-servers, network-access devices and network-management devices. However, the virtualization of network devices has its own set of problems. Several inefficiencies occur across communication processes within a node. Further, several inefficiencies are present in the connectivity between virtual communication systems, such as inefficiencies in the virtualization interface and in the routing process across systems. Further, the management of large numbers of virtual communication devices, such as virtual routers with several interfaces, poses a significant challenge.
In view of the foregoing, there is a need for a system and method for scaling and managing virtual communication devices.
Embodiments of the invention support the virtualization of control software for communication devices by providing a virtual engine framework, and a canonical interface (APIs) for a virtual communication environment. According to certain embodiments, a virtual communication environment runs communication processes collaboratively to support the virtualization of communication devices such as, by way of non-limiting example, firewalls, routers, switches, mobile environments, security gateways, storage area networks, or network access equipment.
The virtual communication environment allows for the creation, linking and management of virtual communication processes in order to create virtual communication devices that can span several modules within a process, across multiple processes in a machine, or across multiple processes in multiple machines. The virtual communication processes may exchange information via a variety of communication protocols. The virtual communication environment is sufficiently flexible to be collapsed to a single monolithic communication process or alternatively enhanced to suit the requirements of communicating entities across multiple target platforms. These and other embodiments of the invention are described in further detail herein.
According to certain embodiments, a virtualization of control software for communication devices is enabled by providing a virtual engine framework (“vrEngine framework”), and a canonical interface (APIs) for a virtual communication environment. According to certain embodiments, a virtual communication environment is an environment in which communication processes run in collaboration to support the virtualization of communication devices such as firewalls, routers, switches, mobile environments, security gateways, or network access equipment, for example.
The virtual communication environment allows for the creation, linking and management of virtual communication processes in order to create virtual communication devices that can span several modules within a process, across multiple processes in a machine, or across multiple processes in multiple machines. The virtual communication processes exchange information via a variety of communication protocols that can include but are not limited to TCP and Inter-Process Communication (IPC).
Such a virtual communication environment is general enough to be collapsed to a single monolithic communication process or it can be enhanced to suit the requirements of communicating entities across multiple target platforms.
The vrEngine framework for the virtual communication environment includes the following concepts:
A virtual communication device running for a particular application creates an instance of the vrEngine framework. At the heart of each virtual communication device is an application. Virtual communication devices utilize virtual applications, herein referred to as vrApplications. The vrApplication runs in a virtual process and controls the vrMgr via the vrMgrApp API. The vrApplication and its associated configuration support determine which application modules (vTasks) go in vrClients or Client vrMgrs. The vrApplication determines which vTasks need to communicate with other vTasks in other vrClients. The application coordinates the whole group of software processes to act as a set of virtual communication devices. A virtual communication device can operate on one physical device or across many physical devices. A vrMgrApplication utilizes the vrMgr to create and/or destroy vrClients or Client vrMgrs with the correct application tasks at the appropriate time for the application. The vrMgrApplication uses the vrMgr's vrMgrApp API to add, delete, and modify vrClients serving as communication end-point clients (vrClients) or as next-level application managers (Client vrMgrs) of groups of vrClients. The vrMgr establishes a communication link between vrClients (end-point or Client vrMgr) and allows information to flow between application tasks on different clients.
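By way of illustration, the following is a minimal sketch, in C, of how a vrApplication might drive the vrMgrApp API to add and delete clients. The function names, signatures, and client-identifier type are assumptions made for this example; the source describes the operations (add, delete, modify vrClients and Client vrMgrs) but not a concrete C interface.

```c
#include <stdio.h>

typedef int vrclient_id_t;              /* hypothetical client identifier */

static vrclient_id_t next_id = 1;

/* Stubbed vrMgrApp API entry point: a real vrMgr would spawn a process
 * and open a vrIPC connection; this stub only records the intent.       */
static vrclient_id_t vrmgr_add_client(const char *name, int is_client_vrmgr)
{
    printf("add %s \"%s\"\n",
           is_client_vrmgr ? "Client vrMgr" : "vrClient", name);
    return next_id++;
}

static void vrmgr_delete_client(vrclient_id_t id)
{
    printf("delete client %d\n", id);
}

int main(void)
{
    /* The vrApplication decides which vTasks go in which vrClients and
     * creates/destroys clients at the appropriate time for the app.     */
    vrclient_id_t boston = vrmgr_add_client("boston", 0);      /* end-point    */
    vrclient_id_t relay  = vrmgr_add_client("relay-east", 1);  /* Client vrMgr */

    vrmgr_delete_client(boston);
    vrmgr_delete_client(relay);
    return 0;
}
```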
Remote messaging processes encode the information into messages and pass the messages between a remote management process and the router/communication process. The remote process can communicate with the routing process via any communication method that can exchange messages.
Examples of applications that can be run in a virtual communication environment include but are not limited to:
A vrEngine environment may have vEngine modules for vrClients, Client vrMgrs and a vrMgr running an application. Each vEngine module may have vTasks that perform some communication function. An example of a vTask for a router application vEngine is the OSPF protocol. There are no limits on the physical instantiation of the virtual engines. There are no constraints on the interaction with network management processes for configuration or remote monitoring of fault, performance or accounting (security) functions.
The vrClient can be a virtual communication end-point or provide multiplexing services for a group of vrClients. Multiplexing services include but are not limited to: 1) relay services for configuration information, network management, or network protocol, 2) processing of devices or information common to all vrClients, or 3) delegation of services. A vrClient performing multiplexing services becomes a Client vrMgr.
The vrIPC protocol has messages to 1) register/de-register vrClients, 2) register/de-register tasks on clients, 3) resolve where a task is in the vrEngine environment (resolve/resolve-reply), 4) send messages to the vrMgr/vrClient, 5) allow a vrMgr or Client vrMgr to declare itself as a relay point, and 6) instruct the vrMgr to kill a client.
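One plausible C encoding of these vrIPC message types is shown below. Only the message names come from the description above; the enum itself and the numeric values are assumptions for illustration.

```c
/* vrIPC message types named in the text; values are illustrative. */
enum vripc_msg_type {
    VRIPC_REGISTER        = 1,  /* vrClient announces itself to the vrMgr   */
    VRIPC_DEREGISTER      = 2,  /* graceful departure of a vrClient         */
    VRIPC_REGISTER_TASK   = 3,  /* a vTask on a client becomes reachable    */
    VRIPC_DEREGISTER_TASK = 4,  /* a vTask goes away                        */
    VRIPC_RESOLVE         = 5,  /* where is task <engine, vr, task name>?   */
    VRIPC_RESOLVE_REPLY   = 6,  /* answer: the <pid, task_id> endpoint      */
    VRIPC_SEND            = 7,  /* application data relayed via the vrMgr   */
    VRIPC_DECLARE_RELAY   = 8,  /* vrMgr/Client vrMgr declares a relay role */
    VRIPC_KILL_CLIENT     = 9   /* vrMgr instructs a client to terminate    */
};
```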
vrMgr 150 includes a vrIPC 156, vrMgr API 157 and vTasks 150a that comprise vrMgrApplications 151, 152 and 153. Client vrMgr 140 includes a vrIPC 146, Client vrMgr API 147 and vTasks 140a that comprise vrMgrApplications 141, 142, 143, 144, and 145.
vrClient 100 includes a vrIPC 105, vrClient API 104 and vTasks 106 that comprise vrApplications 101, 102 and 103. Similarly, vrClient 110 includes a vrIPC 115, vrClient API 114 and vTasks 116 that comprise vrApplications 111, 112 and 113. vrClient 120 includes a vrIPC 125, vrClient API 124 and vTasks 126 that comprise vrApplications 121, 122 and 123.
vrClient 200 includes a vrIPC 205, vrClient API 204 and vTasks 206 that comprise vrApplications 201, 202 and 203. Similarly, vrClient 210 includes a vrIPC 215, vrClient API 214 and vTasks 216 that comprise vrApplications 211, 212 and 213. vrClient 220 includes a vrIPC 225, vrClient API 224 and vTasks 226 that comprise vrApplications 221, 222 and 223.
vrMgr 350 includes a vrIPC 355, vrMgr API 356 and vTasks 350a that comprise an AMI MIO configuration 351, an SNMP master agent 352, and a secure key PKI manager 353. Client vrMgr 340 includes vrIPCs 346, 349, Client vrMgr APIs 347, 348 and vTasks 340a that comprise firewall synchronization and keys 341, an OSPF route table 342, an AMI MIO interface configuration management and relay function 343, an SNMP agent manager relay 344, and secure key rotations 345.
vrClient 300 includes a vrIPC 306, vrClient API 305 and vTasks 307 that comprise an IP firewall 301, an OSPF 302, an MIO 303 and an SNMP sub-agent 304. Similarly, vrClient 310 includes a vrIPC 316, vrClient API 315 and vTasks 317 that comprise an IP firewall 311, an OSPF 312, an MIO 313 and an SNMP sub-agent 314. vrClient 320 includes a vrIPC 326, vrClient API 325 and vTasks 327 that comprise an OSPF 321, an MIO 322, secure keys 323 and an SNMP sub-agent 324. Communication between the vrMgr, Client vrMgr and vrClients is through their respective APIs and vrIPC protocols.
Illustrative, non-limiting examples of vrMgrApplications that utilize the vrMgr include the backbone router (BR), which supports an MPLS 2547 policy, and the Virtual Master Agent for sub-agents within virtual instances.
BR Application on vrMgr
The BR application is one embodiment of the vrEngine environment. The communicating entities in the BR are tasks in different routing processes running on the same target platform. The BR vrEngine environment includes a vrMgr (a new routing task providing the communication infrastructure) and a vrClient (a new routing task in each communicating routing process), along with the vrClient API for use by tasks within the routing process and the communication protocol between the vrMgr and the vrClients. Only one instance of the vrMgr is needed, and it is embedded in a specially marked routing process, “BR” (backbone router), used to provide backbone router services. Also, the protocol between the vrMgr and the vrClient within the “BR” is greatly simplified and is mapped to the inter-task communication facility (gMsg), as both the vrMgr and the vrClient of the “BR” are encased within the same process.
vrMgr 440 includes a vrIPC 446, vrMgr API 447 and vTasks 450 that comprise a VRF route table 441, an SNMP agent manager relay 442, an AMI MIO interface configuration management and relay function 443, policy 444, VR RIBs 445, BGP 451, ISIS 452, OSPF 453, MPLS 454, RSVP-TE 455 and LDP 456.
vrClient 400 includes a vrIPC 407, vrClient API 406 and vTasks 408 that comprise an eBGP 401, an OSPF 402, an MIO 403, a route table 404 and an SNMP sub-agent 405. Similarly, vrClient 410 includes a vrIPC 417, vrClient API 416 and vTasks 418 that comprise an eBGP 411, an OSPF 412, an MIO 413, a route table 414 and an SNMP sub-agent 415. vrClient 420 includes a vrIPC 427, vrClient API 426 and vTasks 428 that comprise an eBGP 421, an OSPF 422, an MIO 423, a route table 424 and an SNMP sub-agent 425.
The BR vrEngine includes the following concepts:
An external configuration manager (such as that of a customer) speaks to the BR via the MIO API. Configuration information that is relevant to virtual routers is relayed by the vrMgr to the proper instance's own MIO module.
Relay Configuration
“Axiom”—the configuration manager communicates with a single routing process (referred to as BR in this document) to achieve the correct operational semantics for the virtual routing environments.
Add Operation (Creation of New vr_Engines)
When the BR, while processing MIO messages or calls, encounters vr_engine statements, vr routing processes are spawned via the (task_)vfork/execle standard C library calls. The path and file names and the environment variables of the newly spawned vr routing process are those inherited by the BR when it was invoked via the shell. It is assumed that before any vr routing processes are spawned, the vrMgr listener task is appropriately set up. The vrMgr listener task is used in the inter-process communication between the BR and the vr routing processes. The process is identical if configured via an XML-based configuration manager. Initial settings for a newly spawned vr routing process (the protocol family and port number to use to contact the BR) are passed via command arguments (char *argv[ ]) to the execle function call to establish the vr routing mode.

It is the responsibility of the BR to feed the configuration information to the newly spawned vr routing process. Configuration information is fed via the inter-process communication mechanism (not the MIO port). The configuration information is fed to the configuration agent in binary TLV form via the inter-task communication method. Both the global configuration and the vr-routing-specific (vr_engine scoped) configuration are provided to the target vr routing process. It is assumed that binary coded TLV can be generated from the parsed MIO structures. In the mioSet( ) handler corresponding to the vr_engine, the configuration processing is undertaken for the vr process. A MIO structure walk is undertaken to supply the global settings, and a traversal within the context of the vr_engine supplies the specific information pertaining to that vr engine.

A general macro can be used to maintain a single binary of the routing software (VR_ENABLED( ), VR_MASTER( ) to refer to BR specifics, and VR_SLAVE( ) to refer to vr specifics). The default behavior is to execute like the BR, with the BR passing command line arguments to identical images so that they act as vr routing processes. This implies that the configuration agent is able to accept binary encoded TLV messages directly over its well-known AF_STREAM (TCP) port or via the vr-manager intercommunication protocol. The BR routing process uses the former method while the vr engine process utilizes the latter.
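The spawn step can be sketched as follows, assuming the BR re-executes its own routing image with command-line arguments that put the child into vr routing mode. The image path and the flag names are hypothetical; only the use of vfork/execle, the inherited environment, and the passing of the protocol family and port number via argv come from the description above.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;          /* child inherits BR's environment        */

static pid_t spawn_vr_engine(const char *image, const char *engine,
                             const char *family, const char *port)
{
    pid_t pid = vfork();
    if (pid == 0) {
        /* Child: exec the same routing image; argv tells it to act as a
         * vr routing process and how to contact the BR's vrMgr listener.
         * The "--vr-slave" style flags are assumptions for illustration. */
        execle(image, image,
               "--vr-slave", engine,
               "--vrmgr-family", family,
               "--vrmgr-port", port,
               (char *)NULL, environ);
        _exit(127);             /* exec failed; only _exit is safe here   */
    }
    return pid;
}

int main(void)
{
    pid_t pid = spawn_vr_engine("/usr/sbin/routingd", "boston",
                                "AF_INET", "7000");
    if (pid > 0) {
        printf("spawned vr_engine pid %ld\n", (long)pid);
        waitpid(pid, NULL, 0);
    }
    return 0;
}
```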
Delete Operation (Deletion of Existing vr_Engines)
On receiving XML messages to delete/disable an existing vr_engine, the mioDelete( ) handler sends notifications to the vr routing processes instructing them to terminate (or notifies the vr engine to undertake an orderly shutdown). As a result of the orderly shutdown of the vr_engine (vr processes), the exported routes and other dynamically created structures in the BR are freed and the inter-process communication socket or channel is closed. Finally, the terminating process calls the _exit standard C library function.
Modify Operations (Changes Made to an Existing vr_Engine)
The modify operation can be classified as two distinct operations: 1) modifications to the global configuration tree, and 2) modifications within the vr_engine scope sub-tree. Modifications to the global configuration tree are relayed (broadcast) to all currently running vr routing processes (vr_engines). The master BR routing process has a list of all vr_engines and a mapping of the process ids for use by the inter-process communication subsection. A modification within the scope of a vr_engine sub-tree translates to a relay of the binary TLV oriented messages to the appropriate vr_engine. Helper routines in MIO determine whether the add/modify/delete operation refers to the global context or falls within the scope of a vr_engine. Implementation includes providing a generic function in the MIO internals which analyzes the configuration binary TLV to determine whether vr is enabled or disabled and, if enabled, whether the operation is in server (BR) mode or vr mode. If operating in the server (BR) mode, the configuration is analyzed to decipher whether it lies within the global scope or is contained within the vr_engine sub-tree scope. Global scope changes are broadcast to every vr_engine (routing process) via the inter-process communication facility, while the appropriate vr_engine receives the vr_engine sub-tree scoped messages.
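A minimal sketch of this scope dispatch appears below. A real implementation would parse the binary TLV; here a string prefix stands in for the vr_engine sub-tree scope, and a printf stands in for the inter-process communication facility. All names and types are assumptions.

```c
#include <stdio.h>
#include <string.h>

/* A vr_engine record kept by the master BR: engine name plus the
 * process id used by the inter-process communication subsection.        */
struct vr_engine { char name[32]; int pid; struct vr_engine *next; };

/* Stub scope test: an "engine:" prefix marks vr_engine sub-tree scope.  */
static int tlv_engine_scope(const char *tlv, char *name_out, size_t n)
{
    const char *p = strchr(tlv, ':');
    if (!p) return 0;                       /* no prefix: global scope   */
    snprintf(name_out, n, "%.*s", (int)(p - tlv), tlv);
    return 1;
}

static void ipc_send(int pid, const char *tlv)      /* stands in for IPC */
{
    printf("relay to pid %d: %s\n", pid, tlv);
}

static void relay_modify(struct vr_engine *all, const char *tlv)
{
    char name[32];
    if (tlv_engine_scope(tlv, name, sizeof name)) {
        for (struct vr_engine *e = all; e; e = e->next)
            if (strcmp(e->name, name) == 0)
                ipc_send(e->pid, tlv);      /* scoped: one engine only   */
    } else {
        for (struct vr_engine *e = all; e; e = e->next)
            ipc_send(e->pid, tlv);          /* global: broadcast to all  */
    }
}

int main(void)
{
    struct vr_engine b = {"boston", 101, NULL};
    struct vr_engine a = {"albany", 102, &b};
    relay_modify(&a, "router-id 10.0.0.1"); /* global: both engines      */
    relay_modify(&a, "boston:ospf area 0"); /* scoped: boston only       */
    return 0;
}
```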
MIO Based Configuration of VR Engines
There are two methods of configuring vr engines via MIO. The first method relies on configuring each vr engine independently via MIO. The second method relies on configuring each vr instance by relaying the MIO messages through the vrMgr server “BR” instance. When the MIO relaying feature is used, the MIO commands meant for the vr engine can be steered via the vrMgr server. A new client of the vrMgr, vri_agt, aids in MIO relaying by sending the commands and recovering the responses. The vri_agt parcels the MIO commands and sends them via the vrMgr communication channel to the appropriate mioagt. The responses are parceled back in the reverse direction. The user exit function for this purpose is vri_agt.c::agt_engine_recv, which processes the reply from the Client vrMgr.
Delegated Client vrMgr Application
Examples of a delegated Client vrMgr application include but are not limited to 1) a Virtual Interface Manager application (see
vrMgr 550 includes a vrIPC 555, vrMgr API 556 and vTasks 550a that comprise an AMI MIO configuration 551 and a virtual interface master manager 552. Client vrMgr 540 includes vrIPCs 546, 549, Client vrMgr APIs 547, 548 and vTasks 540a that comprise firewall synchronization and keys 541, an OSPF route table 542, an AMI MIO interface configuration management and relay function 543, and interface and virtual interface processing 544.
vrClient 500 includes a vrIPC 506, vrClient API 505 and vTasks 507 that comprise an IP firewall 501, an OSPF 502, an MIO 503 and RT support with virtual interface 504. Similarly, vrClient 510 includes a vrIPC 516, vrClient API 515 and vTasks 517 that comprise an IP firewall 511, an OSPF 512, an MIO 513 and RT support with virtual interface 514. vrClient 520 includes a vrIPC 526, vrClient API 525 and vTasks 527 that comprise an IP firewall 521, an OSPF 522, an MIO 523, and RT support with virtual interface 524. Communication between the vrMgr, Client vrMgr and vrClients is through their respective APIs and vrIPC protocols.
vrMgr 650 includes a vrIPC 655, vrMgr API 656 and vTasks 650a that comprise an AMI MIO configuration 651, an SNMP master agent 652, and a secure key PKI manager 653. Client vrMgr 640 includes vrIPCs 646, 649, Client vrMgr APIs 647, 648 and vTasks 640a that comprise firewall synchronization and keys 641, an OSPF route table 642, an AMI MIO interface configuration management and relay function 643, an SNMP agent manager relay 644, and secure key rotations 645.
vrClient 600 includes a vrIPC 606, vrClient API 605 and vTasks 607 that comprise an IP firewall 601, an OSPF 602, an MIO 603 and secure keys 604. Similarly, vrClient 610 includes a vrIPC 616, vrClient API 615 and vTasks 617 that comprise an IP firewall 611, an OSPF 612, an MIO 613 and secure keys 614. vrClient 620 includes a vrIPC 626, vrClient API 625 and vTasks 627 that comprise an IP firewall 621, an OSPF 622, an MIO 623 and secure keys 624. Communication between the vrMgr, Client vrMgr and vrClients is through their respective APIs and vrIPC protocols.
VrApplications Running Without a vrMgr
A vrApplication can start vrClients without the management support of a vrMgr. External services remotely configure and monitor the vrClients in real time. Such vrClients may utilize a reduced set of the vrMgr API (just listen and modify, for example).
vrMgr And vrClient Normal Operations
The vrMgr coordinates the name resolution service for associated vrClients and vTasks. The vrMgr takes an active role in detecting the comings and goings of vrClients. If the vrMgr fails to detect the presence of a vrClient, the vrMgr reports the absence of the vrClient to the application and other associated vrClients.
The vrMgr opens a well-known listener for connection requests from vrClients. The sequenced delivery mechanism of messages in the underlying communication protocol is exploited to assure that the connection requests from vrClients (end-point or Client vrMgr) are heard.
If the vrMgr spawns a vrClient, the spawning occurs under the control of a vrApplication. After spawning, the vrMgr opens a connection to the vrClient over a communication protocol. Upon opening a connection to the vrMgr, the vrClient sends a REGISTER message via the vrIPC protocol. The vrMgr tracks the new existence of vrClients by the REGISTER message. Upon receiving the REGISTER message, the vrMgr stores the information about the connection.
The vrClient, upon bringing up an application task that requires communication with other tasks, will use the REGISTER_TASK message to indicate to the vrMgr that a given task is requesting communication with another task. The vrMgr, upon receiving the REGISTER_TASK message, will check the “pending resolve” list to determine if any vTask(s) from any vrClient has been waiting for this vTask by name. If so, the vrMgr sends the RESOLVE_REPLY message corresponding to each waiting task to the appropriate vrClient. Upon receiving the RESOLVE_REPLY message, the vrClient will allow messages to be sent via the SEND message to the vrMgr for forwarding.
The Client vrMgr, upon receiving a REGISTER_TASK message, sends a REGISTER_TASK message to the vrMgr. The Client vrMgr then determines if the corresponding task name is on the “pending resolve” list.
If a vTask in a first vrClient has data to send to a remote vTask in another vrClient, the first vrClient determines whether the remote vTask can be reached. The first vrClient performs this determination by searching its local cache of vTasks at remote vrClients (target VrMgrEndPoint_t entries). If the local cache does not have the remote task (a cache miss), the first vrClient sends the vrMgr a RESOLVE message before any SEND messages are sent.
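The resolve-before-send rule can be sketched as follows. The cache layout, function names, and fixed-size cache are assumptions for illustration; the flow (check the local cache of target VrMgrEndPoint_t entries, RESOLVE on a miss, SEND on a hit) comes from the description above.

```c
#include <stdio.h>
#include <string.h>

typedef struct { unsigned machine_id; int pid; int task_id; } VrMgrEndPoint_t;
typedef struct { char task_name[32]; VrMgrEndPoint_t ep; int valid; } cache_entry;

static cache_entry cache[8];                /* tiny illustrative cache   */

static int send_resolve(const char *task)   /* stub: RESOLVE to vrMgr    */
{
    printf("RESOLVE %s -> vrMgr\n", task);
    return 0;
}

static int send_data(const VrMgrEndPoint_t *ep, const char *data) /* stub */
{
    printf("SEND to pid %d task %d: %s\n", ep->pid, ep->task_id, data);
    return 0;
}

/* Check the local cache; on a miss, a RESOLVE must precede any SEND.    */
static int vtask_send_to(const char *task, const char *data)
{
    for (size_t i = 0; i < sizeof cache / sizeof cache[0]; i++)
        if (cache[i].valid && strcmp(cache[i].task_name, task) == 0)
            return send_data(&cache[i].ep, data);   /* cache hit         */
    return send_resolve(task);                      /* cache miss        */
}

int main(void)
{
    cache[0] = (cache_entry){"ospf", {1, 4242, 7}, 1};
    vtask_send_to("ospf", "hello");    /* hit: SEND goes out             */
    vtask_send_to("bgp",  "hello");    /* miss: RESOLVE is issued first  */
    return 0;
}
```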
vTasks On vrClients Sending Data to Remote vTasks
A given vTask allocates data space for a message, populates the data space with the message, and sends the message to the vrMgr. The vrMgr relays the information to the target vrClient. After the message is sent, the data space is freed.
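The data-space lifecycle just described, as a minimal sketch; the vr_send_to_mgr relay is stubbed and its name is an assumption.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int vr_send_to_mgr(const void *buf, size_t len)   /* stub relay   */
{
    printf("SEND %zu bytes via vrMgr\n", len);
    return 0;
}

static int vtask_send(const char *payload)
{
    size_t len = strlen(payload) + 1;
    char *space = malloc(len);           /* 1) obtain data space         */
    if (!space) return -1;
    memcpy(space, payload, len);         /* 2) populate with the message */
    int rc = vr_send_to_mgr(space, len); /* 3) vrMgr relays to target    */
    free(space);                         /* 4) free after the send       */
    return rc;
}

int main(void)
{
    return vtask_send("hello remote vTask");
}
```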
There are two types of errors: errors detected at the originating vrClient, which are returned as error codes to the corresponding vrClientSend API call, and errors detected at the vrMgr, which result in notification messages relayed back to the affected vrClients (see Protocol Message Handling below).
If the vrClient gracefully terminates, the vrClient will send a DEREGISTER message to signal the end of the connection. The vrMgr may force the vrClient to terminate with a “KILL_CLIENT” message.
vrEngines
The vrEngine module allows creation of multiple vrEngine environments. Each vrEngine is identified by an engine name. The vrEngine has associated system logging, system tracking and a remote configuration interface. The vrEngine allows for a configurable initial vrMgr and can start in one of two modes: vrMgr relay or Client vrMgr. The vrEngine spawns the initial vrMgr.
vEngines
The vEngine supports running vrApplications as vTasks in a virtual communication environment. To support such vrApplications, vTasks use a co-operative multi-tasking environment that has the following features:
According to certain embodiments, the AMI interface can be used for remote configuration management (see associated patent application on remote construction methodology for Network Management of Communication Devices for configuration and Process Critical Network Monitoring).
The vEngines support code that creates vrClients, Client vrMgrs, and vrMgrs. vTasks can be associated with vrClients and vrMgrs. If a vTask is associated with a vrClient, then the vrClient contains a link back to a vrMgr. If that vrMgr is a Client vrMgr, then the Client vrMgr has a link to a vrMgr. The original vrMgr has a link to the vrEngine. The vEngines also support code for linking vrClients to vrMgrs and Client vrMgrs to vrMgrs over the vrIPC protocol. The vEngine can search for a particular vEngine on behalf of a vTask using the vri_agt_hunt( ) routine. The vrIPC protocol is started using the vr_agt_init( ) routine.
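A usage sketch for the two routines named above appears below; the source gives only the routine names, so the prototypes and return conventions here are assumptions, and the bodies are stubs.

```c
#include <stdio.h>

/* Assumed prototype: start the vrIPC protocol.                          */
static int vr_agt_init(void)
{
    printf("vrIPC protocol started\n");
    return 0;
}

/* Assumed prototype: search for a vEngine by name on behalf of a vTask. */
static int vri_agt_hunt(const char *engine_name)
{
    printf("hunting for engine \"%s\"\n", engine_name);
    return 1;                            /* pretend the engine was found */
}

int main(void)
{
    if (vr_agt_init() != 0)
        return 1;
    if (vri_agt_hunt("boston"))
        printf("engine located; vTasks may now resolve and send\n");
    return 0;
}
```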
For a VR-router vrApplication, the vTasks support packet forwarding via a Virtual Router Forwarding Table and a Virtual Routing Table that is unique to the virtual router.
The vEngines support canonical modules for creating, deleting, and locating vTasks within the vrEngine environment. Such canonical modules include:
Once the vTasks locate their remote peer vTasks, the vEngines support canonical code that utilizes the vrIPC protocol to send information to remote vTasks. Modules for sending information to remote vTasks include:
Upon start-up, the vrMgr allocates a data structure per vrMgr (vrmgr_node) and allocates memory to support data structures related to clients. The vrMgr opens a well-known listener using the IPC protocol and waits for connect requests from vrClients.
The vrApplication that is associated with the vrMgr controls the manner in which vrClients are spawned.
The vrMgr keeps track of configured vrClients (end-point or vrMgr), spawned clients, and clients that are receiving configuration via a relay. The vrMgr spawns vrClients based on the vrApplication configuration and the run-time configuration.
As an example of a specific vrApplication configuration, the BR-virtual router is created based on the routing software: the CLI command “context-router” references the vrMgr, and the “br-virtual-router boston” command causes the vrClient “boston” to be spawned. The br-virtual-router boston points to the vrMgr.
Inter-process communication messages flow through the vrMgr. The vrMgr is responsible for coordinating the name resolution service. The vrMgr detects the comings and goings of vrClients and notifies vrClients when a particular vrClient or a vrClient's task has gone away. Further, the vrMgr can provide a central clearinghouse (multiplex/de-multiplex) for messages destined to various vrClients. The vrMgr also possesses the complete list of the tasks registered to become recipients of messages. Because the inter-process communications flow through the vrMgr, the vrMgr is a good place for tracing/debugging the flow of the inter-process messages. The vrClient that is in the same routing process as the vrMgr (in the BR) communicates with the vrMgr using the inter-task communication model (gMsg).
The vrMgr uses a “server” flag to indicate if the vrMgr is a relay server for other vrClients. If the vrMgr is a Client vrMgr, then the vrMgr tracks both the up-level vrMgr (the Client vrMgr's server) and the down-level vrClients (the Client vrMgr's clients).
The vrMgr uses a res_pend_list to keep track of the tasks that have requested communication. Each task is tracked by engine name, task name, task id, and requesting process id (for multi-process systems).
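A plausible shape for a res_pend_list entry, built from exactly the fields listed above, together with the pending-resolve scan performed when a REGISTER_TASK arrives; the struct layout and function names are assumptions.

```c
#include <stdio.h>
#include <string.h>

struct res_pend {
    char engine_name[32];     /* engine the requester named              */
    char task_name[32];       /* task being waited on, by name           */
    int  task_id;             /* requesting task's id                    */
    int  req_pid;             /* requesting process id (multi-process)   */
    struct res_pend *next;    /* singly linked pending-resolve list      */
};

/* Stub: RESOLVE_REPLY back to the waiting requester.                    */
static void send_resolve_reply(const struct res_pend *r, int pid, int task_id)
{
    printf("RESOLVE_REPLY(%s,%s,pid=%d,task=%d) -> requester pid %d\n",
           r->engine_name, r->task_name, pid, task_id, r->req_pid);
}

/* On REGISTER_TASK, answer every pending resolve that matches by name.  */
static void on_register_task(struct res_pend *list, const char *engine,
                             const char *task, int pid, int task_id)
{
    for (struct res_pend *r = list; r; r = r->next)
        if (strcmp(r->engine_name, engine) == 0 &&
            strcmp(r->task_name, task) == 0)
            send_resolve_reply(r, pid, task_id);
}

int main(void)
{
    struct res_pend waiter = {"boston", "ospf", 3, 1001, NULL};
    on_register_task(&waiter, "boston", "ospf", 4242, 7);
    return 0;
}
```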
The vrMgr allows:
The vrMgrApp API tasks include:
Presence Detection by vrMgr
The server vrMgr detects the closure of the communication socket of a Client vrMgr and notifies the active registrants. The detection of the closure of the communication socket by a Client vrMgr results in the termination of the encasing routing client process. Such a termination uses the vrMgr exit functions below. The vrMgr detects the presence of the vrClients or Client vrMgrs via vrmgr_connect or vrmgr_accept.
vrMgr Exit Functions:
vrMgr Spawning and Connections:
vrClient modules utilize vEngine routines. The vrClient keeps track of:
Common methods in the vEngine track the number of messages and bytes the vrClient originates.
The vrClient API includes the following tasks:
Client vrMgr Modules
The code specific to Client vrMgr modules utilizes the “relay server” function. There are routines for a vrApplication's Client vrMgr to obtain its server name (get_server_name( )), its engine name (get_my_vr_engine_name( )), or particular vrClients. The search for vrClients can be by process id, name or
An application on either a Client vrMgr or a normal vrMgr can terminate a client via the following call:
Message Processing:
VrMgrEndPoint is a tuple containing: machine_id, pid, task_id.
VRMsg_t—a wrapper around the message data structure used with this framework.
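In C, these two structures might be transcribed as follows; the VrMgrEndPoint_t field names come from the tuple above, while the field widths and the VRMsg_t layout beyond its role as a wrapper are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t machine_id;      /* which physical machine                  */
    int32_t  pid;             /* process id of the hosting client        */
    int32_t  task_id;         /* vTask id within that process            */
} VrMgrEndPoint_t;

/* Assumed wrapper layout: the source says only that VRMsg_t wraps the
 * message data structure used with this framework.                      */
typedef struct {
    VrMgrEndPoint_t dest;     /* target endpoint                         */
    VrMgrEndPoint_t src;      /* originating endpoint                    */
    size_t          len;      /* length of the wrapped message           */
    void           *data;     /* the application message itself          */
} VRMsg_t;
```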
Protocol Message Format:
The vrIPC message format is a protocol header (dest, source, length, type) followed by type-specific data. The header format is defined by:
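A plausible C rendering of this header, assuming four 32-bit fields (the source names the fields but not their widths):

```c
#include <stdint.h>

typedef struct {
    uint32_t dest;      /* destination endpoint/client identifier        */
    uint32_t source;    /* originating endpoint/client identifier        */
    uint32_t length;    /* length of the type-specific data that follows */
    uint32_t type;      /* one of the vrIPC message types                */
} vripc_hdr_t;          /* followed on the wire by `length` bytes        */
```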
Table 1 describes vrIPC messages.
Format of Resolve Message
Format of REGISTER_TASK, DEREGISTER_TASK
Format of REGISTER/DEREGISTER
Table 2 describes the direction of vrIPC protocol messages.
Protocol Message Handling
1. REGISTER(vr_engine_name, pid)—This message flows from a vrClient to the vrMgr upon a successful connection establishment.
2. DEREGISTER(vr_engine_name, pid)—This message flows from a vrClient to the vrMgr before a graceful shutdown. It is relayed to other vrClients by the vrMgr to flush the cache of target VrMgrEndPoint_t with the pid in question. After the detection of an ungraceful termination of a vrClient by the vrMgr, this message is automatically relayed to all remaining active vrClients by the vrMgr.
3. REGISTER_TASK(vr_name, task_name, pid, task_id)—Upon the initial connection establishment between a vrClient and the vrMgr, a message is generated by the vrClient for each task registered with the vrClient. As new tasks are registered over time, a corresponding message is generated by the vrClient. Upon receipt of such a message by the vrMgr, RESOLVE_REPLY messages might be generated by the vrMgr to other active vrClients that have a pending matching RESOLVE request queued at the vrMgr corresponding to <vr_engine_name, vr_name, task_name>.
4. DEREGISTER_TASK(pid, task_id)—As tasks deregister with the vrClient, the DEREGISTER_TASK messages are generated by the vrClient and directed to the vrMgr. This message is also relayed back to other vrClients to flush out their cache of target VrMgrEndPoint_t associated with <pid, task_id>. The frequency of such a message is very low.
5. SEND(pid, task_id, data, . . . )—The bulk of the messages from vrClients to the vrMgr are SEND messages. The SEND message is relayed to the destined target vrClient by the vrMgr.
A write error at the vrClient results in an entire cache purge and closure of the connection socket with the vrMgr, followed by a retry to re-establish a connection with the vrMgr. A write error on the socket to the destination vrClient by the vrMgr results in a closure of that socket and a DEREGISTER message to all the other active vrClients. The target VrMgrEndPoint_t is checked at both the originating vrClient and the vrMgr. A failure at the vrClient results in an error code returned to the corresponding vrClientSend API call. A failure at the vrMgr results in a DEREGISTER_TASK message directed to the originating vrClient, which results in a cache purge of that entry. Subsequent vrClientSend calls to the same destination result in error codes being returned to the invoking task.
6. RESOLVE(vr_engine_name, vr_name, task_name)—This message is generated from the vrClient to the vrMgr before any SENDs are performed. The vrClient first searches its local cache of target VrMgrEndPoint_t entries; the message is directed to the vrMgr only when there is a cache miss.
7. RESOLVE_REPLY(vr_engine_name, vr_name, task_name, pid, task_id)—This message is sent in response to a RESOLVE message generated by a vrClient. The reply can be sent immediately, when the vrMgr finds the target VrMgrEndPoint_t, or later, after receipt of a REGISTER_TASK notification from a vrClient.
A task observes only SEND and RESOLVE_REPLY messages in its inter-task message queue; these are passed on by the local vrClient task within its routing pid.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application claims the benefit of U.S. Provisional Patent Application No. 60/567,358, filed Apr. 30, 2004, entitled “VIRTUALIZATION OF CONTROL SOFTWARE FOR COMMUNICATION DEVICES,” by Hares et al., which is hereby incorporated by reference in its entirety. This application further incorporates by reference in their entirety each of the following U.S. patent applications: U.S. patent application Ser. No. 10/648,141, filed on Aug. 25, 2003 (Atty. Docket No. 41434-8001.US00); U.S. patent application Ser. No. 10/648,146, filed on Aug. 25, 2003 (Atty. Docket No. 41434-8002.US00); U.S. patent application Ser. No. 10/648,758, filed on Aug. 25, 2003 (Atty. Docket No. 41434-8003.US00); U.S. Provisional Patent Application No. 60/567,192, filed Apr. 30, 2004, entitled “REMOTE MANAGEMENT OF COMMUNICATION DEVICES,” by Hares et al. (Atty. Docket No. 41434-8010.US00); and U.S. patent application Ser. No. ______, filed May 2, 2005, entitled “REMOTE MANAGEMENT OF COMMUNICATION DEVICES,” by Susan Hares et al. (Atty. Docket No. 41434-8010.US01).