Certain embodiments of the present invention provide techniques for providing reliable and efficient access to non-transactional resources, using transactional memory.
Traditionally, in a computing system, even though multiple processes may simultaneously or near simultaneously request access to a hardware component, such as an application-specific integrated circuit (ASIC), only one of the processes may acquire a lock and communicate with the hardware component at a time, thereby serializing access to the hardware component. The other processes must wait for the first process to release the lock before they can make progress. In most instances, the other processes may merely idle and poll the lock semaphore to check if the first process has relinquished the lock, so that the respective processes can make further progress. The wait for acquiring the lock by the other processes may be exacerbated by the fact that performing input/output (I/O) operations to hardware components may take significantly longer to complete relative to other processor operations. Furthermore, the first process may hold the memory locked for an indeterminate period of time, until the process has completed several operations to the hardware component, such as writing to a graphics card or an Ethernet card.
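By way of illustration only, the following minimal C sketch shows the lock-based access pattern described above, in which contending processes idle and poll until the lock is released. The device_mmio pointer and register layout are hypothetical placeholders for memory-mapped hardware registers, not part of this disclosure.

    /*
     * Sketch of serialized, lock-based device access using POSIX threads.
     * device_mmio is assumed to be mapped to the device elsewhere.
     */
    #include <pthread.h>
    #include <sched.h>
    #include <stdint.h>

    static pthread_mutex_t hw_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile uint32_t *device_mmio;  /* hypothetical mapped registers */

    void write_to_device(uint32_t reg, uint32_t value)
    {
        /* Contending processes spin here, polling until the holder releases
         * the lock. */
        while (pthread_mutex_trylock(&hw_lock) != 0)
            sched_yield();               /* idle and re-check the lock */

        device_mmio[reg] = value;        /* slow I/O write while lock is held */

        pthread_mutex_unlock(&hw_lock);
    }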
Moreover, in the event of a failure during the execution of a process writing to a hardware component, known techniques do not provide support for recovering from such an error. For example, when a process belonging to an application is performing multiple writes to an ASIC, if the process encounters an error and fails, in prior art implementations the process may crash, possibly resulting in a catastrophic shutdown of the system. If errors occur during the write process, the writes to the ASIC may be left in an indeterminate state, and recovery may be difficult.
Certain embodiments of the present invention provide techniques for providing reliable and efficient access to non-transactional resources, using transactional memory.
In certain embodiments, a transactional memory system may be implemented for allowing multiple processes to continue executing and modifying memory associated with the hardware component concurrently. As described in further detail, in certain embodiments, various processes can modify non-overlapping memory associated with the hardware component without blocking each other. Furthermore, in the event of an error, techniques described herein may prevent catastrophic shutdowns by enabling a group of already executed operations to be rolled back without committing any changes to the hardware component itself.
In certain embodiments, an example device may include a memory and one or more processing entities. The processing entities may be configurable to execute a first transaction comprising one or more write operations to a first memory address, and a second transaction comprising one or more write operations to a second memory address. The first memory address and the second memory address may be mapped to the same controller for a hardware component. The one or more processing entities may commence execution of the second transaction after the first transaction starts execution and before the completion of the first transaction. The device may also include a transactional memory system configurable to communicate data written to the first memory address from the first transaction to the controller upon completion of the first transaction, and communicate data written to the second memory address from the second transaction to the controller upon completion of the second transaction. In some aspects, the execution of the one or more write operations to the first memory address from the first transaction does not block the execution of the one or more write operations to the second memory address from the second transaction and vice versa. In some implementations, the device is a network device.
In certain embodiments of the example device, the one or more processing entities may be further configurable to commence execution of a third transaction after the first transaction starts execution and before the completion of the first transaction, the third transaction comprising one or more write operations targeted to the first memory address, and the transactional memory system may be further configurable to block the execution of the third transaction until the first transaction completes and the first memory address is updated. In some instances, the first transaction may execute from a first process and the second transaction may execute from a second process.
In certain embodiments, a portion of memory is in a first state prior to commencing execution of operations from the first transaction by the one or more processing entities, wherein, in response to a failure event, the one or more processing entities are further configurable to stop execution of the first transaction after execution of only a subset of the operations of the first transaction, and the transactional memory system may be further configurable to cause the state of the portion of memory to return to the first state.
In one implementation of the example device, causing the state of the portion of memory to be in the first state prior to commencement of the execution of the transaction by the second processing entity may include tracking changes to the portion of the memory by the first processing entity during the execution of the transaction on the first processing entity, and reverting the changes back to the first state prior to commencement of the execution of the transaction by the second processing entity.
In another implementation of the example device, causing the state of the portion of memory to be in the first state prior to commencement of the execution of the transaction by the second processing entity may include buffering changes directed to the portion of the memory in a memory buffer during execution of the transaction, and discarding the buffered changes in the memory buffer.
In certain embodiments, an example method for performing embodiments of the invention may include: executing, by one or more processing entities, a first transaction comprising one or more write operations to a first memory address; executing, by the one or more processing entities, a second transaction comprising one or more write operations to a second memory address, wherein the one or more processing entities commence execution of the second transaction after the first transaction starts execution and before the completion of the first transaction; communicating data written to the first memory address from the first transaction to a controller upon completion of the first transaction; and communicating data written to the second memory address from the second transaction to the controller upon completion of the second transaction. In some aspects, the execution of the one or more write operations to the first memory address from the first transaction does not block the execution of the one or more write operations to the second memory address from the second transaction and vice versa.
Furthermore, the example method may include executing a third transaction, by the one or more processing entities, after the first transaction starts execution and before the completion of the first transaction, the third transaction comprising one or more write operations targeted to the first memory address; and blocking the execution of the third transaction until the first transaction completes and the first memory address is updated. In some instances, the first transaction may execute from a first process and the second transaction may execute from a second process.
In certain embodiments, the example method comprises stopping execution of the first transaction, by the one or more processing entities, in response to a failure event, and causing the state of a portion of memory to be in a first state, wherein the portion of memory is in the first state prior to commencing execution of the first transaction.
In one implementation of the example method, causing the state of the portion of memory to be in the first state prior to commencement of the execution of the transaction by the second processing entity may include tracking changes to the portion of the memory by the first processing entity during the execution of the transaction on the first processing entity, and reverting the changes back to the first state prior to commencement of the execution of the transaction by the second processing entity.
In another implementation of the example method, causing the state of the portion of memory to be in the first state prior to commencement of the execution of the transaction by the second processing entity may include buffering changes directed to the portion of the memory in a memory buffer during execution of the transaction, and discarding the buffered changes in the memory buffer.
In certain embodiments, an example non-transitory computer-readable storage medium may comprise instructions executable by one or more processing entities. The instructions may comprise instructions to execute a first transaction comprising one or more write operations to a first memory address; execute a second transaction comprising one or more write operations to a second memory address, wherein the one or more processing entities commence execution of the second transaction after the first transaction starts execution and before the completion of the first transaction; communicate data written to the first memory address from the first transaction to a controller upon completion of the first transaction; and communicate data written to the second memory address from the second transaction to the controller upon completion of the second transaction.
In certain embodiments, an example apparatus may include means for performing embodiments of the invention, which may include means for executing, by one or more processing entities, a first transaction comprising one or more write operations to a first memory address; means for executing, by the one or more processing entities, a second transaction comprising one or more write operations to a second memory address, wherein the one or more processing entities commence execution of the second transaction after the first transaction starts execution and before the completion of the first transaction; means for communicating data written to the first memory address from the first transaction to a controller upon completion of the first transaction; and means for communicating data written to the second memory address from the second transaction to the controller upon completion of the second transaction. In some aspects, the execution of the one or more write operations to the first memory address from the first transaction does not block the execution of the one or more write operations to the second memory address from the second transaction and vice versa.
The foregoing has outlined, rather broadly, the features and technical advantages of examples in order that the detailed description that follows can be better understood. Additional features and advantages will be described hereinafter. The conception and specific examples disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Such equivalent constructions do not depart from the spirit and scope of the appended claims. Features which are believed to be characteristic of the concepts disclosed herein, both as to their organization and method of operation, together with associated advantages, will be better understood from the following description when considered in connection with the accompanying figures. Each of the figures is provided for the purpose of illustration and description only and not as a definition of the limits of the claims.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.
In certain implementations, the controller 116 of the hardware component 114 may be accessible from one or more processes through memory-mapped I/O. In such an implementation of the system, a write operation to memory reserved for the hardware component may be translated by the system into an I/O operation to the controller 116 of the hardware component 114. A process vying for access to the hardware component 114 may request exclusive access to the memory mapped to the hardware component 114.
In an example scenario, the first process 102 may request access to the hardware component 114 by attempting to acquire a lock on the memory mapped to the hardware component 114.
Once the lock is acquired by the first process 102 at step 122, the module 108 may make an ioctl() function call to the hardware driver 112 at step 124 to access the hardware component 114. In some instances, the hardware driver may hold the memory locked for an extended period of time. For example, the driver may hold the memory locked while performing several operations (step 126) on the hardware component 114.
Even though the first process 102, the second process 104 and the third process 106 may simultaneously or near simultaneously request access to the hardware component 114, only one of the three processes may acquire the lock to communicate with the hardware component 114, thereby serializing access to the hardware component 114. Therefore, the second process 104 and the third process 106 wait for the first process 102 to release the lock before they can make progress. In most instances, the second process 104 and the third process 106 may merely idle and poll the lock semaphore to check if the first process 102 has relinquished the lock, so that the respective processes can make further progress. The wait for acquiring the lock by the second process 104 and the third process 106 may be exacerbated by the fact that performing input/output (I/O) operations to hardware components may take significantly longer to complete relative to other processor operations. Furthermore, the first process 102 may hold the memory locked for an indeterminate period of time, until the process has completed several operations to the hardware component, such as writing to a graphics card or an Ethernet card.
Moreover, in the event of a failure during the execution of a process writing to a hardware component 114, known techniques do not provide support for recovering from such an error. For example, when a process performing multiple writes to the hardware component 114 encounters an error and fails, the process may crash, possibly resulting in a catastrophic shutdown of the system, and the writes to the hardware component 114 may be left in an indeterminate state, making recovery difficult.
In certain embodiments, a transactional memory system may be implemented for allowing multiple processes to continue executing and modifying memory associated with the hardware component 114 concurrently. As described in further detail below, in certain embodiments, various processes can modify non-overlapping memory associated with the hardware component 114 without blocking each other. Furthermore, in the event of an error, techniques described herein may prevent catastrophic shutdowns by enabling a group of already executed operations to be rolled back without committing any changes to the hardware component itself.
In certain embodiments, the transactional memory system ensures the consistency of data stored in the transactional memory at a transaction level, where a transaction may comprise one or more operations. The transactional memory system guarantees that changes to the transactional memory caused by write and/or update operations are kept consistent at the level, or atomicity, of a transaction. The transactional memory system treats a transaction as a unit of work; either a transaction completes or it does not. The execution of a transaction is considered to be completed if all the sequential operations defined for that transaction are completed. The execution of a transaction is considered not to be completed, i.e., considered to be incomplete, if all the sequential operations defined for that transaction are not completed. In terms of software code, a transaction represents a block of code, and the transactional memory system ensures that this block of code is executed atomically. The transactional memory system ensures that changes to memory resulting from execution of the operations in a transaction are committed to the transactional memory only upon completion of the transaction. If a transaction starts execution but does not complete, i.e., all the operations in the transaction do not complete, the transactional memory system ensures that any memory changes made by the operations of the incomplete transaction are not committed to the transactional memory. Accordingly, the transactional memory system ensures that an incomplete transaction does not have any impact on the data stored in the transactional memory. The transactional memory system thus ensures the consistency of the data stored in the transactional memory at the boundary or granularity of a transaction.
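By way of illustration only, the following minimal C sketch shows transaction-level atomicity using the software transactional memory extension provided by GCC (compiled with -fgnu-tm); the account variables are illustrative placeholders and not part of this disclosure.

    /* Compile with: gcc -fgnu-tm transfer.c
     * Either both updates commit together, or neither does; a transaction
     * that does not complete leaves from_acct and to_acct unchanged.
     */
    static long from_acct = 100;
    static long to_acct;

    void transfer(long amount)
    {
        __transaction_atomic {
            from_acct -= amount;   /* first operation of the transaction  */
            to_acct   += amount;   /* second operation of the transaction */
        }   /* commit point: both changes become visible here, at once */
    }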
A transactional memory system may use different techniques to ensure that any memory changes caused by operations of a transaction are committed to the transactional memory only upon completion of the transaction, or alternatively, to ensure that any memory changes caused by operations of an incomplete transaction are not committed to the transactional memory.
For illustration purposes, the following description refers to an example computing device 200 comprising one or more processing entities, such as processing entity 202, and associated memory 206.
For example, in one embodiment, a processing entity of computing device 200 may be a physical processor, such as an Intel, AMD, or TI processor, or an ASIC. In another embodiment, a processing entity may be a group of processors. In another embodiment, a processing entity may be a processor core of a multicore processor. In yet another embodiment, a processing entity may be a group of cores from one or more processors. A processing entity can be any combination of a processor, a group of processors, a core of a processor, or a group of cores of one or more processors.
In certain embodiments, the processing entity may be a virtual processing unit or a software partitioning unit such as a virtual machine, hypervisor, software process or an application running on a processing unit, such as a physical processing unit, core or logical processor. For example, the one or more processing entities may be virtual machines executing or scheduled for execution on one or more physical processing units, one or more cores executing within the same physical processing unit or different physical processing units, or one or more logical processors executing on one or more cores on the same physical processing unit or separate physical processing units.
In certain implementations, each processing entity may have a dedicated portion of memory assigned to or associated with the processing entity. In one embodiment, the memory assigned to a processing entity is random access memory (RAM). Non-volatile memory may also be assigned in other embodiments.
Software instructions (e.g., software code or program) that are executed by a processing entity may be loaded into the memory 206 coupled to that processing entity 202. This software may be, for example, loaded into the memory upon initiation or boot-up of the processing entity.
In certain embodiments, a transactional memory system (TMS) 210 is provided to facilitate non-blocking execution of operations to hardware components from a process executing on the processing entity 202.
Memory 206 and transactional memory 212 may be physically configured in a variety of ways without departing from the scope of the invention. For example, memory 206 and transactional memory 212 may reside on one or more memory banks connected to the processing entities using shared or dedicated busses in computing device 200.
Transactional memory system 210 may be implemented using several software or hardware components, or combinations thereof. In one embodiment, the infrastructure 213 may be implemented in software, for example, using the software transactional memory support provided by GNU C Compiler (GCC) (e.g., the libitm runtime library provided by GCC 4.7). Infrastructure 213 may also be implemented in hardware using transactional memory features provided by a processor. Transactional memory system 210 may also be provided using a hybrid (combination of software and hardware) approach.
In certain embodiments, a process executed by a processing entity may make use of transactional memory system 210 by linking to and loading a runtime library 232 (e.g., the libitm library provided by GCC 228) that provides various application programming interfaces (APIs) that make use of transactional memory system 210. Operations that belong to a transaction may make use of the APIs provided by such a library such that any memory operations performed by these operations use transactional memory system 210 instead of non-transactional memory. Operations that are not intended to use transactional memory system 210 may use APIs provided by non-transactional libraries such that any memory operations performed using these non-transactional memory APIs use data space 220 instead of transactional memory system 210.
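As an illustrative sketch only, the C fragment below shows how code compiled with GCC's -fgnu-tm option routes memory accesses made inside a __transaction_atomic block through the libitm runtime, while accesses outside the block go to ordinary, non-transactional memory. The counters and the bump() helper are hypothetical names, not part of this disclosure.

    /* Compile and link with: gcc -fgnu-tm route.c
     * GCC emits calls into the libitm runtime for accesses made inside the
     * __transaction_atomic block; accesses outside it are ordinary stores.
     */
    #include <stdio.h>

    static int shared_counter;   /* updated only through transactions       */
    static int stats_counter;    /* ordinary, non-transactional data        */

    __attribute__((transaction_safe))
    static void bump(int *p)     /* callable from inside a transaction      */
    {
        (*p)++;
    }

    void do_work(void)
    {
        stats_counter++;         /* non-transactional: takes effect at once */

        __transaction_atomic {
            bump(&shared_counter);   /* instrumented: handled by libitm     */
        }
    }

    int main(void)
    {
        do_work();
        printf("%d %d\n", shared_counter, stats_counter);
        return 0;
    }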
In certain implementations, transactional memory system 210 uses TM logs 214 to guarantee consistency of data stored in transactional memory 212 on a per-transaction basis. In one embodiment, for a sequence of operations in a transaction, information tracking changes to transactional memory 212, due to execution of the operations of the transaction, is stored in TM logs 214. The information stored is such that it enables transactional memory system 210 to reverse the memory changes if the transaction cannot be completed. In this manner, the information stored in TM logs 214 is used by transactional memory system 210 to reverse or unwind any memory changes made due to execution of operations of an incomplete transaction.
For example, for a transaction that comprises an operation that writes data to a memory location in transactional memory 212, information may be stored in a TM log 214 related to the operation and the memory change caused by the operation. For example, the information logged to a TM log 214 by transactional memory system 210 may include information identifying the particular operation, the data written by the operation or the changes to the data at the memory location resulting from the particular operation, the memory location in transactional memory 212 where the data was written, and the like. If, for some reason, the transaction could not be completed, transactional memory system 210 then uses the information stored in TM log 214 for the transaction to reverse the changes made by the write operation and restore the state of transactional memory 212 to a state prior to the execution of any operation in the transaction, as if the transaction was never executed. For an incomplete transaction, the TM log information is thus used to rewind or unwind the transactional memory changes made by any executed operations of an incomplete transaction. The memory changes made by operations of an incomplete transaction are not committed to transactional memory 212. The memory changes are finalized or committed to memory only after the transaction is completed. TM logs 214 themselves may be stored in transactional memory 212 or in some other memory in or accessible to transactional memory system 210.
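For illustration only, the following minimal, single-threaded C sketch models an undo-style TM log of the kind described above: the old value at each written location is recorded before the write, and an incomplete transaction is unwound by replaying the log in reverse. The names tm_log_entry, tm_write, tm_rollback, and tm_commit are hypothetical and do not correspond to an actual libitm interface.

    #include <stddef.h>
    #include <stdint.h>

    struct tm_log_entry {
        uint32_t *addr;     /* location in transactional memory that was written */
        uint32_t  old_val;  /* value held before the write, kept for rollback    */
    };

    static struct tm_log_entry tm_log[256];  /* the TM log (fixed size for brevity) */
    static size_t tm_log_len;

    /* Record the current value at addr, then perform the write. */
    static void tm_write(uint32_t *addr, uint32_t val)
    {
        tm_log[tm_log_len++] = (struct tm_log_entry){ addr, *addr };
        *addr = val;
    }

    /* Unwind an incomplete transaction: restore old values in reverse order. */
    static void tm_rollback(void)
    {
        while (tm_log_len > 0) {
            struct tm_log_entry *e = &tm_log[--tm_log_len];
            *e->addr = e->old_val;
        }
    }

    /* A completed transaction simply discards its log entries. */
    static void tm_commit(void)
    {
        tm_log_len = 0;
    }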
The operations that make up a transaction are generally preconfigured. In one embodiment, a system programmer may indicate what operations or portions of code constitute a transaction. A piece of code may comprise one or more different transactions. The number of operations in one transaction may be different from the number of operations in another transaction. For example, a programmer may define a set of related sequential operations that impact memory as a transaction.
When code 240 is executed due to execution of process 216 by processing entity 202, operations that are part of a transaction, such as operations 5-15, use the transactional memory APIs provided by TM lib 232, and as a result, transactional memory 212 is used for the memory operations. Operations that are not part of a transaction, such as operations 1-4 and 16-19, use a non-transactional memory library and, as a result, any memory operations resulting from these operations are made to data space 220 within the memory portion allocated for process 216 in memory 206.
For a transaction, the block of code corresponding to operations in the transaction is treated as an atomic unit. In one embodiment, the “transaction start” indicator (or some other indicator) indicates the start of a transaction to first processing entity 202 and the “transaction commit” indicator (or some other indicator) indicates the end of the transaction. The operations in a transaction are executed in a sequential manner by processing entity 202. As each transaction operation is executed, if the operation results in changes to be made to transactional memory 212 (e.g., a write or update operation to transactional memory 212), then, in one embodiment, the information is logged to a TM log 214. In this manner, as each operation in a transaction is executed, any memory changes caused by the executed operation are logged to TM log 214. If all the operations in the transaction (i.e., operations 5-15 in this example) complete, the memory changes made by the operations are committed to transactional memory 212.
For example, while executing code 240, the processing entity 202 may receive an event that causes code execution by processing entity 202 to be interrupted. If the interruption occurs when the transaction comprising operations 5-15 is being executed, then any transactional memory 212 changes made by the already executed operations of the incomplete transaction are reversed, using information stored in TM logs 214. For example, if the interruption occurs when operation 9 has been executed and operation 10 is about to be executed, any changes to transactional memory 212 caused by execution of operations 5-9 are reversed and not committed to transactional memory 212. In this manner, transactional memory system 210 ensures that the state of data stored in transactional memory 212 is as if the incomplete transaction was never executed.
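The same unwind behavior can also be triggered explicitly. As a hedged sketch (again assuming GCC's -fgnu-tm extension), __transaction_cancel below discards every change made since the transaction started; check_status() is a hypothetical, read-only error check, not an actual API.

    /* If the check fails partway through, __transaction_cancel rolls back
     * all writes already made inside the transaction, as if it never ran.
     */
    __attribute__((transaction_pure))
    extern int check_status(void);   /* hypothetical: nonzero on failure */

    static int regs_shadow[16];      /* transactional copy of device state */

    void update_regs(const int *vals)
    {
        __transaction_atomic {
            for (int i = 0; i < 16; i++) {
                regs_shadow[i] = vals[i];
                if (check_status())
                    __transaction_cancel;   /* unwind writes made so far */
            }
        }   /* reaching this point commits all 16 writes atomically */
    }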
In certain embodiments, the transactional memory 212 and the TM logs 214 may be implemented using memory that is persistent across a failover. During a reboot, the power planes associated with the processing entities and the memory may also be rebooted. Rebooting of the power planes may result in loss of the data stored in memory. In certain embodiments, to avoid losing data stored in the transactional memory 212 and the TM logs 214, the library may allocate the memory using persistent memory. In one implementation, persistent memory may be implemented using non-volatile memory, such as flash, that retains data even when not powered. In another implementation, persistent memory may be implemented by keeping the memory powered during the period when the computing device 200 reboots. In some implementations, the transactional memory 212 and the TM logs 214 may be implemented on a separate power plane so that they do not lose power, and consequently data, while other entities in the network device lose power and reboot.
For example, for a transaction that comprises an operation that writes data to a memory location in transactional memory 316, information may be stored in a TM log 314 related to the operation and the memory change caused by the operation. For example, the information logged to a TM log 314 by the transactional memory system may include information identifying the particular operation, the data written by the operation or the changes to the data at the memory location resulting from the particular operation, the memory location in transactional memory where the data was written, and the like. If, for some reason, the transaction could not be completed, the transactional memory system then uses the information stored in the TM log 314 for the transaction to reverse the changes made by the write operation and restore the state of the portion of the transactional memory 316 to a state prior to the execution of any operation in the transaction, as if the transaction was never executed. For an incomplete transaction, the TM log information is thus used to rewind or unwind the transactional memory changes made by any executed operations of the incomplete transaction. The memory changes made by operations of an incomplete transaction are not committed to transactional memory 312. The memory changes are finalized or committed to memory only after the transaction is completed. TM logs 314 themselves may be stored in transactional memory 312 or in some other memory in or accessible to the transactional memory system.
As described earlier, the changes to the portion of the transactional memory 316 are committed to transactional memory 312 at the transaction boundary.
In one implementation, the buffer 418 may be implemented using a processor cache.
In one implementation of a cache-based transactional memory system, all memory write operations during the execution of the transaction to the memory region reserved as transactional memory 412 may be written to the processor cache instead of to the transactional memory 412. These write operations may also be tagged as transactional memory stores in the cache. In one implementation, during the execution of the transaction, the stores tagged as transactional memory stores are preserved in the caches and protected from evictions (i.e., being pushed out to the memory) until the completion of the transaction.
At the completion of the execution of the transaction, all the stores targeted to the transactional memory 412 are committed to the memory 416. In the above example implementation, committing the memory operations may refer to evicting all the transactional memory writes stored in the caches out to the memory 416.
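One concrete hardware mechanism with this cache-buffered behavior is Intel's Restricted Transactional Memory (RTM). The hedged C sketch below (compile with gcc -mrtm) is offered only as an illustration of the concept, not as the implementation described herein; a production version would also need to coordinate the transactional path with the fallback lock.

    #include <immintrin.h>
    #include <pthread.h>

    static pthread_mutex_t fallback = PTHREAD_MUTEX_INITIALIZER;
    static int shared[64];

    void buffered_update(int idx, int val)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            shared[idx] = val;   /* held in cache; not yet visible to others  */
            _xend();             /* commit: buffered stores become visible at
                                    once                                      */
        } else {
            /* Aborted (e.g., a tagged line was evicted): fall back to a lock.
             * A production version must also check this lock inside the
             * transaction; that coordination is omitted here for brevity. */
            pthread_mutex_lock(&fallback);
            shared[idx] = val;
            pthread_mutex_unlock(&fallback);
        }
    }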
In certain embodiments, each process may request access to the hardware component 532 through a transactional memory system 210. For each process, the operations for accessing the hardware component 532 may be initiated from within a transaction associated with the process. For example, the first process 502 may initiate a first transaction 508 with read and write operations to a first non-transactional memory region 528 associated with the hardware component 532, a second process 504 may initiate a second transaction 510 with read and write operations to a second non-transactional memory region 530 associated with the hardware component 532, and a third process 506 may initiate a third transaction 512 with read and write operations again to the first non-transactional memory region 528 associated with the hardware component 532.
In certain implementations, the various processes may initiate access to the hardware component 532 from inside a transaction in a variety of ways. For example, the developer of the process may, at the time of development, programmatically initiate a transaction before requesting access to the hardware component 532 and complete the transaction after executing all of the operations targeted to the hardware component 532. In certain other implementations, the operating environment, such as the transactional memory system 210, may provide the mechanisms for transactionally executing access requests to the hardware components. For example, the process may request to write to a network card in the device 500 using a networking API supported by the library 228. In one implementation, for writing data to the network card, the process may call the API with a pointer to a temporary buffer in memory holding the data. The size of the buffer may vary and may require multiple writes to the network card. In one implementation, the TM lib 232 is invoked by the networking API for writing data to the network card. In one implementation, the TM lib 232 may embed the request for access to the network card within a transaction.
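As a hedged sketch of the pattern just described, the C fragment below wraps a chunked, multi-write device request inside a single transaction so that all of the writes commit together or not at all. The buffer net_card_buf, the chunk size, and the net_write() name are hypothetical placeholders, not an actual library interface.

    /* Assumes GCC's -fgnu-tm extension. Each chunked write lands in the
     * transactional region; nothing reaches the card until the commit point.
     */
    #include <stddef.h>

    #define CHUNK 64
    static char net_card_buf[4096];  /* transactional region mapped to the card */

    void net_write(const char *data, size_t len)
    {
        __transaction_atomic {
            for (size_t off = 0; off < len; off += CHUNK) {
                size_t n = (len - off < CHUNK) ? len - off : CHUNK;
                for (size_t i = 0; i < n; i++)        /* one chunked write */
                    net_card_buf[off + i] = data[off + i];
            }
        }   /* commit: the transactional memory system pushes the data out */
    }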
The memory operations executing from the first transaction 508 and the second transaction 510 are completed in transactional memory space (TM1 522 and TM2 524, respectively) managed by the transactional memory system 210. Any modifications to the transactional memory during the execution of a transaction are committed to the non-transactional memory (528 and 530) associated with the hardware component 532 only upon completion of the respective transaction. For instance, the modifications to TM1 522 by the first transaction 508 are committed to the first non-transactional memory region/address 528 associated with the hardware component only upon completion of the first transaction 508. Since the memory operations executing within each transaction are not committed to the non-transactional memory associated with the hardware component 532 until completion of the respective transactions, both transactions can execute simultaneously, completing memory operations targeted to the hardware component.
Completion of the memory operation, as part of the transaction, may refer to the completion of the execution of the memory operation by the processing entity, allowing the forward progress of the subsequent instructions in the transaction. At the completion of all the operations in the transaction, the changes to the transactional memory are committed to memory. The committing of the changes to non-transactional memory 526 may result in the changes to the respective addresses in memory being pushed out to the hardware component 532. In one implementation, the memory controller (not shown) may translate the memory operations to the respective memory sections reserved for the hardware component 532 to I/O writes to the hardware component 532. In certain embodiments, the transactional memory system 210 may be further configured to commit only one transaction at a time, or commit only one transaction associated with a non-transactional memory region at one time to avoid collisions between the memory operations associated with the same hardware component 532 from different transactions.
The transactional memory 212 and the non-transactional memory 526 may be implemented using a variety of techniques without departing from the scope of the invention.
In an implementation of the transactional memory system 210 similar to the cache-based implementation discussed above, a separate non-transactional memory may not be needed. Portions of the transactional memory 212 may be directly mapped to the hardware component 532. As the transactions execute on the processing entities, the changes targeted to the memory mapped to the hardware component 532 are temporarily stored in processing buffers, such as caches, and are not observable in the transactional memory (622 and 624). Upon completion of the transactions, the respective changes to the memory, stored in the buffers, are committed to the transactional memory, i.e., pushed out from the buffers and written to the transactional memory. At the completion of the transaction, when the memory operations are committed to the transactional memory, in certain implementations, the memory controller may convert the memory operations to I/O operations directed to the hardware component 532.
At step 702, components of the device may receive a request to commence execution of a transaction with memory operations targeted to a hardware component. At step 704, components of the device may check if the memory operations from the transaction targeted for the hardware component overlap with memory operations of one or more currently executing transactions for the same memory addresses. In certain implementations, the device may check for overlap in specific regions or a range of addresses rather than just specific addresses.
At step 704, if the request for execution of the transaction has an overlap in memory operations with currently executing transactions, then components of the device, at step 706, may block the execution of the transaction until completion of the currently executing transactions with overlapping memory operations. Once the currently executing transaction with the overlapping memory operations completes and commits the changes to memory, the request for execution of the transaction may be unblocked. Components of the device may again request execution of the transaction at step 702.
On the other hand, if, at step 704, components of the device do not find any overlap in memory operations with currently executing transactions, then, at step 708, components of the device may perform operations from the transaction including memory operations using the transactional memory system 210. According to certain implementations of the device, the changes resulting from executing of the memory operations from the transaction are not observable at the memory mapped to the hardware component during the execution of the transaction.
At step 710, the operations of the transaction are completed. At step 712, upon completion of the transaction, the changes to the memory from the execution of the memory operations from the transaction are committed to the memory mapped to the hardware component. In addition to committing the changes to memory, at step 712, components of the device may indicate to the synchronizer 514 in the transactional memory system that any blocked transactions waiting to access the same memory addresses as the completed transaction may be unblocked.
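For illustration only, the following C sketch models the flow of steps 702-712 under stated assumptions: a fixed-size table of active address ranges stands in for the synchronizer 514, and overlapping transactions block on a condition variable until a conflicting transaction commits. All names are hypothetical.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_TX 16

    struct range { uintptr_t lo, hi; bool active; };

    static struct range active_tx[MAX_TX];     /* ranges of running transactions */
    static pthread_mutex_t sync_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  tx_done   = PTHREAD_COND_INITIALIZER;

    /* Step 704: does [lo, hi] overlap any currently executing transaction? */
    static bool overlaps(uintptr_t lo, uintptr_t hi)
    {
        for (int i = 0; i < MAX_TX; i++)
            if (active_tx[i].active &&
                lo <= active_tx[i].hi && active_tx[i].lo <= hi)
                return true;
        return false;
    }

    void run_transaction(uintptr_t lo, uintptr_t hi, void (*body)(void))
    {
        pthread_mutex_lock(&sync_lock);
        while (overlaps(lo, hi))               /* steps 704-706: block on overlap */
            pthread_cond_wait(&tx_done, &sync_lock);

        int slot = 0;                          /* claim a free slot; capacity     */
        while (active_tx[slot].active) slot++; /* checks omitted for brevity      */
        active_tx[slot] = (struct range){ lo, hi, true };
        pthread_mutex_unlock(&sync_lock);

        body();                                /* steps 708-710: execute and
                                                  complete the transaction        */

        pthread_mutex_lock(&sync_lock);
        active_tx[slot].active = false;        /* step 712: commit is done...     */
        pthread_cond_broadcast(&tx_done);      /* ...unblock waiting transactions */
        pthread_mutex_unlock(&sync_lock);
    }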
It should be appreciated that the specific steps illustrated above provide a particular method according to certain embodiments; other sequences of steps, or additional or fewer steps, may be performed in alternative embodiments.
At step 830, the transactional memory system 810 may initiate ioctl() calls to the driver 814 operating at the kernel privilege level to communicate with the hardware component 818. At step 832, the device driver 814 may update the shadow memory 816 instead of directly writing to the controller 820 of the hardware component 818. In one implementation, the shadow memory 816 may be managed by the transactional memory system 810.
At step 834, components of the device may repeatedly service several ioctl() function calls by repeating steps 830 and 832 for each ioctl() call. In certain embodiments, the memory operations performed by each invocation of the ioctl() call may be performed and buffered in the shadow memory 816. At step 836, the transaction managed by the transactional memory system 810 may be completed and all changes from the transaction may be reflected in the shadow memory 816. At step 838, components of the device may push the changes from the shadow memory 816 to the controller 820 of the hardware component 818.
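The sketch below illustrates the shadow-memory pattern of steps 830-838 in C, under stated assumptions: driver writes land in a RAM-resident shadow copy, and only a completed transaction is pushed to the device controller. The register count and the shadow_write() and shadow_commit() names are hypothetical.

    #include <stdint.h>

    #define NREGS 32
    static uint32_t shadow[NREGS];              /* shadow memory 816 (in RAM)   */
    static volatile uint32_t *controller_regs;  /* mapped controller 820
                                                   (assumed mapped elsewhere)   */

    /* Steps 830-834: each ioctl()-driven write updates the shadow copy only. */
    void shadow_write(unsigned reg, uint32_t val)
    {
        shadow[reg] = val;
    }

    /* Step 838: on transaction completion, push the shadow state to the
     * device controller in one pass. */
    void shadow_commit(void)
    {
        for (unsigned i = 0; i < NREGS; i++)
            controller_regs[i] = shadow[i];
    }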
The techniques described above may be embodied in a variety of devices, such as a network device 900 described below.
Ports 902 represent the I/O plane for network device 900. Network device 900 is configured to receive and forward data using ports 902. A port within ports 902 may be classified as an input port or an output port depending upon whether network device 900 receives or transmits a data packet using the port. A port over which a data packet is received by network device 900 is referred to as an input port. A port used for communicating or forwarding a data packet from network device 900 is referred to as an output port. A particular port may function both as an input port and an output port. A port may be connected by a link or interface to a neighboring network device or network. Ports 902 may be capable of receiving and/or transmitting different types of data traffic at different speeds including 1 Gigabit/sec, 10 Gigabits/sec, or more. In some embodiments, multiple ports of network device 900 may be logically grouped into one or more trunks.
Upon receiving a data packet via an input port, network device 900 is configured to determine an output port for the packet for transmitting the data packet from the network device to another neighboring network device or network. Within network device 900, the packet is forwarded from the input port to the determined output port and transmitted from network device 900 using the output port. In one embodiment, forwarding of packets from an input port to an output port is performed by one or more line cards 904. Line cards 904 represent the data forwarding plane of network device 900. Each line card 904 may comprise one or more packet processing entities 908 that are programmed to perform forwarding of data packets from an input port to an output port. A packet processing entity on a line card may also be referred to as a line processing entity. Each packet processing entity 908 may have associated memories to facilitate the packet forwarding process.
Since processing performed by a packet processing entity 908 needs to be performed at a high packet rate in a deterministic manner, packet processing entity 908 is generally a dedicated hardware device configured to perform the processing. In one embodiment, packet processing entity 908 is a programmable logic device such as a field programmable gate array (FPGA). Packet processing entity 908 may also be an ASIC.
Management card 906 is configured to perform management and control functions for network device 900 and thus represents the management plane for network device 900. In one embodiment, management card 906 is communicatively coupled to line cards 904 and includes software and hardware for controlling various operations performed by the line cards. In one embodiment, a single management card 906 may be used for all the line cards 904 in network device 900. In alternative embodiments, more than one management card may be used, with each management card controlling one or more line cards.
A management card 906 may comprise a processing entity 914 (also referred to as a management processing entity) that is configured to perform the functions of management card 906, and an associated memory 916.
In one embodiment, the functions performed by management card processing entity 914 include maintaining a routing table, creating associations between routes in the routing table and next-hop information, updating the routing table and associated next-hop information responsive to changes in the network environment, and other functions. In one embodiment, management processing entity 914 is configured to program the packet processing entities and associated memories of line cards 904 based upon the routing table and associated next-hop information. Programming the packet processing entities and their associated memories enables the packet processing entities to perform data packet forwarding in hardware. As part of the programming of a line card packet processing entity and its associated memories, management processing entity 914 is configured to download routes and associated next-hop information to the line card and program the packet processing entity and associated memories. Updates to the next-hop information are also downloaded to the line cards to enable the packet processing entities on the line cards to forward packets using the updated information.
The present application is a non-provisional of and claims the benefit and priority under 35 U.S.C. 119(e) of (1) U.S. Provisional Application No. 61/845,934, filed Jul. 12, 2013, entitled TRANSACTIONAL MEMORY LAYER, and (2) U.S. Provisional Application No. 61/864,371, filed Aug. 9, 2013, entitled TRANSACTIONAL MANAGEMENT LAYER. The entire contents of the 61/845,934 and 61/864,371 applications are incorporated herein by reference for all purposes.
Number | Date | Country
--- | --- | ---
61/845,934 | Jul. 12, 2013 | US
61/864,371 | Aug. 9, 2013 | US