Routing platforms include a motherboard having a host processor and various slave devices such as Digital Signal Processors (DSPs), Microprocessors, Application Specific Integrated Circuits (ASICs), and Field Programmable Gate Arrays (FPGAs). Also, many motherboards include a slot for holding a Packet Voice Data Module (PVDM).
Communication between the host processor and the slave devices is generally accomplished utilizing specialized interfaces for each device: some devices have proprietary interfaces, while others have synchronous or asynchronous interfaces. Additionally, direct communication between slaves without host processor intervention has not been available.
Another problem has been facilitating host processor transfers to large memories controlled by the slave devices. It is not practical for the host processor to map each of these slave spaces.
Accordingly, a generic bus system providing efficient communication between the host processor and slave modules, efficient memory usage, and inter-slave communication is required.
In a first embodiment of the invention, a new protocol and interface specification allows for transactions with existing and future slave devices. The protocol and interface specification allows for interaction with complex slave devices such as modems, CPUs, Microcontrollers, etc.
In another embodiment of the invention, a DMA engine provides a Master with the capability of accessing a slave using either a direct access method or an indirect access method.
In another embodiment of the invention, all data is transferred between the DMA engine and mailbox registers on the slave utilizing a PVDM generic bus protocol.
In another embodiment of the invention, the DMA engine provides the Master the capability of performing single word accesses to the addressable region of a slave device.
In another embodiment of the invention, direct communication between slave devices is made available by the interface.
In another embodiment of the invention, during startup the DMA engine negotiates with the slaves to determine a desired operating mode of communication including bus width and asynchronous/synchronous mode operation.
Other features and advantages of the invention will be apparent in view of the following detailed description and appended drawings.
Reference will now be made in detail to various embodiments of the invention. Examples of these embodiments are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that it is not intended to limit the invention to any embodiment. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. However, the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
In one embodiment of the invention, referred to below as a PVDM-n Interface that defines a Generic Bus Protocol (PGBP), a generic parallel, n-bit wide data path communications bus is defined to allow a number of major Slave devices (DSPs, Microprocessors, Application Specific ICs/FPGAs) to interface with a generic PVDM module. The bus itself has no parity or CRC hardware data integrity checking, although CRC/parity extensions may be provided in the messages. A higher-level protocol allows multiple master devices (Host Processor, etc.) to interface directly with multiple PVDM modules through a DMA (Direct Memory Access) engine.
The embodiment being described provides the following features:
Each of these features will be described in detail below.
The PVDM-n Interface provides the host processor (Master device) a method of sending data to and receiving data from slave device(s) on the PVDM modules. This interface appears to the host as a contiguous block of memory, and all address translation and master device selection is handled by the DMA engine. In the following, transfers from the DMA engine to a slave are termed Egress transfers, and transfers from a slave to the DMA engine are termed Ingress transfers.
The following Table describes the pin functions of the PGBP interface:
As depicted in
Thus, for the PVDM interface with two slaves, 2 SLAVE_SELECT lines are required to select either S1A or S1B. Other slave devices can be selected utilizing a single SLAVE_SELECTn line.
Each slave interfaces to the PGBP through a set of mailbox registers including a Slave RX/TX Status Register, a Slave Egress MSI (Message Signal Interrupts) Register, Slave Egress Mailbox Registers, Slave Ingress Mailbox Registers, and Slave Indirect Access Address Registers. Each of these registers will be described in more detail below.
The master device can communicate with the Slave Device using 32 Mailbox Registers (expandable depending on address bit availability). These registers provide the capability of fast transactions into the Memory Space of the Slave Device and also allow for reading/writing to a larger Slave Memory Space through the Address Mailbox Register. To allow support for current modules that only provide 4-bit address support, the fifth address bit is obtained from the MAST_RDWR signal. This provides the slave with 16 write-only and 16 read-only registers. Any register that needs to be read/write is shadowed internally within the Slave.
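By way of illustration, the following C sketch shows how the fifth address bit might be composed. The macro and function names are hypothetical and do not appear in the specification; they are offered only as a minimal sketch of the addressing scheme described above.

```c
#include <stdint.h>

/* Hypothetical sketch of the slave register addressing scheme: a 4-bit
 * address bus plus the MAST_RDWR signal yields 32 register selections,
 * 16 write-only and 16 read-only. Names are illustrative only. */
#define PGBP_ADDR_BITS  4u
#define PGBP_NUM_REGS   (1u << (PGBP_ADDR_BITS + 1))   /* 32 registers */

/* Compose the effective 5-bit register index; the read/write strobe
 * supplies the missing fifth address bit. */
static inline uint8_t pgbp_reg_index(uint8_t addr4, int mast_rdwr)
{
    return (uint8_t)(((mast_rdwr & 1) << PGBP_ADDR_BITS) | (addr4 & 0xFu));
}
```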
Messages/Frames are transferred to/from the slave devices by the DMA engine using the mailbox scheme.
The following are detailed descriptions and memory maps of the registers required in a slave device to interface with the PGBP of this embodiment:
The Slave Egress MSI Register (EMR) is used by the master device to interrupt the slave device, which then performs the appropriate action based on the asserted fields.
The Slave RX/TX Status Register (XSR) reports the status of various conditions within the Slave Device. It is a READ ONLY register from the perspective of the DMA engine/Master device.
The Slave Ingress Message Size Register (IMS) is not required in this embodiment. However, it may be defined if the DMA engine/Slave requires implementation of this register.
The Slave Ingress/Egress Message Mailbox Registers (MM 0-3) allow the Slave Devices to support up to 64-bit operation. If the Slave is identified for 32-bit operation, registers 2 and 3 are bypassed.
Slave Ingress/Egress Message Mailbox Register 0
Slave Ingress/Egress Message Mailbox Register 1
Slave Ingress/Egress Message Mailbox Register 2
Slave Ingress/Egress Message Mailbox Register 3
The Slave Indirect Access Address Registers (IA 0-3) allow the Slave Devices to support up to 64-bit addressed operation. If the Slave is identified for 32-bit address operation, registers 2 and 3 are bypassed.
Slave Indirect Access Address Register 0
Slave Indirect Access Address Register 1
Slave Indirect Access Address Register 2
Slave Indirect Access Address Register 3
The DMA engine provides the master with the capability of accessing the slave using two methods: the direct method (also referred to as DMA access) and the indirect method. The direct method requires the DMA Engine to move complete packet data between the addressable memory region of the Master and a message passing interface/addressable memory region on the slave device. The indirect method requires the Master device to use the DMA engine to perform single word accesses (read/write) in the addressable region of the slave device.
The techniques for performing direct egress and ingress accesses will be described first. In a direct egress transaction, the DMA engine moves data from the Master Device to the Slave Device on behalf of the master; in a direct ingress transaction, the DMA engine moves data from the Slave Device to the Master Device on behalf of the master. For indirect transactions, the message is replaced with a single word.
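The division of labor between the two methods can be summarized with a hypothetical C interface to the DMA engine. These function names do not appear in the specification; they are fleshed out in the sketches that follow.

```c
#include <stddef.h>
#include <stdint.h>

/* Direct (DMA) method: complete messages move between the master's
 * memory and the slave's mailbox interface. */
int  pgbp_egress_message(int slave, const uint16_t *msg, size_t nwords,
                         int nmailboxes);
long pgbp_ingress_message(int slave, uint16_t *buf, size_t max_words,
                          int nmailboxes);

/* Indirect method: single-word accesses inside the slave's own
 * addressable region, staged through the mailbox registers. */
int pgbp_indirect_write64(int slave, uint64_t addr, uint64_t data);
int pgbp_indirect_read64(int slave, uint64_t addr, uint64_t *data);
```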
Egress Message Transfer—The master requests/programs the DMA engine to transfer a message to the slave device. The DMA engine polls the Slave RX/TX Status Register (Egress Space Available bits) and determines whether the slave device is ready to accept any new message. In this embodiment, the message size is application specific and is typically set to 1500 bytes. If there is enough space and the slave is ready, the DMA engine moves the message from the Master Device's addressable memory region to the Slave Device's Mailbox Registers. The transfer is performed using the physical interface operation defined in detail below.
The message payload movement is done to a set of two to four (programmable) 16-bit mailbox registers called Egress Message Mailbox Registers[0 . . . 3]. The DMA engine writes data to these mailbox registers in cyclic order, cycling through registers 0 to 1 or 0 to 3 depending on the programmed value, i.e., [0, 1, 2, 3, 0, 1, 2, 3, . . . ] or [0, 1, 0, 1, 0, 1, . . . ]. During the data movement from the master device to the slave device, if the slave is not ready to receive further data, it can assert the SLAVE_WAIT signal to indicate to the DMA engine to wait before continuing the data transfer.
Upon completion of the payload movement, the DMA engine updates the Frame Written bit in the Egress MSI Register to inform the Slave device of the completion of the message transfer. The DMA engine also updates the ID of the master that moved the message to the slave and asserts the Egress Interrupt bit in the Egress MSI Register. This requests the Slave to perform internal action on the message just transferred.
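For concreteness, a minimal C sketch of the egress sequence follows. The register offsets, bit positions, and accessor functions (slave_read_reg, slave_write_reg) are hypothetical; SLAVE_WAIT flow control is assumed to be handled by the bus hardware.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory-mapped accessors for the slave's mailbox interface. */
extern uint16_t slave_read_reg(int slave, int reg);
extern void     slave_write_reg(int slave, int reg, uint16_t val);

#define XSR            0x00        /* Slave RX/TX Status Register (illustrative) */
#define XSR_EGR_SPACE  (1u << 0)   /* Egress Space Available bit (illustrative)  */
#define EMR            0x01        /* Slave Egress MSI Register (illustrative)   */
#define EMR_FRAME_WR   (1u << 0)   /* Frame Written bit (illustrative)           */
#define EMR_EGR_INT    (1u << 1)   /* Egress Interrupt bit (illustrative)        */
#define EMM_BASE       0x10        /* Egress Message Mailbox Registers[0..3]     */

/* Move one egress message into the slave's mailbox registers, cycling
 * through the programmed number of mailboxes (2 or 4). The Master ID
 * update is omitted for brevity. */
int pgbp_egress_message(int slave, const uint16_t *msg, size_t nwords,
                        int nmailboxes)
{
    if (!(slave_read_reg(slave, XSR) & XSR_EGR_SPACE))
        return -1;                                  /* slave not ready */

    for (size_t i = 0; i < nwords; i++)  /* [0,1,2,3,0,...] or [0,1,0,...] */
        slave_write_reg(slave, EMM_BASE + (int)(i % (size_t)nmailboxes), msg[i]);

    /* Inform the slave: Frame Written plus Egress Interrupt. */
    slave_write_reg(slave, EMR, EMR_FRAME_WR | EMR_EGR_INT);
    return 0;
}
```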
Ingress Message Transfer—The master programs the DMA engine to transfer a message from the slave device whenever the slave device has a message to send. The DMA engine continuously polls the Ingress Message Available bit in the Slave RX/TX Status Register. If this bit is set, it informs the DMA engine to move the message to the master indicated by the Ingress Master ID bits (in Slave RX/TX Status Register). The DMA engine now initiates the message transfer from the slave device to master device using the physical interface operation defined below.
The payload movement is done from a set of two to four (programmable) 16-bit mailbox registers called Ingress Message Mailbox Registers[0 . . . 3]. The DMA engine reads data from these mailbox registers in cyclic order, cycling through registers 0 to 1 or 0 to 3 depending on the programmed value, i.e., [0, 1, 2, 3, 0, 1, 2, 3, . . . ] or [0, 1, 0, 1, 0, 1, . . . ]. If, during the message movement, the slave device is not ready to transfer further data, it can assert the SLAVE_WAIT signal to indicate to the DMA engine to wait before continuing the data transfer. The DMA engine is required to always accept data once it initiates the message transfer.
In this embodiment, the first short-word in the payload from the Slave may indicate the size of the message to be transferred from the slave. However, this is not required for the protocol to work, and the ING_MESSAGE_SIZE_REG may be used by the Slave Device to indicate the transaction size.
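A companion sketch of the ingress sequence follows, under the same hypothetical register names and assuming the first-short-word size option described above.

```c
#include <stddef.h>
#include <stdint.h>

extern uint16_t slave_read_reg(int slave, int reg);  /* hypothetical accessor */

#define XSR             0x00       /* Slave RX/TX Status Register (illustrative)   */
#define XSR_ING_AVAIL   (1u << 1)  /* Ingress Message Available bit (illustrative) */
#define IMM_BASE        0x10       /* Ingress Message Mailbox Registers[0..3]      */

/* Drain one message from the slave's ingress mailboxes, assuming the
 * first short-word carries the message size. Returns the number of
 * short-words read, 0 if no message is pending, or -1 on overflow. */
long pgbp_ingress_message(int slave, uint16_t *buf, size_t max_words,
                          int nmailboxes)
{
    if (!(slave_read_reg(slave, XSR) & XSR_ING_AVAIL))
        return 0;                                    /* nothing pending */

    size_t nwords = slave_read_reg(slave, IMM_BASE); /* size short-word */
    if (nwords > max_words)
        return -1;

    for (size_t i = 0; i < nwords; i++)              /* continue the cyclic order */
        buf[i] = slave_read_reg(slave,
                                IMM_BASE + (int)((i + 1) % (size_t)nmailboxes));
    return (long)nwords;
}
```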
As depicted in
Indirect Slave Memory Write—During an indirect slave memory write operation, the master reads the Indirect Write Ready bit held in the Slave RX/TX Status Register. This bit informs the master that it can perform an indirect slave memory write operation (using the DMA engine). If the Slave is ready for a transfer, the Master writes a 64-bit/32-bit address to the Indirect Access Address Registers and writes the 64-bit/32-bit data to the Egress Message Mailbox Registers[0 . . . 3]. Upon completion of the write to the mailbox registers, the master then writes the Indirect Write Interrupt bit in the Egress MSI Register to request the Slave to initiate the indirect write to the requested address within its memory region. Upon the write to the Egress MSI Register, the Slave Device clears the Indirect Write Ready bit to indicate that the Slave is performing the write. Once the write is completed, the Slave re-asserts the Indirect Write Ready bit in the Slave RX/TX Status Register.
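A minimal sketch of the indirect write sequence follows, again with hypothetical register offsets, bit positions, and accessors.

```c
#include <stdint.h>

extern uint16_t slave_read_reg(int slave, int reg);   /* hypothetical accessors */
extern void     slave_write_reg(int slave, int reg, uint16_t val);

#define XSR             0x00       /* Slave RX/TX Status Register (illustrative)  */
#define XSR_IWR_READY   (1u << 2)  /* Indirect Write Ready bit (illustrative)     */
#define EMR             0x01       /* Slave Egress MSI Register (illustrative)    */
#define EMR_IWR_INT     (1u << 2)  /* Indirect Write Interrupt bit (illustrative) */
#define IA_BASE         0x08       /* Indirect Access Address Registers[0..3]     */
#define EMM_BASE        0x10       /* Egress Message Mailbox Registers[0..3]      */

/* Write one 64-bit word to an address inside the slave's memory region. */
int pgbp_indirect_write64(int slave, uint64_t addr, uint64_t data)
{
    if (!(slave_read_reg(slave, XSR) & XSR_IWR_READY))
        return -1;                                  /* slave busy */

    for (int i = 0; i < 4; i++) {                   /* stage address and data */
        slave_write_reg(slave, IA_BASE + i,  (uint16_t)(addr >> (16 * i)));
        slave_write_reg(slave, EMM_BASE + i, (uint16_t)(data >> (16 * i)));
    }
    slave_write_reg(slave, EMR, EMR_IWR_INT);       /* request the write */

    /* Slave clears Indirect Write Ready, then re-asserts it when done. */
    while (!(slave_read_reg(slave, XSR) & XSR_IWR_READY))
        ;                                           /* spin (could time out) */
    return 0;
}
```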
Indirect Slave Memory Read—During an indirect slave memory read operation, the master reads the Indirect Read Ready bit held in the Slave RX/TX Status Register. This bit informs the master that it can perform an indirect slave memory read operation. If the Slave is ready for a transfer, the Master writes a 64-bit/32-bit address to the Indirect Access Address Registers. Upon completion of the write to the address registers, the master then writes the Indirect Read Interrupt bit in the Egress MSI Register to request the slave to initiate the indirect read from the requested address within its memory region. Upon the write to the Egress MSI Register, the slave clears the Indirect Read Ready bit to indicate that the Slave is performing the read from its internal memory map and loading the values into the Ingress Message Mailbox Registers[0 . . . 3]. Once the read is completed, the Slave re-asserts the Indirect Read Ready bit in the Slave RX/TX Status Register. The master polls for this bit to be re-asserted; once it is, the master can complete the read of the Ingress Message Mailbox Registers[0 . . . 3].
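The companion indirect read sequence, under the same assumptions:

```c
#include <stdint.h>

extern uint16_t slave_read_reg(int slave, int reg);   /* hypothetical accessors */
extern void     slave_write_reg(int slave, int reg, uint16_t val);

#define XSR             0x00       /* Slave RX/TX Status Register (illustrative) */
#define XSR_IRD_READY   (1u << 3)  /* Indirect Read Ready bit (illustrative)     */
#define EMR             0x01       /* Slave Egress MSI Register (illustrative)   */
#define EMR_IRD_INT     (1u << 3)  /* Indirect Read Interrupt bit (illustrative) */
#define IA_BASE         0x08       /* Indirect Access Address Registers[0..3]    */
#define IMM_BASE        0x10       /* Ingress Message Mailbox Registers[0..3]    */

/* Read one 64-bit word from an address inside the slave's memory region. */
int pgbp_indirect_read64(int slave, uint64_t addr, uint64_t *data)
{
    if (!(slave_read_reg(slave, XSR) & XSR_IRD_READY))
        return -1;                                  /* slave busy */

    for (int i = 0; i < 4; i++)                     /* stage the address */
        slave_write_reg(slave, IA_BASE + i, (uint16_t)(addr >> (16 * i)));
    slave_write_reg(slave, EMR, EMR_IRD_INT);       /* request the read */

    while (!(slave_read_reg(slave, XSR) & XSR_IRD_READY))
        ;                                           /* wait for re-assertion */

    *data = 0;
    for (int i = 0; i < 4; i++)                     /* collect the result */
        *data |= (uint64_t)slave_read_reg(slave, IMM_BASE + i) << (16 * i);
    return 0;
}
```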
Thus, as depicted in
The present embodiment provides for direct communication between slaves without host processor involvement. For example, in
The DMA Engine provides support for Synchronous operation and/or Asynchronous operation of the PGBP Bus. Synchronous operation is supported via Source-Synchronous clock operation or traditional synchronous operation. The PGBP interface allows for very high throughput through the multi-loaded PVDM-II module(s). Synchronous/Source-Synchronous operation allows up to 100 MHz/100 MHz DDR (Double Data Rate) bus operation (depending on system architecture), providing a maximum raw throughput of 1200/TBD Mbps. Asynchronous operation allows interfacing to modules that cannot meet the minimal skew requirements of the synchronous mode. Asynchronous operation supports a lower raw throughput of 300 Mbps (practical throughput is about 75% of the theoretical number).
During the bootstrapping phase, the DMA engine operates in Asynchronous mode to allow handshaking with a low performance Slave device not capable of communicating in the Synchronous mode. This allows the DMA engine/the Host and the Slave to negotiate the desired operation mode: Source Synchronous, Synchronous, and/or Asynchronous. In addition, the DMA engine negotiates the bus interface width. The initial bus width is 8/16 bits (as hard-configured in the DMA engine) but can be increased dynamically by negotiating with the slave device.
This negotiation provides great flexibility and expandability to the PGBP and allows slave devices of different capabilities regarding data transfer speed, types of system clocks, and bus width to be coupled to the PGBP of this embodiment.
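A simplified sketch of the negotiation outcome might look as follows; the enumeration, structure, and function are illustrative only and do not appear in the specification.

```c
#include <stdint.h>

/* Hypothetical negotiation result: operating mode plus bus width. */
enum pgbp_mode { PGBP_ASYNC, PGBP_SYNC, PGBP_SRC_SYNC };

struct pgbp_link {
    enum pgbp_mode mode;      /* negotiated operating mode     */
    uint8_t        bus_width; /* negotiated data width in bits */
};

/* Bootstrap always starts in asynchronous mode at the hard-configured
 * initial width, then upgrades to whatever the slave advertises. */
struct pgbp_link pgbp_negotiate(enum pgbp_mode slave_best_mode,
                                uint8_t slave_max_width,
                                uint8_t initial_width /* 8 or 16 */)
{
    struct pgbp_link link = { PGBP_ASYNC, initial_width };

    link.mode = slave_best_mode;           /* best mode the slave supports */
    if (slave_max_width > link.bus_width)
        link.bus_width = slave_max_width;  /* widen dynamically            */
    return link;
}
```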
Examples of hardware interfaces for implementing the Synchronous and Source Synchronous Interfaces will now be described with reference to
The invention may be implemented as hardware and/or program code, stored on a computer readable medium, that is executed by a digital computer. The computer readable medium may include, among other things, magnetic media, optical media, electro-magnetic fields encoding digital information, and so on.
The invention has now been described with reference to the preferred embodiments. Alternatives and substitutions will now be apparent to persons of skill in the art. For example, the particular number of bus lines will vary according to the requirements of a system. Also, the polling of interrupt bits can be replaced by actively interrupting the DMA engine. Accordingly, it is not intended to limit the invention except as provided by the appended claims.