System core for transferring data between an external device and memory

Information

  • Patent Grant
  • Patent Number
    7,266,620
  • Date Filed
    Wednesday, March 10, 2004
  • Date Issued
    Tuesday, September 4, 2007
Abstract
A system core having an internal memory which transfers data from an external device to the internal memory is described. To this end, the system core includes a processor, a direct memory access (DMA) controller, an instruction memory and a plurality of memories. The instruction memory contains processor instructions and DMA instructions. The DMA controller fetches DMA instructions from the instruction memory. The DMA controller executes the fetched DMA instructions and thus populates the plurality of memories with data from the external device. The processor then operates on the data found in the populated memories.
Description
FIELD OF THE INVENTION

The present invention relates generally to improvements to parallel processing, and more particularly to such processing in the framework of a ManArray architecture and instruction syntax.


BACKGROUND OF THE INVENTION

A wide variety of sequential and parallel processing architectures and instruction sets presently exist. An ongoing need for faster and more efficient processing arrangements has been a driving force for design change in such prior art systems. One response to these needs has been the first implementations of the ManArray architecture. Even this revolutionary architecture faces ongoing demands for constant improvement.


SUMMARY OF THE INVENTION

A system core having an internal memory which transfers data from an external device to the internal memory is described. To this end, the system core includes a processor, a direct memory access (DMA) controller, an instruction memory and an internal memory. The instruction memory contains processor instructions and DMA instructions. The DMA controller fetches DMA instructions from the instruction memory. The DMA controller executes the fetched DMA instructions and thus populates the internal memory with data from the external device. The processor then operates on the data found in the internal memory. By having a DMA controller which can fetch and execute DMA instructions, the present invention advantageously provides a flexible system core, for example, one able to populate internal memory according to a particular pattern. Similarly, the system core has the flexibility to read from internal memory and transfer the contents of internal memory to external memory according to a particular pattern.


These and other features, aspects and advantages of the invention will be apparent to those skilled in the art from the following detailed description taken together with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary ManArray 2×2 iVLIW processor showing the connections of a plurality of processing elements connected in an array topology for implementing the architecture and instruction syntax of the present invention;



FIG. 2 illustrates an exemplary test case generator program in accordance with the present invention;



FIG. 3 illustrates an entry from an instruction-description data structure for a multiply instruction (MPY); and



FIG. 4 illustrates an entry from an MAU-answer set for the MPY instruction.





DETAILED DESCRIPTION

Further details of a presently preferred ManArray core, architecture, and instructions for use in conjunction with the present invention are found in


U.S. patent application Ser. No. 08/885,310 filed Jun. 30, 1997, now U.S. Pat. No. 6,023,753,


U.S. patent application Ser. No. 08/949,122 filed Oct. 10, 1997, now U.S. Pat. No. 6,167,502,


U.S. patent application Ser. No. 09/169,255 filed Oct. 9, 1998, now U.S. Pat. No. 6,343,356,


U.S. patent application Ser. No. 09/169,256 filed Oct. 9, 1998, now U.S. Pat. No. 6,167,501,


U.S. patent application Ser. No. 09/169,072, filed Oct. 9, 1998, now U.S. Pat. No. 6,219,776,


U.S. patent application Ser. No. 09/187,539 filed Nov. 6, 1998, now U.S. Pat. No. 6,151,668,


U.S. patent application Ser. No. 09/205,558 filed Dec. 4, 1998, now U.S. Pat. No. 6,173,389,


U.S. patent application Ser. No. 09/215,081 filed Dec. 18, 1998, now U.S. Pat. No. 6,101,592,


U.S. patent application Ser. No. 09/228,374 filed Jan. 12, 1999 now U.S. Pat. No. 6,216,223,


U.S. patent application Ser. No. 09/238,446 filed Jan. 28, 1999, now U.S. Pat. No. 6,366,999,


U.S. patent application Ser. No. 09/267,570 filed Mar. 12, 1999, now U.S. Pat. No. 6,446,190,


U.S. patent application Ser. No. 09/337,839 filed Jun. 22, 1999,


U.S. patent application Ser. No. 09/350,191 filed Jul. 9, 1999, now U.S. Pat. No. 6,356,994,


U.S. patent application Ser. No. 09/422,015 filed Oct. 21, 1999, now U.S. Pat. No. 6,408,382,


U.S. patent application Ser. No. 09/432,705 filed Nov. 2, 1999 entitled “Methods and Apparatus for Improved Motion Estimation for Video Encoding”,


U.S. patent application Ser. No. 09/471,217 filed Dec. 23, 1999 entitled “Methods and Apparatus for Providing Data Transfer Control”,


U.S. patent application Ser. No. 09/472,372 filed Dec. 23, 1999, now U.S. Pat. No. 6,256,683,


U.S. patent application Ser. No. 09/596,103 filed Jun. 16, 2000, now U.S. Pat. No. 6,397,324,


U.S. patent application Ser. No. 09/598,566 entitled “Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor” filed Jun. 21, 2000, and


U.S. patent application Ser. No. 09/598,567 entitled “Methods and Apparatus for Improved Efficiency in Pipeline Simulation and Emulation” filed Jun. 21, 2000,


U.S. patent application Ser. No. 09/598,564 filed Jun. 21, 2000, now U.S. Pat. No. 6,622,234,


U.S. patent application Ser. No. 09/598,558 entitled “Methods and Apparatus for Providing Manifold Array (ManArray) Program Context Switch with Array Reconfiguration Control” filed Jun. 21, 2000, and


U.S. patent application Ser. No. 09/598,084 filed Jun. 21, 2000, now U.S. Pat. No. 6,654,870, as well as,


Provisional Application Ser. No. 60/113,637 entitled “Methods and Apparatus for Providing Direct Memory Access (DMA) Engine” filed Dec. 23, 1998,


Provisional Application Ser. No. 60/113,555 entitled “Methods and Apparatus Providing Transfer Control” filed Dec. 23, 1998,


Provisional Application Ser. No. 60/139,946 entitled “Methods and Apparatus for Data Dependent Address Operations and Efficient Variable Length Code Decoding in a VLIW Processor” filed Jun. 18, 1999,


Provisional Application Ser. No. 60/140,245 entitled “Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor” filed Jun. 21, 1999, Provisional Application Ser. No. 60/140,163 entitled “Methods and Apparatus for Improved Efficiency in Pipeline Simulation and Emulation” filed Jun. 21, 1999,


Provisional Application Ser. No. 60/140,162 entitled “Methods and Apparatus for Initiating and Re-Synchronizing Multi-Cycle SIMD Instructions” filed Jun. 21, 1999,


Provisional Application Ser. No. 60/140,244 entitled “Methods and Apparatus for Providing One-By-One Manifold Array (1×1 ManArray) Program Context Control” filed Jun. 21, 1999,


Provisional Application Ser. No. 60/140,325 entitled “Methods and Apparatus for Establishing Port Priority Function in a VLIW Processor” filed Jun. 21, 1999,


Provisional Application Ser. No. 60/140,425 entitled “Methods and Apparatus for Parallel Processing Utilizing a Manifold Array (ManArray) Architecture and Instruction Syntax” filed Jun. 22, 1999,


Provisional Application Ser. No. 60/165,337 entitled “Efficient Cosine Transform Implementations on the ManArray Architecture” filed Nov. 12, 1999, and


Provisional Application Ser. No. 60/171,911 entitled “Methods and Apparatus for DMA Loading of Very Long Instruction Word Memory” filed Dec. 23, 1999,


Provisional Application Ser. No. 60/184,668 entitled “Methods and Apparatus for Providing Bit-Reversal and Multicast Functions Utilizing DMA Controller” filed Feb. 24, 2000,


Provisional Application Ser. No. 60/184,529 entitled “Methods and Apparatus for Scalable Array Processor Interrupt Detection and Response” filed Feb. 24, 2000,


Provisional Application Ser. No. 60/184,560 entitled “Methods and Apparatus for Flexible Strength Coprocessing Interface” filed Feb. 24, 2000,


Provisional Application Ser. No. 60/203,629 entitled “Methods and Apparatus for Power Control in a Scalable Array of Processor Elements” filed May 12, 2000, and


Provisional Application Ser. No. 60/212,987 entitled “Methods and Apparatus for Indirect VLIW Memory Allocation” filed Jun. 21, 2000, respectively, all of which are assigned to the assignee of the present invention and incorporated by reference herein in their entirety.


All of the above noted patents and applications, as well as any noted below, are assigned to the assignee of the present invention and incorporated herein in their entirety.


In a presently preferred embodiment of the present invention, a ManArray 2×2 iVLIW single instruction multiple data stream (SIMD) processor 100 shown in FIG. 1 contains a controller sequence processor (SP) combined with processing element-0 (PE0) SP/PE0 101, as described in further detail in U.S. application Ser. No. 09/169,072 entitled “Methods and Apparatus for Dynamically Merging an Array Controller with an Array Processing Element”. Three additional PEs 151, 153, and 155 are also utilized to demonstrate improved parallel array processing with a simple programming model in accordance with the present invention. It is noted that the PEs can also be labeled with their matrix positions as shown in parentheses for PE0 (PE00) 101, PE1 (PE01) 151, PE2 (PE10) 153, and PE3 (PE11) 155. The SP/PE0 101 contains a fetch controller 103 to allow the fetching of short instruction words (SIWs) from a B=32-bit instruction memory 105. The fetch controller 103 provides the typical functions needed in a programmable processor, such as a program counter (PC), branch capability, digital signal processing eventpoint loop operations, support for interrupts, and also provides the instruction memory management control which could include an instruction cache if needed by an application. In addition, the SIW I-Fetch controller 103 dispatches 32-bit SIWs to the other PEs in the system by means of a 32-bit instruction bus 102.


In this exemplary system, common elements are used throughout to simplify the explanation, though actual implementations are not so limited. For example, the execution units 131 in the combined SP/PE0 101 can be separated into a set of execution units optimized for the control function, e.g., fixed-point execution units, and the PE0 as well as the other PEs 151, 153 and 155 can be optimized for a floating point application. For the purposes of this description, it is assumed that the execution units 131 are of the same type in the SP/PE0 and the other PEs. In a similar manner, SP/PE0 and the other PEs use a five-instruction-slot iVLIW architecture which contains a very long instruction word memory (VIM) 109 and an instruction decode and VIM controller function unit 107 which receives instructions as dispatched from the SP/PE0's I-Fetch unit 103 and generates the VIM addresses and control signals 108 required to access the iVLIWs stored in the VIM. These iVLIWs are identified by the letters SLAMD in VIM 109. The loading of the iVLIWs is described in further detail in U.S. patent application Ser. No. 09/187,539 entitled “Methods and Apparatus for Efficient Synchronous MIMD Operations with iVLIW PE-to-PE Communication”. Also contained in the SP/PE0 and the other PEs is a common PE configurable register file 127 which is described in further detail in U.S. patent application Ser. No. 09/169,255 entitled “Methods and Apparatus for Dynamic Instruction Controlled Reconfiguration Register File with Extended Precision”.


Due to the combined nature of the SP/PE0, the data memory interface controller 125 must handle the data processing needs of both the SP controller, with SP data in memory 121, and PE0, with PE0 data in memory 123. The SP/PE0 controller 125 is also the source of the data that is sent over the 32-bit broadcast data bus 126. The other PEs 151, 153, and 155 contain common physical data memory units 123′, 123″, and 123′″ though the data stored in them is generally different as required by the local processing done on each PE. The interface to these PE data memories is also a common design in PEs 1, 2, and 3 and is indicated by PE local memory and data bus interface logic 157, 157′ and 157″. Interconnecting the PEs for data transfer communications is the cluster switch 171 more completely described in U.S. Pat. No. 6,023,753 entitled “Manifold Array Processor”, U.S. application Ser. No. 08/949,122 entitled “Methods and Apparatus for Manifold Array Processing”, and U.S. application Ser. No. 09/169,256 entitled “Methods and Apparatus for ManArray PE-to-PE Switch Control”. The interface to a host processor, other peripheral devices, and/or external memory can be done in many ways. The primary mechanism shown for completeness is contained in a direct memory access (DMA) control unit 181 that provides a scalable ManArray data bus 183 that connects to devices and interface units external to the ManArray core. The DMA control unit 181 provides the data flow and bus arbitration mechanisms needed for these external devices to interface to the ManArray core memories via the multiplexed bus interface represented by line 185. A high-level view of a ManArray Control Bus (MCB) 191 is also shown.


Turning now to specific details of the ManArray architecture and instruction syntax as adapted by the present invention, this approach advantageously provides a variety of benefits. A first benefit, as further described herein, is that the ManArray instruction syntax is regular. Every instruction can be deciphered into up to four parts delimited by periods. The four parts are always in the same order, which lends itself to easy parsing by automated tools. An example for a conditional execution (CE) instruction is shown below:


(CE).(NAME).(PROCESSOR/UNIT).(DATATYPE)


Below is a brief summary of the four parts of a ManArray instruction as described herein:


(1) Every instruction has an instruction name.


(2A) Instructions that support conditional execution forms may have a leading (T. or F.) or . . .


(2B) Arithmetic instructions may set a conditional execution state based on one of four flags (C=carry, N=sign, V=overflow, Z=zero).


(3A) Instructions that can be executed on both an SP and a PE or PEs specify the target processor via (.S or .P) designations. Instructions without an .S or .P designation are SP control instructions.


(3B) Arithmetic instructions always specify which unit or units they execute on (A=ALU, M=MAU, D=DSU).


(3C) Load/Store instructions do not specify a unit (all load instructions begin with the letter ‘L’ and all stores with the letter ‘S’).


(4A) Arithmetic instructions (ALU, MAU, DSU) have data types to specify the number of parallel operations that the instruction performs (e.g., 1, 2, 4 or 8), the size of the data type (D=64 bit doubleword, W=32 bit word, H=16 bit halfword, B=8 bit byte, or FW=32 bit floating point) and optionally the sign of the operands (S=Signed, U=Unsigned).


(4B) Load/Store instructions have single data types (D=doubleword, W=word, H1=high halfword, H0=low halfword, B0=byte0).


The above parts are illustrated for an exemplary instruction below:




[Embedded image: an exemplary instruction annotated with the four syntax parts described above]
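
Because the parts always appear in the same order, a mnemonic can be pulled apart mechanically. The short Tcl sketch below is only a minimal illustration of such parsing, in the spirit of the Tcl database examples later in this description; the procedure name parseMnemonic and the sample mnemonic T.ADD.PA.4H are hypothetical and are not taken from the instruction-set reference.

# Minimal parsing sketch (assumption: parts are separated by periods, with an
# optional leading T or F conditional-execution prefix).
proc parseMnemonic {mnemonic} {
    set parts [split $mnemonic .]
    if {[lindex $parts 0] in {T F}} {
        set ce [lindex $parts 0]
        set parts [lrange $parts 1 end]
    } else {
        set ce unconditional
    }
    # Remaining parts: NAME, PROCESSOR/UNIT, DATATYPE (later parts may be
    # absent for control instructions).
    return [list CE $ce NAME [lindex $parts 0] PROCUNIT [lindex $parts 1] DATATYPE [lindex $parts 2]]
}

# Hypothetical example: parseMnemonic T.ADD.PA.4H
# returns: CE T NAME ADD PROCUNIT PA DATATYPE 4H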


Second, because the instruction set syntax is regular, it is relatively easy to construct a database for the instruction set. The database is organized as instructions, with each instruction record containing entries for conditional execution (CE), target processor (PROCS), unit (UNITS), datatypes (DATATYPES) and the operands needed for each datatype (FORMAT). The example below, using Tcl syntax as further described in J. Ousterhout, Tcl and the Tk Toolkit, Addison-Wesley, ISBN 0-201-63337-X, 1994, compactly represents all 196 variations of the ADD instruction.


The 196 variations come from (CE)*(PROCS)*(UNITS)*(DATATYPES)=7*2*2*7=196. It is noted that the ‘e’ in the CE entry below is for unconditional execution.


set instruction(ADD,CE)        {e t. f. c n v z}
set instruction(ADD,PROCS)     {s p}
set instruction(ADD,UNITS)     {a m}
set instruction(ADD,DATATYPES) {1d 1w 2w 2h 4h 4b 8b}
set instruction(ADD,FORMAT,1d) {RTE RXE RYE}
set instruction(ADD,FORMAT,1w) {RT RX RY}
set instruction(ADD,FORMAT,2w) {RTE RXE RYE}
set instruction(ADD,FORMAT,2h) {RT RX RY}
set instruction(ADD,FORMAT,4h) {RTE RXE RYE}
set instruction(ADD,FORMAT,4b) {RT RX RY}
set instruction(ADD,FORMAT,8b) {RTE RXE RYE}
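
Assuming the ADD entries above have been loaded as written, the 196-variation count can be checked directly by multiplying the lengths of the four lists; the lines below are only a brief illustrative check.

# Sanity check on the variation count: 7 * 2 * 2 * 7 = 196.
set count [expr {[llength $instruction(ADD,CE)] * \
                 [llength $instruction(ADD,PROCS)] * \
                 [llength $instruction(ADD,UNITS)] * \
                 [llength $instruction(ADD,DATATYPES)]}]
puts "ADD variations: $count"   ;# prints: ADD variations: 196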

The example above only demonstrates the instruction syntax. Other entries in each instruction record include the number of cycles the instruction takes to execute (CYCLES), encoding tables for each field in the instruction (ENCODING) and configuration information (CONFIG) for subsetting the instruction set. Configuration information (1×1, 1×2, etc.) can be expressed with evaluations in the database entries:


proc Manta { } {
    # are we generating for Manta?
    return 1
    # are we generating for ManArray?
    # return 0
}


set instruction(MPY,CE) [expr {[Manta] ? {e t. f.} : {e t. f. c n v z}}]
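
Assuming the entry above is loaded as shown, with Manta returning 1, the effect of the configuration switch can be observed directly:

puts $instruction(MPY,CE)   ;# prints: e t. f.
# Changing proc Manta to return 0 would restore the full list: e t. f. c n v z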


Having the instruction set defined with a regular syntax and represented in database form allows developers to create tools using the instruction database. Examples of tools that have been based on this layout are:


Assembler (drives off of the instruction-set syntax in the database),


Disassembler (table lookup of instruction encodings in the database),


Simulator (uses the database to generate a master decode table for each possible form of instruction), and


Testcase generators (use the database to generate testcases for the assembler and simulator).
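
As a rough illustration of how a testcase generator might walk the database, the Tcl sketch below enumerates every (CE, processor, unit, datatype) combination recorded for an instruction. The procedure name enumerateForms is a hypothetical helper for illustration only, not one of the tools named above.

# Illustrative sketch: enumerate all recorded variations of an instruction
# as (CE, processor, unit, datatype) tuples.
proc enumerateForms {name} {
    global instruction
    set forms {}
    foreach ce $instruction($name,CE) {
        foreach p $instruction($name,PROCS) {
            foreach u $instruction($name,UNITS) {
                foreach dt $instruction($name,DATATYPES) {
                    lappend forms [list $ce $p $u $dt]
                }
            }
        }
    }
    return $forms
}

# With the ADD entries shown earlier: llength [enumerateForms ADD] is 196.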


Another aspect of the present invention is that the syntax of the instructions allows for the ready generation of self-checking code from test vectors parameterized over conditional execution/datatypes/sign-extension/etc. TCgen, a test case generator, and LSgen are exemplary programs that generate self-checking assembly programs that can be run through a Verilog simulator and C-simulator.


An outline of a TCgen program 200 in accordance with the present invention is shown in FIG. 2. Such programs can be used to test all instructions except for flow-control and iVLIW instructions. TCgen uses two data structures to accomplish this result. The first data structure defines instruction-set syntax (the datatypes, CE forms (ce[1,2,3]), sign extension, rounding and operands for which the instruction is defined) and semantics (how many cycles the instruction requires to execute, which operands are immediate operands, etc.). This data structure is called the instruction-description data structure.


An instruction-description data structure 300 for the multiply instruction (MPY) is shown in FIG. 3, which illustrates an actual entry out of the instruction description; here, e stands for empty. The second data structure defines the input and output state for each instruction. An actual entry out of the MAU-answer set for the MPY instruction 400 is shown in FIG. 4. State can contain functions which are context sensitive upon evaluation. For instance, when defining an MPY test vector, one can define: RXb (RX before)=maxint, RYb (RY before)=maxint, RTa (RT after)=maxint*maxint. When TCgen is generating an unsigned word form of the MPY instruction, maxint would evaluate to 0xffffffff. When generating an unsigned halfword form, however, it would evaluate to 0xffff. In this way the test vectors are parameterized over all possible instruction variations. Multiple test vectors are used to set up and check state for packed data type instructions.
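
A minimal Tcl sketch of such a context-sensitive function is given below. The width table, the currentType variable, and the datatype names used here are illustrative assumptions, not TCgen's actual internals.

# Sketch of a context-sensitive maxint: its value depends on the datatype
# form currently being generated (the names and table below are assumptions).
array set typeWidth {uw 32 uh 16 ub 8}
set currentType uw

proc maxint {} {
    global typeWidth currentType
    # 32-bit word -> 0xffffffff, 16-bit halfword -> 0xffff, and so on.
    return [format 0x%x [expr {(1 << $typeWidth($currentType)) - 1}]]
}

puts [maxint]                 ;# prints: 0xffffffff
set currentType uh
puts [maxint]                 ;# prints: 0xffff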


The code examples of FIGS. 3 and 4 are in Tcl syntax, but are fairly easy to read. “set” is an assignment, parentheses ( ) are used for array indices, and braces { } are used for defining lists. The only functions used in FIG. 4 are “maxint”, “minint”, “sign0unsi1”, “sign1unsi0”, and an arbitrary arithmetic expression evaluator (mpexpr). Many more such functions are described herein below.


TCgen generates about 80 tests for these 4 entries, which is equivalent to about 3000 lines of assembly code. It would take a long time to generate such code by hand. Also, parameterized testcase generation greatly simplifies maintenance. Instead of having to maintain 3000 lines of assembly code, one only needs to maintain the test vectors defined above. If an instruction description changes, that change can easily be made in the instruction-description file. A configuration-dependent instruction-set definition can also be readily established. For instance, providing only word instructions for the ManArray, or only fixed-point instructions on an SP, can be fairly easily specified.


Test generation over database entries can also be easily subset. Specifying “SUBSET(DATATYPES) {1sw 1sh}” would generate testcases for only the one-signed-word and one-signed-halfword instruction forms. For the multiply instruction (MPY), this means that the unsigned word and unsigned halfword forms are not generated. The testcase generators TelRita and TelRitaCorita are tools that generate streams of random (albeit with certain patterns and biases) instructions. These instruction streams are used for verification purposes in a co-verification environment where state between a C-simulator and a Verilog simulator is compared on a per-cycle basis.
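
Returning to the subsetting just described, a generator could honor such an entry roughly as in the sketch below; the array layout and the MPY datatype list used here are assumptions for illustration only.

# Illustrative sketch: restrict generation to the datatype forms named in an
# optional SUBSET(DATATYPES) entry (array layout and lists are assumed).
set instruction(MPY,DATATYPES) {1sw 1uw 1sh 1uh}
set SUBSET(DATATYPES) {1sw 1sh}

proc datatypesToGenerate {name} {
    global instruction SUBSET
    if {![info exists SUBSET(DATATYPES)]} {
        return $instruction($name,DATATYPES)
    }
    set keep {}
    foreach dt $instruction($name,DATATYPES) {
        if {$dt in $SUBSET(DATATYPES)} {
            lappend keep $dt
        }
    }
    return $keep
}

puts [datatypesToGenerate MPY]   ;# prints: 1sw 1sh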


Utilizing the present invention, it is also relatively easy to map the parameterization over the test vectors to the instruction set since the instruction set is very consistent.


Further aspects of the present invention are addressed in the documentation which follows below. This documentation is divided into the following principal sections:


Section I: Table of Contents;
Section II: Programmer's User's Guide (PUG);
Section III: Programmer's Reference (PREF).


The Programmer's User's Guide Section addresses the following major categories of material and provides extensive details thereon: (1) an architectural overview; (2) processor registers; (3) data types and alignment; (4) addressing modes; (5) scalable conditional execution (CE); (6) processing element (PE) masking; (7) indirect very long instruction words (iVLIWs); (8) looping; (9) data communication instructions; (10) instruction pipeline; and (11) extended precision accumulation operations.


The Programmer's Reference Section addresses the following major categories of material and provides extensive details thereof: (1) floating-point (FP) operations, saturation and overflow; (2) saturated arithmetic; (3) complex multiplication and rounding; (4) key to instruction set; (5) instruction set; (6) instruction formats, as well as, instruction field definitions.


While the present invention has been disclosed in the context of various aspects of presently preferred embodiments, it will be recognized that the invention may be suitably applied to other environments and applications consistent with the claims which follow.

Claims
  • 1. A system core comprising: a processor;a direct memory access (DMA) controller operating under control of a DMA processor;an instruction memory containing processor instructions and DMA processor instructions;a plurality of memories, the DMA controller coupled to the instruction memory and the plurality of memories, the DMA processor configured for fetching the DMA instructions from the instruction memory and executing the DMA instructions in parallel with the processor fetching and executing the processor instructions, the DMA instructions when executed causing the transfer of data to populate the plurality of memories with data from an external device, the processor operating on the data found in the populated memories.
  • 2. The system core of claim 1 wherein the executed DMA instructions specify a pattern to populate the plurality of memories.
  • 3. The system core of claim 2 wherein the pattern is a block, circular, or stride pattern.
  • 4. The system core of claim 1 wherein the data from the external device includes processor instructions.
  • 5. The system core of claim 1 further comprising: a DMA bus connecting the DMA controller to the instruction memory and the plurality of memories.
  • 6. The system core of claim 1 further comprising: a bus coupled to the external device and the system core.
  • 7. The system core of claim 1 wherein the external device is an external host processor.
  • 8. The system core of claim 1 wherein the external device is an external synchronous data random access memory (SDRAM).
  • 9. The system core of claim 1 wherein the DMA processor fetches DMA instructions from the instruction memory and executes the DMA instructions in parallel with the processor fetching and executing the processor instructions, the DMA instructions when executed causing the transfer of data to populate the external device with data from the plurality of memories.
  • 10. A method for transferring data between a system core and an external device, the system core having a processor, a direct memory access (DMA) processor, an instruction memory storing processor instructions and DMA processor instructions, and a plurality of memories, the method comprising: fetching direct memory access (DMA) instructions from the instruction memory under control of the DMA processor;executing the fetched DMA instructions in parallel with the processor fetching and executing the processor instructions, the DMA instructions when executed causing the transfer of data to populate the plurality of memories with data from the external device; andtransferring data from the external device to the plurality of memories.
  • 11. The method of claim 10 wherein the executed DMA instructions specify a pattern to populate the plurality of memories.
  • 12. The method of claim 11 wherein the pattern is a block, circular, or stride pattern.
  • 13. The method of claim 10 wherein the data from the external device includes processor instructions.
  • 14. The method of claim 10 wherein the external device is an external host processor.
  • 15. The method of claim 10 wherein the external device is an external synchronous data random access memory (SDRAM).
  • 16. The method of claim 10 further comprising: executing the fetched DMA instructions in parallel with the processor fetching and executing the processor instructions, the DMA instructions when executed causing the transfer of data to populate the external device with data from the plurality of memories; andtransferring data from the plurality of memories to the external device.
  • 17. The method of claim 16 wherein the transferring data step further comprises: accessing data from the plurality of memories:writing the data to the external device wherein both the accessing and the writing steps occur in parallel.
  • 18. The method of claim 10 wherein the processor is a sequential processor (SP) which executes the data transferred from the external device as instructions.
RELATED APPLICATIONS

The present application is a continuation of U.S. Ser. No. 09/599,980 filed Jun. 22, 2000, now U.S. Pat. No. 6,748,517, which claims the benefit of U.S. Provisional Application Ser. No. 60/140,425 filed Jun. 22, 1999, both of which are incorporated herein by reference in their entirety.

US Referenced Citations (4)
Number Name Date Kind
4475155 Oishi et al. Oct 1984 A
5179689 Leach et al. Jan 1993 A
5822616 Hirooka Oct 1998 A
6944683 Barry et al. Sep 2005 B2
Provisional Applications (1)
Number Date Country
60140425 Jun 1999 US
Continuations (1)
Number Date Country
Parent 09599980 Jun 2000 US
Child 10797726 US