1. Field of the Invention
The present invention relates to data transfer between different subsystems of a computer and, in particular, to a method and respective system for transferring a stream of data from a first, higher-speed subsystem of a computer to a plurality of lower-speed subsystems, wherein the stream is structured as a sequence of blocks of different bit length, and a block is to be transferred to a specific one of said lower-speed subsystems.
2. Description and Disadvantages of Prior Art
Between the individual subsystems of a computer, data and commands are transported via multiple bus systems in blocks of different bit length. Thereby, the term “block” is understood to represent, generally, a piece of information, i.e., an information packet, such as the above commands including or excluding respective data, wherein a block is a unit that is sensibly processed according to a single semantic rule, as for example “decode command”, or “pass block to subsystem X”, etc. A general technical problem emerges when a transport crosses subsystem borders and the data speed in the source subsystem is higher than the data speed in the target subsystem, such that data cannot be directly processed in a synchronous way as it arrives at the subsystem border.
A typical example is disclosed in the IBM Journal of Research and Development, Vol. 48, No. 4, May/July 2004, pages 449 to 459, and in particular FIG. 2 and the text in the right column of page 450.
In fact, the incoming input stream is very inhomogeneous in packet size, which results in quite inefficient use of the buffers. Nevertheless, the buffers must be dimensioned quite large in order to cope with payload peaks.
A further disadvantage is that two different sets of buffers are needed in order to implement the two different functions of speed matching and in-order alignment in the case where program commands are transferred via the bus system.
It is thus the objective of the present invention to provide data management at the above-mentioned subsystem borders which makes more efficient use of the storage resources.
This objective of the invention is achieved by the features stated in enclosed independent claims. Further advantageous arrangements and embodiments of the invention are set forth in the dependent claims.
The basic idea of the present invention is to use a queue instead of the before-mentioned plurality of buffers, and to use this queue both for speed matching and for alignment purposes, if the latter plays a major role. A write client logic is inserted at the upstream write end of the queue, which inserts certain control bits into a queue entry at predetermined entry locations. Those control bits are evaluated by a read client logic reading them from the output register of the queue; this evaluation yields respective signalling to the queue control logic, thus implementing steps like stopping the queue read process, restarting the queue read process with the last or the next entry, etc.
Thus, according to this basic aspect of the present invention, the payload of the input stream is cut at predefined locations which are appropriate for the respective prevailing payload data, and is stored in respective queue entries together with additional control information which can immediately be evaluated at the output register of the queue. Thus, at the end of the queue the queue entries can be read out, and the correct processing of the payload can be derived by evaluating the control bits also comprised in the respective queue entry. This yields the advantage that the queue output evaluation logic need not be programmed with semantics which result from the content of the payload.
This approach solves the difficulty of coping with the architectural constraint that at the output register of the queue only a predetermined number of blocks (here only one) can be processed at a time. The size of the queue entries is typically so large that a queue entry may be filled with a plurality of blocks, if the block length is short enough. An example of a short block is one that contains a simple command without data, or just a small set of data. On the other hand, a block can also be so long that it spans more than a single queue entry.
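The structure of such a queue entry, carrying payload together with the control bits evaluated at the output register, might be sketched as follows. This is a minimal illustrative model, not the disclosed hardware implementation; the field names follow the signal names [c], [u] and [k] introduced later in this description.

```python
from dataclasses import dataclass

@dataclass
class QueueEntry:
    """Illustrative sketch of one queue entry: payload plus control bits.

    c: "conditional repeat" - entry must be re-displayed after a stop/restart.
    u: "unconditioned repeat" - more than one block starts in this entry.
    k: countdown - further entries still needed to complete the latest block.
    """
    payload: bytes
    c: bool = False
    u: bool = False
    k: int = 0

# An entry holding two short command blocks: the [u] bit is set so the
# entry is displayed twice at the output register, once per block start.
entry = QueueEntry(payload=b"CMD1CMD2", u=True)
```

The point illustrated is that the control bits travel in parallel with the payload, so the output-side logic can act on them without interpreting the payload itself.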
According to a preferred aspect of the present invention, a specific control bit is provided which is enabled when the queue entry comprising it shall be repeatedly displayed at the output register of the queue. This handles the case of multiple short blocks within one entry: in a first read process the first block is processed, and in a subsequent read of the same entry the next block is processed.
A further control bit can preferably be provided which indicates whether or not the respective queue entry must be repeatedly displayed at the output register of the queue after the queue read process has stopped.
This is needed to ensure that the queue read process can stop even if a queue entry contains the start of a new block together with data associated with the previous block. For example, a queue entry must be read before the read process stops, because it may contain the last part of an already started block, or multiple block starts may lie within one entry. However, the last block start may not be allowed to be decoded yet, because there is the danger of an underrun. In the case of a slow writer, this is one reason why the queue read process must stop here. In such a case the entry must be repeated when the queue read process starts again.
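The stop/restart rule described above can be sketched as a small decision function on the read side. This is an assumed formulation for illustration; the disclosure specifies only the behaviour, not this interface.

```python
def must_repeat_after_stop(entry_c: bool, stopped_on_entry: bool) -> bool:
    """Illustrative read-client rule: if the read process stopped on an
    entry whose conditional-repeat bit [c] is set, that entry must be
    displayed again when reading resumes, so the block start it contains
    that was not yet allowed to be decoded can then be processed."""
    return stopped_on_entry and entry_c

# The entry carried a not-yet-decodable block start and the reader
# stopped on it, so it is repeated on restart:
repeat = must_repeat_after_stop(entry_c=True, stopped_on_entry=True)
```

If the read process did not stop on the entry, or the [c] bit is clear, the entry is consumed normally and never re-displayed.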
Further preferably, a countdown control information is implemented which indicates the number of queue entries still to follow upstream of a current queue entry, wherein those following queue entries comprise the continuation of the latest block in the current queue entry. In other words, a countdown indicator within the control bits indicates the number of queue entries required for finishing a current packet. If there is at least one packet header in a queue entry, the countdown indicates the length of the whole packet.
This countdown control information is needed to determine when the queue can safely start to read out a block downstream without interruption. This is mainly needed to cope with the speed-matching requirements. In the case of a slow writer, i.e., a slow queue write process, and a fast reader, i.e., a fast queue read process, the reader uses the countdown and the speed ratio to calculate the optimal point in time to start the read of a block, for minimal latency on the one hand and underrun avoidance on the other.
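One possible form of this calculation is sketched below. The formula is an assumption derived from the stated goal (start as early as possible without underrunning); the disclosure does not give a concrete expression, and the rate model (entries per cycle) is illustrative.

```python
import math

def entries_needed_before_start(k: int, write_rate: float, read_rate: float) -> int:
    """Illustrative speed-matching rule: a block occupies the current
    entry plus k further entries (the countdown [k]). Reading back-to-back
    consumes k+1 entries at read_rate while the writer refills the queue
    at write_rate. To avoid an underrun while keeping latency minimal,
    roughly (k+1) * (1 - write_rate/read_rate) entries must already be
    buffered before the streamed readout begins."""
    total = k + 1
    if write_rate >= read_rate:
        # The writer keeps up with the reader; start with the first entry.
        return 1
    return max(1, math.ceil(total * (1.0 - write_rate / read_rate)))

# A block spanning 4 entries, reader twice as fast as the writer:
# half the block must be buffered before the readout starts.
needed = entries_needed_before_start(k=3, write_rate=1.0, read_rate=2.0)
```

The larger the speed gap, the more of the block must be queued up front; when the writer is at least as fast as the reader, no buffering beyond the first entry is required.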
The present invention is illustrated by way of example and is not limited by the shape of the figures of the drawings in which:
With general reference to the figures and with special reference now to
With reference to the following figures, an application is described in which the term “packet” is used in a sense which coincides exactly with the more general term “block” as used before and within the claims.
With reference to
In this exemplary embodiment these control signals are:
Advantageously, the queue 24 needs no knowledge about the semantic meaning of the payload contained in each queue entry 26. The only signals treated by the queue control logic 25 are [c], [u] and [k], which are stored in parallel with the payload in one queue entry. Signals [s] and [r] are generated by a decode logic 29 at the downstream end of the queue with respect to the internal state of the queue and depending on the signals [c], [u] and [k] of a particular queue entry 26.
In particular, the signal “Stopped on partial entry” [s] is generated whenever the queue is signalled by the control bit evaluation logic 27 that a streamed readout of all data according to one header is possible, and the entry indicated a conditional repeat signal [c].
Further, with respect to an exemplary embodiment of the invention wherein the decode logic operates statelessly, the signal “Repeated entry” [r] is generated whenever the queue shows a [u]-flagged entry for the second time. This is needed to tell the downstream logic which block (enumerated by its position in the decode window) has to be processed at a particular cycle. In this stateless implementation of the decode logic, the [u] flag is needed to tell the decode logic that a part of the entry has already been decoded at the time the entry was shown the first time. In a more general sense, this might be reflected by an integer number coding, e.g., 1st repeat, 2nd repeat, 3rd repeat, etc., causing the decode logic to process the 2nd portion for the 1st repeat, the 3rd portion for the 2nd repeat, and so on.
If only two blocks can start in a single entry, such a single flag meaning “1st repeat” is sufficient.
According to this general aspect it is disclosed that, if the attached logic is only capable of coping with a certain number of packet starts at a time (this preferred embodiment deals with one packet start at a time), and if multiple block starts may appear within one entry, which may be dictated by the queue write process as is the case in the sample implementation of this disclosure, the entry has to be given to the downstream logic multiple times. In the present embodiment this is once for each block start within the entry. However, it should be added that, as described before, the last block start of an entry may have to be delayed for underrun prevention in the case of a slow writer.
The following descriptions are based on a scenario where a single repeat is sufficient to deal with the packets. The algorithm can easily be adapted and scaled for other numbers of repeats.
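The display-repetition rule described above, including the optional deferral of the last block start, can be sketched as follows. The function and its return convention are illustrative assumptions, not part of the disclosure.

```python
def display_schedule(num_block_starts: int, delay_last_start: bool) -> list:
    """Illustrative sketch: an entry containing n block starts is shown
    to the downstream logic once per block start (first showing, then
    n-1 repeats). If the last start must be delayed for underrun
    prevention (slow writer), its showing is deferred until the read
    process restarts; the marker string "deferred" stands for that case.
    Returns the block-start indices in the order they are decoded."""
    shows = list(range(1, num_block_starts + 1))
    if delay_last_start and num_block_starts > 1:
        return shows[:-1] + ["deferred"]
    return shows

# Two block starts in one entry, no underrun risk: the entry is shown
# twice, decoding block 1 and then block 2.
schedule = display_schedule(num_block_starts=2, delay_last_start=False)
```

With a single repeat flag as in the scenario above, at most two block starts per entry can be handled; the integer-coded generalization scales the same schedule to more starts.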
With additional reference to
The inbound stream [i] 30 generally consists of a number [l] of parallel data shots. The inbound stream 30 is fed into the shift register [S] 32, which consists of the registers [A] 31 and [B] 33, where register [B] 33 shows the value of [A] at time (t−1).
The control signal “unconditioned repeat” [u] is generated by logic 27 whenever there is more than one packet header start in the window [w] 36. The appendix [a] 34 ensures that a packet header that starts in the window [w] 36 fully resides in register [A] 31.
The control signal “conditional repeat” [c] tells the queue 24 how to behave on a stop/restart condition. This signal is generated whenever a previous packet ends and a new packet starts in register [A] 31 which is not yet fully present in register [A] at this particular time (t).
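The two generation rules just stated can be captured in a small function on the write side. The boolean inputs are assumed abstractions of what the inspector observes in the window [w] and in register [A]; the disclosure describes the conditions, not this interface.

```python
def generate_control_bits(header_starts_in_window: int,
                          packet_ends_in_a: bool,
                          new_start_incomplete_in_a: bool):
    """Illustrative write-side generation of the control bits:
    [u] ("unconditioned repeat") is set when more than one packet header
        starts in the evaluation window [w];
    [c] ("conditional repeat") is set when a previous packet ends in
        register [A] and a new packet starts there that is not yet fully
        present at this time (t).
    Returns the pair (c, u)."""
    u = header_starts_in_window > 1
    c = packet_ends_in_a and new_start_incomplete_in_a
    return c, u

# Two header starts visible in the window: the entry must be shown twice.
c_bit, u_bit = generate_control_bits(2, False, False)
```

Note that the two bits are independent: an entry can require both an unconditioned repeat and a conditional repeat, depending on its contents.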
With additional reference to
The queue 24 [Q] writes the payload [p] to the shift register 32 [S]. The data is fed into register 31 [A], while the content of [A] is moved to register 33 [B] and that of register [B] is moved to a register 35 [C]. The inspector sets the header multiplexer, denoted as “header mux”, to the leftmost header start bit “header start” in [w].
Whenever an [s] signal is read, which means that queue 24 [Q] stopped on an entry 26 with the [c] condition, no header in [A] will be decoded until the signal is gone. After the pause the same shot appears in [A], but without the [s] condition, which reactivates the decoder.
With reference to
The control flow of the write logic generating the important control signals for the queue management comprises a general step 710 of analysing the inbound stream 30. In particular, the number k of subsequent queue entries required to comprise a complete block, here a complete command, is always encoded independently of the composition of a data shot.
In step 730 the condition is checked whether more than one block start is visible in the available evaluation window w, see back to
In a step 740 the queue restart condition is managed by setting control signal [c] if a previous block ends and a new block beginning is visible in the available evaluation window w,
It should be added that the countdown [k] is not related to the signal generation of [c] and [u]. For the speed-matching however, [k] is needed to ensure that the queue read process only passes the queue entries to the client side when the whole packet can be processed without interruption.
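The encoding of the countdown [k] in step 710 might be sketched as below. How the block length maps to entries is not spelled out in the disclosure, so the byte-based formulation and the fixed entry width are assumptions for illustration only.

```python
import math

def countdown_k(remaining_block_bytes: int, entry_payload_bytes: int) -> int:
    """Illustrative encoding of the countdown [k]: the number of further
    queue entries needed to complete the current block, derived from the
    remaining block length and the entry (data-shot) payload width.
    k = 0 means the block finishes within the current entry."""
    return max(0, math.ceil(remaining_block_bytes / entry_payload_bytes) - 1)

# A 20-byte command with 8-byte entries: the current entry plus two
# further entries carry the block, so k = 2.
k = countdown_k(remaining_block_bytes=20, entry_payload_bytes=8)
```

As stated above, [k] feeds only the speed matching on the read side and is generated independently of the [c] and [u] conditions.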
In a step 830 the control signal [u] is read and evaluated in order to effect a repetition of the last queue entry at the output register of the queue when this condition is met.
In step 840, the presence of control signal [c] is monitored in the same way.
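The read-side evaluation of steps 830 and 840 can be condensed into one decision function. This combines the rules stated earlier in this description; the string return values are an illustrative convention, not disclosed signal names.

```python
def read_action(u: bool, c: bool, stopped_on_entry: bool) -> str:
    """Illustrative read-side evaluation:
    - [u] set: the entry holds more than one block start, so it is
      repeated at the output register unconditionally (step 830);
    - [c] set: the entry is repeated only if the read process was
      stopped on it (step 840);
    - otherwise the queue advances to the next entry."""
    if u:
        return "repeat"
    if c and stopped_on_entry:
        return "repeat"
    return "advance"

# [c] alone, without a stop, lets the queue advance normally.
action = read_action(u=False, c=True, stopped_on_entry=False)
```

This keeps the output-side logic free of payload semantics, as stated above: the decision depends only on the stored control bits and the queue's stop state.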
With reference to
Hereby, the following abbreviations and data apply:
The following scenario deals with the following values of data shots:
The write logic 27 generates both signals [c] and [u] as described before. In the write stream 30, a rest of the last processed command is denoted with alpha, beta, gamma; it is not taken into consideration in this example.
Then the start of packet 1 is denoted by H1 in the last section of data shot 1. In the next data shot 2, two subsequent parts of the header of packet 1 are depicted as h1. In the next section the start of the header of packet 2, denoted as H2, is placed, followed by one subsequent part h2 in the second data shot. In the third data shot, the second subsequent part of the header of packet 2 is present in the first section of data shot 3. Then in the second section of data shot 3 the third data packet starts with header H3, followed by two subsequent header parts h3. Finally, in the fourth data shot the fourth data packet begins (H4), followed by two subsequent header parts h4. The last section in data shot 4 is not taken into consideration for this example.
In
In
With reference back to
From the description given above, in summary, the following advantages result: the queue states and properties have no implications for the generation logic [G] or the decoding logic [D]. Only the queue depth is relevant to support a number of back-to-back commands.
The storage is distributed on demand between the tasks of supporting long packets and supporting a large number of back-to-back commands.
The present invention can be realized in hardware, or a combination of hardware and software. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention can also be embedded in a computer program product such as an ASIC, for example an FPGA, which comprises all the features enabling the implementation of the methods described herein, and which, when installed in a computer system, is able to carry out these methods.
Computer program means or computer program in the present context mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
Number | Date | Country | Kind |
---|---|---|---|
05106813.8 | Jul 2005 | EP | regional |