Method and Device for Controlling Output Arbitration

Information

  • Patent Application
  • Publication Number
    20170012890
  • Date Filed
    August 13, 2014
  • Date Published
    January 12, 2017
Abstract
Provided is a method and device for controlling output arbitration, comprising: a received data stream is stored in a corresponding data cache queue according to a de-multiplexing filter condition, and data address information of the corresponding data cache queue is updated; when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an End Of Packet (EOP), the data cache queue is controlled to apply for output arbitration and the state of the data cache queue is updated; and the cache data in the data cache queue which applies for the output arbitration is outputted according to a preset scheduling rule and the state of the data cache queue.
Description
TECHNICAL FIELD

The present disclosure relates to the technical field of digital videos, and in particular to a method for controlling output arbitration and a device for controlling output arbitration.


BACKGROUND

During the process of transmitting a digital video stream, de-multiplexing a Transport Stream (TS) in software has defects such as a high occupancy rate of CPU resources and low execution efficiency when locating and searching for specified data; thus, the TS is usually de-multiplexed by hardware.


Through de-multiplexing, data with different attributes are classified and stored in different cache queues. For example, Packetized Elementary Stream (PES) data is stored according to programs, section data is stored according to types, and other data such as index data and self-adaption area data is stored according to application requirements. The data in each cache queue is then output, by various methods for controlling output arbitration, to an address which the software is able to access, for example, a Double Data Rate (DDR) synchronous dynamic memory. What the software directly processes are the de-multiplexed data streams with known types, such as the PES data stream, the section data stream, the index data stream and the self-adaption data stream. This type of system architecture effectively improves the efficiency of de-multiplexing and video applications, and thus ensures smooth playback of digital videos under conditions of high code rate and heavy multiplexing.


Common methods for controlling output arbitration include Strict Priority (SP), Round Robin (RR), Weighted Round Robin (WRR), Deficit Round Robin (DRR), Deficit Weighted Round Robin (DWRR) and other methods. Different methods for controlling output arbitration meet different design requirements; alternatively, multiple methods for controlling output arbitration are used simultaneously in one system design, or an existing method for controlling output arbitration is modified according to special requirements to execute a specified function. The process of implementing an SP fixed-length scheduling algorithm in the related art includes the following steps (a code sketch follows Step 5):


Step 1: a scheduling priority (P(i), i=1, . . . , n) is assigned to each data cache queue, where P(1) is the lowest scheduling priority and P(n) is the highest scheduling priority, each data cache queue corresponding to a different scheduling priority.


Step 2: among the data cache queues, any queue in which the length of the cache data is greater than or equal to a fixed length applies for output arbitration.


Step 3: i=n; the scheduler judges whether the data cache queue with the highest scheduling priority P(i) applies for output arbitration; when the judgement result is yes, the scheduler schedules the output of a fixed length of data from the data cache queue with the highest scheduling priority P(i) and returns to Step 2; otherwise, Step 4 is executed.


Step 4: i=i−1; the scheduler judges whether the data cache queue with the next highest scheduling priority P(i) applies for output arbitration; when the judgement result is yes, the scheduler schedules the output of a fixed length of data from the data cache queue with the scheduling priority P(i) and returns to Step 2; otherwise, Step 5 is executed.


Step 5: the scheduler judges whether the data cache queue currently being checked is the one with the lowest scheduling priority P(1); when the judgement result is yes, return to Step 2; otherwise, return to Step 4.
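
For concreteness, the following is a minimal sketch, in C, of one pass of the SP fixed-length scheduling loop described in Steps 1 to 5 above; the queue structure, the value of the fixed length and the helper names (request_pending, output_fixed_length, sp_fixed_length_pass) are illustrative assumptions rather than details taken from the related art being described.

```c
#include <stdbool.h>
#include <stddef.h>

#define FIXED_LEN 188u              /* assumed fixed output length (bytes) */

typedef struct {
    size_t cached_len;              /* bytes currently cached in the queue */
} cache_queue_t;

/* Step 2: a queue applies for output arbitration when it holds at least
 * FIXED_LEN bytes. */
static bool request_pending(const cache_queue_t *q)
{
    return q->cached_len >= FIXED_LEN;
}

/* Illustrative output routine: removes one fixed-length block of data. */
static void output_fixed_length(cache_queue_t *q)
{
    q->cached_len -= FIXED_LEN;
}

/* One pass over n queues, where q[0] has priority P(1) (lowest) and
 * q[n-1] has priority P(n) (highest), following Steps 3 to 5. */
static void sp_fixed_length_pass(cache_queue_t *q, int n)
{
    for (int i = n - 1; i >= 0; i--) {      /* scan from P(n) down to P(1) */
        if (request_pending(&q[i])) {
            output_fixed_length(&q[i]);
            i = n;                          /* "return to Step 2": rescan
                                               from the highest priority   */
        }
    }
}
```

Unlike the hardware scheduler, which cycles continuously, this sketch returns once a full scan from P(n) down to P(1) finds no pending request.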


According to the SP fixed-length output scheduling algorithm, the data is assigned different priorities and the data cache queue with the highest scheduling priority has priority to be served, so as to ensure that the most important application obtains the fastest data transmission.


During the implementation of the related SP fixed-length output scheduling scheme, at least the following defects exist:


for the PES data and section data obtained after de-multiplexing streaming data, which need to be processed as whole packets by software, the following problem arises when the SP fixed-length output scheduling scheme is directly adopted: if a cache queue has stored an End Of Packet (EOP) of the PES data or the section data but the length of the cache data stored in the cache queue has not reached the fixed length, the cache queue will not apply for arbitration, no matter whether the cache queue has the highest or the lowest scheduling priority. In this condition, since the cache queue with the highest scheduling priority does not output the EOP of the PES data or the section data, the software is not able to acquire the latest data information corresponding to the PES data or the section data in time; the cache queue is not able to apply for output arbitration until it meets the fixed-length condition, and the software is not able to process the entire packet until the scheduler responds to the application and schedules the output. This defect impacts the timeliness and completeness of processing the data stream and reduces the performance of the system.


Therefore, for streaming data cache queues, the direct application of the SP fixed-length output scheduling algorithm merely meets the prioritization service requirement of each queue, but does not meet the timeliness service requirement of the data.


SUMMARY

In view of the above, the embodiments of the present disclosure aim at providing a method for controlling output arbitration and a device for controlling output arbitration, which not only can provide an output arbitration service for the fixed-length data, but also can provide an output arbitration service for the variable-length data which contains an End Of Packet (EOP) and of which a length is less than the fixed length.


In order to achieve the above object, the technical scheme of the present disclosure is as follows:


the embodiment of the present disclosure provides a method for controlling output arbitration, including: storing a received data stream in a corresponding data cache queue according to a de-multiplexing filter condition, and updating data address information of the corresponding data cache queue; when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an End Of Packet (EOP), controlling the data cache queue to apply for output arbitration and updating a state of the data cache queue; and outputting the cache data in the data cache queue which applies for the output arbitration according to a preset scheduling rule and the state of the data cache queue.


In an example embodiment, outputting the cache data in the data cache queue which applies for the output arbitration according to the preset scheduling rule and the state of the data cache queue comprises: setting a scheduling priority for each data cache queue, each data cache queue corresponding to a different scheduling priority; outputting the cache data in the data cache queue which applies for the output arbitration according to an order from a highest scheduling priority to a lowest scheduling priority and the state of the data cache queue.


In an example embodiment, outputting the cache data in the data cache queue which applies for the output arbitration according to the order from the highest scheduling priority to the lowest scheduling priority and the state of the data cache queue comprises: Step A: acquiring a data cache queue with the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority; Step B: judging whether the currently acquired data cache queue applies for output arbitration according to the state of the data cache queue; when it is determined that the currently acquired data cache queue applies for the output arbitration, executing Step C; when it is determined that the currently acquired data cache queue does not apply for the output arbitration, executing Step D; Step C: outputting the cache data in the current data cache queue according to the data address information, then executing Step A; Step D: when it is determined that the currently acquired data cache queue is a data cache queue with the lowest scheduling priority, executing Step A; when it is determined that the currently acquired data cache queue is not the data cache queue with the lowest scheduling priority, acquiring a data cache queue with a scheduling priority next to the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority, and executing Step B.


In an example embodiment, outputting the cache data in the current data cache queue according to the data address information comprises: when the length of the cache data is greater than or equal to the fixed length, outputting the fixed length of cache data in the data cache queue according to the data address information and preset fixed-length output information; when the length of the cache data is less than the fixed length but the cache data contains the EOP, determining an EOP address of the cache data according to the data address information and outputting, according to the EOP address, a corresponding variable length of cache data of which a length is less than the fixed length.


In an example embodiment, storing the received data stream in the corresponding data cache queue according to the de-multiplexing filter condition comprises: determining the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquiring the current data address information of the data cache queue; storing the data stream in the corresponding data cache queue according to the acquired data address information.


According to another embodiment of the present disclosure, a device for controlling output arbitration is provided, including: a data enqueuing component, a queue managing component, an arbitration controlling component and an output scheduling component, wherein the data enqueuing component is configured to store a received data stream in a corresponding data cache queue according to a de-multiplexing filter condition; the queue managing component is configured to update data address information of the corresponding data cache queue; the arbitration controlling component is configured to control the data cache queue to apply for output arbitration and to update a state of the data cache queue when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or it is determined that the cache data contains an End Of Packet (EOP); and the output scheduling component is configured to output the cache data in the data cache queue which applies for the output arbitration according to a preset scheduling rule and the state of the data cache queue.


In an example embodiment, the output scheduling component is configured to: set a scheduling priority for each data cache queue, each data cache queue corresponding to a different scheduling priority; output the cache data in the data cache queue which applies for the output arbitration according to an order from a highest scheduling priority to a lowest scheduling priority and the state of the data cache queue.


In an example embodiment, the output scheduling component is configured to: Step A: acquire a data cache queue with the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority; Step B: judge whether the currently acquired data cache queue applies for output arbitration according to the state of the data cache queue; when it is determined that the currently acquired data cache queue applies for the output arbitration, execute Step C; when it is determined that the currently acquired data cache queue does not apply for the output arbitration, execute Step D; Step C: output the cache data in the current data cache queue according to the data address information, then execute Step A; Step D: when it is determined that the currently acquired data cache queue is a data cache queue with the lowest scheduling priority, execute Step A; when it is determined that the currently acquired data cache queue is not the data cache queue with the lowest scheduling priority, acquire a data cache queue with a scheduling priority next to the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority, and execute Step B.


In an example embodiment, the output scheduling component is configured to: when the length of the cache data is greater than or equal to the fixed length, output the fixed length of cache data in the data cache queue according to the data address information and preset fixed-length output information; when the length of the cache data is less than the fixed length but the cache data contains the EOP, determine an EOP address of the cache data according to the data address information and output, according to the EOP address, a corresponding variable length of cache data of which a length is less than the fixed length.


In an example embodiment, the data enqueuing component is configured to: determine the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquire the current data address information of the data cache queue; store the data stream in the corresponding data cache queue according to the acquired data address information.


With the method and device for controlling output arbitration provided in the embodiments of the present disclosure, a received data stream is stored in a corresponding data cache queue according to a de-multiplexing filter condition and data address information of the corresponding data cache queue is updated; the data cache queue is controlled to apply for output arbitration when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an EOP; and the cache data in the data cache queue which applies for output arbitration is output according to a preset scheduling rule and the state of the data cache queue, wherein the EOP may be the EOP of PES or Section data. Therefore, the present disclosure not only can provide an output arbitration service for the fixed-length data, but also can locate the EOP of the PES data or the Section data through a hardware finding and locating method to provide an output arbitration service for the variable-length PES or Section data which contains an EOP and of which the length is less than the fixed length, and at the same time can meet the timeliness service requirement of the data.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flowchart of a method for controlling output arbitration according to the embodiment of the present disclosure;



FIG. 2 is a flowchart of a method for outputting cache data based on an SP algorithm according to the embodiment of the present disclosure; and



FIG. 3 is a structure diagram of a device for controlling output arbitration according to the embodiment of the present disclosure.





DETAILED DESCRIPTION OF THE EMBODIMENTS

In the embodiment of the present disclosure, a received data stream is stored in a corresponding data cache queue according to a de-multiplexing filter condition and data address information of the corresponding data cache queue is updated, the data cache queue is controlled to apply for output arbitration when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an EOP, and the cache data in the data cache queue which applies for output arbitration is output according to a preset scheduling rule and the state of the data cache queue. Therefore, the present disclosure not only can provide an output arbitration service for the fixed-length data, but also can provide an output arbitration service for the variable-length data which contains an EOP and of which a length is less than the fixed length.


Here, carrying an EOP refers to carrying the EOP of the PES data or Section data.


The specific embodiments of the present disclosure are further illustrated below in conjunction with accompanying drawings.


The embodiment of the present disclosure provides a method for controlling output arbitration, which as shown in FIG. 1 includes:


S100: a received data stream is stored in a corresponding data cache queue according to a de-multiplexing filter condition, and data address information of the corresponding data cache queue is updated.


Specifically, first, a data cache queue corresponding to the data stream is determined according to the de-multiplexing filter condition; then, the current data address information of the data cache queue is acquired; and finally, the data stream is stored in the corresponding data cache queue according to the acquired data address information, and the data address information of the corresponding data cache queue is updated. Here, the de-multiplexing operation refers to a de-multiplexing operation realized through hardware, that is, hardware de-multiplexing, so as to solve the problems of a high occupancy rate of CPU resources and low execution efficiency in locating specific bytes, which are caused by software de-multiplexing.


Here, a corresponding data cache queue is set for each data type in advance; the de-multiplexing filter condition may be set in advance to store PES data in a corresponding data cache queue according to program types, to store Section data in a corresponding data cache queue according to data types, or to store other data such as index data and self-adaption area data in a corresponding data cache queue according to application requirements; each type of data corresponds to a fixed data cache queue. Through the de-multiplexing filter condition, the data cache queues corresponding to different types of data are determined, and each type of data is stored as cache data in the corresponding data cache queue.


Here, the preset data address information includes data read/write pointer information and other information. When data is enqueued, a write starting address is indicated for each data cache queue according to a write pointer, the data is stored in a storage unit starting from the write starting address, and the received data stream is thereby stored in the corresponding data cache queue according to the data address information. When data is dequeued, a read starting address is indicated for each data cache queue according to a read pointer and the data in the storage unit starting from the read starting address is output. Then, the data address information of the data cache queue (that is, the data read/write pointers of the data cache queue) is updated according to the data enqueuing and data dequeuing conditions of the data cache queue, so as to complete the management of the data cache queue by monitoring the data enqueuing and data dequeuing conditions in real time.
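
As an illustration of the read/write-pointer management described above, the following is a minimal C sketch that models one data cache queue as a circular buffer; the buffer depth and the names addr_info_t, enqueue and dequeue are assumptions made for the example and do not correspond to any structure defined in the disclosure.

```c
#include <stddef.h>
#include <stdint.h>

#define QUEUE_DEPTH 4096u       /* assumed per-queue cache size in bytes */

/* Illustrative per-queue data address information: a circular buffer with
 * read/write pointers maintained by the queue management logic. */
typedef struct {
    uint8_t buf[QUEUE_DEPTH];
    size_t  wr;                 /* write pointer: next enqueue address */
    size_t  rd;                 /* read pointer: next dequeue address  */
    size_t  len;                /* number of bytes currently cached    */
} addr_info_t;

/* Data enqueuing: store the data starting at the write pointer, then
 * update the write pointer and the cached length. */
static size_t enqueue(addr_info_t *q, const uint8_t *data, size_t n)
{
    size_t free_space = QUEUE_DEPTH - q->len;
    if (n > free_space)
        n = free_space;                     /* drop what does not fit  */
    for (size_t i = 0; i < n; i++) {
        q->buf[q->wr] = data[i];
        q->wr = (q->wr + 1) % QUEUE_DEPTH;  /* update write pointer    */
    }
    q->len += n;
    return n;
}

/* Data dequeuing: output the data starting at the read pointer, then
 * update the read pointer and the cached length. */
static size_t dequeue(addr_info_t *q, uint8_t *out, size_t n)
{
    if (n > q->len)
        n = q->len;
    for (size_t i = 0; i < n; i++) {
        out[i] = q->buf[q->rd];
        q->rd = (q->rd + 1) % QUEUE_DEPTH;  /* update read pointer     */
    }
    q->len -= n;
    return n;
}
```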


Here, data enqueuing and data dequeuing are two independent processes; while cache data is output from a data cache queue, there might be data entering the data cache queue.

S101: when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an EOP, the data cache queue is controlled to apply for output arbitration and the state of the data cache queue is updated.


Here, carrying an EOP refers to carrying the EOP of PES or Section data. The initial state of all data cache queues is set in advance to a state of not applying for output arbitration; when it is determined that the length of the cache data in a data cache queue is greater than or equal to the fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an EOP, the data cache queue applies for output arbitration and the state of the data cache queue is updated to a state of applying for output arbitration.
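
The condition under which a queue applies for output arbitration can be summarized in a short C sketch; the structure and names below (queue_state_t, update_arbitration_request) and the fixed-length value are assumed for illustration only.

```c
#include <stdbool.h>
#include <stddef.h>

#define FIXED_LEN 188u          /* assumed fixed output length (bytes)       */

/* Illustrative per-queue arbitration state, as updated in S101. */
typedef struct {
    size_t cached_len;          /* bytes currently cached in the queue       */
    bool   has_eop;             /* an EOP of a PES/Section packet is cached  */
    bool   requesting;          /* state: applying for output arbitration    */
} queue_state_t;

/* S101: apply for output arbitration when the fixed-length condition is met,
 * or when the queue holds less than the fixed length but contains an EOP.  */
static void update_arbitration_request(queue_state_t *q)
{
    q->requesting = (q->cached_len >= FIXED_LEN) ||
                    (q->cached_len > 0 && q->cached_len < FIXED_LEN && q->has_eop);
}
```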


Generally, the length of a PES or Section packet is thousands or tens of thousands of times the fixed output length of the data cache queue, and the PES or Section data enters the corresponding data cache queue piece by piece according to the de-multiplexing filter condition. When the length of the cache data in the data cache queue is greater than or equal to the fixed length, that is, when the fixed-length output condition is met, the data cache queue is controlled to apply for output arbitration, and the cache data in the data cache queue which applies for output arbitration is output according to a preset scheduling rule and the state of the data cache queue. Since a PES or Section packet is very long, all the earlier data entering the data cache queue is output by arbitration as it meets the fixed-length output condition; however, the data finally entering the data cache queue probably does not meet the fixed-length output condition. In order to ensure the timeliness and completeness of data stream processing and to enable the subsequent software to process the data in time, the PES or Section packet needs to contain an EOP; in this way, even if the data finally entering the data cache queue does not meet the fixed-length output condition, since the packet contains an EOP, the EOP is able to be located by a hardware finding and locating method, and a variable length of PES or Section data is then output according to the address indicated by the EOP. Thus, the entire PES or Section packet is output, meeting the timeliness and completeness requirements of data stream processing.


S102: the cache data in the data cache queue which applies for the output arbitration is outputted according to a preset scheduling rule and the state of the data cache queue.


Here, different scheduling rules may be set to implement the output of the cache data in the data cache queue which applies for output arbitration; specifically, the cache data in the data cache queue which applies for output arbitration is outputted based on an SP algorithm, which includes the following process:


S200: scheduling priorities (P(1) . . . P(n)) for all data cache queues are set, where P(1) is the lowest scheduling priority and P(n) is the highest scheduling priority, each data cache queue corresponding to a different scheduling priority.


S201: the data cache queue with the highest scheduling priority is acquired according to the order (P(n) . . . P(1)) from the highest scheduling priority to the lowest scheduling priority.


S202: whether the currently acquired data cache queue applies for output arbitration is judged according to the state information of the data cache queue; when it is determined that the currently acquired data cache queue applies for the output arbitration, S203 is executed; when it is determined that the currently acquired data cache queue does not apply for the output arbitration, S204 is executed.


S203: the cache data in the current data cache queue is outputted according to the data address information, then S201 is executed.


S204: when it is determined that the currently acquired data cache queue is a data cache queue with the lowest scheduling priority P(1), S201 is executed; when it is determined that the currently acquired data cache queue is not the data cache queue with the lowest scheduling priority P(1), S205 is executed.


S205: the data cache queue with the scheduling priority P(n−1) next to the highest scheduling priority is acquired according to the order (P(n) . . . P(1)) from the highest scheduling priority to the lowest scheduling priority, and S202 is executed.


It should be noted that the state of the data cache queue is updated in real time according to whether the data cache queue applies for output arbitration; when S202 is executed, whether the currently acquired data cache queue applies for output arbitration is re-determined according to the updated state of the data cache queue.
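
Combining the state update of S101 with the scheduling order of S201 to S205, one pass of the scheduler might look like the following C sketch; the structure, the fixed-length value and the simplification that the EOP is the last cached byte are assumptions of the example, not details taken from the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>

#define FIXED_LEN 188u              /* assumed fixed output length (bytes)    */

/* Minimal per-queue view needed by the scheduler; q[0] has the lowest
 * scheduling priority P(1) and q[n-1] the highest priority P(n). */
typedef struct {
    size_t cached_len;              /* bytes currently cached                  */
    bool   has_eop;                 /* an EOP of a PES/Section packet cached   */
    bool   requesting;              /* state: applying for output arbitration  */
} sched_queue_t;

/* Stand-in for S203: output either a fixed length of data or, when less than
 * the fixed length is cached but an EOP is present, the remaining data up to
 * the EOP (assumed here to be the last cached byte); then refresh the queue
 * state as in S101. */
static void output_and_update(sched_queue_t *q)
{
    if (q->cached_len >= FIXED_LEN) {
        q->cached_len -= FIXED_LEN;
    } else if (q->has_eop) {
        q->cached_len = 0;          /* variable-length output up to the EOP    */
        q->has_eop = false;
    }
    q->requesting = (q->cached_len >= FIXED_LEN) ||
                    (q->cached_len > 0 && q->has_eop);
}

/* One SP scheduling pass implementing S201 to S205. */
static void sp_schedule_pass(sched_queue_t *q, int n)
{
    int i = n - 1;                  /* S201: start at the highest priority     */
    while (i >= 0) {
        if (q[i].requesting) {      /* S202: judge the (real-time) queue state */
            output_and_update(&q[i]);   /* S203 */
            i = n - 1;              /* go back to S201                         */
        } else if (i == 0) {
            break;                  /* S204: lowest priority reached; the
                                       hardware would loop back to S201        */
        } else {
            i--;                    /* S205: next lower priority, back to S202 */
        }
    }
}
```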


Here, data enqueuing and data dequeuing are two independent processes; while cache data is output from a data cache queue, there might be data entering the data cache queue. Therefore, after the cache data in the data cache queue which applies for output arbitration is output according to the preset scheduling rule and the state of the data cache queue, the data cache queue may still contain cache data of which the length is greater than or equal to the fixed length, or of which the length is less than the fixed length but which contains an EOP, in which case the data cache queue continues to be controlled to apply for output arbitration; when, in the subsequent process, the length of the cache data in the data cache queue is less than the fixed length and the cache data does not contain an EOP, the state of the data cache queue is updated to a state of not applying for output arbitration.


In the embodiment of the present disclosure, the scheduling rule based on the SP algorithm can improve the de-multiplexing efficiency of the data stream, meet the timeliness service requirements, ensure the real-time capability of the data for subsequent system modules and improve the overall system performance.


In S203, the cache data in the current data cache queue is output according to the data address information, which is specifically realized by the following process:


when the length of the cache data is greater than or equal to the fixed length, the fixed length of cache data in the data cache queue is outputted according to the data address information and preset fixed-length output information;


when the length of the cache data is less than the fixed length but the cache data contains the EOP, an EOP address of the cache data is determined according to the data address information and a corresponding variable length of cache data of which a length is less than the fixed length, is outputted according to the EOP address.


Specifically, the EOP refers to the EOP of the PES or Section data; when it is determined that the length of the PES or Section data is less than the fixed length but the data contains an EOP, the EOP address of the PES or Section data is determined according to the PES or Section data address, and a corresponding variable length of cache data of which a length is less than the fixed length, is output according to the EOP address.


When the length of the PES or Section data or other types of data is greater than or equal to the fixed length, the fixed length of cache data in the data cache queue is outputted according to the data address information and preset fixed-length output information.


In S203, the EOP can be located by a hardware finding and locating method; the address indicated by the EOP is then determined according to the data address information, and a variable length of cache data is output according to the address of the EOP, thereby improving the execution efficiency of the system.
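
The choice between fixed-length output and EOP-bounded variable-length output in S203 can be expressed as a short calculation over the data address information; in the following C sketch the circular-buffer depth, the field names and the convention that eop_addr points just past the EOP are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_DEPTH 4096u           /* assumed per-queue cache size (bytes) */
#define FIXED_LEN   188u            /* assumed fixed output length (bytes)  */

/* Illustrative data address information kept per queue. */
typedef struct {
    size_t rd;                      /* read pointer (dequeue address)        */
    size_t cached_len;              /* bytes currently cached                */
    bool   has_eop;                 /* an EOP address has been recorded      */
    size_t eop_addr;                /* address of the byte following the EOP */
} addr_info_t;

/* S203: number of bytes to output for the current queue: either the fixed
 * length, or a variable length ending at the address indicated by the EOP. */
static size_t bytes_to_output(const addr_info_t *a)
{
    if (a->cached_len >= FIXED_LEN)
        return FIXED_LEN;                           /* fixed-length output    */
    if (a->has_eop)                                 /* variable-length output */
        return (a->eop_addr + QUEUE_DEPTH - a->rd) % QUEUE_DEPTH;
    return 0;                                       /* nothing to schedule    */
}
```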


Specifically, the EOP is located through the following method: when the PES or Section data is enqueued, the Start Of Packet (SOP) of a PES or Section packet is located according to the Payload Unit Start Indicator (PUSI), so that the EOP of the previous PES or Section packet is acquired; the EOP and the PES or Section data are enqueued simultaneously.
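
As a concrete illustration of locating the SOP, the PUSI is a single bit in the standard 188-byte TS packet header; the C sketch below merely reads that bit, while how the resulting EOP marker is attached to the enqueued data is left to the hardware implementation described above.

```c
#include <stdbool.h>
#include <stdint.h>

#define TS_PACKET_LEN 188u
#define TS_SYNC_BYTE  0x47u

/* Returns true when a 188-byte TS packet carries the Payload Unit Start
 * Indicator (PUSI), i.e. its payload begins a new PES packet or Section.
 * In the terms of the description above, this marks the SOP of a new
 * packet, so the previously enqueued byte of the same stream is the EOP
 * of the last PES or Section packet. */
static bool ts_payload_unit_start(const uint8_t pkt[TS_PACKET_LEN])
{
    if (pkt[0] != TS_SYNC_BYTE)
        return false;               /* not aligned to a TS packet header */
    return (pkt[1] & 0x40u) != 0;   /* PUSI bit in the TS packet header  */
}
```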


The embodiment of the present disclosure not only can provide an output arbitration service for the fixed-length data, but also can locate the EOP through a hardware finding and locating method to provide an output arbitration service for the variable-length PES or Section data which contains an EOP and of which the length is less than the fixed length, thereby guaranteeing the completeness of the PES or Section data, meeting the timeliness service requirement and enhancing the overall system performance.


Based on the above method, the embodiment of the present disclosure further provides a device for controlling output arbitration, of which the principle for solving the problem is similar to that of the method; therefore, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.


As shown in FIG. 3, the device for controlling output arbitration provided by the embodiment of the present disclosure includes: a data enqueuing component 301, a queue managing component 302, an arbitration controlling component 303 and an output scheduling component 304, wherein


the data enqueuing component 301 is configured to store a received data stream in a corresponding data cache queue according to a de-multiplexing filter condition;


the queue managing component 302 is configured to update data address information of the corresponding data cache queue;


the arbitration controlling component 303 is configured to control the data cache queue to apply for output arbitration and to update a state of the data cache queue when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an EOP; and


the output scheduling component 304 is configured to output the cache data in the data cache queue which applies for the output arbitration according to a preset scheduling rule and the state of the data cache queue.


The division of the function components above is an example implementation given by the embodiment of the present disclosure and does not form a limitation to the present disclosure.


Here, carrying an EOP refers to carrying the EOP of PES or Section data.


During specific implementation, the output scheduling component 304 is further configured to:


set a scheduling priority for each data cache queue, each data cache queue corresponding to a different scheduling priority;


output the cache data in the data cache queue which applies for the output arbitration according to an order from a highest scheduling priority to a lowest scheduling priority and the state of the data cache queue.


During specific implementation, the output scheduling component 304 is further configured to:


Step A: acquire a data cache queue with the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority;


Step B: judge whether the currently acquired data cache queue applies for output arbitration according to the state of the data cache queue; when it is determined that the currently acquired data cache queue applies for the output arbitration, execute Step C; when it is determined that the currently acquired data cache queue does not apply for the output arbitration, execute Step D;


Step C: output the cache data in the current data cache queue according to the data address information, then execute Step A;


Step D: when it is determined that the currently acquired data cache queue is a data cache queue with the lowest scheduling priority, execute Step A; when it is determined that the currently acquired data cache queue is not the data cache queue with the lowest scheduling priority, acquire a data cache queue with a scheduling priority next to the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority, and execute Step B.


During specific implementation, the output scheduling component 304 is further configured to:


when the length of the cache data is greater than or equal to the fixed length, output the fixed length of cache data in the data cache queue according to the data address information and preset fixed-length output information;


when the length of the cache data is less than the fixed length but the cache data contains the EOP, determine an EOP address of the cache data according to the data address information and output, according to the EOP address, a corresponding variable length of cache data of which a length is less than the fixed length.


During specific implementation, the data enqueuing component 301 is further configured to:


determine the data cache queue corresponding to the data stream according to the de-multiplexing filter condition;


acquire the current data address information of the data cache queue;


store the data stream in the corresponding data cache queue according to the acquired data address information.


Here, de-multiplexing filtering may be realized by a de-multiplexing Packet ID (PID) filter in actual applications; each de-multiplexing PID filter corresponds to a fixed data cache queue; when the data enqueuing component 301 receives a data stream filtered by the de-multiplexing PID filter, the data enqueuing component 301 determines the corresponding data cache queue according to the de-multiplexing PID filter and acquires the data address information of the data cache queue from the queue managing component 302, and stores the data stream in the corresponding data cache queue according to the data address information.
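
The mapping from a de-multiplexing PID filter to its fixed data cache queue might be modelled as in the following C sketch; the filter table, its size and the helper names are assumptions made for the example, while the 13-bit PID extraction itself follows the standard TS packet header layout.

```c
#include <stdint.h>

#define TS_PACKET_LEN   188u
#define NUM_PID_FILTERS 32      /* assumed number of de-multiplexing PID filters */

/* Each de-multiplexing PID filter corresponds to a fixed data cache queue. */
typedef struct {
    uint16_t pid;               /* 13-bit PID the filter matches                 */
    int      queue_index;       /* index of the corresponding data cache queue   */
} pid_filter_t;

/* 13-bit PID of a TS packet: low 5 bits of byte 1 and all of byte 2. */
static uint16_t ts_pid(const uint8_t pkt[TS_PACKET_LEN])
{
    return (uint16_t)(((pkt[1] & 0x1Fu) << 8) | pkt[2]);
}

/* Look up the data cache queue for an incoming TS packet, as the data
 * enqueuing component 301 would; returns -1 when no filter matches. */
static int queue_for_packet(const pid_filter_t filters[NUM_PID_FILTERS],
                            const uint8_t pkt[TS_PACKET_LEN])
{
    uint16_t pid = ts_pid(pkt);
    for (int i = 0; i < NUM_PID_FILTERS; i++) {
        if (filters[i].pid == pid)
            return filters[i].queue_index;
    }
    return -1;                  /* packet is not filtered into any queue         */
}
```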


Here, the queue managing component 302 maintains a fixed data cache queue for each de-multiplexing PID filter; the data attribute of any one data cache queue is determined by the filter information set by the corresponding de-multiplexing PID filter.


During actual applications, the data enqueuing component 301, the queue managing component 302, the arbitration controlling component 303 and the output scheduling component 304 may be realized by a chip logic gate circuit or a field-programmable gate array (FPGA) located in the output arbitration control device.


Although the above content has described example embodiments of the present disclosure, those skilled in the art can make other changes and modifications to these embodiments once they learn the basic creative concepts. Therefore, the appended claims are intended to cover the example embodiments and all changes and modifications falling within the scope of the present disclosure.


Obviously, those skilled in the art can make various changes and variations to the present disclosure without departing from the spirit and scope of the present disclosure; therefore, if these modifications and variations of the present disclosure are included in the scope of the claims and equivalent technologies of the present disclosure, the present disclosure shall cover these changes and variations.


INDUSTRIAL APPLICABILITY

To sum up, through the above embodiments and example implementations, the present disclosure not only can provide an output arbitration service for the fixed-length data, but also can locate the EOP of PES or Section data through a hardware finding and locating method to provide an output arbitration service for the variable-length PES or Section data which contains an EOP and of which the length is less than the fixed length, and at the same time can meet the timeliness service requirement of the data.

Claims
  • 1. A method for controlling output arbitration, comprising: storing a received data stream in a corresponding data cache queue according to a de-multiplexing filter condition, and updating data address information of the corresponding data cache queue; when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or when it is determined that the length of the cache data is less than the fixed length but the cache data contains an End Of Packet (EOP), controlling the data cache queue to apply for output arbitration and updating a state of the data cache queue; and outputting the cache data in the data cache queue which applies for the output arbitration according to a preset scheduling rule and the state of the data cache queue.
  • 2. The method as claimed in claim 1, wherein outputting the cache data in the data cache queue which applies for the output arbitration according to the preset scheduling rule and the state of the data cache queue comprises: setting a scheduling priority for each data cache queue, each data cache queue corresponding to a different scheduling priority; outputting the cache data in the data cache queue which applies for the output arbitration according to an order from a highest scheduling priority to a lowest scheduling priority and the state of the data cache queue.
  • 3. The method as claimed in claim 2, wherein outputting the cache data in the data cache queue which applies for the output arbitration according to the order from the highest scheduling priority to the lowest scheduling priority and the state of the data cache queue comprises: Step A: acquiring a data cache queue with the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority; Step B: judging whether the currently acquired data cache queue applies for output arbitration according to the state of the data cache queue; when it is determined that the currently acquired data cache queue applies for the output arbitration, executing Step C; when it is determined that the currently acquired data cache queue does not apply for the output arbitration, executing Step D; Step C: outputting the cache data in the current data cache queue according to the data address information, then executing Step A; Step D: when it is determined that the currently acquired data cache queue is a data cache queue with the lowest scheduling priority, executing Step A; when it is determined that the currently acquired data cache queue is not the data cache queue with the lowest scheduling priority, acquiring a data cache queue with a scheduling priority next to the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority, and executing Step B.
  • 4. The method as claimed in claim 3, wherein outputting the cache data in the current data cache queue according to the data address information comprises: when the length of the cache data is greater than or equal to the fixed length, outputting the fixed length of cache data in the data cache queue according to the data address information and preset fixed-length output information; when the length of the cache data is less than the fixed length but the cache data contains the EOP, determining an EOP address of the cache data according to the data address information and outputting, according to the EOP address, a corresponding variable length of cache data of which a length is less than the fixed length.
  • 5. The method as claimed in claim 1, wherein storing the received data stream in the corresponding data cache queue according to the de-multiplexing filter condition comprises: determining the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquiring the current data address information of the data cache queue; storing the data stream in the corresponding data cache queue according to the acquired data address information.
  • 6. A device for controlling output arbitration, comprising: a data enqueuing component, a queue managing component, an arbitration controlling component and an output scheduling component, wherein the data enqueuing component is configured to store a received data stream in a corresponding data cache queue according to a de-multiplexing filter condition; the queue managing component is configured to update data address information of the corresponding data cache queue; the arbitration controlling component is configured to control the data cache queue to apply for output arbitration and to update a state of the data cache queue when it is determined that a length of cache data in the data cache queue is greater than or equal to a fixed length, or it is determined that the cache data contains an End Of Packet (EOP); and the output scheduling component is configured to output the cache data in the data cache queue which applies for the output arbitration according to a preset scheduling rule and the state of the data cache queue.
  • 7. The device as claimed in claim 6, wherein the output scheduling component is configured to: set a scheduling priority for each data cache queue, each data cache queue corresponding to a different scheduling priority; output the cache data in the data cache queue which applies for the output arbitration according to an order from a highest scheduling priority to a lowest scheduling priority and the state of the data cache queue.
  • 8. The device as claimed in claim 7, wherein the output scheduling component is configured to: Step A: acquire a data cache queue with the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority; Step B: judge whether the currently acquired data cache queue applies for output arbitration according to the state of the data cache queue; when it is determined that the currently acquired data cache queue applies for the output arbitration, execute Step C; when it is determined that the currently acquired data cache queue does not apply for the output arbitration, execute Step D; Step C: output the cache data in the current data cache queue according to the data address information, then execute Step A; Step D: when it is determined that the currently acquired data cache queue is a data cache queue with the lowest scheduling priority, execute Step A; when it is determined that the currently acquired data cache queue is not the data cache queue with the lowest scheduling priority, acquire a data cache queue with a scheduling priority next to the highest scheduling priority according to the order from the highest scheduling priority to the lowest scheduling priority, and execute Step B.
  • 9. The device as claimed in claim 8, wherein the output scheduling component is configured to: when the length of the cache data is greater than or equal to the fixed length, output the fixed length of cache data in the data cache queue according to the data address information and preset fixed-length output information; when the length of the cache data is less than the fixed length but the cache data contains the EOP, determine an EOP address of the cache data according to the data address information and output, according to the EOP address, a corresponding variable length of cache data of which a length is less than the fixed length.
  • 10. The device as claimed in claim 6, wherein the data enqueuing component is configured to: determine the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquire the current data address information of the data cache queue; store the data stream in the corresponding data cache queue according to the acquired data address information.
  • 11. The method as claimed in claim 2, wherein storing the received data stream in the corresponding data cache queue according to the de-multiplexing filter condition comprises: determining the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquiring the current data address information of the data cache queue; storing the data stream in the corresponding data cache queue according to the acquired data address information.
  • 12. The method as claimed in claim 3, wherein storing the received data stream in the corresponding data cache queue according to the de-multiplexing filter condition comprises: determining the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquiring the current data address information of the data cache queue; storing the data stream in the corresponding data cache queue according to the acquired data address information.
  • 13. The method as claimed in claim 4, wherein storing the received data stream in the corresponding data cache queue according to the de-multiplexing filter condition comprises: determining the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquiring the current data address information of the data cache queue; storing the data stream in the corresponding data cache queue according to the acquired data address information.
  • 14. The device as claimed in claim 7, wherein the data enqueuing component is configured to: determine the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquire the current data address information of the data cache queue; store the data stream in the corresponding data cache queue according to the acquired data address information.
  • 15. The device as claimed in claim 8, wherein the data enqueuing component is configured to: determine the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquire the current data address information of the data cache queue; store the data stream in the corresponding data cache queue according to the acquired data address information.
  • 16. The device as claimed in claim 9, wherein the data enqueuing component is configured to: determine the data cache queue corresponding to the data stream according to the de-multiplexing filter condition; acquire the current data address information of the data cache queue; store the data stream in the corresponding data cache queue according to the acquired data address information.
Priority Claims (1)
Number Date Country Kind
201410053654.0 Feb 2014 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2014/084314 8/13/2014 WO 00