This invention relates to a data processing system, a method for task scheduling in a data processing system and a computer program product.
Data processing systems or devices for executing modern data processing applications process huge amounts of data using complex processing algorithms. Advanced video processing systems or devices for executing video processing applications may, for example, provide a wide range of processing capabilities, such as video encoding and decoding, motion compensated frame rate conversion, or 3D-video processing, in order to provide a high end video experience. In this respect, “processing data” may comprise converting data from one representation into a different one, for example, converting a compressed video data stream into an uncompressed sequence of video frames. It may also refer, for example, to extracting parts of the information contained in the data, such as extracting audio information from multi-media data or detecting objects in video sequences.
A data processing system contains one or more processing devices for providing the needed high performance processing power. Data processing systems may, for example, be provided as a system on a chip (SoC) or as circuitry, e.g. located on a printed circuit board (PCB), containing one or more integrated circuit devices. Data processing systems in mobile devices, such as portable computers or smartphones, or in automotive apparatus, such as vehicles, may provide only limited processing power, requiring efficient usage.
Data processing applications may, for example, be communication network related applications, such as applications for video or multi-media transmission, internet traffic routing, or protocol conversions. Other data processing applications may provide, for example, video content or content combining multiple media data, such as images, video, textual information, audio, or 3D animated graphics. Data processing systems for execution of these applications may, for example, be arranged to process large amounts of data at a processing speed above a minimum processing speed associated with a particular application, such as error-free decoding and uninterrupted display of video sequences received in a compressed data format, just to give an example. The received data may be processed in a pre-determined sequence of consecutive processing stages.
A data processing system may be capable of processing, sequentially or concurrently, data belonging to the same or different applications. For each application, data may be processed at a quality of service (QoS) considered suitable for that particular application. A QoS parameter may, for example, be a required bit rate or image resolution, jitter, delay or bit error rate, just to name a few.
Instead of processing dedicated data on general purpose processors, specialized data processing systems can be used, which, for example, employ hardware acceleration engines, i.e. processing devices optimized for accelerated execution of dedicated tasks. In order to execute the different processing stages for a data set on available processing devices optimized for processing dedicated tasks, multiple-stage processing algorithms and methods are divided into multiple tasks, where each task provides a portion of the total processing needed for a whole data set. A task may correspond to a processing stage or a portion of a processing stage. For example, video processing systems, implemented, for example, on a graphics board or as a SoC, may include hardware acceleration engines arranged to implement, for example, video encoding and decoding or motion compensated frame rate conversion functionalities and may help to achieve high video quality with reduced hardware complexity and processing latency. Allocating the tasks to dedicated processing devices as efficiently as possible usually involves performing a full search of dependencies between the tasks, in order to enable efficient pipeline processing of tasks depending on each other.
The present invention provides a data processing system, a method for task scheduling in a data processing system and a computer program product as described in the accompanying claims.
Specific embodiments of the invention are set forth in the dependent claims.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. In the drawings, like reference numbers are used to identify like or functionally similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary, as illustrated, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.
Referring to
The data processing system using an event-driven task scheduling approach may provide a fast and resource-saving task search and release and may avoid a conventional full search.
The task scheduling device 12 of the shown data processing system 10 may be arranged to assign tasks to a processing unit 16, 18, 20 and to distribute task assignment between the processing units 16, 18, 20. A task may be a processing algorithm consisting of instructions that can be loaded and executed by a processing unit. A processing unit 16, 18, 20 may be a processing device of the data processing system 10. A processing unit 16, 18, 20 may, for example, be a microprocessor, a microcontroller unit (MCU), a graphics processing unit (GPU) or any other circuitry arranged to execute program instructions of any or dedicated tasks. A processing unit may, for example, be a hardware acceleration engine, i.e. a processing device optimized for accelerated execution of dedicated tasks. Assigning a task to a processing unit may refer to allocating the processing resources, i.e. the processing unit and the input and output buffers, to the assigned task and the data to be processed.
Scheduling may refer to the way tasks are assigned to run on available processing units. A task scheduling device 12 may be arranged to receive tasks and to decide which task to assign, when, and to which of the processing units 16, 18, 20, in order to increase usage of the processing units 16, 18, 20 and improve performance of the data processing system 10. Performance of the data processing system 10 may be improved, for example, by enhancing task throughput, i.e. the number of tasks completed per unit of time, or by reducing latency and response time per task.
Receiving a task may, for example, refer to receiving a task descriptor for the particular task. A task descriptor may, for example, be a set of information comprising addresses of or pointers to a task identifier, task data and associated input and output buffers. A task may, for example, be defined by an identifier number of the processing unit associated with the task, a pointer to or address of an associated input buffer or input buffer list (IBL) for receiving the data to be processed next, and an associated output buffer or output buffer list (OBL) for receiving the processed data.
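Purely as an illustration, such a task descriptor may be sketched as follows; the Python representation and all field names are assumptions made for this example and not a prescribed layout:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskDescriptor:
    """Illustrative task descriptor; all field names are assumptions."""
    task_id: int                  # task identifier
    unit_id: int                  # identifier of the associated processing unit
    input_buffers: List[int] = field(default_factory=list)   # IBL: input buffer addresses
    output_buffers: List[int] = field(default_factory=list)  # OBL: output buffer addresses

# A hypothetical decode task reading from address 0x1000 and writing to 0x2000:
decode_task = TaskDescriptor(task_id=1, unit_id=3,
                             input_buffers=[0x1000], output_buffers=[0x2000])
print(decode_task)
```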
Receiving a task may also refer to receiving only a task identifier or pointer to a task descriptor or it may refer to receiving all data related to the particular task. Similarly, assigning a task may also refer to assigning a task identifier or a task descriptor or any other information for enabling the selected processing unit to execute or perform the task. A task register 14 arranged to store a plurality of tasks may be, for example, any register, buffer or other memory device arranged to store, e.g., task data, task identifiers and/or task descriptors. New tasks may be added dynamically to the task register.
The shown data processing system 10 may comprise a flow chain buffer unit 22 arranged to store one or more task parameter tables defining one or more processing flows of one or more of the plurality of tasks and one or more associated flow chains, and each of the flow chains may comprise one or more of the plurality of processing units. The task scheduling device 12 may comprise a task register 14 arranged to store the plurality of tasks, each of the plurality of tasks being associated with one or more of the processing flows. The task scheduler controller unit 24 may be arranged to assign the one or more of the plurality of tasks according to a corresponding one of the one or more processing flows.
A processing flow of tasks defined in a task parameter table may be, for example, a linked list or other source of information defining dependencies between and required consecutiveness of tasks when processing a set of data. Just to give an example, compressed video data may first be de-compressed, and then upsizing, colour space conversion and display enhancement may be applied to the video data before displaying the decoded video content. A processing flow of tasks may be associated or mapped to one or more associated flow chains. A flow chain buffer unit 22 may, for example, be a shared memory buffer containing a linked list. The task scheduling device may manage execution of one or several processing flows according to the linked list. A flow chain may comprise one or more of the plurality of processing units 16, 18, 20, i.e., a flow chain may comprise information on how to execute a processing flow of tasks using one or more of the processing units of the data processing system 10. A flow chain may be considered to comprise a particular processing unit when, for example, the flow chain comprises a pointer or other identifier of the particular processing unit. This may allow a task of a processing flow to be mapped to processing units 16, 18, 20 adapted to execute the assigned task without a need for a full search of dependencies between tasks at the time of task assignment and without a high rate of accesses to external memory, reducing latency and improving QoS of the data processing system 10.
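A minimal sketch of a processing flow represented as a linked list of flow chain entries may, for example, look as follows; the node layout and the four-stage video flow are illustrative assumptions, not the actual task parameter table format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowChainEntry:
    """One node of a flow chain: which unit runs this stage, and what follows."""
    task_id: int
    unit_id: int                                # unit adapted to execute this task
    next: Optional["FlowChainEntry"] = None     # next stage; None ends the flow

# Hypothetical four-stage flow: decode -> upsize -> colour conversion -> enhance.
enhance = FlowChainEntry(task_id=4, unit_id=2)
csc = FlowChainEntry(task_id=3, unit_id=1, next=enhance)
upsize = FlowChainEntry(task_id=2, unit_id=1, next=csc)
decode = FlowChainEntry(task_id=1, unit_id=0, next=upsize)

# Walking the linked list yields the order in which tasks are to be assigned:
stage = decode
while stage is not None:
    print(f"task {stage.task_id} -> unit {stage.unit_id}")
    stage = stage.next
```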
The processing units 16, 18, 20 may be connected to the task scheduling device 12 and may receive the tasks to process and generate a task status event indicating when execution of the task is finished. The task status event may be signalled to the task scheduling device 12 and may allow the task scheduling device 12 to assign more tasks to the particular processing unit.
The task scheduling device 12 may be arranged to analyse task status conditions for repeating processing of the same task, and may, for example, assign the same task to the same or another processing unit 16, 18, 20. Additionally or alternatively, the task scheduling device 12 may be arranged to analyse task status conditions for processing tasks sharing data buffers with the finished task. The task scheduling device 12 may be arranged to assign another task to the same processing unit.
For example, the received task status event may allow the task scheduling device 12 to proceed with the flow processing of the data processed by the previously finished task, i.e. assign a subsequent task of the processing flow to a suitable subsequent processing unit of the associated flow chain, which may be the same or a different one of the plurality of processing units 16, 18, 20. Processing flows and flow chains may be event-driven. The task scheduling device 12 may select the flow chain of processing units for processing a flow of tasks on a fully modular basis, instead of selecting between pre-defined allowed flows.
Task scheduling may be managed by the task scheduling device 12 without intervention by, for example, a central processing unit of a computer that may host the described data processing system 10.
The task scheduler controller unit 24 of the task scheduling device 12 may, for example, be a processing device or logic circuit connected to assign tasks, in response to corresponding processing flows and to receiving the task status events associated with one or more previously assigned tasks, to a corresponding one of the processing units 16, 18, 20 being adapted or configured to execute the assigned task.
The data processing system 10 shown in
The task scheduling device 12 may be connected to the task register 14 via a data channel and may be arranged to receive tasks. Tasks may be offline tasks, i.e. non-real-time tasks, and the task scheduler controller unit 24 of the task scheduling device 12 may, for example, be arranged to maximize throughput of tasks or minimize latency of task processing or may be adapted to optimize QoS of the data processing system 10 with respect to an aimed trade-off between throughput and latency. The data processing system 10 may also comprise an input 26 connectable to receive task data. The task data may comprise real-time task data and the task scheduling device 12 may be arranged to receive and schedule one or more real-time tasks. For example, a video processing system may be arranged to receive video streams or support live video communication over a communication network. Other real-time environments may, for example, be mobile devices for automatic control, for example, in robotics. Real-time tasks may be characterized by operational deadlines from event to system response. A real-time task may be executed within strict constraints on response time of the data processing system 10. The task scheduling device 12 may allow using the several processing units 16, 18, 20 for executing different offline and real-time task operations on the incoming data in an efficient way with minimum memory bandwidth and overhead and maximized efficiency, to meet high output data rates and to provide a high QoS.
The task scheduler controller unit 24 of the task scheduling device 12 may comprise an input queue and the task scheduling device 12 may comprise an arbitrating unit 28 arranged to receive the task status events and to insert the task status events into the input queue. The arbitrating unit 28 or arbiter may be connected, e.g., via control channels between the processing units 16, 18, 20 and the arbitrating unit 28, to receive at least the task status events generated by the processing units 16, 18, 20. It may or may not also receive other events. The arbitrating unit 28 may insert the task status events or the corresponding tasks or other data identifying the corresponding tasks from the task register 14 into the input queue of the task scheduler controller unit 24. The arbitrating unit 28 may also be connectable to input 26 for receiving real-time tasks or other new tasks for inserting into the input queue of the task scheduler controller unit 24. Each task having an entry in the input queue of the task scheduler controller unit 24 may have an assigned priority identifier, which may, for example, be used by the arbitrating unit 28 for inserting the entry into the input queue at a position reflecting its priority of processing. In another embodiment of the data processing system 10, the priority information may be evaluated by the task scheduler controller unit 24 instead of the arbitrating unit 28. The input queue may be comprised in the task scheduler controller unit 24 or it may be implemented as a separate unit connected to the task scheduler controller unit 24.
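The priority-ordered insertion into the input queue may, for example, be illustrated by the following sketch; the bisect-based ordering and the interpretation of lower values as higher priority are assumptions made only for this example:

```python
import bisect

class InputQueue:
    """Illustrative priority-ordered input queue of the task scheduler controller."""
    def __init__(self):
        self._entries = []  # list of (priority, event), kept sorted, lowest value first

    def insert(self, priority, event):
        # Insert the event at a position reflecting its priority of processing.
        bisect.insort(self._entries, (priority, event))

    def pop(self):
        # The highest-priority (lowest value) entry is processed first.
        return self._entries.pop(0)[1] if self._entries else None

queue = InputQueue()
queue.insert(5, "task status: unit 18 finished task 7")
queue.insert(1, "real-time task received at input 26")
print(queue.pop())  # the real-time task is dequeued first
```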
For pipeline-like assignment of tasks to processing devices, the task scheduling device 12 may be arranged to assign tasks to different ones of the plurality of processing units 16, 18, 20 for at least partly parallel execution of the tasks. The tasks may, for example, be associated with the one or more processing flows. The one or more processing flows may, for example, be the same processing flow, i.e., tasks constituting the same processing flow may be distributed across the available processing units 16, 18, 20. Additionally or alternatively, the processing flows may, for example, be different processing flows, i.e., tasks associated with different ones of the processing flows may be assigned to the available processing units 16, 18, 20. In other words, tasks belonging to different processing flows may be executed in parallel on the plurality of processing units.
In case consecutive processing of certain tasks of a processing flow is not mandatory, tasks belonging to the same processing flow may be executed in parallel on available processing units 16, 18, 20, too. One or more of the processing units 16, 18, 20 may, for example, be arranged to execute tasks of single and multiple processing flows in a time-multiplex mode. The processing units 16, 18, 20 may operate in parallel or with time-multiplexing of tasks dedicated to processing different segments of the same processing flow or different processing flows. An at least partly parallel execution of tasks may be an execution of tasks being in parallel for at least a portion of the total processing time of the tasks. Some of the processing units 16, 18, 20 may, for example, at least partly provide the same functionality and may be arranged to provide multi-threading support.
The task scheduling device 12 may comprise a plurality of task output queues 30, 32, 34, each connectable to a corresponding one of the plurality of processing units 16, 18, 20. The task scheduler controller unit 24 may be arranged to assign one or more of the plurality of tasks to the corresponding one of the processing units 16, 18, 20 arranged to execute the assigned task by inserting the one or more of the plurality of tasks into one or more of the task output queues 30, 32, 34. Providing a dedicated task output queue for each of the processing units 16, 18, 20 may help avoid bottlenecks and performance-degrading head-of-line blocking and may enable high task throughput and fast response times and, thereby, enhanced QoS, increasing suitability for real-time applications. Providing a task output queue for each processing unit 16, 18, 20 may enable parallel queuing of tasks, multi-threading and parallel computing of the processing units.
The task scheduling device may comprise a plurality of queue control units 36, 38, 40 connected to the plurality of task output queues 30, 32, 34, each of the plurality of queue control units being arranged to assign a task from a connected task output queue 30, 32, 34 to a corresponding processing unit 16, 18, 20 in response to availability information of the corresponding processing unit. The availability information may be comprised in or derived from the task status events signalled by the particular processing unit, or it may, for example, be comprised in a dedicated event that may be signalled, e.g., directly to the corresponding queue control unit. A new task may be assigned, for example, one clock cycle after the previous task was finished, enabling full utilisation of the processing unit.
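The behaviour of such a queue control unit, together with its dedicated task output queue, may be illustrated by the following sketch; the class and method names are assumptions, and a real QLM would be implemented as logic circuitry rather than software:

```python
from collections import deque

class QueueControlUnit:
    """Illustrative queue launch machine (QLM) for one processing unit."""
    def __init__(self, unit):
        self.output_queue = deque()  # dedicated task output queue of this unit
        self.unit = unit
        self.unit_busy = False

    def enqueue(self, task):
        self.output_queue.append(task)
        self._try_launch()

    def on_task_status_event(self):
        # The finished-task event carries the availability information.
        self.unit_busy = False
        self._try_launch()

    def _try_launch(self):
        # Launch the next queued task as soon as the unit is available; in
        # hardware this may happen one clock cycle after the previous task.
        if not self.unit_busy and self.output_queue:
            self.unit_busy = True
            self.unit.execute(self.output_queue.popleft())

class DummyUnit:
    def execute(self, task):
        print(f"executing {task!r}")

qlm = QueueControlUnit(DummyUnit())
qlm.enqueue("decode page 0")   # launched immediately
qlm.enqueue("decode page 1")   # queued while the unit is busy
qlm.on_task_status_event()     # completion event; the next task launches
```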
A queue control unit or queue launch machine (QLM) may, for example, be any logic circuitry or processing device implementing a queue state machine arranged to manage the tasks in the corresponding connected task output queue and allocate the next assigned task to the connected processing unit.
In an embodiment of the data processing system 10, at least one of the plurality of queue control units 36, 38, 40 may be arranged to assign a task from a connected task output queue 30, 32, 34 to a corresponding processing unit 16, 18, 20 in response to a priority of the task, i.e., the task scheduler controller unit 24 and the arbitrating unit 28 may be provided with reduced complexity, and, for example, only those queue control units 36, 38, 40 managing allocation of tasks that may use priority information may be provided with circuitry for evaluating priority information. The reduced complexity of the arbitrating unit 28 and the task scheduler controller unit 24 may allow for very fast arbitration and task scheduling, respectively. Within each task output queue 30, 32, 34, the queue control unit 36, 38, 40 may select the next task to be run in the connected processing unit 16, 18, 20 with respect to a task priority. The priority associated with the task may be adapted dynamically, for example, in response to an availability of the shared memory buffer, a waiting time in the task output queue or a static priority of the processing flow the task belongs to.
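The dynamic selection of the next task within a task output queue may, for example, be sketched as follows; the particular weighting of static flow priority, waiting time and buffer availability is an arbitrary assumption chosen only to illustrate the principle:

```python
def select_next_task(queue, now, buffers_available):
    """Pick the queued task with the highest effective priority (illustrative)."""
    def effective_priority(task):
        priority = task["flow_priority"]         # static priority of the flow
        priority += now - task["enqueued_at"]    # waiting time raises priority
        if buffers_available(task):              # runnable tasks are preferred
            priority += 10
        return priority
    return max(queue, key=effective_priority) if queue else None

tasks = [
    {"name": "csc",    "flow_priority": 2, "enqueued_at": 0},
    {"name": "upsize", "flow_priority": 5, "enqueued_at": 4},
]
# "csc" wins (2 + 6 + 10 = 18) over "upsize" (5 + 2 + 10 = 17) due to waiting time:
print(select_next_task(tasks, now=6, buffers_available=lambda t: True)["name"])
```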
The data processing system 10 may comprise one or more memory buffer units. The one or more memory buffer units may, for example, be configurable to comprise an input buffer and an output buffer for each task assigned to a processing unit 16, 18, 20. The one or more memory buffer units may, for example, be shared memory buffer units, i.e. the data processing system 10 may comprise one or more shared memory buffer units 42, 44, 46, 48.
Shared memory may be memory that may be accessed by multiple processing units 16, 18, 20 executing multiple tasks, for example to provide communication among them or to avoid redundant copies. For example, an output buffer of a first task executed by a first processing unit 16 may be changed into an input buffer of a second task executed by a second processing unit 18 that may receive the processing result of the first processing unit 16 as input for further processing, without copying or moving the data. Sharing internal memory buffers between different tasks may reduce the memory load and the need to access external memory devices for intermediate results. The shown data processing system 10 may reduce memory load and power consumption while providing a scalable architecture for adding additional image or video processing accelerators or other processing units.
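The role change of a shared buffer from output buffer of a first task to input buffer of a second task, without copying the data, may be illustrated as follows; the buffer object and task functions are hypothetical:

```python
class SharedBuffer:
    """A shared memory buffer whose role changes between consecutive tasks."""
    def __init__(self, name):
        self.name = name
        self.data = None

buf = SharedBuffer("buffer 44")

def run_first_task(out_buf):
    out_buf.data = "decoded frame"   # the first unit writes its result

def run_second_task(in_buf):
    # The second unit reads the very same memory; no copy or move of the data.
    return f"upsized({in_buf.data})"

run_first_task(buf)           # buf acts as output buffer of the first task
print(run_second_task(buf))   # buf now acts as input buffer of the second task
```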
The data processing system 10 may comprise a switching unit 50 arranged to connect the plurality of processing units 16, 18, 20 to the one or more shared memory buffer units 42, 44, 46, 48. A switching unit 50 may, for example, be a cross-bar switch or any other switching device or multiplexer arranged to connect the processing units 16, 18, 20 to one or more of the shared memory buffer units 42, 44, 46, 48.
Referring to
Referring to
When executing a processing flow, using the flow chain shown in
Referring to
The data processing system 90 may be arranged to apply processing flows of tasks to the input data received through data input interface 126. For example, received input video data may be downsized, if necessary, and compressed. Compressed video frames may, for example, be stored in compressed video frame buffers 136 located in the external memory device 132. For compression and decompression, the video codec 106 may use reference buffers 138 located in the external memory 132. The GPU 108 may, for example, be connected to use a shared memory buffer 104 for providing graphics that may be overlaid with the video content. A graphics frame buffer 140 located in the external memory 132 may be connected to receive graphics content. Compressed video data may be subject to temporal interpolation. A processing flow dedicated to displaying video content may comprise accessing compressed video data from the memory using CDMA processing unit 102 and applying decoding and upsizing. The video for display may then, for example, be subject to colour space conversion (CSC) and may be combined with graphics overlay, for example provided by the GPU 108 and held in the graphics frame buffer 140. After applying further display enhancement, the content, i.e., decoded video and combined graphics, may be delivered to the display controller and interface 128.
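The display flow described above may, for example, be noted down as an ordered flow chain such as the following sketch; the "scaler", "CSC unit" and "enhancement unit" entries are assumed names for units not explicitly numbered in this example:

```python
# Hypothetical flow chain for the display path of data processing system 90:
display_flow_chain = [
    ("fetch compressed video",  "CDMA 102"),
    ("decode",                  "video codec 106"),
    ("upsize",                  "scaler"),             # assumed unit name
    ("colour space conversion", "CSC unit"),           # assumed unit name
    ("graphics overlay",        "GPU 108"),
    ("display enhancement",     "enhancement unit"),   # assumed unit name
    ("output to display",       "display controller 128"),
]
for stage, unit in display_flow_chain:
    print(f"{stage:>24} -> {unit}")
```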
Task scheduling may, for example, be initiated by the task scheduler controller unit 110 or an external processing device, or the task scheduling device 92 may comprise a second controller unit 142 arranged to initiate the one or more processing flows. The second controller unit may also be arranged to terminate processing flows. The second controller unit 142 may, for example, be a reduced instruction set computing (RISC) device providing high performance and high-speed operation, or it may be another processing device or microcontroller device.
Referring to
The shown flow chain may be event-driven. After receiving an initial command by the second controller unit 142, the TSC 110 may receive information that data to be processed is available in an external memory 132, and buffer availability information from an internal memory buffer 94. In case the data and a processing unit are available, the TSC 110 may assign the task to a processing unit, for example a direct memory access unit, such as VDMA 112, for execution. VDMA 112 may be arranged to signal a task status event to TSC 110 after finishing the task. On reception of the VDMA task status event, the TSC 110 may be arranged to check availability of input and output buffers, wherein buffer 94, which served as an output buffer for VDMA 112, may now be the input buffer holding the data to be processed by the next processing unit 114 in the flow chain. The output buffer for processing unit 114 may, for example, be buffer unit 96. In case input and output buffers 94, 96 are available, the TSC 110 may assign the next task of the processing flow to processing unit 114. After receiving a task status event from processing unit 114, signalling completion of task processing, TSC 110 may again check availability of buffer 96, which may now serve as input buffer for processing unit 116, and of buffer 98, and may then assign the next task of the processing flow to processing unit 116. On reception of a task status event from processing unit 116 signalling that the assigned task has been completed successfully, TSC 110 may again check availability of buffer 98 and assign the next task of the processing flow to the next processing unit 120 in the flow chain. In the shown example, processing unit 120 may be a direct memory access unit arranged to provide the processed output data to an external memory 132. On reception of a task status event indicating successful completion of the last task of the processing flow, TSC 110 may provide an indication to the second controller unit 142, which may, for example, be arranged to terminate the processing flow.
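The check-and-assign cycle described above may be condensed into the following sketch; the helper function and the synchronous modelling of task status events are simplifying assumptions, not the actual TSC implementation:

```python
def run_flow_chain(chain, buffer_available):
    """Assign each task of a flow chain in turn; each iteration models one
    task status event followed by a buffer availability check (illustrative)."""
    for task, unit, in_buf, out_buf in chain:
        # The TSC checks availability of input and output buffer before assignment.
        if not (buffer_available.get(in_buf, True) and buffer_available.get(out_buf, True)):
            raise RuntimeError(f"cannot assign {task!r}: buffers busy")
        print(f"assign {task!r} to {unit}")
        # Simulated completion: the output buffer of this task now holds the
        # data and will serve as input buffer of the next task in the chain.
        buffer_available[out_buf] = True

chain = [
    ("load",    "VDMA 112", "external memory 132", "buffer 94"),
    ("stage 2", "unit 114", "buffer 94",           "buffer 96"),
    ("stage 3", "unit 116", "buffer 96",           "buffer 98"),
    ("store",   "unit 120", "buffer 98",           "external memory 132"),
]
run_flow_chain(chain, buffer_available={})
```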
With the described approach, processing overhead caused by the procedure of selecting the next task may be decreased. Copying of processed data between buffers may be reduced or avoided by using shared memory buffers. External memory copies may not be required when processing a flow chain, except for loading the data to be processed at the beginning of the flow chain and for output of the processing result to external memory 132 at the end of the flow chain. The task throughput of the data processing system may be increased. The processing flow executed by the shown flow chain may be one of many, which may be executed at least partly in parallel. The processing flow may be pipelined. The TSC 110 may receive task status events from processing units of different flow chains. A search for the next task to assign may be possible with little overhead, since only event-related tasks may be checked.
A response time of the data processing system may be fast, for example due to fast task arbitration and multi-threading architecture. This may help reduce processing bottlenecks, reduce latency and avoid head of line blocking.
Referring to
Intra-frame level scheduling performed by a task scheduling device may then be applied, for example, to single video or image frames 146, which may be divided into blocks or pages for further processing. A page may be a portion of the video frame processed by one task run.
Intra-page level scheduling and processing may be applied to pages 148 of a frame and may be performed by dedicated acceleration engines or other processing units of the data processing system.
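Dividing a frame into pages for single task runs may, for example, be illustrated as follows; the 64-line page height and the row-wise partitioning are arbitrary assumptions:

```python
def split_frame_into_pages(frame_height, page_height=64):
    """Yield (first_row, last_row_exclusive) boundaries of each page (illustrative)."""
    for top in range(0, frame_height, page_height):
        yield top, min(top + page_height, frame_height)

# A hypothetical 1080-line frame split into 64-line pages, one task run per page:
pages = list(split_frame_into_pages(1080))
print(len(pages), "pages; last page rows:", pages[-1])
```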
Referring to
Referring to
Referring to
Referring to
Then the TSC may be arranged to check whether there is a task in halt mode. Halt mode means that task execution has been paused by a processing unit due to internal processing reasons. If a task is found to be in halt mode, the TSC may carry out a read operation of its pointers and update them to the corresponding processing unit or accelerator. Otherwise, the TSC may switch to IDLE mode.
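The halt-mode check may be sketched as follows; the task fields and the callbacks for reading pointers and updating the accelerator are hypothetical placeholders:

```python
def service_halted_tasks(tasks, read_pointers, update_unit):
    """If any task is halted, re-read its buffer pointers and refresh the
    corresponding processing unit; otherwise report IDLE (illustrative)."""
    for task in tasks:
        if task.get("halted"):
            pointers = read_pointers(task)        # TSC reads the task's pointers
            update_unit(task["unit"], pointers)   # ...and updates the accelerator
            return "SERVICED"
    return "IDLE"

tasks = [{"name": "csc", "halted": False},
         {"name": "upsize", "halted": True, "unit": 18}]
state = service_halted_tasks(
    tasks,
    read_pointers=lambda t: {"in": 0x1000, "out": 0x2000},
    update_unit=lambda unit, p: print(f"update unit {unit} with {p}"),
)
print(state)
```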
Referring to
Referring to
The method may comprise storing, in a flow chain buffer unit, one or more task parameter tables defining one or more processing flows and one or more associated flow chains, each of the flow chains comprising one or more of the plurality of processing units. The method may further comprise storing, in a task register, the plurality of tasks, each of the plurality of tasks being associated with one or more of the processing flows.
A programmable apparatus may be provided for at least partly executing the steps of the shown method. A computer program product may comprise code portions for executing steps of a method as described above when run on a programmable apparatus.
The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention.
A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
The computer program may be stored internally on a computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on transitory or non-transitory computer readable media permanently, removably or remotely coupled to an information processing system. The computer readable media may include, for example and without limitation, any number of the following:
magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.; and data transmission media including computer networks, point-to-point telecommunication equipment, and carrier wave transmission media, just to name a few.
A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims.
The connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections. The connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa. Also, a plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.
Each signal described herein may be designed as positive or negative logic. In the case of a negative logic signal, the signal is active low where the logically true state corresponds to a logic level zero. In the case of a positive logic signal, the signal is active high where the logically true state corresponds to a logic level one. Note that any of the signals described herein can be designed as either negative or positive logic signals. Therefore, in alternate embodiments, those signals described as positive logic signals may be implemented as negative logic signals, and those signals described as negative logic signals may be implemented as positive logic signals.
Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. For example, the task scheduler controller unit 24, the arbitrating unit 28 and the queue control units 36, 38, 40 may be provided as different circuits or devices or integrated in a single device. Likewise, the flow chain buffer unit 22 may be provided connected to or integrated in the task scheduling device 12.
Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
Furthermore, those skilled in the art will recognize that the boundaries between the above described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed in additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. For example, the data processing system 10 may be provided as a system on a chip in a single integrated circuit. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner. For example, the task scheduling device 12 and the processing units 16, 18, 20 may be provided as separate integrated circuits.
Also for example, the examples, or portions thereof, may be implemented as soft or code representations of physical circuitry or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.
Also, the invention is not limited to physical devices or units implemented in non-programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code, such as mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices, commonly denoted in this application as ‘computer systems’.
However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.
While the principles of the invention have been described above in connection with specific apparatus, it is to be clearly understood that this description is made only by way of example and not as a limitation on the scope of the invention.