This application claims the benefit under 35 U.S.C. §119(a) of a Korean patent application filed on Apr. 8, 2013 in the Korean Intellectual Property Office and assigned Serial number 10-2013-0038275, the entire disclosure of which is hereby incorporated by reference.
The present disclosure relates to a method for operating a task and an electronic device thereof.
Recently, the use of multi-core processors to improve the performance of computing devices has been increasing. More particularly, multi-core processors are applied in embedded systems that require a large amount of computation in order to provide real-time services.
In User Equipment (UE) Category 3 of Long Term Evolution (LTE) Release 8, a modem of a terminal is defined to support downlink traffic of up to approximately 100 Mbps. Because an LTE-Advanced (LTE-A) system, which targets the International Mobile Telecommunications (IMT)-Advanced specification, should support traffic of 1 Gbps, the LTE-A system must handle a computational load approximately 10 times greater than that of a modem defined in UE Category 3 of LTE Release 8. Accordingly, research is needed on methods for processing this large amount of computation in parallel by applying a multi-core processor.
However, applying a multi-core processor in this manner requires a significant change in the programming paradigm and software model of the related art. For example, a sequential programming method is appropriate for improving performance by increasing the operating clock of a single core, as in the related art, but it is not suitable for the development environment of a multi-core embedded system that improves performance through parallel processing of a given work.
Nowadays, programming techniques based on a plurality of tasks, a plurality of processes, or a plurality of threads are provided. However, such techniques do not consider an environment in which the workload to be processed changes drastically.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide an embedded system that operates a task based on a varying workload and power efficiency in an electronic device.
Another aspect of the present disclosure is to provide an embedded system that operates a task based on a component carrier in an electronic device.
Another aspect of the present disclosure is to provide an embedded system that generates, divides, or combines a task according to a varying workload in an electronic device.
Another aspect of the present disclosure is to provide an embedded system that operates a task on each layer basis of a protocol in an electronic device.
Another aspect of the present disclosure is to provide an embedded system that operates a task using a plurality of Central Processing Unit (CPU) cores in an electronic device.
In accordance with an aspect of the present disclosure, a method for operating a task in an electronic device is provided. The method includes generating at least one task on a protocol layer basis based on a work to be processed, executing the at least one task generated on the layer basis through at least one CPU, determining whether a workload to be processed is changed, and changing, if the workload to be processed is changed, a workload of the at least one executing task.
In accordance with another aspect of the present disclosure, a device for operating a task in an electronic device is provided. The device includes an embedded system configured to generate at least one task on a protocol layer basis based on a work to be processed, to execute the at least one task generated on the layer basis through at least one CPU, to determine whether a workload to be processed is changed, and to change, if the workload to be processed is changed, a workload of the at least one executing task.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the present disclosure.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the present disclosure is provided for illustration purpose only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including, for example, tolerances, measurement error, measurement accuracy limitations, and other factors known to those skilled in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
In an electronic device according to various embodiments of the present disclosure, an embedded system that operates a task based on a varying workload and power efficiency will be described. Hereinafter, in an embodiment of the present disclosure, for convenience of description, a method for operating a task is exemplified based on the protocol layer structure of a Long Term Evolution Advanced (LTE-A) system. For example, in an LTE-A system, a component carrier corresponding to one transport channel supports downlink traffic of 100 Mbps and uplink traffic of 50 Mbps. However, the embodiments described hereinafter are not limited to the LTE-A system and may be applied in the same manner to another system having a plurality of protocol layer structures or another system having a plurality of transport channels. Further, in the following description, the method for operating a task may be applied to an embedded system that supports a single core or to an embedded system that supports multiple cores.
In an embodiment of the present disclosure, based on a workload that changes in real time in a single-core or multi-core embedded system, a new task may be generated and operated, at least two executing tasks may be combined into one task and operated, or one executing task may be divided into at least two tasks and operated. For example, when an ultra-high speed is needed, the embedded system may generate a plurality of tasks and process the plurality of tasks using at least one Central Processing Unit (CPU) core. In another example, when the workload of tasks operating independently or individually in different CPU cores is smaller than or equal to the workload that one CPU core can process in real time, the embedded system may combine the tasks operating in the respective CPU cores into one task and process the combined task using one CPU core. In another embodiment of the present disclosure, when the workload of one executing task exceeds the workload that one CPU core can process in real time, the embedded system may divide the corresponding task into at least two tasks and process the divided tasks in at least two CPU cores. The above-described examples of task generation, combination, and division are only some of the various embodiments of the present disclosure, and the scope of the present disclosure is not limited thereto.
Referring to
Hereinafter, it is assumed that a device has a protocol layer structure and an execution entity shown in
Referring to
As shown in
Further, when the Medium Access Control (MAC) layer has a characteristic that requires integrated single processing, one task 220 of the MAC layer may operate in the platform and operating system 200 of the embedded system. This assumes a situation in which the entire MAC layer is formed as one execution entity; when the entire MAC layer is formed as a plurality of execution entities, such that the MAC layer requires separate processing on a component carrier basis, a task may be operated in the embedded system on an execution entity basis of the MAC layer.
Further, because the Radio Link Control (RLC) layer and the Packet Data Convergence Protocol (PDCP) layer each have a characteristic that requires processing on a logical channel basis, a plurality of tasks 230 and 240 of the same property may operate in the platform and operating system 200 of the embedded system on a logical channel basis for each of the RLC layer and the PDCP layer. For example, in an embodiment of the present disclosure, a task may be designed and defined in advance for each of a plurality of logical channels in the RLC layer and the PDCP layer, and the embedded system generates a task for each logical channel in each of the RLC layer and the PDCP layer according to the design and definition. For example, when each of the RLC layer and the PDCP layer is formed with a first logical channel to a third logical channel, the system generates a first RLC layer task and a first PDCP layer task for the first logical channel, a second RLC layer task and a second PDCP layer task for the second logical channel, and a third RLC layer task and a third PDCP layer task for the third logical channel, thereby performing the processing of the RLC layer and the PDCP layer of the corresponding logical channel with each task.
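The per-logical-channel task generation described above may be pictured with a minimal C sketch; the create_task() helper, the printed output, and the channel count are illustrative assumptions rather than elements of the disclosure.

```c
#include <stdio.h>

#define NUM_LOGICAL_CHANNELS 3   /* first to third logical channel, as in the example */

enum layer { RLC, PDCP };

/* Hypothetical task-creation helper; the disclosure only states that tasks are
 * generated according to a previously designed definition for each layer. */
static void create_task(enum layer l, int logical_channel)
{
    printf("created %s layer task for logical channel %d\n",
           l == RLC ? "RLC" : "PDCP", logical_channel);
}

int main(void)
{
    /* One RLC layer task and one PDCP layer task are generated per logical channel. */
    for (int ch = 1; ch <= NUM_LOGICAL_CHANNELS; ch++) {
        create_task(RLC, ch);
        create_task(PDCP, ch);
    }
    return 0;
}
```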
In another embodiment of the present disclosure, based on a characteristic that data corresponding to a specific component carrier should be processed through each logical channel in the RLC layer and the PDCP layer, a task may be operated for each pair of a logical channel and a component carrier. For example, when the first logical channel should process data corresponding to a first component carrier and a second component carrier, the system may generate and operate a first RLC layer task and a first PDCP layer task for the pair of the first logical channel and the first component carrier, and generate and operate a second RLC layer task and a second PDCP layer task for the pair of the first logical channel and the second component carrier.
In an LTE-A protocol layer structure, the tasks of the respective layers may be performed virtually simultaneously. For example, a downlink packet or data is processed in the order of the Physical (PHY) layer, the MAC layer, the RLC layer, and the PDCP layer, and an uplink packet or data is processed in the reverse order; thus, it may appear that the task of each layer is performed sequentially. In an actual operation, however, packets or data arrive and must be processed without interruption at every moment, and thus each layer should be able to perform its corresponding work virtually simultaneously, like a pipeline operation.
The foregoing method for operating a task does not consider restrictions of hardware resources (e.g., CPUs), and a task may also be operated with a method shown in
Referring to
Referring to
Referring to
In an embodiment of the present disclosure, as described above, a task may be operated based on a protocol layer structure and a workload. In addition, in an embedded system, a task may be generated, an executing task may be divided into at least two tasks, or at least two executing tasks may be combined into one task according to changes in the workload that each task should process.
For example, in a situation in which the amount of data to be processed in one logical channel varies from 0 Mbps to 1 Gbps, the RLC layer task and the PDCP layer task of the corresponding logical channel should be able to process data from 0 Mbps to 1 Gbps according to the situation. In such a situation, methods for operating a task by generating, dividing, or combining the task may be, for example, as follows. Here, for convenience of description, the task generation, division, and combination methods are described using the example of an RLC layer task for a logical channel. However, the following description is not limited thereto, and the same methods may be applied to a PDCP layer task of a logical channel, a PHY layer task of a component carrier, RLC layer and PDCP layer tasks of a logical channel and component carrier pair, and a MAC layer task.
In an embedded system, at least one RLC layer task corresponding to each logical channel may be generated according to the processing workload of that logical channel. For example, if the initial workload of a specific logical channel is smaller than 100 Mbps, the system may generate one RLC layer task for the logical channel and process the generated task in one CPU. In another example, if the initial workload of a specific logical channel is about 1 Gbps, the system may generate a plurality of RLC layer tasks for the logical channel and have the generated RLC layer tasks processed through one CPU or a plurality of CPUs.
The embedded system may divide one RLC layer task into at least two RLC layer tasks and operate them according to the varying workload of a specific logical channel. For example, if the initial workload of a specific logical channel is smaller than 100 Mbps, the system may process the RLC layer task of the logical channel through one CPU; when the workload of the logical channel increases to 1 Gbps over time, a workload of 1 Gbps cannot be processed with one CPU, and thus the RLC layer task of the logical channel may be divided into a plurality of tasks (e.g., four tasks), and the CPU to process each of the divided RLC layer tasks may be determined based on the use rates or states of the plurality of CPUs. In this case, the number of divided tasks may be determined based on the workload to be processed and the number of usable CPUs, and the number of usable CPUs may be determined according to the current use rate, the remaining effective throughput, and the state (e.g., an idle state, an active state, and the like) of each CPU. Here, task division may be performed by dividing the work content of the executing basic task into a plurality of work contents, resetting a portion of the divided work content as the work content of the basic task, and generating at least one task that performs the remaining divided work.
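The division decision in this example may be sketched roughly as follows; the per-CPU capacity of 250 Mbps, the cpu_state fields, and the helper names are assumptions for illustration (with 250 Mbps per core, a 1 Gbps workload yields the four divided tasks mentioned above).

```c
#include <stddef.h>

#define PER_CPU_MBPS 250u   /* assumed real-time capacity of one CPU core */
#define NUM_CPUS     4u     /* assumed number of CPU cores */

/* Hypothetical CPU descriptor; the disclosure names only the use rate, the
 * remaining effective throughput, and the idle/active state as inputs. */
struct cpu_state {
    unsigned use_rate_pct;    /* current use rate in percent */
    unsigned remaining_mbps;  /* remaining effective throughput */
    int      idle;            /* nonzero if the CPU is in an idle state */
};

static int cpu_is_usable(const struct cpu_state *c)
{
    return c->idle || c->remaining_mbps >= PER_CPU_MBPS;
}

/* Number of divided tasks: ceil(workload / per-CPU capacity), capped by the
 * number of CPUs that can currently take on work, and never less than one. */
static unsigned division_count(unsigned workload_mbps,
                               const struct cpu_state cpus[NUM_CPUS])
{
    unsigned needed = (workload_mbps + PER_CPU_MBPS - 1u) / PER_CPU_MBPS;
    unsigned usable = 0u;

    for (size_t i = 0; i < NUM_CPUS; i++)
        if (cpu_is_usable(&cpus[i]))
            usable++;

    if (needed == 0u)
        needed = 1u;
    if (usable == 0u)
        usable = 1u;
    return needed < usable ? needed : usable;
}
```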
In the embedded system, corresponding RLC layer tasks may be combined into one task and operated according to the varying workload of a specific logical channel. For example, when the initial workload of a specific logical channel is 1 Gbps, the system may process a plurality of RLC layer tasks of the logical channel through a plurality of CPUs; when the workload of the logical channel is reduced to 100 Mbps or less over time, a workload of 100 Mbps may be processed with one CPU, and thus the corresponding plurality of RLC layer tasks may be combined into one, and the combined RLC layer task may be processed through one CPU. In this case, task switching and message exchange between tasks are decreased, and thus efficiency can be improved. Here, task combination may be performed by removing all but one of the at least two tasks to be combined and resetting the work content of the task that is not removed.
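Under the same assumptions, the opposite check for task combination might look like the sketch below, in which the plurality of RLC layer tasks of a logical channel are folded back into a single task once their total workload fits within one CPU's real-time capacity.

```c
#include <stddef.h>

#define PER_CPU_MBPS 250u   /* assumed real-time capacity of one CPU core, as above */

/* Hypothetical task record; only the fields needed for the check are shown. */
struct rlc_task {
    unsigned workload_mbps;
    int      active;
};

/* Combine all active RLC layer tasks of a logical channel into tasks[0] when
 * their total workload can be handled in real time by a single CPU. */
static int try_combine(struct rlc_task tasks[], size_t n)
{
    unsigned total = 0u;
    size_t i;

    for (i = 0; i < n; i++)
        if (tasks[i].active)
            total += tasks[i].workload_mbps;

    if (total > PER_CPU_MBPS)
        return 0;                       /* still needs more than one CPU */

    tasks[0].workload_mbps = total;     /* reset the work content of the surviving task */
    tasks[0].active = 1;
    for (i = 1; i < n; i++)
        tasks[i].active = 0;            /* remove the remaining tasks */
    return 1;
}
```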
Referring to
The task control center 600 includes a task definition for each layer of the protocol. For example, the task control center 600 may include a PHY task definition 604 representing a method, rule, or condition for generating a task of the PHY layer, a MAC task definition 603 representing a method, rule, or condition for generating a task of the MAC layer, an RLC task definition 602 representing a method, rule, or condition for generating a task of the RLC layer, and a PDCP task definition 601 representing a method, rule, or condition for generating a task of the PDCP layer. In this case, the task definition of each layer may have a correlation with the task definition of another layer. For example, the RLC task definition 602 and the PDCP task definition 601 may define a basic work task covering both the RLC layer and the PDCP layer. Further, the task definition of each layer may define a basic work task for a scope smaller than the entire layer.
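Although the disclosure describes the per-layer task definitions only functionally, one possible way to picture them in code is the following C sketch; all field and enumerator names are hypothetical.

```c
/* Protocol layers for which the task control center keeps a definition. */
enum protocol_layer { LAYER_PHY, LAYER_MAC, LAYER_RLC, LAYER_PDCP };

/* Granularity at which tasks of a layer are generated. */
enum task_granularity {
    PER_COMPONENT_CARRIER,    /* e.g., PHY layer tasks */
    PER_LAYER,                /* e.g., a single integrated MAC layer task */
    PER_LOGICAL_CHANNEL,      /* e.g., RLC/PDCP layer tasks */
    PER_CHANNEL_CARRIER_PAIR  /* e.g., RLC/PDCP tasks per (channel, carrier) pair */
};

/* Hypothetical task definition: the rule or condition for generating tasks. */
struct task_definition {
    enum protocol_layer   layer;
    enum task_granularity granularity;
    unsigned              max_mbps_per_task;  /* threshold used when dividing */
    void (*entry_point)(void *arg);           /* work function of the task */
};
```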
The task control center 600 performs control for generating, dividing, or combining a task according to a varying workload, based on the task definition of each layer, using the task controller 610, the task monitoring unit 620, the task workload estimation unit 630, and the task state DB 640.
More specifically, the task controller 610 performs a control function for generating, dividing, or combining a task of each layer based on the monitoring result of the task monitoring unit 620 and the estimation result of the task workload estimation unit 630. The task controller 610 may generate and control a task by allocating an operation memory for each task based on the task definition, i.e., the task frame, and forming a data area corresponding to the needed work.
The task monitoring unit 620 monitors the operation situation of the tasks and the operation situation of the CPUs in real time or periodically and provides the monitoring result to the task controller 610. For example, the task monitoring unit 620 may identify the tasks executing in each layer and the tasks processed in each CPU.
The task workload estimation unit 630 estimates the change in the workload of each presently executing task based on the monitoring result of the task monitoring unit 620 and estimates the workload change of each CPU caused by the tasks processed in that CPU. Here, the task workload estimation unit 630 may estimate the workload change in cooperation with tasks related to the modem hardware and protocol operation.
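The disclosure does not specify how this estimation is computed; one hedged possibility is a simple smoothed average of the throughput observed for each task, as in the sketch below (the smoothing factor of 1/8 is an assumption).

```c
/* Hypothetical workload estimator: an exponentially weighted moving average of
 * the throughput observed for one task, updated from each monitoring report. */
#define SMOOTHING_DEN 8u   /* assumed smoothing factor of 1/8 */

struct workload_estimate {
    unsigned avg_mbps;     /* smoothed workload estimate in Mbps */
};

static void update_estimate(struct workload_estimate *e, unsigned observed_mbps)
{
    if (observed_mbps >= e->avg_mbps)
        e->avg_mbps += (observed_mbps - e->avg_mbps) / SMOOTHING_DEN;
    else
        e->avg_mbps -= (e->avg_mbps - observed_mbps) / SMOOTHING_DEN;
}
```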
The task state DB 640 stores and manages state information of each task. For example, the task state DB 640 may store, for each task, an estimated workload, a work content, a processing CPU, task identification information, corresponding logical channel identification information, or corresponding component carrier identification information.
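A record of the task state DB 640 could then be represented roughly as in the following sketch, mirroring the items listed above; the struct layout and field names are assumptions.

```c
enum protocol_layer { LAYER_PHY, LAYER_MAC, LAYER_RLC, LAYER_PDCP };

/* Hypothetical per-task record of the task state DB 640. */
struct task_state_entry {
    unsigned            task_id;                  /* task identification information */
    unsigned            logical_channel_id;       /* corresponding logical channel */
    unsigned            component_carrier_id;     /* corresponding component carrier */
    unsigned            processing_cpu;           /* CPU currently processing the task */
    unsigned            estimated_workload_mbps;  /* estimated workload */
    enum protocol_layer layer;                    /* part of the work content */
    void               *work_context;             /* layer-specific work content */
};
```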
As described above, the task control center 600 may exist as a separate task for task generation, division, and combination, or may be included in the definition of a basic work task. For example, the function of the task control center may be included as a function of a root task serving as the base, and the root task may generate a new task or divide or combine an executing task based on the workload and the CPUs available for processing.
First, at least one application operates in the electronic device, and thus one logical channel L0 for data communication exists in the task control center, and data traffic of several Mbps (e.g., 1 Mbps) is requested in the logical channel L0. In this case, it is assumed that one component carrier CC0 is needed.
Referring to
Hereinafter, while operating a task, as shown in
The task control center 750 may reset the RLC layer task 730 of the logical channel L0 generated in
Further, the task control center 750 may allocate an RLC layer task 732 and PDCP layer tasks 742 and 744 of L2 to the other CPUs 700-2 to 700-4, which are in an idle state, in consideration that the tasks of L2 need traffic processing of 1 Gbps. When the CPU4 700-4 must perform other work and therefore cannot perform the 1 Gbps traffic processing that L2 needs, the task control center 750 may divide the RLC layer task 732 of L2 allocated to the CPU4 700-4 into a plurality of tasks and allocate the divided partial tasks to other CPUs.
In such a situation, the task control center 750 continues to monitor the use rate of the CPU1 700-1; when the use rate of the CPU1 700-1 remains low enough to process the tasks 730 and 740 of the RLC and PDCP layers of L0 and L1, the task control center 750 may maintain the operation state of the tasks 730 and 740 of the RLC and PDCP layers of L0 and L1. In contrast, when the use rate of the CPU1 700-1 exceeds 80% and changes to a state that is not sufficient to process the tasks 730 and 740 of the RLC and PDCP layers of L0 and L1, the task control center 750 may divide the tasks 730 and 740 of the RLC and PDCP layers of L0 and L1 again.
Hereinafter, as shown in
In the foregoing embodiment of the present disclosure, a case in which a separate task is defined for each of the PDCP, RLC, MAC, and PHY layers has been described. However, when a task that integrates and processes all of the PDCP, RLC, MAC, and PHY layers is defined according to another embodiment of the present disclosure, the inefficiency caused by task switching and message exchange between tasks may be removed. For example, when a user terminates an interactive work and only a background work, such as e-mail synchronization, remains, performing the entire work with one task instead of maintaining a task for each layer avoids task switching and message exchange between tasks, and thus the efficiency of the system may be improved.
Referring to
The system determines whether the workload to be processed is changed in operation 803. For example, if traffic of 1 Mbps was requested within the system in operation 801, the system determines in operation 803 whether traffic of 1 Gbps is now requested within the system.
If it is determined in operation 803 that the workload to be processed is changed, the system determines whether generation, division, or combination of a task is needed in operation 805. For example, the system determines whether the workload and the CPU situation are changed or a new work occurs, and accordingly determines whether generation, division, or combination of a task is needed.
When a new work occurs within the system and thus the number of logical channels increases, the system may determine that task generation is needed. In another example, when the traffic requested in a specific logical channel within the system changes to a threshold value or more, the system may determine that task division is needed. In another example, when the traffic requested in a specific logical channel within the system changes to a threshold value or less, the system may determine that task combination is needed.
If it is determined in operation 805 that task generation is needed, the system may additionally generate a task based on the workload and the CPU situation in operation 807. In this case, a task may be additionally generated for each layer or for a specific layer only.
If it is determined in operation 805 that task division is needed, the system may divide a presently executing task into a plurality of tasks based on the workload and the CPU situation in operation 809. In this case, task division may be performed by dividing the workload to be processed into a plurality of workloads, resetting the workload of the presently executing task to a portion of the divided workloads, and generating at least one task that performs the remaining divided workloads.
If it is determined in operation 805 that task combination is needed, the system may combine a plurality of executing tasks into one task based on the workload and the CPU situation in operation 811. In this case, task combination may be performed by resetting one task of the presently executing plurality of tasks so that it performs the entire workload and removing the other tasks that are not reset.
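Operations 803 through 811 may be summarized, purely as an illustrative sketch, by a decision helper such as the following; the use of one CPU's real-time capacity as the threshold and all names are assumptions rather than elements of the disclosure.

```c
enum task_action { TASK_KEEP, TASK_GENERATE, TASK_DIVIDE, TASK_COMBINE };

/* Decide what to do when the monitored workload of a logical channel changes
 * from old_mbps to new_mbps, or when a new logical channel appears. */
static enum task_action decide_action(unsigned old_mbps, unsigned new_mbps,
                                      int new_logical_channel,
                                      unsigned per_cpu_mbps)
{
    if (new_logical_channel)
        return TASK_GENERATE;   /* operation 807: a new work occurred */
    if (new_mbps == old_mbps)
        return TASK_KEEP;       /* operation 803: workload unchanged */
    if (new_mbps > per_cpu_mbps && old_mbps <= per_cpu_mbps)
        return TASK_DIVIDE;     /* operation 809: workload exceeds one CPU */
    if (new_mbps <= per_cpu_mbps && old_mbps > per_cpu_mbps)
        return TASK_COMBINE;    /* operation 811: workload fits one CPU again */
    return TASK_KEEP;
}
```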
According to an embodiment of the present disclosure, in an embedded system of an electronic device, by generating, dividing, or combining a task and operating it based on a varying workload and power efficiency, the processing performance and power efficiency of the electronic device can be improved.
Certain aspects of the present disclosure can also be embodied as computer readable code on a non-transitory computer readable recording medium. A non-transitory computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the non-transitory computer readable recording medium include a Read Only Memory (ROM), a Random Access Memory (RAM), Compact Disc (CD)-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The non-transitory computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. In addition, functional programs, code, and code segments for accomplishing the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
At this point it should be noted that the various embodiments of the present disclosure as described above typically involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software in combination with hardware. For example, specific electronic components may be employed in a mobile device or similar or related circuitry for implementing the functions associated with the various embodiments of the present disclosure as described above. Alternatively, one or more processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more non-transitory processor readable mediums. Examples of the processor readable mediums include a ROM, a RAM, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The processor readable mediums can also be distributed over network coupled computer systems so that the instructions are stored and executed in a distributed fashion. In addition, functional computer programs, instructions, and instruction segments for accomplishing the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
While the present disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
10-2013-0038275 | Apr 2013 | KR | national |