ALLOCATION OF HETEROGENEOUS COMPUTATIONAL RESOURCE

Information

  • Patent Application
  • Publication Number
    20220342711
  • Date Filed
    April 23, 2021
  • Date Published
    October 27, 2022
Abstract
Allocation of computational resource to requested tasks is achieved by running a scheduling operation across a plurality of schedulers, each in communication with a subset of network entities, the schedulers establishing a virtual bus. In certain embodiments, the scheduling operation is able to run continuously, allocating newly arriving task requests as resources become available.
Description
FIELD

The present disclosure is in the field of computing. Specifically, it relates to the allocation of tasks to computational resource in a heterogeneous computer network.





DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic diagram of a general arrangement of computers in accordance with an embodiment;



FIG. 2 is a schematic architecture of a user computer of FIG. 1;



FIG. 3 is a schematic architecture of a scheduler of FIG. 1;



FIG. 4 is a schematic architecture of a task computer of FIG. 1;



FIG. 5 is a swim-lane flow diagram of a procedure of allocating and executing tasks in accordance with an embodiment; and



FIG. 6 is a flow diagram for a procedure for scheduling tasks to resources in the procedure of FIG. 5.





DESCRIPTION OF EMBODIMENTS

Cloud computing is a term used to denote any form of computing making use of a remote computing resource. The remote computing resource may comprise one computer, or a plurality of computers networked with each other.


Parallel processing and edge computing are two frameworks which enable computing tasks to be performed in an efficient manner. Parallel processing provides the ability to process multiple tasks at the same time using a number of computing processors, while edge computing allows computing tasks to run using available local resources rather than a centralised cloud.


Combining parallel processing with edge computing offers an opportunity to implement fast and massive local data processing, which can bring benefits such as low processing latency, reduced data traffic in a core communication network, and enhanced data privacy.


One important issue is how to efficiently allocate available local computational resources to meet demand. One approach is to allocate tasks to local computational resources at random. While this approach is simple to implement, it exhibits low efficiency and provides no quality of service (QoS) guarantee. On the other hand, allocating tasks to computational resources using a centralised, optimisation-based scheduling method will usually be complicated, and can involve operational overhead which can easily outweigh the benefit of a more considered approach to task/resource allocation.


Embodiments disclosed herein implement a matching algorithm to achieve an effective allocation of local computational resources to meet user requirements. Embodiments use a virtual entity for collecting the user requirements, establishing preference lists, and conducting a ‘proposing’ process. The result is a matching process with advantageously reduced complexity.


In disclosed embodiments, matching is enabled by a virtual entity, for example, a software program coordinating available computing resources, which may be implemented in a distributed manner across multiple nodes using a virtual data bus.


The virtual scheduler may act as if it were centrally deployed, but with the advantage of not requiring a single scheduling point which could present a bottleneck and critical point of failure.


The virtual entity may be considered as a plurality of interconnected schedulers interposed between user computers demanding computational resources, and edge computers which provide computational resources to meet demand.


In an embodiment, the virtual entity, comprising interconnected schedulers, collects the task processing requirements of the users separately, and constructs preference lists for the users. Thus, in practical implementations, the task processing requirements may be collected by interconnected schedulers and information gathered on such task processing requirements may be synchronised between the interconnected schedulers using a virtual data bus as noted above. A matching process is then run on the synchronised information, to establish the allocation of the computing tasks (users) to the computational resources. Allocation of a task to a computing resource is established through the scheduler (which may be considered as a signalling and control plane) and the actual computing tasks are performed by the computing units (which may be considered as a data plane).
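By way of illustration only, the following sketch (in Python, with hypothetical names; the disclosure prescribes no particular language, data model or interface) shows how each scheduler might record locally collected task requirements and merge them over the virtual data bus, so that every scheduler comes to hold the same global view.

```python
# Illustrative sketch only: the TaskRequirement fields and the synchronise()
# interface are assumptions, not part of the disclosure.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskRequirement:
    user_id: str            # requesting user computer (ED)
    cpu_cycles: float       # processing demand of the task
    max_latency_ms: float   # QoS requirement

@dataclass
class Scheduler:
    local: dict[str, TaskRequirement] = field(default_factory=dict)
    global_view: dict[str, TaskRequirement] = field(default_factory=dict)

    def collect(self, req: TaskRequirement) -> None:
        """Record a requirement received from a locally attached user."""
        self.local[req.user_id] = req

def synchronise(schedulers: list[Scheduler]) -> None:
    """Merge all local views over the virtual bus; afterwards every scheduler
    holds identical information, as if it were a centrally deployed entity."""
    merged: dict[str, TaskRequirement] = {}
    for s in schedulers:
        merged.update(s.local)
    for s in schedulers:
        s.global_view = dict(merged)
```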


As FIG. 1 shows, a plurality of user computers 100 are networked with a plurality of task computers 300, and active network links are made between the user computers 100 and the task computers 300 by schedulers 200 by way of a matching process to be described below. A user computer 100 that requires a task to be performed by a task computer 300 can obtain access to computational resource from task computers 300 on the basis of the matching process performed at the schedulers 200.


Each of the illustrated computers is a general purpose computer of conventional construction. The computers host executable program instructions to enable the performance of the matching process, to cause allocation at the schedulers 200 of tasks requested by a user computer 100 to be performed by a task computer 300.


So, as shown in FIG. 2, the user computer 100 comprises a processor 110, a memory 120 for short term retention of processing data, and storage 130 for long term (but potentially slow retrieval) storage of program, processing and informational data. In conventional implementations, the storage 130 may comprise a magnetic drive, or a solid-state storage device. An audio-visual output facility 140 is provided, such as a visual display unit and speakers, for display and audio output to a user, and a user input device 150, such as a keyboard, pointing device (e.g. mouse) and microphone are provided to enable user input action to the user computer 100. A communications facility 160 provides connection to other computers, such as by Ethernet connection.


The user computer 100 stores and executes a user computer task execution request program, which enables the user computer 100 to request a task to be performed by a task computer, in accordance with a matching process executed at the schedulers 200.


The task execution request program can be introduced to the user computer 100 by way of a download computer program product, or a storage medium computer program product; this is implementation specific.


As shown in FIG. 3, each scheduler 200 comprises a processor 210, a memory 220 for short term retention of processing data, and storage 230 for long term (but potentially slow retrieval) storage of program, processing and informational data. In conventional implementations, the storage 230 may comprise a magnetic drive, or a solid-state storage device. An audio-visual output facility 240 is provided, such as a visual display unit and speakers, for display and audio output to a user, and a user input device 250, such as a keyboard, pointing device (e.g. mouse) and microphone are provided to enable user input action to the scheduler 200. A communications facility 260 provides connection to other computers, such as by Ethernet connection.


Each scheduler 200 stores and executes a scheduler program, which enables the scheduler 200 to manage requests issued by user computers 100 and to match them with computing facilities offered by task computers 300. The scheduler program instances are cooperative, to produce a virtual bus between the schedulers 200, so that a common decision can be taken on scheduling. The scheduler program can be introduced to the scheduler 200 by way of a download computer program product, or a storage medium computer program product; this is implementation specific.


As shown in FIG. 4, each task computer 300 comprises a processor 310, a memory 320 for short term retention of processing data, and storage 330 for long term (but potentially slow retrieval) storage of program, processing and informational data. In conventional implementations, the storage 330 may comprise a magnetic drive, or a solid-state storage device. An audio-visual output facility 340 is provided, such as a visual display unit and speakers, for display and audio output to a user, and a user input device 350, such as a keyboard, pointing device (e.g. mouse) and microphone are provided to enable user input action to the task computer 300. A communications facility 360 provides connection to other computers, such as by Ethernet connection.


As the task computers 300 may be edge computing devices, in that they provide an interface between local computation facilities and more distributed computation facilities, other forms of computer connections may also be provided, such as internet access to wider networks such as the World Wide Web. This may enable a task computer 300, as the need arises, to have recourse to other computation facilities for the performance of requested tasks.


Each task computer 300 stores and executes a task computer program, which enables the task computer 300 to offer task computation facilities to the scheduler 200 and thence to the user computers 100. This program can be introduced to the task computer 300 by way of a download computer program product, or a storage medium computer program product; this is implementation specific.



FIG. 5 illustrates a process of allocating resources in the illustrated system. This is depicted as a swim-lane flow diagram illustrating interaction between a nominal user computer 100, a scheduler 200 and a task computer 300. A first step S1-2 comprises collecting user requirements along with an access request; these provide the fundamental reference for establishing preferences in the choice of resources.


Then, a resource allocation step S1-4 establishes a match between requested tasks and computation facilities offered by task computers 300. A principle embodied in the described allocation method is that it requires only a limited number of look-up operations. This avoids fruitless proposing and checking of proposals, improving the time efficiency of the matching process. By using a virtual scheduling entity embodied by the plurality of schedulers, the matching process is synchronised in real time across the network, meaning that there is no delay from repeated information exchange between the matching nodes.


Once task allocation has been completed, in step S1-6, the allocated tasks are performed by allocated task computers (edge nodes, EN).



FIG. 6 illustrates a flow chart of the matching method of this embodiment using a virtual scheduler distributed across nodes using a data bus.


The matching method follows the operational procedure described below. Firstly, a preference list (PL) is established for every entity, namely the user end devices (ED) and the task computers, or edge nodes (EN). The PL is a ranking created for each entity on one side to represent its preference among the entities on the other side. The preference can be based on a variety of technical and non-technical parameters in combination. For example, the user's QoS requirement may be a factor, or the service price charged by the resource provider (the operator of the task computer). For example, one user may prefer the edge computer that has the fastest processing capacity, or the one with the lowest price tag. The decentralised schedulers 200 collect the user information for each user computer (ED) 100, generate the PL for the user computer (ED) 100, and synchronise among each other to obtain an overview of the network, as if the schedulers 200 were in fact a centrally deployed entity.
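Purely as an illustration of how such a ranking might be computed, the following sketch builds a PL by sorting candidate edge nodes on a weighted score of processing capacity and price; the attribute names and weights are assumptions, as the disclosure requires only that some preference criterion exists.

```python
# Hypothetical scoring: capacity is preferred, price is penalised.
edge_nodes = {
    "EN1": {"capacity_gflops": 50.0, "price": 3.0},
    "EN2": {"capacity_gflops": 80.0, "price": 5.0},
}

def build_preference_list(candidates, score):
    """Rank candidate IDs from most to least preferred."""
    return sorted(candidates, key=score, reverse=True)

# A user that values 1 GFLOPS as much as 0.1 units of price:
user_pl = build_preference_list(
    edge_nodes,
    score=lambda en: edge_nodes[en]["capacity_gflops"]
                     - 10.0 * edge_nodes[en]["price"],
)
# user_pl == ["EN2", "EN1"]: EN2 scores 30.0 against EN1's 20.0.
```

PLs for the resource side can be generated in the same way, with each edge node scoring the requesting users against its own criteria.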


The schedulers 200 collect the requirement and preference criteria and generate PLs for the entities at the resource side, namely the task computers (EN) 300, as well.


All schedulers 200 have the same information as to the membership of all of the PLs. Every scheduler 200 can then conduct the matching process individually, without communicating with other schedulers (which could result in massive signalling overhead). The outcome of the matching will be the same throughout the whole network.


In a second stage of the process, for an unmatched resource ENj, the scheduler 200 checks the first entity on its PL, PLj. The first entity, EDi, indicates the most preferable user/task for that particular resource. Then, the scheduler looks into the PLs of the corresponding users at the demand side and takes the following action.


For an unmatched user computer EDi on the preference list PLj, the scheduler 200 examines the first task computer (resource) in the PL for that user computer, PLi. In other words, the user computers propose to their favourite resources.


As illustrated, if the PL for EDi does not have the originating task computer ENj on it, then EDi is removed from the preference list PLj for ENj. This provides computational gain in that there is no point in ENj retaining any form of preference for EDi if the preference has no reciprocation at EDi.


Then, a candidate match is made between EDi and the top entity in its PL, PLi. This, for the purpose of this description, is named ENk.


It is possible that the most favoured task computer ENk for EDi is in fact ENj. That is, this is a perfect match, wherein the most favoured task computer for EDi does in fact also most favour user computer EDi.


No other match will be preferred by either device, and so the matching is established and is deemed fixed. The scheduler will authorise the allocation to the user and resource and will then remove the matched entities from all other entities' PLs.


As the procedure is iterative, there will be temporary matches as a result of earlier iterations. These temporary matches are made with the knowledge that better matches may arise through iteration, before a match is considered sufficiently optimal that it can be fixed.


If the resource ENk has been temporarily matched to another user EDz, the scheduler compares the potential user EDi with the existing matched user EDz. If the resource prefers the potential user (that is, EDi has a higher rank in PLk), the scheduler 200 updates the matching: it temporarily matches the resource ENk to the new user EDi, and deletes the resource from the preference list PLz of the previously temporarily matched user EDz. Otherwise the existing matching (ENk, EDz) remains.


If the resource ENk is unmatched, the resource will be temporarily matched to the user EDi.


A round of matching is completed when the scheduler has considered all unmatched users once. A new round starts if there are still unmatched users.


During the matching process, whenever a fixed matching is established, the scheduler checks the temporarily matched pairs. Because fixing a matching means that users and resources will be removed from the respective PLs, temporarily matched pairs can now become fixed, as the temporarily matched entities become each other's most favoured choice among the remaining entities.


The process terminates when no unmatched user exists. When the process terminates, any remaining temporary matching is considered fixed, and the scheduler authorises those connections.
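The sketch below renders the procedure of FIG. 6 as user-side proposing on the synchronised PLs. It is one possible reading of the steps described above, not a definitive implementation; the function and variable names (match, ed_pl, en_pl) are assumptions. Because the procedure is deterministic, every scheduler running it on identical PLs obtains the identical result.

```python
def match(ed_pl: dict[str, list[str]],
          en_pl: dict[str, list[str]]) -> dict[str, str]:
    """One reading of the matching procedure. ed_pl/en_pl map each entity to
    the other side's IDs, most preferred first; both are mutated in place.
    Returns a map of EN -> ED."""
    fixed: dict[str, str] = {}   # final matches
    temp: dict[str, str] = {}    # provisional matches, revisable

    def remove_everywhere(ed: str, en: str) -> None:
        # A fixed pair leaves the market: drop both from all remaining PLs.
        for pl in ed_pl.values():
            if en in pl:
                pl.remove(en)
        for pl in en_pl.values():
            if ed in pl:
                pl.remove(ed)

    def promote_mutual_tops() -> None:
        # After a fixing, provisional pairs that have become each other's top
        # remaining choice are themselves fixed (possibly in a cascade).
        changed = True
        while changed:
            changed = False
            for en, ed in list(temp.items()):
                if ed_pl[ed][:1] == [en] and en_pl[en][:1] == [ed]:
                    del temp[en]
                    fixed[en] = ed
                    remove_everywhere(ed, en)
                    changed = True

    while True:
        taken = set(fixed.values()) | set(temp.values())
        movers = [ed for ed in ed_pl if ed not in taken and ed_pl[ed]]
        if not movers:
            break                             # no unmatched user can still propose
        for ed in movers:
            if not ed_pl[ed] or ed in temp.values():
                continue                      # matched or exhausted meanwhile
            en_k = ed_pl[ed][0]               # the user's favourite remaining EN
            if ed not in en_pl[en_k]:
                ed_pl[ed].remove(en_k)        # no reciprocation: prune and move on
            elif en_pl[en_k][0] == ed:        # mutual first choice: fix the match
                temp.pop(en_k, None)          # any provisional holder is freed
                fixed[en_k] = ed
                remove_everywhere(ed, en_k)
                promote_mutual_tops()
            elif en_k not in temp:            # resource free: hold provisionally
                temp[en_k] = ed
            else:                             # resource held: keep preferred user
                ed_z = temp[en_k]
                if en_pl[en_k].index(ed) < en_pl[en_k].index(ed_z):
                    temp[en_k] = ed
                    ed_pl[ed_z].remove(en_k)
                else:
                    ed_pl[ed].remove(en_k)

    fixed.update(temp)  # on termination, provisional matches are deemed fixed
    return fixed
```

For example, with ed_pl = {"ED1": ["EN1", "EN2"], "ED2": ["EN1"]} and en_pl = {"EN1": ["ED2", "ED1"], "EN2": ["ED1"]}, ED1's provisional hold on EN1 is displaced by the mutual first choice (EN1, ED2), and the returned allocation is {"EN1": "ED2", "EN2": "ED1"}.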


After the matching, every scheduler can individually establish the data processing requests for which it is responsible with the allocated resources. No conflict is to be expected, as the allocation is based on the same matching process at each scheduler, coordinated as if it were one virtual entity.


This approach is able to ensure that the allocation result is stable, meaning that once the resources are allocated to tasks, the allocation can be maintained for the whole duration of the processing period. No user (or resource) has an incentive to change to a different resource (or user) for better utility (computing operation performance, measured by, for example, task processing delay), so each remains in the same allocation until its tasks are processed. By the described method, the resource is also allocated in an optimal way, such that every computing task/user obtains the best possible computational resource available in the shared situation to process its requirement: utility cannot be increased by a different resource allocation without reducing the utility of other entities.
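As a hypothetical way of verifying that stability property on a completed allocation, the following check searches for a ‘blocking pair’, that is, a user and a resource that both prefer each other to their assigned partners; the function name and the use of the original (unpruned) PLs are assumptions.

```python
def is_stable(matching: dict[str, str],
              ed_pl: dict[str, list[str]],
              en_pl: dict[str, list[str]]) -> bool:
    """matching maps EN -> ED; ed_pl/en_pl are the original preference lists."""
    en_of = {ed: en for en, ed in matching.items()}  # ED -> its matched EN

    def rank(pl: list[str], x) -> int:
        # Unmatched (x is None) or unlisted partners rank worst.
        return pl.index(x) if x in pl else len(pl)

    for en, pl_en in en_pl.items():
        for ed in pl_en:
            ed_prefers_en = rank(ed_pl[ed], en) < rank(ed_pl[ed], en_of.get(ed))
            en_prefers_ed = rank(pl_en, ed) < rank(pl_en, matching.get(en))
            if ed_prefers_en and en_prefers_ed:
                return False  # blocking pair: the allocation is not stable
    return True
```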


The implementation of a virtual entity of schedulers avoids the need for a centralised node to carry out the resource allocation. Moreover, the individual schedulers, acting on particular user requests, reduce the need to share information, such as ‘proposals’, ACKs and NAKs, among the users/schedulers during the matching negotiation process. Information exchange only happens in the initial stage of the process, when establishing the PLs.


Embodiments are described in the context of implementation of executable software being executed on suitably configured hardware. However, the reader will appreciate that other implementations are possible, including greater or complete reliance on application specific hardware devices. Further, while a software implementation may be achieved by execution of a software product on a hardware device, the reader will appreciate that a more distributed approach may be achievable, including obtaining software functionality from a remote location such as a “cloud based” service.


Where reference is made to particular hardware components, the reader will appreciate that they may be implemented in a different arrangement without altering the general principles conveyed by the present disclosure.


The embodiments described above are intended to be indicative, and are not limiting on the scope of protection sought, which should be determined on the basis of the claims appended hereto.

Claims
  • 1. A method of matching tasks requested by requesting computers, to be performed by processing computers, the method comprising: establishing a priority list for each requesting computer, each priority list being a list of available processing computers in an order of preference for the respective requesting computer; establishing a priority list for each available processing computer, each priority list being a list of requesting computers in an order of preference for the respective processing computer; for a first unmatched processing computer of the processing computers: identifying a first requesting computer, being the highest preference requesting computer on the priority list for the first processing computer; if the first requesting computer is not matched to a processing computer: if the first processing computer is not on the priority list for the first requesting computer, then removing the first requesting computer from the priority list for the first processing computer; if the first processing computer is on the priority list for the first requesting computer, then: matching the first requesting computer to a second processing computer, the second processing computer being the highest preference computer on the priority list associated with the first requesting computer; if the second processing computer is the first processing computer, then fixing the match between the first requesting computer and the first processing computer; else, if the second processing computer is unmatched, temporarily matching the second processing computer with the first requesting computer, while, if the second processing computer is already matched to a second requesting computer, then, if the first requesting computer has higher preference on the priority list for the second processing computer than the second requesting computer, temporarily matching the second processing computer to the first requesting computer and removing the second processing computer from the priority list for the second requesting computer; and if the preceding steps result in a new fixed matching, then revising the priority lists for all unmatched computers and reiterating the preceding steps until all computers are matched.
  • 2. A method in accordance with claim 1 wherein the method is performed at a scheduling entity comprising a plurality of schedulers, each scheduler being associated with one of the requesting computers and one of the processing computers, the schedulers establishing a virtual bus between the schedulers to cause formation of the scheduling entity.
  • 3. A computer network comprising a plurality of requesting computers configured to request tasks to be performed, a plurality of processing computers configured to perform requested tasks, and a scheduling entity in communication with each of the requesting computers and each of the processing computers, the scheduling entity being configured to: establish a priority list for each requesting computer, each priority list being a list of available processing computers in an order of preference for the respective requesting computer; establish a priority list for each available processing computer, each priority list being a list of requesting computers in an order of preference for the respective processing computer; for a first unmatched processing computer of the processing computers: identify a first requesting computer, being the highest preference requesting computer on the priority list for the first processing computer; if the first requesting computer is not matched to a processing computer: if the first processing computer is not on the priority list for the first requesting computer, then remove the first requesting computer from the priority list for the first processing computer; if the first processing computer is on the priority list for the first requesting computer, then: match the first requesting computer to a second processing computer, the second processing computer being the highest preference computer on the priority list associated with the first requesting computer; if the second processing computer is the first processing computer, then fix the match between the first requesting computer and the first processing computer; else, if the second processing computer is unmatched, temporarily match the second processing computer with the first requesting computer, while, if the second processing computer is already matched to a second requesting computer, then, if the first requesting computer has higher preference on the priority list for the second processing computer than the second requesting computer, temporarily match the second processing computer to the first requesting computer and remove the second processing computer from the priority list for the second requesting computer; and if the preceding functionalities result in a new fixed matching, then to revise the priority lists for all unmatched computers and reiterate the preceding functionalities until all computers are matched.
  • 4. A scheduling entity for scheduling tasks, each task being requested by one of a plurality of requesting computers, each task to be performed by one of a plurality of processing computers, the scheduling entity being configured to: establish a priority list for each requesting computer, each priority list being a list of available processing computers in an order of preference for the respective requesting computer; establish a priority list for each available processing computer, each priority list being a list of requesting computers in an order of preference for the respective processing computer; for a first unmatched processing computer of the processing computers: identify a first requesting computer, being the highest preference requesting computer on the priority list for the first processing computer; if the first requesting computer is not matched to a processing computer: if the first processing computer is not on the priority list for the first requesting computer, then remove the first requesting computer from the priority list for the first processing computer; if the first processing computer is on the priority list for the first requesting computer, then: match the first requesting computer to a second processing computer, the second processing computer being the highest preference computer on the priority list associated with the first requesting computer; if the second processing computer is the first processing computer, then fix the match between the first requesting computer and the first processing computer; else, if the second processing computer is unmatched, temporarily match the second processing computer with the first requesting computer, while, if the second processing computer is already matched to a second requesting computer, then, if the first requesting computer has higher preference on the priority list for the second processing computer than the second requesting computer, temporarily match the second processing computer to the first requesting computer and remove the second processing computer from the priority list for the second requesting computer; and if the preceding functionalities result in a new fixed matching, then to revise the priority lists for all unmatched computers and reiterate the preceding functionalities until all computers are matched.
  • 5. A scheduling entity in accordance with claim 4, the scheduling entity comprising a plurality of schedulers, each scheduler being associated with one of the requesting computers and one of the processing computers, the schedulers being operable to establish a virtual bus between the schedulers to cause formation of the scheduling entity.
  • 6. A non-transitory computer readable medium having stored thereon software instructions that, when executed by a processor, cause the processor to match tasks requested by requesting computers, to be performed by processing computers, by executing the steps comprising: establishing a priority list for each requesting computer, each priority list being a list of available processing computers in an order of preference for the respective requesting computer; establishing a priority list for each available processing computer, each priority list being a list of requesting computers in an order of preference for the respective processing computer; for a first unmatched processing computer of the processing computers: identifying a first requesting computer, being the highest preference requesting computer on the priority list for the first processing computer; if the first requesting computer is not matched to a processing computer: if the first processing computer is not on the priority list for the first requesting computer, then removing the first requesting computer from the priority list for the first processing computer; if the first processing computer is on the priority list for the first requesting computer, then: matching the first requesting computer to a second processing computer, the second processing computer being the highest preference computer on the priority list associated with the first requesting computer; if the second processing computer is the first processing computer, then fixing the match between the first requesting computer and the first processing computer; else, if the second processing computer is unmatched, temporarily matching the second processing computer with the first requesting computer, while, if the second processing computer is already matched to a second requesting computer, then, if the first requesting computer has higher preference on the priority list for the second processing computer than the second requesting computer, temporarily matching the second processing computer to the first requesting computer and removing the second processing computer from the priority list for the second requesting computer; and if the preceding steps result in a new fixed matching, then revising the priority lists for all unmatched computers and reiterating the preceding steps until all computers are matched.