The present disclosure relates generally to computer systems and in particular to providing software code for execution on a client.
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
The frequency of patching of software code, i.e. modification of deployed software code to implement new features, correct errors, and so on, has increased over the last few years. While some patches may be of little importance, other patches, such as security patches, can be critical and must be implemented immediately.
Hot swapping enables replacement of components of a running system without having to interrupt execution. This technique, however, is used for hardware components such as disks, memory and USB devices, and is thus not suitable when the hardware is not changed.
Patching software that is not currently executing is easy, as it may suffice to simply put the patch in place, after which the code has been updated. Patching software code that is executing, however, is trickier. This can be a significant problem, as it is relatively common for software to run for a long time: game consoles are often used for hours on end, gateways and security systems may run without end, and users often leave computers switched on to avoid the wait when they start again.
A known way to patch running code is to require restart of the machine the code runs on. An obvious drawback is that this interrupts program execution.
Hot patching and memory injection are hacking techniques that allow partial modification of running code by injecting code into memory. The technique is described, in French, by Fred Raynal and Jean-Baptiste Bedrune in “Malicious Debugger!”, Sogeti IS/ESEC. However, there is a high risk of program crash, mainly due to the use of the so-called ptrace system call and the overwriting of existing memory space. In addition, languages with dynamic call dispatching (e.g., Java) can support redefinition of the code of certain classes, but only at the granularity of classes. Hence, some code, such as the main loops of threads, cannot be hot patched.
The game consoles PS4 and Xbox One allow “play without all game”, which means that game execution can start before all of the code has been downloaded. While this can in a sense be said to modify the code that is executed, it does not allow patching of the downloaded code. Similar possibilities exist with P2P progressive download.
None of these techniques allows fine-grained modification of code during execution without a restart. In particular, many of these techniques require user patience during the update. Further, some techniques require that the program either is stopped or is aware of the ongoing patching operation.
In a first aspect, the present principles are directed to a server device for providing blocks of code of a program to a client device executing the blocks of code. The server device comprises an interface configured to relay messages between the client device and a processor of the server device. The server device also comprises the processor configured to receive from the client device a request comprising an identifier of a block of code and, in case the block corresponding to the identifier has been patched during execution of the code on the client device, to verify whether the client device has executed a memory patch block for the block corresponding to the identifier. In case the client device has executed a memory patch block, the processor is configured to obtain a subsequent block of code corresponding to the identifier, obtain at least one first transition for the subsequent block of code, the first transition enabling the client to calculate an identifier of a block of code to request following execution of the subsequent block of code, and send the subsequent block of code and the first transition for the subsequent block of code to the client. In case the client device has not executed a memory patch block, the processor is configured to obtain the memory patch block, obtain a second transition for the memory patch block, the second transition enabling the client to calculate the identifier of the block of code, and send the memory patch block and the second transition for the memory patch block to the client.
Various embodiments of the first aspect include:
In a second aspect, the present principles are directed to a method for providing blocks of code of a program to a client device executing the blocks of code, the method comprising, at a server device comprising a processor: receiving from the client device a request comprising an identifier of a block of code and, in case the block corresponding to the identifier has been patched during execution of the code on the client device, verifying whether the client device has executed a memory patch block for the block corresponding to the identifier. In case the client device has executed a memory patch block, the processor obtains a subsequent block of code corresponding to the identifier, obtains at least one first transition for the subsequent block of code, the first transition enabling the client to calculate an identifier of a block of code to request following execution of the subsequent block of code, and sends the subsequent block of code and the first transition for the subsequent block of code to the client. In case the client device has not executed a memory patch block, the processor obtains the memory patch block, obtains a second transition for the memory patch block, the second transition enabling the client to calculate the identifier of the block of code, and sends the memory patch block and the second transition for the memory patch block to the client.
Various embodiments of the second aspect include:
Preferred features of the present principles will now be described, by way of non-limiting example, with reference to the accompanying drawings, in which
It should be understood that the elements shown in the figures may be implemented in various forms of combinations of hardware and software. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces. Herein, the phrase “coupled” is defined to mean directly connected to or indirectly connected with through one or more intermediate components. Such intermediate components may include both hardware and software based components.
The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope.
All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
The processor 111 of the client is configured to execute the software code. The software code is arranged in a series of blocks B, preferably so-called basic blocks, and the software code is arranged so that it is sufficient for the client 110 to store only the presently executed basic block at a given time.
Formally, the code can be represented as a Control Flow Graph (CFG) comprising all possible paths that can be traversed through the code during execution. The code is split into a set of disjoint blocks B and a set of oriented transitions between blocks T: int × int. With this definition, a CFG is defined by CFG: B × T, where (Bi, Ti) ∈ CFG, wherein Bi is a block of code corresponding to the program and Ti is the set of available transitions from this block.
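As a non-limiting illustration, such a CFG may be sketched as follows. The dictionary-based layout and the names used below are illustrative assumptions, not part of the claimed subject matter:

```python
# Minimal sketch of the CFG defined above: blocks B indexed by int,
# and oriented transitions T as pairs (source index, target index).

blocks = {
    0: "entry code",   # B0
    1: "loop body",    # B1
    2: "exit code",    # B2
}

transitions = {
    (0, 1),            # B0 -> B1
    (1, 1),            # B1 -> B1 (loop)
    (1, 2),            # B1 -> B2
}

def outgoing(i):
    """Ti: the set of available transitions from block Bi."""
    return {t for t in transitions if t[0] == i}
```

Here, `outgoing(1)` yields the set Ti = {(1, 1), (1, 2)} of transitions available from block B1.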
More generally, at any time (except during transitions between blocks), the client 110 has the current block Bi and the set of corresponding transitions Ti, where all t ∈ Ti have the form (i, *); it calculates the next block Bk and requests it from the server 120, which in turn provides the requested block Bk and the corresponding set of transitions Tk. In each t = (i, *) ∈ Ti, i is the index of the source block and the asterisk is an identifier of a possible target block Bk. While the number of available transitions is usually greater than one (|Ti| > 1), it may also be zero if execution stops with the current block (Ti = ∅) or one if the next block is predetermined (|Ti| = 1) (an example being shown hereinafter).
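The client-side fetch/execute loop described above may be sketched as follows, by way of non-limiting example. The in-process server stub and the representation of blocks as callables are illustrative assumptions, not the claimed protocol:

```python
# Each entry maps a block index to (Bk, Tk): a callable standing in for
# the block's code, and its set of outgoing transitions.
PROGRAM = {
    0: (lambda trace: trace.append("B0"), {(0, 1)}),
    1: (lambda trace: trace.append("B1"), {(1, 2)}),
    2: (lambda trace: trace.append("B2"), set()),   # Ti = empty: stop
}

def server_fetch(k):
    """Stub for the request to the server 120: returns (Bk, Tk)."""
    return PROGRAM[k]

def client_run():
    """The client holds only the current block and its transitions."""
    trace = []
    block, t_i = server_fetch(0)     # obtain the initial block B0
    block(trace)                     # execute the current block
    while t_i:                       # Ti empty: execution stops here
        # With |Ti| = 1 the target is predetermined; with |Ti| > 1 the
        # client would evaluate program state to select the target.
        (_, k) = next(iter(t_i))
        block, t_i = server_fetch(k) # request Bk; receive Bk and Tk
        block(trace)
    return trace
```

Running `client_run()` on this toy program executes the blocks in sequence and returns the trace `["B0", "B1", "B2"]`.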
The server 120 is configured to store, for each client c, the indices (or other identifiers) of each block already executed by the client, Xc: {int}. Initially, Xc={0}. The memory 122 advantageously also stores the blocks of software code, but it will be appreciated that the blocks can also be stored in external memory (not shown) accessible by the server 120.
Thus, the client 110 requests, step S24, block Bk from the server 120 that sends, step S25, Bk and Tk, and updates, step S26, Xc∪={j}. The client 110 executes, step S27, the received block Bk.
Now, in order to patch code running on a client, it is sufficient to patch (i.e. modify) the corresponding block(s). For ease of illustration, it is assumed that a single block Bp is modified to block Bp′. Ψ: int × int is the set of patches, where (p, p′) means that block Bp has been replaced by block Bp′. The server receives the new block Bp′ and modifies the transition system to integrate the new block Bp′ in place of the old block Bp: all (*, p) are replaced by (*, p′), and all (p, *) are replaced by (p′, *). Depending on the block modifications, it may be necessary to patch the memory 112 of the client 110 too. In this case, a complement block, a so-called memory patch block, is provided for execution by the client.
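The rewriting of the transition system described above may be sketched as follows, as a non-limiting illustration (the names are assumptions):

```python
def apply_patch(transitions, p, p_new):
    """Integrate patch (p, p_new): every (*, p) becomes (*, p_new)
    and every (p, *) becomes (p_new, *)."""
    updated = set()
    for (src, dst) in transitions:
        src = p_new if src == p else src
        dst = p_new if dst == p else dst
        updated.add((src, dst))
    return updated
```

For example, patching block B1 to B3 in the transition set {(0, 1), (1, 1), (1, 2)} yields {(0, 3), (3, 3), (3, 2)}: the loop transition follows the block to its new index.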
Then, whenever a client requests a block i, there are a number of possibilities:
It will be understood that in the last case, using the notation of the example before, the client will execute the memory patch block before going on to execute the patched block Bp′.
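The server-side decision described in the first aspect may be sketched as follows, by way of non-limiting example. The data structures, the notion of a per-patch memory patch block identifier, and the choice of making the second transition lead back to the originally requested identifier are illustrative assumptions:

```python
def handle_request(i, client, patches, memory_patches, executed, blocks, trans):
    """Serve a client request for block i.
    patches: {old index: patched index};
    memory_patches: {old index: memory patch block index};
    executed[client]: the set Xc of block indices already executed."""
    if i in patches:
        mp = memory_patches.get(i)
        if mp is not None and mp not in executed[client]:
            # The client has not yet executed the memory patch block:
            # send it first, with a second transition leading back to
            # the requested identifier i.
            executed[client].add(mp)
            return blocks[mp], {(mp, i)}
        # Memory already patched (or no memory patch needed): serve the
        # subsequent, i.e. patched, block in place of the old one.
        i = patches[i]
    executed[client].add(i)
    return blocks[i], trans.get(i, set())

# Toy setup: block 1 was patched to block 3; block 9 patches the memory.
blocks = {3: "B1-patched", 9: "MP1"}
patches = {1: 3}
memory_patches = {1: 9}
executed = {"c": set()}
trans = {3: {(3, 2)}}
```

With this setup, a first request for block 1 returns the memory patch block MP1 with transition (9, 1); the next request for block 1 returns the patched block with its own transitions.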
Thus, since the client can have a single block to execute at a time, the program can be patched without stopping the program or rebooting. A possible exception is if there is a need to patch a block that is currently executed and in which execution is at least temporarily stuck in a loop, which may occur if the execution idles such as when the client waits for some input that has not yet arrived.
It will be appreciated that if there is no need to patch the memory of the client, then there is no need to store the indices of executed blocks Xc, to verify whether the client has executed the block, or to send the memory patch block to the client.
It will also be appreciated that it is possible for the client to use a cache for the blocks, in particular those it has executed. Using cache techniques can help to optimize the transmission. For example, the client can then indicate in its request for a block that the block is stored in its cache, and the server can respond with OK (to use the version in cache) or, if for example the block has been patched, with a block to execute (which can be a patched version of the block or a block to patch the memory).
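The cache negotiation just described may be sketched as follows, as a non-limiting illustration (the function and its parameters are assumptions):

```python
def respond(k, client_has_cached, patches, blocks):
    """Server response to a request for block k where the client has
    indicated whether it holds k in its cache."""
    if client_has_cached and k not in patches:
        return "OK"                     # cached version is still valid
    # Block was patched (or not cached): send a block to execute,
    # i.e. the patched version if one exists.
    return blocks[patches.get(k, k)]
```

For instance, a cached, unpatched block elicits "OK", while a cached but patched block elicits the patched block itself.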
It will further be appreciated that the server can be configured to compute a set of possible blocks that the client can request (with Ti when it has sent the block Bi) and to verify that the requested block is in this list. A requested block is sent only if it is in the set of possible blocks. This measure can help against hacking.
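This anti-hacking check may be sketched as follows, by way of non-limiting example (names are assumptions):

```python
def is_allowed(requested, last_transitions):
    """Having sent block Bi with transitions Ti, the server honours a
    request for block k only if k is a target in Ti."""
    return any(dst == requested for (_, dst) in last_transitions)
```

With Ti = {(1, 1), (1, 2)}, a request for block 2 is allowed, whereas a request for block 5 is rejected.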
It will further be appreciated that various conventional optimization methods including partial evaluation, block prediction, speculative execution and memoization (see for example http://en.wikipedia.org/wiki/Memoization) can be used by the server in order to optimize the selection and evaluation of blocks that are to be sent.
It will further be appreciated that it is preferred to provide authentication and confidentiality of exchanges using an encrypted channel such as the Secure Authenticated Channel (SAC) described in WO 2006/048043.
The following example further illustrates the present principles using a simple evolution of a program during execution. The evolution operates within the body of a loop, which stresses the fine granularity level allowed by an embodiment of the present principles.
Source Code
Byte Code
In the example, instruction 17 in the body of the loop is changed and new instructions are inserted. The set of blocks B is the set of instructions 17 to 21. The set of transitions corresponds to the sequential execution: 17→18, 18→19, and so on until 20→21.
Each time an instruction is added, all subsequent instructions and instruction references are renumbered accordingly. It is noted that the machine executing the code does not need to be aware of the renumbering when instruction caching techniques are not used.
In the most basic version of an embodiment of the present principles, each instruction is requested from the server before execution; in other words, a block equals one instruction. The code can evolve on the instruction level at the price of a penalty on the execution speed. It will be appreciated that common caching techniques can mitigate the speed penalty. For instance, the code evolution can be requested every two instructions, in association with a rollback mechanism.
The skilled person will appreciate that the present principles can be used in various contexts:
It will thus be appreciated that the present principles provide a solution for code execution that, at least in certain cases, can improve on the prior art code execution solutions.
Each feature disclosed in the description and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination. Features described as being implemented in hardware may also be implemented in combinations of hardware and software, and vice versa. Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
15305616.3 | Apr 2015 | EP | regional |