The present technology is concerned with digital twins which are digital representations of physical objects or processes. Digital twins are used in many application domains including product and process engineering, internet of things, logistics, asset management, and others. The digital twin provides a model of the behavior of the physical object and once such digital representations are available it is possible for automated computing systems to use the digital twins to facilitate management and control of the physical objects.
Digital twins are often manually created by an operator or expert who is familiar with the physical objects to be represented and understands how the physical objects behave and/or interact with one another. However, it is time consuming and burdensome to form digital twins in this way and difficult to scale the process up for situations where there are huge numbers of digital twins to be formed.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known apparatus and methods for digital twins.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not intended to identify key features or essential features of the claimed subject matter nor is it intended to be used to limit the scope of the claimed subject matter. Its sole purpose is to present a selection of concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
In various examples there is a computer-implemented method performed by a digital twin at a computing device in a communications network. The method comprises: receiving at least one stream of event data observed from the environment; computing at least one schema from the stream of event data, the schema being a concise representation of the stream of event data; participating in a distributed inference process by sending information about the schema or the received event stream to at least one other digital twin in the communications network and receiving information about schemas or received event streams from the other digital twin; computing comparisons of the sent and received information; and aggregating the digital twin and the other digital twin, or defining a relationship between the digital twin and the other digital twin, on the basis of the comparisons.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples are constructed or utilized. The description sets forth the functions of the example and the sequence of operations for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
As mentioned above a digital twin is a digital representation of a physical object or process (referred to herein as a physical entity). A digital twin of a physical object or real world process comprises software which simulates or describes event data about the behavior of the physical object or real world process. The event data is obtained by monitoring the physical objects or processes, for example, using capture apparatus in the environment of the physical object or process. Additionally or alternatively sensors instrumenting the physical objects or processes are used to obtain the event data.
It is not straightforward to automatically compute a digital twin and enable it to predict future events or state of the digital twin in a manner which takes into account state information from other digital twins in the environment. This is not only because of the large amounts of data involved but because the event data is heterogeneous structured data. Finding a way to deal with this type of data in an efficient practical manner is difficult.
It is an additional challenge to achieve automatic computation of digital twins which are able to learn in an online manner so as to be able to take into account changes in the incoming event data. To achieve such digital twin functionality without using cloud computing resources such as a data center is especially difficult since the amounts and rates of data involved are extremely large and since the complexity of the task is one which lends itself to availability of plentiful computing resources.
Another problem is that conventional machine learning technology typically expects input data in a known format and size and breaks down or computes erroneous predictions if the input data is not suitable. In the case of a digital twin receiving heterogeneous structured input data from a variety of sources it is not easy to find a way to use conventional machine learning technology. Typically a data scientist has to spend considerable time and effort to select, format, normalize, and clean data, sometimes padding it with zeros to bring it to the correct size, before it is suitable to input to a machine learning system. However, availability of a human data scientist is not an option for applications where fully automated creation and online training of digital twins is desired.
The data which the digital twin is to describe and predict is “dark” data in that no semantic information is available regarding the meaning of the data. This makes it especially difficult to design an automated system to create digital twins, train them and use them to make predictions suitable for controlling or managing or maintaining physical entities in the real world.
Most conventional machine learning systems use offline training, whereby the machine learning system is taken offline and is unavailable for computing test time predictions during a training phase. However, digital twins need to be able to operate continually and it is not acceptable to take them offline to carry out training. This is because there would be consequential problems for management, maintenance or control of physical entities which the digital twins represent. Therefore an online training solution is needed. However, training algorithms for machine learning systems are computationally resource intensive and time consuming. Therefore it is a challenge to create a way to achieve high quality training which is computed online, at the same time as the digital twins are being used to compute predictions for controlling, configuring or maintaining the physical entities.
The event data 102 is captured by capture apparatus 108 which is any type of sensor or other apparatus for capturing data about the behavior of the physical entities 106. In
The event data 102 stream is a real time stream of event data. A non-exhaustive list of examples of event data is: temperature measurements, ambient light levels, latitude and longitude data, power level, error rate and many other data values associated with events in the behavior of the physical entities 106. Each event data 102 item is associated with a time of occurrence of the event and these times are referred to as time stamps.
The event data 102 is input to a digital twin 100 which, in some examples, is an edge device at the edge of the internet or other communications network. A digital twin 100 does not have to be at an edge device and in some cases is located at the core of a communications network. Note that
Each digital twin knows about other digital twins in its environment since this data is available to it from another computing system (not illustrated in
The task of a digital twin is to represent the physical entity associated with the digital twin, learn from the event data 102 and state data received from other digital twins, and predict the behavior of the physical entity in the context of its environment of other digital twins, to enable control and/or configuration and/or maintenance of the physical entity. The task of the digital twin is to be achieved with no or minimal human input and without semantic information about the physical entities.
The digital twins exchange (also referred to as gossip) their event data. Since the event data is at a high rate and is large, differences or deltas of the event data 104 are exchanged between the digital twins as indicated in
The digital twin has a machine learning component 202 which comprises any machine learning technology including but not limited to: a neural network, a random decision forest, a support vector machine, a probabilistic program or other machine learning technology.
The machine learning component is configured to receive input in a specified form referred to herein as an input structure. The input structure has a defined format comprising a tensor of columns and rows, with each column storing state data at a given time step and where the columns of state data are in chronological order in the input structure. A time step is a time interval such as a second, a minute, an hour, a day or other length of time. Each row of the tensor comprises state data over time steps for a specified field of a schema. It is recognized herein that it is also possible to have the rows of the tensor holding state data at individual time steps and the columns to hold state data over time steps for a specified field of the schema.
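As an illustration of the input structure just described, the following sketch builds a minimal tensor as a list of rows, one row per schema field and one column per time step. The field names and values are hypothetical and not taken from any particular deployment:

```python
# Hypothetical schema with three fields; each row of the input
# structure holds one field's state over time steps, and each
# column holds the state of all fields at one time step.
fields = ["temperature", "power_level", "error_rate"]
num_time_steps = 4

# One row per schema field, one column per time step, zero-filled.
input_structure = [[0.0] * num_time_steps for _ in fields]

# Enter state data observed at time step 0 into the first column.
for row, value in zip(input_structure, [21.5, 0.87, 0.02]):
    row[0] = value

def column(structure, t):
    """Return the state of all schema fields at time step t."""
    return [row[t] for row in structure]

print(column(input_structure, 0))  # [21.5, 0.87, 0.02]
```

The transposed layout mentioned above, with rows holding individual time steps and columns holding fields, is obtained by simply swapping the roles of the two indices.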
The machine learning component is configured to learn by predicting event data, observing the corresponding empirical event data, computing an error between the predicted and observed event data and using an update process to update itself. Any suitable update process is used depending on the type of machine learning technology in the machine learning component.
The machine learning component is also configured to predict event data for use in controlling, managing or maintaining the physical entities. These predictions are made as an integral part of the learning process so that online learning takes place together with test time prediction. In some examples, the machine learning component is used to predict behavior of the physical entity in a hypothetical situation as described in more detail later in this document.
The digital twin takes 302 samples of raw event data that it receives, or of deltas 104 of the raw event data, during a time window. The duration of the time window is one of the hyperparameters set at operation 300. The samples are from data of other digital twins and also of event data received directly at the digital twin itself.
The digital twin maps the samples of raw event data, or of deltas of raw event data, into the input structure of the machine learning component 304. The mapping is computed on the basis of the schemas of the digital twins. Samples from digital twin B are mapped to the input structure using schema B. Samples from digital twin C are mapped to the input structure using schema C and so on. Thus the samples from a particular digital twin are mapped to the input structure using the schema of the particular digital twin.
As mentioned above a schema is a concise representation of the event data received at a digital twin and it comprises one or more structural types and optional metadata. A structural type has information about the structure of the event data and about the content of the event data. The structural type is one of a plurality of specified structural types from a hierarchy of structural types. The hierarchy of structural types is described below.
In an example, the structural type is a range and the schema comprises numerical values defining the range. This describes event data where the event data comprises numerical values within the range. The digital twin receives samples of the range type and maps them to the input structure by putting the sampled values into a row of the input structure. In some cases the digital twin normalizes the sampled values according to the range before entering the normalized values into the input structure. Thus each row of the input structure has an associated structural type and comprises numerical values computed from the samples of that structural type.
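A minimal sketch of this normalization step follows; the function name and the range bounds of -10 to 40 are illustrative assumptions, not taken from the source:

```python
def normalize_to_range(samples, low, high):
    """Normalize sampled values into [0, 1] using the range bounds
    defined by the schema's range structural type."""
    span = high - low
    return [(s - low) / span for s in samples]

# Hypothetical range type: temperature readings between -10 and 40.
samples = [0.0, 15.0, 40.0]
row = normalize_to_range(samples, low=-10.0, high=40.0)
print(row)  # [0.2, 0.5, 1.0] - these values enter a row of the input structure
```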
In an example, the mapping of the event data into the input structure comprises using a reduction function. The reduction function acts to aggregate or compress event data items received in a single time step. In an example, where a time step is one day, and the aggregation comprises a weighted average, the reduction function computes a weighted average of the event data items received during the day. The weights are related to the frequency of occurrence of the particular data items. Note that it is not essential to use a weighted average as other types of aggregation are used in some examples. The reduction function is specified in the schema. By using a reduction function in this way, data compression is achieved which helps with making the digital twin work even for huge amounts of incoming data. The reduction function also helps to reduce the effects of noise in the incoming event data.
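The frequency-weighted average reduction described above can be sketched as follows; the function name and the sample readings are illustrative assumptions:

```python
from collections import Counter

def reduce_time_step(event_items):
    """Aggregate the event data items received during one time step
    into a single value, weighting each distinct value by its
    frequency of occurrence (a frequency-weighted average)."""
    counts = Counter(event_items)
    total = sum(counts.values())
    return sum(value * (n / total) for value, n in counts.items())

# Hypothetical day's worth of readings for one schema field.
day_readings = [10.0, 10.0, 10.0, 20.0]
print(reduce_time_step(day_readings))  # 12.5
```

The single reduced value is what is entered into the input structure for that time step, which is how the data compression mentioned above is achieved.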
In an example, the input structure is a specified size and the number of rows and the number of columns of the input structure are hyperparameters which are set at operation 300.
If the apparatus controlling, managing or maintaining the physical entities wants to ask the digital twin what the physical entity will do in a hypothetical situation the check at operation 306 is answered in the affirmative. In an example, an apparatus controlling, managing or maintaining the physical entity or physical entities sends a request to the digital twin comprising the hypothetical situation details. In response the digital twin adds or edits or deletes data in the input structure. The modified input structure is then used to compute 310 a prediction and the prediction is used 312 to control, manage or maintain the physical entity. In an example, the physical entities are traffic lights. The hypothetical situation is a new behavior of a particular traffic light and the prediction is a predicted traffic behavior.
If no hypotheticals are asked at check 306 the machine learning component computes 314 a prediction using the filled input structure. The digital twin observes 316 the corresponding empirical event data and checks 318 whether the observations are good data or not. If noise has introduced outliers into the empirical event data, it is not good data, in which case the process returns to operation 314 to compute further predictions and make further observations.
If the empirical data meets criteria which indicate that it is suitable for use in a training process, an error is computed 320 between the empirical data and the prediction 314. The error is used to update 322 the machine learning component using a suitable update procedure according to the type of machine learning technology.
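The predict, observe, check, error and update cycle of operations 314 to 322 can be sketched as below. The moving-average predictor and the outlier test are stand-ins for the machine learning component and the data-quality criteria, which this description leaves open:

```python
def predict(history):
    """Stand-in predictor: mean of recent observations."""
    return sum(history) / len(history)

def is_good_data(observation, history, max_deviation=10.0):
    """Stand-in quality check: reject outliers far from recent data."""
    return abs(observation - predict(history)) <= max_deviation

def online_learning_step(history, observation):
    """One predict/observe/error/update cycle. Returns the error,
    or None when the observation fails the quality check."""
    prediction = predict(history)
    if not is_good_data(observation, history):
        return None  # skip training on noisy data
    error = observation - prediction
    history.append(observation)  # the 'update' for this toy model
    return error

history = [10.0, 12.0, 11.0]
print(online_learning_step(history, 12.0))  # 1.0 (prediction was 11.0)
print(online_learning_step(history, 99.0))  # None: outlier rejected
```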
A check is made at operation 324 as to whether to update the hyperparameters or not. The check involves using thresholds, rules or other criteria to decide whether to change the size of the sampling window and/or change the size of the input structure.
As mentioned above the schema of a digital twin sometimes changes with time. If the schema does change, or the schema of one of the other digital twins changes, the process of
Suppose the process of
Once the second machine learning component has been trained so that convergence is reached at check point 408, the error rate of the second machine learning component is stable. A second check is then made at check 410 to see if the performance of the second machine learning component is better than the first machine learning component. If so, the first machine learning component is replaced 412 by the second machine learning component. If not, the second machine learning component is discarded.
Using a convolutional neural network in the context of the present technology gives unexpected benefits. Typically convolutional neural networks are used for image processing where spatial information is contained in the image so that there are relationships expected between rows and columns of the image. In contrast, the present technology does not use images as inputs but rather has matrices formed from time steps of data from schema fields of event streams. Relationships are not expected between the schema field data. However, it is unexpectedly found that using convolution where the convolutional filters span both one or more time steps and one or more schema fields gives good quality prediction results.
Each feature map has columns, one column per time step. Each row of a feature map has results from a different convolutional filter of a convolutional neural network layer 506, 512. In a preferred embodiment, each convolutional neural network layer 506 has a plurality of different convolutional filters which are the same height as a column of the input tensor but which have different widths, where the widths correspond to numbers of time steps. By using convolutional filters which are the same height as one another, efficiencies are gained without significantly sacrificing accuracy. In other examples, both the width and height of the convolutional filters vary.
The effect of a convolutional neural network layer can be thought of as sliding each convolutional filter over the input tensor, from column to column, and computing a convolution, which is an aggregation of the neural network node signals falling within the footprint of the filter, at each position of the filter as it is slid from column to column. This gives a convolution result, which aggregates each schema field over the time steps that fall within the footprint of the filter. For a given column, there is a convolution result from each convolutional filter. One of the convolution results is selected and stored in the corresponding feature map column. In an example, the selection is done by selecting the maximum convolution result.
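A sketch of this sliding convolution and maximum-selection step follows, using a toy two-field tensor and two filters that each span the full column height and two time steps. All names and values are illustrative:

```python
def convolve(tensor, filt):
    """Slide a filter (spanning the full column height and a number of
    time steps) across the tensor from column to column, aggregating
    the values under its footprint at each position."""
    height, width = len(filt), len(filt[0])
    num_cols = len(tensor[0])
    return [sum(tensor[r][start + c] * filt[r][c]
                for r in range(height) for c in range(width))
            for start in range(num_cols - width + 1)]

def feature_map_row(tensor, filters):
    """For each column position, keep the maximum convolution result
    over the available filters (all filters here share one width)."""
    per_filter = [convolve(tensor, f) for f in filters]
    return [max(results) for results in zip(*per_filter)]

# Two schema fields over four time steps (hypothetical values).
tensor = [[1.0, 2.0, 3.0, 4.0],
          [0.5, 0.5, 0.5, 0.5]]
# Two filters, each the full height of a column, two time steps wide.
filters = [[[1.0, 0.0], [1.0, 0.0]],
           [[0.0, 1.0], [0.0, 1.0]]]
print(feature_map_row(tensor, filters))  # [2.5, 3.5, 4.5]
```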
The second feature map 514 is input to a fully connected neural network layer 518 which is an output layer in this architecture. The fully connected layer 518 computes an output vector 520 of the same length as a column of the input tensor. The output vector 520 is a column of predicted schema field values for the predicted time step; it is a regression result and not a classification result as the neural network is not performing classification.
In the example of
The mapping that was done from the event data to the input structure using the schemas is applied in reverse to the output vector 520. Thus any normalization that was applied to the sampled data as it was mapped to the input structure is applied in reverse to the output vector 520 to obtain predictions of the state of the digital twins at a future time step which is the next time step in the chronological sequence of the columns of the input structure. The reverse mapping gives the benefit that the output vector 520 is quickly, simply and efficiently converted into a format suitable for use by legacy computing systems. The legacy computing systems are ones which were originally designed to work with the raw event data. In an example, the event data is produced by the capture apparatus in extensible mark-up language format (XML format), or in Java (trade mark) script object notation (JSON) format. The digital twin maps the XML formatted event data into a tensor for input to the machine learning component. The output vector of the neural network is then reverse mapped into the original XML or JSON format. In this way the prediction of the neural network is available in XML format or JSON format and is available for use by computing systems which expect XML or JSON format input.
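A minimal sketch of this reverse mapping into JSON format follows, assuming a hypothetical schema of two range-typed fields; the field names and ranges are illustrative:

```python
import json

def denormalize(value, low, high):
    """Invert the range normalization applied when mapping event
    data into the input tensor."""
    return low + value * (high - low)

# Hypothetical schema: field name -> (low, high) of its range type.
schema = {"temperature": (-10.0, 40.0), "power_level": (0.0, 100.0)}

def output_vector_to_json(output_vector, schema):
    """Reverse-map a predicted output vector (one normalized value per
    schema field) into the JSON format legacy systems expect."""
    record = {field: denormalize(v, lo, hi)
              for v, (field, (lo, hi)) in zip(output_vector, schema.items())}
    return json.dumps(record)

print(output_vector_to_json([0.5, 0.2], schema))
```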
In some examples, pooling is used in conjunction with the neural network architecture of
The process of computing the predicted output vector 520 is computationally expensive since the tensor is large and the number of parameters of the convolutional neural network layers is significant. In order to achieve substantial efficiencies the following insight is recognized herein. Since each column represents state at a time step, when a new observation is made it is added to the right hand side of the input structure as a column (see column 524 in
Suppose the machine learning component at the digital twin is carrying out a full learning step 600 without any reuse of computation. A forward pass through the neural network is computed 602 and the intermediate prediction results (the feature maps) are saved 604. The empirical event data is observed 606 for the next time step and the error between the empirical event data and the prediction is computed 608. The weights of the neural network layers are then updated using backpropagation in a conventional manner.
A check is made at operation 612 as to whether or not to make an incremental learning step at the next training iteration. The check involves assessing one or more of the following factors: the size of the error at operation 608, the quality of the observed data at operation 606, the quality of the saved feature maps, the size of the time interval since the last learning step, the number of learning steps that have occurred since the last observation at operation 618, a user input event. Any one or more of the factors are hyperparameters used at operation 300 of
If the incremental learning step is to proceed the machine learning component shifts the current feature map in synchrony with the shift made to the input tensor 614. The machine learning component then re-computes 615 the parts of the feature map which are affected by the shift in the input tensor. If a data value in the feature map results from a convolution using a convolutional filter that overlapped part of the input tensor which has changed, the data value is recomputed.
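The incremental recomputation can be sketched as below with a one-row tensor and a single two-column filter; only the feature map position whose filter footprint overlaps the newly appended column is recomputed. The names are illustrative:

```python
def convolve_at(tensor, filt, start):
    """Convolution result for a filter footprint at column start."""
    return sum(tensor[r][start + c] * filt[r][c]
               for r in range(len(filt)) for c in range(len(filt[0])))

def full_feature_map(tensor, filt):
    """Forward pass without any reuse of computation."""
    width = len(filt[0])
    return [convolve_at(tensor, filt, s)
            for s in range(len(tensor[0]) - width + 1)]

def incremental_step(tensor, feature_map, filt, new_column):
    """Shift the tensor and feature map in synchrony, append the new
    observation, and recompute only the feature map position whose
    filter footprint overlaps the new (right-most) column."""
    for r, row in enumerate(tensor):
        row.pop(0)
        row.append(new_column[r])
    feature_map.pop(0)
    feature_map.append(0.0)
    last = len(tensor[0]) - len(filt[0])
    feature_map[-1] = convolve_at(tensor, filt, last)
    return feature_map

tensor = [[1.0, 2.0, 3.0, 4.0]]
filt = [[1.0, 1.0]]
fm = full_feature_map(tensor, filt)
fm = incremental_step(tensor, fm, filt, [5.0])
print(fm)  # [5.0, 7.0, 9.0] - matches a full recomputation
```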
The feature maps computed in the forward pass are saved 616, the event data of the next time step is observed 618 and an error between the prediction and the observed event data is computed 620. The neural network weights are then updated 622 using backpropagation.
As mentioned above the machine learning component at a digital twin continues to learn using online learning as event data is observed. This means that significant events which are relatively rare become forgotten over time by the digital twin. In order to address this the machine learning component is configured to save event and/or state data observed during a significant event and to periodically retrain using the significant event data as explained with reference to
Suppose the machine learning component has carried out a learning step 700 using the process of
If re-training is to be done, a training data item is selected 712 from the library of saved training data. The selection is made using a round robin selection process in some examples.
The current training data is replaced 712 by the selected item of saved data from the library of saved data. This is done by replacing the current input tensor of the neural network with the input tensor from the selected item of saved data. The selected item of saved data also has an associated stored prediction. A learning step is carried out (see operation 700) using the stored prediction as the observed data. The learning step 700 computes a prediction and an error is computed between the prediction and the stored prediction. The error is then used to update the weights of the neural network using backpropagation. The process of
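The library of saved training data with round-robin selection can be sketched as follows; the class and method names are assumptions for illustration only:

```python
class ReplayLibrary:
    """Stores input tensors and their stored predictions for
    significant events, and serves them back one at a time in
    round-robin order for periodic retraining."""
    def __init__(self):
        self.items = []
        self._cursor = 0

    def save(self, input_tensor, stored_prediction):
        self.items.append((input_tensor, stored_prediction))

    def next_item(self):
        """Round-robin selection over the saved training data."""
        item = self.items[self._cursor % len(self.items)]
        self._cursor += 1
        return item

library = ReplayLibrary()
library.save([[1.0, 2.0]], [3.0])   # hypothetical saved event A
library.save([[4.0, 5.0]], [6.0])   # hypothetical saved event B

# Selection cycles through the saved significant events.
print(library.next_item()[1])  # [3.0]
print(library.next_item()[1])  # [6.0]
print(library.next_item()[1])  # [3.0]
```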
Using one or more parent digital twins is useful to make higher level predictions which take into account predictions of many child digital twins. It is also possible to have grandparent digital twins and so on to make higher level predictions about global behavior of a distributed system of physical entities. In this way very high quality control of physical entities is achieved by taking into account global behavior of the plurality of physical entities.
As mentioned above, in some examples the digital twins infer their own schemas automatically. More detail about how this is achieved is now given.
With reference to
Each digital twin has a component for schema computation 908. This component takes output from the data ingestion component 906, where that output comprises structural types describing the event data streams, and computes a schema of the event data stream. The schema represents the observed data and is computed automatically from the observed data rather than being defined by a human operator. The schema is for interpreting the data in the event data stream and it comprises one or more fields, each field having a structural type and a range of possible values. A schema comprises structural types and metadata about the structural types. A non-exhaustive list of examples of metadata about structural types is: a name string, a time range in which the schema was generated, information about how the schema has been used to compute a mapping, a user annotation.
The computing device 918 has a component for distributed inference 912. The distributed inference component 912 sends and receives data about the dynamic schemas and/or the event data, with other ones of the computing devices 918. The distributed inference component 912 makes comparisons and aggregates digital twins, or establishes peer relationships between digital twins, according to the comparison results. The data ingestion component 906, dynamic schema computation 908 and distributed inference 912 operate continually and at any point in time the current inferred digital twins 916 are available as output. Identification of any peers in the output digital twins is also output.
Alternatively, or in addition, the functionality of a digital twin described herein is performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that are optionally used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), Graphics Processing Units (GPUs).
The process of
The process of
The primitive digital twin tries to find a way to compress the event data stream because it is not practical to retain all the data in the event data stream. However, if conventional data compression methods are used the structure in the event data stream is lost or corrupted.
The method of
The process of
The process of
The primitive digital twin computes 1126 a least upper bound between the inferred type and the literal type. The least upper bound of a structural type A, and a structural type B, is the minimal structural type that includes all values of structural type A, and all values of structural type B (where the minimal type is the smaller type in terms of memory size needed to store the type in a memory). An approximation to the least upper bound of structural type A and structural type B is computed in an efficient manner by computing a union of structural type A and structural type B. A least upper bound is less precise than a union, however despite that difference, the process of
The primitive digital twin checks 1128 whether the least upper bound result is different from the inferred type. If so, the inferred type is set 1130 to be the least upper bound result and the process continues at operation 1132 by checking the size of the inferred type. If the check at operation 1128 shows that the least upper bound result is the same as the current inferred type then the process moves directly to operation 1132.
At operation 1132, if the inferred type is larger than a threshold the inferred type is simplified 1134 in order to reduce its size. In an example, to simplify an EnumType comprising a list of values a range type is computed which expresses the range of values in the EnumType rather than listing each of the values in the EnumType. More generally, an inferred type is simplified by using the structural type hierarchy of
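The union, least-upper-bound approximation and simplification steps can be sketched with two toy structural types, an EnumType and a range type. The class names echo the description but the details are illustrative assumptions:

```python
class EnumType:
    """Structural type listing each distinct observed value."""
    def __init__(self, values):
        self.values = set(values)

    def size(self):
        return len(self.values)

class RangeType:
    """Structural type recording only the range of observed values."""
    def __init__(self, low, high):
        self.low, self.high = low, high

def union(a, b):
    """Approximate the least upper bound of two structural types by
    computing their union."""
    if isinstance(a, EnumType) and isinstance(b, EnumType):
        return EnumType(a.values | b.values)
    xs = [a.low, a.high] if isinstance(a, RangeType) else list(a.values)
    ys = [b.low, b.high] if isinstance(b, RangeType) else list(b.values)
    return RangeType(min(xs + ys), max(xs + ys))

def simplify(inferred, threshold):
    """Move up the type hierarchy when the inferred type is too large:
    a big EnumType becomes the RangeType covering its values."""
    if isinstance(inferred, EnumType) and inferred.size() > threshold:
        return RangeType(min(inferred.values), max(inferred.values))
    return inferred

# Fold each decoded literal into the inferred type, simplifying as needed.
inferred = EnumType([1])
for literal in [2, 3, 4, 5]:
    inferred = simplify(union(inferred, EnumType([literal])), threshold=3)

print(type(inferred).__name__, inferred.low, inferred.high)  # RangeType 1 5
```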
After the inferred type has been simplified at operation 1134, or has been found to be smaller than the threshold at operation 1132, the process returns to operation 1122 at which the next value from the decoded event stream is taken to be processed using the method of
The process of
A data source 1206 of captured event data is fed to a computing device 1202 executing the primitive digital twin, such as an edge device or other computing device. The primitive digital twin buffers event data items, of the same structural type, for K events from the event data stream in buffer 1200. It computes the union between pairs of event data items in the buffer to produce a field of a schema 1204. The buffer is then emptied. This process repeats for other structural types, one for each field of the schema. Note that the primitive digital twin has the structural type information since this has been computed using the process of
Computing the union is a fast, efficient and effective way of enabling the computing device to retain useful parts of the event data in the schema and discard the majority of the event data. Thus the computing device is able to operate for huge amounts of event data without breaking down or introducing errors.
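A minimal sketch of buffering K events and folding the union over the buffer to produce a schema field follows, assuming range-typed items represented as (low, high) pairs; the names and values are illustrative:

```python
from functools import reduce

K = 4  # buffer size (hypothetical)

def union(a, b):
    """Union of two range-typed event data items, each (low, high)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def schema_field_from_buffer(buffer):
    """Fold the union over pairs of buffered events to produce one
    schema field that covers all of them."""
    return reduce(union, buffer)

# Hypothetical buffer of K range-typed temperature readings.
buffer = [(20.0, 21.0), (18.0, 19.5), (22.0, 25.0), (19.0, 20.0)]
field = schema_field_from_buffer(buffer)
print(field)  # (18.0, 25.0)
buffer.clear()  # the buffer is emptied once the field is computed
```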
As mentioned above a process of distributed inference between two or more primitive digital twins takes place in order to infer digital twins and infer relationships between the digital twins as now described with reference to
The digital twin at the computing device selects 1302 one of the other primitive digital twins. The selection is random or according to one or more heuristics. An example of a heuristic is to select a digital twin with the closest physical proximity.
The digital twin at the computing device gossips 1304 with the selected primitive digital twin using a communications channel between the computing device and the selected primitive digital twin, referred to as a gossip channel. Gossiping means sending and receiving data about dynamic schemas or event data. The computing device compares 1306 the sent and received data. If a potential correlation is detected 1308 between the sent and received data then a bandwidth of the gossip channel is increased. If a potential correlation is not detected then the process returns to operation 1300 and another one of the other primitive digital twins is selected at operation 1302. Any well known statistical process is used to compute the correlation.
If a potential correlation is found at check 1308 and the correlation is above a first threshold amount but below a second threshold amount, the process proceeds to operation 1310. At operation 1310 the bandwidth of the gossip channel between the present digital twin and the other primitive digital twin which was selected at operation 1302 is increased. The increased bandwidth is used to gossip larger amounts of data so that finer grained data is communicated between the gossip partners of the gossip channel. Once the larger amounts of data are gossiped an assessment of correlation between the data sent and received over the gossip channel is made. The assessment is indicated at check point 1312 of
When two primitive digital twins are aggregated this is done by deleting one of the two primitive digital twins after having redirected the event stream of the deleted primitive digital twin to the remaining primitive digital twin of the two. When two primitive digital twins are found to have a peer relation there is no change to the digital twins themselves, although these two digital twins now have stored information indicating the identity of a peer.
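The two-threshold decision logic of operations 1308 to 1314 can be sketched as below; the threshold values are illustrative assumptions, not taken from the source:

```python
def gossip_decision(correlation, first_threshold=0.5, second_threshold=0.9):
    """Map a correlation score between gossiped data streams onto
    the actions described above (thresholds are illustrative)."""
    if correlation < first_threshold:
        return "select another gossip partner"
    if correlation < second_threshold:
        return "increase gossip channel bandwidth"
    return "aggregate digital twins or record peer relationship"

print(gossip_decision(0.2))   # weak correlation: try a different partner
print(gossip_decision(0.7))   # moderate: gossip finer grained data
print(gossip_decision(0.95))  # strong: aggregate or mark as peers
```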
Operation 1314 is also reached directly from check 1308 in cases where the correlation assessed at check 1308 is above a second threshold which is higher than the first threshold.
In this way the method of
The method of
Computing-based device 1400 comprises one or more processors 1402 which are microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to compute predictions, execute online training, periodically retrain using significant training data and compute predictions for hypothetical scenarios. In some examples, for example where a system on a chip architecture is used, the processors 1402 include one or more fixed function blocks (also referred to as accelerators) which implement a part of the method of any of
The computer executable instructions are provided using any computer-readable media that is accessible by computing based device 1400. Computer-readable media includes, for example, computer storage media such as memory 1412 and communications media. Computer storage media, such as memory 1412, includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or the like. Computer storage media includes, but is not limited to, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), electronic erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that is used to store information for access by a computing device. In contrast, communication media embody computer readable instructions, data structures, program modules, or the like in a modulated data signal, such as a carrier wave, or other transport mechanism. As defined herein, computer storage media does not include communication media. Therefore, a computer storage medium should not be interpreted to be a propagating signal per se. Although the computer storage media (memory 1412) is shown within the computing-based device 1400 it will be appreciated that the storage is, in some examples, distributed or located remotely and accessed via a network or other communication link (e.g. using communication interface 1414).
The computing-based device 1400 optionally comprises an input/output controller 1416 arranged to output display information to an optional display device 1418 which may be separate from or integral to the computing-based device 1400. The display information may provide a graphical user interface such as for displaying inferred types, schemas, inferred key relations, inferred digital twins and other data. The input/output controller 1416 is also arranged to receive and process input from one or more devices, such as a user input device 1420 (e.g. a mouse, keyboard, camera, microphone or other sensor). In some examples the user input device 1420 detects voice input, user gestures or other user actions and provides a natural user interface (NUI). This user input may be used to set parameter values, view results and for other purposes. In an embodiment the display device 1418 also acts as the user input device 1420 if it is a touch sensitive display device.
The term ‘computer’ or ‘computing-based device’ is used herein to refer to any device with processing capability such that it executes instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the terms ‘computer’ and ‘computing-based device’ each include personal computers (PCs), servers, mobile telephones (including smart phones), tablet computers, set-top boxes, media players, games consoles, personal digital assistants, wearable computers, and many other devices.
The methods described herein are performed, in some examples, by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the operations of one or more of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. The software is suitable for execution on a parallel processor or a serial processor such that the method operations may be carried out in any suitable order, or simultaneously.
This acknowledges that software is a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions are optionally distributed across a network. For example, a remote computer is able to store an example of the process described as software. A local or terminal computer is able to access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a digital signal processor (DSP), programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
The term ‘subset’ is used herein to refer to a proper subset such that a subset of a set does not comprise all the elements of the set (i.e. at least one of the elements of the set is missing from the subset).
It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this specification.