Example embodiments may relate to an apparatus, method and/or computer program for the updating, or tuning, of classifiers, for example classifiers which include a computational model and which provide at least a positive or negative output in response to input data.
A classifier may comprise a computational system, or part thereof, which may include a computational model such as a trained machine learning model, e.g. a neural network. The computational model may be a predictive model, i.e. one which receives input data and generates therefrom an output value representing a prediction that the input data falls within either a positive or negative class. This output value may be classified as a positive or negative class based on a threshold value with which the output value is compared. This threshold value may be predefined at the time of provisioning the classifier to an end-user. Classifying the input data to a positive class may result in some further processing operations related to an intended task of the classifier. For example, a voice-activated digital assistant may comprise a classifier for determining whether a received voice utterance includes a wake command. The input data may comprise a digital representation of the received utterance and a predictive model may be used to output a value which is compared with the threshold value to determine if the utterance comprises the wake word, i.e. a positive classification, or does not, i.e. a negative classification. If a positive classification, one or more other processing operations may be performed, e.g. performing queries or tasks based on one or more other words in the utterance, or a subsequent utterance.
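By way of a minimal illustration, the following Python sketch shows such a threshold comparison for a wake-word classifier; the scores, the threshold of 0.7 and the printed actions are assumptions made for the example, not details taken from any embodiment.

```python
# Minimal sketch of threshold-based binary classification for a wake-word
# detector. Scores, threshold and actions are illustrative assumptions.
def classify(output_value, threshold):
    # Positive class if the model's output value falls at or above the threshold.
    return output_value >= threshold

def on_utterance(output_value, threshold=0.7):
    if classify(output_value, threshold):
        # Positive classification: trigger further processing operations,
        # e.g. handling the query that follows the wake command.
        print("wake command detected - handling follow-up")
    else:
        # Negative classification: no further processing is triggered.
        print("no wake command detected")

on_utterance(0.82)  # positive: 0.82 >= 0.7
on_utterance(0.41)  # negative: 0.41 < 0.7
```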
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
According to a first aspect, there is described an apparatus comprising: means for receiving data indicative of a positive or negative classification based on comparing an output value, generated by a computational model responsive to an input data, with a threshold value which divides a range of output values of the computational model into positive and negative classes of output values, a positive or a negative classification being usable by the apparatus, or another apparatus, to trigger one or more processing operations; means for determining that the positive or negative classification is a false classification based on one or more events detected subsequent to generation of the output value; and means for updating the threshold value responsive to determining that the positive or negative classification is a false classification.
The means for determining a false classification may be configured to determine the false classification based on feedback data indicative of the one or more events detected subsequent to generation of the output value.
Responsive to a positive classification, a false classification may be determined based on the feedback data indicating a negative classification event associated with one or more further classification processes triggered by the positive classification.
Responsive to a positive classification, a false classification may be determined based on the feedback data indicating a negative interaction event associated with the computational model.
A negative interaction event may be indicated by the feedback data if no input data, or input data below a predetermined threshold, is received by the computational model within a predetermined time period subsequent to generation of the output value.
Responsive to a negative classification, a false classification may be determined based on the feedback data indicating a repeat interaction event associated with the computational model.
A repeat interaction event may be indicated by the feedback data if the computational model receives the same or similar input data to the previous input data within a predetermined time period subsequent to generation of the output value.
The updating means may be configured such that: for a false positive classification, the updated threshold value has a value within the positive class of output values; or for a false negative classification, the updated threshold value has a value within the negative class of output values.
The updating means may be configured to modify the threshold value by a predetermined amount d within the respective positive or negative classes of output values and to use the modified threshold value as the updated threshold value if the modified threshold value satisfies a predetermined rule.
The predetermined rule may be satisfied if:
(FN tolerance value−k)*(FN tolerance value−modified threshold value), or
(FP tolerance value−k)*(FP tolerance value−modified threshold value)
is a positive value, where FN tolerance value is a predefined first tolerance value within the positive class of output values, FP tolerance value is a predefined second tolerance value within the negative class of output values and k is the threshold value.
The updating means may be configured, responsive to the predetermined rule not being satisfied: for a false positive classification, to use FN tolerance value as the updated threshold value; or for a false negative classification, to use FP tolerance value as the updated threshold value.
The predetermined amount d may be dynamically changeable based, at least in part, on the generated output value of the computational model.
The predetermined amount d may be dynamically changeable based, at least in part, on the difference between the threshold value and the generated output value of the computational model.
The predetermined amount d may be dynamically changeable as:
d=|k−c|/n
where k is the threshold value, c is the generated output value of the computational model and n is a positive value.
The apparatus may comprise at least part of a digital assistant, wherein the computational model is configured to receive input data representing a user utterance and/or gesture and to generate an output value indicative of whether the utterance and/or gesture corresponds to a wakeup command for the digital assistant, a positive classification being usable to trigger one or more processing operations for performance by the digital assistant.
The one or more processing operations may comprise one or more of: responding to a query received after the wakeup command; and controlling a remote electronic system in communication with the digital assistant.
The apparatus may comprise the computational model and one or more sensors for providing the input data to the computational model.
The means may comprise: at least one processor; and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the performance of the apparatus.
According to a second aspect, there is described a method comprising: receiving data indicative of a positive or negative classification based on comparing an output value, generated by a computational model responsive to an input data, with a threshold value which divides a range of output values of the computational model into positive and negative classes of output values, a positive or negative classification being usable by the apparatus, or another apparatus, to trigger one or more processing operations; determining that the positive or negative classification is a false classification based on one or more events detected subsequent to generation of the output value; and updating the threshold value responsive to determining that the positive or negative classification is a false classification.
Determining a false classification may be based on feedback data indicative of the one or more events detected subsequent to generation of the output value.
Responsive to a positive classification, a false classification may be determined based on the feedback data indicating a negative classification event associated with one or more further classification processes triggered by the positive classification.
Responsive to a positive classification, a false classification may be determined based on the feedback data indicating a negative interaction event associated with the computational model.
A negative interaction event may be indicated by the feedback data if no input data, or input data below a predetermined threshold, is received by the computational model within a predetermined time period subsequent to generation of the output value.
Responsive to a negative classification, a false classification may be determined based on the feedback data indicating a repeat interaction event associated with the computational model.
A repeat interaction event may be indicated by the feedback data if the computational model receives the same or similar input data to the previous input data within a predetermined time period subsequent to generation of the output value.
The updating may be such that: for a false positive classification, the updated threshold value has a value within the positive class of output values; or for a false negative classification, the updated threshold value has a value within the negative class of output values.
The updating may modify the threshold value by a predetermined amount d within the respective positive or negative classes of output values and use the modified threshold value as the updated threshold value if the modified threshold value satisfies a predetermined rule.
The predetermined rule may be satisfied if:
(FN tolerance value−k)*(FN tolerance value−modified threshold value), or
(FP tolerance value−k)*(FP tolerance value−modified threshold value)
is a positive value, where FN tolerance value is a predefined first tolerance value within the positive class of output values, FP tolerance value is a predefined second tolerance value within the negative class of output values and k is the threshold value.
The updating may be such that, responsive to the predetermined rule not being satisfied: for a false positive classification, FN tolerance value is used as the updated threshold value; or for a false negative classification, FP tolerance value is used as the updated threshold value.
The predetermined amount d may be dynamically changeable based, at least in part, on the generated output value of the computational model.
The predetermined amount d may be dynamically changeable based, at least in part, on the difference between the threshold value and the generated output value of the computational model.
The predetermined amount d may be dynamically changeable as:
d=|k−c|/n
where k is the threshold value, c is the generated output value of the computational model and n is a positive value.
The method may be performed by at least part of a digital assistant, wherein the computational model is configured to receive input data representing a user utterance and/or gesture and to generate an output value indicative of whether the utterance and/or gesture corresponds to a wakeup command for the digital assistant, a positive classification being usable to trigger one or more processing operations for performance by the digital assistant.
The one or more processing operations may comprise one or more of: responding to a query received after the wakeup command; and controlling a remote electronic system in communication with the digital assistant.
According to a third aspect, there is provided a computer program product comprising a set of instructions which, when executed on an apparatus, is configured to cause the apparatus to carry out the method of any preceding method definition.
According to a fourth aspect, there is provided a non-transitory computer readable medium comprising program instructions stored thereon for performing a method, comprising: receiving data indicative of a positive or negative classification based on comparing an output value, generated by a computational model responsive to an input data, with a threshold value which divides a range of output values of the computational model into positive and negative classes of output values, a positive or negative classification being usable by the apparatus, or another apparatus, to trigger one or more processing operations; determining that the positive or negative classification is a false classification based on one or more events detected subsequent to generation of the output value; and updating the threshold value responsive to determining that the positive or negative classification is a false classification.
The program instructions of the fourth aspect may also perform operations according to any preceding method definition of the second aspect.
According to a fifth aspect, there is provided an apparatus comprising: at least one processor; and at least one memory including computer program code which, when executed by the at least one processor, causes the apparatus to: receive data indicative of a positive or negative classification based on comparing an output value, generated by a computational model responsive to an input data, with a threshold value which divides a range of output values of the computational model into positive and negative classes of output values, a positive or negative classification being usable by the apparatus, or another apparatus, to trigger one or more processing operations; determine that the positive or negative classification is a false classification based on one or more events detected subsequent to generation of the output value; and update the threshold value responsive to determining that the positive or negative classification is a false classification.
The computer program code of the fifth aspect may also perform operations according to any preceding method definition of the second aspect.
Example embodiments will now be described, by way of non-limiting example, with reference to the accompanying drawings.
Example embodiments may relate to apparatuses, methods and/or computer programs for the updating, or tuning, of classifiers.
As mentioned above, a classifier may comprise a computational apparatus, part of a computational apparatus, or part of a network of computational apparatuses which may include a predictive model such as a machine learning model, e.g. a neural network or related model. The predictive model may receive input data and may generate therefrom, usually by means of an algorithm, an output value representing a prediction that the input data falls within at least a positive or negative class. This output value may be classified as a positive or negative class based on a threshold value with which the output value is compared. Classifying the input data to a positive class may result in some further processing operations related to the intended function of the classifier.
As used herein, the terms “positive” and “negative” are not intended to limit example embodiments to classes using these labels. Other respective labels, such as “true” and “false”, “1” and “0”, and many others may be used. The term “positive” in this sense means that an output value falling within this class may cause triggering of one or more further processing operations that comprise an intended function of an apparatus or system. Additionally, or alternatively, classifying the input data to a negative class may result in one or more other processing operations related to one or more other intended functions of the classifier.
For the avoidance of doubt, example embodiments encompass classifiers that may first classify input data into two classes (a binary classification task) or one of three (or more) labelled classes (a multi-class classification task). Where there are two possible labelled classes, e.g. “correct” or “incorrect”, it follows that a “positive” classification may correspond to “correct” and a negative classification may correspond to “incorrect”. In the case of two possible labelled classes such as “class 1” and “class 2”, a positive classification may be “class 1”, such as “dog”, and a negative classification may be “class 2”, such as “cat”, or vice versa. Where there are three (or more) labelled classes, a classifier adapted for “positive” or “negative” classifications can still be used. For example, in image processing, input data representing an image of an animal may first be classified into one of three labelled classes, e.g. “dog”, “cat” or “horse”. In this context, in one formulation of the classifier disclosed herein, a positive classification may correspond to one of these labelled classes, e.g. “dog”, and a negative classification may correspond to “not dog”, i.e. covering the “cat” or “horse” labels. Other classifier formulations can be used for the other respective labelled classes, i.e. one where a positive classification may correspond to “cat” and the negative classification to “not cat”, i.e. covering the “dog” or “horse” labels, and one where a positive classification may correspond to “horse” and the negative classification to “not horse”, i.e. covering the “dog” or “cat” labels. Each classifier formulation may have its own threshold value that can be updated or tuned according to example embodiments disclosed herein.
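By way of illustration only, the following Python sketch shows such a one-vs-rest formulation; the labels, per-class scores and threshold values are hypothetical assumptions for the example, not part of any embodiment.

```python
# Illustrative one-vs-rest treatment of a multi-class task. Each label has
# its own binary classifier ("dog" vs "not dog", etc.) with its own tunable
# threshold value; all scores and thresholds here are assumed.
thresholds = {"dog": 0.6, "cat": 0.5, "horse": 0.55}

def one_vs_rest(scores):
    # True means a positive classification for that label; False means
    # the complementary "not <label>" (negative) class.
    return {label: scores[label] >= thresholds[label] for label in thresholds}

print(one_vs_rest({"dog": 0.8, "cat": 0.1, "horse": 0.2}))
# {'dog': True, 'cat': False, 'horse': False}
```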
Classifiers are currently embedded in large numbers of real-world devices, including so-called Internet-of-Things (IoT) devices, which may include wearables such as heart monitors, smart watches and earphones, and/or a multitude of other personal, home and healthcare appliances such as mobile communication devices, smart speakers, etc. Input data to such classifiers may be received from one or more sensors on, or connected to, such devices, wherein a sensor is, for example, a microphone, a motion sensor, a camera, a gesture detection sensor, a touch sensor, a speed sensor, an acceleration sensor, an IMU (inertial measurement unit), a GNSS (global navigation satellite system) receiver, a radio receiver/transmitter, etc., or any combination thereof. The input data may comprise digital representations of, for example, a user's voice, gesture, movement (e.g. falling, walking or running), temperature, heart rate, location and so on, or any combination thereof. An advantage of using embedded classifiers is that the input data can be processed in-situ, reducing privacy concerns and communications overhead.
The classifier 100 may receive input data 102 from one or more data sources, e.g. one or more sensors which may form part of an apparatus in which the classifier 100 is embedded. Alternatively, the one or more sensors are separate from the apparatus but communicatively connected to it, such as via a wireless or wireline connection. A computational model 104, which may comprise any suitable form of computational model/analysis/algorithm, such as a trained neural network, may be configured to receive the input data 102 and to generate an output value 105 representing a prediction that the input data corresponds to at least a positive or negative class. The output value 105 may be compared to a threshold value 106 which is a value dividing a range of available output values of the computational model 104 into at least positive and negative classes. A classification labelling module 108 may label the input data 102, for a current time instance, as assigned to either a positive class or negative class based on which side of the threshold value 106 the output value falls. One or more further processing operations 110 may result, at least from a positive classification, such as performing some intended function of an apparatus or system of which the classifier 100 forms part, or some other apparatus or system in communication with the classifier.
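The following structural sketch in Python mirrors these components for illustration; the toy scoring model, threshold value and printed action are assumptions made purely for the example.

```python
# Structural sketch mirroring the components described above: computational
# model 104, threshold value 106, classification labelling 108 and further
# processing operations 110. The "model" is a stand-in, not a trained network.
class Classifier:
    def __init__(self, model, threshold):
        self.model = model          # computational model (e.g. neural network)
        self.threshold = threshold  # divides the output range into classes

    def run(self, input_data):
        output_value = self.model(input_data)      # model generates output value
        positive = output_value >= self.threshold  # labelling vs threshold
        if positive:
            self.further_processing(input_data)    # further processing operations
        return positive

    def further_processing(self, input_data):
        print("triggered intended function for:", input_data)

clf = Classifier(model=lambda x: len(x) / 10.0, threshold=0.5)  # toy scoring model
clf.run("hey assistant")  # toy score 1.3 >= 0.5 -> positive classification
```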
A practical example may be considered in the context of a voice-controlled digital assistant which comprises the above-described classifier 100. This is merely an example; classifiers which take, as input data, data other than that representing a user's voice, such as any one or more of the above examples, may be used.
The classifier 100 may receive input data 102 in the form of a digital representation of a user's voice, i.e. a spoken utterance of one or more words, received by one or more microphones. The computational model 104 may generate an output value 105 which represents a score indicative of a probability that at least part of the utterance falls within a positive or negative class; which class applies is determined by comparison with the threshold value 106. The output value 105 may be used, therefore, to detect whether the utterance comprises one or more wake commands for enabling the digital assistant, e.g. “hey assistant” or “hello assistant.”
One can consider probability distributions associated with the range of available output values for the positive and negative classes, the probability distributions being distinct and partially separated. As such, by use of the threshold value 106, the classification labelling module 108 determines which of the positive and negative classes the input data 102 is assigned to, i.e. based on whether the output value is above or below the threshold. A positive classification of the input data, indicative in this example of the wake command being detected in the utterance, may result in one or more further processing operations 110 that the digital assistant is configured to perform.
The one or more further processing operations may include one or more of: responding to a query (e.g. “what time is it?”, “what is the temperature in my home?”, “how many kilometers have I run today?”, and/or “what is my heart rate?”) and/or controlling an electrical or electronic system (e.g. “turn on/off my central heating system”, “turn on/off the ignition of my car”, “lock/unlock my front door” and/or “turn on/off my bedroom lights in ten minutes.”)
Such query or control commands may be comprised in the same input data, i.e. as utterances that immediately follow the wake command utterance (e.g. “hey assistant, what time is it?”), or in follow-up input data received after the digital assistant signals it has been enabled by the wake command.
Classifiers may form part of a stand-alone or a multi-stage classification process.
An example of a stand-alone classification process is one where a single classifier, or classification process, is used to perform an intended function.
For example, an intended function, e.g. controlling an electrical or electronic system, may result from a positive classification being assigned to received input data by one classifier. For example, a positive classification may result from a particular utterance or gesture being received by a smart watch. Other utterances or gestures may result in a negative classification and the intended function is not then performed, although an error message may be issued by default.
An example of a multi-stage classification process is one where multiple classifiers or classification processes are used, in series or in parallel, to perform an intended function.
For example, referring back to the voice-controlled digital assistant example, a first classifier may be used for assigning a positive or negative classification to an initial part of an utterance, for detecting the wake command, and a second classifier may be used for assigning a positive or negative classification to a subsequent part of the utterance, or a follow-up utterance.
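A two-stage process of this kind might be sketched in Python as follows; the stage scores, threshold values and returned strings are illustrative assumptions.

```python
# Sketch of a multi-stage (here, two-stage) classification process: a first
# classifier gates on the wake command, a second classifies the follow-up
# command. All scores and thresholds are assumed for illustration.
def wake_stage(wake_score, k1=0.6):
    return wake_score >= k1          # first classifier: wake command?

def command_stage(command_score, k2=0.5):
    return command_score >= k2       # second classifier: command recognised?

def pipeline(wake_score, command_score):
    if not wake_stage(wake_score):
        return "ignored"             # negative classification at stage one
    if not command_stage(command_score):
        # A negative result at stage two is feedback suggesting stage one
        # may have produced a false positive (see the labelling flow below).
        return "wake accepted, command not recognised"
    return "command executed"

print(pipeline(0.9, 0.8))  # command executed
print(pipeline(0.9, 0.2))  # wake accepted, command not recognised
```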
A vertical axis 200 of the graph represents probability and a horizontal axis 202 of the graph represents available output values, or scores, that may be produced by a computational model, such as the computational model 104 described above.
Therefore, for a given threshold value, there will be an associated false positive rate (FPR) and false negative rate (FNR). For classifiers, or apparatuses comprising classifiers which have a fixed threshold value, i.e. a preset threshold value determined prior to deployment to end-users, this can result in a number of drawbacks that example embodiments may alleviate or avoid. For example, although a preset threshold value may be determined by manufacturers to perform well on average, i.e. for a potentially large number of end-users, it may not take into account specific characteristics of end-users and/or application contexts. As a result, a deployed classifier may perform badly for some end-users and/or application contexts, making it potentially discriminatory or unfair to some users and unreliable with the potential for random failures. This may have severe consequences for high-risk applications, such as switching on a critical device or calling emergency services.
Example embodiments may alleviate or avoid such drawbacks by way of determining that, based on a threshold value, a classification is a false classification and then, based on it being false, updating (or tuning) the threshold value.
A first operation 302 may comprise receiving data indicative of a positive or negative classification based on comparing an output value, generated by a computational model responsive to an input data, with a threshold value which divides a range of output values of the computational model into positive and negative classes of output values, a positive or a negative classification being usable by the apparatus, or another apparatus, to trigger one or more processing operations.
A second operation 304 may comprise determining that the positive or negative classification is a false classification based on one or more events detected subsequent to generation of the output value.
A third operation 306 may comprise updating the threshold value responsive to determining that the positive or negative classification is a false classification.
Regarding the first operation 302, data indicative of the positive or negative classification may be received from part of a classifier configured to label, or assign, a current input value with a positive or negative classification, e.g. the classification labelling module 108 described above.
As mentioned above, a positive classification is usable by the apparatus, such as any apparatus of which the classifier forms a part, or another apparatus (e.g. an IoT apparatus in communication with the apparatus of which the classifier forms a part) to trigger one or more processing operations, examples of which are given above.
Regarding the second operation 304, the one or more events detected subsequent to generation of the output value 105 (or the positive or negative classification based on the output value) may be any detectable event which may or may not relate to the triggered one or more processing operations. For example, the event may be based on a subsequent user interaction with the apparatus, a subsequent user interaction with another apparatus, or a lack of user interaction when expected, such as within a predetermined time period from generation of the output value 105.
In some example embodiments, detection of the one or more events may be based on feedback data indicative of the one or more events. For the avoidance of doubt, the absence of feedback data, e.g. within a predetermined time period, may still be indicative of an event.
For example, responsive to a positive classification, a false positive classification may be determined based on feedback data indicating a negative classification event associated with one or more further classification processes triggered by the positive classification, e.g. in a multi-stage classifier process. For example, taking the above voice-controlled digital assistant example, an utterance received subsequent to a positively-classified wake command utterance may be assigned a negative classification, e.g. it is not recognised by a subsequent classifier. In this case, the positively classified wake command may be determined in the second operation 304 as false, i.e. a false positive classification. This may also be the case if the feedback data is indicative of a negative interaction event, i.e. if no further input data or input data below a predetermined threshold (e.g. a partial or low-volume utterance) is received subsequent to generation of the output value.
For example, responsive to a negative classification, a false negative classification may be determined based on feedback data indicating a repeat interaction event associated with the computational model. For example, a repeat interaction event may be indicated if the computational model receives the same or similar input data to the previous input data within a predetermined time period subsequent to generation of the output value. For example, taking the above voice-controlled digital assistant example, a second utterance received subsequent to a negatively-classified first utterance may result in the first utterance's classification being determined as false, i.e. a false negative classification, if the second utterance is substantially a repeat of the first utterance, i.e. the same or similar.
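One way such a repeat interaction event might be detected is sketched below in Python; the similarity measure, similarity threshold and time window are assumptions for the example rather than requirements of any embodiment.

```python
from difflib import SequenceMatcher

# Sketch of detecting a "repeat interaction event": the same or similar
# input arriving again within a predetermined time period after a negative
# classification. The window length and similarity threshold are assumed.
WINDOW_S = 10.0        # assumed predetermined time period, in seconds
SIMILARITY_MIN = 0.8   # assumed similarity threshold

def is_repeat_interaction(prev_input, prev_time, new_input, new_time):
    within_window = (new_time - prev_time) <= WINDOW_S
    similar = SequenceMatcher(None, prev_input, new_input).ratio() >= SIMILARITY_MIN
    return within_window and similar

# A repeated "hey assistant" arriving 3 s after a negatively classified
# first attempt suggests the first classification was a false negative.
print(is_repeat_interaction("hey assistant", 0.0, "hey assistant!", 3.0))  # True
```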
The classifier 400 is similar to the classifier 100 shown in, and described with respect to, the earlier example, with corresponding elements given corresponding reference numerals.
A true or false labelling module 412 may be configured to receive the output value 405, the classification from the classification labelling module 408 and, optionally, feedback data that may relate to one or more further processing operations 410, although not necessarily so, as will become clear. The true or false labelling module 412 may be configured to determine, using example tests mentioned above and/or below, whether the classification for the current output value 405 is a true or false classification. The determination may be passed to a threshold value updater 414. If false, the threshold value updater 414 may be configured to determine an updated threshold value which may update the threshold value 406 used in the next round or iteration of the classifier 400. If true, the threshold value 406 may remain the same.
A first, upper part 502 of the flow diagram indicates processes performed by the true or false labelling module 412.
A first operation 506 may comprise receiving a classification outcome, i.e. positive or negative, from the classification labelling module 408 described above.
If positive, subsequent operations depend on whether the classifier 400 is part of a stand-alone or a multi-stage classification process. This is indicated by a second operation 508 in which “PRESENT” indicates a multi-stage classification process and “ABSENT” indicates a stand-alone classification process.
If a multi-stage classification process, a third operation 510 determines if a next stage, i.e. one or more subsequent classification processes, results in a positive (TRUE) or negative (FALSE) classification. If positive (TRUE) then in a fourth operation 512, the true or false labelling module 412 assigns a TRUE POSITIVE label to the classification outcome. If negative (FALSE) then in a fifth operation 516, the true or false labelling module 412 assigns a FALSE POSITIVE label to the classification outcome.
As indicated by the dashed region 540, such operations indicate that the true or false labelling module 412 makes its determination based on supervision from follow-up algorithmic components as opposed to, for example, the result of user interactions subsequently received.
Returning to the second operation 508, if a stand-alone classification process is determined, in a sixth operation 514 the true or false labelling module 412 determines if there are one or more follow-on requests from the end user. If further input data is received from the end user, e.g. within a predetermined time period and/or above a predetermined threshold, then this may be indicative of a follow-on request. As in the fourth operation 512, this may result in assignment of a TRUE POSITIVE label because it signifies that the end-user continues to use the classifier 400. If no further input data is received from the end user, then this may be indicative of a negative interaction event and, as in the fifth operation 516, a FALSE POSITIVE label may be assigned to the classification outcome.
If the outcome from the first operation 506 is a negative outcome, then an operation 518 may determine if there is a repeat interaction event associated with the computational model 404. For example, a repeat interaction event may be indicated if the computational model 404 receives the same or similar input data to the previous input data within a predetermined time period subsequent to generation of the output value 405. If the true or false labelling module 412 determines no repeat interaction event in operation 518, then a TRUE NEGATIVE label may be assigned to the classification outcome in a seventh operation 520. If there is a repeat interaction event in operation 518, a FALSE NEGATIVE label may be assigned to the classification outcome in an eighth operation 522.
As indicated by the dashed region 542, such operations indicate that the true or false labelling module 412 makes its determination based on supervision from follow-up user interactions.
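The labelling flow just described might be expressed in Python as follows; the boolean flags are assumed to have been derived from the feedback data and follow-up classification stages discussed above.

```python
from enum import Enum

class Label(Enum):
    TRUE_POSITIVE = "TP"
    FALSE_POSITIVE = "FP"
    TRUE_NEGATIVE = "TN"
    FALSE_NEGATIVE = "FN"

def label_outcome(positive, multi_stage,
                  next_stage_positive=False,
                  follow_on_request=False,
                  repeat_interaction=False):
    """Sketch of the true or false labelling flow described above; all
    flags are assumed to be derived from feedback data."""
    if positive:
        if multi_stage:
            # Supervision from a follow-up classification stage (operations 510-516).
            return Label.TRUE_POSITIVE if next_stage_positive else Label.FALSE_POSITIVE
        # Stand-alone: supervision from follow-up user interaction (operation 514).
        return Label.TRUE_POSITIVE if follow_on_request else Label.FALSE_POSITIVE
    # Negative outcome: a repeat interaction suggests a false negative (518-522).
    return Label.FALSE_NEGATIVE if repeat_interaction else Label.TRUE_NEGATIVE

print(label_outcome(positive=True, multi_stage=True, next_stage_positive=False))
# Label.FALSE_POSITIVE
```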
In the case that a FALSE POSITIVE or FALSE NEGATIVE label is assigned by the true or false labelling module 412, the threshold value updater 414 may perform the operations indicated in a second, lower part 504 of the flow diagram.
The threshold value updater 414 may store its own internal threshold value, which may initially be the same as the current threshold value indicated by reference numeral 406, but is not necessarily the same because the nature of the updating process should adjust the current threshold value in the appropriate direction.
In general, for a FALSE POSITIVE label, as determined in the fifth operation 516, an updated threshold value may have a value within the positive class of output values. For a FALSE NEGATIVE classification, as determined in the eighth operation 522, the updated threshold value may have a value within the negative class of output values.
For example, the threshold value updater 414 may update its internal threshold value by a predetermined amount, or distance d, within the respective positive or negative classes, with a view to moving the threshold value towards the (unknown) mean of the relevant class. For example, in respect of a FALSE POSITIVE label in the fifth operation 516, a ninth operation 524 may comprise the threshold value updater 414 shifting its internal threshold value by the predetermined amount d within the positive class. For example, in respect of a FALSE NEGATIVE label in the eighth operation 522, a tenth operation 534 may comprise the threshold value updater 414 shifting its internal threshold value by the predetermined amount d within the negative class.
Having updated the internal threshold value, the threshold value updater 414 may test if the modified threshold value satisfies a predetermined rule prior to actually updating the current threshold value 406 used by the classifier 400.
The predetermined rule may be based on predefined tolerance values respectively associated with the positive and negative classes. The predefined tolerance values may be stored by the threshold value updater 414.
For example, following on from the ninth operation 524, an eleventh operation 526 may comprise testing the modified threshold value, which has been moved within the positive class, against a predefined false negative tolerance value (FNR tolerance value). If the modified threshold value is within the FNR tolerance value, the test is considered a pass, and hence the modified threshold value may be used in a twelfth operation 528 to update the threshold value 406 in a thirteenth operation 532. If, in the eleventh operation 526, the modified threshold value is the same or goes beyond the FNR tolerance value, then the test is considered a fail, and a fourteenth operation 530 updates the threshold value 406 to that of the FNR tolerance value.
Similarly, a fifteenth operation 536 may comprise testing the modified threshold value, which has been moved within the negative class, against a predefined false positive tolerance value (FPR tolerance value). If the modified threshold value is within the FPR tolerance value, the test is considered a pass, and hence the modified threshold value may be used in the twelfth operation 528 to update the threshold value 406 in the thirteenth operation 532.
If, in the fifteenth operation 536, the modified threshold value is the same or goes beyond the FPR tolerance value, then the test is considered a fail, and a sixteenth operation 538 updates the threshold value 406 to that of the FPR tolerance value. In general, the predetermined rule may be satisfied, or a pass, if:
(FNR tolerance value−k)*(FNR tolerance value−modified threshold value), or
(FPR tolerance value−k)*(FPR tolerance value−modified threshold value)
is a positive value, where FNR tolerance value is a predefined first tolerance value within the positive class of output values, FPR tolerance value is a predefined second tolerance value within the negative class of output values and k is the current threshold value 406.
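A minimal Python sketch of this update-and-test logic follows. It assumes, for illustration only, that higher output values are more "positive" (so the positive class lies above the threshold); the numeric values are likewise assumptions.

```python
def update_threshold(k, d, label, fnr_tol, fpr_tol):
    """Sketch of the threshold update described above, assuming the
    positive class lies above the threshold k: a false positive moves k
    into the positive class, a false negative into the negative class."""
    if label == "FP":
        candidate, tol = k + d, fnr_tol   # shift by d within the positive class
    elif label == "FN":
        candidate, tol = k - d, fpr_tol   # shift by d within the negative class
    else:
        return k  # true classifications leave the threshold unchanged

    # Predetermined rule: (tol - k) * (tol - candidate) must be positive,
    # i.e. the candidate must not reach or cross the tolerance value.
    if (tol - k) * (tol - candidate) > 0:
        return candidate
    return tol  # rule not satisfied: use the tolerance value itself

k = 0.50
k = update_threshold(k, d=0.05, label="FP", fnr_tol=0.70, fpr_tol=0.30)
print(k)  # 0.55: moved into the positive class, within the FNR tolerance
```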
In some example embodiments, the predetermined amount d may be dynamically changeable based, at least in part, on the generated output value 405 of the computational model 404. For example, the predetermined amount d may be dynamically changeable based, at least in part, on the difference between the current threshold value 406 and the generated output value of the computational model. For example, the predetermined amount d may be dynamically changeable as:
d=|k−c|/n
where k is the current threshold value 406, c is the generated output value 405 of the computational model and n is a positive value. For example, n may be “2” but is not necessarily so. If n is “2”, then this will move the updated threshold value halfway between the current threshold value 406 and the output value 405 of the computational model 404. A benefit of using a dynamic value for d is that it may lead to faster convergence to an optimal threshold value, although a benefit of using a fixed and relatively smaller value of d is that system performance will not fluctuate significantly as the threshold value converges to an optimal solution for the given end user and context.
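A short worked example in Python: the formula d=|k−c|/n is reconstructed here from the halfway-for-n=2 property noted above, and the threshold and output values are assumed.

```python
def dynamic_d(k, c, n=2.0):
    # d = |k - c| / n: the shift is a fraction of the distance between the
    # current threshold value k and the model's generated output value c.
    return abs(k - c) / n

k, c = 0.50, 0.80        # assumed current threshold value and output value
d = dynamic_d(k, c)      # 0.15
print(k + d)             # 0.65: halfway between threshold and output for n=2
```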
A vertical axis 600 of the graph represents probability and a horizontal axis 602 of the graph represents available output values, or scores, that may be produced by a computational model, such as the computational model 404 described above.
Example embodiments therefore provide for updating classifiers, particularly the threshold value of classifiers, in a way that may improve performance whilst achieving a degree of personalization based on end-user and/or application context, and which can be performed in-situ within a given apparatus after deployment by a manufacturer. The computational model and/or data labelling does not need changing per se, as updating is based on user actions and application context, which can apply to a wide range of applications.
Example embodiments may be performed at inference time, i.e. during use of the computational model 404 which forms part of the classifier 400, but could also be used at training time, e.g. to signal to a model developer that a computational model is not properly trained. A feedback loop can in this way be implemented using an output of the true or false labelling module 412, in case many FALSE POSITIVE or FALSE NEGATIVE labels are observed.
Example Apparatus
The apparatus comprises at least one processor 700 and at least one memory 701 directly or closely connected to the processor. The memory 701 includes at least one random access memory (RAM) 701a and at least one read-only memory (ROM) 701b. Computer program code (software) 705 is stored in the ROM 701b. The apparatus may be connected to one or more transmitters (TX) and receivers (RX), and/or to a wireline communication interface. The apparatus may, optionally, be connected with a user interface (UI) for instructing the apparatus and/or for outputting data. The at least one processor 700, with the at least one memory 701 and the computer program code 705, are arranged/configured to cause the apparatus to perform at least the method according to any preceding process, for example as disclosed in relation to the flow diagrams described above.
It is contemplated that the functions of the components of the classifier 400 as described above may be combined or performed by other components of equivalent functionality. The above presented components can be implemented in circuitry, hardware, firmware, software, or a combination thereof.
As used in this application, the term “circuitry” may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of hardware circuits and software, such as (as applicable) a combination of analog and/or digital hardware circuit(s) with software/firmware, and any portions of hardware processor(s) with software (including digital signal processor(s)), software and memory(ies) that work together to cause an apparatus to perform various functions; and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors), or a portion of a hardware circuit or processor, and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device, or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
Names of network elements, protocols, and methods are based on current standards. In other versions or other technologies, the names of these network elements and/or protocols and/or methods may be different, as long as they provide a corresponding functionality. For example, embodiments may be deployed in 2G/3G/4G/5G/6G telecommunication networks and further generations of 3GPP but also in non-3GPP radio networks such as short-range wireless communication networks, for example, a wireless local area network (WLAN), Bluetooth®, or optical wireless communication network. Additionally or alternatively, the embodiments may be deployed in one or more wired communication networks.
A memory may be volatile or non-volatile. It may be, e.g., a RAM, an SRAM, a flash memory, an FPGA block RAM, a DVD, a CD, a USB stick, or a Blu-ray disc.
If not otherwise stated or otherwise made clear from the context, the statement that two entities are different means that they perform different functions. It does not necessarily mean that they are based on different hardware; that is, each of the entities described in the present description may be based on different hardware, or some or all of the entities may be based on the same hardware. Nor does it necessarily mean that they are based on different software; that is, each of the entities described in the present description may be based on different software, or some or all of the entities may be based on the same software. Each of the entities described in the present description may be embodied in the cloud.
Implementations of any of the above described blocks, apparatuses, systems, techniques or methods include, as non-limiting examples, implementations as hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. Some embodiments may be implemented in the cloud.
It is to be understood that what is described above is what is presently considered the preferred embodiments. However, it should be noted that the description of the preferred embodiments is given by way of example only and that various modifications may be made without departing from the scope as defined by the appended claims.
Number | Date | Country | Kind |
---|---|---|---
20225025 | Jan 2022 | FI | national |