This disclosure relates to methods, devices and systems in a communications network. More particularly, but non-exclusively, the disclosure relates to determining, in coordination with a second device, whether a first device can perform a first action.
In a smart environment such as a smart home, there may be multiple Internet of Things (IoT) devices acting in the same space. Such devices may each have individual (e.g. private) goals that should be pursued in a manner that best satisfies the user, which in a smart home, for example, may be the tenant of the home. For instance, a noise controller may have a persistent goal to keep ambient noise below a certain threshold, an audio station may have a planned goal to play music for a period of time and a kettle may receive a spontaneous goal (or instruction) from the user to boil water. In such situations, the devices need to communicate information about their intentions and collectively make a decision based on incomplete and conflicting information. For instance, an audio station and a kettle operating in a common environment may be unaware that their simultaneous functioning yields noise over the threshold set for a noise controller.
The processes of information exchange and decision making may involve inquiry, persuasion, negotiation, and deliberation among the devices. The decision may need to be taken at any given step of the interaction, with all the available yet potentially incomplete and conflicting information, or else upon a complete exchange of information (e.g. according to planned goals and constraints). In any event, collective decision-making has to exhibit desirable properties, including distributed reasoning, run-time execution, successful termination, and satisfaction of goals and user preferences.
Argumentation protocols are a methodology that enables on-demand interaction and collective decision making among multiple devices, such as, for example, sensors, smart appliances and/or other IoT devices, in shared environments. Using argumentation protocols, the devices engage in so-called dialogue games governed by the rules of the argumentation protocol, whereby they communicate by sending public messages which are known as “utterances”. Utterances can contain claims, questions, counterclaims, supporting claims, etc., and these are sent between devices to collectively argue about their claims, including intents to take actions that allow them to achieve their individual goals. From an IoT device perspective, the main advantageous features of dialogue games are the following:
Effectively, then, the winners and losers of the game (i.e. whether the initial claim is established) can be determined at any point during the dialogue, leading to on-the-fly decision making.
An example argumentation protocol that may be used in an IoT setting is found in the paper by H. Prakken, entitled “Coherence and Flexibility in Dialogue Games for Argumentation,” J. Log. Comput., vol. 15, no. 6, pp. 1009-1040, Dec. 2005. Another example may be found in the paper by X. Fan and F. Toni, entitled “A General Framework for Sound Assumption-Based Argumentation Dialogues,” Artif. Intell., vol. 216, pp. 20-54, 2014, doi: 10.1016/j.artint.2014.06.001.
As noted above, an argumentation protocol is a procedural description (a set of rules) describing how agents are to communicate (abstractly - what information to exchange, in what order, etc.). A dialogue (game) is the object created during such communication, e.g. a collection of utterances.
Normally, a dialogue game admits a main claim, expressed via the initial utterance (by some device), towards acceptance of which the devices argue. The goal is for devices to evaluate the main claim, i.e. to collectively accept or reject it, at the same time evaluating acceptance of other utterances in the dialogue.
The following is an example of a formal definition of an utterance taken from X. Fan & F. Toni (2014):
Definition 3.1. An utterance from agent ai to agent aj (i, j=1, 2, i≠j) is a tuple <ai, aj, T, C, ID> where:
For illustration, <ai, aj, 0, claim(s), 1> is an utterance from agent ai to agent aj with the content claim(s) expressing that ai claims s to be the case, where s is an object in some formal language ℒ. The identifier ID of this utterance is 1 and the target T is 0, by convention (this being the very first utterance).
For example, agents ai and aj can be Kettle and Noise Controller (e.g. a device configured to control noise in the environment), respectively, and claim(Boil) can represent the intention of Kettle to boil water, thus initiating a dialogue with the first utterance <Kettle, Noise Control, 0, claim(Boil), 1>.
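Purely for illustration, such an utterance can be represented in code as a simple record. The following sketch is a non-limiting example (the field names are illustrative and not part of the cited definition):

from typing import Any, NamedTuple

class Utterance(NamedTuple):
    sender: str    # a_i, the uttering agent
    receiver: str  # a_j, the addressed agent
    target: int    # T, the identifier of the utterance being replied to (0 for the first utterance)
    content: Any   # C, e.g. claim(s), asm(a), contr(a, b)
    uid: int       # ID, the identifier of this utterance

# The first utterance of the Kettle/Noise Controller example above:
u1 = Utterance("Kettle", "Noise Control", 0, "claim(Boil)", 1)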
In what follows, the words “device” and “agent” may be used interchangeably.
The following is an example of a formal definition of a dialogue game (X. Fan & F. Toni (2014)):
Definition 3.2. A dialogue between ai and aj about X (with i, j∈{1,2}, i≠j, and X∈ℒ) is a sequence u1, . . . , un, n≥0, where each ul, l=1, . . . , n, is in U, and:
For illustration, given a language ℒ={s, a, b, c, d, g, q, r}, a set of identifiers ℐ=ℕ (the natural numbers including 0) and initial identifier ID0=0, a possible dialogue between the two agents may proceed with utterances exchanged in turn, and so on until the dialogue ends with the two agents uttering pass sentences π.
To further illustrate a dialogue with a natural language reading, consider the following sets of assumptions, rules and contraries (with a formal language ℒ implicitly defined from them):
The following is an example natural language reading of a possible dialogue drawn using the above components (which are constructed from the film Twelve Angry Men as in X. Fan & F. Toni (2014)):
Some argumentation-based approaches have been proposed for decision making in IoT settings, see for example, the paper by E. Lovellette, H. Hexmoor, and K. Rodriguez, entitled “Automated argumentation for collaboration among cyber-physical system actors at the edge of the Internet of Things,” Internet of Things, vol. 5, no. March 2019, pp. 84-96, 2019, doi: 10.1016/j.iot.2018.12.002.
However, these methods are limited to negotiation by pooling internally built arguments at once and using either argumentation semantics, game-theoretic or voting mechanisms to resolve conflicts among posited arguments. In settings such as IoT environments, actions may be time sensitive. For instance, if the planned goal for an audio station is to play music for two hours, and the spontaneous goal of boiling water during that period requires the kettle to work for a few minutes only, the devices should be able to communicate and resolve this in a manner that enables total noise to be kept below a noise threshold. For example, the kettle could commit to boiling for a few minutes and the audio station could commit to stop playing in the meantime but might resume playing after the commitment expires. Such time-based commitments are currently not taken into consideration by state-of-the-art argumentation protocols but are greatly needed in their applications to settings such as IoT. It is thus an object of embodiments herein to improve on argumentation protocols, particularly for use in IoT settings.
Thus, according to a first aspect there is a computer implemented method for determining, in coordination with a second device, whether a first device can perform a first action. The method comprises: sending a first message to the second device, wherein the first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed. The method further comprises receiving a second message from the second device according to an argumentation protocol defined between the first device and the second device, wherein the second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
According to a second aspect there is a computer implemented method for determining, in coordination with a first device, whether the first device can perform a first action. The method comprises: receiving a first message from the first device, wherein the first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed. The method further comprises sending a second message to the first device according to an argumentation protocol defined between the first device and the second device, wherein the second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
According to a third aspect there is a method in a computing system comprising a digital twin of a first device and a digital twin of a second device, wherein the method is for determining, in coordination with the digital twin of the second device, whether the first device can perform a first action. The method comprises: the digital twin of the first device sending a first message to the digital twin of the second device, wherein the first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed, and the digital twin of the second device, responsive to receiving the first message, sending a second message to the digital twin of the first device according to an argumentation protocol defined between the first device and the second device, wherein the second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
According to a fourth aspect there is a first device configured for determining, in coordination with a second device, whether a first device can perform a first action. The first device is configured to: send a first message to the second device, wherein the first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed. The first device is further configured to receive a second message from the second device according to an argumentation protocol defined between the first device and the second device, wherein the second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
According to a fifth aspect there is a second device configured for determining, in coordination with a first device, whether the first device can perform a first action. The second device is configured to receive a first message from the first device, wherein the first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed. The second device is further configured to send a second message to the first device according to an argumentation protocol defined between the first device and the second device, wherein the second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
According to a sixth aspect there is a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method according to the first, second, or third aspects.
According to a seventh aspect there is a carrier containing a computer program according to the sixth aspect, wherein the carrier comprises one of an electronic signal, optical signal, radio signal or computer readable storage medium.
According to an eighth aspect there is a computer program product comprising non-transitory computer readable media having stored thereon a computer program according to the sixth aspect.
Thus, in embodiments herein, time-sensitive interaction is introduced to argumentation protocols in order to improve IoT device collective decision-making. In IoT decision making, accounting for time-sensitive actions is crucial, as actions and claims related thereto are generally time-sensitive. The contribution herein thus lies in the introduction of time-carrying utterances, which means that the resolution of the dialogue among agents can be predicated on timed commitments of the devices, rather than, e.g., voting. The commitments can be internally and individually generated, based on e.g. the devices' goals. What is more, the prioritization of goals provides an additional means to resolve conflicts arising in the argumentation process. In summary, the disclosure herein introduces argumentation protocols with timed commitments for collective decision making in IoT through the use of time-carrying utterances, time-based commitments and conflict resolution among agents' utterances (and commitments thereof) based on the priority of goals.
For a better understanding and to show more clearly how embodiments herein may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings, in which:
The disclosure herein relates to devices operating in a communications network (or telecommunications network). A communications network may comprise any one, or any combination of: a wired link (e.g. ADSL) or a wireless link such as Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), New Radio (NR), WiFi, Bluetooth or future wireless technologies. The skilled person will appreciate that these are merely examples and that the communications network may comprise other types of links. A wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
The disclosure herein relates to devices connected to (e.g. operating or communicating over) a communications network. Some embodiments herein relate to internet of things (IoT) devices. For example, IoT devices operating in a local geographical environment.
Examples of IoT devices are sensors, metering devices such as power meters, industrial machinery, home or personal appliances (e.g. refrigerators, televisions, etc.), or personal wearables (e.g. watches, fitness trackers, etc.). Other examples of IoT devices include but are not limited to smart devices in the home, e.g. such as smart kettles, toasters, radios, lightbulbs or other light fixtures, smart bins and/or smart meters (e.g. such as energy meters or water meters).
More generally, an IoT device may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another device and/or a network node. The device may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the device may be a device implementing the 3GPP narrow band internet of things (NB-IoT) standard.
More generally, the disclosure herein applies to the operation of any device or user equipment (UE) capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. A device may be a user equipment (UE) or wireless device (WD). Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a device may be configured to transmit and/or receive information without direct human interaction. For instance, a device may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. Examples of a device include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc. A device may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X) and may in this case be referred to as a D2D communication device. In other scenarios, a device may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
Turning now to
The device 100 may be configured or operative to perform the methods and functions described herein, such as the methods 200 or 300 as described below. The device 100 may comprise a processor (or logic) 102. It will be appreciated that the device 100 may comprise one or more virtual machines running different software and/or processes. The device 100 may therefore comprise one or more servers, switches and/or storage devices and/or may comprise cloud computing infrastructure or infrastructure configured to perform in a distributed manner, that runs the software and/or processes.
The processor 102 may control the operation of the device 100 in the manner described herein. The processor 102 can comprise one or more processors, processing units, multi-core processors or modules that are configured or programmed to control the device 100 in the manner described herein. In particular implementations, the processor 102 can comprise a plurality of software and/or hardware modules that are each configured to perform, or are for performing, individual or multiple steps of the functionality of the device 100 as described herein.
The device 100 may comprise a memory 104. In some embodiments, the memory 104 of the device 100 can be configured to store program code or instructions that can be executed by the processor 102 of the device 100 to perform the functionality described herein. Alternatively, or in addition, the memory 104 of the device 100, can be configured to store any requests, resources, information, data, signals, or similar that are described herein. The processor 102 of the device 100 may be configured to control the memory 104 of the device 100 to store any requests, resources, information, data, signals, or similar that are described herein.
It will be appreciated that the device 100 may comprise other components in addition or alternatively to those indicated in
The device 100 may have other components that are not associated with the communications system. For example, a smart toaster may be configured with a heating element for toasting, a smart kettle may be configured with a heating element for boiling water, and a smart music player may be configured with speakers. The skilled person will appreciate that these are examples only and that devices have a wide range of functionality and associated components dependent on the purpose of the respective device.
The device 100 may be equipped with a reasoner component that contains relevant argumentation protocols and allows the device to participate in the dialogue games described below.
Turning now to other embodiments, some embodiments herein relate to a first device and a second device. The first device may be a device such as any of the devices described above with respect to the device 100. In some examples, the first device may be an IoT device, operating in a smart environment such as a smart home.
The second device may be a device such as any of the devices described above with respect to the device 100. The second device may be the same type of device as the first device, or a different type of device to the first device. In some examples, the second device may be an IoT device, operating in the same smart environment as the first device.
The first device and the second device may operate in an IoT environment, otherwise known as a smart environment. Examples of smart environments include but are not limited to smart homes, smart offices and/or smart factories. The first device and the second device may operate in a common environment, for example, within a common geographic environment, or within the same smart environment. As such, actions performed by the first device may impact the second device (or goals of the second device).
Generally, the first device and the second device may be autonomous and independent from one another (and/or autonomous and independent from other devices operating in the environment). For example:
The first device and the second device may perform actions. Actions may comprise performing any functionality of the device. Examples of actions include, but are not limited to, boiling (e.g. by a kettle), playing music (e.g. by a radio or streaming device), switching on a lightbulb (e.g. by a smart light), switching on and/or setting the temperature of a cooking apparatus (e.g. in a smart oven). The skilled person will appreciate that these are merely examples, however, and that the first device and the second device may perform actions other than those described herein.
Generally, there may be goals set for the environment. For example, collective goals for all of the devices in the environment, in addition or alternatively to goals for individual devices.
Goals may be heterogeneous, including but not limited to:
Goals and actions can be e.g., conflicting, complementary, or neutral from the user and/or device points of view.
In some embodiments (as described in more detail below) there may be a priority ordering over different types of goals. For example, it may be defined that spontaneous goals are preferred over planned ones, which are in turn preferred over persistent goals.
In embodiments herein, when the first device wants to (e.g. is instructed to) perform a first action, it may engage with the second device (and other devices, if applicable) in order to determine whether the first device can perform the first action, given the goals and/or other constraints on the first and second devices.
There may be various aims of communication and reasoning between the devices, including but not limited to:
The first device and the second device may perform a dialogue game according to the argumentation protocol, as described above, in order to determine whether the first action can be performed by the first device. As noted above, the argumentation protocol is the procedural description (e.g. set of rules) by which agents are to communicate, and may describe, for example, what information to exchange, in what order, etc. A dialogue (game) is the object created during communication according to the argumentation protocol, e.g. a collection of utterances. As will be described in detail below, messages (e.g. utterances) sent as part of the dialogue game are modified herein so as to indicate a time interval in which the first device would like to perform the first action. Utterances made in response are also modified with time intervals indicating when the respective utterance is valid. This is used to ensure that decisions are made on information that is relevant to the time period in which the first device intends to perform the first action.
In more detail, turning now to
In some embodiments, the method 200 may be performed by the first device. In other embodiments, the method 200 may be performed by a digital twin of the first device. Digital twins are advantageous, if for example, the first device is memory limited.
In some embodiments, the method 300 may be performed by the second device. In other embodiments, the method 300 may be performed by a digital twin of the second device (for example, if the second device is memory limited).
The methods 200 and 300 may be performed in a complementary manner as part of a dialogue game between the first device and the second device (or as part of a dialogue game between a first digital twin of the first device and a second digital twin of the second device).
In more detail, preceding the method 200, the first device may receive an instruction to perform the first action. For example, the instruction may be received from a user of the device. For example, if the first device is a kettle, then the first device may receive an instruction to boil. For example, the instruction may be received through the user pressing a button on the device, or remotely through e.g. an Application Programming Interface (API) for the device, or in any other manner.
If the method 200 is performed by a first digital twin of the first device, the first digital twin may perform the method 200 responsive to receiving an indication from the first device that the first device intends to perform the first action in the first time interval.
The first device (or digital twin thereof) may then perform step 202 of the method 200 and send a first message to the second device. The first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed.
The first message may be sent over the communications network as described above. The first message may be an utterance according to an argumentation protocol. In some embodiments, the first message is a first utterance and the first action is a main claim admitted by the first utterance. In this sense a main claim is a statement that starts a new dialogue according to an argumentation protocol.
In embodiments herein, utterances are augmented with time intervals that express when the utterance is valid. Given a definition of an utterance as above, an utterance may generally be defined as a tuple <ai, aj, T, C, ID, int> with the elements as above and int=(t1,t2) an interval (in some timeline) with timepoint t1 preceding timepoint t2. It is noted that intervals can take any standard form of intervals on the real line, e.g. int=[t1,t2).
Utterances may further be generalised by defining an utterance as a tuple
<ai, aj, T, C, ID, S>
where S=int1∪ . . . ∪intn is a set of n≥1 intervals in some timeline (e.g. the real line). Such utterances may be referred to as time-carrying utterances. For example, if the first device is a Kettle, then the kettle can send a first message (e.g. first utterance) to a Noise Controller (or noise control module) of the format:
<Kettle, Noise Control, 0, claim (Boil), 1, [16:05, 16:08]> representing the intention to boil water during time from 16:05 to 16:08.
Thus, generally, the first message may comprise a tuple of the form:
<ai, aj, T, C, ID, S>;
wherein ai is an identifier for the first device, aj is an identifier for the second device, T is a target identification number, ID is an integer identification number for the first message, C comprises the indication of the first action to be performed by the first device, and S is the first time interval. As noted above, S may be in the form of an interval, S=(t1,t2), where the first action is to be performed between the times t1 and t2.
It is noted that, as is the custom in dialogue games, the target T refers to the ID of a previous message to which the current utterance relates to. The first message starts a new chain of utterances and has T=0 (and ID=1) by convention. Subsequent responses and other messages made in reply to the first message will have T′=ID of the first message. As such, in step 202 the first message may have T set to 0.
It is noted that, generally, messages made in reply to a particular previous message may have T′ set to the ID of that particular previous message. For example, a third message may have T=2, referring to the immediately preceding message with ID=2. Other messages may have T values anywhere between 1 and the ID of the most recently sent message in the dialogue.
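By way of a non-limiting sketch, a time-carrying utterance of this form could be represented as follows (the field names, the use of datetime.time endpoints and the frozenset of intervals are illustrative assumptions, not mandated by the protocol):

from datetime import time
from typing import Any, FrozenSet, NamedTuple, Tuple

Interval = Tuple[time, time]  # (t1, t2) with timepoint t1 preceding timepoint t2

class TimedUtterance(NamedTuple):
    sender: str                     # ai, identifier of the sending device
    receiver: str                   # aj, identifier of the receiving device
    target: int                     # T, the ID of the utterance replied to (0 for a first message)
    content: Any                    # C, e.g. claim(Boil)
    uid: int                        # ID, identifier of this utterance
    intervals: FrozenSet[Interval]  # S = int1 ∪ ... ∪ intn, when the utterance is valid

# The first message of the Kettle example: an intention to boil water from 16:05 to 16:08
first_message = TimedUtterance("Kettle", "Noise Control", 0, "claim(Boil)", 1,
                               frozenset({(time(16, 5), time(16, 8))}))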
The first message may be sent directly to the second device or a digital twin of the second device. In other examples, the first message may be broadcast, e.g. to whichever device(s) are listening.
After sending the first message, the first device may update a set of commitments, K, made by the first device. The first device may update the set of commitments to incorporate the first action, C, and the first time interval, S, according to <C, S>∈K.
Thus herein, an utterance-to-timed-commitments mapping is defined as a function f that maps the content C of an utterance u into a commitment <C, S>∈K, where S is the set of intervals carried by u and K is the set of all the current commitments of the device. For instance, the initial commitments K of agent Kettle may be empty, ∅, but after the user demands boiled water, the kettle emits the utterance
<Kettle, Noise Control, 0, claim(Boil), 1, [16:05, 16:08]> and its content claim(Boil) is mapped into a commitment <claim(Boil), [16:05, 16:08]> so that K={<claim(Boil), [16:05, 16:08]>}.
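A minimal sketch of such an utterance-to-timed-commitments mapping is given below; the tuple indexing assumes the <ai, aj, T, C, ID, S> layout described above, and the use of a Python set as the commitment store is an illustrative choice only:

from datetime import time

def update_commitments(commitments, utterance):
    # f: map the content C of utterance u into the commitment <C, S>, where S is the
    # set of intervals carried by u, and add it to the device's current commitment store K.
    content, intervals = utterance[3], utterance[5]
    commitments.add((content, intervals))
    return commitments

# Kettle's commitments are initially empty; after emitting claim(Boil) for [16:05, 16:08]:
K = set()
u1 = ("Kettle", "Noise Control", 0, "claim(Boil)", 1, frozenset({(time(16, 5), time(16, 8))}))
update_commitments(K, u1)  # K now contains ("claim(Boil)", frozenset({(time(16, 5), time(16, 8))}))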
In step 302 of the method 300 the first message, as described above, is received (either by the second device or by a digital twin of the second device) from the first device (or digital twin thereof).
The first message (or first utterance) is assessed by the entity or agent receiving the first message (which may be the second device or second digital twin of the second device) according to an argumentation protocol defined between the first device and the second device.
Any type of argumentation protocol that can be modified to take time-carrying utterances as described herein into consideration may be used. For example, the argumentation protocol may be as described in the paper by H. Prakken (full citation given above); or as described in the paper by X. Fan and F. Toni (full citation given above); or as described in the paper by E. Lovellette, H. Hexmoor, and K. Rodriguez (full citation given above).
Generally, the first and second devices (or their digital twins) internally possess individual argument construction and evaluation mechanisms containing components such as:
The agents exchange utterances according to a well-defined protocol of the game. The protocol specifies the rules of information exchange, such as those concerning:
As envisaged by some known argumentation strategies, if the first action intended by the first device initiating the dialogue is compatible with the second device's commitments, then the latter agent agrees with it (i.e. concedes the contents of the utterances); otherwise, it starts arguing (i.e. questions or challenges the utterances).
In embodiments herein, the argumentation protocol takes into account (e.g. considers) that the first message (first utterance) is time-carrying. This may mean that, for an utterance u=<ai, aj, T, C, ID, S>, an utterance u′=<aj, ai, T′=ID, C′, ID′, S′> whose target T′ is the identifier ID of u is such that the intervals S′ are in an appropriate relationship with the intervals S of u. The appropriate relationships would be specified by the legal moves of agents; for instance, with contents C=asm(α) of u and C′=contr(α, β) of u′, and interval S=(a, b) of u, it must be that S′=(c, d) satisfies a<c<b<d.
A pseudo-code extension of a given argumentation protocol may include the following extension of an existing legal move function:
Input u=<ai, aj, T, C, ID, S>: utterance, u′=<aj, ai, T′=ID, C′, ID′, S′>: potential utterance
Output True/False [indicates whether a potential utterance is legal to put forward in the dialogue]
Extends Function Legal-Move [legal move function for standard, non-time-carrying protocol]
Function Legal-Move-Time(u, u′)
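The body of the Legal-Move-Time function is not reproduced above. Purely by way of a non-limiting sketch, and assuming that legality additionally requires the carried intervals to stand in one of the starts, ends, overlaps, during or equals relations of Allen's time interval logic (as set out below), such an extension might be implemented as follows, where legal_move stands for the base protocol's existing legal move function:

def allen_related(i1, i2):
    # True if intervals i1 = (a, b) and i2 = (c, d) satisfy one of the starts, ends,
    # overlaps, during or equals relations of Allen's interval logic (in either direction).
    (a, b), (c, d) = i1, i2
    equals   = a == c and b == d
    starts   = a == c and b != d
    ends     = b == d and a != c
    during   = (c < a and b < d) or (a < c and d < b)
    overlaps = (a < c < b < d) or (c < a < d < b)
    return equals or starts or ends or during or overlaps

def legal_move_time(u, u_prime, legal_move):
    # Extension of an existing (non-time-carrying) legal-move function, passed in as
    # legal_move: the potential utterance u' must be legal under the base protocol AND
    # must carry intervals appropriately related to the intervals of its target u.
    if not legal_move(u, u_prime):
        return False
    S, S_prime = u[5], u_prime[5]  # the intervals in the tuple <ai, aj, T, C, ID, S>
    return any(allen_related(i, j) for i in S for j in S_prime)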
In other words, in some embodiments, the second time interval overlaps the first time interval if the first time interval and the second time interval satisfy any one of the starts, ends, overlaps, during, or equals criteria of Allen's time interval logic.
For example, suppose an existing argumentation protocol is extended so that agent ai has uttered an assumption α for time interval (a, b) and agent aj has uttered a contrary β of α for time interval (c, d). If the interval (a, b) overlaps (c, d) and d precedes the termination time t, then ai was not able to argue about contr(α, β) and asm(α) will not be accepted. (The idea here is that aj contradicted ai's assumption α while it was still valid (interval overlap) and ai failed to react to this contradiction β while it was still possible to do so (the termination time t is past d).) The difference from the existing argumentation protocol is that the intervals carried by utterances have to be checked for the legality of moves, namely that (a, b) overlaps (c, d) as in Allen's time interval logic.
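With hypothetical timepoints (the concrete values below are illustrative assumptions chosen only to make the check concrete), this reduces to:

# asm(α) is uttered for (a, b), contr(α, β) for (c, d); t is the termination time.
a, b, c, d, t = 1, 3, 2, 4, 5
interval_overlaps = a < c < b < d  # Allen's "overlaps" relation between (a, b) and (c, d)
contradiction_stood = d < t        # the termination time t is past d, so no rebuttal arrived in time
asm_alpha_accepted = not (interval_overlaps and contradiction_stood)
print(asm_alpha_accepted)          # False: asm(α) will not be accepted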
Turning back to the method 300, in step 304 a second message is sent to the first device according to the argumentation protocol defined between the first device and the second device. The second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
In some embodiments, the second message is a second utterance, and the first response is a claim admitted by the second utterance. In this sense a claim is a response (e.g. information) related to the main claim of the first message as described above. The claim in the second message may be of various different types, for example, a question, counterclaim, or supporting-claim. The skilled person will be familiar with claims, questions, counterclaims, supporting claims and other types of claims as defined in argumentation protocols.
The second message may comprise a tuple of the form <ai, aj, T, C, ID, S> where ai is an identifier for the second device, aj is an identifier for the first device, T is a target identification number, ID is an identification number for the second message, C comprises the first response, and S is the second time interval.
As the second message is a response to the first message described above, the target T will be set to the ID of the first message. S is in the form of an interval, S=(t3,t4), wherein the first response is applicable between the times t3 and t4.
In some examples, the first response (e.g. counterclaim) in the second message identifies a conflict between performance of the first action by the first device and a first goal, and the first time interval is further used to resolve the conflict based on a priority of the first action compared to the first goal during the first time interval. For example, the conflict may be resolved based on the relative priorities of the first action and the first goal (e.g., whether it is more important to perform the first action or meet the first goal). As an example, a first goal may be to keep noise below a noise threshold, and the first action may be for a kettle to boil (creating noise above the noise threshold). In such an example, boiling the kettle (as a spontaneous goal) may take priority over keeping noise below the noise threshold.
In some embodiments, the first response in the second message identifies that performance of the first action by the first device in the first time interval and performance of a second action by the second device in the second time interval, leads to a conflict with respect to a first goal. For example, a first action of boiling a kettle by a first kettle device whilst simultaneously a second music player device performs a second action of playing music might lead to conflict with a first goal of keeping noise less than a noise threshold.
In this scenario, the length of the first time interval and the length of the second time interval may be used to determine a priority of the first action with respect to the second action and thus resolve the conflict.
For example, as described above, goals may be categorised into different levels, based on duration. An example scheme may categorise goals as being
There may be a priority ordering over different types of goals. For example, it may be defined that spontaneous goals are preferred over planned ones, which are in turn preferred over persistent goals.
The skilled person will appreciate that the above scheme is merely an example however, and that different categories could be defined to those above, for example, with different numbers of categories, and/or where each category is defined by thresholds of different lengths.
In some embodiments, the first goal is associated with the first device and performance of the second action is associated with a second goal for the second device. In this scenario, a relative priority of the first goal compared to the second goal may be used to determine a priority of the first action with respect to the second action and thus resolve the conflict.
Thus, where needed, conflicts may be resolved using preferences induced by the priorities over goals: intuitively, if an action A leads to achieving a goal g that is of higher priority than the goal g′ which action A′ leads to achieving, then A is preferred over A′. For instance, the action of boiling water to achieve the spontaneous goal of having boiled water is preferred over the action of playing music, which achieves the planned goal of listening to music.
Formally, goals may be stratified into sets G1, . . . , Gn such that each goal in Gi+1 is preferred over any goal in Gi: ∀g′∈Gi, g∈Gi+1: g′≺g, where ≺ is a transitive relation. Otherwise, for any two goals in the same set Gi, we say that they are incomparable with respect to ≺ and thus of equal priority.
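As a non-limiting illustration in code, such a stratification and the preference it induces could be encoded as follows (the goal names and numeric strata are illustrative assumptions following the example ordering of spontaneous over planned over persistent goals):

# Goals stratified into sets G1, ..., Gn: any goal in G(i+1) is preferred over any goal in Gi.
strata = {
    "keep_noise_below_threshold": 1,  # persistent goal  (G1)
    "play_music_for_two_hours":   2,  # planned goal     (G2)
    "boil_water_now":             3,  # spontaneous goal (G3)
}

def preferred(g, g_prime):
    # True if g is strictly preferred over g_prime (i.e. g_prime ≺ g); goals in the
    # same stratum are incomparable and thus of equal priority.
    return strata[g] > strata[g_prime]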
A given argumentation protocol may be modified so that whenever two utterances u and u′ with contents associated to goals g and g′ are in conflict according to the argumentation protocol, the preference g≺g′ dictates that the agent uttering u has to concede the content in u′.
This can be encoded by modifying the functions for determining an agent's next utterance, with priorities available globally (to all agents).
A pseudo-code extension of a given argumentation protocol may be given as follows:
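That pseudo-code is not reproduced here. Purely by way of a non-limiting sketch, such an extension might look as follows, where in_conflict, goal_of and preferred are assumed (illustrative) helpers standing in for the base protocol's conflict check, the association of utterance contents to goals, and the globally available priority ordering, respectively:

def resolve_by_priority(u, u_prime, in_conflict, goal_of, preferred):
    # If the contents of u and u' are in conflict according to the base protocol and the
    # goal behind u' is strictly preferred over the goal behind u, the agent that uttered
    # u has to concede the content C' of u'; otherwise the base protocol applies unchanged.
    if in_conflict(u, u_prime) and preferred(goal_of(u_prime), goal_of(u)):
        return ("concede", u_prime[3])  # C' sits at index 3 of <ai, aj, T, C, ID, S>
    return None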
For illustration, suppose a goal g is associated to a timed commitment of agent ai and we have utterances u2=<ai, aj, T1, claim(g), T2, (a, b)> and u4=<ai, aj, T3, asm(g), T4, (a, b)>. Suppose further that a conflict arises with the following utterances: u5=<aj, ai, T4, contr(g, g′), T5, (c, d)> and u6=<aj, ai, T5, asm(g′), T6, (c, d)>, where a<b<c<d and g′ is a goal such that g≺g′. (Essentially, the two agents put forward contradictory assumptions based on their goals.) Then the legal moves stipulate that ai has to retract its claim g and concede g′ instead.
Turning back to the methods 200 and 300, the second message sent by the second device as described above is then received by the first device (or agent of the first device) in step 204.
Dependent on the content of the first response by the second device, the method 200 may then comprise exchanging subsequent messages with the second device according to the argumentation protocol, wherein each subsequent message comprises i) a subsequent response related to the first action, and ii) a corresponding subsequent time interval in which said response is applicable. As above, each subsequent response is valid according to the argumentation protocol if the corresponding subsequent time interval overlaps the first time interval.
For example, the first device may send a third message to the second device in response to the second message from the second device. And the second device may respond once again according to the argumentation protocol as described above until a consensus is reached.
If the second device agrees that the first device may perform the action, then the method 200 may further comprise receiving an agreement from the second device that the first device may perform the first action in the first time interval, and initiating performance of the first action in the first time interval. For example, if the method 200 is performed by the first device, then the first device may perform the first action. If the method 200 is performed by a digital twin of the first device, then the digital twin may send an instruction to the first device to trigger or cause the first device to perform the first action.
During the dialogue, the set Ki of an agent ai's commitments is updated using the utterance-to-timed-commitments mapping f that maps the content C of an utterance u into a commitment <C, S>∈Ki, as described above. When the dialogue has terminated (e.g. according to existing argumentation protocols and termination rules), Ki is updated by:
For instance, suppose utterances (among others)
Thus, using the methods 200 and 300, a first device and a second device can determine whether the first device can perform a first action in a collaborative manner. It will be appreciated that the methods 200 and 300 can be generalized to dialogue games according to an argumentation protocol between any number of devices, where the utterances between the devices carry time intervals in the manner described above.
Turning now to another embodiment, as described above, the methods 200 and 300 may be performed using digital twins. For example, in some embodiments there is a method in a computing system comprising a digital twin of a first device and a digital twin of a second device, wherein the method is for determining, in coordination with the digital twin of the second device, whether the first device can perform a first action. The method comprises: the digital twin of the first device sending a first message to the digital twin of the second device, wherein the first message comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed; and the digital twin of the second device, responsive to receiving the first message, sending a second message to the digital twin of the first device according to an argumentation protocol defined between the first device and the second device, wherein the second message comprises i) a first response related to whether the first device can perform the first action, and ii) a second time interval in which the first response is applicable. The first response is valid according to the argumentation protocol if the second time interval overlaps the first time interval.
The first message and the second message were described in detail above and the detail therein will also be understood to apply equally to an embodiment where both the first and second devices are represented by digital twins.
Turning now to
The first device performs step 202 of the method 200 and in step 412 sends a first message to the second device 404. The first message 412 comprises i) an indication of the first action to be performed by the first device and ii) a first time interval indicating when the first action is to be performed.
The second device 404 receives the first message (as in step 302 described above) and evaluates the first action according to the argumentation protocol in step 414. It then performs step 304 of the method 300 and in step 416 sends a second message to the first device containing a first response, as described above.
The first device 402 then exchanges subsequent messages 418, 422, 426, with the second device 404 and the third device 406 until a consensus is reached as to whether the first device can perform the first action.
In this example, the fourth device 408 may perform 430 global evaluation using the argumentation protocol and argumentation semantics.
Turning now to
In this embodiment, the following goals are defined:
<K, N, 0, claim (Boil), 1, [16:05,16:08]>
<K, N, 1, asm(Boil), 2, [16:05,16:08]>
<N, K, 2, contr(Boil, ¬Boil), 3, [16:05,16:08]>
<K, N, 3, why(¬Boil), 4, [16:05,16:08]>
<N, K, 4, rl(¬Boil←Play), 5, [16:05,16:08]>, <N, K, 5, asm(Play), 6, [16:00,18:00]>
<K, A, 0, claim(¬Play), 1, [16:05,16:08]>
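For illustration, the relationship between the carried intervals in this example dialogue can be checked directly; the following non-limiting sketch uses the interval representation assumed in the earlier sketches:

from datetime import time

boil_interval = (time(16, 5), time(16, 8))  # carried by the Kettle's utterances
play_interval = (time(16, 0), time(18, 0))  # carried by asm(Play) from the Noise Controller

# Allen's "during" relation: the boil interval lies strictly inside the play interval, so
# the contradiction raised by the Noise Controller targets a time at which its Play
# assumption is indeed valid, making the move legal under the time-carrying protocol.
during = play_interval[0] < boil_interval[0] and boil_interval[1] < play_interval[1]
print(during)  # True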
In summary, the methods herein achieve the following:
In another embodiment, there is provided a computer program product comprising a computer readable medium, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method or methods described herein.
Thus, it will be appreciated that the disclosure also applies to computer programs, particularly computer programs on or in a carrier, adapted to put embodiments into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the embodiments described herein.
It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other.
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.
Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Filing Document: PCT/EP2021/063012 | Filing Date: May 17, 2021 | Country: WO