The present specification relates to a security status of a network element or a security slice comprising one or more network elements.
There remains a need for alternative or improved systems for managing security status, such as levels of security assurance, in network elements and security slices.
In a first aspect, this specification describes an apparatus comprising: means for sending one or more requests to an attestation server, wherein each request requests security attributes corresponding to one of one or more network elements of a security slice of a system; means for receiving the requested security attributes from the attestation server;
and means for processing the received security attributes to determine a security status of the security slice. The security attributes may, for example, be properties or measurements. Some embodiments further comprise means for outputting the determined security status of the security slice.
Some embodiments comprise means for comparing the determined security status of the security slice with a required security status for the security slice.
Some embodiments comprise means for adding one or more additional network elements to the security slice. The means for adding one or more network elements to the security slice may comprise means for obtaining information relating to each of the one or more network elements to be added to the security slice. The said information may comprise at least one of availability information and/or authentication parameters and/or information regarding whether the network element(s) are suitable to be added to the slice.
Some embodiments further comprise: means for receiving, from the attestation server, security attributes (e.g. properties or measurements) of a first additional network element to be added to the security slice; means for determining an integrity level of the first additional network element based on the received security attributes of said first additional network element; and means for preventing the first additional network element from being added to the security slice in the event that said integrity level is below a required level. The integrity level may, for example, be a security status.
The means for processing the received security attributes may further comprise means for determining whether any of the network elements of the security slice fail to satisfy a defined requirement. In response to detecting a failing network element, another element of the system may be informed (e.g. at least one of: the VIM, the security orchestrator and/or the SDN). Alternatively, or in addition, the identified network element may be repaired.
The apparatus may be one of a security attribute manager and a trusted slice manager.
A network element may comprise at least one of: a physical network element, and/or a virtualized network function, and/or a virtual machine image, and/or a virtual machine instance. Alternatively, or in addition, a network element may, for example, be a core network element, an edge device, a mobile communication device, a network function virtualisation node, a virtualised network function or an Internet of Things device such as a wireless sensor. In some embodiments, a network element may comprise a top-level element, such as an NFVI element, server, edge device, IoT device, VM image, VM instance, VNF, UE (mobile device) etc. The network element may comprise a structured element comprising a number of top-level elements.
The requested security attributes may comprise at least one of: a measured boot capability, and/or a secure boot capability, and/or a runtime integrity measurement, and/or a virtual machine integrity level.
The security status of the security slice may comprise a level of assurance.
The said means may comprise at least one processor; and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the performance of the apparatus.
In a second aspect, this specification describes a system comprising an apparatus as described above with reference to the first aspect and further comprising: an attestation server for receiving requests for security attributes; and one or more network elements. Each network element may further comprise a trust agent for providing an interface between the respective network element and said attestation server.
In a third aspect, this specification describes a method comprising: sending one or more requests to an attestation server, wherein each request requests security attributes corresponding to one of one or more network elements of a security slice of a system; receiving the requested security attributes from the attestation server; and processing the received security attributes to determine a security status of the security slice. Some embodiments further comprise outputting the determined security status of the security slice.
The method may comprise comparing the determined security status of the security slice with a required security status for the security slice.
The method may comprise adding one or more network elements to the security slice.
The method may comprise: receiving, from the attestation server, security attributes (e.g. properties or measurements) of a first additional network element to be added to the security slice; determining an integrity level of the first additional network element based on the received security attributes of said first additional network element; and preventing the first additional network element from being added to the security slice in the event that said integrity level is below a required level. The integrity level may, for example, be a security status.
Processing the received security attributes may further comprise determining whether any of the network elements of the security slice fail to satisfy a defined requirement.
In a fourth aspect, this specification describes any apparatus configured to perform any method as described with reference to the third aspect.
In a fifth aspect, this specification describes computer-readable instructions which, when executed by computing apparatus, cause the computing apparatus to perform any method as described with reference to the third aspect.
In a sixth aspect, this specification describes a computer program comprising instructions for causing an apparatus to perform at least the following: send one or more requests to an attestation server, wherein each request requests security attributes corresponding to one of one or more network elements of a security slice of a system; receive the requested security attributes from the attestation server; and process the received security attributes to determine a security status of the security slice.
In a seventh aspect, this specification describes a computer-readable medium (such as a non-transitory computer readable medium) comprising program instructions stored thereon for performing at least the following: sending one or more requests to an attestation server, wherein each request requests security attributes corresponding to one of one or more network elements of a security slice of a system; receiving the requested security attributes from the attestation server; and processing the received security attributes to determine a security status of the security slice.
In an eighth aspect, this specification describes an apparatus comprising: at least one processor; and at least one memory including computer program code which, when executed by the at least one processor, causes the apparatus to: send one or more requests to an attestation server, wherein each request requests security attributes corresponding to one of one or more network elements of a security slice of a system; receive the requested security attributes from the attestation server; and process the received security attributes to determine a security status of the security slice.
In a ninth aspect, this specification describes an apparatus comprising: a first output for sending one or more requests to an attestation server, wherein each request requests security attributes corresponding to one of one or more network elements of a security slice of a system; a first input for receiving the requested security attributes from the attestation server; and a processor for processing the received security attributes to determine a security status of the security slice. The security attributes may, for example, be properties or measurements. Some embodiments further comprise a second output for outputting the determined security status of the security slice.
Example embodiments will now be described, by way of non-limiting examples, with reference to the following schematic drawings, in which:
In the description and drawings, like reference numerals refer to like elements throughout.
Computing systems, including (but not limited to) distributed or cloud computing systems, may include a wide range of hardware or other elements connected thereto. Such elements may, for example, be provided to run virtual workloads, or may be base stations of a telecommunication network, edge devices, etc. At least some of those elements may be arranged in slices, as described further below.
The example embodiments disclose apparatus and methods for creating security slices for managing levels of security assurance in network slices. An attestation server may be utilized for creating and interfacing with security slices (such as trust slices). Such security slices are independent of network slices and may, for example, be orthogonal to network function virtualisation (NFV) slices. Some of the example uses of security slicing may include guaranteeing different levels of security assurances for critical workload deployment, such as:
As defined in ETSI NFV SEC007 (2018), the overall attestation scope also depends on the exact use case and, most importantly, on the agreed Level-of-Assurance (LoA), described in SEC007. In particular, the LoAs define the sets of systems and components to be considered during attestation procedures and thus facilitate the determination of the overall attestation scope. An overview of the defined LoAs in relation to the attestation scope is depicted in Table 1.
Definition of Integrity Levels
In some example embodiments, integrity levels (sometimes referred to herein as security status) may be determined for each network element of a system. Each network element may belong to one or more security slices. Details of how integrity levels/security statuses are used in the example embodiments are discussed later in the document. The integrity levels for security slices define properties or measurements that can be taken in order to satisfy a specific level of assurance (LoA). A level of assurance may be guaranteed based on the integrity level/security status. An example mapping of integrity level of security slices with respect to the level of assurance guaranteed by the security slice is illustrated in Table 2. For example, in order to guarantee a particular LoA, the devices must pass all the integrity checks defined in the mapping of Table 2.
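By way of a purely illustrative sketch (in Python), the mapping of Table 2 may be thought of as a table from each level of assurance to the set of integrity checks that every element must pass; the check names and LoA numbering below are assumptions made for illustration, not the actual contents of Table 2.

# Illustrative sketch only: the check names and LoA numbering are assumptions,
# not the actual contents of Table 2.
from typing import Dict, Set

LOA_REQUIRED_CHECKS: Dict[int, Set[str]] = {
    1: {"platform_boot_integrity"},
    2: {"platform_boot_integrity", "hypervisor_integrity"},
    3: {"platform_boot_integrity", "hypervisor_integrity", "vm_image_integrity"},
}

def loa_satisfied(loa: int, passed_checks: Set[str]) -> bool:
    """True if all integrity checks required for the given LoA have been passed."""
    return LOA_REQUIRED_CHECKS.get(loa, set()) <= passed_checks

# An element that passed the boot and hypervisor checks satisfies LoA 2 but not LoA 3.
assert loa_satisfied(2, {"platform_boot_integrity", "hypervisor_integrity"})
assert not loa_satisfied(3, {"platform_boot_integrity", "hypervisor_integrity"})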
The LoAs define the desired level of trustworthiness or integrity of the platform. They also define what subdomains need to be attested to guarantee that level of assurance. On the other hand, the trusted slice integrity levels define concrete measurements that can be considered using attestation in order to guarantee the security status of the attestation scope defined by the LoA.
For better understanding of Table 2, the integrity levels of various platforms are explained below. There may be two level hierarchies for platform integrity measurements, depending on the capabilities of the platform. Many variants to the arrangements described herein are possible.
Platform with Measured Boot Capabilities
For a platform with Measured Boot capabilities, there may be five integrity levels that can be achieved:
The checks for levels 1 to 3 can be done locally and/or remotely, while the check for level 4 can only be done remotely. Additionally, these integrity levels include both the platform and the hypervisor sub-scopes.
Platform with Secure Boot Capabilities
For a platform with Secure Boot capabilities, there may be two integrity levels that can be achieved:
Level 1 for a platform with Secure Boot capabilities can only be achieved locally. These checks are performed during boot time and, if any of the checks fail, the platform will not start up. For a platform with Secure Boot capabilities, there is only one integrity level, which includes all the checks of the platform and hypervisor sub-scopes.
Platform that is Capable of Performing Runtime Integrity Measurements
For a platform that is capable of performing runtime integrity measurements (e.g. by using a kernel module such as Linux IMA), there are four integrity levels that may be achieved:
Virtual Machine Sub-Scope
For the virtual machine sub-scope, there are different integrity levels that can be achieved, depending on whether the checks are for a virtual machine image or a virtual machine instance.
Virtual Machine Image
For a virtual machine image, there may be three integrity levels that can be achieved:
Virtual Machine Instance
For a virtual machine instance, there may be four integrity levels that can be achieved:
Note that the levels of integrity defined for a platform can be achieved by a virtual machine instance, given that it has the capabilities of a virtual trusted platform module (vTPM).
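The hierarchy described above may be summarised, purely as an illustrative sketch, by recording how many integrity levels each capability or sub-scope offers; the data structure and field names below are assumptions, and the meaning of each individual level is left abstract.

from dataclasses import dataclass

# Number of achievable integrity levels per capability/sub-scope, as described above.
MAX_LEVELS = {
    "measured_boot": 5,      # platform with Measured Boot capabilities
    "secure_boot": 2,        # platform with Secure Boot capabilities
    "runtime_integrity": 4,  # platform with runtime integrity measurements (e.g. Linux IMA)
    "vm_image": 3,           # virtual machine image sub-scope
    "vm_instance": 4,        # virtual machine instance sub-scope
}

@dataclass
class ElementIntegrity:
    capability: str
    achieved_level: int

    def is_defined(self) -> bool:
        """True if the achieved level exists for this capability."""
        return 1 <= self.achieved_level <= MAX_LEVELS.get(self.capability, 0)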
A typical NFV system consists of any number of NFVI elements (typically servers of some form) managed by management and orchestration (MANO). Using the concept of slicing, these physical machines can be partitioned into logical blocks and dedicated to specific purposes, for example provisioning for a specific operator, etc. The mechanisms for slicing based on network partitioning are well known.
Network elements may be assigned to network slices based on their uses, customers, etc. For example, NS1 may specifically contain network elements relating to Internet-of-Things (IoT) devices, and NS2 may specifically contain network elements relating to medical devices.
Network elements may be assigned to security slices regardless of their functionality and regardless of their corresponding network slicing. A security status of SS1 and SS2 may be indicated by a Level of Assurance (LoA) level, which is determined based on security attributes of the network elements assigned to the security slices. For example, the security status of SS1 may be dependent upon the security attributes of NE1, NE2 and NE4, and the security status of SS2 may be dependent upon the security attributes of NE3 and NE4. For example, if SS1 has LoA level 2 (explained in Tables 1 and 2 above) and SS2 has LoA level 1, this may mean that the integrity levels of NE1, NE2 and NE4 are collectively higher than the collective integrity levels of NE3 and NE4.
Network elements may be chosen for specific workloads based on the network slicing as well as security slicing. More particularly, for a specific workload, a network slice requirement and a security slice requirement may be defined, and one or more network elements may be selected for the specific workload accordingly.
For example, assume that there is a requirement for a first workload for network elements relating to medical devices and the LoA level of the network elements must be at least 2.
Referring to
In an alternative example, assume that there is a requirement for a second workload for network elements relating to IoT devices and the LoA level of the network elements must be at least 2. In this case, only NE1 and NE2 are suitable for the second workload, as they are in NS1 relating to IoT devices and they belong to SS1 which has LoA level 2.
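A minimal sketch of this selection, using the NE/NS/SS labels of the example above, is given below; the dictionaries, function name and LoA values are assumptions made for illustration only.

# Illustrative sketch: select elements satisfying both a network slice requirement
# and a security slice (LoA) requirement.
network_slices = {"NS1": {"NE1", "NE2"}, "NS2": {"NE3", "NE4"}}          # IoT / medical
security_slices = {"SS1": {"NE1", "NE2", "NE4"}, "SS2": {"NE3", "NE4"}}  # security slices
slice_loa = {"SS1": 2, "SS2": 1}                                         # determined LoAs

def candidates(network_slice: str, min_loa: int) -> set:
    """Elements in the requested network slice that also belong to a security slice
    meeting the minimum LoA."""
    secure = set()
    for ss, members in security_slices.items():
        if slice_loa[ss] >= min_loa:
            secure |= members
    return network_slices[network_slice] & secure

# Second workload from the example: IoT elements (NS1) with LoA >= 2 -> {"NE1", "NE2"}.
print(candidates("NS1", 2))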
The network functions virtualization infrastructure (NFVI) 12 comprises multiple network elements of a system. The network elements may take many forms, such as physical network elements (such as servers), virtualised network function nodes, a virtual machine image, a virtual machine instance, an edge node, an Internet-of-things (IoT) enabled device, a communications module etc. The network elements of the NFVI 12 may be grouped into one or more slices. One or more of those slices may be security slices, as discussed above with reference to
The trusted slice manager (TSM) 16 is responsible for managing aspects of security slices. The trusted slice manager 16 may communicate with the virtual infrastructure manager 14 and the attestation server 18 to obtain security attributes corresponding to network elements of a security slice. The attestation server 18 provides such information to the trusted slice manager 16, as discussed in detail below.
On the basis of the security attributes corresponding to network elements of a security slice, the trusted slice manager 16 can determine a security status, e.g. a Level of Assurance (LoA), for a particular security slice that is formed from network elements of the NFVI 12. Algorithms for determining a security status of a security slice are discussed further below.
The system 20 comprises a group of network elements 21. As shown schematically in
An example network element 21 may belong to any number of security slices 25 (including zero). Similarly, a security slice may include any number of network elements 21 (including zero).
The algorithm 30 starts at operation 32, where a security slice is defined. For example, the operation 32 may involve providing a name for the security slice and providing other basic information. Thus, the operation 32 may result in a security slice 25 being instantiated.
At operation 34, one or more elements are added to the slice. For example, instances of the element 21 may be added to an instance of the security slice 25 described above.
At operation 36, information regarding the added element(s) is obtained. As described further below, the information obtained in the operation 36 may be used for functions such as determining security attributes of the added element(s).
As described above, one aspect of a security slice is to define a Level of Assurance (LoA) that the elements in that slice must be able to guarantee. Indeed, there may be provided means for determining an integrity level or security status of the network element referred to in operation 34 based on received security attributes of that network element. In the event that the integrity level or security status is below a required level, the network element may be prevented from being added to the security slice.
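A minimal sketch of such a gate is shown below; the function and parameter names are assumptions, and the integrity levels are represented as simple integers for illustration.

def try_add_element(slice_members: list, element_id: str,
                    determined_level: int, required_level: int) -> bool:
    """Add the element to the security slice only if its integrity level is sufficient."""
    if determined_level < required_level:
        # Below the required level: the element is prevented from joining the slice.
        return False
    slice_members.append(element_id)
    return True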
The algorithm 40 starts at operation 42, where a trust status or security status of one or more elements of a relevant security slice is obtained. Then, at operation 44, a Level of Assurance (LoA) is determined for the security slice. An example implementation of the algorithm 40 is described further below.
Optionally, the algorithm 40 may include an operation 46 in which the security status determined in operation 44 is compared with a required security status for the security slice. If a security status is below a required security status, then action may be taken (as discussed further below).
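A sketch of the algorithm 40 under simple assumptions is given below; in particular, taking the slice-level status as the minimum of the per-element statuses is an assumption made for illustration, and other aggregation rules are possible.

from typing import Dict

def determine_slice_loa(element_status: Dict[str, int]) -> int:
    """Operation 44: derive the slice security status from per-element statuses."""
    return min(element_status.values()) if element_status else 0

def check_slice(element_status: Dict[str, int], required_loa: int) -> bool:
    """Operations 42-46: True if the security slice meets its required LoA."""
    return determine_slice_loa(element_status) >= required_loa

# Example: NE1 and NE2 at level 2, NE4 at level 3 -> slice LoA 2.
print(check_slice({"NE1": 2, "NE2": 2, "NE4": 3}, required_loa=2))  # True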
The message sequence 60 starts with a define slice instruction 61 received at the trusted slice manager 56. This operation may define a slice (e.g. a security slice) in terms of identity and name. Other characteristics such as encryption keys, authentication, access etc. may also be defined. A required level of assurance (LoA) may be defined for the slice. Other aspects, such as LoA failure handling may also be defined.
As shown in
In response to the instruction 62, the VIM 54 may be contacted to obtain element information to decide upon availability, authentication parameters, suitability for inclusion in that slice etc. Elements are not necessarily restricted to traditional NFVI elements such as servers, but may include VM/VNF images, their potential instances, Edge, IoT and UE (user equipment) devices. Inclusion into a security slice may also involve other NFV MANO components such as the Orchestrator and VNFM in making these decisions. Interaction with the OSS/BSS layer and other MANO components is also permissible and possible.
As shown in
It should be noted that the messages 63 and 64 are provided by way of example only. Alternative arrangements to enable the TSM 56 to obtain the required element information from the NFVI element 52 could be provided.
As shown in
In response to the request 66, the AS 58 may send a message 67 to the relevant element 52 for the security attributes, which are received in the reply message 68 and then returned to the TSM 56 in a message 69. Thus, the message 69 provides the security attributes requested in the message 66 to the TSM 56. In the example message sequence 60, the AS 58 checks the elements either directly or by proxy via the VIM/VNFM, and subsequently makes a decision on the trustworthiness and trust status of that element.
The TSM 56 processes the received security attributes to determine a security status of the security slice (as indicated by the analysis step 70 and decision step 71). The TSM 56 may perform any amount of analysis and decision-making, either at a single point in time or also taking historical information into account, as required to determine whether an element, or a whole or partial slice, has achieved the required level of assurance.
The TSM 56 may send a message 72 to the VIM 54 indicating the security status of the security slice. Informing the VIM 54 or other components is optional in some embodiments.
If an element within a security slice fails the LoA checks (e.g. is below a required level) then it may be marked accordingly. Various options are available for handling this situation, including but not limited to:
The exact choices made (there may be more than one, unless failure is ignored) may depend upon the system configuration. For example, an LoA failure might trigger an alert to a security orchestrator, which in turn may cause the VIM to migrate workload away from that element.
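A sketch of such configurable failure handling is given below; the action names ("alert_orchestrator", "migrate_workload", "ignore") and the callback interfaces are assumptions made for illustration, not a defined MANO API.

from typing import Callable, Iterable

def handle_loa_failure(element_id: str,
                       actions: Iterable[str],
                       alert_orchestrator: Callable[[str], None],
                       migrate_workload: Callable[[str], None]) -> None:
    """Apply the configured failure-handling actions to a failing element."""
    for action in actions:
        if action == "alert_orchestrator":
            alert_orchestrator(element_id)      # e.g. raise an alert to a security orchestrator
        elif action == "migrate_workload":
            migrate_workload(element_id)        # e.g. the VIM moves workload elsewhere
        elif action == "ignore":
            pass                                # failure is deliberately ignored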
The elements 106 to 108 are examples of the network elements discussed above with respect to
In order to monitor aspects of the system 100, the attestation server 102 may communicate with each of the elements 106 to 108 to obtain measurements. For example, a trusted platform module (TPM) may be provided at each element to generate a cryptographic hash that summarises the hardware and software configuration of the relevant module. A set of platform configuration registers (PCRs) may be provided which store cryptographic hashes of measurements of the relevant components. A hash may, for example, be obtained from a TPM by a mechanism known as quoting. A quote may be generated for a set of PCRs, with the TPM generating a hash from the contents of the set of PCRs and signing them with an attestation key (AK) unique to the respective TPM (e.g. with a private key of an attestation key pair).
The attestation server 102 may offer a query application programming interface (API) that can be used, for example, by command line tools. The attestation user interface 105 of the system 100 (e.g. a web application) may enable a user to interact with the attestation server 102 (e.g. to enable viewing of a trust status of a cloud or to request measurements of one or more elements of the system 100).
The first element 106 comprises a trust agent 110a, a trusted platform module (TPM) software stack 110b and a trusted platform module 110c. Similarly, the second element 107 comprises a trust agent 112a, a trusted platform module (TPM) software stack 112b and a trusted platform module 112c. The third element 108 comprises a trust agent 114a, a trusted platform module (TPM) software stack 114b and a trusted platform module 114c.
The trust agents 110a, 112a and 114a at each element of the system 100 provide an interface between the respective element and the attestation server 102.
As indicated above, each of the elements 106 to 108 of the system may have a trusted platform module associated therewith. The trusted platform module may form part of the respective element. The trusted platform module may be implemented as a device of the respective element, but may alternatively be distributed. In essence, the trusted platform module is a specification of behaviour implemented by the relevant element.
The trusted platform modules (TPMs) 110c, 112c and 114c may store cryptographic keys, certificates and confidential data. For example, two unique key-pairs may be stored at (or be available to) each TPM: an endorsement key pair (EK) and an attestation key pair (AK). A set of platform configuration registers (PCRs) may be provided to store measurements, in the form of hashes, of hardware or software components of the relevant machine (e.g. the element within which the TPM is installed). A TPM may be asked to provide a “quote” for a defined set of PCRs (e.g. a hash over the stored values of the defined PCRs). The TPM may then return the quote for the requested PCRs, a cryptographic signature of that quote (signed by the attestation key, e.g. the private key of the attestation key pair) and possibly other information, such as a timestamp and a reboot count.
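The quoting behaviour described above may be sketched conceptually as follows; this is not a real TPM 2.0 data format or library interface, and the signing callback, field names and hash choice are assumptions made for illustration.

import hashlib
import time
from typing import Callable, Dict, List

def make_quote(pcrs: Dict[int, bytes], requested: List[int], reboot_count: int,
               sign_with_ak: Callable[[bytes], bytes]) -> dict:
    """Build a quote over the requested PCRs and sign it with the attestation key."""
    digest = hashlib.sha256()
    for index in sorted(requested):
        digest.update(pcrs[index])              # concatenate the stored PCR values
    quote = digest.digest()
    return {
        "quote": quote.hex(),
        "signature": sign_with_ak(quote).hex(),  # private part of the AK pair
        "timestamp": int(time.time()),
        "reboot_count": reboot_count,
    }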
An attestation policy is a set of expected values for different measurements that can be taken from a machine (such as one or more of the elements 106 to 108 described above). If a machine has a TPM, a policy can be a mapping between a PCR number and an expected value. An administrator can define policies in order for a machine to be considered in a trusted state. When attestation is carried out, the measurements can be checked against the expected values in the policy. The expected values are reference values that can be used to define a “correct” state of a system and/or can be used to detect changes in a system. If, when quoting, a machine stops satisfying a certain policy, this may indicate that what is measured by the policy has changed in the system.
The attestation server 102 may be responsible for obtaining quotes from the elements 106 to 108. For example, the attestation server 102 may be responsible for attesting the devices and checking the status of the relevant system (e.g. the system 100). During an attestation process, the attestation server 102 may compare values obtained by quoting an element to a defined attestation policy for the relevant element(s). Then, if measurements from an element no longer satisfy the relevant policy/policies, an action may be initiated (e.g. generating system alerts for administrators).
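A sketch of such a policy comparison is given below, with the policy represented, as described above, as a mapping from PCR number to expected value; the function names and the alerting callback are assumptions made for illustration.

from typing import Callable, Dict

def check_policy(policy: Dict[int, str], measurements: Dict[int, str]) -> bool:
    """Return True if every measured PCR matches the expected value in the policy."""
    return all(measurements.get(pcr) == expected for pcr, expected in policy.items())

def attest_element(element_id: str, policy: Dict[int, str],
                   measurements: Dict[int, str],
                   raise_alert: Callable[[str], None]) -> bool:
    """Compare quoted measurements to the policy and alert on a mismatch."""
    trusted = check_policy(policy, measurements)
    if not trusted:
        raise_alert(f"element {element_id} no longer satisfies its attestation policy")
    return trusted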
The algorithm 120 starts at operation 122, where a request is received from a first module (such as the attestation server 102) at a request receiving means of one of the elements of the system (such as one of the elements 106 to 108 of the system 100 described above). As described further below, the request may include a command, a nonce (to prevent replay attacks) and details of a cryptographic key for use in responding to the request. In addition, cryptographic structures may be provided by the relevant transport layer (e.g. secure sockets layer (SSL) or transport layer security (TLS)).
At operation 124, a response to said request is generated at a response generating means of the respective element of the system 100. The response may be generated at a trust agent of the respective element (such as one of the trust agents 110a, 112a and 114a described above). The response may include one or more of the following (depending on the received command): an identity of said element; a cryptographic hash of data representing configurations of said element; and capabilities relating to said element of the computing system.
At operation 126, the response is provided to the first module (such as the attestation server 102) in response to said request. As described further below, the response includes the nonce (as provided in the request) and is signed using the cryptographic key (as identified in the request).
Thus, in the system 100, the relevant trust agent (110a, 112a, 114a) may receive the request from the attestation server 102 and return the response to the attestation server. The relevant trust agent may, at least in part, generate the response to the request. Thus, the trust agent may be one or more of: the means for receiving the request (operation 122 discussed above); the means for generating the response (operation 124 discussed above); and the means for providing the response (operation 126 discussed above).
The message sequence 130 is implemented between a trust agent 132 (such as one of the trust agents 110a, 112a and 114a described above) and an attestation server 134 (such as the server 102 described above).
A request 136 is received at the trust agent 132 from the attestation server 134, implementing the operation 122. The request consists of a command (such as the get_identity, get_quote and get_capabilities commands discussed further below) and possible additional data, such as a nonce and a cryptographic key. Of course, other commands could also be implemented in example embodiments. The trust agent 132 processes the request 136 and runs any commands on the system needed for gathering the requested information (as indicated by the reference numeral 137).
Finally, the trust agent sends a response 138 to the attestation server with the requested information, implementing the operation 126. Details of example responses 138 are provided below.
As indicated above, the request 136 may include one or more of the get_identity, get_quote and get_capabilities commands.
A get_identity command may request the identity of the element that the trust agent 132 is running on (e.g. the identity of the relevant element 106 to 108). In the case of a device with a trusted platform module (TPM), the identity may take the form of public keys of the trusted platform module (e.g. public keys of endorsement key (EK) and attestation key (AK) pairs). Alternatively, or additionally, the identity may include metadata that can be used to identify the relevant machine, but may not be permanent identities (e.g. an IP address, MAC address, system information, OpenStack ID, etc.). Other implementations (e.g. non-TPM based implementations) are possible. For example, a single key may be provided. In some hardware security modules, for example, a single key, sometimes called an attestation key, may be provided.
A get_quote command may request the results of quoting or measuring an element on which the trust agent 132 is running, according to some policy indicated by the attestation server 134. The quote may take the form of a cryptographic hash of data representing configurations of said element and may be generated by a trusted platform module. In one embodiment, a cryptographic hash of data representing configurations of an element is a cryptographic hash of data representing hardware, firmware and/or software configurations of said element (as stored, for example, in one or more platform configuration registers).
A get_capabilities command may request information about the capabilities of the device, such as the trusted platform module (TPM). A response to a get_capabilities command may identify measurements (or other data) that can be provided to the attestation server. Thus, the capabilities may be used to decide what kind of measurements can be obtained by the attestation server 134 from the respective element. The capabilities information may also be used to identify properties of the TPM, such as the manufacturer or the installed firmware version.
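A sketch of a trust agent dispatching these three commands is given below; the command handlers, the response keys and the signing helper are assumptions made for illustration rather than a defined interface.

from typing import Callable

def handle_request(command: str, nonce: str,
                   get_identity: Callable[[], dict],
                   get_quote: Callable[[str], dict],
                   get_capabilities: Callable[[], dict],
                   sign: Callable[[bytes], bytes]) -> dict:
    """Run the requested command and return a signed response echoing the nonce."""
    if command == "get_identity":
        payload = get_identity()          # e.g. public EK/AK keys and metadata
    elif command == "get_quote":
        payload = get_quote(nonce)        # quote over the PCRs named in the policy
    elif command == "get_capabilities":
        payload = get_capabilities()      # what measurements this element can provide
    else:
        payload = {"error": "unknown command"}
    response = {"nonce": nonce, **payload}
    response["signature"] = sign(repr(sorted(response.items())).encode()).hex()
    return response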
The response 138 may include one or more of the following fields:
Of course, other fields may be provided instead of, or in addition to, some or all of the fields described above.
By way of example, in the case of a device with a TPM 2.0, a response to a get_identity request may take the following form:
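Purely as an illustrative sketch, and with all field names and values being assumptions rather than a defined format, such a response might resemble the following (expressed here as a Python dictionary):

# Purely illustrative: field names and values are assumptions, not a defined
# get_identity response format.
example_identity_response = {
    "nonce": "a1b2c3d4",                            # echoed from the request
    "ek_public": "-----BEGIN PUBLIC KEY-----...",   # endorsement key (public part)
    "ak_public": "-----BEGIN PUBLIC KEY-----...",   # attestation key (public part)
    "metadata": {"ip": "192.0.2.10", "openstack_id": "..."},
    "signature": "...",                             # signed with the attestation key
}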
For completeness,
The processor 302 is connected to each of the other components in order to control operation thereof.
The memory 304 may comprise a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD). The ROM 312 of the memory 304 stores, amongst other things, an operating system 315 and may store software applications 316. The RAM 314 of the memory 304 is used by the processor 302 for the temporary storage of data. The operating system 315 may contain code which, when executed by the processor, implements aspects of the algorithms 30, 40 and 120 or the message sequence 60 described above. Note that, in the case of a small device/apparatus, a memory suited to small-size usage may be used, i.e. a hard disk drive (HDD) or solid-state drive (SSD) is not always used.
The processor 302 may take any suitable form. For instance, it may be a microcontroller, a plurality of microcontrollers, a processor, or a plurality of processors.
The processing system 300 may be a standalone computer, a server, a console, or a network thereof. The processing system 300 and any needed structural parts may all be inside a device/apparatus such as an IoT device/apparatus, i.e. embedded in a very small size.
In some example embodiments, the processing system 300 may also be associated with external software applications. These may be applications stored on a remote server device/apparatus and may run partly or exclusively on the remote server device/apparatus. These applications may be termed cloud-hosted applications. The processing system 300 may be in communication with the remote server device/apparatus in order to utilize the software application stored there.
Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory, or any computer media. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a “memory” or “computer-readable medium” may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
Reference to, where relevant, “computer-readable storage medium”, “computer program product”, “tangibly embodied computer program” etc., or a “processor” or “processing circuitry” etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), signal processing devices/apparatus and other devices/apparatus. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware, such as the programmable content of a hardware device/apparatus, whether instructions for a processor, or configuration settings for a fixed function device/apparatus, gate array, programmable logic device/apparatus, etc.
As used in this application, the term “circuitry” refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Similarly, it will also be appreciated that the flow charts and message sequences of
It will be appreciated that the above described example embodiments are purely illustrative and are not limiting on the scope of the invention. Other variations and modifications will be apparent to persons skilled in the art upon reading the present specification.
Moreover, the disclosure of the present application should be understood to include any novel features or any novel combination of features either explicitly or implicitly disclosed herein or any generalization thereof and during the prosecution of the present application or of any application derived therefrom, new claims may be formulated to cover any such features and/or combination of such features.