The present technology relates to an information processing apparatus, a server, an information processing system, and an information processing method that predict a wireless environment by machine learning.
In recent years, information terminals such as smartphones equipped with a plurality of communication bearers have become mainstream. Here, a communication bearer means a series of physical or logical paths for transferring users’ normal information. In general, priorities are given to the respective communication bearers. Further, there is known a method of measuring a throughput of a communication bearer in use and switching to another communication bearer on the basis of a measurement result (e.g., see Patent Literature 1).
Patent Literature 1: Japanese Pat. Application Laid-open No. 2010-135951
In the current state, the precision of prediction of a wireless environment is insufficient, and, for example, communication quality degrades on individual communication paths. This lowers user satisfaction, for example, because a wireless environment whose communication status has degraded continues to be used.
It is an object of the present technology to provide an information processing apparatus, a server, an information processing system, and an information processing method that can improve the prediction precision of a wireless environment and, in a case of predicting the wireless environment by machine learning, can enhance a shared model securely and efficiently from the perspective of personal information protection.
In order to solve the above-mentioned problem, an information processing apparatus according to the present technology includes
The wireless environment to be predicted may be at least one of a state in which a communication status of a communication path to which a prediction target of the wireless environment is connected is deteriorated or a state in which characteristics of a communication path to which the prediction target of the wireless environment is not connected are not good.
The arithmetic processing unit may be configured to switch a communication path to be connected on the basis of a prediction result of a wireless environment.
Switching the communication path by the arithmetic processing unit may be switching between communication paths using different communication methods.
Switching the communication path by the arithmetic processing unit may be switching between different communication paths using a same communication method.
The arithmetic processing unit may be configured to switch the communication path at a particular timing.
The timing of switching the communication path may be at least any one of a timing when a communication traffic volume of an application becomes equal to or smaller than a threshold, a timing when a user does not use the information processing apparatus, or a timing depending on an attribute of a user.
The arithmetic processing unit may be configured to predict a relationship between information about time and position and a wireless environment.
The shared model is one classified for each cluster based on an attribute of a user.
The arithmetic processing unit may be configured to predict a wireless environment by using a composite model obtained by combining the result of the learning with the shared model acquired from the server.
A server according to another aspect of the present technology includes
The server may be constituted by a plurality of server apparatuses having a class relationship to each other, in which a server apparatus at an upper-level class of the plurality of server apparatuses may be configured to integrate shared models generated by server apparatuses in a lower-level class and generate a shared model for the upper-level class.
An information processing system according to another aspect of the present technology includes:
An information processing method according to another aspect of the present technology includes:
Hereinafter, embodiments according to the present technology will be described with reference to the drawings.
As for products equipped with a plurality of communication bearers (4G, 5G (Sub6, mmW), IEEE802.11 wireless LAN, Bluetooth (registered trademark) PAN, ZigBee (registered trademark), and the like), which are represented by smartphones, a priority is generally determined for each bearer. This priority can vary depending on communication quality. However, communication quality estimation and determination methods therefor are insufficient in the current state. For example, there have been cases where users cannot do web browsing and the like with comfort because the communication bearers are not suitably selected.
As one solution, for example, predicting the near-future quality of a communication bearer by machine learning and switching the communication bearer in a manner that depends on the prediction result has been studied. In this solution, in order to perform the machine learning, learning data is collected from information processing apparatuses such as users’ smartphones and development devices and is uploaded to a server that performs the learning. However, the data that can be collected from users is limited from the perspective of personal information protection. More specifically, there is a concern about transferring, for example, user positions, applications in use, user activity histories, sensor information, and the like, which are valid as learning data, to the server. Therefore, it has been considered that there is a limitation on building a shared model suitable for switching to a bearer having higher communication quality.
As a first embodiment according to the present technology, an information processing system 100 using federated learning for generating a shared model that determines deterioration of a communication status will be described.
Here, a learning result uploaded from the information processing apparatus 10 to the server 20 is uploaded as a weight value of each node or as difference data from the weight value of each node of a model at a time at which it is distributed from the server 20 to the information processing apparatus 10. Accordingly, the upload volume can be reduced. Further, since raw data does not leak from the information processing apparatus 10 when uploading the difference data, it is useful from the perspective of personal information protection. Further, owing to the upload volume reduction, it is suitable for learning on a large amount of data, and upload rounds for the shared model 5 can be performed more frequently. Therefore, a highly refined shared model 5 can be efficiently obtained.
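As a rough illustration of this difference-data upload (a minimal sketch only; the dictionary-of-arrays weight representation and the function name are assumptions, not part of the present technology), the per-node difference between the locally trained model and the model as distributed from the server can be computed as follows:

```python
import numpy as np

def compute_weight_delta(distributed_weights, locally_trained_weights):
    """Return per-node differences between the locally trained model and
    the model as it was distributed from the server.

    Both arguments are dicts mapping a node (layer) name to a NumPy array
    of weights; only these deltas are uploaded, never the raw learning data.
    """
    return {name: locally_trained_weights[name] - distributed_weights[name]
            for name in distributed_weights}

# Hypothetical example with two layers.
distributed = {"layer1": np.zeros((4, 4)), "layer2": np.zeros(4)}
trained = {"layer1": np.full((4, 4), 0.01), "layer2": np.full(4, -0.02)}
delta = compute_weight_delta(distributed, trained)  # this is what is uploaded
```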
The shared model 5 obtained in the server 20 in the above-mentioned manner is distributed to the information processing apparatus 10. Using the acquired shared model 5, the information processing apparatus 10 predicts (6) a wireless environment, for example, predicts that a communication status of a wireless path to which the information processing apparatus 10 is connected will be deteriorated or that communication characteristics of a wireless path to which the information processing apparatus 10 is not connected will lower. Further, in a case where it is predicted that the communication status of the wireless path connected will be deteriorated, the information processing apparatus 10 switches a wireless path to be connected between wireless paths using different communication methods, for example, or switches between different wireless paths using the same communication method.
Hereinafter, the first embodiment according to the present technology will be described more specifically.
The information processing apparatus 10 includes a communication unit 12 and a learning and prediction unit 13. The communication unit 12 has a plurality of communication bearers and includes a communication path control unit 11 that performs control to switch a communication bearer to be used as appropriate. The learning and prediction unit 13 learns a model for predicting deterioration of a communication status of a communication bearer to which the information processing apparatus 10 is currently connected or can be connected in the communication unit 12 and predicts deterioration of the communication status of the communication bearer by using a shared model generated by the server 20 on the basis of the learning result of the model. The learning and prediction unit 13 is constituted by a central processing unit (CPU) that is a first arithmetic processing unit, a memory for storing programs and data to be executed by the CPU, and the like.
The learning and prediction unit 13 includes a learning joining management unit 131, a learning unit 132, a data storage unit 133, a learning result upload unit 134, a shared-model download unit 135, and a prediction unit 136.
The learning joining management unit 131, for example, sends an instruction to join in model learning to the server 20, acquires a model permitted to be learned from the server 20, and manages a timing of learning in the information processing apparatus 10.
The learning unit 132 learns the acquired model. The model learning is performed using, for example, data about communication parameters and the like indicating a communication status for each communication bearer, which is prestored in the data storage unit 133.
The learning result upload unit 134 uploads a result of the model learning (information about the local model) to the server 20.
The shared-model download unit 135 downloads a shared model from the server 20 through inquiry to the server 20 as to whether or not it is possible to download the shared model.
Using the downloaded shared model, the prediction unit 136 predicts deterioration of a communication status of a communication path that the information processing apparatus 10 is using or can use.
The communication parameters used in learning will be described.
For example, in a case where the communication bearer is IEEE802.11, there can be communication parameters as follows, for example.
A value obtained by combining some of these parameters and processing them by an arithmetic operation may also be used.
The parameter indicating the busy communication status includes a round trip time (RTT) for a gateway of a connection access point, an average throughput, a TCP error rate, the number of users connected to the access point, a CCA busy time of the connection access point, the number of packets discarded after cancelling sending, information about whether the number of delayed packets in a sending buffer has exceeded a certain threshold, and the like.
In addition to the above-mentioned communication parameters, the following data that affects the communication may be used.
For learning a model that estimates an access point at which communication congestion does not occur on the basis of the above-mentioned parameters, it is sufficient to add a correct label 1 to a congested access point, add a correct label 0 to a non-congested access point, and perform learning.
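A minimal sketch of this labeling and learning step is shown below, assuming hypothetical feature values and using a simple logistic regression as a stand-in for the models discussed in the following paragraph:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors per access point: [RTT to gateway (ms),
# average throughput (Mbps), TCP error rate, connected users, CCA busy ratio].
features = np.array([
    [120.0,  2.1, 0.08, 34, 0.71],   # congested access point     -> label 1
    [ 15.0, 48.0, 0.01,  3, 0.12],   # non-congested access point -> label 0
    [ 95.0,  5.5, 0.05, 21, 0.64],   # congested                  -> label 1
    [ 22.0, 35.0, 0.02,  6, 0.18],   # non-congested              -> label 0
])
labels = np.array([1, 0, 1, 0])      # correct labels as described above

model = LogisticRegression().fit(features, labels)
print(model.predict_proba(features)[:, 1])  # estimated congestion probability
```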
The model used for learning may include simple regression analysis and the like besides neural networks such as a convolutional neural network (CNN) and a long short-term memory (LSTM).
As shown in the figure, the server 20 includes a learning joining instruction unit 21, a learning-result integration unit 22, and a shared-model distribution unit 23.
The learning joining instruction unit 21 determines whether or not to allow the information processing apparatus 10 that has requested to join in learning to join, and notifies the information processing apparatus 10 allowed to join in learning of a learning joining request.
The learning-result integration unit 22 combines learning results (information about local models) uploaded from a plurality of information processing apparatuses 10 by averaging and the like or repeats the combining, thereby generating a shared model.
The shared-model distribution unit 23 distributes a shared model in accordance with a download request for the shared model from an information processing apparatus 10.
The learning joining instruction unit 21, the learning-result integration unit 22, and the shared-model distribution unit 23 are constituted by a central processing unit (CPU) that is a second arithmetic processing unit, a memory for storing programs and data to be executed by the CPU, and the like.
Next, the following operations of this information processing system 100 will be described.
The learning joining management unit 131 sends the identifier of the model retained in its own information processing apparatus 10 and a hash value indicating generation information of this model to the server 20. With respect to the model specified by the identifier received from the information processing apparatus 10, the learning joining instruction unit 21 of the server 20 compares the hash value of the corresponding model retained in the server 20 with the hash value notified from the information processing apparatus 10. In a case where the two hash values are different, the learning joining instruction unit 21 determines that the model retained in the server 20 is newer and sends signaling that permits download of this model, for example, an HTTP response code 200 or the like, to the information processing apparatus 10. In a case where the two hash values are the same, the learning joining instruction unit 21 of the server 20 sends signaling indicating that it is impossible to download this model, for example, an HTTP response code 400 or the like, to the information processing apparatus 10. When receiving a notification permitting the download, the learning joining management unit 131 of the information processing apparatus 10 requests the server 20 to download this model. The learning joining instruction unit 21 in the server 20 sends this model to the information processing apparatus 10 for download in accordance with this request.
The learning joining instruction unit 21 in the server 20 manages version information represented by the update time and date or the like of each model as the generation information and notifies each information processing apparatus 10 of the version information of the model at constant time intervals. The learning joining management unit 131 in the information processing apparatus 10 compares the version information notified from the server 20 with the version information of the model that the information processing apparatus 10 has and requests the server 20 to download the model in a case where the version information notified from the server 20 is newer. The learning joining instruction unit 21 in the server 20 sends this model to the information processing apparatus 10 for download in accordance with this request.
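The generation checks described in the two procedures above might be sketched as follows (illustrative only; the function names are hypothetical, and the version comparison assumes version strings that sort chronologically, such as ISO-formatted update dates):

```python
def decide_download(server_hash: str, client_hash: str) -> int:
    """Server-side check of the generation information reported by the
    apparatus: 200 permits download of the newer model, 400 refuses it."""
    return 400 if server_hash == client_hash else 200

def client_needs_update(server_version: str, client_version: str) -> bool:
    """Client-side check when the server periodically notifies version
    information (string comparison assumes, e.g., ISO-formatted dates)."""
    return server_version > client_version
```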
It should be noted that in a case where learning is not performed on the model acquired from the server 20 for a predetermined continuous time, the model may be automatically deleted from the information processing apparatus 10.
In the above-mentioned manner, a model is downloaded to the information processing apparatus 10. The server 20 controls a timing at which the model is learned actually.
That is, in a case where the information processing apparatus 10 enters an environment in which the information processing apparatus 10 can favorably perform information processing for learning (YES Step S1 in
Examples of the environment in which the information processing apparatus 10 can favorably perform information processing for learning can include a charging duration, a duration in which a charge-free communication bearer is used, and a timing when user’s processing is not performed (e.g., the display is off and it is suspended). It is desirable that the user can set this environment arbitrarily through a graphical user interface.
Examples of information included in the above-mentioned learning joining permission notification can include a charge status (charge rate or the like), a time of learning in which the information processing apparatus 10 joined recently, the amount of learning data in the information processing apparatus 10, and apparatus information such as a country code, a ZIP code, a cluster identifier, and a model name.
When the learning joining instruction unit 21 in the server 20 receives a learning joining permission notification from the information processing apparatus 10 (Step S3 in
The learning joining instruction unit 21 in the server 20 sends a learning joining request notification to the information processing apparatus 10 determined to be allowed to join in learning (Step S5 in
When the learning joining management unit 131 in the information processing apparatus 10 receives the learning joining request notification (YES in Step S6 of
The learning joining instruction unit 21 in the server 20 sends a learning joining non-permission notification to an information processing apparatus 10 determined not to be allowed to join in learning (Step S8 in
After the learning unit 132 in the information processing apparatus 10 performs model learning, the learning result upload unit 134 sends to the server 20 a weight value (difference data) of each node of a model that is a result of the learning together with the number of pieces of data used for the learning, an identifier of the model that is the learning target, and the generation information such as hash value and version information. Using the learning-result integration unit 22 in the server 20, the server 20 generates a shared model integrating the learning result received from the information processing apparatus 10 by, for example, averaging with other learning results.
On the basis of the generation information such as the hash value and version information of the model that is the learning target sent with the learning result from the information processing apparatus 10, the learning-result integration unit 22 in the server 20 determines whether the learning result is data useful for generating the shared model. For example, the learning-result integration unit 22 in the server 20 checks whether the generation information of the model sent with the learning result from the information processing apparatus 10 is identical to the generation information of the model that the server 20 currently retains. Here, in a case where they are not identical, the learning result uploaded from the information processing apparatus 10 is discarded because the learning result uploaded from the information processing apparatus 10 is a learning result of a model at a generation older than the generation of the model that the server 20 currently retains. Further, in a case where the generation information of the model sent with the learning result from the information processing apparatus 10 is identical to the generation information of the model that the server 20 currently retains, the learning result uploaded from the information processing apparatus 10 is the learning result of the model that the server 20 currently retains. In this case, the learning-result integration unit 22 generates a shared model by for example averaging with other learning results, determining that the learning result sent from the information processing apparatus 10 is data useful for generating the shared model.
Although, for example, federated averaging, a federated learning matching algorithm, and the like can be used as an integration method for the learning results, other methods may be used. In addition, the learning-result integration unit 22 in the server 20 may adjust the weighting for reflecting each learning result in the shared model on the basis of the amount of learning data. More specifically, as one method, the value of the weighting for reflecting a learning result in the shared model is increased as the amount of learning data becomes larger. The model thus generated on the basis of the result obtained by integrating more learning results is defined as a highly refined shared model in the next generation.
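A minimal sketch of such an integration, assuming weight-difference dictionaries and weighting proportional to the amount of learning data (the exact integration method is not limited to this), is:

```python
import numpy as np

def federated_average(learning_results):
    """Integrate uploaded learning results into a shared-model update.

    learning_results: list of (weight_delta_dict, num_samples) tuples, where
    each delta dict maps a node name to a NumPy array. Results backed by more
    learning data receive a larger weight, as described above.
    """
    total = sum(n for _, n in learning_results)
    names = learning_results[0][0].keys()
    return {name: sum(delta[name] * (n / total) for delta, n in learning_results)
            for name in names}

# Hypothetical example: two apparatuses with different amounts of data.
r1 = ({"layer1": np.full((2, 2), 0.02)}, 800)
r2 = ({"layer1": np.full((2, 2), -0.01)}, 200)
shared_update = federated_average([r1, r2])  # applied to the current shared model
```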
The shared-model distribution unit 23 in the server 20 sets models that have finished learning in a predetermined number of rounds (e.g., 100 rounds or the like) or have particular learning precision (e.g., precision of 95% or more or the like) or models that ensure both as shared models that can be distributed.
The shared-model download unit 135 in the information processing apparatus 10 inquires of the server 20 about the presence/absence of a shared model newer than the shared model retained by the information processing apparatus 10, for example, at constant time intervals (e.g., once a week or the like) (Step S11 in
When the shared-model distribution unit 23 in the server 20 receives the inquiry from the information processing apparatus 10 (Step S12 in
When the shared-model download unit 135 in the information processing apparatus 10 receives the shared-model download request from the server 20 (YES in Step S15 in
The shared model downloaded by the shared-model download unit 135 in the information processing apparatus 10 is installed to the prediction unit 136. A plurality of shared models can be installed in the prediction unit 136 and a plurality of inference processes can be performed using the plurality of shared models. For example, a model for predicting a degree of degradation of IEEE802.11 communication and a model for predicting congestion of the access point can perform inference processes or the like at the same time.
The prediction unit 136 acquires various types of data such as communication parameters regarding each communication bearer of the communication unit 12, information of an internal sensor 31 and an external sensor 32, and also, information about an application that affects the communication (Step S21), inputs them to the shared model (Step S22), and calculates a score of output of the shared model (Step S23). Here, assuming output of a shared model for predicting a degree of degradation of IEEE802.11 communication for example, in a case where the score of output of the model exceeds a threshold (YES in Step S24), the prediction unit 136 determines to issue a communication bearer switching request to the communication path control unit 11 (Step S25). For example, the prediction unit 136 issues a communication path switching request for instruction to switch to a communication bearer other than IEEE802.11 to the communication path control unit 11 in the communication unit 12. In accordance with the communication path switching request, the communication path control unit 11 in the communication unit 12 switches the communication path used by the information processing apparatus 10.
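The prediction-and-switching flow of Steps S21 to S25 might look as follows in outline (a sketch only; the predict and switch_from interfaces are hypothetical placeholders for the shared model and the communication path control unit 11):

```python
DEGRADATION_THRESHOLD = 0.8  # assumed value; the actual threshold is not specified here

def predict_and_switch(shared_model, communication_params, sensor_data, app_info,
                       communication_path_control_unit):
    """Collect input data (Step S21), feed it to the shared model (Step S22),
    score the output (Step S23), and request switching when the score exceeds
    the threshold (Steps S24-S25)."""
    features = communication_params + sensor_data + app_info
    score = shared_model.predict(features)          # hypothetical model interface
    if score > DEGRADATION_THRESHOLD:
        # Ask the communication path control unit to leave IEEE802.11.
        communication_path_control_unit.switch_from("IEEE802.11")
        return True
    return False
```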
The communication path control unit 11 is configured to switch the communication path at a particular timing. The timing of switching the communication path is selected avoiding as much as possible timings at which communication for which continuity should be ensured is highly likely to be performed. Examples of the timing can include a timing at which a communication traffic volume of an application becomes equal to or smaller than a threshold, a timing when the user does not use the information processing apparatus 10 (e.g., the display is off and it is suspended), and a timing at which the communication traffic volume is statistically known to lower depending on the user’s attributes (contracted plan of communication, residence, gender, age, occupation, nationality, and the like).
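A simple timing check along these lines could be sketched as follows (the threshold value and function names are assumptions for illustration):

```python
TRAFFIC_THRESHOLD_BPS = 50_000  # assumed value for illustration

def is_safe_switching_timing(app_traffic_bps, display_on, statistically_low_traffic):
    """Return True only at timings unlikely to interrupt communication that
    requires continuity: low application traffic, the apparatus not being used
    (e.g., display off), or a period statistically known to have low traffic
    for the user's attributes."""
    return (app_traffic_bps <= TRAFFIC_THRESHOLD_BPS
            or not display_on
            or statistically_low_traffic)
```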
As described above, in accordance with the present embodiment, the shared model for predicting a communication path or the like the communication status of which is degraded is learned by federated learning in the information processing apparatus 10 of each user. In this manner, the model can be enhanced securely and efficiently without sending sensitive information such as the user’s personal information and the information about the information processing apparatus 10 as learning data to an external device from the information processing apparatus 10 of the user. Further, since a learning result obtained by each information processing apparatus 10 is uploaded to the server 20 as difference data relative to the weight value of each node of the original model that is the learning target, the raw data is not uploaded to the server 20, and it is possible to protect the personal information and reduce the upload volume.
It should be noted that in the information processing apparatus 10 described above, the learning and prediction unit 13 does not necessarily need to be located in the information processing apparatus 10, and for example, a configuration in which the learning and prediction unit 13 is provided in an edge server or a cloud server, data necessary for learning such as communication parameters is loaded from the information processing apparatus 10, and learning results and prediction results are sent to the information processing apparatus 10 may be employed.
In general, after connecting to an access point of IEEE802.11, the communication status can be deteriorated when the user moves away from the access point or a radio wave transmission environment surrounding the access point is degraded. For predicting such communication degradation at the access point after connection, the shared model learned in a distributed manner by federated learning can also be used. In this case, it is desirable also from the perspective of personal information protection because positional information of the information processing apparatus 10, an identifier (BSSID) of each access point, and the like, which are used for model learning, do not leak from the information processing apparatus 10.
In the present embodiment, BSSID, SSID, the number of packets discarded after cancelling sending, the number of successfully sent packets, the number of resent packets, the number of successfully received packets, CCA Busy Time, Tx Time, Rx Time, radio on time, contention time, channel width, and the like after connection are input to the model as communication parameters of learning data with a correct label every n-seconds. In the learning data with the correct label that is input every n-seconds, correct labels 1 meaning that they are correct are added to learning data n-seconds before disconnection and learning data the communication status of which is degraded and correct labels 0 meaning that they are incorrect are added to other learning data. Other processing is performed in the same flow as the above-mentioned first embodiment. For example, the communication parameters collected by the communication unit after connection to IEEE802.11 are input to the shared model. In a case where a value output from the shared model is equal to or larger than a threshold, a request to switch the communication path from IEEE802.11 to another communication bearer is issued to the communication path control unit in the communication unit. In this manner, the communication bearer is switched.
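The n-second labeling rule described above might be expressed as follows (a sketch; the index handling and names are assumptions):

```python
def label_samples(samples, disconnection_index, degraded_indices):
    """Attach correct labels to communication-parameter samples collected every
    n seconds after connection: label 1 for the sample immediately preceding
    the disconnection (i.e., n seconds before it) and for samples whose
    communication status is degraded, label 0 for all others.

    samples: list of per-interval feature vectors (BSSID, CCA busy time, etc.).
    """
    labeled = []
    for i, sample in enumerate(samples):
        is_positive = (i == disconnection_index - 1) or (i in degraded_indices)
        labeled.append((sample, 1 if is_positive else 0))
    return labeled
```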
In general, after connecting to an access point of IEEE802.11, the communication status can be deteriorated when the user moves away from the access point or a radio wave transmission environment surrounding the access point is degraded. For predicting such communication degradation at the access point due to differences in the user’s activity after connection, the shared model learned in a distributed manner by federated learning can also be used. In this case, it is desirable also from the perspective of personal information protection because output of sensors, for example, output of an acceleration sensor, positional information, output of a pedometer, a pulsation rate, blood pressure information, and the like, which are related to the user’s personal information, do not leak from the information processing apparatus 10.
In the present embodiment, output of an acceleration sensor, positional information, output of a pedometer, a pulsation rate, blood pressure information, output of an illuminance sensor, output of an atmospheric pressure sensor, and the like are input to the shared model as communication parameters of learning data with a correct label every n-seconds. In such learning data with the correct label that is input every n-seconds, correct labels 1 meaning that they are correct are added to learning data n-seconds before disconnection and learning data the communication status of which is degraded and correct labels 0 meaning that they are incorrect are added to other learning data. Other processing is performed in the same flow as the above-mentioned first embodiment. For example, the communication parameters collected by the communication unit after connection to IEEE802.11 are input to the shared model. In a case where a value output from the shared model is equal to or larger than a threshold, a request to switch the communication path from IEEE802.11 to another communication bearer is issued to the communication path control unit in the communication unit. In this manner, the communication bearer is switched.
A switching policy of the communication bearer differs depending on a user. Therefore, the switching policy can be classified into several patterns. However, it has been difficult to express switching timings matching users’ preference with a single model.
This problem can be solved by preparing, in the server 20, a shared model for each of clusters classified by the user’s characteristics, for example, gender, age, and the like, and by the information processing apparatus 10 acquiring and learning a shared model matching the user’s characteristics from the server 20. More particularly, for example, when a learning joining permission notification is sent from the information processing apparatus 10 to the server 20, the server 20 is notified of an identifier of a cluster matching the user’s characteristics. Accordingly, a shared model associated with the identifier is downloaded to the information processing apparatus 10 from the server 20, and learning in the learning unit 132 of the information processing apparatus 10 is performed.
Although the case where the shared model is clustered on the basis of the user’s characteristics has been described above, the shared model may be clustered by a used network carrier, its plan, and the like.
In this modified example 2, in order to add characteristics depending on the user of the information processing apparatus 10 and the environment to the shared model, the shared model acquired from the server 20 by the shared-model download unit 135 is combined with a local model learned by the learning unit 132 in the information processing apparatus 10 and stored in the data storage unit 133. As a combining method, for example, the weight of the shared model and the weight of the local model are combined at a particular rate. For example, provided that the weight of the shared model is denoted by w_central, the weight of the local model is denoted by w_user, and the degree of fusion is denoted by α, a weight w of the model finally used by the prediction unit 136 is as follows:
w = α × w_central + (1 − α) × w_user
By changing the variable α, the behavior can be varied from a behavior close to the local model to a behavior equal to the shared model. The variable α may be set by the user or may be automatically changed in accordance with the system’s status.
Further, the shared model and the local model may both be used, their results may be combined, and an inference result may be derived. For example, provided that the output of the local model is denoted by y_user, the output of the shared model is denoted by y_central, and the degree of fusion is denoted by α, the output y finally used by the prediction unit 136 is as follows:
y = α × y_central + (1 − α) × y_user
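Both combining approaches can be sketched as follows, assuming the convex-combination form given above (names are illustrative, not part of the present technology):

```python
import numpy as np

def fuse_weights(w_central, w_user, alpha):
    """Combine shared-model and local-model weights at a particular rate:
    w = alpha * w_central + (1 - alpha) * w_user
    (alpha = 1 behaves as the shared model, alpha = 0 as the local model)."""
    return {name: alpha * w_central[name] + (1 - alpha) * w_user[name]
            for name in w_central}

def fuse_outputs(y_central, y_user, alpha):
    """Alternative approach: run both models and combine their outputs,
    y = alpha * y_central + (1 - alpha) * y_user."""
    return alpha * np.asarray(y_central) + (1 - alpha) * np.asarray(y_user)
```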
This modified example relates to a technology that predicts a relationship between positional information and time and a communication status for each base station such as a cellular base station and a carrier Wi-Fi base station.
In this case, a model having the positional information and time as input is used as the model for each base station. Further, cellular information (number of component carriers, an average rate (MCS: modulation and coding scheme), capability (LTE/HSPA+/GSM), signal strength, the number of MIMO layers, the number of hours allocated for communication, the number of actual resource blocks, received/sent packet counter values, the number of successes of sending, the number of successes of receiving, the number of resent frames (MAC), RLC numbers, the number of interface errors, a throughput (PHY/IP)), a TCP error rate, RTT for a particular host, a communication error displayed on an application, a delayed communication status of a browser or the like, and the like are used as the communication parameters of the learning data. In such learning data, the correct label 1 is added to learning data representing the deterioration of the communication status and the correct label 0 is added to learning data not representing the deterioration of the communication status.
When the information processing apparatus 10 located in a certain cell uploads a result of learning in the cell to the server 20, a cell ID is added to the learning result. Accordingly, the learning-result integration unit 22 of the server 20 integrates, for each cell ID, learning results uploaded by the respective information processing apparatuses 10 and generates a shared model with the cell ID. The shared model thus generated is stored in the server 20 or base station and is distributed to the information processing apparatus 10 from the server 20 or base station in accordance with a request specifying the cell ID from the shared-model download unit 135 of the information processing apparatus 10.
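The per-cell grouping of uploaded learning results might be sketched as follows (names are hypothetical; the per-cell integration itself can reuse, for example, the weighted averaging sketched earlier):

```python
from collections import defaultdict

def group_uploads_by_cell_id(uploads):
    """Group uploaded learning results by the cell ID attached to them, so that
    one shared model can be generated per cell.

    uploads: list of (cell_id, learning_result) tuples.
    Returns {cell_id: [learning_result, ...]}.
    """
    grouped = defaultdict(list)
    for cell_id, result in uploads:
        grouped[cell_id].append(result)
    return dict(grouped)
```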
In the prediction unit 136 in the information processing apparatus 10, positional information and times are comprehensively input to the shared model for each base station and prediction results of relationships between the positional information and times and communication degradation statuses of the base station are output. Accordingly, it is possible to determine positions and times of base stations at which the communication status is predicted to be deteriorated without actually measuring radio wave environments and communication quality with probes. Further, a shared model having high prediction precision can be obtained for each base station by distributed learning based on federated learning.
The present technology can also be applied to prediction of an activity status of the user of the information processing apparatus 10.
In this modified example 4, distributed learning based on federated learning is performed with respect to a shared model having sensor information and wireless environment information, such as cellular statistics or statistics about available IEEE802.11 networks, as input and having the user’s status (stopped/moving) and positional information as output. The user’s status may be labelled by the user answering questions on a user interface or may be derived from other information. For example, services may be provided that, in accordance with prediction results of the user’s status and positional information output from the shared model when sensor information and wireless environment information are input, suggest advertisements and coupons for nearby stores, or provide the user with vacancy information for a nearby restroom, a timetable of nearby transportation, or the like.
The server 20 may be constituted by a plurality of servers having a class relationship to each other. In this case, a server at an upper-level class may be configured to integrate shared models generated by servers at a lower-level class and generate a shared model for the upper-level class. For example, the server 20 is placed for each country, region, or municipality. Then, the server 20 at the municipality level integrates learning results uploaded from the plurality of information processing apparatuses 10 on a municipality-by-municipality basis, generates a shared model for the municipality level, and uploads an integrated learning result to the server 20 at an upper level, for example, the region level. The server 20 at the region level further integrates a plurality of integrated learning results for the municipality level uploaded by the server 20 of each municipality, generates a shared model for the region level, and uploads an integrated learning result to the server 20 at an upper level, for example, the country level. Finally, the server 20 at the country level further integrates a plurality of integrated learning results for the region level and generates a shared model for the country level.
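A rough sketch of this bottom-up integration across municipality, region, and country levels is shown below (unweighted element-wise averaging is used here purely as a placeholder for the integration performed at each level):

```python
import numpy as np

def mean_models(models):
    """Simple unweighted element-wise averaging, standing in for the
    integration performed at each level (weighted variants are also possible)."""
    names = models[0].keys()
    return {name: np.mean([m[name] for m in models], axis=0) for name in names}

def aggregate_hierarchy(municipal_results):
    """Illustrative bottom-up integration: municipality -> region -> country.

    municipal_results: {region: {municipality: [local learning result, ...]}}
    """
    regional_models = {}
    for region, municipalities in municipal_results.items():
        municipal_models = [mean_models(results) for results in municipalities.values()]
        regional_models[region] = mean_models(municipal_models)   # region-level shared model
    country_model = mean_models(list(regional_models.values()))   # country-level shared model
    return regional_models, country_model
```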
In accordance with a download request from the information processing apparatus 10, the server 20 at each class sends a shared model for the level that the server 20 manages to the information processing apparatus 10. Accordingly, a shared model reflecting regional characteristics is obtained.
It should be noted that the present technology may also take the following configurations.
10 information processing apparatus
11 communication path control unit
12 communication unit
13 learning and prediction unit
20 server
21 learning joining instruction unit
22 learning-result integration unit
23 shared-model distribution unit
100 information processing system
131 learning joining management unit
132 learning unit
133 data storage unit
134 learning result upload unit
135 shared-model download unit
136 prediction unit
Number | Date | Country | Kind
--- | --- | --- | ---
2020-071258 | Apr 2020 | JP | national

Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/JP2021/013920 | 3/31/2021 | WO |