The present invention relates to an information processing system and an information processing program that output a fair prediction result or determination result.
A technology of machine learning or artificial intelligence (hereinafter, referred to as AI) that outputs a prediction or determination result by learning complicated patterns from past performance data has a characteristic of faithfully learning determination results that include human prejudice or unfair past conventions. Therefore, in recent years, for example, AI for automatically determining whether or not to hire an applicant has had the serious problem of performing unfair or discriminatory determination, such as applying a higher score to males than to females or producing a greatly different hiring rate between genders. AI that performs such unfair or discriminatory prediction or determination is referred to as biased AI. Regarding such problems, there is, for example, means for avoiding discrimination by not including sensitive information such as gender and race in the input information given to the AI. However, the AI may automatically generate an intermediate feature amount corresponding to, for example, gender from input information such as body height and income, and thus biased AI can still be created, which means that excluding the sensitive information is not a substantial solution.
On the other hand, in the related art, there is a technology of controlling a threshold value of the AI determination such that a statistical value of the determination result of the AI becomes a desired value, for AI that does not include the sensitive information in the input information described above but is nevertheless biased (Non-Patent Document 1). In Non-Patent Document 1, for example, in an AI application for determining a finance loan, AI that tends not to provide a loan to females is counter-biased by decreasing the threshold value for determining the finance loan for females so that the finance loan is more likely to be provided, and thus a statistical value of the AI determination result relevant to the ratio of finance loans provided to males and females is controlled to be a desired value.
In Non-Patent Document 1, an assessment threshold value of the AI is controlled on the basis of sensitive individual information of a user, and the determination result is adjusted. That is, in a system according to Non-Patent Document 1, the user is required to input the sensitive individual information to the system.
When providing a service utilizing the AI, requiring the user to input the sensitive individual information increases the anxiety of the user with respect to using the service, since it is not clear to the user how the input information is actually used. In addition, in a case where the sensitive individual information of the user continues to remain in the service system, anxiety about the misuse or leakage of the individual information remains, and thus, the psychological burden on the user of using the service increases.
An object of the present invention is to attain an AI system that outputs a fair assessment result without requiring a service user to input sensitive individual information.
In a preferred example of an information processing system of the present invention, the information processing system is configured by including a first predicting unit which outputs an assessed value from input information not including sensitive attribute information that a user has decided not to input, a second predicting unit which has been trained in advance using teacher data to estimate the sensitive attribute information that the user has decided not to input, and estimates the sensitive attribute information from the input information not including the sensitive attribute information, and a first quantizing unit which, on the basis of an estimated value of the sensitive attribute information obtained from the second predicting unit, adjusts the assessed value output by the first predicting unit, and outputs an assessment result.
In addition, in a preferred example of an information processing program of the present invention, the information processing program is configured by allowing a computer to function as: a first predicting section which outputs an assessed value from input information not including sensitive attribute information that a user has decided not to input; a second predicting section which has been trained in advance using teacher data to estimate the sensitive attribute information that the user has decided not to input, and estimates the sensitive attribute information from the input information not including the sensitive attribute information; a third predicting section which inputs the input information not including the sensitive attribute information and an estimated value of the sensitive attribute information output by the second predicting section, and outputs a reference assessed value that predicts the assessed value output by the first predicting section; a reference predicting analyzing section which inputs the input information not including the sensitive attribute information and the estimated value of the sensitive attribute information output by the second predicting section, and outputs contribution degree information of each input feature amount with respect to the reference assessed value of the third predicting section; an attribute influence adjusting section which inputs the assessed value output by the first predicting section, the reference assessed value output by the third predicting section, the contribution degree information of each of the input feature amounts output by the reference predicting analyzing section, and threshold value information, and outputs an adjusted assessed value adjusted to remove an influence of the sensitive attribute information from the assessed value when a distance between the assessed value and the reference assessed value is less than a distance threshold value; and a second quantizing section which calculates a fourth threshold value on the basis of the estimated value of the sensitive attribute information, a first threshold value for adjusting an assessed value with respect to a preferential attribute, a second threshold value for adjusting an assessed value with respect to a non-preferential attribute, and information relevant to a degree of the sensitive attribute information included in the adjusted assessed value output by the attribute influence adjusting section, and adjusts the adjusted assessed value output by the attribute influence adjusting section and outputs an assessment result on the basis of the fourth threshold value.
According to the present invention, since it is not necessary for a service user to input sensitive individual information to a system, the psychological burden on the user of using the service can be reduced. In addition, in the information processing system and the information processing program according to the present invention, since it is not necessary to input the sensitive individual information, even in a case where individual information is leaked, the damage to the user can be reduced from the viewpoint of security.
Hereinafter, Examples will be described with reference to the diagrams. The same reference numerals will be applied to the same constituents, and the repeated description thereof will be omitted.
Here, Examples will be described with an information processing system using label prediction type AI for assessing whether or not to provide a finance loan as an example. In this information processing system, in a case where an assessment result of the finance loan with respect to a user is set to Y, the finance loan is available (that is, a result that is preferable for the user) when Y=1, and the finance loan is not available (that is, a result that is not preferable for the user) when Y=0.
In addition, in a case where sensitive attribute information of the user (attribute information that, among individual information, may particularly lead to discrimination, that is, attribute information associated with a history of being discriminated against in the past) is set to Si, Si=0 indicates that the user belongs to a societally preferential group (also referred to as the preferential attribute), and Si=1 indicates that the user belongs to a societally non-preferential group (also referred to as the non-preferential attribute), for example, a group that was a target of discrimination in the past. For example, in a case where the sensitive attribute information is “gender” S1, S1=1 corresponds to females, and S1=0 corresponds to males. For example, in a case where the sensitive attribute information is “race (the color of the skin)” S2, S2=1 corresponds to black people, and S2=0 corresponds to white people.
In addition, an adjustment parameter and a constraint condition are set in advance and stored in a storing unit (not illustrated). When executing the information processing system 10, the adjustment parameter and the constraint condition are used by being suitably read out from the storing unit.
The information processing system 10 can be configured on a general-purpose calculator, and a hardware configuration thereof is not illustrated, but each of the function units is attained by loading a program stored in the storing unit to a RAM and by executing the program with a CPU.
The predicting unit 110 is a processing unit of AI that outputs a label predicted value (that is, assessed value information 111) on the basis of input information 11. The input information 11 is the individual information of the user that is used in the finance loan assessment, but does not include the sensitive attribute information Si (sensitive attribute information listed in advance, when operating the information processing system 10, as individual information that the user to whom the service is provided is not required to input). The predicting unit 110, for example, is a predictor using deep learning, and in this Example, an output value thereof (that is, an assessed value y) is a continuous value of “0” to “1” normalized by a sigmoid function.
The hidden attribute predicting unit 120 is a processing unit that outputs an estimated value si′ of the sensitive attribute information Si of the user (that is, estimated sensitive attribute information 121) on the basis of the input information 11. The hidden attribute predicting unit 120, for example, is a predictor using deep learning, and can be created by using, as training data, the same feature amounts as the input feature amounts of the predicting unit 110 as input and the corresponding sensitive attribute information Si (the sensitive attribute information listed in advance as the individual information that the user to whom the service is provided is not required to input) as an answer. Here, the training data applied to this learning does not need to be identical to the training data used for creating the predicting unit 110.
In a case where there are a plurality of sensitive attribute information items Si, the hidden attribute predicting unit 120 includes a “hidden attribute predicting unit 1 predicting s1′”, a “hidden attribute predicting unit 2 predicting s2′”, and the like.
In this Example, the estimated value si′ of the sensitive attribute information Si is a continuous value of “0” to “1” normalized by a sigmoid function. As described above, in Non-Patent Document 1, a true value that the user is required to input is given as the sensitive attribute information, and thus, the value is a discrete value of “0” or “1”, whereas the sensitive attribute information in this Example is given by the estimated value si′, and thus, it is necessary to treat a continuous value indicating the degree of confidence of the estimation of the sensitive attribute information, which is one of the major differences between Non-Patent Document 1 and this Example.
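For reference, the following is a minimal sketch, in Python, of the relationship between the predicting unit 110 and the hidden attribute predicting unit 120 described above. Simple logistic (sigmoid) models stand in for the deep-learning predictors; the class name, weights, and feature values are illustrative assumptions and not part of this Example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SimplePredictor:
    """Stand-in for a deep-learning predictor whose output is normalized
    to the range "0" to "1" by a sigmoid function."""
    def __init__(self, weights, bias):
        self.weights = np.asarray(weights, dtype=float)
        self.bias = float(bias)

    def predict(self, x):
        # x: feature vector of the input information 11 (no sensitive attribute Si)
        return float(sigmoid(np.dot(self.weights, np.asarray(x, dtype=float)) + self.bias))

# Predicting unit 110: outputs the assessed value y from the input information 11.
predicting_unit_110 = SimplePredictor(weights=[0.8, -0.3, 0.5], bias=-0.2)

# Hidden attribute predicting unit 120: trained with the same input feature amounts
# as the predicting unit 110 and the sensitive attribute Si as the answer; outputs
# the estimated value s' (a continuous confidence value between 0 and 1).
hidden_attribute_unit_120 = SimplePredictor(weights=[-0.6, 1.1, 0.2], bias=0.1)

x = [0.4, 0.7, 0.2]                             # input information 11
y = predicting_unit_110.predict(x)              # assessed value information 111
s_prime = hidden_attribute_unit_120.predict(x)  # estimated sensitive attribute information 121
```

Both units receive the same input information 11 that does not include the sensitive attribute information, and each outputs a continuous value of “0” to “1”.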
The quantizing unit 130 is a processing unit that calculates the assessment result by quantizing the assessed value y of the assessed value information 111, and outputs the assessment result as assessment result information 131, on the basis of the estimated sensitive attribute information 121 and adjustment parameter information 12.
The adjustment parameter information 12 is information relevant to a threshold value for quantizing the assessed value y, and in this Example, includes information of a standard assessment threshold value ThY, an assessment threshold value αp of the preferential attribute, and an assessment threshold value αnp of the non-preferential attribute. The standard assessment threshold value ThY is a threshold value that is used for quantizing the assessed value y when fairness is not considered, and is generally a value close to 0.5. The standard assessment threshold value ThY is obtained by a procedure of training the predictor of the predicting unit 110. The assessment threshold value αp of the preferential attribute is a threshold value that is used for quantizing the assessed value y of the preferential attribute group, and as a guide, is a value of ThY≤αp<1. The assessment threshold value αnp of the non-preferential attribute is a threshold value that is used for quantizing the assessed value y of the non-preferential attribute group, and as a guide, is a value of 0<αnp≤ThY. Examples of a method for calculating an optimum value of the assessment threshold value αp of the preferential attribute and the assessment threshold value αnp of the non-preferential attribute include Non-Patent Document 1 and the like.
[Expression 1]
α=αp×(1−s′)+αnp×s′ (1)
Here, αp and αnp are the assessment threshold values of the preferential attribute and the non-preferential attribute, respectively, and s′ is the estimated value of the sensitive attribute information. Note that this Example describes a calculation example of the assessment threshold value α in a case where Sensitive Attribute Information S=0 corresponds to the preferential attribute and the possibility of the non-preferential attribute increases as the estimated value s′ of the sensitive attribute information increases. As described above, the assessment threshold value α is set such that a result that is preferable for the user of the non-preferential attribute is more easily obtained, and thus, AI that performs unfair prediction can be counter-biased to suppress the unfair prediction, and a fairer information processing system can be provided.
Note that, in a case where there are a plurality of estimated values (s1′, s2′, . . . ) of the sensitive attribute information, the assessment threshold value α can be obtained as a linear sum represented in Expression (2). Here, the assessment threshold value α is clipped to be 0≤α≤1.
[Expression 2]
α=αp1×(1−s1′)+αnp1×s1′+αp2×(1−s2′)+αnp2×s2′+ . . . (2)
Here, αp1 and αp2, and αnp1 and αnp2 are the assessment threshold values of the preferential attribute and the non-preferential attribute, respectively.
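For reference, a minimal sketch of the threshold calculation of Expression (1) and Expression (2) and of the quantization performed by the quantizing unit 130 is shown below; the function names and the threshold values are illustrative assumptions.

```python
import numpy as np

def assessment_threshold(s_primes, alpha_p, alpha_np):
    """Expression (2): linear sum of the per-attribute threshold values weighted by
    the estimated values s_i' of the sensitive attribute information; reduces to
    Expression (1) for a single sensitive attribute. Clipped to 0 <= alpha <= 1."""
    alpha = sum(ap * (1.0 - s) + anp * s
                for s, ap, anp in zip(s_primes, alpha_p, alpha_np))
    return float(np.clip(alpha, 0.0, 1.0))

def quantize(y, alpha):
    """Quantizing unit 130: assessment result Y = 1 (finance loan available) when the
    assessed value y is greater than or equal to the threshold value alpha, else Y = 0."""
    return 1 if y >= alpha else 0

# Example with a single sensitive attribute (adjustment parameter information 12).
alpha = assessment_threshold(s_primes=[0.8], alpha_p=[0.6], alpha_np=[0.4])
assessment_result = quantize(y=0.45, alpha=alpha)  # assessment result information 131
```

With αnp smaller than αp, a user whose estimated value s′ is large (that is, likely of the non-preferential attribute) is quantized with a lower threshold, so that the finance loan approval result is more easily obtained.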
The log management unit 140 is a storage processing unit that stores state information relevant to the processing procedure performed until the information processing system 10 derives the assessment result information 131 from the input information 11, in a non-volatile storage device such as a hard disk or a volatile storage device such as a dynamic random access memory (DRAM). The state information that is stored includes the input information 11, the assessed value information 111, the estimated sensitive attribute information 121, and the assessment result information 131, and is used, for example, for visualizing the internal state information of the system to an owner of the information processing system according to this Example, such as a financial institution.
The state visualizing unit 150 reads out state log information 141 from the log management unit 140 to be output as state log information 151 to a user screen 300 of the system owner by further using constraint information 13. The constraint information 13 includes the attribute threshold value Ths for quantizing the estimated value si′ of the sensitive attribute information, and information of a target value, an upper limit value, and a lower limit value of a fairness index value F.
In the region 310 for displaying the internal state information, for example, an event ID that is an identifier of a finance loan assessment event in a financial institution, and the input information X, an assessment result Y before being adjusted, a predicted attribute Si′, and an assessment result Y* after being adjusted in the finance loan assessment event are displayed. The input information X is the input information 11. The assessment result Y before being adjusted is obtained by using the standard assessment threshold value ThY and is set to “1” in a case where the assessed value y is greater than or equal to the standard assessment threshold value ThY (that is, finance loan approval), and to “0” in a case where the assessed value y is less than the standard assessment threshold value ThY (that is, finance loan disapproval). The predicted attribute Si′ is obtained by using the attribute threshold value Ths and is set to “1” in a case where the estimated value si′ of the sensitive attribute information is greater than or equal to the attribute threshold value Ths (that is, the non-preferential attribute), and to “0” in a case where the estimated value si′ is less than the attribute threshold value Ths (that is, the preferential attribute). The assessment result Y* after being adjusted is the assessment result information 131.
In the region 320 for displaying the state of the fairness index value, a target value 321, an upper limit value 322, and a lower limit value 323 of the fairness index value F, and the transition of an estimated value of the fairness index value F with respect to the finance loan assessment events are displayed. An example of the fairness index value F is an index value of group fairness, and the ratio of the probability P[Y=1|S=1] of obtaining the finance loan approval assessment (Y=1) in the non-preferential attribute (S=1) group to the probability P[Y=1|S=0] of obtaining the finance loan approval assessment (Y=1) in the preferential attribute (S=0) group can be obtained as represented in Expression (3).
[Expression 3]
F=P[Y=1|S=1]/P[Y=1|S=0] (3)
For example, in a case where the sensitive attribute information is the “gender” S, the transition of the fairness index value F, that is, the ratio of the probability that a female user obtains the finance loan approval assessment to the probability that a male user obtains the finance loan approval assessment, is displayed from the past log stored in the log management unit 140. The internal state information of the past event ID that is represented by a white circle 324 is displayed in the region 310.
In this case, the target value 321 is generally set to F=1. The upper limit value 322 and the lower limit value 323 depend on the culture or the law of each country, but, on the basis of a fluctuation range of approximately 20%, the upper limit value 322 is a value such as 1.2, and the lower limit value 323 is a value such as 0.8. It is considered that the owner of the information processing system 10 (for example, a bank) suitably sets the constraint information 13, monitors the transition of the fairness index value F, and performs maintenance of the system in a case where the transition exceeds the upper limit value or falls below the lower limit value.
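For reference, a minimal sketch of estimating the fairness index value F of Expression (3) from logged assessment events, as monitored in the region 320, is shown below; the log entries and the function name are illustrative assumptions.

```python
def fairness_index(records):
    """Group-fairness index value F of Expression (3): the ratio of the approval
    probability P[Y=1 | S=1] in the non-preferential attribute group to the approval
    probability P[Y=1 | S=0] in the preferential attribute group, estimated from
    logged events. Each record is a (predicted attribute S', adjusted result Y*) pair."""
    non_pref = [y for s, y in records if s == 1]   # non-preferential attribute group
    pref = [y for s, y in records if s == 0]       # preferential attribute group
    if not non_pref or not pref or sum(pref) == 0:
        return None  # F is undefined without both groups and approvals in the denominator
    return (sum(non_pref) / len(non_pref)) / (sum(pref) / len(pref))

# Illustrative log entries stored by the log management unit 140.
log = [(1, 1), (1, 0), (0, 1), (0, 1), (1, 1), (0, 0)]
F = fairness_index(log)  # compared against the target (1.0), upper (1.2), and lower (0.8) values
```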
The above is the description of Example 1 of the information processing system 10. According to this Example, it is possible to attain a fair assessment result without requiring the sensitive individual information of the service user, and it is possible to reduce the psychological burden on the user of using the service. In addition, in the information processing system according to this Example, it is not necessary to input the sensitive individual information, and thus, it is possible to reduce the risk relevant to privacy of the sensitive individual information from the viewpoint of the system owner.
In the information processing system 10 described in Example 1, the service user of the non-preferential attribute easily obtains a uniformly preferable assessment result without consideration of the substantial influence of the sensitive attribute information on the assessment result of the AI. In Example 2, an information processing system 20 that performs adjustment so as to obtain an assessment result from which the influence of the sensitive attribute information is removed is described. Accordingly, it is possible to attain a fair information processing system that derives an assessment result from which the substantial influence of the sensitive attribute information on the assessment result of the AI is removed.
In addition, threshold value information, an adjustment parameter, and a constraint condition are set in advance and stored in a storing unit (not illustrated). When executing the information processing system 20, the threshold value information, the adjustment parameter, and the constraint condition are used by being suitably read out from the storing unit.
The reference predicting unit 210 is a prediction processing unit that simulates the behavior of the predicting unit 110, and outputs, as reference assessed value information 211, a reference assessed value y′ that predicts the assessed value y of the predicting unit 110 by using the input information 11 and the estimated sensitive attribute information 121 as input.
The reference predicting analyzing unit 220 is a processing unit that analyzes the basis of the reference assessed value y′ output by the reference predicting unit 210, and outputs contribution degree information 221 of each of the input feature amounts with respect to the output value of the reference predicting unit 210 (that is, the reference assessed value y′), by using the input information 11 and the estimated sensitive attribute information 121 as input.
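The Example does not specify the analysis method used by the reference predicting analyzing unit 220. As one possible approach, the following sketch approximates the contribution degree of each input feature amount by a simple baseline-substitution (occlusion) attribution against the reference predicting unit 210; the predictor interface (a SimplePredictor-style object from the earlier sketch), the baseline choice, and the function name are assumptions for illustration only.

```python
import numpy as np

def contribution_degrees(reference_predictor, x_with_s, baseline):
    """One possible attribution scheme (an assumption; the Example does not specify
    the analysis method): the contribution degree of each input feature amount is
    approximated by the drop in the reference assessed value y' when that feature
    is replaced by a baseline value (for example, the training-data mean)."""
    x_with_s = np.asarray(x_with_s, dtype=float)
    y_ref = reference_predictor.predict(x_with_s)  # reference assessed value y'
    contributions = []
    for i in range(len(x_with_s)):
        x_masked = x_with_s.copy()
        x_masked[i] = baseline[i]
        contributions.append(y_ref - reference_predictor.predict(x_masked))
    return y_ref, contributions  # contribution degree information 221

# When the last element of x_with_s is the estimated value s' of the sensitive attribute
# information, the last contribution corresponds to the contribution degree Cs.
```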
The attribute influence adjusting unit 260 is a processing unit that calculates an adjusted assessed value y* adjusted such that the influence of the sensitive attribute information is removed from the assessed value y of the predicting unit 110, and outputs it as adjusted assessed value information 261, by using the assessed value information 111, the reference assessed value information 211, the contribution degree information 221 of each of the input feature amounts, and threshold value information 21.
First, as a processing step S810, initialization processing is performed. Specifically, the assessed value y is set to the adjusted assessed value y* (that is, y*=y), and “0” is set to an attribute influence adjustment flag γ (that is, γ=0).
Next, as a processing step S820, similarity degree calculation processing of the assessed value is performed. Specifically, a distance L between the assessed value y and the reference assessed value y′ is introduced as an index of the similarity degree, and the absolute value of the difference between the assessed value y and the reference assessed value y′ is obtained as the distance L (that is, L=|y−y′|).
Next, as a processing step S830, with reference to the distance threshold value ThL, in a case where the distance L is less than the distance threshold value ThL, it is determined that the assessed value y and the reference assessed value y′ are sufficiently similar to each other, and the process proceeds to processing step S840; otherwise, the adjustment processing is ended. The distance threshold value ThL is a parameter that is given as the threshold value information 21.
In the processing step S840, assessed value adjustment processing is performed. Specifically, a lower limit value of the adjusted assessed value y* is set to “0”, an upper limit value thereof is set to “1”, and a value obtained by subtracting the contribution degree Cs of the sensitive attribute information from the assessed value y is set as shown in Expression (4).
[Expression 4]
y*=max(min(y−Cs,1),0) (4)
In addition, in the processing step S840, “1” is set to the attribute influence adjustment flag γ (that is, γ=1). Here, the lower limit value and the upper limit value of the adjusted assessed value y* are set to “0” and “1”, respectively, on the premise that, in this Example, the predicting unit 110 is configured by a deep neural network and outputs, as the assessed value y, a value normalized to the range of “0” to “1” by a sigmoid function. The above is a specific example of the adjusting section of the attribute influence adjusting unit 260.
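For reference, the processing steps S810 to S840 of the adjusting section of the attribute influence adjusting unit 260 can be summarized in the following sketch, which follows Expression (4); the function name and the example values are illustrative assumptions.

```python
def adjust_attribute_influence(y, y_ref, c_s, th_L):
    """Processing steps S810 to S840 of the attribute influence adjusting unit 260.
    y: assessed value, y_ref: reference assessed value y', c_s: contribution degree
    Cs of the sensitive attribute information, th_L: distance threshold value ThL."""
    # S810: initialization
    y_star = y    # adjusted assessed value y*
    gamma = 0     # attribute influence adjustment flag

    # S820: similarity degree of the assessed values (distance L = |y - y'|)
    distance = abs(y - y_ref)

    # S830: adjust only when the assessed value and the reference assessed value
    # are sufficiently similar (L < ThL); otherwise the adjustment is not performed
    if distance < th_L:
        # S840: Expression (4) -- subtract the contribution degree Cs of the sensitive
        # attribute information and clip the result to the range "0" to "1"
        y_star = max(min(y - c_s, 1.0), 0.0)
        gamma = 1

    return y_star, gamma, distance  # adjusted assessed value information 261

y_star, gamma, L = adjust_attribute_influence(y=0.55, y_ref=0.57, c_s=0.12, th_L=0.05)
```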
The quantizing unit 230 is a processing unit that calculates the assessment result by quantizing the adjusted assessed value y* of the adjusted assessed value information 261 and outputs the assessment result as assessment result information 231, by using the estimated sensitive attribute information 121 and adjustment parameter information 22.
The adjustment parameter information 22 is information relevant to a threshold value for quantizing the adjusted assessed value y*, and includes a threshold value adjustment factor βp of the preferential attribute and a threshold value adjustment factor βnp of the non-preferential attribute, in addition to the standard assessment threshold value ThY, the assessment threshold value αp of the preferential attribute, and the assessment threshold value αnp of the non-preferential attribute, which are used in the information processing system 10.
The threshold value adjustment factor βp of the preferential attribute is a parameter for adjusting how much a bias effect due to the assessment threshold value αp of the preferential attribute is removed in a case where the influence of the sensitive attribute information is removed from the assessed value y by the attribute influence adjusting unit 260, and has a value of “0” to “1” (that is, 0≤βp≤1). The threshold value adjustment factor βnp of the non-preferential attribute is a parameter for adjusting how much a bias effect due to the assessment threshold value αnp of the non-preferential attribute is removed in a case where the influence of the sensitive attribute information is removed from the assessed value y by the attribute influence adjusting unit 260, and has a value of “0” to “1” (that is, 0≤βnp≤1). As described above, the threshold value adjustment factor βp of the preferential attribute and the threshold value adjustment factor βnp of the non-preferential attribute function as a parameter for adjusting the effect of the adjusting section by adjusting the assessment threshold value α, which is used in the information processing system 10 of Example 1, and the adjusting section of the attribute influence adjusting unit 260.
[Expression 5]
α=αp′×(1−s′)+αnp′×s′ (5)
Here, s′ is the estimated value of the sensitive attribute information. In addition, αp′ and αnp′ are an adjusted assessment threshold value of the preferential attribute and an adjusted assessment threshold value of the non-preferential attribute, respectively, which can be obtained by Expression (6) and Expression (7) using the standard assessment threshold value ThY, the assessment threshold value αp of the preferential attribute, the assessment threshold value αnp of the non-preferential attribute, the threshold value adjustment factor βp of the preferential attribute, the threshold value adjustment factor βnp of the non-preferential attribute, and the attribute influence adjustment flag γ.
[Expression 6]
αp′=αp+(ThY−αp)×βp×γ (6)
[Expression 7]
αnp′=αnp+(ThY−αnp)×βnp×γ (7)
Note that, in a case where there are a plurality of estimated values (s1′, s2′, . . . ) of the sensitive attribute information, the assessment threshold value α represented in Expression (5) can be obtained as a linear sum represented in Expression (8). Here, the assessment threshold value α is clipped to be 0≤α≤1.
[Expression 8]
α=αp1′×(1−s1′)+αnp1′×s1′+αp2′×(1−s2′)+αnp2′×s2′+ . . . (8)
In addition, an adjusted assessment threshold value αpi′ of the preferential attribute and an adjusted assessment threshold value αnpi′ of the non-preferential attribute of the sensitive attribute information Si can be obtained by Expression (9) and Expression (10).
[Expression 9]
αpi′=αpi+(ThY−αpi)×βpi×γ (9)
[Expression 10]
αnpi′=αnpi+(ThY−αnpi)×βnpi×γ (10)
Here, αpi and αnpi are the assessment threshold values of the preferential attribute and the non-preferential attribute of the sensitive attribute information Si, respectively, and βpi and βnpi are the threshold value adjustment factors of the preferential attribute and the non-preferential attribute of the sensitive attribute information Si, respectively.
Here, the attribute influence adjustment flag γ is flag information indicating whether or not the adjustment processing of the attribute influence adjusting unit 260 has been performed, and is included in the adjusted assessed value information 261. A case where both the threshold value adjustment factor βp of the preferential attribute and the threshold value adjustment factor βnp of the non-preferential attribute are “1” indicates that the threshold value adjustment processing relevant to the sensitive attribute information is not performed in the quantizing unit 230, and thus, it is preferable that these values are set close to “1” for an application in which it can be expected that the adjusted assessed value y* is obtained by sufficiently removing the influence of the sensitive attribute information from the assessed value y. As described above, the threshold value adjustment factor βp of the preferential attribute and the threshold value adjustment factor βnp of the non-preferential attribute can also be expressed as parameters expressing the degree of the sensitive attribute information Si included in the adjusted assessed value y*. Note that this Example describes a calculation example of the assessment threshold value α in a case where Sensitive Attribute Information Si=0 corresponds to the preferential attribute and the possibility of the non-preferential attribute increases as the estimated value si′ of the sensitive attribute information increases.
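For reference, a minimal sketch of the adjusted threshold calculation of Expressions (5) to (10) and of the quantization performed by the quantizing unit 230 is shown below; the function name and the parameter values are illustrative assumptions.

```python
import numpy as np

def adjusted_assessment_threshold(s_primes, alpha_p, alpha_np, beta_p, beta_np, th_Y, gamma):
    """Expressions (8) to (10): per-attribute adjusted threshold values alpha_pi' and
    alpha_npi' combined as a linear sum, clipped to 0 <= alpha <= 1. With a single
    sensitive attribute this reduces to Expressions (5) to (7)."""
    alpha = 0.0
    for s, ap, anp, bp, bnp in zip(s_primes, alpha_p, alpha_np, beta_p, beta_np):
        ap_adj = ap + (th_Y - ap) * bp * gamma       # Expression (9)
        anp_adj = anp + (th_Y - anp) * bnp * gamma   # Expression (10)
        alpha += ap_adj * (1.0 - s) + anp_adj * s    # Expression (8)
    return float(np.clip(alpha, 0.0, 1.0))

# Quantizing unit 230: quantize the adjusted assessed value y* with the adjusted threshold.
alpha = adjusted_assessment_threshold(
    s_primes=[0.8], alpha_p=[0.6], alpha_np=[0.4],
    beta_p=[0.9], beta_np=[0.9], th_Y=0.5, gamma=1)
y_star = 0.46
assessment_result = 1 if y_star >= alpha else 0   # assessment result information 231
```

When γ=1 and the adjustment factors are close to “1”, the adjusted thresholds approach the standard assessment threshold value ThY, so that the attribute-dependent bias of the thresholds is largely removed, as described above.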
The log management unit 240 is a storage processing unit that stores state information relevant to the processing procedure performed until the information processing system 20 derives the assessment result information 231 from the input information 11, in a non-volatile storage device or a volatile storage device. The state information that is stored includes the input information 11, the assessed value information 111, the estimated sensitive attribute information 121, the assessment result information 231, the contribution degree information 221 of each of the input feature amounts, the adjusted assessed value information 261, and validity information of the contribution degree Cs of the sensitive attribute information obtained by the attribute influence adjusting unit 260 (that is, the distance L between the assessed value y and the reference assessed value y′). Here, the assessed value information 111, the contribution degree information 221 of each of the input feature amounts, the adjusted assessed value information 261, and the validity information of the contribution degree Cs are included in internal state information 262 that is obtained from the attribute influence adjusting unit 260.
The state visualizing unit 250 reads out state log information 241 from the log management unit 240 to be output as state log information 251 to the user screen 1000 of the system owner by further using the constraint information 13.
In the region 910 for displaying the internal state information by designating the event ID, an attribute influence adjusted assessed value that is adjusted to remove the attribute influence is displayed, in addition to the event ID, the input information, the assessment result before being adjusted, the predicted attribute, and the assessment result after being adjusted, which are displayed on the user screen 300 of the information processing system 10. The attribute influence adjusted assessed value is the adjusted assessed value included in the adjusted assessed value information 261.
In the region 920 for displaying the basis description of the assessment result, an assessment result 921 and an attribute contribution degree 922 thereof, and authenticity 923 of the contribution degree analysis are displayed for each of the event IDs. The assessment result 921 is information of the assessment result Y* after being adjusted, and the assessment result of finance loan approval or disapproval is displayed. Here, the finance loan approval (that is, Y*=1) is expressed as permission, and the finance loan disapproval (that is, Y*=0) is expressed as rejection. The attribute contribution degree 922 is information of the contribution degree information 221 of each of the input feature amounts, and not only the contribution degree of each feature amount included in the input information 11 but also information of the contribution degree of the sensitive attribute information with respect to the assessed value y is displayed. Here, the feature amount 4 corresponds to the contribution degree of the sensitive attribute information. The authenticity 923 is information indicating the validity of the evaluation of the attribute contribution degree 922, and is equivalent to the validity information of the contribution degree Cs of the sensitive attribute information (that is, the distance L between the assessed value y and the reference assessed value y′). Here, in the authenticity 923, when the distance L is less than the distance threshold value ThL, the authenticity can be simply expressed as “high”, and when the distance L is greater than the distance threshold value ThL, the authenticity can be simply expressed as “low”.
As described above, the information processing system 20 is characterized in that not only is the attribute influence removed from the assessed value y output by the predicting unit 110, but also the degree of influence (that is, the contribution degree) of the sensitive attribute information on the assessed value y can be quantified and visualized, and thus, can be extremely effectively utilized in order to improve the fairness of the AI determination not only when operating the service but also when developing the AI.
The above is the description of the information processing system 20 according to Example 2 of the present invention. Accordingly, it is possible to attain the fair information processing system 20 considering a substantial influence of the sensitive attribute information on the assessment result of the AI.
In addition, the predicting unit 110 of the information processing system 10 and the information processing system 20 described above is not limited to the label prediction type AI for assessing whether or not to provide a finance loan, and can be applied to AI that outputs an assessment result according to the service purpose by receiving the individual information of the user as input. Even in a case where the predicting unit 110 is AI whose inside is a black box created by a third party, it is possible to configure the information processing system 10 or the information processing system 20 by introducing that AI.
Number | Date | Country | Kind
---|---|---|---
2020-053667 | Mar 2020 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/002282 | 1/22/2021 | WO |