The present invention relates to a system.
Patent Document 1 discloses multi-agent reinforcement learning.
An autonomous distributed system, which is one type of multi-agent system, is known. In an autonomous distributed system, the purpose is for each of a plurality of agents to learn independently and autonomously, and to acquire a cooperative behavior. A case where the respective agents are in exactly the same state (in structure or parameters) is referred to as being homogeneous, and a case where the agents move from the homogeneous state into states different from each other (heterogeneous) is referred to as functional differentiation.

As an application example of the autonomous distributed system, there is the tracking problem, in which a Target and a plurality of Hunters exist and the plurality of Hunters learn how to capture the Target; that is, the plurality of Hunter agents each learn their own roles so as to capture the Target in cooperation. In the autonomous distributed system, a group of agents with the same performance (homo) differentiates its functions by learning, and each agent takes on a role (hetero). For example, in the tracking problem, a division arises between an agent that tracks the Target in a straight line and an agent that goes around it. By using such a technology, it is possible, for example, to perform learning autonomously in an environmentally adaptive manner with the same algorithm, or to perform learning autonomously in a distributed manner by the multi-agents. In the present embodiment, the autonomous distributed system is applied to a network.
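The functional differentiation described above can be illustrated with a minimal sketch: two Hunter agents start homogeneous (identical structure and parameters) and, through independent learning under a shared reward, settle into different roles. The code below is an illustrative simplification (a two-action bandit with the hypothetical roles "chase" and "flank"), not a description of the claimed system.

```python
import random

random.seed(0)

class Hunter:
    """A homogeneous agent: identical structure and initial parameters."""
    def __init__(self):
        self.values = {"chase": 0.0, "flank": 0.0}  # action-value estimates

    def act(self, epsilon=0.1):
        # Epsilon-greedy: occasionally explore a random role.
        if random.random() < epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.1):
        self.values[action] += lr * (reward - self.values[action])

hunters = [Hunter(), Hunter()]  # start homogeneous ("homo")

for _ in range(500):
    actions = [h.act() for h in hunters]
    # The Target is captured only when the roles differ: one Hunter
    # chases in a straight line while the other goes around.
    reward = 1.0 if actions[0] != actions[1] else 0.0
    for h, a in zip(hunters, actions):
        h.learn(a, reward)

roles = [max(h.values, key=h.values.get) for h in hunters]
print(roles)  # the roles differ: functional differentiation has occurred
```

Because the shared reward is highest when the roles differ, exploration breaks the initial symmetry, and each agent's greedy role then becomes self-reinforcing; this is the homo-to-hetero transition described above.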
Hereinafter, the present invention will be described through embodiments of the invention, but the following embodiments do not limit the invention according to the claims. In addition, not all of the combinations of features described in the embodiments are essential to the solution of the invention.
A first NW (NetWork) layer is configured by the upper computing 110, a second NW layer which is lower than the first NW layer is configured by the plurality of intermediate computings 120, and a third NW layer which is lower than the second NW layer is configured by the plurality of lower computings 130.
The communication network 100 to which the autonomous distributed system is applied may be any communication network as long as it has a layer structure. For example, the communication network 100 is a cloud network.
The cloud computing 310 may be an example of the upper computing 110. The fog computing 320 may be an example of the intermediate computing 120. The edge computing 330 may be an example of the lower computing 130.
In the cloud network 300, a first NW layer is configured by the cloud computing 310, a second NW layer is configured by the plurality of fog computings 320, and a third NW layer is configured by the plurality of edge computings 330.
Each of the plurality of edge computings 330 communicates with one or more IoT devices 400 via a mobile communication system. Each of the plurality of IoT devices 400 may transmit information to at least any of the plurality of edge computings 330, via a wireless base station, a Wi-Fi (registered trademark) access point, and the like.
The IoT device 400 may be any device as long as it is capable of acquiring and transmitting any information. The IoT device 400 includes, for example, various types of sensors. An example of the information that is transmitted by the IoT device 400 includes image data (a still image and a moving image), sound data, infrared data, location data, object sensing data, distance data, weather data, temperature data, humidity data, or the like; however, these are merely examples and any information may be used.
The mobile communication system is, for example, a 5G (5th Generation) communication system. The mobile communication system may be an LTE (Long Term Evolution) communication system. The mobile communication system may be a 3G (3rd Generation) communication system. The mobile communication system may be a 6G (6th Generation) or later communication system.
The autonomous distributed system 200 is applied to the communication network 100. For example, the upper agent 210 is arranged in the upper computing 110, each of the plurality of intermediate agents 220 is arranged in each of the plurality of intermediate computings 120, and each of the plurality of lower agents 230 is arranged in each of the plurality of lower computings 130. A plurality of intermediate agents 220 may be arranged in one intermediate computing 120. A plurality of lower agents 230 may be arranged in one lower computing 130.
The autonomous distributed system 200 is applied, for example, to the cloud network 300. For example, the upper agent 210 is arranged in the cloud computing 310, each of the plurality of intermediate agents 220 is arranged in each of the plurality of fog computings 320, and each of the plurality of lower agents 230 is arranged in each of the plurality of edge computings 330. A plurality of intermediate agents 220 may be arranged in one fog computing 320. A plurality of lower agents 230 may be arranged in one edge computing 330.
When the autonomous distributed system 200 has two layers, for example, the upper agent 210 is arranged in the cloud computing 310, and each of the plurality of lower agents 230 is arranged in each of the plurality of edge computings 330. Alternatively, for example, the upper agent 210 is arranged in any of the plurality of fog computings 320, and each of the plurality of lower agents 230 is arranged in each of the plurality of edge computings 330. It should be noted that in this case, the cloud network 300 may be configured by the two layers without including the plurality of fog computings 320.
The lower agent 230 collects the information. The lower agent 230 collects, for example, the information transmitted by the IoT device 400. The lower agent 230 may collect the information from the IoT device 400 by a mobile communication. The lower agent 230 may collect the information transmitted by any device other than the IoT device 400. The lower agent 230 may collect the information from any device other than the IoT device 400 via the mobile communication.
The lower agent 230 uses the collected information to execute learning in cooperation with another lower agent 230. The lower agent 230 may execute the learning according to a reward registered in advance. The lower agent 230 may execute the learning according to knowledge registered in advance. The lower agent 230 may use a learning result to execute various types of processing. For example, the lower agent 230 uses the learning result to select, from among a plurality of pieces of collected information, the information that is to be transmitted to the intermediate agent 220, and transmits the selected information to the intermediate agent 220. For example, the lower agent 230 uses the learning result to generate summary information obtained by summarizing the plurality of pieces of collected information, and transmits the summary information to the intermediate agent 220.
For example, when the purpose is to transmit, to an upper layer, only the useful information from among the large amount of information transmitted by the plurality of IoT devices 400, a degree of usefulness of the information that the lower agent 230 collects from the IoT device 400 and transmits to the intermediate agent 220 or the upper agent 210 is registered as the reward. Each of the plurality of lower agents 230 cooperates with the other lower agents 230 and autonomously advances the learning so as to maximize the reward. The plurality of lower agents 230 collect the information from one or more different IoT devices 400, and select the information to be transmitted to the upper layer from the collected information by criteria different from each other, or generate the summary information to be transmitted to the upper layer from the collected information; the system can thus be constructed such that only the information whose degree of usefulness is high overall is transmitted to the upper layer.
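As a minimal sketch of the selection and summarization described above (the data, field names, and threshold value are illustrative assumptions; in the embodiment the selection criteria would be acquired through learning):

```python
# Each piece of collected information carries a usefulness score standing
# in for the reward registered in advance (the values are illustrative).
collected = [
    {"type": "image",    "usefulness": 0.9},
    {"type": "image",    "usefulness": 0.2},
    {"type": "infrared", "usefulness": 0.8},
    {"type": "location", "usefulness": 0.1},
]

def select_for_upper_layer(items, threshold):
    """Select only the information whose degree of usefulness is high;
    the threshold is what a lower agent would tune through learning."""
    return [i for i in items if i["usefulness"] >= threshold]

def summarize(items):
    """Generate summary information (per-type counts) instead of raw data."""
    summary = {}
    for i in items:
        summary[i["type"]] = summary.get(i["type"], 0) + 1
    return summary

selected = select_for_upper_layer(collected, threshold=0.5)
summary = summarize(collected)
```

Only the high-usefulness items (or the compact summary) travel to the upper layer, which is the information reduction this paragraph describes.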
The lower agent 230 may execute the processing with a comparatively high real-time performance. For example, the lower agent 230 uses the learning result to perform the selection processing or the generation processing of the summary information on the information collected from one or more IoT devices 400 for each predetermined time, such as every one minute or every three minutes, and transmits the result to the upper layer. This makes it possible to adjust the information to be transmitted to the upper layer in real time. For example, in a case where a plurality of lower agents 230 are arranged in an edge computing 330 that handles information which is mostly images during the day and infrared data at night, many of the lower agents 230 become responsible for the images during the day, and for the infrared data at night. In this way, the lower agent 230 is able to adapt dynamically to the environment, and is thus able to adapt, for example, even to a case where the IoT device 400 that is an information collection target is changed, or a case where the type of information collected by such an IoT device 400 is changed. The lower agent 230 may execute the processing in real time in this manner, but may also transmit statistical information or summary information after a time delay. The lower agent 230 may be able to introduce a delay intentionally.
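The dynamic adaptation to a changing information mix (images by day, infrared by night) can be sketched as follows; the proportional-allocation heuristic stands in for the learned behavior and is an assumption of this illustration:

```python
from collections import Counter

def assign_responsibilities(window_items, n_agents=4):
    """Allocate agents to data types in proportion to the mix observed
    in the latest time window (an illustrative heuristic, not the
    embodiment's learning method)."""
    counts = Counter(i["type"] for i in window_items)
    total = sum(counts.values())
    # Every observed type keeps at least one responsible agent.
    return {t: max(1, round(n_agents * c / total)) for t, c in counts.items()}

daytime = [{"type": "image"}] * 9 + [{"type": "infrared"}] * 1
night   = [{"type": "image"}] * 1 + [{"type": "infrared"}] * 9

day_plan = assign_responsibilities(daytime)
night_plan = assign_responsibilities(night)
```

Re-running the allocation for each predetermined time window yields the day/night shift of responsibilities described above.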
The intermediate agent 220 collects the information. The intermediate agent 220 may collect the information from the lower agent 230. The intermediate agent 220 collects, for example, from the lower agent 230, the information collected from the IoT device 400 by the lower agent 230. For example, the intermediate agent 220 collects, from the lower agent 230, the information selected by the lower agent 230, from among the plurality of pieces of information collected from the IoT device 400 by the lower agent 230. For example, the intermediate agent 220 collects, from the lower agent 230, the summary information obtained by summarizing the plurality of pieces of information collected from the IoT device 400 by the lower agent 230.
The intermediate agent 220 uses the collected information to execute the learning in cooperation with another intermediate agent 220. The intermediate agent 220 may use the learning result to execute various types of processing. For example, the intermediate agent 220 uses the learning result to select, from among the plurality of pieces of collected information, the information that is to be transmitted to the upper agent 210, and transmits the selected information to the upper agent 210. For example, the intermediate agent 220 uses the learning result to generate summary information obtained by summarizing the plurality of pieces of collected information, and transmits the summary information to the upper agent 210.
The intermediate agent 220 may execute the processing with a lower real-time performance than that of the lower agent 230. For example, the intermediate agent 220 executes the processing using the information collected from one or more lower agents 230 for each predetermined period, such as every hour or every day. The intermediate agent 220 executes, for example, the learning using the information of the predetermined period. The intermediate agent 220 may cooperate with another intermediate agent 220 to adjust the amount of information transmitted from the plurality of lower agents 230 to the upper layer, or to adjust the amount of information transmitted from the plurality of intermediate agents 220 to the upper agent 210. The intermediate agent 220 may execute the learning and the processing so as to ensure robustness in the autonomous distributed system 200. The intermediate agent 220 may take the role of a Spinal Cord in the autonomous distributed system 200. When information is received from the lower agent 230, the intermediate agent 220 may determine whether to transmit, to the upper agent 210, the information or information generated based on it, or to make a response or transmit an instruction to the lower agent 230.
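The Spinal Cord role, deciding on receipt of information whether to forward it upward or to respond to the lower agent directly, might look like the following sketch (the urgency field, the threshold, and the returned instruction are hypothetical, not part of the embodiment):

```python
def handle_from_lower(info, urgency_threshold=0.8):
    """Decide, on receiving information from a lower agent, whether to
    forward it to the upper agent or to respond/instruct the lower agent
    directly (a reflex, like a spinal cord)."""
    if info.get("urgency", 0.0) >= urgency_threshold:
        # Reflex path: act immediately without involving the upper agent.
        return ("respond_to_lower", {"instruction": "act_now"})
    # Deliberative path: pass (possibly summarized) information upward.
    return ("forward_to_upper", {"summary": info.get("type")})

assert handle_from_lower({"type": "fire", "urgency": 0.95})[0] == "respond_to_lower"
assert handle_from_lower({"type": "weather", "urgency": 0.1})[0] == "forward_to_upper"
```

In the embodiment, the branch condition itself would be acquired through the intermediate agent's learning rather than fixed as a threshold.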
The upper agent 210 collects the information. The upper agent 210 may collect the information from the intermediate agent 220. For example, the upper agent 210 collects, from the intermediate agent 220, the information that the intermediate agent 220 has collected from the lower agent 230. For example, the upper agent 210 collects, from the intermediate agent 220, the information selected by the intermediate agent 220 from among the plurality of pieces of information collected from the lower agent 230 by the intermediate agent 220. For example, the upper agent 210 collects, from the intermediate agent 220, the summary information obtained by summarizing the plurality of pieces of information collected from the lower agent 230 by the intermediate agent 220. The upper agent 210 may collect the information from the lower agent 230.
The upper agent 210 executes the processing using the collected information. The upper agent 210 executes, for example, the learning using the collected information. The upper agent 210 uses, for example, the learning result to execute various types of processing.
The upper agent 210 may execute the processing with a lower real-time performance than that of the intermediate agent 220. For example, the upper agent 210 executes the processing using the information collected for each predetermined period, such as one week, one month, or one year. The upper agent 210 executes, for example, the learning using the information of the predetermined period.
The upper agent 210 may execute processing whose purpose is to stabilize the entire autonomous distributed system 200. For example, the upper agent 210 may execute a factor analysis or the like of the overall communication load of the communication network 100 or the cloud network 300, based on the collected information. From the content of the collected information or the collection state of the information, the upper agent 210 may specify a location in the network where the communication load is high; and, for example, adjust the amount of information transmitted by the IoT device 400, the lower agent 230, and the intermediate agent 220, or adjust a propagation path, so as to reduce the communication load.
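A minimal sketch of specifying high-load locations and planning an adjustment of the transmitted amount of information (the location names, units, and capacity value are illustrative assumptions):

```python
# Reported traffic per network location (units illustrative, e.g. MB/period).
load_report = {"edge-1": 120, "edge-2": 950, "fog-1": 400, "fog-2": 980}

def plan_throttling(report, capacity=800):
    """Specify the locations whose communication load is high and plan an
    instruction to reduce the amount of information transmitted there."""
    return {loc: {"reduce_to": capacity}
            for loc, load in report.items() if load > capacity}

plan = plan_throttling(load_report)
```

The resulting plan would be delivered as instruction information to the agents at the over-capacity locations; adjusting a propagation path instead would follow the same pattern.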
The amount of information handled in the so-called cloud network is increasing dramatically. In the future, as 5G mobile communication systems become more widespread and the number of IoT devices increases, the amount of information is expected to increase further. The usual countermeasure against such an increase in the amount of information is to enhance the communication function of the network or the performance of the network equipment; however, this alone cannot handle the increase. In contrast, by applying the autonomous distributed system 200 according to the present embodiment to the network, it is possible to layer the processing. With the autonomous distributed system 200, for example, the lower agent 230 executes the processing with a high real-time performance, sorting out the useful information and transmitting it to the upper layer; the intermediate agent 220 performs information processing with a somewhat wider span using the carefully selected information, and sorts the information out further; and the upper agent 210 performs information processing with a still wider span using the more carefully selected information, achieving the stabilization of the entire network. It is thus possible to contribute to the construction of an environment in which a large amount of information is handled appropriately.
The registration unit 232 executes various types of registration. The registration unit 232 registers, for example, training information that is used for the learning by the lower agent 230. The registration unit 232 stores the registered training information in the storage unit 231. The training information may include a reward. The training information may include knowledge. The registration unit 232 registers the training information, for example, by receiving an input by an operator or the like of the autonomous distributed system 200. The registration unit 232 registers, for example, the training information received from the upper agent 210. The registration unit 232 registers, for example, the training information received from the intermediate agent 220. The registration unit 232 may be an example of a third registration unit.
The information collection unit 233 collects the information. The information collection unit 233 may collect the information transmitted by any device. The information collection unit 233 collects, for example, the information transmitted by the IoT device 400. The information collection unit 233 may collect the information from the IoT device 400 by the mobile communication. The information collection unit 233 stores the collected information in the storage unit 231. The information collection unit 233 may be an example of a third information collection unit.
The learning execution unit 234 uses the information collected by the information collection unit 233, to execute the learning. The learning execution unit 234 may use the information collected by the information collection unit 233 to execute the learning in cooperation with another lower agent 230.
The learning execution unit 234 stores the learning result in the storage unit 231. The learning result may include a model generated by the learning execution unit 234 through the learning. The learning result may include a neural network generated by the learning execution unit 234 through the learning. The learning execution unit 234 may be an example of a third learning execution unit.
The learning execution unit 234 may use any learning method. For example, the learning execution unit 234 first executes pre-learning (Pre-training) by a simulation, and then updates the model, the neural network, and the like with the information collected by the information collection unit 233. For the pre-training, real data collected by the information collection unit 233 in the past may be used. For the pre-training, synthetic data may be used. In an environment where trial and error is not permitted during learning, a method that performs rule-based learning rather than reinforcement learning and learns only the parameters of the rules is effective. The learning method may be an ANN (Artificial Neural Network), a DNN (Deep Neural Network), heuristics, or the like.
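The rule-based approach, in which a fixed rule is kept and only its parameter is learned so that no trial and error occurs in the live environment, can be sketched as follows (the rule, a usefulness threshold fitted as a mean, is an illustrative assumption):

```python
def fit_threshold(scores):
    """Learn the single parameter of a rule-based selector from data;
    no trial and error in the live environment is needed."""
    return sum(scores) / len(scores)

# Pre-training on simulated (synthetic) data.
simulated = [0.2, 0.4, 0.6, 0.8]
threshold = fit_threshold(simulated)

# Update with information actually collected in operation.
collected = [0.7, 0.9, 0.8]
threshold = fit_threshold(simulated + collected)

def rule(score):
    # The rule itself is fixed; only its parameter was learned.
    return score >= threshold
```

The same two-stage pattern (pre-train by simulation, then update from collected information) applies equally when the model is a neural network rather than a single parameter.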
The processing execution unit 235 executes various types of processing. The processing execution unit 235 may use the learning result obtained by the learning execution unit 234, to execute the processing on the information collected by the information collection unit 233. The processing execution unit 235 may use the plurality of pieces of information collected by the information collection unit 233, to generate the information that is to be transmitted to the intermediate agent 220.
For example, the processing execution unit 235 uses the learning result obtained by the learning execution unit 234, to generate transmission information including the information selected from the plurality of pieces of information collected by the information collection unit 233. For example, the processing execution unit 235 uses the learning result obtained by the learning execution unit 234, to generate summary information obtained by summarizing the plurality of pieces of information collected by the information collection unit 233. The processing execution unit 235 may be an example of a third processing execution unit.
The information transmission unit 236 transmits the information. For example, the information transmission unit 236 transmits, to the intermediate agent 220, the information collected by the information collection unit 233. For example, the information transmission unit 236 transmits, to the intermediate agent 220, the information generated by the processing execution unit 235. The information transmission unit 236 may be an example of a third information transmission unit.
The registration unit 222 executes various types of registration. The registration unit 222 registers, for example, training information that is used for the learning by the intermediate agent 220. The registration unit 222 stores the registered training information in the storage unit 221. The training information may include a reward. The training information may include knowledge. The registration unit 222 registers the training information, for example, by receiving an input by an operator or the like of the autonomous distributed system 200. The registration unit 222 registers, for example, the training information received from the upper agent 210. The registration unit 222 may be an example of a second registration unit.
The information collection unit 223 collects the information. The information collection unit 223 may collect the information from the lower agent 230. The information collection unit 223 collects, for example, the information transmitted by the information transmission unit 236 of the lower agent 230. The information collection unit 223 stores the collected information in the storage unit 221. The information collection unit 223 may be an example of a second information collection unit.
The learning execution unit 224 uses the information collected by the information collection unit 223, to execute the learning. The learning execution unit 224 may use the information collected by the information collection unit 223, to execute the learning in cooperation with another intermediate agent 220.
The learning execution unit 224 stores the learning result in the storage unit 221. The learning result may include a model generated by the learning execution unit 224 through the learning. The learning result may include a neural network generated by the learning execution unit 224 through the learning. The learning execution unit 224 may be an example of a second learning execution unit.
The learning execution unit 224 may use any learning method. For example, the learning execution unit 224 first executes pre-training by a simulation, and then updates the model, the neural network, and the like with the information collected by the information collection unit 223. For the pre-training, real data collected by the information collection unit 223 in the past may be used. For the pre-training, synthetic data may be used. In an environment where trial and error is not permitted during learning, a method that performs rule-based learning rather than reinforcement learning and learns only the parameters of the rules is effective. The learning method may be an ANN, a DNN, heuristics, or the like.
The processing execution unit 225 executes various types of processing. The processing execution unit 225 may use the learning result obtained by the learning execution unit 224, to execute the processing on the information collected by the information collection unit 223. The processing execution unit 225 may use the plurality of pieces of information collected by the information collection unit 223, to generate the information that is to be transmitted to the upper agent 210.
For example, the processing execution unit 225 uses the learning result obtained by the learning execution unit 224, to generate transmission information including the information selected from the plurality of pieces of information collected by the information collection unit 223. For example, the processing execution unit 225 uses the learning result obtained by the learning execution unit 224, to generate summary information obtained by summarizing the plurality of pieces of information collected by the information collection unit 223. The processing execution unit 225 may be an example of a second processing execution unit.
The information transmission unit 226 transmits the information. For example, the information transmission unit 226 transmits, to the upper agent 210, the information collected by the information collection unit 223. For example, the information transmission unit 226 transmits, to the upper agent 210, the information generated by the processing execution unit 225. The information transmission unit 226 may be an example of a second information transmission unit.
The registration unit 212 executes various types of registration. The registration unit 212 registers, for example, training information that is used for the learning by the upper agent 210. The registration unit 212 stores the registered training information in the storage unit 211. The training information may include a reward. The training information may include knowledge. The registration unit 212 registers the training information, for example, by receiving an input by an operator or the like of the autonomous distributed system 200. The registration unit 212 may be an example of a first registration unit.
The information collection unit 213 collects the information. The information collection unit 213 may collect the information from the intermediate agent 220. The information collection unit 213 collects, for example, the information transmitted by the information transmission unit 226 of the intermediate agent 220. The information collection unit 213 stores the collected information in the storage unit 211. The information collection unit 213 may be an example of a first information collection unit.
The learning execution unit 214 uses the information collected by the information collection unit 213, to execute the learning. The learning execution unit 214 may be an example of a first learning execution unit. The learning execution unit 214 may use the information collected by the information collection unit 213, to execute the learning in cooperation with the plurality of intermediate agents 220. The learning execution unit 214 may use the information collected by the information collection unit 213, to execute the learning in cooperation with the plurality of lower agents 230. The learning execution unit 214 may use the information collected by the information collection unit 213, to execute the learning in cooperation with the plurality of intermediate agents 220 and the plurality of lower agents 230.
The learning execution unit 214 stores the learning result in the storage unit 211. The learning result may include a model generated by the learning execution unit 214 through the learning. The learning result may include a neural network generated by the learning execution unit 214 through the learning.
The learning execution unit 214 may use any learning method. For example, the learning execution unit 214 first executes pre-training by a simulation, and then updates the model, the neural network, and the like with the information collected by the information collection unit 213. For the pre-training, real data collected by the information collection unit 213 in the past may be used. For the pre-training, synthetic data may be used. In an environment where trial and error is not permitted during learning, a method that performs rule-based learning rather than reinforcement learning and learns only the parameters of the rules is effective. The learning method may be an ANN, a DNN, heuristics, or the like.
The processing execution unit 215 executes various types of processing. The processing execution unit 215 may be an example of a first processing execution unit. The processing execution unit 215 may use the learning result obtained by the learning execution unit 214, to execute the processing on the information collected by the information collection unit 213. The processing execution unit 215 may use the learning result obtained by the learning execution unit 214, to execute processing with a purpose of stabilizing the entire network to which the autonomous distributed system 200 is applied.
The processing execution unit 215 may generate instruction information for the intermediate agent 220 or the lower agent 230, based on a result obtained by analyzing the information collected by the information collection unit 213. The processing execution unit 215 generates, for example, instruction information instructing the lower agent 230 which information, among the information collected from the IoT device 400, is to be transmitted to the intermediate agent 220. The processing execution unit 215 generates, for example, instruction information instructing the intermediate agent 220 regarding the processing content for the information collected from the intermediate agent 220.
The processing execution unit 215 may generate the training information that is to be transmitted to the lower agent 230, based on the result obtained by analyzing the information collected by the information collection unit 213. The processing execution unit 215 may generate the training information that is to be transmitted to the intermediate agent 220, based on the result obtained by analyzing the information collected by the information collection unit 213. For example, the processing execution unit 215 generates the training information including a reward that is set in accordance with a change in a trend of the information collected by the information collection unit 213.
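Generating training information whose reward is set in accordance with a change in the trend of the collected information might be sketched as follows (the trend test, window sizes, and reward values are illustrative assumptions):

```python
def detect_trend_change(history, window=3, factor=2.0):
    """Detect a change in the trend of collected information: the mean of
    the recent window departs from the earlier mean by the given factor."""
    recent = history[-window:]
    earlier = history[:-window]
    return sum(recent) / window > factor * (sum(earlier) / len(earlier))

def make_training_info(history):
    """Generate training information whose reward is set in accordance
    with the detected trend (reward keys and values are illustrative)."""
    if detect_trend_change(history):
        # A trend change: reward the reporting of anomalous information.
        return {"reward": {"anomalous_info": 1.0, "routine_info": 0.1}}
    return {"reward": {"anomalous_info": 0.5, "routine_info": 0.5}}

steady = [10, 11, 9, 10, 11, 10]
spiking = [10, 11, 9, 40, 45, 50]
```

The generated training information would then be transmitted to the lower agents 230 or the intermediate agents 220 by the information transmission unit 216, redirecting their learning toward the new trend.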
The information transmission unit 216 transmits the information. For example, the information transmission unit 216 transmits, to the lower agent 230, the instruction information generated by the processing execution unit 215. For example, the information transmission unit 216 transmits, to the intermediate agent 220, the instruction information generated by the processing execution unit 215.
For example, the information transmission unit 216 transmits, to the lower agent 230, the training information generated by the processing execution unit 215. For example, the information transmission unit 216 transmits, to the intermediate agent 220, the training information generated by the processing execution unit 215.
For example, when one of the purposes of the autonomous distributed system 200 is to analyze the information transmitted by the plurality of IoT devices 400, the lower agent 230 executes the analyses that require higher real-time performance; the upper agent 210 analyzes trends over a comparatively long period; and the intermediate agent 220 executes analyses corresponding to the periods in between.
As a specific example, when the purpose is to manage the state of occurrence of accidents in a certain area, the plurality of lower agents 230 collect image data, object sensing data, or the like from the plurality of IoT devices 400 arranged in the area. For example, the area is divided into a plurality of sub-areas, and each of the plurality of lower agents 230 collects the information from the IoT devices 400 arranged in each of the plurality of sub-areas. Then, the plurality of lower agents 230 that are responsible for geographically adjacent sub-areas share information and execute the learning so as to detect the occurrence of an accident. The lower agent 230 may use the learning result to detect the occurrence of an accident. In addition, the plurality of intermediate agents 220 are each allocated to a group of sub-areas, and collect the information from the lower agents 230 that collect the information from the IoT devices 400 in the sub-areas of the group. For example, the plurality of intermediate agents 220 cooperate with each other to execute the learning so as to predict the occurrence of an accident. The intermediate agent 220 may use the learning result to predict the occurrence of an accident. The upper agent 210 executes the learning so as to perform an overall control of the information, for example, based on the prediction results obtained by the plurality of intermediate agents 220. As a specific example, the upper agent 210 performs the control so as to increase the amount or the types of information collected for a sub-area where an accident is predicted to occur, and to reduce the amount or the types of information collected for the other sub-areas.
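The overall control by the upper agent 210 in this example, more collection where an accident is predicted and less elsewhere, can be sketched as follows (the amounts, sub-area names, and information types are illustrative assumptions):

```python
def adjust_collection(predictions, base_amount=10):
    """Plan per-sub-area collection: increase the amount and types of
    information where an accident is predicted; reduce them elsewhere."""
    plan = {}
    for sub_area, accident_predicted in predictions.items():
        if accident_predicted:
            plan[sub_area] = {"amount": base_amount * 4,
                              "types": ["image", "object", "sound"]}
        else:
            plan[sub_area] = {"amount": base_amount // 2,
                              "types": ["object"]}
    return plan

# Prediction results collected from the intermediate agents 220.
plan = adjust_collection({"A-1": True, "A-2": False})
```

The plan would be delivered downward as instruction information, so that the overall amount of information flowing upward stays bounded while coverage concentrates where it matters.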
For example, in the cloud network 300, when one of the purposes of the autonomous distributed system 200 is to smoothly route a message for providing a notification of the information obtained by the IoT device 400, the lower agent 230 performs the message routing. The lower agent 230 cooperates with another lower agent 230 to execute, for the message from a publisher, the learning to control the Topic, the To (including Copy), and the splitting of the message. The lower agent 230 uses the learning result to cooperate with another lower agent 230 and control a generation of the Topic, a determination of the To, a duplication of the message, the splitting of the message, or the like, such that the message from the publisher reaches an appropriate subscriber.
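The routing controlled by the lower agent 230 can be sketched as a topic-based router that duplicates a message when its To field lists several subscribers. The names here (`Router`, `subscribe`, `publish`) and the inbox structure are illustrative assumptions, not the disclosed implementation.

```python
class Router:
    def __init__(self):
        self.subscribers = {}  # Topic -> list of subscriber inboxes

    def subscribe(self, topic, inbox):
        self.subscribers.setdefault(topic, []).append(inbox)

    def publish(self, topic, to, payload):
        """Deliver payload to each subscriber named in To for the Topic,
        duplicating (Copy) the message when To lists several destinations."""
        delivered = 0
        for inbox in self.subscribers.get(topic, []):
            if inbox["name"] in to:
                inbox["messages"].append(dict(payload))  # duplication per destination
                delivered += 1
        return delivered

router = Router()
alice = {"name": "alice", "messages": []}
bob = {"name": "bob", "messages": []}
router.subscribe("sensor/notify", alice)
router.subscribe("sensor/notify", bob)
delivered = router.publish("sensor/notify", to={"alice", "bob"},
                           payload={"event": "update"})
```

In the embodiment, what the learning acquires is precisely the policy behind these `subscribe`/`publish` decisions (which Topic to generate, which To to set), which this fixed sketch leaves implicit.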
The intermediate agent 220 monitors the routing performed by the lower agent 230, for example, based on the information collected from the lower agent 230, and executes the learning so as to be able to execute processing to resolve a problem that occurs in the routing. For example, in a case where there are messages which have the same To and whose destinations are unknown, the intermediate agent 220 buffers the messages, and releases part of the buffered messages onto the network when the amount of buffering exceeds a threshold value. As a result, when the released message returns again, buffering is continued until the message arrives or for a certain period of time. Then, when the message starts arriving, transmission of the buffered messages which have the same To is started.
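The buffering behavior described above can be sketched as follows. The class and method names (`UnknownDestinationBuffer`, `destination_resolved`) and the concrete threshold are assumptions for illustration; only the buffer-then-probe-then-flush flow comes from the description.

```python
from collections import deque

class UnknownDestinationBuffer:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.buffers = {}  # To -> deque of buffered messages

    def buffer(self, to, message):
        """Buffer a message whose destination is unknown. When the amount
        of buffering exceeds the threshold, release one buffered message
        onto the network (returned to the caller) as a probe."""
        q = self.buffers.setdefault(to, deque())
        q.append(message)
        if len(q) > self.threshold:
            return q.popleft()
        return None

    def destination_resolved(self, to):
        """Called when messages with this To start arriving again:
        start transmission of the remaining buffered messages."""
        q = self.buffers.pop(to, deque())
        return list(q)

buf = UnknownDestinationBuffer(threshold=3)
probes = [buf.buffer("nodeX", m) for m in ["m1", "m2", "m3", "m4"]]
flushed = buf.destination_resolved("nodeX")
```

The timeout ("for a certain period of time") mentioned in the description is omitted from this sketch for brevity.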
The upper agent 210 executes the learning to detect a problem in the network, for example, based on the information collected from the intermediate agent 220. Then, when a problem in the network is detected, the upper agent 210 uses the learning result to notify a network operator or the like of the detection result, or to output an instruction to change a configuration of the network.
For example, when one of the purposes of the autonomous distributed system 200 is to implement a control in relation to automatic driving, each of the plurality of lower agents 230 collects, for each of a plurality of divided areas, the information from: the IoT device 400 mounted on a vehicle located within the area; the IoT device 400 mounted on a traffic light or the like; the IoT device 400 installed on a road or the like; and others. As a specific example, the lower agent 230 collects vehicle location information; an image captured by a vehicle camera or a street camera; vehicle sensing information or human sensing information sensed by a road sensor; a distance between vehicles sensed by a vehicle sensor or the like; local weather information; vehicle navigation information; a vehicle travel speed; and others. In this case, the lower agent 230 is responsible for a certain area, collects the information from the IoT device 400 mounted in a vehicle from the time when the vehicle enters the area, and hands over the collection of the information, when the vehicle leaves the area, to the lower agent 230 which is responsible for the area into which the vehicle enters.
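The handover of collection responsibility between lower agents 230 can be sketched as follows, under assumed names (`AreaAgent`, `hand_over`); the area identifiers are illustrative.

```python
class AreaAgent:
    def __init__(self, area_id):
        self.area_id = area_id
        self.tracked_vehicles = set()

    def vehicle_entered(self, vehicle_id):
        # Start collecting information from the IoT device mounted
        # in this vehicle while it is inside the area.
        self.tracked_vehicles.add(vehicle_id)

    def hand_over(self, vehicle_id, next_agent):
        """When the vehicle leaves this area, stop collecting and hand
        the collection over to the agent responsible for the area the
        vehicle enters."""
        self.tracked_vehicles.discard(vehicle_id)
        next_agent.vehicle_entered(vehicle_id)

agent_a = AreaAgent("area-A")
agent_b = AreaAgent("area-B")
agent_a.vehicle_entered("car1")
agent_a.hand_over("car1", agent_b)
```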
The lower agent 230 shares the information with another lower agent 230 to execute sensing of a danger, such as a case where a distance between vehicles becomes shorter than a threshold value, or a collision between a vehicle and a human being or the like. When the danger is sensed, the lower agent 230 transmits danger sensing information to the vehicle that is a target. In response to receiving the danger sensing information, the vehicle issues a warning to a driver, or stops traveling.
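The inter-vehicle distance check can be sketched as below; the function name, vehicle identifiers, and threshold value are assumptions for illustration.

```python
def sense_danger(distances, threshold=10.0):
    """Return the set of vehicles to which danger sensing information
    should be transmitted.

    distances: dict mapping a (vehicle_a, vehicle_b) pair to the sensed
    distance between those two vehicles.
    """
    to_warn = set()
    for (a, b), d in distances.items():
        if d < threshold:  # distance shorter than the threshold: danger
            to_warn.update((a, b))
    return to_warn

targets = sense_danger({("v1", "v2"): 5.0, ("v2", "v3"): 25.0})
```

Collision sensing between a vehicle and a human being would follow the same pattern with a different sensed quantity.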
The intermediate agent 220 instructs, for example, the lower agent 230 regarding what information to communicate. For example, based on the information (a day of the week, a time, or the like) collected from the lower agent 230 in the past, the intermediate agent 220 analyzes (learns) which information is important (a fact that there are a large number of people, a large number of vehicles, or the like), and becomes able to determine the information that has a high priority for collection. Then, the intermediate agent 220 issues the instruction to the lower agent 230 such that the information that has a high priority can be collected for each period. By learning in cooperation with another intermediate agent 220, the intermediate agent 220 is able to specify, with high precision, the information that has a high priority. As a specific example, as a result of the learning, the intermediate agent 220 provides the instruction such that, in the morning on weekdays, the information is collected with a higher priority given to a vehicle sensing result, and such that, in the afternoon on holidays, the information is collected with a higher priority given to a human sensing result.
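The learned priority rule of the specific example above can be sketched as a simple lookup. The fixed rule table here stands in for the learned result and is an assumption; the embodiment acquires such a rule by learning rather than hard-coding it.

```python
def collection_priority(is_weekday, is_morning):
    """Return sensing types ordered by collection priority for the period,
    reflecting the learned rule: weekday mornings prioritize vehicle
    sensing, holiday afternoons prioritize human sensing."""
    if is_weekday and is_morning:
        return ["vehicle_sensing", "human_sensing"]
    if not is_weekday and not is_morning:
        return ["human_sensing", "vehicle_sensing"]
    return ["vehicle_sensing", "human_sensing"]  # assumed default ordering

weekday_morning = collection_priority(is_weekday=True, is_morning=True)
holiday_afternoon = collection_priority(is_weekday=False, is_morning=False)
```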
The upper agent 210 executes, for example, a broader range of analysis. The upper agent 210 executes a prediction several steps ahead (for a set time) from traffic information (a traffic volume, a time, a day of the week, an event, and weather) for its area of responsibility (in a unit of a city, a prefecture, or a nation), and provides the information to the intermediate agent 220.
The computer 1200 according to the present embodiment includes a CPU 1212, a RAM 1214, and a graphics controller 1216, which are connected to each other via a host controller 1210. The computer 1200 also includes a communication interface 1222, a storage device 1224, a DVD drive 1226, and an input/output unit such as an IC card drive, which are connected to the host controller 1210 via an input/output controller 1220. The DVD drive 1226 may be a DVD-ROM drive, a DVD-RAM drive, or the like. The storage device 1224 may be a hard disk drive, a solid-state drive, or the like. The computer 1200 also includes a ROM 1230 and a legacy input/output unit such as a keyboard, which are connected to the input/output controller 1220 via an input/output chip 1240.
The CPU 1212 operates according to the programs stored in the ROM 1230 and the RAM 1214, thereby controlling each unit. The graphics controller 1216 acquires image data generated by the CPU 1212 in a frame buffer or the like provided in the RAM 1214 or in the graphics controller 1216 itself, and causes the image data to be displayed on a display device 1218.
The communication interface 1222 communicates with other electronic devices via a network. The storage device 1224 stores a program and data used by the CPU 1212 in the computer 1200. The DVD drive 1226 is configured to read the program or the data from a DVD-ROM 1227 or the like, and to provide the storage device 1224 with the program or the data. The IC card drive reads the program and data from an IC card, and/or writes the program and data to the IC card.
The ROM 1230 stores therein a boot program or the like executed by the computer 1200 at the time of activation, and/or a program depending on the hardware of the computer 1200. The input/output chip 1240 may also connect various input/output units via a USB port, a parallel port, a serial port, a keyboard port, a mouse port, or the like to the input/output controller 1220.
A program is provided by a computer-readable storage medium such as the DVD-ROM 1227 or the IC card. The program is read from the computer-readable storage medium, installed into the storage device 1224, the RAM 1214, or the ROM 1230, which are also examples of a computer-readable storage medium, and executed by the CPU 1212. Information processing written in these programs is read by the computer 1200, and provides cooperation between the programs and the various types of hardware resources described above. An apparatus or a method may be constituted by realizing the operation or processing of information in accordance with the usage of the computer 1200.
For example, in a case where a communication is performed between the computer 1200 and an external device, the CPU 1212 may execute a communication program loaded in the RAM 1214 and instruct the communication interface 1222 to perform communication processing based on a process written in the communication program. The communication interface 1222, under control of the CPU 1212, reads transmission data stored on a transmission buffer region provided in a recording medium such as the RAM 1214, the storage device 1224, the DVD-ROM 1227, or the IC card, and transmits the read transmission data to a network or writes reception data received from a network to a reception buffer region or the like provided on the recording medium.
In addition, the CPU 1212 may be configured to cause all or a necessary portion of a file or a database, which has been stored in an external recording medium such as the storage device 1224, the DVD drive 1226 (the DVD-ROM 1227), the IC card and the like, to be read into the RAM 1214, thereby executing various types of processing on the data on the RAM 1214. Next, the CPU 1212 may write the processed data back in the external recording medium.
Various types of information, such as various types of programs, data, tables, and databases, may be stored in the recording medium to undergo information processing. The CPU 1212 may execute, on the data read from the RAM 1214, various types of processing including various types of operations, information processing, conditional judgement, conditional branching, unconditional branching, information search/replacement, or the like described throughout the present disclosure and designated by instruction sequences of the programs, to write the results back to the RAM 1214. In addition, the CPU 1212 may search for information in a file, a database, or the like in the recording medium. For example, when a plurality of entries, each having an attribute value of a first attribute associated with an attribute value of a second attribute, are stored in the recording medium, the CPU 1212 may search for an entry whose attribute value of the first attribute matches a designated condition, from among the plurality of entries, and read the attribute value of the second attribute stored in the entry, thereby acquiring the attribute value of the second attribute associated with the first attribute satisfying a predetermined condition.
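The search described above can be sketched as follows; the entry contents and the function name `lookup_second` are assumed data for illustration.

```python
# Each entry associates an attribute value of a first attribute with an
# attribute value of a second attribute.
entries = [
    {"first": "apple", "second": 100},
    {"first": "banana", "second": 200},
]

def lookup_second(entries, condition):
    """Search for an entry whose first attribute matches the designated
    condition, and read the second attribute stored in that entry."""
    for entry in entries:
        if condition(entry["first"]):
            return entry["second"]
    return None  # no entry satisfies the condition

value = lookup_second(entries, lambda v: v == "banana")
```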
The above described program or software modules may be stored in the computer-readable storage medium on or near the computer 1200. In addition, a recording medium such as a hard disk or a RAM provided in a server system connected to a dedicated communication network or the Internet can be used as the computer-readable storage medium, thereby providing the program to the computer 1200 via the network.
Blocks in flowcharts and block diagrams in the present embodiments may represent steps of processes in which operations are performed or “units” of apparatuses responsible for performing operations. A specific step and “unit” may be implemented by dedicated circuitry, programmable circuitry supplied along with a computer-readable instruction stored on a computer-readable storage medium, and/or a processor supplied along with the computer-readable instruction stored on the computer-readable storage medium. The dedicated circuitry may include a digital and/or analog hardware circuit, or may include an integrated circuit
(IC) and/or a discrete circuit. The programmable circuitry may include, for example, a reconfigurable hardware circuit including logical AND, logical OR, logical XOR, logical NAND, logical NOR, and other logical operations, and a flip-flop, a register, and a memory element, such as a field-programmable gate array (FPGA) and a programmable logic array (PLA).
The computer-readable storage medium may include any tangible device capable of storing an instruction performed by an appropriate device, so that the computer-readable storage medium having the instruction stored thereon constitutes a product including an instruction that may be performed in order to provide means for performing an operation specified by a flowchart or a block diagram. Examples of the computer-readable storage medium may include an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, and the like. More specific examples of the computer-readable storage medium may include a floppy (registered trademark) disk, a diskette, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an electrically erasable programmable read only memory (EEPROM), a static random access memory (SRAM), a compact disk read only memory (CD-ROM), a digital versatile disc (DVD), a Blu-ray (registered trademark) disc, a memory stick, an integrated circuit card, or the like.
The computer-readable instructions may include an assembler instruction, an instruction-set-architecture (ISA) instruction, a machine instruction, a machine dependent instruction, a microcode, a firmware instruction, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk (registered trademark), JAVA (registered trademark), and C++, or the like, and a conventional procedural programming language such as the "C" programming language or a similar programming language.
The computer-readable instruction may be provided to a general purpose computer, a special purpose computer, or a processor or programmable circuitry of another programmable data processing device locally or via a local area network (LAN), a wide area network (WAN) such as the Internet or the like in order that the general purpose computer, the special purpose computer, or the processor or the programmable circuitry of the other programmable data processing device performs the computer-readable instruction to provide means for performing operations specified by the flowchart or the block diagram. Examples of the processor include a computer processor, a processing unit, a microprocessor, a digital signal processor, a controller, a microcontroller, and the like.
While the present invention has been described with the embodiments, the technical scope of the present invention is not limited to the above-described embodiments. It is apparent to persons skilled in the art that various alterations or improvements can be added to the above-described embodiments. It is also apparent from the scope of the claims that the embodiments added with such alterations or improvements can be included in the technical scope of the invention.
The operations, procedures, steps, and stages of each process performed by a device, system, program, and method shown in the claims, embodiments, or diagrams can be performed in any order as long as the order is not indicated by “prior to,” “before,” or the like and as long as the output from a previous process is not used in a later process. Even if the process flow is described using phrases such as “first” or “next” in the claims, embodiments, or diagrams, it does not necessarily mean that the process must be performed in this order.
Number | Date | Country | Kind
---|---|---|---
2022-066766 | Apr 2022 | JP | national
The contents of the following patent application(s) are incorporated herein by reference: NO. 2022-066766 filed in JP on Apr. 14, 2022; NO. PCT/JP2023/014300 filed in WO on Apr. 6, 2023.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2023/014300 | Apr 2023 | WO
Child | 18911251 | | US