The present invention relates to a server control apparatus, a server control method, and a program.
There are an increasing number of examples in which some of the processing of a software application (hereinafter referred to as an “APL”) is offloaded to an accelerator such as a Graphics Processing Unit (GPU) or a Field Programmable Gate Array (FPGA) to achieve performance and power efficiency that cannot be reached by software (CPU processing) alone.
A case is envisioned in which such an accelerator as described above is applied in a large server cluster such as Network Functions Virtualization (NFV) or a data center (see NPL 1, NPL 2).
The offloading of server loads in the related art is described below.
A left view of
The arithmetic system 10A illustrated in the left view of
An arithmetic system 10B illustrated in a right view of
However, because the amount of electronic circuitry that can be incorporated into an FPGA is limited, it is difficult to offload parts of multiple types of APL at the same time. Thus, in a cluster that performs multiple types of APL, the overall throughput may not be improved without optimizing which FPGA offloads a part of which APL.
In the servers 20-1 to 20-4 illustrated in
As illustrated in
In this manner, in the case that the connection with the FPGA is required for the APL, if allocation of the FPGA 21 is unbalanced with respect to the APL load, an APL limit is reached and the overall throughput decreases.
In the servers 20-1 to 20-4 illustrated in
As illustrated in
In this manner, in the case that the connection with the FPGA is optional for the APL, if the allocation of the FPGA 21 is unbalanced with respect to the APL load, the APL limit is reached and the overall throughput decreases, even though a certain APL is offloaded to the FPGA and another APL can still be placed on it.
The present invention has been made in view of such a background, and has an object to provide a server control apparatus, a server control method, and a program capable of improving throughput and availability of a server to which an accelerator is applied.
In order to solve the problems described above, the present invention provides a server control apparatus including: an acquisition unit configured to acquire, for each of applications in a server, a request to offload a certain process of the application to an accelerator, and configurations of the accelerator and the applications in the server; an optimization arithmetic unit configured to determine, by referring to information of the acquired request and the acquired configurations of the server, a ratio of processing performance to the request, and to optimize allocation of the accelerator so that variance of the ratio between the applications is equal to or less than a predetermined threshold; and a determination unit configured to determine a configuration suggestion to be taken by the server by using an arithmetic result from the optimization arithmetic unit and a predetermined policy, and to command the server to execute the configuration suggestion.
According to the present invention, it is possible to provide a server control apparatus, a server control method, and a program capable of improving the throughput and availability of a server to which an accelerator is applied.
Hereinafter, a network system and the like in a mode for implementing the present invention (hereinafter referred to as the “embodiment of the present invention”) will be described with reference to the drawings.
Configuration of Network System 1
As illustrated in
Each of the physical servers 30 includes an APL control unit 31, APLs 32 and 33, a virtual patch panel 34, and an accelerator 35.
In the network system 1, the intranet 50 and the physical servers 30 are connected by the SW 40. In the physical server 30, the accelerator 35 and each of the APLs 32 and 33 are connected to each other via the virtual patch panel 34. Instead of the virtual patch panel 34, a virtual switch may be used.
In the network system 1, each of the APLs 32 and 33 is connected to the accelerator 35 via the virtual patch panel 34 and a virtual switch (not illustrated). A Soft Patch Panel (SPP), which is an application, is used for flexible connection changes in units of modules, as in Service Function Chaining (SFC). The SPP provides a shared memory between Virtual Machines (VMs), which are configured to directly refer to the same memory space, to eliminate packet copying in the virtualization layer. To exchange packets between the physical Network Interface Card (NIC) and the shared memory, the SPP can control the reference destination for memory exchange of each VM to change the input and output destinations of packets by software. In this way, the SPP realizes dynamic connection switching between the VMs and between each VM and the physical NIC.
Configurations of Server Control Apparatus 100 and APL Control Unit 31
Server Control Apparatus 100
The server control apparatus 100 is disposed in the network system 1, and instructs the APL control unit 31 of each physical server 30 (see
As illustrated in
The integrated control unit 110 issues an arithmetic request to the optimization arithmetic unit 130 periodically or through an instruction by an operator.
The integrated control unit 110 acquires a threshold from the threshold table 111, and determines whether an arithmetic result of the optimization arithmetic unit 130 exceeds the threshold of the threshold table 111.
The request and configuration collection unit 120 acquires, for each application in each of the physical servers 30, a request to offload a certain process of the application to an accelerator, and the configurations of the accelerator and the application in the server, and stores the acquired requests and configurations in the total request table 122 and the total configuration table 121, respectively.
The optimization arithmetic unit 130 determines, by referring to the acquired information of the request and configurations of the physical server 30, a performance-to-request ratio (P/R) that is a ratio of processing performance to the request, and optimizes the allocation of the accelerator so that variance of the performance-to-request ratio between the applications is equal to or less than a predetermined threshold.
The optimization arithmetic unit 130 calculates a performance-to-request ratio (P/R) for a current allocation of the accelerator.
The optimization arithmetic unit 130 performs arithmetic of the performance-to-request ratio (P/R) in accordance with Equation (2) or Equation (3) described below.
The optimization arithmetic unit 130 optimizes the allocation of the accelerator in ranges of capacity conditions indicated by Relationship (4) and Relationship (5) described below.
The optimization arithmetic unit 130 performs an optimization arithmetic that minimizes divergence of the performance-to-request ratio (P/R) in accordance with Expression (6) described below.
The optimization arithmetic unit 130 may perform the optimization arithmetic on the allocation of the accelerator by using processing capability of the optimization arithmetic unit 130 itself or processing capability of an external calculator 71 (see
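As a minimal sketch of what the optimization arithmetic unit 130 computes, the following Python fragment calculates the performance-to-request ratio (P/R) of each APL in the simplified form described later (Expression (1)/Equation (3)) and checks the variance between the APLs against a threshold. The function names and the dictionary keys Ni, Pi, pi, and Ri are illustrative assumptions, not interfaces of the apparatus.

```python
from statistics import pvariance

def performance_to_request_ratio(n_alloc, n_servers, perf_with_acc, perf_sw, request):
    # Simplified performance-to-request ratio for one APL, in the spirit of
    # Expression (1) / Equation (3): (Ni*Pi + (N - Ni)*pi) / Ri.
    return (n_alloc * perf_with_acc + (n_servers - n_alloc) * perf_sw) / request

def needs_reallocation(apls, n_servers, variance_threshold):
    # True if the variance of P/R between the APLs exceeds the threshold,
    # i.e. the allocation of the accelerator should be re-optimized.
    ratios = [
        performance_to_request_ratio(a["Ni"], n_servers, a["Pi"], a["pi"], a["Ri"])
        for a in apls
    ]
    return pvariance(ratios) > variance_threshold
```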
The configuration determination unit 140 acquires a policy from the policy table 112. The configuration determination unit 140 determines a configuration suggestion to be taken by the physical server 30 by using the arithmetic result received from the integrated control unit 110 and the policy.
The configuration command unit 150 is an interface for commanding the physical server 30 to execute the configuration suggestion determined by the configuration determination unit 140.
Details of the threshold table 111, the policy table 112, the configuration information table 113, the total configuration table 121, and the total request table 122 are described below.
APL Control Unit 31
The APL control unit 31 performs APL control for offloading a certain process of the APL to the accelerator.
The APL control unit 31 changes configurations of generation or deletion of the APL and accelerator, and the virtual patch panel 34 in the physical server 30 (see
As illustrated in
The configuration update unit 311 updates the configurations of the accelerator and APL of the physical server 30 stored in the current configuration table 316, and transmits the updated configurations of the accelerator and APL to the request and configuration collection unit 120 of the server control apparatus 100.
The request update unit 312 updates the request stored in the current request table 317 and transmits the updated request to the request and configuration collection unit 120 of the server control apparatus 100.
The APL generation/deletion unit 313 generates or deletes the APL in accordance with a configuration command (configuration suggestion) instruction from the server control apparatus 100.
The connection switching unit 314 switches the connection through the virtual patch panel 34 in accordance with the configuration command (configuration suggestion) instruction from the server control apparatus 100.
The accelerator generation/deletion unit 315 generates or deletes the accelerator in accordance with the configuration command (configuration suggestion) instruction from the server control apparatus 100.
The configuration command unit 150 of the server control apparatus 100 and the APL control unit 31 may be directly connected to each other, or may be connected via communication control software 60 by executing the communication control software 60, as illustrated in
Table Configuration
Next, the threshold table 111, the policy table 112, the configuration information table 113, the total configuration table 121, and the total request table 122 of the server control apparatus 100 will be described.
The above-described threshold table 111, policy table 112, and configuration information table 113 hold basic operating parameters for the physical servers 30. The above-described total configuration table 121 and total request table 122 are tables in which the data of the physical servers 30 are aggregated, and have similar configurations (items) to those of the current configuration table 316 and the current request table 317 of the APL control unit 31, respectively.
Table Configuration of Server Control Apparatus 100
As illustrated in
Here, P indicates performance, R indicates a request, and P/R indicates a performance-to-request ratio.
In a case that the divergence of P/R exceeds the value of “0.5”, the server control apparatus 100 performs an optimization arithmetic (step S16 in
In a case that the minimum value of P/R falls below the value of “0.5”, the server control apparatus 100 performs the optimization arithmetic (step S16 in
In a case that the maximum value of P/R exceeds the value of “2”, the server control apparatus 100 performs the optimization arithmetic (step S16 in
Note that it is sufficient that at least one of the divergence of the ratio, the minimum value of the ratio, and the maximum value of the ratio is included as a parameter of the ratio.
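A hedged sketch of such a threshold check follows. The dictionary representation of the threshold table 111 and the definition of the divergence as the spread (maximum minus minimum) of P/R are assumptions made only for illustration.

```python
def exceeds_thresholds(ratios, thresholds):
    # Decide whether the optimization arithmetic (step S16) should be triggered.
    # thresholds is assumed to mirror the threshold table 111, for example
    # {"divergence": 0.5, "minimum": 0.5, "maximum": 2.0}; any subset may be set.
    # The divergence is computed here as the spread (max - min) of P/R; this
    # particular definition is an assumption of the sketch.
    if "divergence" in thresholds and max(ratios) - min(ratios) > thresholds["divergence"]:
        return True
    if "minimum" in thresholds and min(ratios) < thresholds["minimum"]:
        return True
    if "maximum" in thresholds and max(ratios) > thresholds["maximum"]:
        return True
    return False
```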
As illustrated in
The optimization policy determines how strictly the optimization calculation results of P/R are followed.
The APL migration policy determines whether to suppress the number of APL migrations.
The FPGA rewrite policy determines whether to suppress the number of FPGA rewrites.
As illustrated in
For example, APL #1 has the performance-with-FPGA Pi of “80”, the software performance pi of “40”, the FPGA required capacity Ci of “1 (1 represents 100%)”, and the software required capacity ci of “0.3 (1 represents 100%)”. The APL #1 has the performance-with-FPGA Pi and software performance pi greater than those of other APLs, and thus, has the software required capacity ci larger than those of other APLs.
An APL #2 has the performance-with-FPGA Pi of “40”, the software performance pi of “20”, the FPGA required capacity Ci of “1”, and the software required capacity ci of “0.1”, that is, has the performance-with-FPGA Pi, the software performance pi, and the software required capacity ci smaller than those of the other APLs.
An APL #3 has the performance-with-FPGA Pi of “60”, the software performance pi of “30”, the FPGA required capacity Ci of “1”, and the software required capacity ci of “0.2”, that is, has the performance-with-FPGA Pi, the software performance pi, and the software required capacity ci which are medium degrees as compared to those of the other APLs.
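For concreteness, the example values above could be held in a structure such as the following. This is only an illustrative in-memory representation of the example, not the actual storage format of the configuration information table 113.

```python
# Illustrative representation of the configuration information table 113 using
# the example values above. Keys follow the notation in the text:
# Pi (performance with FPGA), pi (software performance),
# Ci (FPGA required capacity), ci (software required capacity); 1 represents 100%.
CONFIGURATION_INFO = {
    "APL#1": {"Pi": 80, "pi": 40, "Ci": 1.0, "ci": 0.3},
    "APL#2": {"Pi": 40, "pi": 20, "Ci": 1.0, "ci": 0.1},
    "APL#3": {"Pi": 60, "pi": 30, "Ci": 1.0, "ci": 0.2},
}
```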
As described above, the above-described total configuration table 121 and total request table 122 are tables in which the data of the physical servers 30 are aggregated.
As illustrated in a lower view of
The total configuration table 121 holds the number of servers aggregated in the cluster for each server ID row. Note that the current configuration table 316 (see an upper view of
Here, an FPGA column in the total configuration table 121 illustrated in the lower view of
As illustrated in a lower view of
Table Configuration of APL Control Unit 31
As illustrated in the upper view of
The example in the upper view of
As indicated by a reference sign d in
As illustrated in the upper view of
As indicated by a reference sign e in
Note that even if the number of requests for an APL in the current request table 317 is large in a certain physical server 30, the number of requests for that APL is not necessarily large when aggregated over all the servers. By referring to the total request table 122 aggregated from all the servers, it is possible to offload the APL with a large number of requests to the FPGA.
The items and values of the tables described above are examples and are not limited thereto.
Hereinafter, a server control method for the server control apparatus 100 configured as described above will be described.
Accelerator Allocation
In addition, Pi represents the performance of i with accelerator allocation, and pi represents the performance of i without accelerator allocation. Furthermore, Ri represents a request for i.
The server control apparatus 100 calculates a performance-to-request ratio for each APL, and calculates an accelerator allocation optimal solution such that the variance of the ratio between the APLs is reduced.
For example, the accelerator allocation optimal solution is calculated in accordance with Expression (1).
(NiPi+(N−Ni)pi)/Ri (1)
where the following is assumed:
N: the number of physical servers,
Ni: the number of accelerator allocations for i,
Pi: performance of i with accelerator allocation,
pi: performance of i without accelerator allocation, and
Ri: request for i,
here, ΣNi=N.
Any method may be used for an optimal solution calculation method, such as heuristics, a Genetic Algorithm (GA), or Machine Learning (ML).
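As one concrete, deliberately naive instance of such a method, the following sketch enumerates candidate allocations and returns the one with the smallest variance of Expression (1) between the APLs. The input format (a list of dicts with keys Pi, pi, and Ri) is an assumption for illustration; a heuristic, GA, or ML method would replace the exhaustive search in practice.

```python
from itertools import product
from statistics import pvariance

def ratio(Ni, N, Pi, pi, Ri):
    # Expression (1): (Ni*Pi + (N - Ni)*pi) / Ri
    return (Ni * Pi + (N - Ni) * pi) / Ri

def brute_force_allocation(apls, N):
    # Exhaustively search allocations (one Ni per APL, with sum(Ni) == N) and
    # return the one that minimizes the variance of the performance-to-request
    # ratio between the APLs. Practical only for small clusters.
    best, best_var = None, float("inf")
    for counts in product(range(N + 1), repeat=len(apls)):
        if sum(counts) != N:
            continue
        ratios = [ratio(Ni, N, a["Pi"], a["pi"], a["Ri"]) for Ni, a in zip(counts, apls)]
        var = pvariance(ratios)
        if var < best_var:
            best, best_var = counts, var
    return best
```

For example, calling brute_force_allocation with the three APLs of the configuration information table and N = 4 returns a tuple of allocation counts summing to 4.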
If the accelerator allocation optimal solution differs from a current state, the server control apparatus 100 rewrites contents of the accelerator for the APL with more allocations to contents of the accelerator for the APL with less allocations.
For example, assume that the accelerator allocation optimal solution (ideal) for the APL 1, the APL 2, and the APL 3 is expressed as APL 1 : APL 2 : APL 3 = 1:1:1, whereas the current state is expressed as APL 1 : APL 2 : APL 3 = 5:3:2. In this case, the performance of the APL 1 is excessive, and so the contents of the accelerator for the APL 1 are rewritten to the contents of the accelerator for the APL 3.
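A hedged sketch of deriving the individual rewrites from the difference between the current and ideal allocations follows. The integer ideal counts passed in (for example, obtained by scaling the 1:1:1 ratio to the total number of FPGAs in the cluster) are an assumption, since the text gives only the ratios.

```python
def rewrite_moves(current, ideal):
    # Pair surplus accelerators of over-allocated APLs with the deficits of
    # under-allocated APLs; each pair corresponds to one accelerator rewrite.
    # In the example above (current APL1:APL2:APL3 = 5:3:2 versus an ideal of
    # 1:1:1), accelerators allocated to APL 1 would be rewritten for APL 3.
    surplus = {apl: cur - ideal[apl] for apl, cur in current.items() if cur > ideal[apl]}
    deficit = {apl: ideal[apl] - cur for apl, cur in current.items() if cur < ideal[apl]}
    moves = []
    for src, extra in surplus.items():
        for dst in list(deficit):
            take = min(extra, deficit[dst])
            moves.append((src, dst, take))
            extra -= take
            deficit[dst] -= take
            if deficit[dst] == 0:
                del deficit[dst]
            if extra == 0:
                break
    return moves
```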
Accelerator Rewriting Method
The accelerator rewriting method can be divided into a case in which only a single APL is present on the physical server 30 (see
Case of Only Single APL on Physical Server 30
As illustrated in a left view of
As indicated by the arrow in the left view of
As indicated by a reference sign f in
As illustrated in the right view of
The following procedures (1) and (2) are performed as procedures for changing the contents of the accelerator (acc 1) to those of the accelerator (acc 2) for the APL 2.
(1) Delete the APL 1 from the physical server 30.
(2) Generate the APL 2 in the physical server 30.
Case of Plurality of APLs Mixedly Present on Physical Server 30
As illustrated in
The APL 1 occupies the accelerator 35 via the virtual patch panel 34 as indicated by an arrow g in a left view of
As indicated by a reference sign i in
As illustrated in a right view of
The following procedures (3), (4), and (5) are performed as procedures for changing the contents of the accelerator (acc 1) to those of the accelerator (acc 2) for the APL 2.
(3) Generate the acceleration portion (acc 2) of the APL 2 (see an arrow j in the right view of
(4) Delete the acceleration portion (acc 1) of the APL 1.
(5) Switch a path of the virtual patch panel 34 such that the APL 2 occupies the accelerator (acc 2) (see an arrow k in the right of
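Expressed as control-plane calls, the procedure above might look like the following sketch. The client class and its method names are hypothetical stand-ins for the commands exchanged with the accelerator generation/deletion unit 315, the APL generation/deletion unit 313, and the connection switching unit 314; they are not an actual interface of the apparatus.

```python
class AplControlClient:
    # Hypothetical client for the APL control unit 31; the method names are
    # illustrative assumptions only.
    def generate_accelerator(self, apl_id): ...
    def delete_accelerator(self, apl_id): ...
    def switch_virtual_patch_panel(self, apl_id, accelerator_id): ...

def rewrite_accelerator_shared_server(client, src_apl, dst_apl, dst_acc):
    # Rewrite procedure when a plurality of APLs share the physical server:
    # generate the acceleration portion for the destination APL, delete the
    # source APL's acceleration portion, and switch the virtual patch panel
    # path so that the destination APL occupies the new accelerator.
    client.generate_accelerator(dst_apl)                   # procedure (3)
    client.delete_accelerator(src_apl)                     # procedure (4)
    client.switch_virtual_patch_panel(dst_apl, dst_acc)    # procedure (5)
```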
Operation Image
Physical servers 30-1 to 30-4 illustrated in
Since relatively more FPGAs are allocated to the APL 2, the server control apparatus 100 applies the APL 2 only to the physical server 30-4 illustrated in
As compared to
In the physical servers 30-1 to 30-4 illustrated in
Because the connection with the FPGA is optional for the APL, the server control apparatus 100 can offload a certain APL into the FPGA and also put another APL on the FPGA. Since relatively more FPGAs are allocated to the APL 2, the server control apparatus 100 connects, to the APL 2, only the FPGA 35 in the physical server 30-4 illustrated in
Control Sequence
As illustrated in
The optimization arithmetic unit 130 requests the total number of requests from the total request table 122 (see
The total request table 122 responds with the total number of requests to the optimization arithmetic unit 130 (step S103).
The optimization arithmetic unit 130 requests the total configurations from the total configuration table 121 (see
The total configuration table 121 responds with the total configurations to the optimization arithmetic unit 130 (step S105).
The optimization arithmetic unit 130 requests configuration information from the configuration information table 113 (see
The configuration information table 113 responds with the configuration information to the optimization arithmetic unit 130 (step S107).
The optimization arithmetic unit 130 responds with the current configuration arithmetic to the integrated control unit 110 (step S108).
The integrated control unit 110 requests a threshold from the threshold table 111 (see
The threshold table 111 responds with the threshold to the integrated control unit 110 (step S110).
The integrated control unit 110 requests optimization arithmetic from the optimization arithmetic unit 130 (step S111).
The optimization arithmetic unit 130 responds with the optimization arithmetic to the integrated control unit 110 (step S112).
The integrated control unit 110 requests configuration determination (arithmetic result) from the configuration determination unit 140 (step S113).
The configuration determination unit 140 requests a policy from the policy table 112 (see
The policy table 112 responds with the policy to the configuration determination unit 140 (step S115).
The configuration determination unit 140 commands the configuration (transmits the configuration suggestions) to the configuration command unit 150 (step S116).
The configuration command unit 150 responds with the configuration to the configuration determination unit 140 (step S117).
The configuration determination unit 140 responds with the configuration to the integrated control unit 110 (step S118).
The control sequence in the server control apparatus 100 ends here.
As illustrated in
The connection switching unit 314 responds with the connection switching to the configuration command unit 150 (step S202).
The configuration command unit 150 requests accelerator generation/deletion from the accelerator generation/deletion unit 315 (step S203).
The accelerator generation/deletion unit 315 requests configuration update from the configuration update unit 311 (step S204).
The configuration update unit 311 requests the configuration update from the current configuration table 316 (step S205).
The current configuration table 316 responds with the configuration update to the configuration update unit 311 (step S206).
The configuration update unit 311 responds with the configuration update to the accelerator generation/deletion unit 315 (step S207).
The accelerator generation/deletion unit 315 responds with the accelerator generation/deletion to the configuration command unit 150 (step S208).
The configuration command unit 150 requests APL generation/deletion from the APL generation/deletion unit 313 (step S209).
The APL generation/deletion unit 313 requests configuration update from the configuration update unit 311 (step S210).
The configuration update unit 311 requests the configuration update from the current configuration table 316 (step S211).
The current configuration table 316 responds with the configuration update to the configuration update unit 311 (step S212).
The configuration update unit 311 responds with the configuration update to the APL generation/deletion unit 313 (step S213).
The APL generation/deletion unit 313 responds with the APL generation/deletion to the configuration command unit 150 (step S214).
The configuration command unit 150 requests connection switching from the connection switching unit 314 (step S215).
The connection switching unit 314 responds with the connection switching to the configuration command unit 150 (step S216).
The configuration command unit 150 requests connection switching from the SW control unit 41 in the SW 40 (see
The SW control unit 41 responds with the connection switching to the configuration command unit 150 (step S217).
The configuration command unit 150 responds with the configuration to the configuration determination unit 140 (see
Flowchart
In step S12, the optimization arithmetic unit 130 (see
In step S13, the optimization arithmetic unit 130 calculates a performance-to-request ratio for a current allocation of the accelerator (“current configuration arithmetic processing”).
A performance-to-request ratio of an APL i is described.
The optimization arithmetic unit 130 calculates the performance-to-request ratio for the current allocation of the accelerator in accordance with Equation (2), which is expressed using δ-functions.
In a case in which the APL i, in a state of either using or not using the FPGA, is located on every one of the servers, Equation (2) above is simplified to Equation (3).
An FPGA capacity condition on the server and a software capacity condition on the server are described.
The FPGA capacity condition on the server j is expressed by Relationship (4). Relationship (4) expresses a condition that an FPGA capacity (Sj) on a server j is not exceeded.
The software capacity condition on the server j is expressed by Relationship (5). Relationship (5) expresses a condition that a software capacity (sj) on the server j is not exceeded.
The FPGA capacity condition on the server j (refer to Relationship (4)) and the software capacity condition on the server j (refer to Relationship (5)) are the conditions that are prioritized over the performance-to-request ratio of the APL i described above and an optimization condition described later.
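A hedged sketch of checking these capacity conditions for one server follows. The dictionary keys reuse the notation of the configuration information table 113, and the split of APLs into FPGA-offloaded and software-only groups is an assumption made for this sketch.

```python
def within_capacity(apls_on_server, fpga_capacity, software_capacity):
    # Capacity conditions on a server j (Relationships (4) and (5)): the summed
    # FPGA required capacities Ci of the APLs offloaded to the FPGA must not
    # exceed Sj, and the summed software required capacities ci of the APLs run
    # in software must not exceed sj. Treating the two sums as disjoint
    # (offloaded versus software-only APLs) is an assumption of this sketch.
    fpga_used = sum(a["Ci"] for a in apls_on_server if a["on_fpga"])
    software_used = sum(a["ci"] for a in apls_on_server if not a["on_fpga"])
    return fpga_used <= fpga_capacity and software_used <= software_capacity
```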
Returning to the flowchart in
In step S15, the integrated control unit 110 determines whether or not the arithmetic result in step S13 described above has a deviation equal to or greater than the threshold of the threshold table 111 (e.g., whether the arithmetic result is equal to or greater than the threshold).
In a case that the arithmetic result has a deviation equal to or greater than the threshold of the threshold table 111 (step S15: Yes), the process of the server control apparatus 100 proceeds to step S16, and in a case that the arithmetic result does not have a deviation equal to or greater than the threshold of the threshold table 111 (step S15: No), the process proceeds to step S23.
In step S16, the optimization arithmetic unit 130 performs the optimization arithmetic on the allocation of the accelerator by using the processing capability of the optimization arithmetic unit 130 itself or the processing capability of the external calculator 71 (see
The optimization condition for the allocation of the accelerator is described.
The optimization condition for the allocation of the accelerator is expressed by Expression (6). Expression (6) expresses a condition for minimizing the divergence of the performance-to-request ratio of the APL i.
In addition, in a case that the arithmetic results are included in the external DB 72 (see
In step S17, the configuration determination unit 140 (see
In step S18, the configuration determination unit 140 uses the arithmetic result received from the integrated control unit 110 and the acquired policy to determine a configuration suggestion to be taken next (“configuration suggestion create processing”).
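A sketch of how the policy could shape the configuration suggestion is given below. Each move is assumed to be a dict describing one change, and both the policy keys and the move fields are hypothetical; they only capture the intent of the policies named in the policy table 112.

```python
def apply_policies(moves, policy):
    # Shape the raw optimization result into a configuration suggestion in line
    # with the policy table 112. The keys ("suppress_apl_migration",
    # "suppress_fpga_rewrite", "max_rewrites") are illustrative assumptions.
    suggestion = list(moves)
    if policy.get("suppress_fpga_rewrite"):
        # Limit the number of FPGA rewrites performed in one cycle.
        suggestion = suggestion[: policy.get("max_rewrites", 1)]
    if policy.get("suppress_apl_migration"):
        # Drop changes that would require migrating an APL to another server.
        suggestion = [m for m in suggestion if not m.get("requires_apl_migration")]
    return suggestion
```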
Hereinafter, a portion surrounded by dashed lines in
In step S19, the connection switching unit 314 in the physical server 30 (see
In step S20, the accelerator generation/deletion unit 315 in the APL control unit 31 (see
In step S21, the APL generation/deletion unit 313 in the APL control unit 31 (see
In step S22, the connection switching unit 314 in the APL control unit 31 (see
In step S23, the SW control unit 41 in the SW 40 (see
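Tying the earlier sketches together, one control cycle corresponding roughly to steps S12 through S23 might look as follows. The collaborator objects and their method names are assumptions used only to illustrate the flow; they are not the actual interfaces of the request and configuration collection unit 120, the optimization arithmetic unit 130, or the configuration command unit 150, and the helpers exceeds_thresholds and apply_policies are the sketches shown above.

```python
def control_cycle(collector, optimizer, thresholds, policy, commander):
    # One illustrative control cycle of the server control apparatus 100.
    requests, configs = collector.acquire()                        # step S12
    current_ratios = optimizer.current_ratios(requests, configs)   # step S13
    if not exceeds_thresholds(current_ratios, thresholds):         # step S15
        return None  # deviation below threshold: keep the current allocation
    result = optimizer.optimize(requests, configs)                 # step S16
    suggestion = apply_policies(result, policy)                    # steps S17-S18
    commander.execute(suggestion)                                  # steps S19-S23
    return suggestion
```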
Hardware Configuration
The server control apparatus 100 according to the present embodiment is realized by a computer 900 configured as illustrated in
The CPU 910 operates based on programs stored in the ROM 930 or the HDD 940, and performs control of each unit. The ROM 930 stores a boot program executed by the CPU 910 when the computer 900 is activated, a program dependent on the hardware of the computer 900, and the like.
The HDD 940 stores programs executed by the CPU 910, data used by such programs, and the like. The HDD 940 may store, for example, the threshold table 111, the policy table 112, and the configuration information table 113 (see
The CPU 910 controls, via the input/output interface 960, an output device such as a display and a printer, and an input device such as a keyboard and a mouse. The CPU 910 acquires data from the input device via the input/output interface 960. The CPU 910 also outputs the generated data to the output device via the input/output interface 960.
The media interface 970 reads the program or data stored in a recording medium 980 and provides the read program to the CPU 910 via the RAM 920. The CPU 910 loads such programs from the recording medium 980 onto the RAM 920 via the media interface 970 to execute the loaded programs. The recording medium 980 is, for example, an optical recording medium such as a Digital Versatile Disc (DVD) or a Phase change rewritable Disk (PD), a magneto-optical recording medium such as a Magneto Optical disk (MO), a tape medium, a magnetic recording medium, a semiconductor memory, or the like.
For example, in a case that the computer 900 functions as the server control apparatus 100 according to the present embodiment, the CPU 910 in the computer 900 implements the functions of the respective units of the server control apparatus 100 by executing the programs loaded on the RAM 920. In addition, the HDD 940 stores therein data of the respective units in the server control apparatus 100. The CPU 910 in the computer 900 reads and executes these programs from the recording medium 980, but may acquire these programs from other devices via the communication network 80, as another example.
Advantages
As described above, the server control apparatus 100 includes a request and configuration collection unit 120 that acquires a request to offload a certain process of an application to an accelerator for each application in a server, and configurations of the accelerator and the application in the physical server 30, an optimization arithmetic unit 130 that determines, by referring to information of the acquired request and configurations of the physical server 30, a ratio of processing performance to the request, and optimizes allocation of the accelerator so that variance of the ratio between the applications is equal to or less than a predetermined threshold, and a configuration determination unit 140 that determines a configuration suggestion to be taken by the physical server 30 by using an arithmetic result from the optimization arithmetic unit 130 and a predetermined policy, and commands the physical server 30 to execute the configuration suggestion.
This can improve the throughput and availability of the server to which the accelerator is applied.
The server control apparatus 100 further includes a threshold table 111 that includes at least one of a divergence of the ratio, a minimum value of the ratio, or a maximum value of the ratio as a parameter of the ratio, and holds a threshold for applying the parameter, wherein the optimization arithmetic unit 130 performs optimization arithmetic by using the threshold for the parameter.
This allows a combination of the parameters to be used to select and perform optimization arithmetic that is more appropriate to the cluster situation.
In the server control apparatus 100, the optimization arithmetic unit 130 performs arithmetic of the ratio in accordance with Equation (2) above.
This makes it possible to calculate a performance-to-request ratio for each APL, and calculate an accelerator allocation optimal solution such that the variance of the ratio between the APLs is reduced.
In the server control apparatus 100, the optimization arithmetic unit 130 performs arithmetic of the ratio in accordance with Equation (3) above.
This makes it possible to more easily calculate the accelerator allocation optimal solution such that the variance of the ratio between the APLs is reduced in the case that the APL i in a state of either using or not using the FPGA is located on each of all the servers.
In the server control apparatus 100, the optimization arithmetic unit 130 optimizes the allocation of the accelerator in the ranges of the capacity conditions indicated by Relationship (4) and Relationship (5) above.
This makes it possible to calculate the accelerator allocation optimal solution such that the variance of the ratio between the APLs is reduced without exceeding the accelerator capacity or the software capacity on the server.
In the server control apparatus 100, the optimization arithmetic unit 130 performs the optimization arithmetic that minimizes the divergence of the ratio.
This makes it possible to optimize the allocation of the accelerator such that the variance of the ratio between the APLs is reduced by performing the optimization arithmetic that minimizes the divergence of the ratios.
Note that the server control apparatus 100 according to the present embodiment, more specifically, includes the request and configuration collection unit 120 that acquires a request to offload a certain process of an application to an accelerator for each application in the physical server 30, and configurations of the accelerator and the application in the server, the optimization arithmetic unit 130 that determines, by referring to information of the acquired request and configurations of the physical server 30, a performance-to-request ratio (P/R) that is a ratio of processing performance to the request, and optimizes allocation of the accelerator so that variance of the performance-to-request ratio (P/R) between the applications is equal to or less than a predetermined threshold, the configuration determination unit 140 that determines a configuration suggestion to be taken by the physical server 30 by using an arithmetic result received and a predetermined policy, and the configuration command unit 150 that commands the physical server 30 to execute the configuration suggestion determined by the configuration determination unit 140.
In this manner, the server control apparatus 100 according to the present embodiment calculates the performance ratio to the request for each APL, and optimizes the allocation of the accelerator so that the variance of the performance-to-request ratio between the APLs is reduced. This allows the server control apparatus 100 according to the present embodiment to allocate more accelerators for the APL with relatively more requests to effectively utilize the accelerator and achieve the improvement in the throughput and availability of the overall cluster (physical servers).
Others
Among processes described in the embodiments, all or some of processes described as being performed automatically can be manually performed, or all or some of processes described as being performed manually can be performed automatically by well-known methods. In addition, information including the processing procedures, the control procedures, the specific names, and the various types of data, and various parameters described in the aforementioned document and drawings can be modified as desired unless otherwise specified.
In addition, components of the devices illustrated in the drawings are functionally conceptual and are not necessarily physically configured as illustrated in the drawings. That is, the specific aspects of distribution and integration of the devices are not limited to those illustrated in the drawings. All or some of the components may be distributed or integrated functionally or physically in desired units depending on various kinds of loads, states of use, and the like.
Some or all of the configurations, the functions, the processing units, the processing mechanisms, and the like may be realized in hardware by being designed, for example, in an integrated circuit. Each of the configurations, the functions, and the like may be realized in software for a processor to interpret and execute a program that implements the functions. Information such as programs, tables, files, and the like, which are for implementing the functions can be held in a recording apparatus such as a memory, a hard disk, and a Solid State Drive (SSD), or a recording medium such as an Integrated Circuit (IC) card, a Secure Digital (SD) card, and an optical disk. In the present specification, the processes described as time-sequential processes include not only processes performed sequentially in the described order but also processes performed in parallel or individually (for example, parallel processing or object-based processing) without necessarily being processed sequentially.