This application relates to the technical field of computers, and specifically, to a method and apparatus for scoring a precomputation model, a device, and a storage medium.
An Online Analytical Processing (OLAP) precomputation model is widely applied to data query scenarios. In the prior art, when facing a plurality of different OLAP precomputation models, there is no way to evaluate each model and compare the models against one another horizontally; as a result, a solution for quantitative horizontal comparison is lacking.
This application is mainly intended to provide a method and apparatus for scoring a precomputation model, a device, and a storage medium, to resolve the above problem.
In order to realize the above purpose, an aspect of this application provides a method for scoring a precomputation model, including the following operations.
In a plurality of precomputation models, a score when each precomputation model executes a same query load is calculated.
The precomputation model with the largest score is determined as a target precomputation model according to the score of each precomputation model.
The target precomputation model is used for query calculation.
In an implementation, the operation of calculating, in the plurality of precomputation models, the score when each precomputation model executes the same query load includes the following operations.
Computation resource overhead of each precomputation model is calculated.
Query time of each precomputation model is calculated.
The score of the precomputation model is determined according to the computation resource overhead and the query time.
In an implementation, the precomputation model is a multidimensional cube model.
In an implementation, the operation of determining the score of the precomputation model according to the computation resource overhead and the query time includes the following operation.
The score of the precomputation model is calculated by means of the following formula: θ = λ / (A × B), where θ is the score, A is an occupied computation resource, B is the query time, and λ is a preset unit weight.
In order to realize the above purpose, another aspect of this application provides an apparatus for scoring a precomputation model.
The apparatus for scoring a precomputation model according to this application includes a computation module, a selection module, and a query module. The computation module is configured to calculate, in a plurality of precomputation models, a score when each precomputation model executes a same query load.
The selection module is configured to determine the precomputation model with the largest score as a target precomputation model according to the score of each precomputation model.
The query module is configured to use the target precomputation model for query calculation.
In an implementation, the computation module is further configured to execute the following operations. Computation resource overhead of each precomputation model is calculated.
Query time of each precomputation model is calculated.
The score of the precomputation model is determined according to the computation resource overhead and the query time.
In an implementation, the computation module is further configured to execute the following operation.
The score of the precomputation model is calculated by means of the following formula: θ = λ / (A × B), where θ is the score, A is an occupied computation resource, B is the query time, and λ is a preset unit weight.
In an implementation, the precomputation model is a multidimensional cube model.
A third aspect of this application further provides an electronic device, including at least one processor and at least one memory. The memory is configured to store one or more program instructions. The processor is configured to run the one or more program instructions to execute any of the methods described above.
A fourth aspect of this application further provides a computer-readable storage medium, including one or more program instructions. The one or more program instructions are configured to be executed by a processor to perform any of the methods described above.
In the embodiments of this application, the precomputation model may be quantitatively scored, so that horizontal comparison can conveniently be performed when different precomputation models execute query tasks of the same load. Therefore, a user can conveniently select among different precomputation models for query calculation.
The accompanying drawings described herein are used to provide a further understanding of this application and constitute a part of this application, so that other features, objectives, and advantages of this application become more apparent. The exemplary embodiments of this application and the description thereof are used to explain this application, and do not constitute improper limitations to this application. In the drawings:
In order to enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application will be clearly and completely described below in combination with the drawings in the embodiments of this application. It is apparent that the described embodiments are only part of the embodiments of this application, not all the embodiments. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments in this application without creative work shall fall within the scope of protection of this application.
It is to be noted that the embodiments in this application and the features in the embodiments may be combined with one another without conflict. This application will now be described below in detail with reference to the drawings and the embodiments.
This application provides a method for scoring a precomputation model, referring to a flowchart of the method for scoring a precomputation model shown in
At S102, in a plurality of precomputation models, a score when each precomputation model executes a same query load is calculated.
Specifically, the plurality of precomputation models are pre-stored in a database. The precomputation model is a multidimensional cube model.
At S104, the precomputation model with the largest score is determined as a target precomputation model according to the score of each precomputation model.
At S106, the target precomputation model is used for query calculation.
According to the technical solution of the present invention, the precomputation model with the optimal performance may be determined for query. Therefore, query efficiency can be enhanced.
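By way of a non-limiting illustration, the overall flow of S102 to S106 may be sketched in Python as follows; the model names, the concrete score values, and the helper function are assumptions made only for this example and are not part of the described method.

```python
# Minimal sketch of S102-S106: score every precomputation model on the same
# query load, pick the model with the largest score, and use it for querying.
# The model names and score values below are illustrative assumptions only.

def select_target_model(scores: dict) -> str:
    """Return the name of the precomputation model with the largest score."""
    return max(scores, key=scores.get)

# S102: scores obtained when each model executes the same query load.
scores = {"cube_model_A": 50.0, "cube_model_B": 100.0}

# S104: the model with the largest score becomes the target model.
target = select_target_model(scores)  # -> "cube_model_B"

# S106: the target model is then the one used for the actual query calculation.
print(f"Target precomputation model: {target}")
```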
In an implementation, the step of calculating, in the plurality of precomputation models, the score when each precomputation model executes the same query load includes the following steps.
At S202, computation resource overhead of each precomputation model is calculated.
At S204, query time of each precomputation model is calculated.
At S206, the score of the precomputation model is determined according to the computation resource overhead and the query time.
Specifically, a cloud query system includes a server cluster containing a plurality of query servers, and a distributed computation mode is used for query.
Computation resource consumption includes: a CPU or GPU resource, an internal storage resource, and a storage resource.
Exemplarily, a computation resource may be a disk storage space.
Alternatively, the computation resource is a network resource, that is, the network bandwidth occupied by data transmission from one node to another.
Exemplarily, when the precomputation model A performs query calculation, 10 servers are used. Each server has a CPU of 16 GB, hard disk storage of 500 GB, and an internal storage of 16 T, and the network bandwidth is 1 G per second. The 10 servers take one hour to complete a query task.
For the same query task, when the precomputation model B performs query calculation with 5 servers of the same configuration and also takes 1 hour to complete the task, the score of the precomputation model B is determined to be twice the score of the precomputation model A.
The score of the precomputation model is calculated by means of the following formula: θ = λ / (A × B). Here, θ is the score, A is the occupied computation resource, and B is the query time.
λ is a preset unit weight, and the weight is obtained according to analysis and statistics of big data.
Exemplarily, for the occupied computation resource A, when 10 servers are used, each server having the CPU of 16 GB, the hard disk storage of 500 GB, the internal storage of 16 T, and the network bandwidth of 1 G per second, A is regarded as 1.
If 20 servers are used and the other parameters remain unchanged, with each server having the same CPU, hard disk storage, internal storage, and network bandwidth, A is regarded as 2.
If A is 1, the query takes 1 hour so that B is 1, and λ is 100, then the score θ = 100.
If A remains unchanged and the query takes 2 hours so that B is 2, then the score θ = 50.
If B remains unchanged and A becomes 2, then the score also becomes 50.
The above formula indicates that the score is inversely proportional to the computation time and inversely proportional to the occupied resource. When the same task is completed, a longer computation time yields a lower score, and a larger occupied resource yields a lower score.
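The numerical behavior above can be checked with a short sketch, assuming the score formula θ = λ / (A × B) inferred from these examples; the baseline of 10 servers used to normalize A and the helper names are assumptions made for illustration.

```python
# Sketch of the inferred scoring formula: theta = lambda_weight / (A * B),
# where A is the occupied computation resource normalized against a baseline
# configuration (10 servers -> A = 1, 20 servers -> A = 2) and B is the query
# time in hours. lambda_weight corresponds to the preset unit weight.

BASELINE_SERVERS = 10  # illustrative baseline configuration

def resource_overhead(num_servers: int) -> float:
    """Normalize the occupied computation resource A against the baseline."""
    return num_servers / BASELINE_SERVERS

def score(num_servers: int, query_time_hours: float,
          lambda_weight: float = 100.0) -> float:
    a = resource_overhead(num_servers)
    return lambda_weight / (a * query_time_hours)

assert score(10, 1.0) == 100.0              # A = 1, B = 1 -> theta = 100
assert score(10, 2.0) == 50.0               # A = 1, B = 2 -> theta = 50
assert score(20, 1.0) == 50.0               # A = 2, B = 1 -> theta = 50
assert score(5, 1.0) == 2 * score(10, 1.0)  # model B: half the servers, same time
```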
Referring to
It is to be noted that the steps shown in the flow diagram of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and although a logical sequence is shown in the flow diagram, in some cases, the steps shown or described may be executed in a different order than here.
An embodiment of this application further provides an apparatus for scoring a precomputation model. As shown in
The computation module 31 is configured to calculate, in a plurality of precomputation models, a score when each precomputation model executes a same query load.
The selection module 32 is configured to determine the precomputation model with the largest score as a target precomputation model according to the score of each precomputation model.
The query module 33 is configured to use the target precomputation model for query calculation.
In an implementation, the computation module 31 is further configured to execute the following operations. Computation resource overhead of each precomputation model is calculated.
Query time of each precomputation model is calculated.
The score of the precomputation model is determined according to the computation resource overhead and the query time.
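Purely for illustration, the module split of the apparatus (computation module 31, selection module 32, and query module 33) might be organized as follows; the class and method names are assumptions, and the measurement of resource overhead and query time is taken as an input rather than actually profiled.

```python
# Illustrative structure for the computation module 31, the selection module 32,
# and the query module 33. The score formula theta = lambda / (A * B) is the one
# inferred from the numerical examples above.

class ComputationModule:
    def __init__(self, lambda_weight: float = 100.0):
        self.lambda_weight = lambda_weight

    def score(self, resource_overhead: float, query_time: float) -> float:
        # Determine the score from the computation resource overhead A and
        # the query time B measured under the same query load.
        return self.lambda_weight / (resource_overhead * query_time)

class SelectionModule:
    def select(self, scores: dict) -> str:
        # The precomputation model with the largest score becomes the target.
        return max(scores, key=scores.get)

class QueryModule:
    def run(self, target_model: str, query: str) -> None:
        # Placeholder: dispatch the query to the selected precomputation model.
        print(f"Executing {query!r} with {target_model}")
```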
A third aspect of this application further provides an electronic device, including at least one processor and at least one memory. The memory is configured to store one or more program instructions. The processor is configured to run the one or more program instructions to execute any of the above methods.
A fourth aspect of this application further provides a computer-readable storage medium, including one or more program instructions. The one or more program instructions are configured to be executed by a processor to perform any of the methods described above.
In this embodiment of the present invention, the processor may be an integrated circuit chip and has a signal processing capability. The processor may be a general processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Each method, step, and logical block diagram disclosed in the embodiments of the present invention may be implemented or executed by the processor. The general processor may be a microprocessor, or the processor may be any conventional processor. In combination with the method disclosed in the embodiments of the present invention, the steps may be directly implemented by a hardware processor, or may be performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a mature storage medium in the field, such as a Random Access Memory (RAM), a flash memory, a Read-Only Memory (ROM), a Programmable ROM (PROM) or an Electrically Erasable PROM (EEPROM), or a register. The processor reads information from the memory, and completes the steps of the above method in combination with its hardware.
The storage medium may be a memory. For example, the storage medium may be a volatile memory or a non-volatile memory, or may include both the volatile and non-volatile memories.
The non-volatile memory may be a ROM, a PROM, an Erasable PROM (EPROM), an EEPROM, or a flash memory.
The volatile memory may be a RAM, which is used as an external high-speed cache. By way of example but not limitation, RAMs in various forms may be adopted, such as a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), and a Direct Rambus RAM (DR RAM).
The storage medium described in this embodiment of the present invention is intended to include, but not limited to, memories of these and any other proper types.
Those skilled in the art should note that, in one or more of the above examples, the functions described in the present invention may be implemented by means of a combination of hardware and software. When implemented by software, the corresponding functions may be stored in a computer-readable medium or transmitted as one or more instructions or codes on the computer-readable medium. The computer-readable medium includes a computer storage medium and a communication medium. The communication medium includes any medium that transmits a computer program from one place to another. The storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
It is apparent to those skilled in the art that the above modules or steps of the present invention may be implemented by a general computing apparatus, and may be gathered together on a single computing apparatus or distributed over a network composed of a plurality of computing apparatuses. Optionally, the above modules or steps may be implemented with program codes executable by the computing apparatus, so that they may be stored in a storage apparatus for execution by the computing apparatus, or they may be fabricated into individual integrated circuit modules respectively, or a plurality of the modules or steps may be fabricated into a single integrated circuit module for implementation. In this way, the present invention is not limited to any specific combination of hardware and software.
The above are only the preferred embodiments of this application and are not intended to limit this application. For those skilled in the art, this application may have various modifications and variations. Any modifications, equivalent replacements, improvements and the like made within the spirit and principle of this application shall fall within the scope of protection of this application.
Number | Date | Country | Kind
---|---|---|---
202110688956.5 | Jun 2021 | CN | national
The present application is a continuation of International Application No. PCT/CN2021/109933, filed on Jul. 31, 2021, which claims the priority benefit of Chinese Application No. 202110688956.5, filed with the Chinese Patent Office on Jun. 21, 2021 and entitled "METHOD AND APPARATUS FOR SCORING PRECOMPUTATION MODEL, DEVICE, AND STORAGE MEDIUM". The entirety of each of the above-mentioned patent applications is hereby incorporated by reference herein and made a part of this specification.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2021/109933 | Jul 2021 | WO
Child | 18092327 | | US