This application relates to the field of storage technologies, and in particular, to a data sending method, a network interface card, and a computing device.
To ensure data reliability in a storage system, data redundancy is usually implemented by using an erasure coding (EC) mechanism or a multi-copy mechanism. To be specific, data is stored in a plurality of storage nodes, and when some of the nodes are faulty, reliability and availability of the data can still be ensured. To implement a multi-copy solution or an EC solution, a computing node that delivers an input/output (I/O) write request needs to prepare the required data and context information (for example, a write address) for a plurality of copies or slices, assemble them into a plurality of groups of data and context information, send the groups to a network interface card to generate a plurality of work requests, place the work requests into corresponding send queues, and send the work requests to the plurality of storage nodes for storage. Because the processes of assembling to generate the work requests and placing the work requests into the queues are performed sequentially at the operating system layer, a larger quantity of slices or copies leads to higher latency overheads in these processes. Especially in a small I/O (for example, 64 byte (B)) scenario, these processes account for a larger proportion of the overall latency.
This application provides a data sending method, a network interface card, and a computing device, to effectively reduce data sending latencies in an EC scenario and a multi-copy scenario.
According to a first aspect, an embodiment of this application provides a data sending method. The method is applied to a network interface card. First, the network interface card obtains first data and a first address; then generates P write requests based on the first data and the first address, where each of the P write requests carries to-be-written data and a corresponding write address, and P is a positive integer greater than 2; then places the P write requests into P send queues (QPs), where the P write requests are in one-to-one correspondence with the P QPs; and finally, sends the P write requests to P storage nodes based on the P QPs, where the write addresses in the P write requests are in one-to-one correspondence with the P storage nodes.
In this method, a data sending function originally executed by a central processing unit (CPU) is offloaded to a network interface card for parallel execution, and the data sending procedure is changed accordingly. Specifically, the network interface card may simultaneously generate a plurality of write requests based on the obtained first data and first address, and place the requests into a plurality of send queues. Because the plurality of send queues may be executed in parallel, a data sending latency (for example, in an EC scenario or a multi-copy scenario) can be effectively reduced. In addition, offloading the function originally executed by the CPU to the network interface card reduces CPU resource occupation. Further, because the network interface card natively performs data sending, offloading the data sending function to the network interface card (instead of other hardware) can improve data sending efficiency.
In a possible design manner, the first data is copied to obtain P pieces of to-be-written data; or the first data is split into P pieces of to-be-written data, where the P pieces of to-be-written data are P pieces of identical data. In a multi-copy scenario, splitting or copying multi-copy data by a dedicated processor of the network interface card can effectively reduce a processing latency.
In a possible design manner, the first data is split into P pieces of to-be-written data, where the P pieces of to-be-written data include n data slices and m check slices corresponding to the n data slices, m and n are positive integers, and P=n+m. In an EC scenario, splitting data by a dedicated processor of the network interface card can effectively reduce a processing latency.
In a possible design manner, the first address is split into P write addresses, where the first address represents a segment of storage space, and each of the P write addresses corresponds to a segment of storage space on one of the P storage nodes. The P pieces of to-be-written data and the P write addresses are assembled into the P write requests, where each write request carries one of the P pieces of to-be-written data and the corresponding one of the P write addresses. In this method, the network interface card splits the address and assembles the addresses and the data into a plurality of write requests such that a processing latency can be effectively reduced.
In a possible design manner, the network interface card obtains the first data and the first address from a processor of a host, where the network interface card is located in the host; or the network interface card directly obtains the first data and the first address from a memory of a host.
In a possible design manner, the write request is a remote direct memory access (RDMA) write request, and the P write addresses respectively correspond to memory storage space on the P storage nodes. When used in a memory EC scenario or a memory multi-copy scenario, this method can further reduce the end-to-end latency.
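For ease of understanding, the following C sketch illustrates the control flow of the first aspect. It is a minimal illustration, not an actual implementation: the helper functions (nic_prepare_data, nic_split_address, and nic_post_to_qp) are hypothetical stand-ins for network interface card internals.

```c
#include <stddef.h>
#include <stdint.h>

#define P 3 /* for example, a three-copy scenario */

/* One write request: a piece of to-be-written data plus its write address. */
struct write_request {
    const uint8_t *data;   /* to-be-written data */
    size_t len;            /* data length */
    uint64_t write_addr;   /* write address on the target storage node */
};

/* Hypothetical NIC helpers; real implementations live in NIC hardware/firmware. */
extern void nic_prepare_data(const uint8_t *first_data, size_t len,
                             const uint8_t *out[P], size_t out_len[P]);
extern void nic_split_address(uint64_t first_addr, uint64_t out_addr[P]);
extern void nic_post_to_qp(int qp_index, const struct write_request *wr);

/* First aspect: generate P write requests and place them into P QPs. */
void nic_send(const uint8_t *first_data, size_t len, uint64_t first_addr)
{
    const uint8_t *data[P];
    size_t data_len[P];
    uint64_t addr[P];

    nic_prepare_data(first_data, len, data, data_len); /* copy or split the data */
    nic_split_address(first_addr, addr);               /* P write addresses */

    for (int i = 0; i < P; i++) {      /* may run in parallel on NIC cores */
        struct write_request wr = { data[i], data_len[i], addr[i] };
        nic_post_to_qp(i, &wr);        /* one-to-one: request i goes to QP i */
    }
}
```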
According to a second aspect, an embodiment of this application further provides a data sending apparatus. The apparatus is used in a network interface card, and the apparatus includes an obtaining module configured to obtain first data and a first address; a processing module configured to generate P write requests based on the first data and the first address, where each of the P write requests carries to-be-written data and a corresponding write address, and P is a positive integer greater than 2, where the processing module is further configured to place the P write requests into P QPs, where the P write requests are in one-to-one correspondence with the P QPs; and a sending module configured to send the P write requests to P storage nodes based on the P QPs, where the write addresses in the P write requests are in one-to-one correspondence with the P storage nodes.
In a possible design manner, the processing module is further configured to copy the first data to obtain P pieces of to-be-written data; or split the first data into P pieces of to-be-written data, where the P pieces of to-be-written data are P pieces of identical data.
In a possible design manner, the processing module is further configured to split the first data into P pieces of to-be-written data, where the P pieces of to-be-written data include n data slices and m check slices corresponding to the n data slices, m and n are positive integers, and P=n+m.
In a possible design manner, the processing module is further configured to split the first address into P write addresses, where the first address represents a segment of storage space, and each of the P write addresses corresponds to a segment of storage space on one of the P storage nodes; and assemble the P pieces of to-be-written data and the P write addresses into the P write requests, where each write request carries one of the P pieces of to-be-written data and the corresponding one of the P write addresses.
In a possible design manner, the obtaining module is further configured to obtain the first data and the first address from a processor of a host, where the network interface card is located in the host; or directly obtain the first data and the first address from a memory of a host.
In a possible design manner, the write request is an RDMA write request, and the P write addresses respectively correspond to memory storage space on the P storage nodes.
According to a third aspect, an embodiment of this application further provides a network interface card. The network interface card includes a processor and a storage device, where the storage device stores computer instructions, and the processor executes the computer instructions to perform the method in any one of the first aspect or the possible design manners of the first aspect.
According to a fourth aspect, an embodiment of this application further provides a computing device. The computing device includes a network interface card and a processor, where the processor is configured to generate first data and a first address, and the network interface card is configured to perform the method in any one of the first aspect or the possible design manners of the first aspect.
According to a fifth aspect, an embodiment of this application further provides a computer storage medium, where the computer storage medium stores a computer program, and when the computer program is run on a computer, the computer is enabled to implement the method in any one of the first aspect or the possible design manners of the first aspect.
For ease of understanding embodiments of this application, some terms used in this application are explained and described first.
Multi-copy is a data redundancy protection mechanism in which one piece of data is copied and the data copies are written into a plurality of nodes in a storage system, while strong data consistency is ensured between the plurality of data copies. In this way, a fault of a single node does not affect a service: the data can be read from another copy that is not faulty, ensuring service reliability.
EC is an erasure coding technology, and is also a data redundancy protection mechanism. In EC, m check pieces of data are computed from n pieces of original data, and any n pieces of the resulting n+m pieces can be used to restore the original data. If the n+m pieces of data are distributed on different nodes of the storage system, when any m or fewer nodes are faulty (at most m pieces of data are lost), the remaining data can be used to restore the original data. In this way, the service is not affected.
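As a concrete illustration of this restoration property, the following C example shows the simplest EC instance: an XOR parity code with n=2 data pieces and m=1 check piece. Any single lost piece can be rebuilt by XOR-ing the remaining pieces; production EC codes (for example, Reed-Solomon codes) generalize this to m greater than 1.

```c
#include <stdio.h>
#include <stdint.h>

#define N 2     /* original data pieces */
#define BLOCK 4 /* bytes per piece */

int main(void)
{
    uint8_t piece[N + 1][BLOCK] = { {1, 2, 3, 4}, {5, 6, 7, 8} };

    /* Encode: the check piece is the XOR of the n data pieces. */
    for (int b = 0; b < BLOCK; b++) {
        piece[N][b] = 0;
        for (int i = 0; i < N; i++)
            piece[N][b] ^= piece[i][b];
    }

    /* Simulate loss of piece 0, then restore it from the remaining pieces. */
    uint8_t restored[BLOCK];
    for (int b = 0; b < BLOCK; b++) {
        restored[b] = 0;
        for (int i = 1; i <= N; i++)
            restored[b] ^= piece[i][b];
    }

    for (int b = 0; b < BLOCK; b++)
        printf("%u ", restored[b]);  /* prints: 1 2 3 4 */
    printf("\n");
    return 0;
}
```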
RDMA: RDMA is a direct memory access technology that allows a computing device to directly read and write data in a memory of another computing device without involving the operating system kernel of the remote node, and the data does not need to be processed by a processor of the remote node in this process. This not only saves a large quantity of CPU resources, but also improves a system throughput and reduces a network communication latency of the system.
Queue: RDMA uses three types of queues: a send queue (SQ), a receive queue (RQ), and a completion queue (CQ). The SQ and the RQ are usually created in pairs, and are referred to as a queue pair (QP). RDMA is a message-based transmission protocol, and data transmission is an asynchronous operation. An operation process of RDMA is as follows.
Hardware with an RDMA engine, for example, the network interface card, may be considered as a queue element processing module. The hardware continuously obtains work requests (WRs) from the work queue (WQ) for execution, and after completing execution, places a work completion (WC) in the CQ.
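The following sketch shows this asynchronous cycle using the libibverbs API, assuming the QP and CQ have already been created and connected; error handling and resource setup are omitted for brevity.

```c
#include <infiniband/verbs.h>

/* Post one work request (WR) to the send queue and wait for its work
 * completion (WC). qp and cq are assumed to be an already connected queue
 * pair and its completion queue; sge describes a registered memory buffer. */
int post_and_wait(struct ibv_qp *qp, struct ibv_cq *cq, struct ibv_sge *sge)
{
    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    struct ibv_wc wc;
    int n;

    wr.wr_id = 1;
    wr.opcode = IBV_WR_SEND;
    wr.sg_list = sge;
    wr.num_sge = 1;
    wr.send_flags = IBV_SEND_SIGNALED;   /* ask the hardware to generate a WC */

    if (ibv_post_send(qp, &wr, &bad_wr)) /* place the WR into the SQ */
        return -1;

    do {                                 /* the hardware places the WC in the CQ */
        n = ibv_poll_cq(cq, 1, &wc);
    } while (n == 0);                    /* busy-poll for brevity */

    if (n < 0 || wc.status != IBV_WC_SUCCESS)
        return -1;
    return 0;
}
```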
The network 30 implements data transmission and communication by using one or a combination of the following protocols: Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), other types of protocols, and network protocols that support an RDMA technology, for example, the InfiniBand (IB) protocol, the RDMA over Converged Ethernet (RoCE) protocol, and the Internet Wide Area RDMA Protocol (iWARP). In a specific implementation process, one or more switches and/or routers may be used to implement communication processing between a plurality of nodes.
The computing node cluster includes one or more computing nodes 10 (where only one computing node 10 is shown in the figure).
The storage node cluster includes one or more storage nodes 20 (where three storage nodes 20 are shown in the figure).
Optionally, at the software layer, a server program (not shown in the figure) may run on each storage node 20.
To ensure reliability of data storage, the storage system uses an EC mechanism or a multi-copy mechanism to implement data redundancy.
In embodiments of this application, the network interface card 102 and the network interface card 201 may be network interface cards that support the RDMA technology.
The memory pool may include a storage device 203 (for example, the foregoing DRAM, SCM, or hard disk) in each storage node 20.
A management node centralizes storage space provided by each storage node 20, and uses the centralized storage space as the memory pool for unified management. Therefore, physical space of the memory pool comes from the various storage devices included in each storage node. The management node needs to perform unified addressing on the storage space added to the memory pool. Through unified addressing, each segment of space of the memory pool has a unique global address. Space indicated by the global address is unique in the memory pool, and each storage node 20 knows the meaning of the address. After physical space is allocated to a segment of space of the memory pool, a global address of the space has a corresponding physical address. The physical address indicates the storage device of the storage node where the space represented by the global address is actually located and an offset of the space in the storage device, that is, indicates a location of the physical space. The management node may allocate the physical space to each global address after creating the memory pool, or may allocate the physical space to a global address corresponding to a data write request when receiving the data write request. For example, the foregoing Plog client applies to the storage nodes for a segment of global address (logical address space), where the global address points to a plurality of storage nodes 20 (for example, 20a to 20c), and may be used to implement memory multi-copy storage. A correspondence between each global address and the physical address of the global address is recorded in an index table, and the management node synchronizes the index table to each storage node 20. Each storage node 20 stores the index table such that the physical address corresponding to a global address can be queried based on the index table when data is subsequently read or written.
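The following C sketch illustrates one possible shape of such an index table; the entry layout and field names are assumptions for illustration, not the actual on-node format.

```c
#include <stdint.h>
#include <stddef.h>

/* One index table entry: a global address mapped to a physical location. */
struct index_entry {
    uint64_t global_addr;  /* unique address in the memory pool */
    uint16_t node_id;      /* storage node holding the physical space */
    uint16_t device_id;    /* storage device (DRAM, SCM, or hard disk) on that node */
    uint64_t offset;       /* offset of the space within that device */
};

/* Look up the physical location of a global address; NULL if unallocated. */
const struct index_entry *lookup(const struct index_entry *table, size_t n,
                                 uint64_t global_addr)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].global_addr == global_addr)
            return &table[i];
    return NULL;
}
```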
Similarly, to ensure reliability of data in the memory, an EC mechanism or a multi-copy mechanism is also used, to implement data redundancy in the memory pool. Principles of the EC mechanism and the multi-copy mechanism of the memory are not described herein again. Compared with conventional EC and multi-copy, memory EC and memory multi-copy are mainly used in small I/O scenarios (less than 2 kilobytes (KB)), where a minimum I/O may be 64 B. For example, in a memory EC solution or a memory multi-copy solution, after receiving a write request, the storage node 20 may directly write data into a memory like the DRAM or the SCM. In a possible implementation, EC and multi-copy implemented based on a one-sided RDMA network can greatly reduce an end-to-end latency. It should be noted that in addition to the EC scenario and the multi-copy scenario, this application is also applicable to another scenario in which data needs to be sent. Embodiments of this application are also not limited to an RDMA transmission scenario. The method in embodiments of this application may also be applied to a write request that is based on another network protocol, provided that there is a network interface card device and the network interface card can generate a plurality of queues. The RDMA write request in embodiments of this application is merely used as an example for ease of understanding the solution by a reader.
In practice, in both the multi-copy scenario and the EC scenario, during data sending, the required data, addresses, and context information need to be prepared for a plurality of copies or slices first; the data and context information that are to be sent to a plurality of nodes are then assembled to generate a plurality of work queue elements (WQEs), and the WQEs are placed into corresponding QPs (which are SQs herein). Then, the data is sent to a plurality of storage nodes 20. The operation processes of assembling to generate the WQEs and placing the WQEs into the QPs (referred to as encode and send processes in embodiments of this application) are performed at the operating system layer in a computing node 10 in series. In other words, a CPU needs to sequentially perform a plurality of encode and send processes such that the data and the write addresses that are to be sent to the plurality of nodes can be placed into the SQ queues in an RDMA network interface card. For example, in a three-copy scenario, a processor 101 of the computing node 10 needs to perform three encode and send processes in sequence.
It can be learned that a larger quantity of slices or copies indicates higher latency overheads. Especially in a small I/O (for example, 64 B) scenario (for example, memory multi-copy and memory EC), the encode and send processes account for a larger proportion of the latency in the multi-copy scenario and the EC scenario. The latency proportion in the multi-copy scenario may be 25%, and the latency proportion in the EC scenario may be more than 35%. If a multithreading concurrency operation or a coroutine concurrency operation is started at the operating system layer, the CPU latency overheads caused by such an operation are even higher than those of the serial encode and send processes.
In view of the foregoing problem, embodiments of this application provide a data sending method, which may be applied to the foregoing storage system.
The following specific embodiment describes an overall scenario of embodiments of this application.
First, a computing node 10 receives an EC or multi-copy I/O write request. The request carries to-be-written data and a virtual address. The virtual address represents an address segment, and corresponds to a segment of logical space in a storage system. The virtual address is visible to an application 103. The I/O write request may be generated by the application 103 of the computing node, or may be sent by another storage node or client server. This is not limited in this application.
In a possible implementation, the storage system uses logical unit number (LUN) semantics for communication. The address segment may be identified by three factors: a LUN identifier (ID), a logical block address (LBA), and a length. The three factors represent a determined address segment, which is used to index a global address.
In another possible implementation, the storage system uses memory semantics for communication. For example, space of a DRAM is mapped to the application of the computing node 10 or another client server such that the computing node 10 can sense the space (referred to as virtual space in this embodiment) of the DRAM and access the virtual space. In this scenario, an address carried in to-be-read/written data sent by the computing node 10 to a storage node 20 may include a virtual space ID, and a start address and a length of the virtual space, which are used to represent an address segment.
The foregoing descriptions are merely used as an example, and a specific representation manner of a write address is not limited in this application.
Then, the computing node 10 needs to perform splitting based on the storage space corresponding to the foregoing virtual address, and prepare the write addresses. The write addresses are used to write the EC or multi-copy data to different storage nodes. In a possible implementation, a client 104 of the computing node receives the foregoing EC or multi-copy I/O write request, and completes preparation of the data and the write addresses.
Further, in the storage system, a distributed hash table (DHT) manner is usually used for routing. In the DHT manner, a target partition in the DHT is obtained based on the foregoing virtual address, a node is determined based on the target partition (where it is assumed that the node is the computing node 10), and then a storage unit S is determined based on the node. The storage unit S is actually a segment of logical space, and the actual physical space still comes from a plurality of storage nodes 20. For example, the storage unit S is a set including a plurality of logical blocks, and different logical blocks may correspond to physical blocks on different storage nodes. In this case, for a distributed storage system that supports Plog write, the computing node 10 may use the Plog client 104 to index, in the DHT manner again, the Plogs on the plurality of physical storage nodes corresponding to the storage unit. Using the Plog write as an example, the write address may include: (a) an offset, for example, an offset at which data is written into a hard disk or an SCM; (b) a Plog ID, indicating an identifier of a segment of Plog space, and corresponding to encapsulation of a segment of byte-level address space that supports appending; and (c) a size, to be specific, a size of the written data. Therefore, each time data of a specific size is written, the data is appended at the current offset. For example, after the data is written successfully, the Plog sets the current written position to offset+size.
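For illustration, the Plog write address and its append semantics might be represented as follows; the struct, field widths, and function are assumptions for this sketch, not an actual Plog interface.

```c
#include <stdint.h>

/* Illustrative Plog write address: identifies a Plog and an append position. */
struct plog_write_addr {
    uint64_t plog_id;  /* identifier of a segment of append-only Plog space */
    uint64_t offset;   /* current write offset within the Plog */
    uint32_t size;     /* size of the data to be written */
};

/* Append semantics: after a successful write of `size` bytes at `offset`,
 * the current written position advances to offset + size. */
static void plog_commit(struct plog_write_addr *a)
{
    a->offset += a->size;
}
```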
In addition, the computing node 10 further needs to be responsible for preparing slice data or multi-copy data. In embodiments of this application, a processor 101 of the computing node does not assemble a WQE, but directly sends the prepared write address and the prepared data to a network interface card 102.
Finally, the network interface card 102 splits the data and the write address that are carried in a first message, assembles the data and the addresses with a context to generate a plurality of WQEs in parallel, places the WQEs into QPs, and sends the WQEs to a plurality of storage nodes in a storage node cluster by using RDMA requests. Each storage node then completes data writing.
To further explain the method provided in embodiments of this application, and in particular, the content performed by the network interface card, the following describes a schematic flowchart of the data sending method with reference to the following steps.
Step 301: A network interface card 102 receives first data and a first address that are sent by a processor 101.
The first data and the first address may be data and a write address that are prepared by the processor 101 of a computing node 10 for a multi-copy scenario and an EC scenario. A client 104 needs to first prepare the first data and the write address. Then, the processor 101 sends the first data and the write address to the network interface card 102, in other words, the network interface card obtains the first data and the first address from the processor 101. In an optional manner, the network interface card 102 may alternatively directly obtain the first data and the first address from a memory of the computing node 10.
Optionally, the write address may further include a stripe ID, a node ID, an offset, and a length. The stripe ID indicates a stripe to which a data write request belongs. The node ID indicates a storage node of the stripe. The offset indicates an offset of a write location relative to a start location of the stripe, that is, indicates the location from which data starts to be written in the stripe. The length indicates a size of the to-be-written data.
The foregoing descriptions are merely an example to facilitate understanding by a reader. This application does not impose any limitation on the form of the write address.
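Purely for illustration, the stripe-form write address described above might be represented as the following C struct; the field names and widths are assumptions.

```c
#include <stdint.h>

/* Illustrative stripe-form write address (see the description above). */
struct stripe_write_addr {
    uint32_t stripe_id;  /* stripe to which the data write request belongs */
    uint16_t node_id;    /* storage node of the stripe */
    uint64_t offset;     /* write location relative to the start of the stripe */
    uint32_t length;     /* size of the to-be-written data */
};
```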
It should be noted that the first address may directly include a plurality of write addresses, which the network interface card 102 may obtain by directly extracting them from the received message or by simply splitting the first address. Optionally, the first address may alternatively be an entire segment of write address.
Step 302: The network interface card 102 generates a plurality of write requests.
For example, the network interface card 102 generates P RDMA write requests based on the first data and the first address, where each of the P RDMA write requests carries to-be-written data and a corresponding write address, and P is a positive integer greater than 2.
It should be noted that the network interface card 102 may be a smart network interface card (NIC), and a processor 106 in the network interface card 102 may be a multi-core CPU, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or the like, for example, an ASIC chip with a plurality of CPU cores, to implement multi-core concurrent scheduling. In a possible implementation, the processor 106 is a dedicated offloading processor in the network interface card, and has almost no scheduling latency.
Further, an EC scenario is used as an example. First, the network interface card 102 receives the first data and the first address that are sent by the processor 101. Then, the network interface card 102 concurrently schedules a plurality of threads by using a plurality of cores of the processor 106, and splits and assembles the first data and the first address, to generate P RDMA write requests (that is, WQEs) in parallel. Each RDMA write request carries copy data or slice data, and the write address required for storing the data on a different storage node.
When the obtained first data directly includes P pieces of to-be-written data, the network interface card 102 may directly extract the P pieces of to-be-written data from the received message or simply split the received message to obtain the P pieces of to-be-written data (as shown in the figure).
Optionally, when the first address is an entire segment of write address, the network interface card 102 performs specific calculation and processing to split the first address into a plurality of available write addresses.
Similarly, when the obtained first address directly includes P write addresses, the network interface card 102 may directly extract the P write addresses from the received message or simply split the first address to obtain the P write addresses (as shown in the figure).
A specific form of the write address has been described above, for example, content such as the stripe ID, the node ID, the offset, and the length.
Optionally, when the first data contains only part of the data, the network interface card 102 can obtain the plurality of pieces of data only after performing specific calculation and processing. For example, in the three-copy scenario, the network interface card copies the first data to obtain the P pieces of to-be-written data, or splits the first data into the P pieces of to-be-written data, where the P pieces of to-be-written data are P pieces of identical copy data. The first data may contain only one piece of copy data. In this case, the network interface card 102 can generate three copies only after copying the data to obtain the other two pieces of data.
After obtaining the P pieces of to-be-written data and the P write addresses, the network interface card 102 assembles the data and the addresses into P RDMA write requests. Each RDMA write request carries one of the P pieces of to-be-written data and one of the corresponding P write addresses.
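Using the libibverbs API as a concrete reference, assembling one such RDMA write request might look like the following sketch; the local key (lkey) and remote key (rkey) are assumed to have been obtained through memory registration and connection setup.

```c
#include <infiniband/verbs.h>
#include <stdint.h>

/* Assemble one RDMA write WQE from one piece of to-be-written data and its
 * write address. lkey/rkey are assumed to come from memory registration and
 * connection setup, respectively. */
void assemble_wqe(struct ibv_send_wr *wr, struct ibv_sge *sge,
                  void *data, uint32_t len, uint32_t lkey,
                  uint64_t remote_addr, uint32_t rkey, uint64_t id)
{
    sge->addr = (uintptr_t)data;           /* local buffer holding the copy/slice */
    sge->length = len;
    sge->lkey = lkey;

    wr->wr_id = id;
    wr->next = NULL;
    wr->sg_list = sge;
    wr->num_sge = 1;
    wr->opcode = IBV_WR_RDMA_WRITE;        /* one-sided RDMA write */
    wr->send_flags = IBV_SEND_SIGNALED;
    wr->wr.rdma.remote_addr = remote_addr; /* write address on the storage node */
    wr->wr.rdma.rkey = rkey;
}
```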
Step 303: The network interface card 102 sends the plurality of write requests to a plurality of storage nodes in parallel.
For example, the network interface card 102 places the P RDMA write requests into P QPs, where the P RDMA write requests are in one-to-one correspondence with the P QPs, and then sends the P RDMA write requests to the P storage nodes over the network based on the P QPs, where the write addresses in the P RDMA write requests are in one-to-one correspondence with the P storage nodes.
Further, the concurrently scheduled computing modules of the network interface card respectively submit the foregoing plurality of WQEs (to be specific, the P RDMA write requests) to the send queues (SQs). An SQ is used by the computing node to send a work request to a storage node, and an RQ is used by the storage node to receive the work request sent by the computing node. Each SQ on each computing node is associated with an RQ of a data receiving end such that the storage node 20 and the computing node 10 can communicate with each other by using a queue pair.
In the three-copy scenario shown in the figure, the network interface card 102 places three RDMA write requests into three SQs, and sends the requests to storage nodes 20a to 20c respectively.
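Continuing the libibverbs sketch, submitting the P WQEs to their P send queues amounts to one ibv_post_send call per queue pair; on a multi-core network interface card processor, these calls may be issued concurrently. The post_parallel function below is illustrative only.

```c
#include <infiniband/verbs.h>

/* Submit P assembled WQEs to P queue pairs, one WQE per QP (see the
 * assemble_wqe sketch above). Returns 0 on success. */
int post_parallel(struct ibv_qp *qps[], struct ibv_send_wr wrs[], int p)
{
    for (int i = 0; i < p; i++) {  /* each iteration may run on its own NIC core */
        struct ibv_send_wr *bad_wr = NULL;
        if (ibv_post_send(qps[i], &wrs[i], &bad_wr))
            return -1;             /* posting to SQ i failed */
    }
    return 0;
}
```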
In this embodiment of this application, the processor 106 may invoke computer-executable instructions stored in the network interface card 102 such that the network interface card 102 can perform the operations performed by the network interface card 102 in the foregoing embodiment.
Step 304: Write data into the plurality of storage nodes.
Further, after receiving the write requests sent by the network interface card 102, the plurality of storage nodes (for example, 20a to 20d) store the data based on the data and the write addresses that are carried in the write requests.
For example, in the three-copy scenario shown in the figure, the storage nodes 20a to 20c each write one copy of the data into the storage space indicated by the corresponding write address.
For another example, in the EC 2+2 scenario shown in the figure, the storage nodes 20a to 20d respectively write the two data slices and the two check slices into the storage space indicated by the corresponding write addresses.
In a possible implementation, the write addresses in the write requests may be addresses in the memories of the storage nodes 20a to 20c. This is a memory three-copy scenario or a memory EC scenario. For example, the memory may be an SCM, which supports byte-level addressing. The network interface card of the storage node 20 may directly write a copy or a slice into the memory based on the write address. In this scenario, the method in embodiments of this application can greatly shorten the latency of the encode and send process, and improve processing efficiency in the EC scenario and the three-copy scenario.
Based on the same concept as the method embodiments, an embodiment of this application further provides a data sending apparatus. The data sending apparatus may be deployed on a network interface card of a computer system or a service node (for example, a computing node 10 or a storage node 20), and is configured to perform the method performed by the network interface card 102 in the foregoing method embodiments. The apparatus includes an obtaining module 401, a processing module 402, and a sending module 403.
Further, the obtaining module 401 is configured to obtain first data and a first address. Optionally, the obtaining module 401 is further configured to obtain the first data and the first address from a processor of a host, where the network interface card is located in the host; or directly obtain the first data and the first address from a memory of a host. The host herein may be the computing node 10.
The processing module 402 is configured to generate P write requests based on the first data and the first address, where each of the P write requests carries to-be-written data and a corresponding write address, and P is a positive integer greater than 2. The processing module is further configured to place the P write requests into P QPs, where the P write requests are in one-to-one correspondence with the P QPs.
Optionally, the processing module 402 is further configured to: copy the first data to obtain P pieces of to-be-written data; or split the first data into P pieces of to-be-written data, where the P pieces of to-be-written data are P pieces of identical data.
Optionally, the processing module 402 is further configured to split the first data into P pieces of to-be-written data, where the P pieces of to-be-written data include n data slices and m check slices corresponding to the n data slices, m and n are positive integers, and P=n+m.
Optionally, the processing module 402 is further configured to split the first address into P write addresses, where the first address represents a segment of storage space, and each of the P write addresses corresponds to a segment of storage space on one of the P storage nodes; and assemble the P pieces of to-be-written data and the P write addresses into the P write requests, where each write request carries one of the P pieces of to-be-written data and the corresponding one of the P write addresses.
The sending module 403 is configured to send the P write requests to P storage nodes based on the P QPs, where the write addresses in the P write requests are in one-to-one correspondence with the P storage nodes.
Optionally, the write request is an RDMA write request, and the P write addresses respectively correspond to memory storage space on the P storage nodes.
This application further provides a chip. The chip includes a processor and a communication interface. The communication interface is configured to communicate with a processor of a device in which the chip is located. The processor of the chip may be implemented in a form like that of the processor 106, and is configured to implement the functions of the operation steps of the method performed by the network interface card 102 in the computing node 10 in embodiments of this application. For brevity, details are not described herein again.
Optionally, the chip may alternatively be an offload card other than the network interface card 102 in the computing node 10.
This application further provides a network interface card. A structure of the network interface card is similar to that of the network interface card 102 described above, and the network interface card can perform the method performed by the network interface card 102 in the foregoing embodiments.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used to implement the foregoing embodiments, all or some of the foregoing embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the procedures or the functions according to embodiments of the present disclosure are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium. The semiconductor medium may be a solid-state drive (SSD).
A person skilled in the art should understand that embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may use a form of a hardware-only embodiment, a software-only embodiment, or an embodiment with a combination of software and hardware. In addition, this application may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a compact disc read-only memory (CD-ROM), an optical memory, and the like) that include computer-usable program code.
This application is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to this application. It should be understood that the computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. The computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine such that the instructions executed by the computer or the processor of the other programmable data processing device generate an apparatus for implementing a specific function in one or more procedures in the flowcharts and/or in one or more blocks in the block diagrams.
The computer program instructions may be stored in a computer-readable memory that can instruct the computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
The computer program instructions may alternatively be loaded onto the computer or another programmable data processing device such that a series of operations and steps are performed on the computer or the other programmable device, to generate computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
It is clear that, a person skilled in the art can make various modifications and variations to this application without departing from the scope of this application. In this way, this application is intended to cover these modifications and variations of this application provided that they fall within the scope of the claims of this application and their equivalent technologies.
This is a continuation of International Patent Application No. PCT/CN2022/111169 filed on Aug. 9, 2022, which claims priority to Chinese Patent Application No. 202110910507.0 filed on Aug. 9, 2021. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.