This application claims priority to and benefits of Chinese patent Application No. 202210773929.2, filed with the China National Intellectual Property Administration (CNIPA) on Jul. 1, 2022. The entire contents of the above-identified application are incorporated herein by reference.
The disclosure relates generally to a customized board for memory accessing.
While traditional deep learning models are good at pattern recognition and data mining by capturing hidden patterns of Euclidean data (e.g., images, text, videos), graph neural networks (GNNs) have been shown to extend the power of machine learning to non-Euclidean domains represented as graphs with complex relationships and interdependencies between objects. Research has shown that GNNs can exceed state-of-the-art performance on applications ranging from molecular inference to community detection.
GNNs can be a very effective model for unstructured data modeling and processing. Recently, GNNs have become increasingly utilized in applications such as recommendation systems, risk control systems, etc. Graph data may be unstructured. As a result, accessing graph data may result in random memory accesses.
Various embodiments of the present specification may include hardware circuits, systems, and methods for efficient memory accessing for graph neural network processing.
According to one aspect, a system comprises a host, and a circuitry board, wherein the circuitry board comprises: a plurality of memory drives configured to store structure data of a graph and attribute data of the graph; and an access engine circuitry communicatively coupled with each of the plurality of memory drives and the host, wherein the access engine circuitry is configured to: fetch a portion of the structure data of the graph from one or more of the plurality of memory drives; perform node sampling using the fetched portion of the structure data to select one or more sampled nodes; fetch a portion of the attribute data of the graph from two or more of the plurality of memory drives in parallel according to the selected one or more sampled nodes; and send the fetched portion of the attribute data of the graph to the host; and the host is communicatively coupled with the circuitry board and configured to receive the portion of the attribute data of the graph from the circuitry board, the host comprising: one or more processors configured to perform graph neural network (GNN) processing for the graph using the portion of the attribute data of the graph.
In some embodiments, the access engine circuitry is implemented on a field programmable gate array (FPGA) located on the circuitry board.
In some embodiments, in the system, the plurality of memory drives on the circuitry board are solid state drives (SSDs).
In some embodiments, the plurality of memory drives on the circuitry board have the same memory capacity.
In some embodiments, the access engine circuitry is further configured to access the structure data and the attribute data of the graph from a memory location outside of the circuitry board.
In some embodiments, the one or more processors of the host are central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs), neural processing units (NPUs), or graph neural network processing units.
In some embodiments, the host further comprises: one or more double data rate (DDR) synchronous dynamic random access memories (SDRAMs) communicatively coupled with the one or more processors of the host and the circuitry board, the one or more DDR SDRAMs configured to: store the portion of the attribute data of the graph from the circuitry board; and facilitate the one or more processors of the host in performing GNN processing.
In some embodiments, the circuitry board further comprises a dedicated memory communicatively coupled to the access engine circuitry, wherein the dedicated memory comprises one or more double data rate (DDR) synchronous dynamic random access memories (SDRAMs) and is configured to facilitate an implementation of one or more controllers for controlling access to the plurality of memory drives.
In some embodiments, the host is communicatively coupled with a plurality of the circuitry boards, and the host is configured to communicate with each of the plurality of the circuitry boards in parallel.
In some embodiments, the host is further configured to perform memory management on the plurality of the circuitry boards using open-channel controllers of a plurality of access engine circuitries in the plurality of the circuitry boards.
According to another aspect, a computer-implemented method comprises: fetching, by an access engine circuitry implemented on a circuitry board, a portion of structure data of a graph from one or more of a plurality of memory drives implemented on the circuitry board; performing, by the access engine circuitry, node sampling using the fetched portion of the structure data of the graph to select one or more sampled nodes; fetching, by the access engine circuitry, a portion of attribute data of the graph from two or more of the plurality of memory drives in parallel according to the selected one or more sampled nodes; sending, by the access engine circuitry, the fetched portion of the attribute data of the graph to a host, wherein the host is outside of the circuitry board; and performing, by one or more processors of the host, graph neural network (GNN) processing for the graph using the fetched portion of the attribute data of the graph.
According to another aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: fetching, by an access engine circuitry implemented on a circuitry board, a portion of structure data of a graph from one or more of a plurality of memory drives implemented on the circuitry board; performing, by the access engine circuitry, node sampling using the fetched portion of the structure data of the graph to select one or more sampled nodes; fetching, by the access engine circuitry, a portion of attribute data of the graph from two or more of the plurality of memory drives in parallel according to the selected one or more sampled nodes; and sending, by the access engine circuitry, the fetched portion of the attribute data of the graph to a host to make the host perform graph neural network (GNN) processing for the graph using the fetched portion of the attribute data of the graph, wherein the host is outside of the circuitry board.
These and other features of the systems, methods, and hardware devices disclosed, and the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture will become more apparent upon consideration of the following description and the appended claims referring to the drawings, which form a part of this specification, where like reference numerals designate corresponding parts in the figures. It is to be understood, however, that the drawings are for illustration and description only and are not intended as a definition of the limits of the invention.
The specification is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present specification. Thus, the specification is not limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Data may be structured or unstructured. For structured data, information may be arranged according to a pre-set data model or schema. For unstructured data, information may not be arranged using a pre-set data model or in a pre-defined manner. For example, a text file (e.g., emails, reports, etc.) may include information (e.g., individual letters or words) that does not have a pre-defined structure. As a result, the unstructured data may include irregularities and ambiguities that make it difficult to understand using traditional programs or data structures. Moreover, accessing unstructured data from a computer memory can involve a large number of random memory accesses, which can make memory accessing tedious and inefficient.
One way to represent unstructured data is by using graphs. A graph is a data structure comprising two components—nodes (or vertices) and edges. For example, a graph G may be defined as a collection of a set of nodes V and a set of edges E connecting the set of nodes. A node in a graph may have a set of features or attributes (e.g., a user profile in a graph representing a social network). A node may be defined as an adjacent node of another node, if they are connected by an edge. The graph may be a highly flexible data structure, as the graph may not require pre-defined rules to determine how many nodes it contains or how the nodes are connected by edges. Because the graph may provide great flexibility, it is one of the data structures that are widely used to store or represent unstructured data (e.g., text files). For example, the graph can store data that has a relationship structure, such as between buyers or products in an online shopping platform.
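By way of a non-limiting illustration, the following Python sketch shows one way a small graph with per-node attributes could be represented; the node identifiers, edges, and attribute fields are hypothetical examples and are not part of the disclosed embodiments.

```python
# A minimal sketch of a graph G = (V, E) with per-node attributes.
# Node identifiers, edges, and attribute fields are hypothetical examples.

nodes = ["n0", "n1", "n2", "n3"]                    # set of nodes V
edges = [("n0", "n1"), ("n1", "n2"), ("n0", "n3")]  # set of edges E

# Each node carries a set of features/attributes (e.g., a user profile).
attributes = {
    "n0": {"age": 31, "region": "A"},
    "n1": {"age": 24, "region": "B"},
    "n2": {"age": 45, "region": "A"},
    "n3": {"age": 52, "region": "C"},
}

# Adjacency can be derived on the fly; no pre-defined rule fixes how many
# nodes exist or how they are connected.
adjacency = {n: [] for n in nodes}
for u, v in edges:
    adjacency[u].append(v)
    adjacency[v].append(u)

print(adjacency["n0"])  # ['n1', 'n3'] -- the adjacent nodes of n0
```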
When storing a graph in computer memory, the nodes, edges, and attributes may be stored in many different data structures. One way to store a graph is to separate the attribute data from the corresponding nodes. For example, node identifiers may be stored in an array, with each node identifier providing an address or a pointer that points to the location of the attribute data for the corresponding node. The attributes for all nodes may be stored together, and they may be accessed by reading the address or the pointer stored in the corresponding node identifiers. By separating the attribute data from the corresponding nodes, the data structure may be able to provide faster traversing access on the graph.
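As one possible (hypothetical) illustration of such a layout, the sketch below keeps node identifiers in a lightweight array whose entries act as pointers (offsets) into a separate store holding the attributes of all nodes; the array names and dimensions are assumptions for illustration only.

```python
import numpy as np

# Sketch of a storage layout that separates node identifiers from attribute
# data. Each entry of node_ptr is an offset ("pointer") into attr_store,
# where the attributes of all nodes are packed together.
FEATURE_DIM = 4
NUM_NODES = 5

# Attributes for all nodes stored contiguously (random data for illustration).
attr_store = np.random.rand(NUM_NODES, FEATURE_DIM).astype(np.float32)

# node_ptr[i] gives the row offset of node i's attributes in attr_store.
# Here the mapping is the identity, but it could be any permutation.
node_ptr = np.arange(NUM_NODES)

def get_attributes(node_id: int) -> np.ndarray:
    """Follow the pointer stored for a node identifier to its attributes."""
    return attr_store[node_ptr[node_id]]

# Traversing the graph only touches the lightweight identifier/pointer array;
# attribute data is read only when it is actually needed.
print(get_attributes(3))
```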
A graph neural network (GNN) is a type of neural network that may directly operate on a graph. The GNN may be more suitable than traditional neural networks (e.g., a convolutional neural network) for operations on a graph, since the GNN may be better equipped to accommodate the arbitrary size of the graph or the complex topology of the graph. The GNN may perform inference on data described in graph formats. The GNN is capable of performing node-level, edge-level, or graph-level prediction tasks.
GNN processing may involve GNN training and GNN inference, both of which may involve GNN computations. A typical GNN computation on a node (or vertex) may involve aggregating the features (e.g., attribute data) of its neighbors (direct neighbors or each neighbor's neighbors) and then computing new activations of the node for determining a feature representation (e.g., feature vector) of the node. Therefore, GNN processing for a small number of nodes often requires input features of a significantly larger number of nodes. Taking all neighbors for message aggregation is too costly, since the nodes needed for input features would easily cover a large portion of the graph, especially for real-world graphs that are colossal in size (e.g., with hundreds of millions of nodes and billions of edges).
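For illustration only, the snippet below sketches a single such aggregation step for one node: the neighbors' feature vectors are aggregated (a simple mean is used here as one example) and the result is transformed into a new activation. The feature vectors and the weight matrix are randomly generated placeholders, not actual model parameters.

```python
import numpy as np

# Illustrative sketch of one GNN aggregation/update step for a single node.
rng = np.random.default_rng(0)

features = {                 # attribute data (feature vectors) per node
    "root": rng.random(8),
    "a": rng.random(8),
    "b": rng.random(8),
    "c": rng.random(8),
}
neighbors = {"root": ["a", "b", "c"]}  # direct neighbors of the root node
W = rng.random((8, 16))  # hypothetical weights: [own ‖ aggregated] -> new activation

def aggregate_and_update(node: str) -> np.ndarray:
    # 1. Aggregate the neighbors' features (mean aggregation as one example).
    agg = np.mean([features[n] for n in neighbors[node]], axis=0)
    # 2. Compute the new activation (feature representation) of the node.
    return np.tanh(W @ np.concatenate([features[node], agg]))

print(aggregate_and_update("root"))
```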
To make GNNs more practical for these real-world applications, node sampling is often adopted to reduce the number of nodes to be involved in the message/feature aggregation. For example, positive sampling and negative sampling may be used to determine the optimization objective and the resulting variance in the GNN processing. For a given root node whose feature representation is being computed, the positive sampling may sample those graph nodes that have connections (direct or indirect) via edges with the root node (e.g., connected to and within a preset distance from the root node); the negative sampling may sample those graph nodes that are not connected via edges with the root node (e.g., outside of the preset distance from the root node). The positively sampled nodes and the negatively sampled nodes may be used to train the feature representation of the root node with different objectives.
To perform GNN computations, a system may retrieve graph data from a memory, and send the data to one or more processors for processing.
As shown in
In some embodiments, as shown in
The GNN sampler 222 may be configured to select, according to the edge information of the one or more root nodes, one or more sampled nodes for GNN processing. In some embodiments, the GNN sampler 222 may select the one or more sampled nodes according to positive sampling or negative sampling. For example, based on the positive sampling, the one or more sampled nodes may be selected from nodes that have a connection via edges with the one or more root nodes (e.g., adjacent to the one or more root nodes). Based on the negative sampling, the one or more sampled nodes may be selected from nodes that are not directly connected via edges with the one or more root nodes (e.g., not adjacent or close to the one or more root nodes). In some embodiments, the positive sampling may select from the neighboring nodes of the root node that are connected to and within a preset distance from the root node. The connection may be a direct connection (one edge between the source node and the destination node) or an indirect connection (multiple edges from the source node to the destination node). The "preset distance" may be configured according to the implementation. For example, if the preset distance is one, it means that only the directly connected neighboring nodes are selected for positive sampling. If the preset distance is infinity, it means that the nodes are not connected, whether directly or indirectly. The negative sampling may select from nodes that are outside the preset distance from the root node. It is appreciated that the sampled nodes may be selected using algorithms other than the positive sampling and the negative sampling.
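The following sketch is illustrative only (the example graph, the sample sizes, and the breadth-first traversal are assumptions, not the disclosed implementation): nodes within the preset distance of the root are collected as positive-sampling candidates, and nodes outside that distance serve as negative-sampling candidates.

```python
import random
from collections import deque

# Illustrative sketch of positive/negative sampling around a root node.
adjacency = {
    0: [1, 2], 1: [0, 3], 2: [0], 3: [1], 4: [5], 5: [4],
}

def nodes_within(root: int, preset_distance: int) -> set:
    """Breadth-first search: nodes connected to root within preset_distance hops."""
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == preset_distance:
            continue
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    seen.discard(root)
    return seen

def sample(root: int, preset_distance: int, k: int):
    close = nodes_within(root, preset_distance)       # positive candidates
    far = set(adjacency) - close - {root}             # negative candidates
    positive = random.sample(sorted(close), min(k, len(close)))
    negative = random.sample(sorted(far), min(k, len(far)))
    return positive, negative

print(sample(root=0, preset_distance=2, k=2))
```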
Having selected the sampled nodes, the GNN sampler 222 may send the selection information of the sampled nodes to the GNN attribute processor 223. Based on the information of the sampled nodes, the GNN attribute processor 223 may be configured to fetch from the memory 230 information of the sampled nodes. In some embodiments, the information of the sampled nodes may include one or more features or attributes of each of the sampled nodes (also called attribute data). The GNN attribute processor 223 may be further configured to send the fetched information of the sampled nodes and the information of the one or more root nodes and their edges to the dedicated processors 240. The dedicated processors 240 may perform GNN processing based on the information received from the GNN attribute processor 223.
In some embodiments, the graph structure processor 221 and the GNN attribute processor 223 may fetch information from the memory 230 using the address mapper 224. The address mapper may be configured to provide hardware address information in the memory 230 based on information of nodes and edges. For example, a root node as a part of an input GNN may be identified using an identifier n111 (e.g., node n111 of
The system 200 shown in
Although the system 300 may include accelerated engines and processors to speed up GNN-related calculations, the access engine 310 may become a bottleneck for the overall performance of the system 300, since the data retrieval performed by the access engine may be slower than the data processing performed by the execution engines.
In some embodiments, the GetNeighbor module 410 is configured to access or identify adjacent nodes for an input node identifier. For example, similar to the graph structure processor 221 shown in
In some embodiments, the GetSample module 420 is configured to receive information on one or more nodes from the GetNeighbor module 410 and perform node sampling on the one or more nodes for GNN processing. For example, similar to the GNN sampler 222 shown in
In some embodiments, the GetAttribute module 430 may be configured to receive information of selected or sampled nodes from the GetSample module 420 and fetch attribute information on the sampled nodes from memory (e.g., DDRs shown in
As shown in
In some embodiments, the GAE board 510 shown in
In some embodiments, the access engine 512 shown in
In some embodiments, the sampling module 513 can be configured to perform functions similar to those of the GetNeighbor module 410 and the GetSample module 420. For example, the sampling module 513 can fetch structure data (e.g., information on one or more nodes, their edges, and their neighbors) from the SSDs 511, perform node sampling, and identify node identifiers of sampled nodes. The sampling module 513 can be further configured to send the node identifiers of the sampled nodes to the fetching module 514.
In some embodiments, the fetching module 514 can be configured to perform functions similar to those of the GetAttribute module 430. For example, the fetching module 514 can fetch attribute data of the sampled nodes from the SSDs 511 based on the node identifiers of the sampled nodes. In some embodiments, after the fetching module 514 fetches the attribute data of the sampled nodes, the access engine 512 can be configured to send the attribute data of the sampled nodes to the host 540. In some embodiments, the graph data may not fit onto the SSDs 511 in its entirety. As a result, the fetching module 514 can be configured to fetch the attribute data of the sampled nodes from a remote location (e.g., an SSD located off the GAE board 510).
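Purely as an illustrative sketch (the hash-based data placement, the in-memory stand-ins for the drives, and the remote fallback are assumptions, not the disclosed implementation), the snippet below shows how attribute data for sampled node identifiers might be fetched from several drives in parallel, falling back to a remote location when a node's attributes are not resident on the local drives.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: in-memory dictionaries stand in for the local
# SSDs and for a remote store; real hardware would use block/NVMe I/O.
NUM_LOCAL_DRIVES = 4
local_drives = [dict() for _ in range(NUM_LOCAL_DRIVES)]
remote_store = {}

def place(node_id: int, attr):
    """Spread attribute data across drives (simple hash-based placement)."""
    local_drives[node_id % NUM_LOCAL_DRIVES][node_id] = attr

def fetch_one(node_id: int):
    drive = local_drives[node_id % NUM_LOCAL_DRIVES]
    if node_id in drive:
        return node_id, drive[node_id]
    # Fall back to a remote location when the graph does not fit locally.
    return node_id, remote_store.get(node_id)

def fetch_attributes(sampled_ids):
    """Issue the per-node fetches in parallel across the drives."""
    with ThreadPoolExecutor(max_workers=NUM_LOCAL_DRIVES) as pool:
        return dict(pool.map(fetch_one, sampled_ids))

for i in range(8):
    place(i, [float(i)] * 4)
remote_store[100] = [0.0] * 4          # a node stored off the board
print(fetch_attributes([1, 5, 100]))
```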
In some embodiments, the host 540 can be configured to receive the attribute data of the sampled nodes from the access engine 512, and perform GNN processing. For example, the host 540 can include a processor 541 and a host DDR 542. The host 540 can be configured to store the attribute data of the sampled nodes in the host DDR 542. The processor 541 can be configured to fetch from the host DDR 542 the attribute data of the sampled nodes, and perform graph neural network processing using the fetched attribute data of the sampled nodes. In some embodiments, the processor is similar to the processor 210 or the dedicated processor 240 shown in
In some embodiments, the access engine 610 comprises a memory controller configured to regulate and perform memory accessing of graph data on the plurality of SSDs. In some embodiments, each of the plurality of SSDs has the same storage capacity. For example, each of the plurality of SSDs has a storage capacity of 0.5 terabytes. In some embodiments, the plurality of SSDs can be accessed in parallel. For example, the memory controller of the access engine 610 can fetch graph data from some or all of the plurality of SSDs simultaneously. Therefore, many SSDs can be implemented on the GAE board 600 and accessed by the access engine 610 in parallel. As a result, although SSDs can have a longer latency in memory accessing when compared with a DDR, the parallel accessing among the plurality of SSDs can compensate for the latency issue. Moreover, accessing graph data, such as sampled graph data, can involve accessing random memory locations. As a result, implementing many SSDs on the board and spreading the graph data across the plurality of SSDs can take advantage of the parallelism among the many SSDs from the system design (e.g., GAE 600 of
In some embodiments, the number of the plurality of SSDs to be implemented on the GAE board 600 can be determined based at least on the bandwidth between each of the SSDs and the access engine. For example, if the bandwidth between one SSD and the access engine is 0.5 GB/s and a requirement for the total bandwidth in accessing sampled attribute data is 4 GB/s, the number of SSDs to be implemented on a single GAE board can be 8. In some embodiments, the memory capacity of each SSD can be determined based at least on an estimation of the size of the graph to be stored. For example, if the graphs to be stored are generally under 4 TB in size and there can be as many as 8 SSDs implemented on the GAE board 600, a memory capacity of 0.5 TB or higher for each SSD may suffice. In some embodiments, SSDs with a smaller memory capacity are generally less costly to implement. As a result, if a memory capacity of 0.5 TB or higher for each SSD can suffice for the system requirements, the SSDs with the smallest memory capacity that is at or above 0.5 TB (e.g., 512 GB) may be selected for implementation.
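As a worked illustration of the sizing reasoning above (the numbers simply restate the example values given in this paragraph and are not fixed requirements), the number of SSDs can be derived from the per-drive bandwidth and the target aggregate bandwidth, and the per-drive capacity from the expected graph size:

```python
import math

# Worked example of the sizing arithmetic described above.
per_ssd_bandwidth_gbps = 0.5    # GB/s between one SSD and the access engine
required_bandwidth_gbps = 4.0   # GB/s needed for sampled attribute accesses
max_graph_size_tb = 4.0         # expected upper bound on graph size

num_ssds = math.ceil(required_bandwidth_gbps / per_ssd_bandwidth_gbps)
per_ssd_capacity_tb = max_graph_size_tb / num_ssds

print(num_ssds)             # 8 SSDs on the board
print(per_ssd_capacity_tb)  # 0.5 TB (so a 512 GB drive per slot may suffice)
```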
In some embodiments, the GAE board 600 comprises a memory 620 configured to facilitate the memory accessing on the plurality of SSDs. In some embodiments, the memory 620 can be a dedicated memory for the GAE board 600. In some embodiments, the memory 620 can be a DRAM (e.g., DDR SDRAM, DDR4 SDRAM, etc.). In some embodiments, the memory 620 is located on the GAE board 600, separately from the access engine 610. For example, if the access engine 610 is implemented on an FPGA, implementing large dedicated memories on the FPGA can be costly. As a result, the memory 620 can be implemented separately from the FPGA to reduce cost. At the same time, the memory 620 is still located on the same board as the access engine 610, hence preserving data transfer efficiency for the system. In some embodiments, the access engine 610 can be implemented on an application-specific integrated circuit (ASIC).
In some embodiments, the access engine 610 is configured to use the memory 620 to facilitate memory accessing of graph data. For example, when the access engine 610 performs sampling or fetching structure data or attribute data from the plurality of SSDs, the access engine 610 can store data in the memory 620 (e.g., as a memory buffer for performing memory accessing). In some embodiments, the memory 620 can be used as a memory buffer for the communication between the GAE board 600 and a host (e.g., the host 540 of
The GAE board 600 can be configured to be connected with a host (e.g., the host 540). In some embodiments, the connection between the GAE board 600 and the host is based on PCIe or PCIe switches. In some embodiments, the memory controller can be an open-channel controller, and the host can be configured to perform memory management through the open-channel controller on the access engine 610 (e.g., via an operating system on the host).
In some embodiments, the host can be communicatively coupled with a plurality of GAE boards.
In some embodiments, each of the plurality of GAE boards 720 is similar to the others. For example, each of the plurality of GAE boards 720 can have the same number of SSDs, and each of the SSDs can have the same storage capacity. When the hardware structure across each of the GAE boards is similar, memory access management of the system may be improved since, for example, the host 710 can better predict the available memory capacity of each GAE board and perform data storing or data fetching accordingly. In some embodiments, the host 710 can be configured to perform memory management through an open-channel controller on the access engines 721 across each of the plurality of GAE boards 720 (e.g., via an operating system on the host 710).
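A minimal sketch of how a host might issue requests to several identical boards in parallel is shown below; the board interface (a simple fetch() call), the modulo-based routing, and the in-memory data are assumptions for illustration, not the disclosed open-channel mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: each "board" object stands in for one GAE board
# and exposes a simple fetch() call; real boards would be reached over PCIe.
class GAEBoard:
    def __init__(self, board_id: int, capacity_tb: float, num_ssds: int):
        self.board_id = board_id
        self.capacity_tb = capacity_tb  # identical capacity across boards
        self.num_ssds = num_ssds        # simplifies the host's bookkeeping
        self.data = {}                  # node_id -> attribute data

    def fetch(self, node_ids):
        return {n: self.data.get(n) for n in node_ids}

boards = [GAEBoard(i, capacity_tb=4.0, num_ssds=8) for i in range(4)]
boards[42 % len(boards)].data[42] = [0.1, 0.2, 0.3]  # example attribute data

def host_fetch(node_ids):
    # Route each node identifier to a board (simple modulo placement here)
    # and communicate with all boards in parallel.
    per_board = {board: [] for board in boards}
    for n in node_ids:
        per_board[boards[n % len(boards)]].append(n)
    with ThreadPoolExecutor(max_workers=len(boards)) as pool:
        results = pool.map(lambda item: item[0].fetch(item[1]), per_board.items())
    merged = {}
    for partial in results:
        merged.update(partial)
    return merged

print(host_fetch([42, 7, 13]))
```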
Step 810 includes fetching a portion of structure data of a graph from one or more of a plurality of memory drives implemented on the circuitry board. In some embodiments, the fetching is performed by an access engine circuitry (e.g., access engine 512 of
Step 820 includes performing node sampling using the fetched portion of the structure data of the graph to select one or more sampled nodes. In some embodiments, the node sampling is performed by the access engine circuitry. In some embodiments, the node sampling is performed in a similar manner as the GetNeighbor module 410 of
Step 830 includes fetching a portion of attribute data of the graph from the plurality of memory drives according to the selected one or more sampled nodes. In some embodiments, the portion of the attribute data of the graph is fetched by the access engine circuitry. In some embodiments, the portion of the attribute data of the graph is fetched from two or more of the plurality of memory drives in parallel. For example, as shown in
Step 840 includes sending the fetched portion of the attribute data of the graph to a host. In some embodiments, the fetched portion of the attribute data of the graph is sent by the access engine circuitry. In some embodiments, the host is similar to the host 540 of
Step 850 includes performing GNN processing for the graph using the fetched portion of the attribute data. In some embodiments, the GNN processing is performed by the host. In some embodiments, the host comprises one or more processors (e.g., the processor 541 of
Each process, method, and algorithm described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The processes and algorithms may be implemented partially or wholly in application-specific circuitry.
When the functions disclosed herein are implemented in the form of software functional units and sold or used as independent products, they can be stored in a processor executable non-volatile computer-readable storage medium. Particular technical solutions disclosed herein (in whole or in part) or aspects that contribute to current technologies may be embodied in the form of a software product. The software product may be stored in a storage medium, comprising a number of instructions to cause a computing device (which may be a personal computer, a server, a network device, and the like) to execute all or some steps of the methods of the embodiments of the present application. The storage medium may comprise a flash drive, a portable hard drive, ROM, RAM, a magnetic disk, an optical disc, another medium operable to store program code, or any combination thereof.
Particular embodiments further provide a system comprising a processor and a non-transitory computer-readable storage medium storing instructions executable by the processor to cause the system to perform operations corresponding to steps in any method of the embodiments disclosed above. Particular embodiments further provide a non-transitory computer-readable storage medium configured with instructions executable by one or more processors to cause the one or more processors to perform operations corresponding to steps in any method of the embodiments disclosed above.
Embodiments disclosed herein may be implemented through a cloud platform, a server or a server group (hereinafter collectively the “service system”) that interacts with a client. The client may be a terminal device, or a client registered by a user at a platform, where the terminal device may be a mobile terminal, a personal computer (PC), and any device that may be installed with a platform application program.
The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain methods or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The exemplary systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.
The various operations of example methods described herein may be performed, at least partially, by an algorithm. The algorithm may be comprised in program codes or instructions stored in a memory (e.g., a non-transitory computer-readable storage medium described above). Such an algorithm may comprise a machine learning algorithm. In some embodiments, a machine learning algorithm may not explicitly program computers to perform a function but can learn from training data to make a prediction model that performs the function.
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented engines that operate to perform one or more operations or functions described herein.
Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented engines. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)).
The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented engines may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented engines may be distributed across a number of geographic locations.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Although an overview of the subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or concept if more than one is, in fact, disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or sections of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
As used herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A, B, or C” means “A, B, A and B, A and C, B and C, or A, B, and C,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
The term “include” or “comprise” is used to indicate the existence of the subsequently declared features, but it does not exclude the addition of other features. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.