This application claims priority to Chinese Patent Application No. 202311405086.1, filed with the China National Intellectual Property Administration on Oct. 26, 2023, and entitled “Method for Training Contrastive Learning Model, Acquiring Optimization Item, and Pushing Information,” which is incorporated herein by reference in its entirety.
The present disclosure relates to the field of artificial intelligence technology, and more particularly, to a method for training a contrastive learning model, acquiring an optimization item, and pushing information.
Query classification plays a crucial role in e-commerce, with the goal of assigning user queries to appropriate categories within a category tree. Contrastive learning is a commonly used optimization method for category prediction tasks. However, current contrastive learning methods classify query information based on information from leaf categories within the category tree, resulting in relatively low classification accuracy.
In a first aspect, the embodiments of the present disclosure provide a method for training a contrastive learning model. The method includes: obtaining sample data including first category information and query information; predicting a correlation between the first category information and the query information using the contrastive learning model; establishing a loss function based on a prediction result, wherein the loss function is configured to characterize accuracy of the prediction result; optimizing the loss function based on a semantic relationship between the first category information and second category information, wherein the first category information and the second category information correspond to different semantic information within a category tree, and the semantic relationship is related to relative positions of the first category information and the second category information within the category tree; and training the contrastive learning model based on the optimized loss function.
In a second aspect, the embodiments of the present disclosure provide a method for acquiring an optimization item for use in training the contrastive learning model as described in the first aspect. The method includes: obtaining first category information from first sample data and second category information from second sample data; determining a semantic relationship between the first category information and the second category information, wherein the semantic relationship is related to the relative positions of the first category information and the second category information within the category tree; and generating an optimization item based on the semantic relationship, wherein the optimization item is used to optimize the loss function of the contrastive learning model.
In a third aspect, the embodiments of the present disclosure provide an information pushing method. The method includes: obtaining target query information input by a user; predicting the target category information to which the target query information belongs using a contrastive learning model; wherein the contrastive learning model is trained using the training method of the contrastive learning model as described in the first aspect; and pushing object information of at least one object under the target category information to the user.
In a fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored. When executed by a processor, the program implements the method according to any of the embodiments of the present disclosure.
In a fifth aspect, the embodiments of the present disclosure provide a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the program, implements the method according to any of the embodiments of the present disclosure.
In the embodiments of the present disclosure, the semantic relationships between different category information are utilized to optimize the loss function. Since the semantic relationships are related to the relative positional relationships of different category information within the category tree, the optimized loss function based on these semantic relationships is used to train the contrastive learning model. This allows the trained model to perform category prediction based on the position of the category information within the category tree. This training method effectively leverages the hierarchical information of the category tree, thereby significantly improving the classification accuracy of the contrastive learning model for query information.
It should be understood that the above general description and the detailed description that follows are merely exemplary and explanatory, and are not intended to limit the present disclosure.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.
The exemplary embodiments are described in detail herein, and examples thereof are illustrated in the accompanying drawings. In the following description, whenever a reference is made to the drawings, the same or similar elements are denoted by the same reference numerals across different figures, unless otherwise indicated. The embodiments described below do not represent all possible embodiments consistent with the present disclosure. Instead, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms “a,” “an,” “the,” and “this” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term “and/or” as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. Additionally, the term “at least one” as used herein indicates any combination of one or more of the items, including at least two of the items.
It should be understood that although the terms first, second, third, etc., may be used in the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, the first information may also be referred to as the second information, and similarly, the second information may be referred to as the first information. Depending on the context, the term “if,” as used herein, may be interpreted as “when,” “while,” or “in response to determining.”
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present disclosure, and to make the above objectives, features, and advantages of the embodiments of the present disclosure more apparent and comprehensible, the technical solutions in the embodiments of the present disclosure will be described in further detail below in conjunction with the accompanying drawings.
The goal of query classification is to assign user queries to the appropriate categories using a hierarchical product category classification method.
In query classification, a category tree can be pre-established, and the query information provided by the user can be assigned to a category within the category tree (e.g., a leaf category). The category tree is a hierarchical tree structure used to organize a large number of product categories in scenarios such as e-commerce. The category tree can be organized as a Directed Acyclic Graph (DAG) T=(C,E), wherein C and E represent the set of all categories and the set of all edges, respectively. Each node represents a category, and an edge between any two nodes indicates a parent-child relationship between these two categories. For example, an edge (ci, cj)∈E between node ci and node cj indicates that category ci is the parent category of category cj, wherein ci, cj∈C. Each category can have at most one parent category. Each category is further subdivided into one or more subcategories at the next level. This process continues recursively until the final level is reached, where the categories are named leaf categories. For a category ck that has no parent, a virtual root category croot can be added as its parent category in the category tree, with the corresponding edge (croot, ck) added. Based on the number of hops between each category and croot, all categories can be divided into multiple subsets Cl, wherein l represents the l-th category level. The maximum category level is denoted as L.
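As a non-limiting illustration of the above definitions, the category tree, the category level, and the reachability function may be sketched in code as follows; the class and method names are chosen purely for illustration and are not part of the claimed method:

```python
from collections import defaultdict

class CategoryTree:
    """Illustrative sketch of the category tree T = (C, E)."""

    def __init__(self):
        self.parent = {"croot": None}        # virtual root category croot
        self.children = defaultdict(list)

    def add_edge(self, ci, cj):
        """Add an edge (ci, cj): category ci is the parent category of category cj."""
        self.parent.setdefault(ci, "croot")  # a category without a parent hangs under croot
        self.parent[cj] = ci
        self.children[ci].append(cj)

    def level(self, ck):
        """Category level of ck, i.e., the number of hops between ck and croot."""
        hops = 0
        while self.parent.get(ck) is not None:
            ck = self.parent[ck]
            hops += 1
        return hops

    def reach(self, ck, ci):
        """Reachability function: 1 if ck can reach ci in the category tree, else 0."""
        while ci is not None:
            if ci == ck:
                return 1
            ci = self.parent.get(ci)
        return 0
```

For instance, after adding the edge between “tea” and “black tea,” reach("tea", "black tea") would return 1, and level("black tea") would be one greater than level("tea").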
Query classification tasks can be viewed as a type of category prediction task. Contrastive learning is a commonly used optimization method for category prediction tasks. It aims to learn feature representations by comparing similar sample pairs with dissimilar sample pairs and improves the quality of the learned representations by learning a distance metric that maps similar samples closer together and dissimilar samples farther apart in the feature space. However, contrastive learning is incompatible with hierarchical tree structures. It is designed to handle flat categories, treating leaf categories as having a “flat” relationship and ignoring the interrelationships between different categories within the category tree.
For example, in the application scenario shown in
Based on this, the present disclosure provides a training method for a contrastive learning model that uses the hierarchical information of the category tree as prior information to classify the query information provided by the user, thereby improving classification accuracy. As shown in
In step S12, sample data is obtained, which includes category information corresponding to a node on the category tree. This category information can be represented by the name of the category associated with the node on the category tree. For example, in the embodiment shown in
In some embodiments, the sample data also includes the ID corresponding to the category information within the sample data. Each piece of category information can be pre-assigned an ID. After assigning IDs to the various pieces of category information, a mapping relationship can be established between the IDs based on the hierarchical structure of the category tree, thereby determining the parent-child relationships between the various pieces of category information. For example, in the embodiment shown in
The function cp(·,·) can be formally defined as follows to map a pair of category information to their nearest common parent category in the category tree:
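As one illustrative formulation consistent with the description below, the nearest common parent category may be expressed as:

$$cp(c_i, c_j) = \underset{c_k \in C,\ \mathrm{reach}(c_k, c_i)=1,\ \mathrm{reach}(c_k, c_j)=1}{\arg\max}\ \mathrm{level}(c_k)$$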
wherein level(ck) maps the category information ck to its corresponding category level, i.e., level(ck)=l when ck∈Cl; reach(ck, ci) is a reachability function indicating whether ck can reach ci in the category tree, where 1 denotes true; and max represents the selection of the parent category with the highest category level as the common parent category.
The sample data also includes query information, which may include one or more keywords used to describe the object to be queried. The sample data may further include first label information, which characterizes the correlation between the first category information and the query information within the sample data. The first label information can be represented by a binary number; for example, the binary number “0” indicates that the first category information in the sample data is not related to the query information in the sample data, meaning the query information in the sample data does not belong to the category corresponding to the first category information. The binary number “1” indicates that the first category information in the sample data is related to the query information in the sample data, meaning the query information belongs to the category corresponding to the first category information. Taking the embodiment shown in
Typically, query classification tasks are multi-label, multi-class tasks, meaning they involve identifying at least one relevant category from multiple categories for the given query information. The query classification task can be transformed into multiple single-label classification tasks, with each label predicted independently, thereby fully leveraging the category-side information. Taking the embodiment shown in
In Step S14, the sample data can be input into the contrastive learning model to be trained, allowing the model to generate a first category vector corresponding to the first category information in the sample data and a query vector corresponding to the query information in the sample data. Referring to
After obtaining the first category vector and the query vector, the contrastive learning model can generate a prediction result of the correlation between the query information and the first category information based on these vectors. This prediction result can be represented by second label information. Similar to the first label information, the second label information can be represented by a binary number. Alternatively, the second label information can be a real number between 0 and 1 (a probability value), where a higher probability value indicates a greater likelihood that the contrastive learning model predicts the first category information to be relevant to the query information. The specific representation of the second label information can refer to the representation of the first label information and will not be elaborated here. It is understood that the representation of the first and second label information is not limited to binary numbers. For example, they can also be represented by characters or a combination of numbers and characters. The specific representation method is not restricted by the present disclosure.
The first label information represents the true result of the correlation between the query information and the first category information, while the second label information represents the predicted result of this correlation by the contrastive learning model. Therefore, the difference between the first label information and the second label information can be used to indicate the prediction accuracy of the contrastive learning model. In Step S16, a loss function can be established based on the difference between the first label information and the second label information, such as a cross-entropy loss function. Assuming the first label information is denoted as y and the second label information is denoted as ŷ, the cross-entropy loss function can be expressed as follows:
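For a single sample, one common form of this loss, denoted herein as LM, is the binary cross-entropy:

$$L_M = -\big[\, y \log \hat{y} + (1 - y)\log(1 - \hat{y}) \,\big]$$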
The contrastive learning model may also include a Multi-Layer Perceptron (MLP), which can take the query vector and the first category vector as inputs to generate the second label information as output. It should be noted that, although multiple encoders are depicted in the diagram, each encoder can share the same network parameters. Additionally, since the query information may include multiple keywords, to facilitate processing by the encoder, these multiple keywords can be combined and included in a CLS (classification) token. This allows the encoder to directly access the keyword combination within the CLS token and encode these keywords as a single unit. Furthermore, an SEP (separator) token can be appended after the query information to indicate the end of the query sequence.
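As a non-limiting illustration, the structure described above may be sketched as follows, assuming a BERT-style encoder (e.g., from the transformers library) whose output exposes a last_hidden_state attribute; all class, method, and parameter names here are illustrative assumptions rather than the claimed implementation:

```python
import torch
import torch.nn as nn

class ContrastiveQueryClassifier(nn.Module):
    """Illustrative sketch: a shared encoder produces the query vector and the
    category vector, and an MLP head outputs the second label information."""

    def __init__(self, encoder, hidden_dim):
        super().__init__()
        self.encoder = encoder                      # single encoder, shared parameters
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def encode(self, input_ids, attention_mask):
        # Input is assumed to be of the form "[CLS] keyword_1 ... keyword_n [SEP]";
        # the vector at the [CLS] position is used as the sequence-level vector.
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return hidden[:, 0]

    def forward(self, query_inputs, category_inputs):
        h_query = self.encode(*query_inputs)        # query vector
        h_category = self.encode(*category_inputs)  # first category vector
        logits = self.mlp(torch.cat([h_query, h_category], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)                # probability form of the second label information
```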
In some embodiments, the sample data also includes a bag-of-words model for the first category information. The contrastive learning model can generate a category vector corresponding to this bag-of-words model (hereinafter referred to as the “bag-of-words vector”) and obtain the predicted result of the correlation between the query information and the first category information based on the query vector and the bag-of-words vector. This prediction result can be represented by third label information. The representation of the third label information can refer to the representation of the first label information and will not be elaborated on here. In the same sample data, the bag-of-words model and the first category information are used to represent the same category through different representation methods. In query classification tasks, the query information input by the user is often very short, and the amount of information contained in this query is limited. By including a bag-of-words model in the sample data, an augmented sample of the first category information can be constructed, providing more comprehensive category information. This enables the contrastive learning model to extract more useful information, thereby further improving the classification accuracy of the model.
The first label information represents the true result of the correlation between the query information and the first category information. Since the bag-of-words model and the first category information represent the same category using different representation methods, the third label information is also used to indicate the predicted result of the correlation between the query information and the first category information by the contrastive learning model. Therefore, the difference between the first label information and the third label information can be used to indicate the prediction accuracy of the contrastive learning model. After obtaining the third label information, another loss function, denoted as LA, can be generated based on the third label information and the first label information. This loss function can also be a cross-entropy loss function, and its specific representation can refer to Equation (2).
The following is an illustrative example of how to generate a bag-of-words model. One or more product titles under the first category information can be obtained, where each title includes at least one word. The word frequency of each word in the one or more product titles is then determined. Based on the word frequencies, at least one target word is identified from the set of words, and the bag-of-words model for the first category information is generated based on these target words.
At least one product under the first category information can be a product with a high search rate, click-through rate, or purchase rate (ranked in the top k1) within a historical time period. For example, suppose the first category information is “Router,” a title of a product under this category could be “Router 4A Gigabit Edition Dual-Core CPU Dual Gigabit 1200M Dual-Band Wireless Speed 5G Home Smart Router.” This title can be processed using word segmentation to obtain one or more words, and the word frequency of each word can be determined. The word frequency can be represented by the TF-IDF (Term Frequency-Inverse Document Frequency) value of the word. Then, a certain number of words with the highest frequencies (top k2) can be selected as target words. These target words are concatenated to form the bag-of-words model for the first category information. Furthermore, since the length of the input information to the model has an upper limit (denoted as m), the concatenated target words can be truncated to a length less than or equal to m. The truncated and concatenated target words then constitute the bag-of-words model for the first category information.
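As a non-limiting illustration, the bag-of-words construction described above may be sketched as follows; here scikit-learn's TfidfVectorizer stands in for the word segmentation and TF-IDF computation, and k2 and m are the hypothetical hyperparameters named above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def build_bag_of_words(titles, k2=20, m=64):
    """Illustrative sketch: select the top-k2 words by TF-IDF over the product titles
    of a category, concatenate them, and truncate to the input limit m."""
    vectorizer = TfidfVectorizer()            # word segmentation may be substituted here
    tfidf = vectorizer.fit_transform(titles)
    scores = tfidf.sum(axis=0).A1             # aggregate TF-IDF score of each word
    words = vectorizer.get_feature_names_out()
    top_words = [w for w, _ in sorted(zip(words, scores), key=lambda x: -x[1])[:k2]]
    return " ".join(top_words)[:m]            # truncation measured in characters here, for illustration

# Hypothetical usage for the "Router" category:
titles = ["Router 4A Gigabit Edition Dual-Core CPU Dual Gigabit 1200M "
          "Dual-Band Wireless Speed 5G Home Smart Router"]
print(build_bag_of_words(titles))
```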
The above embodiment provides an illustrative example of generating a bag-of-words model. It should be understood that this is merely an exemplary explanation and is not intended to limit the present disclosure. For instance, in other embodiments, an n-gram model of the first category information can also be used as the bag-of-words model for the first category information. Alternatively, the first category information can be split into individual strokes, and the bag-of-words model for the first category information can be generated based on the resulting stroke sequences.
Since the first category information is textual, any addition, deletion, or alteration of some of its characters can easily lead to a meaning that is significantly different from the original text. Therefore, constructing augmented samples for textual information is challenging. In the embodiments of the present disclosure, the bag-of-words model and the first category information represent the same category using different methods. This approach successfully creates an augmented sample of the first category information, thereby addressing the aforementioned challenge.
Traditional contrastive learning only utilizes leaf categories and organizes them in a flat list. Taking the category tree shown in
Based on this, in Step S18, the loss function can be optimized based on the semantic relationship between the first category information and the second category information within the category tree where the first category information is located. The second category information can be the category information from another sample data entry. The semantic relationship between the first category information and the second category information is related to their relative positions in the category tree. This relative positional relationship includes the respective category levels of the first and second category information, as well as the parent-child relationship between the first and second category information.
For example, when the first category information and the second category information are at the same category level, if they belong to the same parent category, the distance between the first category vector corresponding to the first category information and the second category vector corresponding to the second category information should be reduced. If they belong to different parent categories, the distance between the first category vector and the second category vector should be increased. Additionally, if there is a parent-child relationship between the first category information and the second category information, there is likely common semantic information between them, and their category vectors should be aligned accordingly. On the other hand, if there is no parent-child relationship between the first and second category information, there is no shared semantic information, and their category vectors should be distanced from each other.
Optimization items can be established based on the semantic relationship between the first category information and the second category information within the category tree wherein the first category information is located. These optimization items can then be used to optimize the loss function obtained in Step S16. Specifically, the optimization items may include at least one of a first optimization item and a second optimization item. The first optimization item is also referred to as Local Hierarchical Contrastive Loss (LHCL), while the second optimization item is referred to as Global Hierarchical Contrastive Loss (GHCL). LHCL modifies the original formulation of the loss function by introducing the common parent category of the categories being compared. LHCL encourages the model to capture the semantic relationships between categories, thereby producing more meaningful representations (i.e., category vectors) that are consistent with the hierarchical structure of the tree. The effectiveness of LHCL relies on the assumption that the representation of a common parent category should be equivalent to the general semantics of its child categories. GHCL ensures this assumption by aligning the semantic representations across category levels. By adopting these two optimization items, the common semantic components between categories and subcategories are preserved, preventing the disruption of general semantic information during the representation learning process. Referring to
The common parent category information of the first category information and the second category information within the category tree can be obtained. Then, the first non-overlapping semantics between the first category information and the common parent category information, as well as the second non-overlapping semantics between the second category information and the common parent category information, can be obtained. The first optimization item is then obtained based on the first non-overlapping semantics and the second non-overlapping semantics.
Assume that both the first category information and the second category information belong to the l-th category level in the category tree, denoted as ci and cj, respectively. The common parent category information of the first category information and the second category information belongs to the k-th category level, denoted as ck, wherein, ck could belong to the l−1-th category level or any category level prior to the l−1-th category level. Taking the category tree shown in
The second category vector corresponding to the second category information, generated by the contrastive learning model, as well as the third category vector corresponding to the common parent category information, can be obtained. The first non-overlapping semantics can be derived based on the first difference vector between the first category vector and the third category vector. Similarly, the second non-overlapping semantics can be derived based on the second difference vector between the second category vector and the third category vector. Assume that the first category information and the second category information are “black tea” and “green tea,” respectively, with their common parent category information being “tea.” In this case, the first category vector corresponding to “black tea” (denoted as X1) and the third category vector corresponding to “tea” (denoted as Y) can be obtained, resulting in the first difference vector (denoted as Δ1). Similarly, the second category vector corresponding to “green tea” (denoted as X2) and the third category vector Y corresponding to “tea” can be obtained, resulting in the second difference vector (denoted as Δ2). The first difference vector Δ1 can represent the first non-overlapping semantics, while the second difference vector Δ2 can represent the second non-overlapping semantics.
When obtaining the first optimization item, the first similarity between the first difference vector and the second difference vector can be calculated. Additionally, the second similarity between the first category vector and the category vector corresponding to the bag-of-words model of the first category information can be determined. The first optimization item is then derived based on the first similarity and the second similarity.
The method for obtaining the bag-of-words model can be referred to in the previous embodiments and will not be reiterated here. In this context, since the bag-of-words model and the first category information represent the same category, they can be treated as a pair of positive samples. On the other hand, since the first category information and the second category information represent different categories, they can be treated as a pair of negative samples. The optimization items can be determined based on the ratio of the second similarity to the first similarity. The first similarity can be represented by the cosine distance or other distance measures between the first difference vector and the second difference vector. The specific calculation method is as follows:
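For example, using the cosine distance between the two difference vectors, the first similarity may be computed as:

$$\mathrm{sim}(c_i, c_j) = \frac{(h_{c_i} - h_{c_k}) \cdot (h_{c_j} - h_{c_k})}{\lVert h_{c_i} - h_{c_k} \rVert \, \lVert h_{c_j} - h_{c_k} \rVert}$$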
hci and hcj represent the first category vector corresponding to the first category information ci and the second category vector corresponding to the second category information cj, respectively. hck represents the third category vector corresponding to the common parent category information of the first and second category information. The denominator part indicates the normalization process. The method for obtaining the second similarity is similar to that of the first similarity and will not be reiterated here.
After obtaining the first similarity and the second similarity, the first optimization item can be derived based on the following method:
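Consistent with the ratio of the second similarity to the first similarity described above, the first optimization item may, for example, take an InfoNCE-style form, with ck denoting the common parent category cp(ci, cj) of each compared pair:

$$L_{LHCL} = -\sum_{c_i \in batch} \log \frac{\exp\!\big(\mathrm{sim}(h_{c_i}, h_{c_i'})/t\big)}{\sum_{c_j \in batch,\ c_j \neq c_i} \exp\!\big(\mathrm{sim}(h_{c_i} - h_{c_k},\ h_{c_j} - h_{c_k})/t\big)}$$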
batch represents a batch of sample data, hci′ represents the category vector corresponding to the bag-of-words model ci′ of the first category information, and t denotes the temperature parameter. The summation symbol Σ in the denominator indicates the summation of the first similarity between the first category information and any different second category information within the same batch of sample data.
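As a non-limiting illustration, the first optimization item may be computed over a batch as sketched below; the tensor layout, the helper name lhcl_loss, and the pairwise common-parent tensor are assumptions made for this example:

```python
import torch
import torch.nn.functional as F

def lhcl_loss(h_c, h_bow, h_parent, t=0.1):
    """Illustrative sketch of the first optimization item (LHCL).

    h_c:      (B, D) category vectors of a batch of first category information
    h_bow:    (B, D) vectors of the corresponding bag-of-words models (positive samples)
    h_parent: (B, B, D) h_parent[i, j] is the vector of the common parent of c_i and c_j
    """
    # Second similarity: category vector vs. its bag-of-words vector (positive pair).
    pos = F.cosine_similarity(h_c, h_bow, dim=-1) / t                 # (B,)
    # First similarity: non-overlapping semantics of c_i vs. c_j (negative pairs).
    diff_i = h_c.unsqueeze(1) - h_parent                              # (B, B, D)
    diff_j = h_c.unsqueeze(0) - h_parent                              # (B, B, D)
    neg = F.cosine_similarity(diff_i, diff_j, dim=-1) / t             # (B, B)
    neg.fill_diagonal_(float("-inf"))            # exclude c_j == c_i from the summation
    denom = torch.logsumexp(neg, dim=1)          # log of the summed first similarities
    return -(pos - denom).mean()
```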
In the above embodiment, by subtracting the category vector corresponding to the parent category information from the category vector corresponding to the category information, the overlapping semantic parts between the category information and its parent category information can be filtered out. This allows for comparing only the non-overlapping semantic parts between different category information and their common parent category information during similarity comparison, thereby maximizing the differences between the non-overlapping semantic parts as much as possible. These non-overlapping semantic parts are decoupled with the help of the category tree and vary according to the relative positions of the two pieces of category information.
For category information that does not have a common parent category (e.g., “food” and “office supplies” in
At least one piece of first target category information that has a parent-child relationship with the first category information can be identified within the second category information, as well as at least one piece of second target category information within the second category information that does not have a parent-child relationship with the first category information. The second optimization item is then established based on the third similarity between the first category information and each piece of first target category information, as well as the fourth similarity between the first category information and each piece of second target category information. The purpose of the second optimization item is to ensure that the semantic similarity between the first category information and the at least one piece of first target category information is greater than the semantic similarity between the first category information and the at least one piece of second target category information.
Specifically, the contrastive learning model can generate the first category vector corresponding to the first category information, the first target category vector corresponding to each of the at least one first target category information, and the second target category vector corresponding to each of the at least one second target category information. Then, the first average vector of all first target category vectors and the second average vector of all second target category vectors can be obtained. The similarity between the first category vector and the first average vector can be used as the third similarity, while the similarity between the first category vector and the second average vector can be used as the fourth similarity.
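As a non-limiting illustration, the third similarity and the fourth similarity may be computed for one piece of first category information as sketched below; the function name and tensor shapes are assumptions made for this example:

```python
import torch.nn.functional as F

def ghcl_similarities(h_ci, child_vectors, non_child_vectors):
    """Illustrative sketch for one piece of first category information c_i.

    h_ci:              (D,)    first category vector
    child_vectors:     (N1, D) first target category vectors (children of c_i)
    non_child_vectors: (N2, D) second target category vectors (no parent-child relation)
    """
    first_avg = child_vectors.mean(dim=0)        # first average vector
    second_avg = non_child_vectors.mean(dim=0)   # second average vector
    third_sim = F.cosine_similarity(h_ci, first_avg, dim=0)    # to be increased
    fourth_sim = F.cosine_similarity(h_ci, second_avg, dim=0)  # to be decreased
    return third_sim, fourth_sim
```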
Referring again to
The third similarity reflects the similarity between a category information and its subcategory information. Since category information with a parent-child relationship shares common semantic information, the third similarity can be increased to preserve the shared semantic information between the category information and its subcategory information. On the other hand, there is less shared semantic information between category information that does not have a parent-child relationship, so the fourth similarity can be decreased to distance the category vectors corresponding to category information without a parent-child relationship. The second optimization item can be obtained based on the ratio of the third similarity to the fourth similarity, as detailed below:
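Consistent with the ratio of the third similarity to the fourth similarity, the second optimization item for the l-th category level may, for example, be written as:

$$L_{GHCL}^{l} = -\sum_{c_i \in C_l} \log \frac{\exp\!\Big(\mathrm{sim}\big(h_{c_i},\ \underset{(c_i, c_j)\in E}{\mathrm{avg}}\, h_{c_j}\big)/t\Big)}{\exp\!\Big(\mathrm{sim}\big(h_{c_i},\ \underset{(c_i, c_j)\in E}{\mathrm{avg}}\, h_{c_j}\big)/t\Big) + \exp\!\Big(\mathrm{sim}\big(h_{c_i},\ \underset{(c_i, c_j)\notin E}{\mathrm{avg}}\, h_{c_j}\big)/t\Big)}$$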
In the equation, (ci, cj)∈E indicates that the first category information ci and the second category information cj are connected by an edge in E, meaning they have a parent-child relationship. (ci, cj)∉E indicates that the first category information ci and the second category information cj do not have a parent-child relationship. The term avg represents the averaging of the second category vectors hcj corresponding to each second category information cj. The vectors hci and hcj are the category vectors generated by the contrastive learning model for the first category information ci and the second category information cj, respectively.
The above equation (4) represents the second optimization item for the l-th category level. By summing the second optimization items across all category levels, the total second optimization item can be obtained, which can then be used to optimize the loss function. The total second optimization item can be denoted as:
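For example, summing over the L category levels:

$$L_{GHCL} = \sum_{l=1}^{L} L_{GHCL}^{l}$$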
Averaging the category vectors of all sibling category information belonging to the same parent category information filters out the unique semantic components of each category information while retaining the common semantic components. Utilizing this characteristic, GHCL aligns the category vector corresponding to the category information with the common semantics of its subcategory information level by level, while pushing the category vectors corresponding to different category information at the same level far apart on a global scale. Under the influence of GHCL, LHCL can distinguish between the common semantics and the unique semantics of any two pieces of category information.
After obtaining the optimization items and the loss function, these optimization items and the loss function can be weighted to derive the optimized loss function. For example, assuming the loss function includes the aforementioned LM and LA, the optimized loss function L can be expressed as:
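For example, with λ1, λ2, and λ3 as weighting coefficients and LLHCL and LGHCL denoting the first and second optimization items, the optimized loss function may take the form:

$$L = L_M + \lambda_1 L_A + \lambda_2 L_{LHCL} + \lambda_3 L_{GHCL}$$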
wherein, λ1, λ2 and λ3 are all hyperparameters that can be set based on empirical values.
In step S110, the contrastive learning model can be trained using the optimized loss function. Since the optimization process utilizes the relative position relationships of different category information within the category tree, the category vectors generated by the trained contrastive learning model for different category information will be able to represent the relative positions of the corresponding category information within the category tree. For example, the distance between the category vectors corresponding to two subcategories belonging to the same parent category information will be closer, while the distance between the category vectors corresponding to two subcategories that do not share the same parent category information will be farther. Additionally, the distance between the category vector corresponding to a category information and its subcategory information will be smaller than the distance between the category vector corresponding to the same category information and a non-subcategory information.
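As a non-limiting illustration, a single training step with the optimized loss function may be sketched as follows; lhcl_loss refers to the sketch above, while encode_common_parents and ghcl_loss are hypothetical helpers assumed to produce the pairwise common-parent vectors and the second optimization item, respectively:

```python
import torch.nn.functional as F

def training_step(model, batch, optimizer, lambdas=(1.0, 1.0, 1.0)):
    """Illustrative sketch of one optimization step with the optimized loss function L."""
    l1, l2, l3 = lambdas
    # L_M: correlation predicted from the query information and the first category information
    pred = model(batch["query_inputs"], batch["category_inputs"])
    loss_m = F.binary_cross_entropy(pred, batch["labels"])
    # L_A: correlation predicted from the query information and the bag-of-words model
    pred_bow = model(batch["query_inputs"], batch["bow_inputs"])
    loss_a = F.binary_cross_entropy(pred_bow, batch["labels"])
    # Category vectors produced by the shared encoder for the two optimization items
    h_c = model.encode(*batch["category_inputs"])
    h_bow = model.encode(*batch["bow_inputs"])
    h_parent = encode_common_parents(model, batch)   # hypothetical: (B, B, D) common-parent vectors
    loss_lhcl = lhcl_loss(h_c, h_bow, h_parent)      # first optimization item (LHCL)
    loss_ghcl = ghcl_loss(model, batch)              # second optimization item (GHCL), hypothetical helper
    loss = loss_m + l1 * loss_a + l2 * loss_lhcl + l3 * loss_ghcl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```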
It should be noted that the optimization items can be obtained through an optimization module. The optimization module can function as a plugin module, which is only required during the training process. After the contrastive learning model has been trained, the original model structure can be used for inference. In this way, during the inference phase, there is no need to alter the original model structure or increase the complexity of the contrastive learning model. This ensures that no additional processing complexity or time consumption is introduced during the inference phase.
Referring to
The specific details of the embodiments of the present disclosure can be found in the previously described embodiments of the training method. For the sake of brevity, they will not be reiterated here.
Referring to
In step S32, the user can input the target query information through a client or a web interface. For example, in the case of a client application, the client can provide a user interface where the user can edit or select the target query information. In a practical application scenario, such as an e-commerce client, the user can input the name or characteristics of the product they wish to purchase on the user interface of the e-commerce client. The name or characteristics of the product would then serve as the target query information.
In step S34, the contrastive learning model can output at least one target category information associated with the target query information. For example, if the user inputs the target query information “notebook,” the target category information may include both “laptop” and “paper notebook.” The contrastive learning model can be trained based on any of the previously described embodiments, and the specific training method is not reiterated in this embodiment.
In step S36, the object information of at least one object under the target category information can be pushed to the user. Continuing with the aforementioned e-commerce scenario, after the user inputs the target query information on the e-commerce client, the e-commerce client can send the target query information to the e-commerce server. The e-commerce server, where the contrastive learning model is deployed, can predict the target category information associated with the target query information and push the product information of at least one product under the target category information to the user's e-commerce client for display. By performing category prediction, the system can accurately understand the user's intent, thereby retrieving and displaying product information that is strongly related to the user's intent, ultimately increasing the click-through rate and conversion rate for the user.
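As a non-limiting illustration, the server-side pushing flow may be sketched as follows; tokenizer, products_by_category, and the relevance threshold are assumptions made for this example, and model refers to the trained contrastive learning model:

```python
import torch

@torch.no_grad()
def push_for_query(query_text, model, tokenizer, categories, products_by_category, threshold=0.5):
    """Illustrative sketch: predict target category information for the user's query
    and collect the product information to be pushed to the client."""
    query_inputs = tokenizer(query_text)                       # e.g., "[CLS] notebook [SEP]" token ids
    pushed = []
    for category in categories:                                # candidate categories of the category tree
        score = model(query_inputs, tokenizer(category)).item()   # predicted correlation
        if score >= threshold:                                 # e.g., both "laptop" and "paper notebook"
            pushed.extend(products_by_category.get(category, []))
    return pushed                                              # object information pushed for display
```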
The present disclosure also provides a computer-readable storage medium on which computer instructions are stored. When executed by a processor, these computer instructions implement the method described in any embodiment of the present disclosure. The computer-readable storage medium can be phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to store information accessible by a computing device.
From the above description of the embodiments, it can be understood that the embodiments of the present disclosure can be implemented by means of software combined with the necessary general hardware platform. Based on this understanding, the technical solutions of the embodiments of the present disclosure, or at least the parts contributing to the prior art, can be embodied in the form of a software product. This computer software product can be stored in a storage medium, such as ROM/RAM, magnetic disks, optical disks, etc., and includes several instructions to enable a computer device (which can be a personal computer, server, network device, or the like) to execute the methods described in various embodiments of the present disclosure or in certain parts of these embodiments.
The systems, devices, modules, or units described in the above embodiments can be specifically implemented by a computer device or entity, or by a product with certain functionalities. A typical implementation device is a computer, which can take various forms, such as a personal computer, laptop, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email communication device, game console, tablet computer, wearable device, or a combination of any of these devices.
The various embodiments in this disclosure are described in a progressive manner, with similar or identical parts in different embodiments referencing each other, while the emphasis in each embodiment is on the differences from other embodiments. Particularly for the device embodiments, since they are essentially similar to the method embodiments, the descriptions are relatively brief, and relevant details can be found in the descriptions of the method embodiments. The device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separated; in implementing the solutions of the embodiments of this disclosure, the functions of the various modules can be realized in one or multiple software and/or hardware components. It is also possible to select some or all of the modules based on actual needs to achieve the objectives of this embodiment. Those skilled in the art can understand and implement this without undue creative effort.
The above descriptions are merely specific embodiments of the present disclosure. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the disclosed embodiments, and these improvements and modifications should also be considered within the scope of protection of the disclosed embodiments.
Number | Date | Country | Kind |
---|---|---|---|
202311405086.1 | Oct 2023 | CN | national |