This application claims priority to Chinese Patent Application No. 202110090986.6, filed on Jan. 22, 2021, the contents of which are hereby incorporated by reference in their entirety for all purposes.
The present disclosure relates to the technical field of artificial intelligence, and more particularly to the technical fields of computer vision, deep learning and image processing, and in particular to a method and device for training an image recognition model, electronic equipment, a computer readable storage medium and a computer program product.
Optical character recognition (OCR) refers to a process in which electronic equipment examines characters printed on paper, determines their shapes by detecting dark and bright patterns, and then translates the shapes into computer characters by a character recognition method.
With the development of deep learning, some existing OCR methods for classification and recognition have been replaced by various deep neural networks. The training of deep learning network models needs the support of a large amount of data; however, for some scenes with private data, it is difficult to acquire a large amount of training data, resulting in low efficiency of optical character recognition.
The present disclosure provides a method and device for training an image recognition model, an image recognition method and device, electronic equipment and a medium.
According to one aspect of the present disclosure, a method for training an image recognition model is provided. The method for training the image recognition model includes: acquiring training data, wherein the training data includes training images for a preset vertical type, and the training images include a first training image containing real data of the preset vertical type and a second training image containing virtual data of the preset vertical type; building a basic model, wherein the basic model includes a deep learning network, and the deep learning network is configured to recognize the training images to extract text data in the training images; and training the basic model by using the training data to obtain the image recognition model.
According to another aspect of the present disclosure, an image recognition method is provided. The image recognition method includes: acquiring a target image to be recognized; and recognizing the target image based on the image recognition model according to the one aspect of the present disclosure so as to extract text data in the target image.
According to another aspect of the present disclosure, electronic equipment is provided. The electronic equipment includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for causing the electronic equipment to perform operations comprising: acquiring training data, wherein the training data includes training images for a preset vertical type, and the training images include a first training image containing real data of the preset vertical type and a second training image containing virtual data of the preset vertical type; building a basic model, wherein the basic model comprises a deep learning network, and the deep learning network is configured to recognize the training images to extract text data in the training images; and training the basic model by using the training data to obtain the image recognition model.
According to another aspect of the present disclosure, a non-transitory computer readable storage medium that stores one or more programs comprising instructions that, when executed by one or more processors of an electronic device, cause the electronic device to implement operations comprising: acquiring training data, wherein the training data includes training images for a preset vertical type, and the training images include a first training image containing real data of the preset vertical type and a second training image containing virtual data of the preset vertical type; building a basic model, wherein the basic model comprises a deep learning network, and the deep learning network is configured to recognize the training images to extract text data in the training images; and training the basic model by using the training data to obtain the image recognition model.
According to another aspect of the present disclosure, electronic equipment is provided. The electronic equipment includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for causing the electronic equipment to perform operations comprising: acquiring a target image to be recognized; and recognizing the target image based on the image recognition model obtained according to the one aspect of the present disclosure so as to extract text data in the target image.
According to another aspect of the present disclosure, a non-transitory computer readable storage medium that stores one or more programs comprising instructions that, when executed by one or more processors of an electronic device, cause the electronic device to implement operations comprising: acquiring a target image to be recognized; and recognizing the target image based on the image recognition model obtained according to the one aspect of the present disclosure so as to extract text data in the target image.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will be easily understood through the following description.
The accompanying drawings illustrate the embodiments by way of example and constitute a part of the description, and together with the text of the description serve to explain the example implementations of the embodiments. The illustrated embodiments are for illustration only and do not limit the scope of the claims. Throughout the accompanying drawings, the same reference numerals refer to similar but not necessarily identical elements.
The example embodiments of the present disclosure are described below in conjunction with the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding, and these details should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should realize that various changes and modifications may be made to the embodiments described herein without departing from the scope of the present disclosure. Similarly, for clarity and conciseness, descriptions of well-known functions and structures are omitted in the following description.
In the present disclosure, unless otherwise specified, the terms "first", "second" and the like are used to describe various elements and are not intended to limit the positional relationship, the timing relationship or the importance relationship of the elements. Such terms are only used to distinguish one element from another element. In some examples, the first element and the second element may refer to the same instance of the element; and in some cases, based on the context, the first element and the second element may also refer to different instances.
The terms used in the description of the various examples in the present disclosure are only for describing specific examples and are not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, there may be one or more elements. In addition, the term "and/or" used in the present disclosure covers any one and all possible combinations of the listed items.
Optical character recognition (OCR) is a process of recognizing characters in an image as computer readable characters. Since its development, OCR has had a wide range of application scenarios. For example, example applications of OCR include network picture character recognition, card recognition (for example, identity card, bank card and business card recognition), bill recognition (for example, value added tax invoice, travel itinerary, train ticket and taxi ticket recognition), license plate recognition and the like.
With the development of deep learning, some existing OCR methods have been replaced by various deep neural networks. A deep learning algorithm can autonomously extract features and learn, achieving better recognition results. However, the training of a deep learning network model needs the support of a large amount of data. For some application scenarios, for example, the financial type (a bank check/customer advice), the card type (an identity card, a vehicle license and a driving license), the bill type (a value added tax invoice and a guarantee slip) and the like, the data is highly private, so it is difficult to collect enough data for training. Moreover, annotating a large amount of data takes considerable time and expense. Therefore, the present disclosure provides a method for training an image recognition model, in which a large amount of virtual data is generated from a small amount of real data, and the real data and the generated virtual data are used together in the process of training the image recognition model, thereby improving the training efficiency and the recognition accuracy of the image recognition model.
The embodiments of the present disclosure are described below in detail with reference to the accompanying drawings.
In the embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the methods of the present disclosure.
In some embodiments, the server 120 may also provide other services or software applications which may include a non-virtual environment and a virtual environment. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of the client equipment 101, 102, 103, 104, 105 and/or 106 under the software as a service (SaaS) model.
In the configuration shown in
The users may use the client equipment 101, 102, 103, 104, 105 and/or 106 to interact with the image recognition model. The client equipment may provide an interface which enables the users of the client equipment to interact with the client equipment. The client equipment may also output information to the users through the interface. Although
The client equipment 101, 102, 103, 104, 105 and/or 106 may include various types of computer equipment, for example: portable handheld equipment, a general-purpose computer (such as a personal computer and a laptop computer), a workstation computer, wearable equipment, a game system, a thin client, various message transceiving equipment, a sensor and other sensing equipment. Such computer equipment may run various types and versions of software application programs and operating systems, for example, MICROSOFT Windows, APPLE iOS, a UNIX-like operating system, and a Linux or Linux-like operating system (such as GOOGLE Chrome OS), or include various mobile operating systems, for example, MICROSOFT Windows Mobile OS, iOS, Windows Phone and Android. The portable handheld equipment may include a cell phone, a smart phone, a tablet computer, a personal digital assistant (PDA) and the like. The wearable equipment may include a head-mounted display and other equipment. The game system may include various handheld game equipment, Internet-enabled game equipment and the like. The client equipment can execute various different application programs, for example, various Internet-related application programs, communication application programs (such as E-mail application programs) and short message service (SMS) application programs, and may use various communication protocols.
The network 110 may be any type of network well known to those skilled in the art, and may use any one of various available protocols (including but not limited to TCP/IP, SNA, IPX and the like) to support data communication. Only as an example, one or more networks 110 may be a local area network (LAN), an Ethernet-based network, a token ring, a wide area network (WAN), the Internet, a virtual network, a virtual private network (VPN), an internal network, an external network, a public switched telephone network (PSTN), an infrared network, a wireless network (such as Bluetooth, WIFI) and/or any combination of these networks and/or other networks.
The server 120 may include one or more general-purpose computers, dedicated server computers (for example, personal computer (PC) servers, UNIX servers, middle-end servers), blade servers, mainframe computers, server clusters or any other appropriate arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (for example, one or more flexible pools which may be virtualized to maintain logical storage equipment of virtual storage equipment of the server). In various embodiments, the server 120 may run one or more services or software applications which provide the functions described below.
A computing unit in the server 120 may run one or more operating systems including any of the above operating systems and any commercially available server operating systems. The server 120 may also run any one of various additional server application programs and/or intermediate layer application programs, including an HTTP server, an FTP server, a CGI server, a JAVA server, a database server and the like.
In some embodiments, the server 120 may include one or more application programs to analyze and combine data feeds and/or event updates received from the users of the client equipment 101, 102, 103, 104, 105 and 106. The server 120 may further include one or more application programs to display the data feeds and/or real-time events via one or more pieces of display equipment of the client equipment 101, 102, 103, 104, 105 and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server combined with a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology. The cloud server is a host product in a cloud computing service system that overcomes the defects of difficult management and weak business scalability in traditional physical host and virtual private server (VPS) services.
The system 100 may further include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The databases 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 through a network-based or dedicated connection. The databases 130 may be of different types. In some embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update and retrieve data in response to commands.
In some embodiments, one or more of the databases 130 may also be used by an application program to store application program data. The database used by the application program may be different types of databases, for example, a key value repository, an object repository or a conventional repository supported by a file system.
The system 100 of
As shown in
In an example, the training image used for training may be an image acquired by an image sensor (for example, a webcam, a camera or the like), and the image may be a color image or a grayscale image. In other examples, the image may also be a static image or a video image, which is not limited by the present disclosure. In addition, the image may be stored (for example, buffered) in storage equipment or a storage medium after being acquired by the image sensor.
In an example, during training, the size of the training image may be specified in advance; for example, the long side of the training image may be preset as 512, 768 or 1024 pixels.
In an example, the preset vertical type may include any one of a financial type, a card type and a bill type. For the image recognition model for the financial type, the training image may be a financial-type image, for example, a bank check image, a bank customer advice image and the like. The bank check image or the bank customer advice image may be an image shot by an image sensor or scanned by other equipment (such as a scanner). For example, as shown in
In an example, for the image recognition model for the card type, the training image may be a card-type image, for example, an identity card image, a vehicle license image, a driving license image and the like. The identity card image, the vehicle license image or the driving license image may be an image shot by an image sensor or scanned by other equipment (such as a scanner). For example, as shown in
In an example, for the image recognition model for the bill type, the training image may be a bill-type image, for example, a value added tax invoice image, a guarantee slip image, a train ticket image, a bus ticket image and the like. The value added tax invoice image or the guarantee slip image may be an image shot by an image sensor or scanned by other equipment (such as a scanner). For example, as shown in
In an example, the fields to be recognized in the first training image and the second training image used for training may be annotated in advance. A field to be recognized is a field in the image whose content is to be extracted. It may be understood that one image may include a plurality of fields to be recognized. The number of fields to be recognized may be preset as required, which is not limited by the present disclosure. For example, for a cash check image, the fields to be recognized may include one or more of the date of issue, payee, check amount, check type and the like. For an identity card image, the fields to be recognized may include one or more of name, gender, date of birth, address, identity card number and the like. For another example, for a value added tax invoice image, the fields to be recognized may include one or more of the name of the purchaser, taxpayer identification number, invoice type, invoice amount and the like. The fields to be recognized may be annotated by any annotation method in the related art. In an example, the fields to be recognized in the image may be framed and the type or field value of the fields may be annotated.
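Merely as an illustration of such an annotation, a per-image record might be organized as in the following sketch; the schema, field names and coordinate convention are assumptions introduced here for illustration and are not prescribed by the present disclosure.

```python
# A minimal sketch of a per-image annotation record; the schema, field
# names and the [x1, y1, x2, y2] coordinate convention are assumptions.
annotation = {
    "image": "id_card_0001.png",         # hypothetical file name
    "fields": [
        {
            "name": "name",              # field type
            "value": "Mark",             # annotated field value
            "box": [120, 80, 300, 120],  # frame drawn around the field
        },
        {
            "name": "birth",
            "value": "Apr. 15, 1990",
            "box": [120, 140, 360, 180],
        },
    ],
}
```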
As shown in
It may be understood that for financial-type, card-type and bill-type images, the data is usually private and involves the privacy of users, so it is difficult to collect enough data for training. Therefore, when training the model for the corresponding vertical type, training images containing virtual data may be generated from the acquired training images containing real data. For example, a large number (for example, on the order of ten thousand) of training images containing the virtual data may be generated on the basis of a small number of training images containing the real data.
In some examples, the second training image containing the virtual data may be generated on the basis of the first training image containing the real data of the preset vertical type. The second training image may be generated on the basis of the first training image as follows: acquiring a first template image containing the real data of the preset vertical type; erasing field values of a plurality of fields to be recognized in the first template image to obtain a second template image; and performing corpus filling on the erased field values of the plurality of fields to be recognized in the second template image to obtain the second training image containing the virtual data of the preset vertical type.
In an example, consider an identity card template image containing real data in which the name is Mark, the gender is male, the date of birth is Apr. 15, 1990, and the fields to be recognized are "name" and "birth". The field values "Mark" and "Apr. 15, 1990" may then be erased, and corpus filling may be performed at the erased positions to obtain a large number of training images containing the filled corpora. In some examples, during corpus filling, corpora may be selected from a corpus corresponding to each field.
In an example, when generating the second training image containing the virtual data on the basis of the first training image containing the real data, the number of the generated second training images may be preset, for example, 8000, 10000 and the like. The number may be set according to the actual training requirement, which is not limited by the present disclosure.
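A minimal sketch of this erase-and-fill generation is given below, using the PIL imaging library; the file paths, field boxes, corpora and font handling are hypothetical placeholders rather than the disclosed implementation.

```python
import random
from PIL import Image, ImageDraw, ImageFont

# Hypothetical per-field corpora and annotated field boxes; in practice
# these would come from the template annotation described above.
corpora = {
    "name": ["Mark", "Alice", "John"],
    "birth": ["Apr. 15, 1990", "Jun. 2, 1985"],
}
field_boxes = {
    "name": (120, 80, 300, 120),    # (x1, y1, x2, y2) of the erased field value
    "birth": (120, 140, 360, 180),
}

def generate_virtual_image(template_path: str, out_path: str) -> dict:
    """Erase the annotated field values in the template image, then fill
    each erased region with a value drawn from the corresponding corpus."""
    image = Image.open(template_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()  # a real generator would match the template font
    labels = {}
    for field, (x1, y1, x2, y2) in field_boxes.items():
        draw.rectangle((x1, y1, x2, y2), fill="white")   # erase the real value
        value = random.choice(corpora[field])            # corpus filling
        draw.text((x1, y1), value, fill="black", font=font)
        labels[field] = value                            # annotation comes for free
    image.save(out_path)
    return labels

# e.g. generate 10000 second training images from one template image:
# for i in range(10000):
#     generate_virtual_image("template.png", f"virtual_{i:05d}.png")
```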
In some examples, the step of training the basic model by using the training data to obtain the image recognition model may include: inputting the training data into the basic model in batches according to a preset parameter; determining an error between the text data in the training image extracted by the deep learning network and the real text data corresponding to the training image according to an error function of the basic model; and performing back-propagation training on the deep learning network based on the error to obtain the image recognition model. In an example, the preset parameter may be a batch size, that is, the number of training samples used in one training pass. In some examples, the batch size may be related to the size of the training image.
In an example, the training data may be input and a weight parameter of the deep learning network may be initialized, and the data is sequentially input into the model in batches for forward-propagation. Then, an error between the text data predicted by the model and the real value annotated in the training data may be calculated, the error gradient is back-propagated and the weights are updated. Finally, this process is iterated so that the error gradient tends to zero. Optionally, the error function of the basic model may be a Euclidean distance between the text data predicted by the model and the real value. Therefore, through continuous iterative updating, the predicted output of the image recognition model may approach the real value, thereby improving the recognition efficiency and recognition accuracy.
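The following PyTorch-style sketch illustrates this batched forward/backward loop under stated assumptions: the network is a trivial stand-in rather than the disclosed architecture, the images and target encodings are random placeholders, and mean squared error is used as a Euclidean-style error function.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the deep learning network of the basic model; a real OCR
# network would be far larger (e.g. a CNN backbone plus a sequence decoder).
model = nn.Sequential(nn.Flatten(), nn.Linear(512 * 512, 128))

# Dummy training data: images and target text encodings (placeholders).
images = torch.randn(64, 1, 512, 512)
targets = torch.randn(64, 128)
loader = DataLoader(TensorDataset(images, targets), batch_size=8)  # preset batch size

criterion = nn.MSELoss()  # Euclidean-style error between prediction and real value
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):                # iterate so the error gradient tends to zero
    for batch_images, batch_targets in loader:
        predictions = model(batch_images)              # forward-propagation
        error = criterion(predictions, batch_targets)  # error function
        optimizer.zero_grad()
        error.backward()                               # back-propagate the error gradient
        optimizer.step()                               # update the weights
```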
In an example, the basic model for training may be an initial deep learning network without any training. In other examples, the basic model may also be an intermediate image recognition model generated in the process of training the basic model by using the training data. Therefore, the recognition efficiency can be improved and time can be saved.
In some examples, the method for training the image recognition model provided by the exemplary embodiments of the present disclosure may further include: acquiring a newly received image for the preset vertical type, wherein the newly received image for the preset vertical type includes a first image containing the real data of the preset vertical type, and the first image and the first training image have the same format; adding the first image into the training data; and updating the image recognition model on the basis of the training data to which the first image is added.
In an example, for the preset vertical type, during model training, a newly acquired or received image containing the real data of the vertical type may be annotated in advance and added into the training data for training. Therefore, the recognition efficiency and accuracy of the model may be improved.
In some examples, the method for training the image recognition model provided by the exemplary embodiments of the present disclosure may further include: acquiring a newly received image for the preset vertical type, wherein the newly received image for the preset vertical type includes a first image containing the real data of the preset vertical type, and the first image and the first training image have different formats; generating a second image containing the virtual data of the preset vertical type on the basis of the first image, wherein the second image and the first image have the same format; adding the first image and the second image into the training data; and updating the image recognition model on the basis of the training data to which the first image and the second image are added.
It may be understood that images of the preset vertical type may have a plurality of formats, and the plurality of formats carry the same data, for example, the same fields and field values, but with the fields in different positions. For example, train ticket images include a red style of train ticket and a blue style of train ticket. In an example, taking the case where the newly received image is a blue-style train ticket image, the blue-style train ticket image contains real data, such as the real passenger name, train number, seat number and other information. Because only a small amount of blue-style train ticket data is available for training, a large number of blue-style train ticket images containing virtual data may be generated in advance. Then, the blue-style train ticket image containing the real data and the blue-style train ticket images containing the virtual data are added into the training data for training so as to update the image recognition model, as sketched below. Therefore, the generalized recognition rate of the model can be increased.
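As a sketch only, the update logic for both cases (same format versus new format) might look as follows; the record layout, the generated count and all names are hypothetical, and the erase-and-fill generation is stubbed so the example stays self-contained.

```python
def update_training_data(training_data: list, new_image: dict,
                         formats_in_training: set) -> list:
    """Fold a newly received real image of the preset vertical type into
    the training data. A record is a dict like {"path": ..., "format": ...};
    if the new image's format (e.g. the blue style of train ticket) is
    unseen in training, virtual images of that format are added first."""
    additions = [new_image]
    if new_image["format"] not in formats_in_training:
        # Stand-in for the erase-and-fill generation sketched earlier.
        additions += [
            {"path": f"virtual_{i:05d}.png", "format": new_image["format"]}
            for i in range(10000)
        ]
    return training_data + additions  # the model is then retrained on this set

# e.g. a newly received blue-style ticket alongside red-style training data:
# data = update_training_data(data, {"path": "blue_001.png", "format": "blue"},
#                             formats_in_training={"red"})
```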
In some examples, in each training process, the number of model iterations may be specified in advance, for example, 25000 to 80000. The number of model iterations may be set according to the training requirement, which is not limited by the present disclosure. In some examples, the image recognition models obtained in the training process may be evaluated. For example, a model 1 obtained when the number of iterations is 10000, a model 2 obtained when the number of iterations is 20000, a model 3 obtained when the number of iterations is 30000, and a model 4 obtained when the number of iterations is 40000 may be evaluated by using an evaluation set, thereby selecting the model with the best performance as the optimal model. For example, a model whose precision and recall rate are both greater than a preset threshold may be selected as the optimal model. The evaluation set may include images which are annotated in advance and contain the real data and/or the virtual data.
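A self-contained sketch of such checkpoint evaluation and selection follows; the field-level precision/recall definition and the assumption that a model maps an image to a dict of field values are illustrative choices, not the disclosed evaluation method.

```python
def evaluate(model, evaluation_set):
    """Field-level precision and recall on an annotated evaluation set.
    Assumes model(image) returns a dict mapping field name -> field value."""
    true_pos = pred_total = gold_total = 0
    for image, gold_fields in evaluation_set:
        predicted = model(image)
        pred_total += len(predicted)
        gold_total += len(gold_fields)
        true_pos += sum(1 for k, v in predicted.items() if gold_fields.get(k) == v)
    precision = true_pos / pred_total if pred_total else 0.0
    recall = true_pos / gold_total if gold_total else 0.0
    return precision, recall

def select_optimal_model(checkpoints, evaluation_set, threshold=0.9):
    """checkpoints: {iteration_count: model}, e.g. models saved at 10000,
    20000, 30000 and 40000 iterations. Returns the best-scoring model among
    those whose precision and recall both exceed the threshold, or None."""
    candidates = []
    for iterations, model in sorted(checkpoints.items()):
        precision, recall = evaluate(model, evaluation_set)
        if precision > threshold and recall > threshold:
            candidates.append((precision + recall, model))
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0])[1]
```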
In an example, the image recognition model may be a model trained by the method for training the image recognition model provided by the exemplary embodiments of the present disclosure. The target image to be recognized may be an image from which text or fields are to be extracted, and may be any type of image, for example, a financial-type image, a card-type image, a bill-type image and the like. In some examples, the recognition result may be a json result, that is, a result in a key-value json format. For example, as shown in
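Purely as an illustration of such a key-value json result, a recognition output for an identity card image might be serialized as follows; the field names and placeholder values are hypothetical.

```python
import json

# Hypothetical key-value recognition result for an identity card image;
# the field names and placeholder values are illustrative only.
result = {
    "name": "Mark",
    "gender": "male",
    "birth": "Apr. 15, 1990",
}
print(json.dumps(result, indent=2))
```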
An embodiment of the present disclosure further provides a model training platform.
The virtual data generating tool 601 may be used to read template information and generate a large amount of virtual data. The model training tool 602 may be used to train models. In an example, training parameters of the model may include: the size of the input image, the batch size, the number of field categories, the maximum number of iterations and the like. In the training process, real data may be continuously added for training. The end-to-end evaluation tool 603 may be used to read the series of models produced by the model training tool 602 and select the optimal model for the final data prediction. A user may annotate a template and real data in advance, wherein both the template and the real data are images with annotations. In some examples, the template may also be provided with configuration information (conf) and corpus information, wherein the configuration information may be a storage path and other information of the template. In an example, a pre-trained model may serve as the basic model.
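As an illustration of the parameters just listed, a training configuration for such a platform might be expressed as a simple mapping; every parameter name and value below is hypothetical.

```python
# Hypothetical training configuration for the model training tool 602;
# every parameter name and value below is illustrative only.
training_config = {
    "input_image_long_side": 768,       # e.g. one of the preset sizes 512/768/1024
    "batch_size": 8,                    # number of samples per training pass
    "num_field_categories": 5,          # e.g. name, gender, birth, address, id number
    "max_iterations": 40000,            # within the 25000-80000 range noted above
    "pretrained_model": "basic_model.ckpt",  # hypothetical pre-trained checkpoint
}
```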
The acquisition unit 701 is configured to acquire training data. The training data includes a training image for a preset vertical type, wherein the training image includes a first training image containing real data of a preset vertical type and a second training image containing virtual data of a preset vertical type.
The building unit 702 is configured to build a basic model. The basic model includes a deep learning network, wherein the deep learning network is configured to recognize the training image so as to extract text data in the training image.
The training unit 703 is configured to train the basic model by using the training data so as to obtain the image recognition model.
It should be understood that each unit of the device 700 shown in
The receiving unit 801 is configured to receive a target image to be recognized.
The recognition unit 802 is configured to recognize the target image to be recognized on the basis of the image recognition model so as to extract text data in the target image to be recognized.
It should be understood that each unit of the device 800 shown in
Although specific functions are discussed above with reference to specific units, it should be noted that the functions of each unit discussed herein may be divided into a plurality of units, and/or at least some functions of a plurality of units may be combined into a single unit. A specific unit executing an action, as discussed herein, includes: the specific unit itself executing the action, or alternatively, the specific unit calling or otherwise accessing another component or unit which executes the action (or executes the action in combination with the specific unit). Therefore, the specific unit executing the action may include the specific unit itself executing the action and/or another unit which the specific unit calls or otherwise accesses to execute the action.
Various technologies may be described herein in the general context of software, hardware elements or program modules. The various units and sub-units described above may be implemented in hardware or in hardware combined with software and/or firmware. For example, these units and sub-units may be implemented as computer program code/instructions, wherein the computer program code/instructions are configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, these modules may be implemented as hardware logic/circuits. For example, one or more of the units and sub-units may be implemented in a system on a chip (SoC). The SoC may include an integrated circuit chip (including a processor (for example, a central processing unit (CPU), a microcontroller, a microprocessor, a digital signal processor (DSP) and the like), a memory, one or more communication interfaces, and/or one or more components of other circuits); moreover, the SoC may optionally execute the received program code and/or include embedded firmware to perform functions.
According to another aspect of the present disclosure, electronic equipment is provided. The electronic equipment includes: at least one processor; and a memory in communication connection with the at least one processor. The memory stores an instruction executable by the at least one processor, wherein the instruction, when executed by the at least one processor, enables the at least one processor to perform the method provided by the embodiments of the present disclosure.
According to another aspect of the present disclosure, a non-transitory computer readable storage medium storing a computer instruction is provided, wherein the computer instruction is configured to enable the computer to perform the method provided by the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided. The computer program product includes a computer program, wherein when the computer program is executed by a processor, the method provided by the embodiments of the present disclosure is implemented.
Hereinafter, the examples of such electronic equipment, non-transitory computer readable storage medium and computer program product are described with reference to
Referring to
As shown in
A plurality of parts in the equipment 900 are connected to the I/O interface 905, including: an input unit 906, an output unit 907, a storage unit 908 and a communication unit 909. The input unit 906 may be any type of equipment capable of inputting information to the equipment 900; the input unit 906 may receive input digital or character information and generate key signal input related to user settings and/or function control of the electronic equipment; moreover, the input unit 906 may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone and/or a remote controller. The output unit 907 may be any type of equipment capable of presenting information, and may include, but is not limited to, a display, a loudspeaker, a video/audio output terminal, a vibrator and/or a printer. The storage unit 908 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 909 allows the equipment 900 to exchange information/data with other equipment through a computer network such as the Internet and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, infrared communication equipment, wireless communication equipment and/or a chipset, for example, Bluetooth™ equipment, 802.11 equipment, WiFi equipment, WiMax equipment, cellular communication equipment and/or the like.
The computing unit 901 may be various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller and the like. The computing unit 901 performs various methods and processes described above, for example, the methods 200 and 400. For example, in some embodiments, the methods 200 and 400 may be implemented as computer software programs, which are tangibly included into a machine readable medium, such as a storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed on the equipment 900 through the ROM 902 and/or the communication unit 909. When the computer program is loaded to the RAM 903 and is executed by the computing unit 901, one or more steps of the methods 200 and 400 described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the methods 200 and 400 through any other appropriate ways (for example, by virtue of firmware).
Various implementations of the systems and technologies described above herein may be implemented in a digital electronic circuit system, an integrated circuit system, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), computer hardware, firmware, software and/or a combination thereof. These various implementations may include: implementation in one or more computer programs, wherein the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor; and the programmable processor may be a special-purpose or general-purpose programmable processor, and may receive data and instructions from a memory system, at least one input device and at least one output device, and transmit data and instructions to the memory system, the at least one input device and the at least one output device.
Program codes for implementing the method of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, a special-purpose computer or other programmable data processing devices, so that the program codes, when executed by the processor or controller, enable the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program codes may be executed entirely on a machine, executed partly on a machine, executed partly on a machine and partly on a remote machine as an independent software package, or executed entirely on a remote machine or a server.
In the context of the present disclosure, the machine readable medium may be a tangible medium, which may include or store a program for use by or in combination with an instruction executing system, device or equipment. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. The machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, device or equipment, or any appropriate combination of the above. More specific examples of the machine readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), optical storage equipment, magnetic storage equipment, or any appropriate combination of the above.
To provide interaction with a user, the systems and technologies described herein may be implemented on a computer. The computer is provided with: a display device (for example, a cathode-ray tube (CRT) or a liquid crystal display (LCD) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball), through which the user may provide input to the computer. Other types of devices may further be used to provide interaction with the user; for example, feedback provided to the user may be sensory feedback in any form (such as visual feedback, auditory feedback or tactile feedback); moreover, input from the user may be received in any form (including sound input, voice input or touch input).
The systems and technologies described herein may be implemented in a computing system including a back-end part (for example, as a data server), or a computing system including a middleware part (for example, an application server), or a computing system including a front-end part (for example, a user computer with a graphical user interface or a web browser, through which the user may interact with the implementations of the systems and technologies described herein), or a computing system including any combination of such back-end, middleware or front-end parts. The parts of the system may be connected to each other through digital data communication (such as a communication network) in any form or medium. Examples of the communication network include: a local area network (LAN), a wide area network (WAN) and the Internet.
The computer system may include a client side and a server. The client side and the server are generally remote from each other and typically interact through the communication network. The relationship between the client side and the server is generated by computer programs which run on the corresponding computers and have a client-server relationship with each other.
It should be understood that steps may be reordered, added or deleted using the various forms of processes shown above. For example, the steps recorded in the present disclosure may be performed concurrently, sequentially or in different orders, as long as the expected result of the technical solutions disclosed by the present disclosure can be realized, and there is no limitation herein.
Although the embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it should be understood that the above methods, systems and equipment are only exemplary embodiments or examples, and the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalent scopes. Various elements in the embodiments or examples may be omitted or may be replaced by their equivalents. In addition, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. Importantly, as the technology evolves, many elements described herein may be replaced by equivalent elements that appear after the present disclosure.