The present application claims priority to Chinese Patent Application No. 202010808807.3, filed with the Chinese Patent Office on Aug. 12, 2020 and entitled “SCENE GENERATION METHOD, APPARATUS AND SYSTEM, DEVICE AND STORAGE MEDIUM”, which is incorporated herein by reference in its entirety.
The present application relates to the field of intelligent vehicle technology, and in particular to a scene generation method, a scene generation apparatus, a scene generation system, a scene generation device and a storage medium.
A vehicle is provided with multiple output devices such as a display screen, an ambient light, a seat, an audio system, an air conditioner, etc. These output devices usually perform a certain function alone, but cannot cooperate with each other to realize a certain scene.
The embodiments of the present application provide a scene generation method, a scene generation apparatus, a scene generation system, a scene generation device and a storage medium, to solve the problems existing in the relevant art. The technical solutions are as follows.
In a first aspect, an embodiment of the present application provides a scene generation method, applied to an in-vehicle scene APP, and the method includes: controlling, according to an execution request for a target scene, a vehicle to enter a safety preparation state for implementing the target scene; analyzing a script of the target scene, to generate a scene execution strategy for a respective scene execution component of the vehicle, wherein the scene execution strategy includes an execution function and execution time information of the respective scene execution component; and triggering the respective scene execution component to work according to the scene execution strategy, to generate the target scene.
In a second aspect, an embodiment of the present application provides a scene generation method, applied to an audiovisual domain controller, and the method includes: receiving a safety control request sent by an in-vehicle scene APP, wherein the safety control request is configured for requesting a vehicle to enter a safety preparation state for implementing a target scene; generating a corresponding safety service request according to the safety control request; and converting the safety service request into a first SOA gateway signal, and sending the first SOA gateway signal to a corresponding safety domain controller, wherein the first SOA gateway signal is configured for causing the safety domain controller to convert it into a safety control instruction and send the safety control instruction to a corresponding safety execution component, to control the respective safety execution component to enter a corresponding safety preparation state.
In a third aspect, an embodiment of the present application provides a scene generation method, applied to a safety domain controller, and the method includes: receiving a first SOA gateway signal sent by an audiovisual domain controller; converting the first SOA gateway signal into a corresponding safety control instruction; sending each safety control instruction to a corresponding safety execution component, to control the respective safety execution component to enter a corresponding safety preparation state.
In a fourth aspect, an embodiment of the present application provides a scene generation system which includes: an audiovisual domain controller, wherein an in-vehicle scene APP is installed on the audiovisual domain controller, the in-vehicle scene APP is configured for performing the method of the foregoing first aspect, and the audiovisual domain controller is configured for performing the method of the foregoing second aspect; a safety domain controller, communicatively connected to the audiovisual domain controller, wherein the safety domain controller is configured for performing the method of the foregoing third aspect; a safety execution component, communicatively connected to the safety domain controller, which is configured for entering a corresponding safety preparation state according to a safety control instruction of the safety domain controller; a scene execution domain controller, communicatively connected to the audiovisual domain controller, which is configured for converting a second SOA gateway signal sent by the audiovisual domain controller into a function execution control instruction, and sending the function execution control instruction to a corresponding scene execution component; and the scene execution component, communicatively connected to the scene execution domain controller, which is configured for working according to the function execution control instruction of the scene execution domain controller.
In a fifth aspect, an embodiment of the present application provides a scene generation apparatus which includes: a controlling module, which is configured for controlling, according to an execution request for a target scene, a vehicle to enter a safety preparation state for implementing the target scene; an analyzing module, which is configured for analyzing a script of the target scene, to generate a scene execution strategy for a respective scene execution component of the vehicle, wherein the scene execution strategy includes an execution function and execution time information of the respective scene execution component; and a triggering module, which is configured for triggering the respective scene execution component to work according to the scene execution strategy, to generate the target scene.
In a sixth aspect, an embodiment of the present application provides a scene generation device which includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to perform the scene generation method of the embodiments of the present application.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, and the computer-readable storage medium stores computer instructions which, when executed by a processor, implement the scene generation method of the embodiments of the present application.
Advantages or beneficial effects of the foregoing technical solutions include at least the following: while ensuring the safety and controllability of the vehicle, more diverse scenes are provided to a user on the vehicle side, and the entertainment and application experience of the vehicle are improved.
The above summary is intended for description purposes only and is not intended to be limiting in any manner. In addition to the illustrative aspects, implementations and features described above, further aspects, implementations and features of the present application will become more apparent with reference to the drawings and the following detailed description.
In the drawings, the same reference numbers throughout multiple drawings denote the same or similar members or elements unless otherwise specified. These drawings are not necessarily drawn to scale. It should be understood that these drawings only depict some embodiments of the present application and should not be construed as limiting the scope of the present application.
Only some exemplary embodiments are briefly described below. As a person skilled in the art may realize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Therefore, the drawings and description are regarded as illustrative and not restrictive in nature.
As shown in
At step S101, controlling, according to an execution request for a target scene, a vehicle to enter a safety preparation state for implementing the target scene.
The target scene may be a scene selected by a user, that is, the user may select the target scene based on the in-vehicle scene APP, and then the execution request for the target scene is triggered. The in-vehicle scene APP may adjust the state of the vehicle according to the execution request, and control the vehicle to enter the safety preparation state for implementing the target scene, thereby putting the vehicle in a safety state for realizing the target scene.
In an implementation, controlling the vehicle to enter the safety preparation state for implementing the target scene may include: controlling an electric parking brake (EPB) system to execute a parking function; and/or controlling a suspension to lift to a preset height; and/or controlling a gear position of the vehicle to be a park position, and controlling the gear position of the vehicle to be in a non-switchable state; and/or determining that the vehicle is in a state of not being powered off.
The EPB measures a slope using a longitudinal acceleration sensor built into its computer, so that the sliding force generated on the slope due to gravity may be calculated. The computer applies a braking force to the rear wheels via a motor to balance the sliding force, so that the vehicle may park on the slope; thus, in the process of the vehicle realizing the target scene, it can be ensured that unsafe situations such as vehicle slipping do not occur. By setting the gear position of the vehicle to the park position, the risk of accidents caused by negligently switching to other gear positions in the process of realizing the target scene is avoided. The state of not being powered off guarantees that sufficient electric power can be provided in the process of the vehicle realizing the target scene.
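The slope-holding calculation described above can be sketched numerically. The following Python snippet is a minimal illustration assuming a simple point-mass model; the function names, the example mass, and the sensor reading are hypothetical and not part of the described EPB implementation.

```python
import math

def slope_from_accelerometer(longitudinal_accel_ms2: float, g: float = 9.81) -> float:
    """Slope angle (degrees) recovered from the longitudinal acceleration
    reading of a stationary vehicle: a = g * sin(theta)."""
    return math.degrees(math.asin(longitudinal_accel_ms2 / g))

def required_holding_force(mass_kg: float, slope_deg: float, g: float = 9.81) -> float:
    """Braking force (N) the EPB must apply to the rear wheels to balance the
    gravity component pulling the vehicle down the slope: F = m * g * sin(theta)."""
    return mass_kg * g * math.sin(math.radians(slope_deg))

# Hypothetical example: a 2000 kg vehicle whose longitudinal sensor reads
# about 1.365 m/s^2, corresponding to roughly an 8-degree slope.
theta = slope_from_accelerometer(1.365)
force = required_holding_force(2000.0, theta)
```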
At step S102, analyzing a script of the target scene, to generate a scene execution strategy for a respective scene execution component of the vehicle, wherein the scene execution strategy includes an execution function and execution time information of the respective scene execution component.
Scripts of multiple scenes may be stored in a cloud library; these scripts may be preset by a programmer in the cloud library, and may also be set by any user according to actual needs during use. The cloud library may be a vehicle TSP (Telematics Service Provider) cloud library. The cloud library may automatically push the script of a scene to each vehicle, and the user may also select a scene via a terminal APP so that the script corresponding to the scene is downloaded to the vehicle. The terminal includes, but is not limited to, a mobile phone, a personal computer, a tablet personal computer, etc. Terminal APPs such as a mobile phone APP, a personal computer APP, a tablet personal computer APP, etc. may connect to the in-vehicle scene APP through a wide area network, so that when the user selects a scene via such a terminal APP, the script of the scene may be downloaded to the in-vehicle scene APP on the in-vehicle infotainment side. In an example, as shown in
The downloaded scene is stored in a scene library, and the user may select the target scene based on a terminal APP or in-vehicle scene APP.
The in-vehicle scene APP may obtain, by analyzing the script of the target scene, the scene execution strategy for each scene execution component, such as the execution function of each scene execution component organized by timeline and trigger conditions.
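As an illustration of how a script might be analyzed into per-component strategies, the sketch below assumes a hypothetical JSON script format; the field names (`component`, `start_s`, `trigger`, etc.) are inventions for this example, not the actual script format of the embodiments.

```python
import json

# Hypothetical script format: each entry names a scene execution component,
# the function it should perform, and when it should run (a timeline offset
# in seconds and/or a trigger condition).
SCRIPT = json.loads("""
{
  "scene": "beach_rest",
  "actions": [
    {"component": "ambient_light", "function": "set_color", "args": {"color": "yellow"}, "start_s": 0},
    {"component": "seat", "function": "recline", "args": {"angle": 40}, "start_s": 2},
    {"component": "fragrance", "function": "release", "args": {"scent": "sea"}, "trigger": "seat_reclined"}
  ]
}
""")

def build_strategies(script: dict) -> dict:
    """Group the script's actions per component so that each scene execution
    component receives its own time-ordered execution strategy."""
    strategies: dict = {}
    for action in script["actions"]:
        strategies.setdefault(action["component"], []).append({
            "function": action["function"],
            "args": action.get("args", {}),
            "start_s": action.get("start_s"),
            "trigger": action.get("trigger"),
        })
    for plan in strategies.values():
        # Timeline actions first, in start order; trigger-only actions last.
        plan.sort(key=lambda a: (a["start_s"] is None, a["start_s"] or 0))
    return strategies

strategies = build_strategies(SCRIPT)
```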
At step S103, triggering the respective scene execution component to work according to the scene execution strategy, to generate the target scene. For example, the respective scene execution components are controlled to perform corresponding execution functions according to the timeline and trigger conditions.
The scene execution component includes, but is not limited to, various output devices of the vehicle, such as a seat module, an air conditioner vent module, a window module, a door module, a steering wheel module, an exterior light module, a massage module, an interior light module, a dash board, a central control screen, a copilot screen, a HUD (Head Up Display), an air conditioner module, a fragrance module, etc. In other words, the scene execution component on the in-vehicle infotainment side is an executive body which realizes the scene on the in-vehicle infotainment side, and a 5D scene engaging various senses such as hearing, sight, touch, smell, etc. may be achieved on the in-vehicle infotainment side by triggering multiple scene execution components.
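The timeline-and-trigger execution described in step S103 can be sketched as a small scheduler. Everything here is illustrative: the strategy layout and the `dispatch` stand-in are assumptions for this sketch, not the actual control path through the domain controllers.

```python
import heapq
from itertools import count

def run_timeline(strategies, dispatch):
    """Pop per-component actions in timeline order and hand each one to
    dispatch(component, action); actions with no start time wait for the
    trigger event that dispatch reports back. Returns the firing order."""
    seq = count()                       # tie-breaker for equal start times
    queue, waiting, fired = [], [], []
    for component, plan in strategies.items():
        for action in plan:
            if action.get("start_s") is not None:
                heapq.heappush(queue, (action["start_s"], next(seq), component, action))
            else:
                waiting.append((action["trigger"], component, action))
    while queue:
        t, _, component, action = heapq.heappop(queue)
        event = dispatch(component, action)
        fired.append((t, component, action["function"]))
        for item in list(waiting):      # release trigger-conditioned actions
            if item[0] == event:
                waiting.remove(item)
                heapq.heappush(queue, (t, next(seq), item[1], item[2]))
    return fired

# Toy strategy: the fragrance release waits for a seat-recline trigger.
strategies = {
    "ambient_light": [{"function": "set_color", "start_s": 0}],
    "seat": [{"function": "recline", "start_s": 2}],
    "fragrance": [{"function": "release", "trigger": "seat_reclined"}],
}

def dispatch(component, action):
    # Stand-in for sending a control request; reports the trigger event.
    return "seat_reclined" if component == "seat" else None

order = run_timeline(strategies, dispatch)
```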
In an implementation, as shown in
At step S301, receiving a scene generation instruction from a user, wherein the scene generation instruction includes the target scene selected by the user.
At step S302, displaying a scene execution effect corresponding to the target scene.
At step S303, after receiving a confirmation instruction from the user on the target scene, generating the execution request for the target scene.
The way in which the user sends the scene generation instruction includes, but is not limited to, an interface operation, a voice mode, etc. of the terminal APP or of the in-vehicle scene APP on the in-vehicle infotainment side. Specifically, the user may perform a corresponding interface operation via the interface of the in-vehicle scene APP, thereby actively sending the scene generation instruction by way of an interface operation. The user may issue a voice instruction to a microphone on the in-vehicle infotainment side, thereby actively sending the scene generation instruction for the target scene. The user may also issue the scene generation instruction for the target scene via the terminal APP, which triggers the target scene via the network communication between the terminal APP and the in-vehicle scene APP.
In other words, in the embodiments of the present application, the selection and enablement of the target scene may be realized not only by the in-vehicle scene APP on the in-vehicle infotainment side, but also by terminal APPs such as a mobile phone APP, a personal computer APP, a tablet personal computer APP, etc. The terminal APP and the in-vehicle scene APP (for example, the audiovisual domain controller on which the in-vehicle scene APP is installed) are connected via a wide area network; provided that user identity verification is satisfied, external control of the vehicle and external triggering of multiple scenes may be realized, and the range of use is increased.
The scene execution effect corresponding to the target scene may be displayed on the in-vehicle infotainment side, allowing the user to dynamically preview the scene execution effect. In addition, a confirmation button for the target scene is shown; for example, a display window and a selection button are provided. After receiving the user's confirmation instruction on the target scene, the execution request for the target scene is generated, and the process proceeds to step S101.
In an example, as shown in
In an implementation, as shown in
In an example, as shown in
In an implementation, as shown in
In other words, in the process of realizing the target scene, the user may trigger a scene pause instruction or a scene exit instruction via the in-vehicle scene APP, thereby causing the vehicle to pause or exit the target scene.
In an implementation, as shown in
In an example, controlling the vehicle to exit the safety preparation state may include: canceling the parking function of the EPB; and/or controlling the suspension to return to a regular height; and/or canceling the setting that the gear position of the vehicle is the park position; and/or canceling the setting that keeps the vehicle in a state of not being powered off.
In other words, after the target scene is implemented, an original setting of the vehicle may be restored, thereby making the vehicle in a normal and convenient driving state.
In an implementation, the step S101 may include: sending a safety control request to an audiovisual domain controller, to cause the audiovisual domain controller to generate a corresponding safety service request according to the safety control request, convert the safety service request into a first SOA gateway signal, and send the first SOA gateway signal to a corresponding safety domain controller; wherein the safety control request is configured for requesting the vehicle to enter the safety preparation state for implementing the target scene; the first SOA gateway signal is configured for causing the safety domain controller to convert it into a safety control instruction and send the safety control instruction to a corresponding safety execution component, to control the respective safety execution component to enter a corresponding safety preparation state.
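The request-conversion chain described above (in-vehicle scene APP → audiovisual domain controller → first SOA gateway signal → safety domain controller → safety control instructions) can be sketched as follows. The message types and field names are assumptions made for this sketch, not the actual SOA signal layout of a real vehicle network.

```python
from dataclasses import dataclass

# Illustrative message types; the names and fields are assumptions.
@dataclass
class SafetyControlRequest:          # in-vehicle scene APP -> audiovisual DC
    target_scene: str
    required_states: list

@dataclass
class SoaGatewaySignal:              # audiovisual DC -> safety DC
    service: str
    payload: dict

def to_safety_service_request(req: SafetyControlRequest) -> dict:
    """Audiovisual domain controller: safety control request -> safety service request."""
    return {"service": "safety_preparation", "scene": req.target_scene,
            "states": req.required_states}

def to_first_soa_signal(service_request: dict) -> SoaGatewaySignal:
    """Audiovisual domain controller: service request -> first SOA gateway signal."""
    return SoaGatewaySignal(service=service_request["service"], payload=service_request)

def to_safety_control_instructions(signal: SoaGatewaySignal) -> list:
    """Safety domain controller: SOA gateway signal -> one safety control
    instruction per safety execution component (EPB, suspension, gear, power)."""
    return [{"component": state, "command": "enter_safety_preparation"}
            for state in signal.payload["states"]]

req = SafetyControlRequest("beach_rest", ["epb", "suspension", "gear", "power"])
instructions = to_safety_control_instructions(
    to_first_soa_signal(to_safety_service_request(req)))
```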
In an example, continue to refer to
Specifically, as shown in
In an implementation, as shown in
When the power supply module is a low-voltage power supply module, the safety domain controller includes a vehicle domain controller, and the low-voltage power supply module is controlled by the vehicle domain controller. When the power supply module is a high-voltage power supply module, the safety domain controller includes a driving domain controller, and the high-voltage power supply module is connected to the driving domain controller via the CAN bus and is controlled by the driving domain controller.
In an example, as shown in
In an implementation, the step S103 may include: sending, according to the scene execution strategy, a scene execution control request to the audiovisual domain controller, to cause the audiovisual domain controller to generate a corresponding scene execution service request according to the scene execution control request, convert the scene execution service request into a second SOA gateway signal, and send the second SOA gateway signal to a corresponding scene execution domain controller; wherein the second SOA gateway signal is configured for causing the scene execution domain controller to convert it into a function execution control instruction and send the function execution control instruction to the corresponding scene execution component, to control the respective scene execution component to work according to the scene execution strategy.
In an example, continue to refer to
Specifically, as shown in
In an implementation, as shown in
The scene execution domain controller may further include the audiovisual domain controller. Correspondingly, the scene execution component includes at least one of the following: an interior light module, a dash board, a central control screen, a copilot screen, and a HUD. The interior light module is connected to the audiovisual domain controller via the CAN bus. The dash board, the central control screen, the copilot screen and the HUD are connected to the audiovisual domain controller via LVDS wires.
The scene execution domain controller may further include the driving domain controller. Correspondingly, the scene execution component includes the air conditioner module. The air conditioner module is connected to the driving domain controller via the CAN bus, and the air conditioner module may be communicatively connected to the fragrance module via LIN, thereby controlling the fragrance module to release corresponding fragrance via the air conditioner module.
Further, the exterior light module may control multiple exterior lights. In an example, as shown in
The DLP projection light 51 may be used for conventional high and low beams, and may also be used for projecting projection data such as videos and photos.
As shown in
As shown in
In an example, when the target scene selected by the user through the in-vehicle scene APP is a scene of resting on a beach, the step S103 may include: the in-vehicle scene APP sends a scene execution control request to the audiovisual domain controller, and the scene execution control request includes the scene execution strategy for the scene of resting on a beach. The audiovisual domain controller receives the scene execution control request, generates a scene execution service request, and determines that the scene execution domain controllers corresponding to the scene of resting on the beach are the audiovisual domain controller, the vehicle domain controller and the driving domain controller.
The audiovisual domain controller may directly generate a scene execution control instruction and send it to the dash board, the central control screen and the copilot screen. The scene execution control instruction further includes multimedia resources of ocean, beach and seabird images, thereby triggering the dash board, the central control screen and the copilot screen to play the images of ocean, beach and seabird. The audiovisual domain controller may directly generate the interior light scene execution control instruction and send it to the ambient light, to trigger the ambient light to adjust to yellow of beach.
The audiovisual domain controller converts the scene execution service request into a second SOA gateway signal, and sends the second SOA gateway signal to the vehicle domain controller and the driving domain controller. The vehicle domain controller generates, according to the second SOA gateway signal, multiple scene execution control instructions, and respectively sends them to the seat module and the air conditioner vent module, thereby triggering the main driver's seat and the copilot's seat to adjust to a comfortable lying position, triggering a heating module to heat the main driver's seat and the copilot's seat to a higher temperature, and triggering the massage module to start seat massage; and triggering the air conditioner vent module to adjust the vent according to time and gradually change the air volume according to time. The driving domain controller generates, according to the second SOA gateway signal, a scene execution control instruction, and sends it to the air conditioner module, to trigger the air conditioner to adjust to a coastal climate temperature, and trigger the fragrance module to release a fragrance smelling like the sea.
In another example, when the target scene selected by the user through the in-vehicle scene APP is a forest scene, the step S103 may include: the in-vehicle scene APP sends a scene execution control request to the audiovisual domain controller, and the scene execution control request includes the scene execution strategy for the forest scene. The audiovisual domain controller receives the scene execution control request, generates a scene execution service request, and determines that the scene execution domain controllers corresponding to the forest scene are the audiovisual domain controller, the vehicle domain controller and the driving domain controller.
The audiovisual domain controller may directly generate a scene execution control instruction and send it to the dash board, the central control screen and the copilot screen. The scene execution control instruction further includes multimedia resources of forest and jungle animals images, thereby triggering the dash board, the central control screen and the copilot screen to display the images of forest and jungle animals. The audiovisual domain controller may directly generate an interior light scene execution control instruction and send it to the ambient light, to trigger the ambient light to adjust to green of forest.
The audiovisual domain controller converts the scene execution service request into a second SOA gateway signal, and sends the second SOA gateway signal to the vehicle domain controller and driving domain controller. The vehicle domain controller generates, according to the second SOA gateway signal, multiple scene execution control instructions, and respectively sends them to the seat module and the air conditioner vent module, thereby triggering the massage module to start the seat massage of the main driver's seat and copilot's seat; and triggering the air conditioner vent module to adjust the vent according to time and gradually change the air volume according to time. The driving domain controller generates, according to the second SOA gateway signal, a scene execution control instruction, and sends it to the air conditioner module, to trigger the air conditioner to adjust to a temperature that matches a forest environment; trigger the fragrance module to release the fragrance smelling like the forest; and trigger an air humidifier to start.
In another example, when the target scene selected by the user through the in-vehicle scene APP is a vehicle show scene, the step S103 may include: the in-vehicle scene APP sends a scene execution control request to the audiovisual domain controller, and the scene execution control request includes the scene execution strategy for the vehicle show scene. The audiovisual domain controller receives the scene execution control request, generates a scene execution service request, and determines that the scene execution domain controllers corresponding to the vehicle show scene are the audiovisual domain controller and the vehicle domain controller.
The audiovisual domain controller converts the scene execution service request into a second SOA gateway signal, and sends the second SOA gateway signal to the vehicle domain controller. The vehicle domain controller generates, according to the second SOA gateway signal, multiple scene execution control instructions, and sends them to the door module, thereby controlling the door module to trigger the front and rear doors to open sequentially, trigger an NT door to flap up and down, trigger a rearview mirror to fold, trigger the ambient light on the side door to change color and flash, and trigger a courtesy light on the side door to project a welcome pattern. In addition, when the logo light on the NT door is at the highest point, the logo light is triggered to project a corresponding logo. Further, the vehicle domain controller generates, according to the second SOA gateway signal, a scene execution control instruction, and sends it to the exterior light module. The audiovisual domain controller sends the display content of the ISD light and the display content of the DLP projection light to the exterior light module, thereby triggering the ISD light and the DLP projection light to demonstrate dynamic photos and projections respectively. The exterior light module may also trigger the position light, fog light and brake light at the rear to flash according to a preset logic. The audiovisual domain controller may directly generate the scene execution control instruction, and send it to multiple screens, such as the central control screen, the copilot screen, a rear screen, etc. The scene execution control instruction further includes multimedia resources, thereby triggering the multiple screens, such as the central control screen, the copilot screen, the rear screen, etc., to interactively play the multimedia resources, so that an animation demonstration with multi-screen interaction is realized.
The audiovisual domain controller may directly generate an interior light scene execution control instruction and send it to the ambient light inside the vehicle, to trigger the ambient light to change color and flash according to the preset logic. In addition, the scene execution strategy for the vehicle show scene may further include time information of the scene execution component so that part or all of the foregoing scene execution components may be triggered to operate according to a preset timeline.
In another exemplary application, as shown in
Further, as shown in
In an example, the audiovisual domain controller may include an upper layer controller and a lower layer controller, such as an MPU (Microprocessor Unit) and an MCU (Microcontroller Unit). The MCU may guarantee stability and safety, and the APP is installed in an Android system running on the MPU hardware; the system may access, preview and download scene services updated in real time at the cloud through Wi-Fi or a mobile network of the T-BOX. By using the Android system, the cost of network development may be reduced due to the generality of this system.
As shown in
The script analyzing module is configured for analyzing a script of a target scene, to generate a scene execution strategy for a respective scene execution component, which includes an execution function for the respective scene execution component organized by timeline and trigger conditions. The interface demonstrating module is configured for dynamically displaying the scene execution effect of the target scene selected by the user. The state determining module is configured for determining whether the vehicle has entered a safety preparation state for implementing the target scene, that is, whether the vehicle is in the safety state in which the target scene can be realized. The state determining module may send a safety control request to an audiovisual domain controller, to control and confirm that the vehicle has entered the safety preparation state for the target scene. The signal sending module is configured for sending a scene execution request to the audiovisual domain controller after determining that the vehicle has entered the safety preparation state, and then controlling each scene execution component to work according to the scene execution strategy, to realize the target scene.
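The cooperation of the four modules described above can be sketched as a simple composition. All class and method names here are assumptions for illustration; the real modules would communicate with the audiovisual domain controller rather than with in-process stubs.

```python
# Illustrative composition of the in-vehicle scene APP's four modules; the
# class and method names are inventions for this sketch, not a real codebase.
class InVehicleSceneApp:
    def __init__(self, script_analyzer, interface, state_checker, signal_sender):
        self.script_analyzer = script_analyzer  # script -> per-component strategy
        self.interface = interface              # previews the scene effect
        self.state_checker = state_checker      # safety preparation state checks
        self.signal_sender = signal_sender      # talks to the audiovisual DC

    def execute_scene(self, scene_name, script):
        self.interface.preview(scene_name)
        strategy = self.script_analyzer.analyze(script)
        if not self.state_checker.ensure_safety_state(scene_name):
            raise RuntimeError("vehicle did not enter the safety preparation state")
        self.signal_sender.send_scene_execution_request(strategy)
        return strategy

# Minimal stand-ins so the flow can be exercised without real controllers.
class _Stubs:
    def preview(self, scene): pass
    def analyze(self, script): return {"strategy_for": script}
    def ensure_safety_state(self, scene): return True
    def send_scene_execution_request(self, strategy): self.sent = strategy

stub = _Stubs()
app = InVehicleSceneApp(stub, stub, stub, stub)
result = app.execute_scene("beach_rest", "beach_script")
```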
The foregoing modules and architecture are merely examples of realizing the scene generation method of the embodiments of the present application, and are not limiting. Those skilled in the art may make adjustments and settings as needed.
An embodiment of the present application further provides a scene generation method which may be applied to an audiovisual domain controller, and the method includes: receiving a safety control request sent by an in-vehicle scene APP, wherein the safety control request is configured for requesting a vehicle to enter a safety preparation state for implementing a target scene; generating, according to the safety control request, a corresponding safety service request; and converting the safety service request into a first SOA gateway signal, and sending the first SOA gateway signal to a corresponding safety domain controller, wherein the first SOA gateway signal is configured for causing the safety domain controller to convert it into a safety control instruction and send the safety control instruction to a corresponding safety execution component, to control the respective safety execution component to enter a corresponding safety preparation state.
In an implementation, the method further includes: receiving a scene execution control request that is sent by the in-vehicle scene APP according to a scene execution strategy, wherein the scene execution strategy includes an execution function and execution time information of a respective scene execution component; generating, according to the scene execution control request, a corresponding scene execution service request; converting the scene execution service request into a second SOA gateway signal, and sending the second SOA gateway signal to a corresponding scene execution domain controller; wherein the second SOA gateway signal is configured for causing the scene execution domain controller to convert it into a function execution control instruction and send the function execution control instruction to a corresponding scene execution component, to control the respective scene execution component to work according to the scene execution strategy.
An embodiment of the present application further provides a scene generation method which may be applied to a safety domain controller, and the method includes: receiving a first SOA gateway signal sent by an audiovisual domain controller; converting the first SOA gateway signal into a corresponding safety control instruction; and sending each safety control instruction to a corresponding safety execution component, to control the respective safety execution component to enter a corresponding safety preparation state.
In an implementation, controlling the respective safety execution component to enter the corresponding safety preparation state includes: controlling an electric parking brake system control module to execute a parking function; and/or controlling an adjustable suspension module to cause a suspension to lift to a preset height; and/or controlling a gear shifting module to cause a gear position of a vehicle to be a park position, and to cause the gear position of the vehicle to be in a non-switchable state; and/or controlling a power supply module of the vehicle to cause the vehicle to be in a state of not being powered off.
For the foregoing scene generation methods for the audiovisual domain controller and the safety domain controller, reference may be made to the relevant description of the foregoing scene generation method for the in-vehicle scene APP, and details are not elaborated herein.
In an implementation, the controlling module 1201 includes:
In an implementation, the controlling module 1201 is also configured for sending a safety control request to an audiovisual domain controller, to cause the audiovisual domain controller to generate a corresponding safety service request according to the safety control request, convert the safety service request into a first SOA gateway signal, and send the first SOA gateway signal to a corresponding safety domain controller; wherein the safety control request is configured for requesting the vehicle to enter the safety preparation state for implementing the target scene; and the first SOA gateway signal is configured for causing the safety domain controller to convert it into a safety control instruction and send the safety control instruction to a corresponding safety execution component, to control the respective safety execution component to enter a corresponding safety preparation state.
In an implementation, the apparatus is also configured for, after triggering the respective scene execution component to work according to the scene execution strategy to generate the target scene: in a case of receiving a scene pause instruction from the user, controlling the respective scene execution component to pause function execution; or in a case of receiving a scene exit instruction from the user, controlling the respective scene execution component to exit function execution.
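A sketch of the pause/exit handling described above, using a hypothetical component API with `pause()` and `exit()` methods (the class and method names are assumptions for illustration):

```python
class SceneComponent:
    """Stand-in for a scene execution component (hypothetical API)."""
    def __init__(self, name: str):
        self.name = name
        self.status = "running"

    def pause(self):
        self.status = "paused"

    def exit(self):
        self.status = "exited"

def handle_user_instruction(instruction: str, components: list) -> str:
    """Dispatch a user scene instruction received after the target
    scene has started; returns the resulting scene state."""
    if instruction == "pause":
        for c in components:
            c.pause()
        return "paused"
    if instruction == "exit":
        for c in components:
            c.exit()
        return "exited"
    raise ValueError(f"unknown scene instruction: {instruction}")
```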
In an implementation, the triggering module 1203 is also configured for sending, according to the scene execution strategy, a scene execution control request to the audiovisual domain controller, to cause the audiovisual domain controller to generate a corresponding scene execution service request according to the scene execution control request, convert the scene execution service request into a second SOA gateway signal, and send the second SOA gateway signal to a corresponding scene execution domain controller; wherein the second SOA gateway signal is configured for causing the scene execution domain controller to convert it into a function execution control instruction and send the function execution control instruction to a corresponding scene execution component, to control the respective scene execution component to work according to the scene execution strategy.
For the functions of the modules in the apparatuses of the embodiments of the present application, reference may be made to the corresponding descriptions of the above methods, and detailed descriptions will not be elaborated here.
The device may further include a communication interface 1303 for communicating with external devices and performing interactive data transmission. The devices are interconnected using different buses and can be mounted on a common mainboard or mounted in other manners as desired. The processor 1302 may process instructions executed within the terminal or server, the instructions including instructions stored in or on the memory and configured to display graphical information of the GUI on external input/output devices, such as a display device coupled to an interface. In other implementations, multiple processors and/or multiple buses may be used together with multiple memories, if necessary. Likewise, multiple terminals or servers may be connected, with each device providing part of the necessary operations, e.g., serving as an array of servers, a group of blade servers, or a multiprocessor system. The buses may be classified into an address bus, a data bus, a control bus and so on. For ease of presentation, the bus is represented with only one thick line in the figure, but this does not mean that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 1301, the processor 1302 and the communication interface 1303 are integrated on one chip, the memory 1301, the processor 1302 and the communication interface 1303 may accomplish inter-communication through an internal interface.
It should be appreciated that the foregoing processor may be a Central Processing Unit (CPU), and may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, and the like. The general-purpose processor may be a microprocessor or any conventional processor or the like. It should be appreciated that the processor may be a processor supporting an Advanced RISC Machines (ARM) architecture.
An embodiment of the present application provides a computer-readable storage medium (such as the above-mentioned memory 1301), which stores computer instructions which, when executed by a processor, implement the methods provided in the embodiments of the present application.
Optionally, the memory 1301 may include a program-storing region and a data-storing region, wherein the program-storing region may store an operating system, and an application needed by at least one function; the data-storing region may store data created according to the use of the terminal or server. In addition, the memory 1301 may include high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices. In some embodiments, the memory 1301 may optionally include a memory located remotely relative to the processor 1302, and these remote memories may be connected to the terminal or server through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
In the description of this specification, references to the terms “one embodiment”, “some embodiments”, “an example”, “a specific example” or “some examples” mean that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present application. Furthermore, the features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may incorporate and combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples, provided they do not conflict with one another.
In addition, the terms “first” and “second” are used for the purpose of description only, and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by “first” or “second” may explicitly or implicitly include one or more of the features. In the description of the present application, “multiple” means two or more, unless specifically defined otherwise.
Any description of a process or method in a flowchart or otherwise described herein may be understood as representing one or more modules, segments or portions of code of executable instructions for implementing specific logical functions or steps of the process. Furthermore, the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order, depending upon the functions involved.
The logic and/or steps represented in flowcharts or otherwise described herein, for example, may be considered as an ordered listing of executable instructions for implementing the logical functions, and may be embodied in any computer-readable medium for use by an instruction execution system, apparatus, or device (for example, a computer-based system, a system including a processor, or other system that can fetch instructions from the instruction execution system, apparatus or device and execute the instructions), or may be used in conjunction with the instruction execution system, apparatus or device.
It should be understood that various parts of the present application may be implemented with hardware, software, firmware, or combinations thereof. In the embodiments described above, various steps or methods may be implemented with software or firmware stored in the memory and executed by a suitable instruction execution system. All or part of the steps of the method in the above-mentioned embodiments may be completed by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium. When the program is executed, it includes one of or combinations of the steps of the method embodiments.
In addition, functional units in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above-mentioned integrated modules may be implemented in the form of either hardware or a software function module. If the integrated module is implemented in the form of the software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
What are described above are only specific implementations of the present application, but the protection scope of the present application is not limited to this. Any person skilled in the art may easily think of variations or substitutions within the scope of the technology disclosed in the present application, which shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
202010808807.3 | Aug 2020 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN21/83160 | 3/26/2021 | WO |