Embodiments of this application relate to the field of intelligent vehicles, and in particular, to a vehicle control method and an apparatus thereof.
With the development of intelligent vehicle technologies, vehicle information, environmental information, driver information, and the like may be obtained from sensors, and the vehicle may then be controlled based on this information.
Currently, there is an anti-collision warning solution that considers a field of view range of a driver. In this solution, a warning policy is obtained based on both a distance between the driver's vehicle and a vehicle within the field of view angle range of the driver and a distance between the driver's vehicle and a vehicle beyond that range. However, a subtle action of the driver, such as a slight head deviation, may cause the field of view angle range to change constantly, resulting in an excessively frequent change of the warning policy and low warning accuracy.
Therefore, how to improve stability of field of view range detection to improve accuracy of vehicle control is a technical problem to be urgently resolved.
Embodiments of this application provide a vehicle control method and an apparatus thereof, to improve stability of field of view range detection, thereby improving accuracy of vehicle control.
According to a first aspect, a vehicle control method is provided. The method includes: obtaining sight information of a driver, and obtaining a control policy based on at least the sight information. The foregoing sight information includes a focus sector of the driver, and the focus sector includes at least one of a plurality of field of view sectors of a vehicle.
In the technical solution of this application, the focus sector represents a field of view range of the driver, so that stability of field of view range detection is improved, thereby improving accuracy of vehicle control.
In the conventional technology, because only a field of view angle range is considered, when the driver slightly rotates eyeballs or the like, the field of view angle range may change. This is quite unstable, and causes a relatively large difficulty and error to subsequent vehicle control. However, in this embodiment of this application, in terms of field of view sectors, all possible field of view ranges of the driver are divided into a plurality of areas, that is, a plurality of sectors. The focus sector is an area at which the driver looks. A detection result of this field of view range is relatively stable, and a case in which the detection result constantly jumps due to a subtle action such as a head deviation of the driver does not occur, so that accuracy of vehicle control is improved.
The sight information may be obtained through the following process: a line-of-sight direction of the driver is obtained with a sensing device such as a camera or an eye tracker, and then the foregoing focus sector is determined based on the line-of-sight direction.
With reference to the first aspect, in some implementations of the first aspect, when the control policy is obtained based on the sight information, the control policy may be obtained based on the sight information and vehicle information of the vehicle. The vehicle information includes at least one of the following: a steering wheel angle, an angular velocity, a turn signal, or a vehicle speed. The vehicle information may be understood as chassis information or vehicle information.
With reference to the first aspect, in some implementations of the first aspect, when the control policy is obtained based on the sight information and the vehicle information of the vehicle, the following steps may be performed: processing the sight information and the vehicle information by using a trained neural network model to obtain a driving intention of the driver, and obtaining the control policy based on the driving intention. The driving intention may be lane keeping, turning, or lane changing, or the driving intention may be acceleration, deceleration, parking, or the like. However, it should be understood that acceleration, deceleration, parking, or the like may also be considered as a case included in lane keeping.
With reference to the first aspect, in some implementations of the first aspect, the foregoing plurality of field of view sectors include the following: a field of view sector of a left vehicle window, a field of view sector of a left rearview mirror, a field of view sector of a front vehicle window, a field of view sector of an in-vehicle rearview mirror, a field of view sector of a right vehicle window, and a field of view sector of a right rearview mirror.
With reference to the first aspect, in some implementations of the first aspect, the field of view sector of the front vehicle window may include a left field of view sector of the front vehicle window and a right field of view sector of the front vehicle window. Because a field of view area of the front vehicle window is relatively large, the driver may not focus on the entire front vehicle window area. For example, when the driver turns right, the driver looks to the right, and in this case, the driver looks out only through a right area of the front vehicle window. Therefore, to further improve accuracy of determining the focus sector, the field of view sector of the front vehicle window may be divided into the left field of view sector of the front vehicle window and the right field of view sector of the front vehicle window, that is, the field of view sector of the front vehicle window is divided into two parts.
With reference to the first aspect, in some implementations of the first aspect, the focus sector may be obtained based on at least a blind zone and/or an obstacle.
With reference to the first aspect, in some implementations of the first aspect, the focus sector may be obtained based on at least a line-of-sight direction of the driver.
With reference to the first aspect, in some implementations of the first aspect, the control policy may include at least one of the following: an anti-collision warning policy, an autonomous emergency braking policy, an adaptive cruise control policy, a lane departure warning policy, a lane keeping assist policy, or a lane centering assist policy.
In some embodiments, a display unit (display apparatus) having a display function, such as a human-machine interaction interface or a display screen, may be further used to present the focus sector. With reference to the first aspect, in some implementations of the first aspect, the foregoing method further includes: displaying the focus sector by using the display unit.
In some embodiments, a reaction time may further be introduced, and the control policy may be obtained with reference to the reaction time. With reference to the first aspect, in some implementations of the first aspect, the control policy may be obtained based on the reaction time and the focus sector, or the control policy may be obtained based on the reaction time and the driving intention, or the control policy may be obtained based on the reaction time, the driving intention, and the focus sector. By introducing the reaction time, not only the focus sector but also the attention situation of the driver is considered, so that a driving risk caused by insufficient driver attention is alleviated. Therefore, accuracy of vehicle control and safety of vehicle driving can be further improved.
According to a second aspect, a vehicle control apparatus is provided. The apparatus includes units configured to perform the method in any implementation of the first aspect.
According to a third aspect, a vehicle control apparatus is provided. The apparatus includes: a memory, configured to store a program; and a processor, configured to execute the program stored in the memory. When the program stored in the memory is executed, the processor is configured to perform the method in any implementation of the first aspect. The apparatus may be disposed in various devices or systems that need to perform vehicle control. The apparatus may alternatively be a chip.
According to a fourth aspect, a computer-readable medium is provided. The computer-readable medium stores program code to be executed by a device, and the program code includes instructions used to perform the method in any implementation of the first aspect.
According to a fifth aspect, a computer program product including instructions is provided. When the computer program product is run on a computer, the computer is enabled to perform the method in any implementation of the first aspect.
According to a sixth aspect, a chip is provided. The chip includes a processor and a data interface, and the processor reads, by using the data interface, instructions stored in a memory, to perform the method in any implementation of the first aspect.
In some embodiments, in an implementation, the chip may further include the memory. The memory stores the instructions, the processor is configured to execute the instructions stored in the memory, and when executing the instructions, the processor is configured to perform the method in any implementation of the first aspect.
The following describes technical solutions in embodiments of this application with reference to the accompanying drawings.
The sight information may be obtained through the following process: a line-of-sight direction of the driver is obtained with a sensing device such as a camera or an eye tracker, and then the focus sector is determined based on the line-of-sight direction. In this process, the obtaining module 110 may directly obtain the sight information; or may first obtain the line-of-sight direction from the sensing device, and then obtain the sight information based on the line-of-sight direction; or may obtain an image or the like that captures the line-of-sight direction (in a case in which the sensing device is also integrated into the obtaining module 110), then extract the line-of-sight direction from the image, and obtain the sight information based on the line-of-sight direction. In other words, the obtaining module 110 may be the foregoing sensing device, a device that can obtain the line-of-sight direction and determine the sight information based on the line-of-sight direction, an interface circuit or a reading apparatus that can read the sight information from a storage apparatus, or a communication interface that can obtain the sight information by using a network.
The line-of-sight direction may be understood as an orientation of a line-of-sight, and may be represented with a line, or may be represented with an angle, for example, an included angle between the line-of-sight direction and a vehicle traveling direction.
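For ease of understanding, the following is a minimal Python sketch of representing the line-of-sight direction as such an included angle, assuming the sensing device outputs a two-dimensional gaze vector in a vehicle coordinate system whose x-axis points in the traveling direction; the function name and coordinate convention are illustrative assumptions rather than part of this application.

```python
import math

def gaze_yaw_angle(gaze_x: float, gaze_y: float) -> float:
    """Included angle (in degrees) between the line-of-sight direction and
    the vehicle traveling direction. Assumed convention: the x-axis points
    in the traveling direction and the y-axis points to the driver's left,
    so positive angles are to the left and negative angles to the right."""
    return math.degrees(math.atan2(gaze_y, gaze_x))

# Example: a gaze vector pointing slightly to the right front of the vehicle.
print(gaze_yaw_angle(1.0, -0.3))  # approximately -16.7 degrees
```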
In the conventional technology, because only a field of view angle range is considered, when the driver slightly rotates eyeballs or the like, the field of view angle range may change. This is quite unstable, and causes a relatively large difficulty and error to subsequent vehicle control. However, in this embodiment of this application, in terms of field of view sectors, all possible field of view ranges of the driver are divided into a plurality of areas, that is, a plurality of sectors. The focus sector is an area at which the driver looks. A detection result of this field of view range is relatively stable, and a case in which the detection result constantly jumps due to a subtle action such as a head deviation of the driver does not occur, so that accuracy of vehicle control is improved.
In some implementations, the field of view sector may include at least one of the following: a field of view sector of a left vehicle window, a field of view sector of a left rearview mirror, a field of view sector of a front vehicle window, a field of view sector of an in-vehicle rearview mirror, a field of view sector of a right vehicle window, and a field of view sector of a right rearview mirror. The field of view sector of the front vehicle window may further include a left field of view sector of the front vehicle window and a right field of view sector of the front vehicle window. The field of view sector of the left vehicle window is a field of view area that can be seen by the driver from the left vehicle window. The field of view sector of the left rearview mirror is a field of view area that can be seen by the driver from the left rearview mirror. The field of view sector of the front vehicle window is a field of view area that can be seen by the driver from the front vehicle window. The field of view sector of the in-vehicle rearview mirror is a field of view area that can be seen by the driver from the in-vehicle rearview mirror. The field of view sector of the right vehicle window is a field of view area that can be seen by the driver from the right vehicle window. The field of view sector of the right rearview mirror is a field of view area that can be seen by the driver from the right rearview mirror. Because the field of view area of the front vehicle window is relatively large, the driver may not focus on the entire front vehicle window area. For example, when the driver turns right, the driver looks to the right, and in this case, the driver looks out only through a right area of the front vehicle window. Therefore, to further improve accuracy of determining the focus sector, the field of view sector of the front vehicle window may be divided into the left field of view sector of the front vehicle window and the right field of view sector of the front vehicle window, that is, the field of view sector of the front vehicle window is divided into two parts.
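As an illustration of how a line-of-sight angle may be mapped to one of these sectors, the following is a minimal Python sketch, assuming each sector is calibrated as a yaw-angle interval in the vehicle coordinate system; the sector boundaries and the hysteresis margin are illustrative assumptions, not calibrated values, and the mirror sectors (which depend on where the gaze lands rather than on yaw alone) are omitted for brevity.

```python
from enum import Enum
from typing import Optional

class Sector(Enum):
    LEFT_WINDOW = "left vehicle window"
    FRONT_LEFT = "front vehicle window (left)"
    FRONT_RIGHT = "front vehicle window (right)"
    RIGHT_WINDOW = "right vehicle window"

# Assumed yaw-angle intervals in degrees (0 = traveling direction,
# positive = left); a real system would calibrate these per vehicle.
SECTOR_BOUNDS = {
    Sector.LEFT_WINDOW: (45.0, 120.0),
    Sector.FRONT_LEFT: (0.0, 45.0),
    Sector.FRONT_RIGHT: (-45.0, 0.0),
    Sector.RIGHT_WINDOW: (-120.0, -45.0),
}

HYSTERESIS_DEG = 5.0  # assumed margin that suppresses jumps near a boundary

def update_focus_sector(yaw_deg: float, previous: Optional[Sector]) -> Optional[Sector]:
    """Map a gaze yaw angle to a focus sector, keeping the previous sector
    while the gaze stays within a hysteresis margin of its boundaries; this
    keeps the detection result stable against slight head deviations."""
    if previous in SECTOR_BOUNDS:
        lo, hi = SECTOR_BOUNDS[previous]
        if lo - HYSTERESIS_DEG <= yaw_deg <= hi + HYSTERESIS_DEG:
            return previous
    for sector, (lo, hi) in SECTOR_BOUNDS.items():
        if lo <= yaw_deg < hi:
            return sector
    return None  # gaze outside all calibrated sectors
```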
For ease of understanding, the following describes the field of view sectors and the focus sector with reference to
The control policy module 120 is configured to obtain a control policy based on the sight information.
In some implementations, the control policy module 120 may perform control on the vehicle based on the focus sector in the sight information. The control may be driving assistance control or autonomous driving control. For example, the control policy module 120 may control acceleration, deceleration, lane changing, turning, parking, obstacle avoidance, or various warnings of the vehicle.
The foregoing control policy may include at least one of the following: an anti-collision warning policy, an autonomous emergency braking (autonomous emergency braking, AEB) policy, an adaptive cruise control (adaptive cruise control, ACC) policy, a lane departure warning (lane departure warning, LDW) policy, a lane keeping assist (lane keeping assist, LKA) policy, a lane centering assist policy (lane centering control, LCC), or the like.
The anti-collision warning policy is to issue a warning when the vehicle has a collision risk. For example, it may be determined, based on the focus sector, whether the driver has a risk of collision with an obstacle, to determine whether a warning needs to be issued. For example, it is assumed that there are three cases. Case 1: The obstacle is not in the focus sector of the driver, and is on a traveling track of the vehicle. Case 2: The obstacle is not in the focus sector of the driver, but is not on a traveling track of the vehicle. Case 3: The obstacle is in the focus sector of the driver, and is not on a traveling track of the vehicle. It is obvious that a collision risk in Case 1 is far higher than collision risks in Case 2 and Case 3, and Case 3 does not affect traveling of the vehicle. Therefore, different levels of warning may be issued for Case 1 and Case 2, and no warning is issued for Case 3.
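The following is a minimal Python sketch of the three-case logic above; the numeric warning levels and the function name are illustrative assumptions, and the combination in which the obstacle is both in the focus sector and on the traveling track is not one of the three enumerated cases, so the value returned for it below is likewise an assumption.

```python
def warning_level(in_focus_sector: bool, on_traveling_track: bool) -> int:
    """Illustrative warning levels: 2 = high, 1 = low, 0 = no warning."""
    if not in_focus_sector and on_traveling_track:
        return 2  # Case 1: unseen obstacle on the track, far higher risk
    if not in_focus_sector and not on_traveling_track:
        return 1  # Case 2: unseen obstacle off the track, milder warning
    if in_focus_sector and not on_traveling_track:
        return 0  # Case 3: seen and off the track, no warning
    return 1      # seen but on the track: assumed low-level reminder
```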
For another control policy such as the autonomous emergency braking policy, the adaptive cruise control policy, the lane departure warning policy, or the lane keeping assist policy, a control action on the vehicle may also be determined with reference to the foregoing case, and details are not described one by one.
In some implementations, the control policy module 120 may further obtain a driving intention of the driver based on the sight information and vehicle information. The driving intention may be lane keeping, turning, lane changing, or the like, or may be acceleration, deceleration, parking, or the like.
The vehicle information may be understood as chassis information or vehicle information. The vehicle information may include at least one of the following: a steering wheel angle, an angular velocity, a turn signal, or a vehicle speed. For example, assuming that the focus sectors of the driver are the field of view sector of the left rearview mirror and the field of view sector of the left vehicle window, and a left turn light flashes, it may be inferred that the driving intention of the driver is to change a lane to the left. Assuming that the focus sectors of the driver are the left field of view sector of the front vehicle window and the field of view sector of the left vehicle window, and the left turn light flashes, it may be inferred that the driving intention of this driver is to turn left.
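The following is a minimal rule-based Python sketch of the two inferences in this paragraph; a trained neural network model, described below, would replace such hand-written rules, and the sector names are those introduced above.

```python
def infer_driving_intention(focus_sectors: set, left_turn_signal: bool) -> str:
    """Illustrative rules matching the two examples above."""
    if left_turn_signal and {"left rearview mirror", "left vehicle window"} <= focus_sectors:
        return "change lane to the left"
    if left_turn_signal and {"front vehicle window (left)", "left vehicle window"} <= focus_sectors:
        return "turn left"
    return "lane keeping"
```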
In some other implementations, the control policy module 120 may further obtain the control policy based on the driving intention and the sight information. For example, assuming that the driving intention of the driver is lane keeping, a static obstacle in another lane may not be considered. The control policy may be obtained based only on the driving intention, or the control policy may be obtained based on the driving intention and the sight information. Alternatively, the control policy may be further obtained with reference to another factor such as a predicted traveling track of the vehicle.
In some embodiments, the sight information may be processed with a model such as a trained neural network model, to obtain the driving intention. The neural network model may be understood as a model that establishes a correspondence between an input and an output. The input is the sight information and the vehicle information, and the output is the driving intention. It may be understood that the neural network model establishes a mapping relationship between input data and a label. Herein, a mapping relationship between the sight information and the driving intention and a mapping relationship between the vehicle information and the driving intention are established. Training makes the foregoing mapping relationships more accurate. The neural network model may be a convolutional neural network (convolutional neuron network, CNN), a recurrent neural network (recurrent neural network, RNN), a long short-term memory (long short-term memory, LSTM) neural network, or the like. Training data includes input data and a label, the input data includes the sight information and the vehicle information, the label includes the driving intention, and each piece of input data corresponds to one label. In a training process, the training data is used to update parameters of the neural network model (for example, an initial neural network model), to obtain a trained neural network model. The trained neural network model may be used in the foregoing vehicle control method.
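The following is a minimal PyTorch sketch of such a neural network model and of one training step, assuming the sight information is encoded as a one-hot focus-sector vector and the vehicle information as a small numeric feature vector; the feature sizes, the three intention classes, and the layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_SECTORS, N_VEHICLE_FEATS, N_INTENTIONS = 7, 4, 3  # assumed sizes

class IntentionNet(nn.Module):
    """Maps (sight information, vehicle information) to a driving intention."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SECTORS + N_VEHICLE_FEATS, 32),
            nn.ReLU(),
            nn.Linear(32, N_INTENTIONS),  # e.g. lane keeping / turning / lane changing
        )

    def forward(self, sight, vehicle):
        return self.net(torch.cat([sight, vehicle], dim=-1))

model = IntentionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a batch of input data and labels.
sight = torch.zeros(8, N_SECTORS); sight[:, 1] = 1.0  # one-hot focus sectors
vehicle = torch.randn(8, N_VEHICLE_FEATS)             # angle, speed, signal, ...
labels = torch.randint(0, N_INTENTIONS, (8,))         # driving-intention labels
loss = loss_fn(model(sight, vehicle), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```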
It should be noted that the neural network model usually relates to a training process and an inference process. In the training process, training data with a label is used to train the initial neural network model (which may be understood herein as a neural network model that has not been trained). That is, the training data with the label is used to update the parameters of the neural network model. In the inference process, the trained neural network model (that is, the neural network model with updated parameters) is used to process to-be-processed data (for example, the foregoing obtained sight information and vehicle information), to obtain an inference result. This inference result is a driving intention corresponding to the to-be-processed data.

In some embodiments, the vehicle control apparatus 100 may further include a display unit. This display unit may be configured to display the focus sector, that is, the focus sector is presented with the display unit. This display unit may be, for example, a human-machine interaction interface or an in-vehicle display screen. The human-machine interaction interface may also be referred to as a human-machine interface (human machine interface, HMI), a user interface, an interaction interface, or the like.
In some other implementations, the control policy module 120 may further obtain the control policy based on a reaction time and information such as the focus sector and the driving intention. The reaction time is the time the driver takes to respond when encountering an emergency. For example, when the driver brakes in an emergency, the time from the moment the driver sees an obstacle to the moment the driver performs emergency braking is the reaction time. In addition to the reaction sensitivity of the driver, the reaction time is also related to the attention degree of the driver. Therefore, the reaction time of the driver may be inferred by detecting the attention degree of the driver. For example, if the driver is concentrating on driving, the attention degree is relatively high and the reaction time is relatively short; in this case, if an emergency occurs, the driver can quickly handle it. However, if the driver is looking around, absent-minded, dozing, or the like, the attention degree is relatively low and the reaction time is relatively long; in this case, if an emergency occurs, the driver may not respond in time.
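The following is a minimal Python sketch of how such a reaction time could feed into a warning decision through a time-to-collision comparison; the linear attention-to-reaction-time mapping and all numeric values are illustrative assumptions, not calibrated results.

```python
def estimated_reaction_time(attention_degree: float) -> float:
    """Assumed linear mapping from an attention degree in [0, 1] to a
    reaction time in seconds: a fully attentive driver reacts in about
    0.8 s, an inattentive one in about 2.5 s."""
    clamped = max(0.0, min(1.0, attention_degree))
    return 2.5 - 1.7 * clamped

def should_warn(time_to_collision_s: float, attention_degree: float,
                braking_margin_s: float = 1.0) -> bool:
    """Warn when the time to collision leaves less than the estimated
    reaction time plus an assumed braking margin."""
    return time_to_collision_s < estimated_reaction_time(attention_degree) + braking_margin_s
```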
In some embodiments, attention information may be obtained based on the vehicle information and driver status monitoring information. A driver status monitoring system (driver monitoring system, DMS) may be used to obtain facial image information of the driver to obtain the driver status monitoring information. A method for obtaining the attention information is described in detail below, and details are not described herein.
By introducing the reaction time, not only the focus sector but also the attention situation of the driver is considered, so that a driving risk caused by insufficient driver attention is alleviated. Therefore, accuracy of vehicle control and safety of vehicle driving can be further improved.
It should be understood that the blind zone is not a field of view sector, but may be included in a field of view sector, because the driver has no field of view in the blind zone. The field of view sector ⑧ of the front vehicle window in (c) in
The focus sector is a field of view sector included in the actual field of view of the driver. Therefore, the focus sector includes at least one of the foregoing plurality of field of view sectors. For example, it is assumed that the driver wants to turn right, and the line-of-sight direction is right front. In this case, the focus sector of the driver includes the right field of view sector of the front vehicle window and the field of view sector of the right vehicle window. The driver may further observe the right rearview mirror. In this case, the focus sector of the driver further includes the field of view sector of the right rearview mirror.
As shown in
In addition, the focus sector may further be obtained in consideration of impact of an obstacle, that is, the focus sector may be obtained based on obstacle information. With reference to
As described above, the vehicle control solution in this embodiment of this application may be applied to various scenarios such as autonomous driving and driving assistance. For example, some driving assistance functions may be implemented with reference to an advanced driver assistance system (advanced driver assistance system, ADAS). For ease of understanding, the following provides descriptions with reference to
For ease of understanding, some elements in the figures are first described. Bold and parallel dashed lines in the figures represent lane lines, and the focus sector is represented with two straight lines. For example, an area between a straight line A and a straight line B is the focus sector, and an area between a straight line E and a straight line F is also the focus sector. A predicted traveling track of the vehicle is represented with two curves. For example, an area between a curve C and a curve D is a passing area of the vehicle when the vehicle travels based on the predicted traveling track, and a curve G and a curve H are also a predicted traveling track of the vehicle. However, the curve C and the curve D are the predicted traveling track obtained when the driving intention is considered, and the curve G and the curve H are the predicted traveling track obtained when the driving intention is not considered. An object with an arrow indicates that the object moves relative to the ground, an arrow direction is a motion direction, and an object without an arrow indicates that the object is static relative to the ground. For example, an obstacle with an arrow indicates an obstacle that is moving, and an obstacle without an arrow indicates a static obstacle.
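As an illustration of how it may be checked whether an obstacle lies between two such boundary lines (for example, within the focus sector between the straight lines A and B, or within the passing area between the curves C and D when each curve is approximated by a straight segment), the following is a minimal Python sketch; the consistent forward orientation of the two boundaries is an assumption of this sketch.

```python
def side(p, a, b):
    """Sign of the 2-D cross product: which side of the directed line
    a -> b the point p lies on (positive = left, negative = right)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def between_boundaries(p, left_boundary, right_boundary):
    """True if point p lies between two boundaries, each given as a pair of
    points oriented in the traveling direction: p must be to the right of
    the left boundary and to the left of the right boundary."""
    return side(p, *left_boundary) <= 0 and side(p, *right_boundary) >= 0

# Example: boundaries one unit to the left and right of the x-axis.
print(between_boundaries((5.0, 0.2), ((0, 1), (10, 1)), ((0, -1), (10, -1))))  # True
```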
In an anti-collision warning scenario shown in
It should be noted that, in the application scenarios of the vehicle control solutions shown in
In an anti-collision warning scenario shown in
In an anti-collision warning scenario shown in
In an anti-collision warning scenario shown in
In addition, it is assumed that
In an anti-collision warning scenario or a longitudinal driving assistance scenario shown in
In a horizontal driving assistance scenario shown in
It should be understood that
1101: Obtain sight information of a driver, where the sight information includes a focus sector.
For explanations of the sight information and the focus sector and a manner of obtaining the sight information, refer to the foregoing related descriptions. Details are not described again.
In some implementations, the focus sector is obtained based on at least blind zone information and/or obstacle information. In other words, a factor of a blind zone and/or an obstacle is considered in a process of determining the focus sector, and a corresponding adjustment is made. Alternatively, it may be understood that impact of the blind zone and/or the obstacle is eliminated when the focus sector is determined. In this way, accuracy of the focus sector, that is, accuracy of the sight information, can be improved, so that accuracy of vehicle control is improved.
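As an illustration, the following is a minimal Python sketch of removing a blind zone, or a range occluded by an obstacle, from a candidate focus sector, treating each as a yaw-angle interval; the interval representation and the example angles are illustrative assumptions.

```python
def subtract_blocked_interval(sector, blocked):
    """Remove a blocked yaw-angle interval (a blind zone or a range occluded
    by an obstacle) from a sector interval; returns the remaining pieces."""
    (lo, hi), (b_lo, b_hi) = sector, blocked
    pieces = []
    if b_lo > lo:
        pieces.append((lo, min(hi, b_lo)))
    if b_hi < hi:
        pieces.append((max(lo, b_hi), hi))
    return [(a, b) for a, b in pieces if a < b]

# Example: an assumed A-pillar blind zone at 40-50 degrees carved out of a
# front-window sector spanning 0-45 degrees.
print(subtract_blocked_interval((0.0, 45.0), (40.0, 50.0)))  # [(0.0, 40.0)]
```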
1102: Obtain a control policy based on at least the sight information.
For the control policy, refer to the foregoing related descriptions. Details are not described again.
In some embodiments, the control policy may include at least one of the following: an anti-collision warning policy, an autonomous emergency braking policy, an adaptive cruise control policy, a lane departure warning policy, a lane keeping assist policy, a lane centering assist policy, or the like.
In some implementations, a driving intention of the driver may be obtained based on the sight information and vehicle information. The driving intention may be lane keeping, turning, lane changing, or the like, or may be acceleration, deceleration, parking, or the like.
In some other implementations, the control policy may be obtained based on the driving intention and the sight information. For example, assuming that the driving intention of the driver is lane keeping, a static obstacle in another lane may not be considered. The control policy may be obtained based only on the driving intention, or the control policy may be obtained based on the driving intention and the sight information. Alternatively, the control policy may be further obtained with reference to another factor such as a predicted traveling track of the vehicle.
In some other implementations, the driving intention of the driver may be first obtained based on the sight information and the vehicle information, and the control policy is then obtained based on the driving intention.
The driving intention may be obtained with a model such as a trained neural network model. For example, the sight information and the vehicle information may be processed with the trained neural network model, to obtain the driving intention. The neural network model may be understood as a model that establishes a correspondence between an input and an output. The input is the sight information and the vehicle information, and the output is the driving intention. It may be understood that the neural network model establishes a mapping relationship between input data and a label. Herein, a mapping relationship between the sight information and the driving intention and a mapping relationship between the vehicle information and the driving intention are established. Training makes the foregoing mapping relationships more accurate.
The neural network model may be a convolutional neural network, a deep neural network, a recurrent neural network, a long short-term memory neural network, or the like. Training data includes input data and a label, the input data includes the sight information and the vehicle information, the label includes the driving intention, and each piece of input data corresponds to one label. In a training process, the training data is used to update parameters of the neural network model (for example, an initial neural network model), to obtain a trained neural network model. The trained neural network model may be used in the foregoing vehicle control method.
In some other implementations, the control policy may alternatively be obtained based on a reaction time and the focus sector, or the control policy may be obtained based on the reaction time and the driving intention, or the control policy may be obtained based on the reaction time, the driving intention, and the focus sector. For description of the reaction time, refer to the foregoing description.
As described above, the reaction time may be obtained with reference to the attention information. The following describes a method for obtaining the attention information with reference to
In some embodiments, the vehicle information such as a steering wheel angle, a vehicle speed, and a turning angle (that is, a turning angle of the front of the vehicle) may be processed with a neural network model represented by RNN in the figure, and the driver status monitoring information may be processed with a neural network model represented by NN in the figure. Results obtained through the foregoing processing are input into a fully connected (fully connected, FC) layer represented by FC in the figure, to obtain attention information. The attention information is information indicating an attention degree of the driver.
In some implementations, a neural network used to process the vehicle information may be, for example, a recurrent neural network such as an LSTM, and a neural network used to process the driver status monitoring information may be, for example, a multilayer perceptron (multilayer perceptron, MLP) neural network.
The neural network model may also be obtained through training with the training data. For a process, refer to the foregoing description of training of the neural network model.
It should be understood that, the RNN, the NN, and the FC in the foregoing figure may be considered as jointly forming an attention model. Inputs to the attention model are the vehicle information and the driver status monitoring information, and an output is the attention information. Alternatively, it may be understood that the attention model is used to process the vehicle information and the driver status monitoring information, to obtain the attention information. In a specific example, the attention model includes an LSTM, an MLP, and an FC. The LSTM is configured to: process the vehicle information, and input an obtained processing result into the FC.
The MLP is configured to: process the driver status monitoring information, and input an obtained processing result into the FC. The FC is configured to continue processing the processing results from the LSTM and the MLP to obtain the attention information.
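The following is a minimal PyTorch sketch of such an attention model, assuming the vehicle information arrives as a short time sequence of numeric features and the driver status monitoring information as a flat feature vector; the feature sizes and layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionModel(nn.Module):
    """LSTM for the vehicle-information sequence, MLP for the driver status
    monitoring information, and an FC layer fusing both into an attention
    degree, mirroring the RNN/NN/FC structure described above."""
    def __init__(self, n_vehicle=3, n_dms=16, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_vehicle, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(n_dms, hidden), nn.ReLU())
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, vehicle_seq, dms_feat):
        # vehicle_seq: (batch, time, n_vehicle), e.g. steering wheel angle,
        # vehicle speed, and turning angle at each time step
        _, (h_n, _) = self.lstm(vehicle_seq)
        fused = torch.cat([h_n[-1], self.mlp(dms_feat)], dim=-1)
        return torch.sigmoid(self.fc(fused))  # attention degree in (0, 1)

model = AttentionModel()
attention = model(torch.randn(2, 10, 3), torch.randn(2, 16))  # shape (2, 1)
```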
In the method shown in
In some embodiments, the method shown in
As shown in
As shown in
As shown in
In
As described above,
The obtaining unit 2001 and the processing unit 2002 may be configured to perform the vehicle control method in embodiments of this application. In some embodiments, the obtaining unit 2001 may perform the foregoing step 1101, and the processing unit 2002 may perform the foregoing step 1102.
The apparatus 2000 may be the vehicle control apparatus 100 shown in
The apparatus 2000 may further include a display unit 2003, and the display unit 2003 is configured to display the foregoing focus sector. The display unit 2003 may be further configured to present a warning signal to the driver in the form of a picture. The display unit 2003 may alternatively be integrated into the processing unit 2002.
It should be understood that the processing unit 2002 in the apparatus 2000 may be equivalent to a processor 3002 in an apparatus 3000 in the following.
The memory 3001 may be a read only memory (read only memory, ROM), a static storage device, a dynamic storage device, or a random access memory (random access memory, RAM). The memory 3001 may store a program. When the program stored in the memory 3001 is executed by the processor 3002, the processor 3002 and the communication interface 3003 are configured to perform the steps of the vehicle control method in embodiments of this application.
The processor 3002 may be a general-purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (application-specific integrated circuit, ASIC), a graphics processing unit (graphics processing unit, GPU), or one or more integrated circuits, and is configured to execute a related program, to implement functions that need to be performed by units in the vehicle control apparatus in this embodiment of this application, or perform the vehicle control method in method embodiments of this application.
Alternatively, the processor 3002 may be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps of the vehicle control method in this application may be implemented with an integrated logic circuit of hardware in the processor 3002 or instructions in a form of software. Alternatively, the processor 3002 may be a general-purpose processor, a digital signal processor (digital signal processor, DSP), an ASIC, a field programmable gate array (field programmable gate array, FPGA) or another programmable logic device, a discrete gate or a transistor logic device, or a discrete hardware component. The processor 3002 may implement or perform the methods, the steps, and logical block diagrams that are disclosed in embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Steps of the methods disclosed with reference to embodiments of this application may be directly executed and accomplished with a hardware decoding processor, or may be executed and accomplished with a combination of hardware and software modules in the decoding processor. A software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read only memory, a programmable read only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 3001. The processor 3002 reads information in the memory 3001, and completes, in combination with hardware of the processor 3002, functions that need to be performed by the units included in the vehicle control apparatus in this embodiment of this application, or performs the vehicle control method in method embodiments of this application.
The communication interface 3003 uses a transceiver apparatus, for example but not for limitation, a transceiver, to implement communication between the apparatus 3000 and another device or a communication network. For example, the foregoing sight information may be obtained with the communication interface 3003.
The bus 3004 may include a path for transmitting information between components (for example, the memory 3001, the processor 3002, and the communication interface 3003) of the apparatus 3000.
It should be noted that, although only the memory, the processor, and the communication interface are shown in the apparatus 3000 shown in
A person of ordinary skill in the art may be aware that, in combination with the examples described in embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different apparatuses to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.
It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, method, and apparatus may be implemented in another manner. For example, the described apparatus embodiment is merely an example. For example, division into the units is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented with some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units are integrated into one unit.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash disk (USB flash disk, UFD) (a UFD may also be referred to as a U disk or a flash drive), a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
This application is a continuation of International Application No. PCT/CN2021/109557, filed on Jul. 30, 2021, the disclosure of which is hereby incorporated by reference in its entirety.
 | Number | Date | Country
---|---|---|---
Parent | PCT/CN2021/109557 | Jul 2021 | US
Child | 18425750 | | US