The present disclosure relates to the field of computer technologies and, in particular, to a display technology of a vehicle driving state.
Automated driving technology is currently in a stage of rapid development, and vehicles with automated driving functions are increasingly favored by consumers. Automated driving systems have therefore been adopted on various vehicle models to increase their market competitiveness. As part of an automated driving system, high-precision maps have also entered the consumers' field of view. In addition, a vehicle with an intelligent cockpit may provide consumers with an excellent real-time experience.
Automated driving systems often rely on high-precision map data for navigation. However, high-precision map data is costly, has a large data volume, and consumes substantial storage, computing, and/or network resources.
One embodiment of the present disclosure provides a method for displaying vehicle driving state, performed by an electronic device. The method includes obtaining a vehicle driving state of a current vehicle, and obtaining environment sensing information related to a current driving scene of the current vehicle, the environment sensing information comprising a scene image of the current driving scene; performing lane line detection on the scene image to obtain lane information in the current driving scene; generating a driving-state prompt element corresponding to the vehicle driving state based on the lane information, and performing superposed fusion on the driving-state prompt element and the environment sensing information to render and generate an augmented reality map; and displaying the augmented reality map.
Another embodiment of the present disclosure provides an electronic device. The electronic device includes one or more processors and a memory, the memory storing a computer program that, when being executed, causes the one or more processors to perform: obtaining a vehicle driving state of a current vehicle, and obtaining environment sensing information related to a current driving scene of the current vehicle, the environment sensing information comprising a scene image of the current driving scene; performing lane line detection on the scene image to obtain lane information in the current driving scene; generating a driving-state prompt element corresponding to the vehicle driving state based on the lane information, and performing superposed fusion on the driving-state prompt element and the environment sensing information to render and generate an augmented reality map; and displaying the augmented reality map.
Another embodiment of the present disclosure provides a non-transitory computer readable storage medium, including a computer program that, when being executed, causes an electronic device to perform: obtaining a vehicle driving state of a current vehicle, and obtaining environment sensing information related to a current driving scene of the current vehicle, the environment sensing information comprising a scene image of the current driving scene; performing lane line detection on the scene image to obtain lane information in the current driving scene; generating a driving-state prompt element corresponding to the vehicle driving state based on the lane information, and performing superposed fusion on the driving-state prompt element and the environment sensing information to render and generate an augmented reality map; and displaying the augmented reality map.
The accompanying drawings described herein are used for providing a further understanding of the present disclosure, and form part of the present disclosure. Exemplary embodiments of the present disclosure and descriptions thereof are used for explaining the present disclosure, and do not constitute any inappropriate limitation to the present disclosure. In the accompanying drawings:
In order to make objectives, technical solutions, and advantages of embodiments of the present disclosure clearer, the technical solutions of the present disclosure will be clearly and completely described in the following with reference to the accompanying drawings in the embodiments of the present disclosure. It is clear that the embodiments to be described are only a part rather than all of the embodiments of the technical solutions in the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments recorded in the present disclosure without making creative efforts shall fall within the protection scope of the technical solutions in the present disclosure.
The following describes some concepts involved in the embodiments of the present disclosure.
Vehicle driving state: refers to the state in which an object drives a vehicle, and is divided into a manual driving state and an automated driving state. Manual driving means that the vehicle can be driven only with manual intervention, and automated driving means that the surrounding environment can be automatically sensed and responded to, so that the vehicle can be driven without manual intervention. Automated driving may be further divided into multiple states, for example, a first automated driving state, a second automated driving state, and a third automated driving state that are listed in the embodiments of the present disclosure.
Adaptive cruise control (ACC): refers to a function/automated driving state provided by an automated driving system that dynamically adjusts the speed of an ego vehicle according to a cruise speed set by an object and a safe distance from a front vehicle. When the front vehicle accelerates, the ego vehicle also accelerates, up to the set speed. When the front vehicle decelerates, the ego vehicle slows down to keep the safe distance from the front vehicle.
Lane center control (LCC): refers to a function/automated driving state provided by an automated driving system for assisting a driver in controlling a steering wheel, and can keep a vehicle centered in a current lane.
Navigate on autopilot (NOA): This function/automated driving state can guide a vehicle to drive automatically once a destination is set. Under supervision of a driver, the vehicle can complete operations such as changing lanes, overtaking, and automatically driving into and out of a ramp. Driving behaviors of the NOA include: cruising, car following, avoidance, giving way, single rule-based lane change behavior planning (for example, merging into a fast traffic lane or exiting as expected), and multi-condition decision-making lane change behavior (lane change during cruise).
Driving-state prompt element: prompt information that matches a vehicle driving state and is used for prompting an object about the current driving state, so that the object can intuitively learn of the prompt information. It may be an element such as a guide line, a guide area (that is, a guide plane), or a guide point. For example, in the present disclosure, a lane line area corresponds to one or more guide lines, a landing point area corresponds to one guide point, a drivable lane plane corresponds to one guide area, and the like.
Automated driving domain: is a set of software and hardware that are in a vehicle and specifically control automated driving.
Cockpit domain: is a set of software and hardware in a vehicle, such as a center console screen, a dashboard screen, and operation buttons, that is specially used for interacting with an object in a cockpit. In the present disclosure, the cockpit domain specifically refers to the map interactive display part on the center console screen in the cockpit.
Vehicle Ethernet: is a new local area network technology that uses networks to connect electronic units in a vehicle. It can achieve 100 Mbit/s over a single pair of unshielded twisted pairs, and meets the requirements of the automotive industry for high reliability, low electromagnetic radiation, low power consumption, and low delay.
Real-time kinematic (RTK) device: a carrier-phase differential positioning device that provides high-precision (centimeter-level) positioning data in real time.
Maneuver point: is a location, in map navigation, that guides a driver to make a maneuver action such as steering, slowing down, merging into a lane, or driving out of a lane. It is usually a location where a turn, a diversion, or a merge occurs at an intersection.
Vehicle landing point: is a location of an ego vehicle when an automated driving system completes automatic lane change.
The present disclosure relates to a vehicle navigation technology in an intelligent traffic system (ITS). The ITS, also referred to as an intelligent transportation system, effectively integrates advanced science and technologies (for example, an information technology, a computer technology, a data communication technology, a sensor technology, an electronic control technology, an automatic control theory, operational research, and artificial intelligence) into transportation, service control, and vehicle manufacturing, and strengthens a relationship among a vehicle, a road, and a user, forming an integrated transport system that ensures safety, improves efficiency, improves environment, and saves energy.
The vehicle navigation technology is a technology in which a real-time location relationship between a vehicle and a road is mapped to a visual navigation interface based on positioning data provided by a satellite positioning system, providing a navigation function for a vehicle associated object (for example, a vehicle driving object or a vehicle riding object) in a driving process of the vehicle from a start point to an end point. In addition, by using the visual navigation interface, the vehicle associated object can learn of a vehicle driving state, and may further learn of information such as a current location of the vehicle, a driving route of the vehicle, a speed of the vehicle, and a road condition in front of the vehicle.
The following briefly describes a design idea of the embodiments of the present disclosure.
With rapid development of computer technologies, vehicle navigation technologies are widely used in daily life. Currently, in a vehicle navigation process, a visual navigation interface is presented to a vehicle associated object (for example, a vehicle driving object or a vehicle riding object), and the vehicle associated object may learn of related driving information of the vehicle on the navigation interface.
In related automated driving technologies, when high-precision map data is relied on, the required data amount is relatively large, and consumption of storage, computing, and network resources is relatively high. Moreover, when the automated driving state is presented only from the perspective of a map, a multi-aspect fusion manner is not considered: the navigation map and the vehicle driving state are presented in a purely virtual form, and the correlation between the real environment and the driving state is ignored. Alternatively, rendering is performed only from a purely perceptual perspective, so that the intuitiveness of the rendering result needs to be improved, and a user has a relatively high understanding cost and relatively poor experience.
Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a storage medium for displaying a vehicle driving state. In the present disclosure, high-precision map data is discarded, and only a vehicle driving state and related environment sensing information need to be obtained. Based on the two types of information, an augmented reality map is rendered, thereby implementing lightweight drawing, reducing performance consumption, and improving drawing efficiency. In addition, in this process, the environment sensing information is used as a fusion basis, and a scene image of a real scene and the vehicle driving state are fused by using an augmented reality technology, presenting a mapping result of sensing data and the real world, which reflects a correlation between a real environment and the driving state. In addition, in the present disclosure, the generated augmented reality map is displayed, and an automated driving behavior can be displayed in multiple dimensions for associated objects of the vehicle to view separately.
The following describes preferred embodiments of the present disclosure with reference to the accompanying drawings of this specification. It is to be understood that the preferred embodiments described herein are merely used for describing and explaining the present disclosure, and are not used for limiting the present disclosure. In addition, in a case of no conflict, features in the embodiments and the embodiments of the present disclosure may be mutually combined.
In this embodiment of the present disclosure, the terminal device 110 includes but is not limited to a device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-book reader, an intelligent voice interaction device, a smart home appliance, and an in-vehicle terminal. The terminal device in
The method for displaying vehicle driving state in the embodiments of the present disclosure may be performed by an electronic device. The electronic device may be the terminal device 110 or the server 120. That is, the method may be separately performed by the terminal device 110 or the server 120, or may be jointly performed by the terminal device 110 and the server 120. For example, when the method is jointly performed by the terminal device 110 and the server 120, the terminal device 110 (for example, an in-vehicle terminal located on a vehicle) first obtains a vehicle driving state of a current vehicle and environment sensing information related to a current driving scene of the current vehicle, where the environment sensing information includes a scene image of the current driving scene. The terminal device 110 may send the scene image to the server 120, and further, the server 120 performs lane line detection on the scene image to obtain lane information in the current driving scene. Then, the server 120 may notify the terminal device 110 of the obtained lane information, and the terminal device 110 generates a driving-state prompt element corresponding to the vehicle driving state based on the lane information, and renders and generates an augmented reality map by performing superposed fusion on the driving-state prompt element and the environment sensing information. Finally, the augmented reality map is displayed.
In an embodiment, the terminal device 110 may communicate with the server 120 by using a communication network.
In an embodiment, the communication network is a wired network or a wireless network.
In this embodiment of the present disclosure, when there are multiple servers, the multiple servers may form a blockchain, and the server is a node on the blockchain. For the vehicle driving state display method disclosed in the embodiments of the present disclosure, data related to display of the vehicle driving state may be stored in a blockchain, for example, the vehicle driving state, the environment sensing information, the lane information, the driving-state prompt element, and the augmented reality map.
In addition, the embodiments of the present disclosure may be applied to various scenarios, including not only automated driving scenarios, but also scenarios such as cloud technologies, artificial intelligence, smart transportation, and assisted driving.
The following describes, with reference to the foregoing described application scenario and the accompanying drawings, the vehicle driving state display method provided in the exemplary implementations of the present disclosure. The foregoing application scenario is merely shown for ease of understanding the spirit and principle of the present disclosure, and the implementation of the present disclosure is not limited in this aspect.
S21: Obtain a vehicle driving state of a current vehicle, and obtain environment sensing information related to a current driving scene of the current vehicle.
The vehicle driving state may represent whether the current vehicle is currently in a manual driving state or in an automated driving state. The environment sensing information may be environment-related information of the current driving scene in which the current vehicle is located, and may include a scene image of the current driving scene.
In one embodiment, the vehicle driving state may be obtained from an automated driving domain through cross-domain communication between a cockpit domain of the current vehicle and the automated driving domain of the current vehicle.
The cockpit domain and the automated driving domain are two relatively independent processing systems. Cross-domain data transmission may be performed between the two systems over a transmission link such as in-vehicle Ethernet, by using a protocol such as the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), or the Scalable service-Oriented MiddlewarE over IP (SOME/IP) protocol.
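For ease of understanding, the following is a minimal Python sketch of one possible cross-domain transmission over in-vehicle Ethernet using plain UDP datagrams. The endpoint address, port, and message layout (a driving-state code plus a timestamp) are assumptions introduced solely for illustration; an actual deployment would typically rely on SOME/IP service interfaces defined by the vehicle platform.

```python
# Minimal sketch of cross-domain transmission over in-vehicle Ethernet using UDP.
# The message layout (a hypothetical driving-state code and a timestamp) is
# illustrative only and is not prescribed by the present disclosure.
import socket
import struct
import time

COCKPIT_ADDR = ("192.168.10.20", 30501)  # hypothetical cockpit-domain endpoint

def send_driving_state(driving_state_code: int) -> None:
    """Send the current driving-state code from the automated driving domain."""
    payload = struct.pack("<Bd", driving_state_code, time.time())  # state + timestamp
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, COCKPIT_ADDR)

def receive_driving_state(port: int = 30501) -> tuple:
    """Receive and unpack one driving-state message in the cockpit domain."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, _ = sock.recvfrom(64)
        return struct.unpack("<Bd", data)  # (driving_state_code, timestamp)
```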
In one embodiment, the environment sensing information may alternatively be obtained from the automated driving domain through cross-domain communication between the cockpit domain and the automated driving domain.
In this embodiment of the present disclosure, the automated driving domain includes an environment sensing device, and a quantity of the environment sensing devices may be one or more. This is not limited in this embodiment of the present disclosure. The environment sensing information may be obtained by using these environment sensing devices, that is, the environment of the current driving scene of the current vehicle is sensed by using the environment sensing devices.
For example, the environment sensing device may include but is not limited to: an RTK positioning device loaded on an automated driving system, a visual sensing device (such as an image collector), a radar sensing device (such as a laser radar and an ultrasonic radar), an infrared detector, a pressure sensor, and the like.
When the environment sensing device includes an image collector, a scene image may be collected by using the image collector (for example, a camera). For example, a camera shown in
In addition, the environment sensing information may further include traffic participants (including a vehicle, a pedestrian, and a cyclist) around the current vehicle, traffic information such as traffic sign information and signal lamp information, status information of the current vehicle, positioning location information of the current vehicle, and the like. For example, distances to front and rear vehicles may be detected by using a radar, whether a pedestrian exists in front of the vehicle and a distance to the pedestrian are detected by using an infrared detector, and whether the vehicle encounters a collision or the like is determined by using a pressure sensor.
In one embodiment, a manner of obtaining the environment sensing information related to the current driving scene of the current vehicle may be: collecting, by using the automated driving domain, sensing data measured by the environment sensing device, transmitting the sensing data from the automated driving domain to the cockpit domain by using the cross-domain communication, and further determining the environment sensing information based on the sensing data obtained in the cockpit domain.
It may be understood that if the environment sensing device includes an image collector (for example, a camera), the collected sensing data may include a scene image. In this case, when the sensing data is transmitted from the automated driving domain to the cockpit domain, the collected sensing data may be packaged, the packaged sensing data is transmitted to the cockpit domain, and the packaged sensing data is directly used as the environment sensing information.
In addition, the environment sensing device further includes an RTK positioning device and a radar sensing device. The environment sensing information is generated by the RTK positioning device, the vision sensing device, and the radar sensing device that are loaded on the automated driving system. The right side of
Vehicle driving state information of the current vehicle is obtained from the automated driving domain (which is collected based on the RTK positioning device, the camera, and the like shown in
In a case in which the sensing data collected by the automated driving domain does not include the scene image, the scene image may be alternatively collected by using the cockpit domain.
In an embodiment, a manner of determining the environment sensing information based on the sensing data obtained in the cockpit domain may be collecting, by using the cockpit domain, a scene image related to the current driving scene of the current vehicle, and performing fusion based on the scene image and the sensing data to obtain the environment sensing information.
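As an illustrative sketch only, the fused environment sensing information may be organized as a simple structure that pairs the cockpit-collected scene image with the sensing data received from the automated driving domain. The field names below are hypothetical, and time alignment and coordinate transforms are omitted.

```python
# Minimal sketch of fusing a cockpit-collected scene image with sensing data
# received from the automated driving domain; field names are hypothetical.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SensingData:
    timestamp: float
    positioning: tuple                         # e.g. RTK latitude/longitude
    obstacles: list = field(default_factory=list)

@dataclass
class EnvironmentSensingInfo:
    scene_image: Any                           # image captured in the cockpit domain
    sensing_data: SensingData

def fuse(scene_image: Any, sensing_data: SensingData) -> EnvironmentSensingInfo:
    # Time alignment and coordinate transforms are omitted in this sketch.
    return EnvironmentSensingInfo(scene_image=scene_image, sensing_data=sensing_data)
```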
Compared with
Then, the lane information may be extracted from the scene image. Further, a corresponding driving-state prompt element is generated based on the lane information, and fusion rendering is performed, to implement an effect of drawing automated driving state information on an AR map with reference to a state of an ego vehicle.
In this embodiment of the present disclosure, through cross-domain transmission and dual-domain data linkage, excellent automated cockpit experience can be provided.
S22: Perform lane line detection on the scene image to obtain lane information in the current driving scene.
In this embodiment of the present disclosure, AR recognition is performed on the scene image, and lane information such as a road plane, a lane, and a lane line is recognized based on a lane line detection technology.
In a specific implementation process, lane line detection may be first performed on the current scene image to obtain multiple lane lines in the scene image. There are many lane line detection algorithms in an actual application process, which are not specifically limited in this specification.
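For ease of understanding, the following Python sketch shows one classical lane line detection approach based on OpenCV (Canny edge detection followed by a probabilistic Hough transform). It is only one of many possible algorithms, and the threshold values are illustrative assumptions rather than values prescribed by the present disclosure.

```python
# A minimal classical lane line detection sketch using OpenCV (Canny edges +
# probabilistic Hough transform). Threshold values are illustrative assumptions.
import cv2
import numpy as np

def detect_lane_lines(scene_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(scene_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the image, where the road surface usually lies.
    height, width = edges.shape
    mask = np.zeros_like(edges)
    mask[height // 2:, :] = 255
    roi_edges = cv2.bitwise_and(edges, mask)

    # Each detected segment is returned as (x1, y1, x2, y2) in image coordinates.
    lines = cv2.HoughLinesP(roi_edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)
```

A learned lane detector may equally be substituted; the remainder of the pipeline only requires the detected lane lines.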
After road plane lane lines are recognized, a lane and a drivable road plane area are determined according to the lane lines.
When the drivable road plane area is recognized, the foregoing recognized lanes may be combined to determine the drivable road plane area. Alternatively, a manner shown in
The foregoing enumerated manner of recognizing lane information is merely an example for description. Actually, any manner of obtaining related lane information based on a scene image is suitable according to various embodiments of the present disclosure, and is not specifically limited herein.
S23: Generate a driving-state prompt element corresponding to the vehicle driving state based on the lane information, and perform superposed fusion on the driving-state prompt element and the environment sensing information to render and generate an augmented reality map.
For the current vehicle, there may be many vehicle driving states. In an automated driving scene, vehicle driving states may be correspondingly switched and jumped.
State jump is divided into two categories: function upgrade and function downgrade. The function upgrade refers to a step-by-step upgrade from fully manual driving to higher-order automated driving: manual driving can be upgraded directly to ACC, LCC, or NOA, or ACC can be enabled first, then LCC, and finally NOA level by level. The function downgrade is the reverse of the function upgrade, indicating a gradual downgrade from higher-order automated driving to fully manual driving. Similarly, NOA may be downgraded directly to LCC, ACC, or manual driving, or it may first exit to LCC, then exit to ACC, and finally exit to manual driving level by level.
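As a minimal sketch of the foregoing jump logic, the driving states may be modeled as an ordered hierarchy in which a jump toward a higher level is a function upgrade and a jump toward a lower level is a function downgrade; the Python enumeration below is illustrative only.

```python
# Minimal sketch of the driving-state jump logic: manual driving, ACC, LCC, and
# NOA form an ordered hierarchy, and a jump may move one level at a time or
# directly to any other level (upgrade or downgrade).
from enum import IntEnum

class DrivingState(IntEnum):
    MANUAL = 0
    ACC = 1
    LCC = 2
    NOA = 3

def classify_jump(current: DrivingState, target: DrivingState) -> str:
    if target > current:
        return "function upgrade"
    if target < current:
        return "function downgrade"
    return "no change"

# Example: a direct upgrade and a direct downgrade.
assert classify_jump(DrivingState.MANUAL, DrivingState.ACC) == "function upgrade"
assert classify_jump(DrivingState.NOA, DrivingState.MANUAL) == "function downgrade"
```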
In this embodiment of the present disclosure, for each vehicle driving state, a corresponding driving-state prompt element may be generated based on the foregoing lane information. The driving-state prompt element may be prompt information that matches a function of the vehicle driving state, and the driving-state prompt element may be a guide line, a guide plane, a guide point, or the like. For example, when the current vehicle driving state is an automatic lane change state, a corresponding driving-state prompt element is to be an element that can represent a lane change related prompt, such as a lane change route (guide line), a lane after lane change (guide plane), and a location after lane change (guide point).
These driving-state prompt elements are superposed and fused with the environment sensing information, so that an augmented reality map that can represent the vehicle driving state can be rendered and generated. An augmented reality technology is used for fusing a scene image of a real scene with the vehicle driving state, so that an automated driving behavior can be presented to a vehicle associated object in multiple dimensions, thereby improving driving fun. In addition, the environment sensing information is used as a fusion basis, and a mapping result between the sensing data and the real world is presented, so that what the vehicle associated object sees corresponds directly to the real world, thereby reducing an understanding cost of the vehicle associated object.
In one embodiment, the environment sensing information may further include positioning location information of the current vehicle. In this case, a manner in which the driving-state prompt element and the environment sensing information are superposed and fused, and the augmented reality map is rendered and generated may be: rendering, in the cockpit domain based on the positioning location information of the current vehicle, other information in the environment sensing information to generate an augmented reality map used for navigation, the other information being information in the environment sensing information except the positioning location information; and superposing and drawing, in a triangulation manner, the driving-state prompt element on the augmented reality map, and highlighting the driving-state prompt element in the augmented reality map. With reference to the positioning location information of the current vehicle, an augmented reality map that is more capable of reflecting the actual location of the current vehicle is generated, so that navigation is more accurately performed.
In one embodiment, there may be a deviation in the positioning location information in the environment sensing information. To ensure accuracy of a final augmented reality map, a correction operation may be performed on the positioning location information of the current vehicle with reference to the AR navigation information. Then, other information in the environment sensing information is integrated into the AR map based on the corrected positioning location information, and an augmented reality map used for navigation is rendered and generated. Finally, all the integrated information is displayed in the AR map.
In this embodiment of the present disclosure, before the driving-state prompt element corresponding to the vehicle driving state is generated based on the lane information, the positioning location information of the current vehicle is corrected with reference to the augmented reality navigation information for the current vehicle, thereby further improving accuracy of a final result.
S24: Display the augmented reality map.
After the augmented reality map is obtained, the augmented reality map may be displayed for the vehicle associated object to view. In one embodiment, the augmented reality map may be displayed on a display screen. A quantity of display screens may be one or more (that is, at least one), which is not limited in this embodiment of the present disclosure. In one embodiment, the display screen may be a display screen included in the cockpit domain.
The display screen may be a display screen that is in the current vehicle and that is used for displaying an AR map, or may be another display screen. The AR map may be displayed in some or all areas of these display screens, and this may depend on display policies used by different display screens for the AR map.
In this embodiment of the present disclosure, the display screen that can support display of the AR map in the vehicle may include but is not limited to: a center console screen, a dashboard screen, an augmented reality head up display (AR-HUD) screen, and the like.
In some embodiments, the cockpit domain in the present disclosure includes at least one display screen, that is, the augmented reality map may be displayed on at least one display screen included in the cockpit domain.
The following uses an example in which the cockpit domain includes an AR-HUD screen, a dashboard screen, and a center console screen.
The AR-HUD screen 91 is a display screen on the front windshield at the driving location. Because the AR map may be displayed in full screen, when the AR map is displayed by using the AR-HUD screen, the foregoing mentioned display screen may be the entire area of the AR-HUD screen.
The dashboard screen shown in
The center console screen 93 refers to a display screen on the central console, and is mainly used for displaying content such as vehicle audio, navigation, vehicle information, and rearview camera images. Because the center console screen may display the AR map in full screen or in split screen, when the AR map generated in S23 is displayed on the center console screen, the foregoing display screen may be a partial area or the entire area of the center console screen.
The foregoing enumerated several AR map display manners are merely examples for description. In addition, another display screen may be further used for displaying the AR map in the present disclosure, for example, a mobile phone screen, a tablet computer screen, or a third screen in a vehicle. This is not specifically limited in this specification.
As such, a multi-dimensional presentation matrix (a center console screen, a dashboard screen, an AR-HUD screen, and the like) can be provided to facilitate separate viewing by the vehicle associated object, thereby improving the sense of technology and increasing the object's degree of trust.
In one embodiment, in a driving process of the current vehicle, a navigation interface may be displayed, and the navigation interface includes the AR map generated in S23. The AR map is used for presenting the current driving scene of the current vehicle and the vehicle driving state of the current vehicle.
Based on the schematic diagram of the product prototype shown in
As shown in
In an embodiment, the process of “performing superposed fusion on the driving-state prompt element and the environment sensing information to render and generate a corresponding augmented reality map” in S23 may be further divided into the following substeps:
S231: Render and generate, based on the environment sensing information, an augmented reality map for navigation. In this case, the obtained AR map may be referred to as an initial AR map.
S232: Superpose and draw, in a triangulation manner, the driving-state prompt element on the augmented reality map, and highlight the driving-state prompt element in the augmented reality map to obtain a final AR map.
There are many manners of highlighting an element in an image, such as highlighted display, magnified display, marking with a special color, marking with a special pattern, and the like. In this specification, highlighted display is used as an example for description, and another manner is also applicable. This is not specifically limited in this specification.
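For ease of understanding, the following Python sketch illustrates the triangulation manner in its simplest form: a prompt element bounded by aligned left and right boundary points is split into triangles that can be superposed and drawn on the AR map. The rendering call itself is omitted, and the function name is hypothetical.

```python
# Minimal sketch of the triangulation manner used for superposing a prompt
# element: a strip bounded by aligned left/right boundary points is split into
# triangles that a renderer can draw on top of the AR map.
import numpy as np

def triangulate_strip(left_pts: np.ndarray, right_pts: np.ndarray) -> list:
    """left_pts and right_pts are (N, 2) arrays of aligned boundary points."""
    assert left_pts.shape == right_pts.shape
    triangles = []
    for i in range(len(left_pts) - 1):
        # Two triangles per quad formed by consecutive left/right point pairs.
        triangles.append((left_pts[i], right_pts[i], right_pts[i + 1]))
        triangles.append((left_pts[i], right_pts[i + 1], left_pts[i + 1]))
    return triangles
```

The resulting triangles may then be submitted to whatever drawing interface the cockpit-domain renderer provides, together with the chosen highlighting style.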
In addition, considering that there are many automated driving states, a switching logic is complex, and figure presentation in various states is different. With reference to several vehicle driving states listed in
Case 1: If the vehicle driving state is a manual driving state, the driving-state prompt element includes a drivable lane plane corresponding to a lane in which the current vehicle is currently located.
In the manual driving state, the vehicle can be driven only with manual intervention. Therefore, only the lane in which the ego vehicle is located needs to be prompted. For example, in the manual driving state, the lane in which the ego vehicle is located may be drawn as a highlighted area that fills the lane, where the area is the drivable lane plane, and highlighting is a manner of highlighting the driving-state prompt element.
In this embodiment of the present disclosure, the generated drivable lane plane is drawn on an initial AR map in a triangulation manner, to obtain a final AR map that highlights the driving-state prompt element corresponding to the manual driving state.
In this case, the corresponding driving-state prompt element may be generated in the following manner. For a specific process, refer to
First, lane information of the current scene image obtained through lane line recognition is obtained. An example in which the lane information includes lane lines, for example, the lane line 1, the lane line 2, the lane line 3, and the lane line 4 that are marked in
Further, based on the positioning location information of the current vehicle, the lane in which the current vehicle is currently located is determined from the lane information, and each lane line of that lane is sampled to obtain multiple first sampling points. Affected by the sampling interval, sampling points on the left and right lane lines may not correspond to each other; in this case, the multiple obtained first sampling points further need to be aligned and missing points supplemented to obtain supplemented first sampling points, in which the sampling points on each lane line correspond to each other. Then, a drivable lane plane corresponding to the manual driving state is generated based on the supplemented first sampling points.
As shown in
Affected by the sampling interval, if the sampling points on the left and right lane lines do not correspond to each other, the multiple obtained first sampling points need to be aligned, and missing points need to be supplemented. In
Additionally, if left-right alignment can be directly performed, there is no need to supplement the missing point, that is, this step of supplementing the missing point may be determined according to an actual situation. This is not specifically limited in this specification.
Finally, based on the supplemented first sampling points, a drivable lane plane corresponding to the manual driving state may be generated. In this embodiment of the present disclosure, the drivable lane plane may be superposed and drawn on the AR map in a triangulation drawing manner, and finally, the final AR map shown in
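The following Python sketch summarizes Case 1 under simplifying assumptions: the left and right lane lines of the current lane are resampled by arc length so that their sampling points correspond one-to-one (one possible way of aligning and supplementing missing points), and the aligned point pairs form the boundary of the drivable lane plane that is then triangulated as sketched above.

```python
# Minimal sketch of Case 1: sample the left and right lane lines of the current
# lane, align the samples so that points on both lines correspond one-to-one,
# and use the aligned point pairs as the drivable lane plane boundary.
import numpy as np

def resample_polyline(points: np.ndarray, num_samples: int) -> np.ndarray:
    """Resample an (N, 2) polyline to num_samples points, evenly spaced by arc length."""
    seg = np.diff(points, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    target = np.linspace(0.0, dist[-1], num_samples)
    xs = np.interp(target, dist, points[:, 0])
    ys = np.interp(target, dist, points[:, 1])
    return np.stack([xs, ys], axis=1)

def drivable_lane_plane(left_line: np.ndarray, right_line: np.ndarray, num_samples: int = 20):
    left = resample_polyline(left_line, num_samples)    # aligned first sampling points
    right = resample_polyline(right_line, num_samples)
    return left, right  # boundary point pairs fed to the triangulation step sketched above
```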
Case 2: When the vehicle driving state is a first automated driving state, the driving-state prompt element includes a lane line area corresponding to the lane in which the current vehicle is currently located. The first automated driving state is a lane center control enabled state.
In a state in which LCC is enabled, the vehicle is kept at the center of the current lane. Therefore, the current lane needs to be locked, that is, the object is prompted through figure lane locking that the LCC driving state has been entered, and the lane lines on the left and right sides of the lane in which the ego vehicle is located are used as prompts. For example, in the first automated driving state, the lane lines on the left and right sides of the lane in which the ego vehicle is located may be drawn as highlighted lane lines, and highlighting these two lane lines, that is, the lane line area, is a manner of highlighting the driving-state prompt element.
In this embodiment of the present disclosure, the generated lane line area is superposed and drawn on the initial AR map in a triangulation manner, to obtain a final AR map in which the corresponding driving-state prompt element of the first automated driving state is highlighted.
In this case, the corresponding driving-state prompt element may be generated in the following manner. For a specific process, refer to
First, lane information of the current scene image obtained through lane line recognition is obtained. An example in which the lane information includes lane lines, for example, the lane line 1, the lane line 2, the lane line 3, and the lane line 4 that are marked in
Further, each lane line of the lane in which the current vehicle is currently located is extracted from lane information based on the positioning location information of the current vehicle. Still using the foregoing as an example, the lane in which the current vehicle is currently located is the lane formed by the lane line 2 and the lane line 3. In this process, the two lane lines included in the current lane need to be separately extracted. In
After each lane line is extracted, a first width is separately extended in a first preset direction by using a location of each lane line as a center, to obtain a lane line area corresponding to the first automated driving state. The first preset direction is a direction perpendicular to the lane line.
This step is a line-extended-into-plane process. The lane line 2 is used as an example, and the first preset direction is a direction perpendicular to the lane line 2. As shown in
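The following Python sketch illustrates the line-extended-into-plane step under simplifying assumptions: each sampled lane line point is offset by half of the first width on both sides along the direction perpendicular to the local lane line direction, producing the narrow ribbon that constitutes the lane line area. The width value is an illustrative assumption.

```python
# Minimal sketch of the line-extended-into-plane step for the first automated
# driving state: a sampled lane line is widened into a narrow ribbon (the lane
# line area) by offsetting each point perpendicular to the local direction.
import numpy as np

def extend_line_into_plane(line_pts: np.ndarray, width: float = 0.3):
    """line_pts is an (N, 2) polyline; returns the left/right ribbon boundaries."""
    seg = np.gradient(line_pts, axis=0)                      # local direction
    normals = np.stack([-seg[:, 1], seg[:, 0]], axis=1)      # perpendicular direction
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    half = width / 2.0
    return line_pts + half * normals, line_pts - half * normals
```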
In this embodiment of the present disclosure, the two lane line areas may be superposed and drawn on the AR map in a triangulation drawing manner, and finally, the final AR map shown in
Case 3: When the vehicle driving state is a second automated driving state, the driving-state prompt element includes a lane center area corresponding to the lane in which the current vehicle is currently located. The second automated driving state is a navigate on autopilot enabled state.
In a state in which NOA is enabled, a destination may be set to guide the vehicle to drive automatically, operations such as changing lanes, overtaking, and automatically driving into and out of a ramp may be completed under supervision of the driver, and the object may be prompted by using a figure guide line that the NOA driving state has currently been entered. For example, in the second automated driving state, a guide line may be drawn based on the lane center line of the lane in which the ego vehicle is located, and this guide line is the lane center area.
In this embodiment of the present disclosure, the generated lane center area is superposed and drawn on the initial AR map in a triangulation manner, to obtain a final AR map in which the corresponding driving-state prompt element of the second automated driving state is highlighted.
In this case, the corresponding driving-state prompt element may be generated in the following manner. For a specific process, refer to
First, lane information of the current scene image obtained through lane line recognition is obtained. An example in which the lane information includes lane lines, for example, the lane line 1, the lane line 2, the lane line 3, and the lane line 4 that are marked in
Further, based on the positioning location information of the current vehicle, the lane in which the current vehicle is currently located is determined from the lane information, and each lane line of the lane in which the current vehicle is currently located is sampled to obtain multiple first sampling points. As shown in
Further, the multiple obtained first sampling points are aligned, and a missing point is supplemented to obtain supplemented first sampling points, sampling points on each lane line in the supplemented first sampling points being corresponding to each other. In
Finally, a second width is extended in a second preset direction by using a location of the lane center line of the current lane as a center, to obtain a lane center area corresponding to the second automated driving state; and the second preset direction is a direction perpendicular to the lane center line.
This step is a process of extending the lane center line leftward and rightward, and the second preset direction is a direction perpendicular to the lane center line. As shown in
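The following Python sketch illustrates, under simplifying assumptions, how the lane center area may be obtained: aligned left and right sampling points are averaged point by point to obtain the lane center line, which is then widened by the second width in the perpendicular direction. The width value is an illustrative assumption.

```python
# Minimal sketch of the lane center area for the second automated driving state:
# the lane center line is the point-wise average of aligned left/right sampling
# points, widened into a guide-line ribbon perpendicular to the center line.
import numpy as np

def lane_center_area(left_pts: np.ndarray, right_pts: np.ndarray, second_width: float = 0.6):
    """Aligned (N, 2) left/right sampling points -> lane center area boundaries."""
    center = (left_pts + right_pts) / 2.0                    # point-wise average
    seg = np.gradient(center, axis=0)                        # local direction
    normals = np.stack([-seg[:, 1], seg[:, 0]], axis=1)      # perpendicular direction
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    half = second_width / 2.0
    return center + half * normals, center - half * normals
```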
In this embodiment of the present disclosure, the lane center area may be superposed and drawn on the AR map in a triangulation drawing manner, and finally, the final AR map shown in
Case 4: When the vehicle driving state is a third automated driving state, the driving-state prompt element includes at least one of a lane change route area, a target lane area after lane change, and a vehicle landing area; the third automated driving state is an automatic lane change state; and the vehicle landing area is used for representing a location of the current vehicle in a target lane in a case of changing to the target lane.
In the automatic lane change state, the object may be prompted by using a figure lane change guide (such as a blue guide line, a highlighted target lane, and a target vehicle landing point) that a lane change operation is being performed.
An example in which the driving-state prompt element includes a lane change route area, a target lane area after lane change, and a vehicle landing area is used. In this embodiment of the present disclosure, the generated lane change route area, the target lane area after lane change, and the vehicle landing area are superposed and drawn on the initial AR map in a triangulation manner, to obtain a final AR map in which the corresponding driving-state prompt element of the third automated driving state is highlighted.
In this case, the corresponding driving-state prompt element may be generated in the following manner. For a specific process, refer to
First, a lane change route planned for the current vehicle is obtained from the automated driving system, that is, a step of automated driving route data in
Further, a third width is extended in a third preset direction by using a location of the lane change route as a center, to obtain a lane change route area corresponding to the third automated driving state. The third preset direction is a direction perpendicular to the lane change route.
This step is a line-extended-into-plane process, and the third preset direction is a direction perpendicular to the lane change route. As shown in
In this embodiment of the present disclosure, the lane change route area may be superposed and drawn on the AR map in a triangulation drawing manner, and finally, 181 shown in
First, lane information of the current scene image obtained through lane line recognition is obtained. An example in which the lane information includes lane lines, for example, the lane line 1, the lane line 2, the lane line 3, and the lane line 4 that are marked in
Further, based on the lane information, each lane line (that is, the lane line 3 and the lane line 4) of a target lane after lane change of the current vehicle is extracted, and each lane line of the target lane is sampled to obtain multiple second sampling points. Still using the foregoing as an example, the lane after lane change of the current vehicle is the lane formed by the lane line 3 and the lane line 4. In this process, the two lane lines included in the target lane need to be separately extracted, represented by multiple sampling points, and denoted as second sampling points. This corresponds to the process of extracting the target lane shown in
Further, the multiple obtained second sampling points are aligned, and a missing point is supplemented to obtain supplemented second sampling points, sampling points on each lane line in the supplemented second sampling points being corresponding to each other. As shown in the alignment step in
Further, the target lane area corresponding to the third automated driving state is generated based on the supplemented second sampling points.
In this embodiment of the present disclosure, the target lane area may be superposed and drawn on the AR map in a triangulation drawing manner, and finally, 182 shown in
First, lane information of the current scene image obtained through lane line recognition is obtained. An example in which the lane information includes lane lines, for example, the lane line 1, the lane line 2, the lane line 3, and the lane line 4 that are marked in
Further, based on the lane information, each lane line (that is, the lane line 3 and the lane line 4) of the target lane after lane change of the current vehicle is extracted, and each lane line of the target lane is sampled to obtain multiple second sampling points. Further, the multiple obtained second sampling points are aligned, and a missing point is supplemented to obtain supplemented second sampling points, sampling points on each lane line in the supplemented second sampling points being corresponding to each other. This is the same as the process of extracting the target lane in
Then, the lane center line of the target lane is obtained based on the average value of the groups of aligned second sampling points in the supplemented second sampling points. That is, each group of corresponding second sampling points (one sampling point on the left lane line 3 and one on the right lane line 4 at corresponding locations) is averaged to obtain a corresponding center point, and these center points are then connected to form the lane center line of the target lane.
In the present disclosure, the current vehicle driving in the middle of the lane is used as an example for description. Therefore, after the lane center line of the target lane is determined, a vehicle landing point location of the current vehicle on the lane center line of the target lane is determined according to a driving speed of the current vehicle, a preset lane change time, and the positioning location information of the current vehicle, that is, the process of calculating the vehicle landing point in
Further, based on the size of the current vehicle, the vehicle landing point location may be extended to a vehicle landing area corresponding to the third automated driving state, that is, the line-extended-into-plane process in
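The following Python sketch illustrates, under simplifying assumptions, the calculation of the vehicle landing point and its extension into the vehicle landing area: the landing point is taken on the target lane center line at the arc-length distance covered during the preset lane change time, and is then expanded into a rectangle according to the vehicle size. All numeric values are illustrative assumptions.

```python
# Minimal sketch of the vehicle landing area for the automatic lane change state.
import numpy as np

def landing_point(center_line: np.ndarray, speed_mps: float, lane_change_time_s: float) -> np.ndarray:
    """Walk along the (N, 2) target lane center line by speed * time and return that point."""
    travel = speed_mps * lane_change_time_s
    seg = np.diff(center_line, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    travel = min(travel, dist[-1])
    x = np.interp(travel, dist, center_line[:, 0])
    y = np.interp(travel, dist, center_line[:, 1])
    return np.array([x, y])

def landing_area(point: np.ndarray, vehicle_length: float = 4.8, vehicle_width: float = 1.9):
    """Expand the landing point into a rectangle of the vehicle size.

    The rectangle here is axis-aligned (width along x, length along y); a real
    implementation would orient it along the lane heading.
    """
    half_l, half_w = vehicle_length / 2.0, vehicle_width / 2.0
    return np.array([point + [-half_w, -half_l], point + [half_w, -half_l],
                     point + [half_w, half_l], point + [-half_w, half_l]])
```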
In this embodiment of the present disclosure, the vehicle landing area may be superposed and drawn on the AR map in a triangulation drawing manner, and finally, 183 shown in
In this embodiment of the present disclosure, highlighting styles corresponding to different vehicle driving states may be the same or different. The foregoing listed highlighting styles are merely simple examples for description, and other highlighting styles are also encompassed within the scope of the present disclosure. In addition, in the present disclosure, the styles of the driving-state prompt elements drawn on the AR map are adjustable. The drawn states are also extensible and are not limited to the states enumerated in the present disclosure. For example, related driving-state prompt elements such as an automated driving road turn deceleration prompt, a stop-and-give-way prompt, a large-vehicle avoidance prompt, an automated driving function exit prompt, a manual takeover prompt, and an abnormal state prompt may be presented in the AR map. Details are not described herein.
The foregoing AR map may be presented through screen projection. The following briefly describes a screen projection logic and a screen projection instance of the AR map in this embodiment of the present disclosure.
Based on the foregoing enumerated screen projection solutions, the AR map in the present disclosure may be projected to the display screen on the current vehicle or another related display screen. The foregoing enumerated dashboard screen is used as an example.
As shown in
In this embodiment of the present disclosure, through multi-dimensional data fusion, an automated driving system state is fused with AR sensed environment information, and a virtual-real fusion target is implemented. In addition, only simple and efficient data processing needs to be performed, and fast and accurate calculation is performed, so that an optimal display effect and performance can be obtained.
S2101: Obtain, from an automated driving domain through cross-domain communication, a vehicle driving state of a current vehicle and sensing data (not including a scene image) related to a current driving scene of the current vehicle.
S2102: After a scene image related to the current driving scene of the current vehicle is collected in a cockpit domain, fuse the scene image with the sensing data to obtain environment sensing information.
S2103: Perform lane line detection on the scene image to obtain lane information in the current driving scene.
S2104: Generate a driving-state prompt element corresponding to the vehicle driving state based on the lane information and positioning location information of the current vehicle.
S2105: Render and generate, based on the environment sensing information, an augmented reality map for navigation.
S2106: Superpose and draw, in a triangulation manner, the driving-state prompt element on the augmented reality map, and highlight the driving-state prompt element in the augmented reality map.
S2107: Display the augmented reality map on multiple display screens of the current vehicle.
Based on the same/similar inventive concept(s), an embodiment of the present disclosure further provides an apparatus for displaying a vehicle driving state.
an obtaining unit 2301, configured to obtain a vehicle driving state of a current vehicle, and obtain environment sensing information related to a current driving scene of the current vehicle, the environment sensing information including a scene image of the current driving scene;
a detection unit 2302, configured to perform lane line detection on the scene image to obtain lane information in the current driving scene;
a fusion unit 2303, configured to: generate a driving-state prompt element corresponding to the vehicle driving state based on the lane information, and perform superposed fusion on the driving-state prompt element and the environment sensing information to render and generate an augmented reality map; and
a display unit 2304, configured to display the augmented reality map.
In one embodiment, the obtaining unit 2301 is specifically configured to:
obtain, through cross-domain communication between a cockpit domain of the current vehicle and an automated driving domain of the current vehicle, the vehicle driving state of the current vehicle from the automated driving domain.
In one embodiment, the automated driving domain includes an environment sensing device; and
the obtaining unit 2301 is further configured to collect, by using the automated driving domain, sensing data measured by the environment sensing device;
transmit the sensing data from the automated driving domain to the cockpit domain by using the cross-domain communication; and
determine the environment sensing information based on the sensing data obtained in the cockpit domain.
In one embodiment, if the sensing data does not include a scene image, the obtaining unit 2301 is further configured to:
collect, by using the cockpit domain, a scene image related to the current driving scene of the current vehicle; and
perform fusion based on the scene image and the sensing data to obtain the environment sensing information.
In one embodiment, the environment sensing information further includes positioning location information of the current vehicle; and the fusion unit 2303 is specifically configured to:
render, in the cockpit domain based on the positioning location information of the current vehicle, other information in the environment sensing information to generate an augmented reality map used for navigation, the other information being information in the environment sensing information except the positioning location information; and
superpose and draw, in a triangulation manner, the driving-state prompt element on the augmented reality map, and highlight the driving-state prompt element in the augmented reality map.
In one embodiment, the fusion unit 2303 is further configured to: before the other information in the environment sensing information is rendered based on the positioning location information of the current vehicle to generate the augmented reality map used for navigation, correct, in the cockpit domain, the positioning location information of the current vehicle with reference to augmented reality navigation information for the current vehicle. The fusion unit 2303 is specifically configured to render, based on the corrected positioning location information, the other information in the environment sensing information to generate an augmented reality map used for navigation.
In one embodiment, the cockpit domain includes a display screen; and the display unit 2304 is specifically configured to:
display the augmented reality map on the display screen included in the cockpit domain.
In one embodiment, the driving-state prompt element is prompt information that matches a function of the vehicle driving state;
when the vehicle driving state is a manual driving state, the driving-state prompt element includes a drivable lane plane corresponding to a lane in which the current vehicle is currently located;
when the vehicle driving state is a first automated driving state, the driving-state prompt element includes a lane line area corresponding to the lane in which the current vehicle is currently located; the first automated driving state is a lane center control enabled state;
when the vehicle driving state is a second automated driving state, the driving-state prompt element includes a lane center area corresponding to the lane in which the current vehicle is currently located; the second automated driving state is a navigate on autopilot enabled state;
when the vehicle driving state is a third automated driving state, the driving-state prompt element includes at least one of a lane change route area, a target lane area after lane change, and a vehicle landing area; the third automated driving state is an automatic lane change state; and the vehicle landing area is used for representing a location of the current vehicle in a target lane in a case of changing to the target lane.
In one embodiment, the vehicle driving state is the manual driving state; and the fusion unit 2303 is specifically configured to:
determine, based on the positioning location information of the current vehicle, the lane in which the current vehicle is currently located from the lane information, and sample each lane line of the lane in which the current vehicle is currently located to obtain multiple first sampling points;
align the multiple first sampling points, and supplement a missing point to obtain supplemented first sampling points, sampling points on each lane line in the supplemented first sampling points being corresponding to each other; and
generate a drivable lane plane corresponding to the manual driving state based on the supplemented first sampling points.
In one embodiment, the vehicle driving state is the first automated driving state; and the fusion unit 2303 is specifically configured to:
extract, from the lane information based on the positioning location information of the current vehicle, each lane line of the lane in which the current vehicle is currently located; and
separately extend by a first width in a first preset direction by using a location of each lane line as a center, to obtain a lane line area corresponding to the first automated driving state; the first preset direction being a direction perpendicular to the lane line.
In one embodiment, the vehicle driving state is the second automated driving state; and the fusion unit 2303 is specifically configured to:
determine, based on the positioning location information of the current vehicle, the lane in which the current vehicle is currently located from the lane information, and sample each lane line of the lane in which the current vehicle is currently located to obtain multiple first sampling points;
align the multiple first sampling points, and supplement a missing point to obtain supplemented first sampling points, sampling points on each lane line in the supplemented first sampling points being corresponding to each other;
obtain a lane center line of the current lane based on an average value of groups of aligned first sampling points in the supplemented first sampling points; and
extend by a second width in a second preset direction by using a location of the lane center line of the current lane as a center, to obtain a lane center area corresponding to the second automated driving state; the second preset direction being a direction perpendicular to the lane center line.
In one embodiment, the vehicle driving state is the third automated driving state; the driving-state prompt element includes the lane change route area; and the fusion unit 2303 is specifically configured to:
obtain, from an automated driving system, a lane change route planned for the current vehicle; and
extend by a third width in a third preset direction by using a location of the lane change route as a center, to obtain a lane change route area corresponding to the third automated driving state;
the third preset direction being a direction perpendicular to the lane change route.
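For illustration only, the planned lane change route can be treated as one more polyline and thickened with the same hypothetical offset_area helper; the placeholder route below and the 3.0 m stand-in for the third width are assumptions, and how the route is obtained from the automated driving system is not specified here.

```python
import numpy as np

# Placeholder route: a gradual shift of about 3.5 m to the left over 30 m.
planned_route = np.stack([np.linspace(0.0, 3.5, 30), np.arange(30.0)], axis=1)
lane_change_route_area = offset_area(planned_route, width=3.0)  # "third width"
```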
In one embodiment, the vehicle driving state is the third automated driving state; the driving-state prompt element includes the target lane area; and the fusion unit 2303 is specifically configured to:
extract, based on the lane information, each lane line of a target lane after lane change of the current vehicle, and sample each lane line of the target lane to obtain multiple second sampling points;
align the multiple second sampling points, and supplement a missing point to obtain supplemented second sampling points, sampling points on each lane line in the supplemented second sampling points corresponding to each other; and
generate a target lane area corresponding to the third automated driving state based on the supplemented second sampling points.
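For illustration only, the target lane area follows the same construction as the drivable lane plane, only applied to the boundary lines of the target lane (the second sampling points); the snippet below reuses the hypothetical helpers from the earlier sketch and the two input lines are placeholders.

```python
# target_left_line and target_right_line are the detected boundary lines of the
# target lane (placeholders); drivable_lane_plane is the helper sketched above.
target_lane_area = drivable_lane_plane(target_left_line, target_right_line)
```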
In one embodiment, the vehicle driving state is the third automated driving state; the driving-state prompt element includes the vehicle landing area; and the fusion unit 2303 is specifically configured to:
extract, based on the lane information, each lane line of a target lane after lane change of the current vehicle, and sample each lane line of the target lane to obtain multiple second sampling points;
align the multiple second sampling points, and supplement a missing point to obtain supplemented second sampling points, sampling points on each lane line in the supplemented second sampling points corresponding to each other;
obtain a lane center line of the target lane based on an average value of groups of aligned second sampling points in the supplemented second sampling points;
determine a vehicle landing point location of the current vehicle on the lane center line of the target lane according to a driving speed of the current vehicle, a preset lane change time, and the positioning location information of the current vehicle; and
extend, based on a size of the current vehicle, the vehicle landing point location to a vehicle landing area corresponding to the third automated driving state.
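For illustration only, the landing-point step described above can be sketched as advancing along the target lane center line by the product of the driving speed and the preset lane change time, and then expanding that point by the vehicle size. The units (metres, seconds), the axis-aligned rectangle, and the default vehicle dimensions are assumptions.

```python
import numpy as np

def vehicle_landing_area(center_line: np.ndarray, speed_mps: float,
                         lane_change_time_s: float,
                         vehicle_length: float = 4.8,
                         vehicle_width: float = 1.9) -> np.ndarray:
    """Advance along the target lane center line by speed * lane change time,
    then expand that landing point into a vehicle-sized rectangle."""
    travel = speed_mps * lane_change_time_s
    seg = np.linalg.norm(np.diff(center_line, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])    # arc length along the line
    travel = min(travel, s[-1])                    # clamp to the known extent
    cx = np.interp(travel, s, center_line[:, 0])
    cy = np.interp(travel, s, center_line[:, 1])
    hx, hy = vehicle_width / 2, vehicle_length / 2
    # Axis-aligned rectangle for simplicity; alignment with the lane heading is omitted.
    return np.array([[cx - hx, cy - hy], [cx + hx, cy - hy],
                     [cx + hx, cy + hy], [cx - hx, cy + hy]])
```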
As disclosed, high-precision map data is discarded: only a vehicle driving state and related environment sensing information need to be obtained. Lane line detection is performed on a scene image to obtain lane information in the current driving scene, and a driving-state prompt element corresponding to the vehicle driving state is generated based on the lane information. The driving-state prompt element is prompt information that matches a function of the vehicle driving state, and may reflect the vehicle driving state. Therefore, the driving-state prompt element is superposed and fused with the environment sensing information, and rendered to generate an augmented reality map for display. In this way, the vehicle driving state is displayed, and an automated driving behavior can be presented in multiple dimensions for objects associated with the vehicle to view separately. Because the high-precision map data is discarded in the vehicle driving process and the augmented reality map is instead rendered and generated based on the vehicle driving state and the environment sensing information, lightweight drawing is implemented, performance consumption is reduced, and drawing efficiency is improved. In addition, in this process, the environment sensing information is used as a fusion basis, and the scene image of the real scene and the vehicle driving state are fused by using an augmented reality technology, to present a mapping result of the sensing data onto the real world, which reflects the correlation between the real environment and the driving state.
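For illustration only, one way such a superposed fusion could look at the pixel level is a simple alpha blend of a filled prompt-element polygon over the scene image (OpenCV is used here as an example). This is not the disclosure's rendering pipeline, and the projection of the polygon from the vehicle frame into image coordinates is assumed to have been done beforehand.

```python
import cv2
import numpy as np

def overlay_prompt_element(scene_img: np.ndarray, polygon_px: np.ndarray,
                           color=(0, 200, 0), alpha: float = 0.35) -> np.ndarray:
    """Superpose a filled, semi-transparent prompt-element polygon onto the scene image."""
    layer = scene_img.copy()
    cv2.fillPoly(layer, [polygon_px.astype(np.int32)], color)
    return cv2.addWeighted(layer, alpha, scene_img, 1.0 - alpha, 0.0)
```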
For convenience of description, the foregoing parts are divided into modules (or units) by function and described separately. Certainly, in implementation of the present disclosure, the functions of the modules (or units) may be implemented in the same piece of or a plurality of pieces of software and/or hardware.
A person skilled in the art can understand that the aspects of the present disclosure may be implemented as systems, methods, or program products. Therefore, the aspects of the present disclosure may be specifically embodied in the following forms: hardware-only implementations, software-only implementations (including firmware, microcode, etc.), or implementations with a combination of software and hardware, which are collectively referred to as “circuit”, “module”, or “system” herein.
Based on the same inventive concept as the foregoing method embodiment, an embodiment of the present disclosure further provides an electronic device. In an embodiment, the electronic device may be a server, such as the server 120 shown in the accompanying drawings.
The memory 2401 is configured to store a computer program executed by the processor 2402. The memory 2401 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, a program required for running an instant messaging function, and the like. The data storage area can store various instant messaging information and operation instruction sets.
The memory 2401 may be a volatile memory such as a random access memory (RAM); the memory 2401 may alternatively be a non-volatile memory such as a read-only memory, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 2401 may be any other medium that can be used for carrying or storing an expected computer program in the form of an instruction or data structure and that can be accessed by a computer, but is not limited thereto. The memory 2401 may be a combination of the foregoing memories.
The processor 2402 may include one or more central processing units (CPU), a digital processing unit, or the like. The processor 2402 is configured to: when invoking the computer program stored in the memory 2401, implement the foregoing vehicle driving state display method.
The communication module 2403 is configured to communicate with a terminal device and another server.
A specific connection medium among the memory 2401, the communication module 2403, and the processor 2402 is not limited in this embodiment of the present disclosure. In this embodiment of the present disclosure, the memory 2401 is connected to the processor 2402 by using a bus 2404.
The memory 2401 includes a computer storage medium, the computer storage medium stores computer executable instructions, and the computer executable instructions are used for implementing the vehicle driving state display method in the embodiment of the present disclosure. The processor 2402 is configured to perform the foregoing vehicle driving state display method.
In another embodiment, the electronic device may alternatively be another electronic device, such as the terminal device 110 shown in the accompanying drawings.
The memory 2520 may be configured to store a software program and data. The processor 2580 runs the software program and the data stored in the memory 2520, to implement various functions and data processing of the terminal device 110. The memory 2520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory, or another non-volatile solid-state storage device. The memory 2520 stores an operating system that enables the terminal device 110 to run. In the present disclosure, the memory 2520 may store the operating system and various application programs, and may further store a computer program that executes the vehicle driving state display method in the embodiment of the present disclosure.
The processor 2580 is a control center of the terminal device, is connected to each part of the entire terminal by using various interfaces and lines, and performs various functions and data processing of the terminal device by running or executing the software program stored in the memory 2520 and invoking the data stored in the memory 2520. In some embodiments, the processor 2580 may include one or more processing units. The processor 2580 may further integrate an application processor and a baseband processor, where the application processor mainly processes an operating system, a user interface, an application program, and the like, and the baseband processor mainly processes wireless communication. It may be understood that the baseband processor may alternatively not be integrated into the processor 2580. In the present disclosure, the processor 2580 may run the operating system and the application programs, display a user interface, respond to touch input, and perform the vehicle driving state display method in the embodiment of the present disclosure. In addition, the processor 2580 is coupled to the display unit 2530.
In some possible implementations, aspects of the vehicle driving state display method provided in the present disclosure may further be implemented in a form of a program product. The program product includes a computer program. When running on an electronic device, the computer program is configured to enable the electronic device to perform the steps of the vehicle driving state display method according to various exemplary implementations of the present disclosure described above. For example, the electronic device may perform the steps shown in the accompanying drawings.
As such, embodiments of the present disclosure provide a vehicle driving state display method and apparatus, an electronic device, and a storage medium. As disclosed, high-precision map data is discarded: only a vehicle driving state and related environment sensing information need to be obtained. Lane line detection is performed on a scene image to obtain lane information in the current driving scene, and a driving-state prompt element corresponding to the vehicle driving state is generated based on the lane information. The driving-state prompt element is prompt information that matches a function of the vehicle driving state, and may reflect the vehicle driving state. Therefore, the driving-state prompt element is superposed and fused with the environment sensing information, and rendered to generate an augmented reality map for display. In this manner, the vehicle driving state is displayed, and an automated driving behavior can be presented in multiple dimensions for objects associated with the vehicle to view separately. Because the high-precision map data is discarded in the vehicle driving process and the augmented reality map is instead rendered and generated based on the vehicle driving state and the environment sensing information, lightweight drawing is implemented, performance consumption is reduced, and drawing efficiency is improved. In addition, in this process, the environment sensing information is used as a fusion basis, and the scene image of the real scene and the vehicle driving state are fused by using an augmented reality technology, presenting a mapping result of the sensing data onto the real world, which reflects the correlation between the real environment and the driving state.
The program product may be any combination of one or more readable mediums. The readable medium may be a computer-readable signal medium or a computer-readable storage medium.
The readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semi-conductive system, apparatus, or device, or any combination thereof. More specific examples (non-exhaustive lists) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination thereof.
The program product in the implementation of the present disclosure may use a portable compact disk read-only memory (CD-ROM) and include a computer program, and may run on an electronic device. However, the program product in the present disclosure is not limited thereto. In this specification, the readable storage medium may be any tangible medium including or storing a program, and the program may be used by or used in combination with an instruction execution system, apparatus, or device.
A readable signal medium may include a data signal being in a baseband or transmitted as a part of a carrier, which carries a computer-readable program. A data signal propagated in such a way may assume a plurality of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any appropriate combination thereof. The readable signal medium may alternatively be any readable medium other than the readable storage medium, and the readable medium may send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device.
The computer program included in the readable medium may be transmitted by using any suitable medium, including but not limited to wireless, wired, optical cable, RF, or the like, or any suitable combination thereof.
A computer program used for performing operations of the present disclosure may be written in any combination of one or more programming languages. The programming languages include object-oriented programming languages such as Java and C++, and further include conventional programming languages such as the “C” language or similar programming languages. The computer program may be completely executed on a user electronic device, partially executed on the user electronic device, executed as an independent software package, partially executed on a remote electronic device, or completely executed on the remote electronic device or a server. In a case involving a remote electronic device, the remote electronic device may be connected to the user electronic device by using any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, connected through the Internet by using an Internet service provider).
Although several units or subunits of the apparatus are mentioned in the foregoing detailed description, such division is merely exemplary and not mandatory. Actually, according to the implementations of the present disclosure, the features and functions of two or more units described above may be specifically implemented in one unit. On the contrary, the features and functions of one unit described above may be further divided to be embodied by a plurality of units.
In addition, although operations of the methods of the present disclosure are described in a specific order in the accompanying drawings, this does not require or imply that these operations must be performed in the specific order, or that all the operations shown must be performed to achieve an expected result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution.
A person skilled in the art can understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may use a form of hardware-only embodiments, software-only embodiments, or embodiments combining software and hardware. In addition, the present disclosure may be in a form of a computer program product implemented on one or more computer-usable storage media (including but not limited to a magnetic disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It is to be understood that computer program instructions can implement each procedure and/or block in the flowcharts and/or block diagrams and a combination of procedures and/or blocks in the flowcharts and/or block diagrams. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so that an apparatus configured to implement functions specified in one or more procedures in the flowcharts and/or one or more blocks in the block diagrams is generated by using instructions executed by the computer or the processor of another programmable data processing device.
Although exemplary embodiments of the present disclosure have been described, persons skilled in the art, once they know the basic creative concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the exemplary embodiments and all changes and modifications falling within the scope of the present disclosure.
Clearly, a person skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. In this case, if the modifications and variations made to the present disclosure fall within the scope of the claims of the present disclosure and their equivalent technologies, the present disclosure is intended to include these modifications and variations.
Number | Date | Country | Kind
---|---|---|---
202211310360.2 | Oct 2022 | CN | national
This application is a continuation application of PCT Patent Application No. PCT/CN2023/120285, filed on Sep. 21, 2023, which claims priority to Chinese Patent Application No. 202211310360.2, filed on Oct. 25, 2022, which is incorporated by reference in its entirety.
 | Number | Date | Country
---|---|---|---
Parent | PCT/CN2023/120285 | Sep 2023 | WO
Child | 18590516 | | US