This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2020-0172549, filed on Dec. 10, 2020 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
The present disclosure relates to a vehicle that performs autonomous driving based on signals acquired from a camera and various sensors.
Autonomous driving technology for vehicles is a technology that enables a vehicle to automatically drive by understanding the road conditions without a driver controlling a brake, a steering wheel, an accelerator pedal, or the like.
Autonomous driving technology is a key technology for the realization of smart cars, and for autonomous vehicles, includes a highway driving assist (HDA) system for automatically maintaining the distance between vehicles, a blind spot detection (BSD) system for sensing a neighboring vehicle during backward driving and producing an alert, an automatic emergency braking (AEB) system for operating a braking apparatus when a driver fails to recognize a preceding vehicle, a lane departure warning system (LDWS), a lane keeping assist system (LKAS) for preventing a drift out of a lane without a turn signal, an advanced smart cruise control (ASCC) system for performing auto cruise at a designated speed while maintaining a distance between vehicles, a traffic jam assistant (TJA) system, a parking collision-avoidance assist (PCA) system, and the like.
In particular, for the PCA system, research on sensors used for lateral collision avoidance assist and a control logic thereof is being actively conducted.
In performing the above-described autonomous driving, the vehicle may use signals acquired by various sensors provided in the vehicle.
According to an embodiment, the vehicle may perform the above-described autonomous driving using sensors, such as a radar and a LiDAR, and a camera.
On the other hand, sensors used for autonomous driving conventionally perform recognition, determination, and control based on a fixed recognition range.
In the conventional technology, there is a limitation in that only a fixed recognition performance is obtained from the fixed recognition range and the fixed hardware performance of a sensor. Therefore, studies to overcome such a limitation are being actively conducted.
Therefore, it is an object of the present disclosure to provide a vehicle capable of performing safe autonomous driving by selecting a recognition area of a sensor for performing autonomous driving and maximizing the performance of the sensor according to a situation.
Additional aspects of the present disclosure are set forth in part in the description which follows and, in part, should be understood from the description, or may be learned by practice of the present disclosure.
According to an aspect of the present disclosure, there is provided a vehicle performing autonomous driving, the vehicle including: a communication part; a driving part configured to drive the vehicle and acquire information about an element that drives the vehicle; an information acquisition part including a camera, a radar, and a LiDAR; and a control part. In one embodiment, the control part is configured to: determine road condition information of a road on which the vehicle travels based on a signal acquired from the communication part; determine travelling information of the vehicle based on information acquired from the driving part; receive a recognition result of the information acquisition part; determine a required performance based on the road condition information, the travelling information, and the recognition result; and change an object recognition performance of the information acquisition part based on the required performance.
The control part, when the required performance is related to improving a recognition accuracy of one area of a surrounding area of the vehicle, may change a recognition area of the radar to a vicinity of the one area.
The control part, when the required performance is related to acquiring information about a moving object around the vehicle, may change a recognition area of the radar to a vicinity of the moving object.
The control part, when the required performance is related to improving a resolution to acquire information about one area of a surrounding area of the vehicle, may change a recognition area of the LiDAR to a center of the one area.
The control part, when the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, may improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range.
The control part may be configured to, among pieces of surrounding information about a specific area acquired by a plurality of modules forming the information acquisition part, in response to an existence of at least one module having acquired different surrounding information about the specific area, perform control to cause the information acquisition part to acquire the surrounding information by assigning a high weight to the at least one module that has acquired the different surrounding information.
The control part may be configured to, based on a performance of at least one module that forms the information acquisition part, determine the required performance for changing a recognition weight of the at least one module. The control part may also be configured to change the object recognition performance of the information acquisition part based on the required performance.
The control part may be configured to, based on a type of an object included in a surrounding image of the vehicle acquired by the information acquisition part, determine the required performance for changing a weight of the surrounding image of the vehicle corresponding to the object.
These and/or other aspects of the present disclosure should become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings.
Like numerals refer to like elements throughout the specification. Not all elements of embodiments of the present disclosure will be described, and descriptions of what are commonly known in the art or what overlap each other in the embodiments are omitted. The terms as used throughout the specification, such as “~ part”, “~ module”, “~ member”, “~ block”, and the like, may be implemented in software and/or hardware, and a plurality of “~ parts”, “~ modules”, “~ members”, or “~ blocks” may be implemented in a single element, or a single “~ part”, “~ module”, “~ member”, or “~ block” may include a plurality of elements.
It is further understood that the term “connect” or its derivatives refer both to direct and indirect connection, and the indirect connection includes a connection over a wireless communication network.
It is further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof, unless the context clearly indicates otherwise.
Although the terms “first,” “second,” “A,” “B,” and the like may be used to describe various components, the terms do not limit the corresponding components, but are used only for the purpose of distinguishing one component from another component.
As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Reference numerals used for method steps are just used for convenience of explanation, but not to limit an order of the steps. Thus, unless the context clearly dictates otherwise, the written order may be practiced otherwise.
When a component, device, element, or the like of the present disclosure is described as having a purpose or performing an operation, function, or the like, the component, device, or element should be considered herein as being “configured to” meet that purpose or to perform that operation or function.
Hereinafter, the principles and embodiments of the present disclosure are described with reference to the accompanying drawings.
Referring to the drawings, the vehicle 1 may include a communication part 300, a driving part 400, an information acquisition part 200, and a control part 100.
The communication part 300 may communicate with an external server and devices.
Specifically, the communication part 300 may receive road condition information of a road on which the vehicle travels.
The road condition information may include a Global Positioning System (GPS) signal and map information transmitted from an external server.
The communication part 300 may include one or more components that enable communication with an external device, and may include, for example, at least one of a short-range communication module, a wired communication module, and a wireless communication module.
The driving part 400 may be provided as a device capable of driving a vehicle.
According to an embodiment, the driving part 400 may include an engine, and may include various components for driving the engine.
Specifically, the driving part 400 may include a brake and a steering device and may be provided without limitation as long as it can implement driving of a vehicle.
The information acquisition part 200 may include a radar 210, a LiDAR 220, and a camera 230.
The radar sensor 210 may refer to a sensor that emits an electromagnetic wave approximating microwaves (e.g., an ultrahigh frequency wave with a wavelength of 10 cm to 100 cm) toward an object, and receives the electromagnetic wave reflected from the object, to detect the distance, direction, altitude, and the like of the object.
The LiDAR sensor 220 may refer to a sensor that emits a laser pulse, receives the light reflected from a surrounding target object, and measures the distance to the object, to thereby precisely depict the surroundings.
The camera 230 may be provided as a component to acquire a surrounding image of the vehicle 1.
According to an embodiment, cameras 230 may be provided at the front, rear, and sides of the vehicle 1 to acquire images.
The camera 230 installed in the vehicle may include a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) color image sensor. The CCD and the CMOS may each refer to a sensor that converts light received through a lens of the camera 230 into an electric signal. In detail, the CCD camera 230 refers to an apparatus that converts an image into an electric signal using a charge-coupled device. In addition, a CMOS image sensor (CIS) refers to a low-consumption, low-power image pickup device having a CMOS structure, and serves as an electronic film of a digital device. In general, the CCD has a sensitivity superior to that of the CIS and thus is widely used in the vehicle 1, but the present disclosure is not limited thereto.
The control part 100 may include an important area determining part 110 and a recognition area adjusting part 120.
The control part 100 may determine road condition information of a road on which the vehicle travels based on a signal acquired from the communication part 300.
The road condition information may refer to a concept including road information determined by precision map information, such as a road curvature, a speed limit, and/or a road width. The road condition information may also refer to concepts including road surrounding information and a degree of risk determined based on traffic information, accident information, and accident frequency/history information.
The control part 100 may determine vehicle travelling information based on information acquired from the driving part 400.
The travelling information of the vehicle 1 may refer to information including a vehicle behavior based on sensors of the vehicle 1, such as a steering angle, a brake pedal, an accelerator pedal, a turn indicator, a gear state, revolutions per minute (RPM), a braking pressure, an acceleration, and a yaw rate.
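For illustration only, the road condition information and the travelling information described above can be pictured as simple data records. The following Python sketch is not part of the disclosed apparatus, and all field names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class RoadConditionInfo:
    """Road information determined from precision map and traffic data."""
    curvature: float    # road curvature
    speed_limit: float  # speed limit (km/h)
    road_width: float   # road width (m)
    risk_level: float   # degree of risk from accident frequency/history


@dataclass
class TravellingInfo:
    """Vehicle behavior based on in-vehicle sensors."""
    steering_angle: float     # steering angle (degrees)
    brake_pedal: float        # brake pedal position, 0.0 to 1.0
    accelerator_pedal: float  # accelerator pedal position, 0.0 to 1.0
    rpm: float                # engine revolutions per minute
    yaw_rate: float           # yaw rate (degrees/s)
```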
In addition, the control part 100 may receive a recognition result of the information acquisition part 200.
The recognition result may refer to a sensor performance degradation or a sensor abnormal state, such as a recognition error of a sensor, determined based on the radar, camera, and LiDAR information.
The control part 100 may determine a required performance (e.g., a required operation) based on the road condition information, the vehicle travelling information, and the recognition result.
The required performance may include a recognition priority set by the vehicle 1 for each recognition area around the vehicle 1.
The control part 100 may change an object recognition performance of the information acquisition part 200 based on the required performance.
The changing of the object recognition performance may refer to an operation of changing the use priority of a radar, a LiDAR, and a camera in a specific area, or changing the weight and priority of an area acquired by each module.
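A minimal sketch of such a per-area priority and weight table, assuming hypothetical area labels and module names (the disclosure does not prescribe any particular data structure):

```python
# Hypothetical per-area configuration: for each surrounding area, a use
# priority of the modules and a recognition weight for the area itself.
recognition_config = {
    "front_far":  {"priority": ["lidar", "radar", "camera"], "weight": 1.0},
    "front_near": {"priority": ["camera", "radar", "lidar"], "weight": 1.0},
    "left_side":  {"priority": ["radar", "camera", "lidar"], "weight": 1.0},
    "right_side": {"priority": ["radar", "camera", "lidar"], "weight": 1.0},
}


def change_recognition_performance(config, area, priority=None, weight=None):
    """Change the object recognition performance for one area by updating
    the module use priority and/or the weight assigned to the area."""
    if priority is not None:
        config[area]["priority"] = priority
    if weight is not None:
        config[area]["weight"] = weight
    return config


# Example: emphasize the near front area and prefer the camera there.
change_recognition_performance(
    recognition_config, "front_near",
    priority=["camera", "lidar", "radar"], weight=2.0)
```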
The control part 100 may, when the required performance is an operation of improving a recognition accuracy of one area of a surrounding area of the vehicle 1, change a recognition area of the radar 210 to a vicinity of the one area.
In other words, when acquiring information about an object existing in a specific area, the control part 100 may more accurately acquire information about the corresponding area while less accurately acquiring information about the remaining area using the radar 210.
The control part 100 may, when the required performance is related to acquiring information about a moving object around the vehicle 1, change the recognition area of the radar 210 to a vicinity of the moving object. In other words, the control part 100 may acquire motion information of a surrounding object using the radar 210, and if there is a specific object, may improve the recognition accuracy to acquire motion information of the object in the corresponding area.
The control part 100 may, when the required performance is related to acquiring information about one area of a surrounding area of the vehicle 1 by improving the resolution, change the recognition area of the LiDAR 220 to the center of the one area.
The control part 100 may, when the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle 1, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera 230 to a predetermined range.
As is described below, the camera 230 may acquire an image of a surrounding area of the vehicle 1 and classify an object in a specific area of each area. Accordingly, the control part 100, when there is a required performance for improving the classification characteristic of a specific area, may improve the classification characteristic of the corresponding area and reduce the classification characteristic of the other areas.
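One way to picture this trade-off is as a normalized weight map over image areas. The following sketch is illustrative only, with hypothetical area names and an arbitrary gain factor:

```python
def set_classification_focus(area_weights, focus_areas, gain=2.0):
    """Improve the classification characteristic of the focus areas of the
    camera image and reduce that of the remaining areas, then normalize."""
    adjusted = {
        area: (weight * gain if area in focus_areas else weight / gain)
        for area, weight in area_weights.items()
    }
    total = sum(adjusted.values())
    return {area: weight / total for area, weight in adjusted.items()}


# Example: four image areas with equal weights; focus on the lower areas.
weights = set_classification_focus(
    {"upper_left": 1.0, "upper_right": 1.0,
     "lower_left": 1.0, "lower_right": 1.0},
    focus_areas={"lower_left", "lower_right"})
```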
Among pieces of surrounding information of a specific area acquired by a plurality of modules constituting the information acquisition part 200, the control part 100, in response to an existence of at least one module having acquired different surrounding information of the specific area, may perform control to cause the information acquisition part 200 to assign a higher weight to the at least one module, and acquire the surrounding information.
The control part 100, based on a performance of at least one module that forms the information acquisition part 200, may determine the required performance for changing a recognition weight of the at least one module, and change the object recognition performance of the information acquisition part 200 based on the required performance.
Specifically, when a specific module among the plurality of modules has a performance different from those of other modules and thus provides information different from that acquired by the other modules, the control part 100 may change the object recognition performance of the corresponding module to acquire information about a surrounding object.
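A hedged sketch of this weighting rule, assuming a simple majority comparison is used to detect the module that acquired different information (the disclosure does not specify how the difference is detected):

```python
from collections import Counter


def reweight_modules(results, weights, boost=1.5):
    """When at least one module reports surrounding information about a
    specific area that differs from the other modules, assign that module
    a higher recognition weight, as described above (illustrative rule)."""
    majority, _ = Counter(results.values()).most_common(1)[0]
    for module, result in results.items():
        if result != majority:  # module that acquired different information
            weights[module] *= boost
    return weights


# Example: the LiDAR reports something the camera and radar did not.
new_weights = reweight_modules(
    {"camera": "clear", "radar": "clear", "lidar": "object"},
    {"camera": 1.0, "radar": 1.0, "lidar": 1.0})
```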
The control part 100, based on the type of an object included in a surrounding image of the vehicle 1 acquired by the information acquisition part 200, may determine the required performance for changing a weight of the surrounding image of the vehicle 1 corresponding to the object. The operation may include, when an object is included in a surrounding image of the vehicle 1, assigning a higher weight and priority to an area in which the object is located and acquiring object information. Details thereof are described below.
The control part 100 may include a memory (not shown) for storing data regarding an algorithm for controlling the operations of the components of the vehicle 1 or a program that represents the algorithm. The control part 100 may also include a processor (not shown) that performs the above described operations using the data stored in the memory. In this case, the memory and the processor may be implemented as separate chips. Alternatively, the memory and the processor may be implemented as a single chip.
At least one component may be added or omitted to correspond to the performances of the components of the vehicle described above.
Some of the components described above may be implemented as software components and/or hardware components.
Referring to the drawings, the sensors provided in the vehicle 1 may have different recognition ranges.
Specifically, a narrow-angle front camera Z31 among the cameras 230 of the vehicle 1 may acquire information about an area up to a distance of 250 m in front of the vehicle 1.
In addition, a radar sensor Z32 provided in the vehicle 1 may acquire information about an area up to 160 m in front of the vehicle 1.
In addition, a main front camera Z33 among the cameras 230 provided in the vehicle 1 may acquire information about an area up to a distance of 150 m in front of the vehicle 1. In addition, the main front camera Z33 may acquire a wider range of information compared to the narrow-angle front camera Z31.
In addition, a wide-angle front camera Z34 among the cameras 230 provided in the vehicle 1 may acquire information about an area up to a distance of 60 m in front of the vehicle 1. The wide-angle front camera Z34 may acquire a wider range of surrounding information of the vehicle 1 compared to the narrow-angle front camera Z31 or the main front camera Z33.
In addition, an ultrasonic sensor Z35 provided in the vehicle 1 may acquire information about a surrounding of the vehicle 1 in a range of about 8 m around the vehicle 1.
On the other hand, a side camera Z36 facing rearward among the cameras 230 provided in the vehicle 1 may acquire information about an area up to a distance of 100 m behind the vehicle 1. Similarly, a rear camera Z37 facing rearward may acquire information about an area up to a distance of 100 m behind the vehicle 1.
On the other hand, the recognition ranges described above are merely examples, and the present disclosure is not limited thereto.
Referring to the drawings, the camera 230 may acquire a surrounding image of the vehicle 1 and divide the image into a plurality of areas to recognize an object included in each area.
The camera 230 may have a superior object classification performance compared to other sensors. In addition, the camera 230 may process a recognition type for each selected recognition area.
On the other hand, the control part 100 may determine a required performance for applying different classification performances to different areas of the image acquired by the camera 230.
For example, when an object to be identified exists in areas 22, 23, 34, and 33, the control part 100 may improve the classification performances of the corresponding areas and reduce the classification performances of the remaining areas.
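As an illustration of this example, assuming the areas are numbered on a 4x4 grid (the actual grid layout is not specified here):

```python
# Hypothetical 4x4 grid of camera image areas, numbered 11..44 as in the
# example above (row digit followed by column digit), all starting at 1.0.
grid_weights = {10 * row + col: 1.0
                for row in range(1, 5) for col in range(1, 5)}


def focus_grid_areas(grid, target_areas, gain=2.0):
    """Improve the classification performance of the target areas and
    reduce that of the remaining areas."""
    return {idx: (w * gain if idx in target_areas else w / gain)
            for idx, w in grid.items()}


focused = focus_grid_areas(grid_weights, target_areas={22, 23, 33, 34})
```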
Referring to the drawings, the radar 210 may recognize areas around the vehicle 1, and the control part 100 may change the recognition area of the radar 210.
When a specific area needs to have an improved recognition accuracy, the recognition area of the radar 210 may be selectively applied to improve the recognition accuracy.
In addition, when the speed and distance accuracy need to be improved, the recognition area of the radar 210 may be selectively applied.
The area around the vehicle recognized by the radar 210 may include left and right areas in front of the vehicle 1 shown in the drawings.
For example, when an object is located in an area Z4-2, the control part 100 may assign the area Z4-2 a higher weight and assign the remaining areas lower weights to acquire a larger amount of information about the corresponding area.
In addition, according to another embodiment, when an object is located in an area Z4-1 and motion information of the object is to be acquired, the control part 100 may assign the area Z4-1 a higher weight and assign the remaining areas lower weights to acquire a larger amount of information about the corresponding area.
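A minimal sketch of such area weighting, assuming a fixed set of radar recognition areas; the labels other than Z4-1 and Z4-2, as well as the numeric weights, are hypothetical:

```python
def focus_radar_area(areas, focus_area, focus_weight=0.7):
    """Assign the focus area a higher weight and distribute the remaining
    weight equally over the other radar recognition areas."""
    rest = (1.0 - focus_weight) / (len(areas) - 1)
    return {area: (focus_weight if area == focus_area else rest)
            for area in areas}


# Example: emphasize area Z4-2 when an object is located there.
radar_weights = focus_radar_area(["Z4-1", "Z4-2", "Z4-3", "Z4-4"], "Z4-2")
```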
Referring to the drawings, the LiDAR 220 may recognize areas around the vehicle 1, and the control part 100 may change the recognition area of the LiDAR 220.
When the resolution of a specific area needs to be improved, the control part 100 may improve the resolution by narrowing the recognition area to the corresponding area. Similarly, when the distance accuracy needs to be improved, the control part 100 may selectively apply the recognition area.
For example, when an object is located in an area Z5-2, the control part 100 may determine the area Z5-2 to have a higher resolution and acquire a larger amount of information about the corresponding area.
In addition, according to another embodiment, when an object is located in an area Y5-2, the control part 100 may determine the area Y5-2 to have a higher resolution and acquire a larger amount of information about the area Y5-2.
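One way to model the resolution improvement is as a redistribution of a fixed scan-point budget toward the focus area. The sketch below is illustrative, and the area labels other than Z5-2 and Y5-2 are made up:

```python
def concentrate_lidar_points(total_points, areas, focus_area, share=0.6):
    """Give the focus area a larger share of the available scan points,
    which raises the point density (resolution) of that area."""
    rest = (1.0 - share) / (len(areas) - 1)
    return {area: int(total_points * (share if area == focus_area else rest))
            for area in areas}


# Example: concentrate scan points on area Z5-2.
points = concentrate_lidar_points(
    100_000, ["Z5-1", "Z5-2", "Y5-1", "Y5-2"], "Z5-2")
```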
The operations described below may be performed based on the travelling information of the vehicle 1.
Referring to the drawings, a situation in which the brake pedal of the vehicle 1 is operated may represent a situation in which an object exists near the vehicle 1. In this case, the control part 100 may determine a near area among front areas of the radar 210 and the LiDAR 220 as a specific area, improve the recognition accuracy of the corresponding area, and improve the resolution (Z6a).
In addition, with regard to the recognition of the camera 230, the control part 100 may improve the classification characteristics of lower part images in a surrounding image of the vehicle 1 (Z6b).
A situation in which the accelerator pedal of the vehicle 1 is operated may represent a situation in which the probability of driving straight in the lane is high.
Accordingly, in this case, the control part 100 may determine a distant area among front areas of the radar 210 and the LiDAR 220 as a specific area, improve the recognition accuracy of the corresponding area, and improve the resolution (Z7a).
In addition, with regard to the recognition of the camera 230, the control part 100 may improve the classification characteristics of images of central areas of the upper and lower parts in the surrounding image of the vehicle 1 (Z7b).
A situation in which the steering wheel of the vehicle is operated may represent a case in which there is a high probability that a lane change to the left or right and/or a left or right turn may occur.
In this case, the control part 100 may determine side areas of the radar 210 and the LiDAR 220 as a specific area, improve the recognition accuracy of the corresponding area, and improve the resolution (Z8a).
In addition, with regard to the recognition of the camera 230, the control part 100 may improve the classification characteristics of the images of lower and left/right sides of the surrounding image of the vehicle 1 (Z8b).
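The three situations above suggest a simple mapping from driver inputs to a focus area. The sketch below assumes that the near front area is emphasized during braking, that positive steering angles mean a leftward turn, and that the thresholds are arbitrary placeholders:

```python
def select_focus_area(brake_pedal, accelerator_pedal, steering_angle_deg):
    """Map driver inputs to the area whose recognition should be
    emphasized, following the examples above (illustrative only)."""
    if abs(steering_angle_deg) > 5.0:  # lane change or turn likely
        return "left_side" if steering_angle_deg > 0 else "right_side"
    if brake_pedal > 0.1:              # object likely close ahead
        return "front_near"
    if accelerator_pedal > 0.1:        # likely driving straight in the lane
        return "front_far"
    return "front_near"


area = select_focus_area(brake_pedal=0.0, accelerator_pedal=0.4,
                         steering_angle_deg=0.0)  # -> "front_far"
```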
The operation described above is merely an example of changing the object recognition performance based on the travelling information of the vehicle 1, and the present disclosure is not limited thereto.
Referring to the drawings, the vehicle 1 may travel on a road in which a tunnel exists in front of the vehicle 1.
The vehicle 1 may recognize that a tunnel exists in front of the vehicle 1 through map information received by the communication part 300, determine an entry area Z9 as an important area, and improve the classification performance of the camera 230. The improving of the classification performance may include an operation of increasing the weight of the entry area Z9 and decreasing the weight of the remaining areas.
In addition, in this case, the vehicle 1 may determine the recognition area of the radar 210 as the entry area Z9 and may increase the resolution of the LiDAR 220 to the corresponding area.
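A hedged sketch of determining such an important area from map information, assuming positions are simplified to one-dimensional distances along the route (the disclosure does not specify the map data format):

```python
def important_area_from_map(map_objects, vehicle_position_m, horizon_m=300.0):
    """Determine an important area, such as a tunnel entry ahead, from map
    information received through the communication part (illustrative)."""
    for obj in map_objects:
        distance = obj["position_m"] - vehicle_position_m
        if obj["type"] == "tunnel_entry" and 0.0 <= distance <= horizon_m:
            return {"area": "tunnel_entry", "distance_m": distance}
    return None


# Example: a tunnel entry 150 m ahead becomes the important area.
area = important_area_from_map(
    [{"type": "tunnel_entry", "position_m": 1250.0}],
    vehicle_position_m=1100.0)
```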
Referring to the drawings, the control part 100 may receive recognition results from the modules forming the information acquisition part 200. In the drawings, R1, R2, and R3 may refer to the recognition results acquired by the respective modules of the information acquisition part 200.
In addition, the control part 100 may determine the final surrounding object information using R1, R2, and R3 (Rt).
In the example shown in the drawings, when at least one module acquires surrounding information about a specific area that differs from the surrounding information acquired by the other modules, the control part 100 may assign a higher weight to the recognition result of the at least one module in determining the final surrounding object information Rt.
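A minimal sketch of combining the module results into Rt as a weighted vote, assuming the weight of a differing module has already been raised as described above:

```python
def fuse_results(results, weights):
    """Combine recognition results R1, R2, and R3 into a final result Rt
    by a weighted vote (illustrative only)."""
    scores = {}
    for module, detected in results.items():
        scores[detected] = scores.get(detected, 0.0) + weights[module]
    return max(scores, key=scores.get)


# Example: with its raised weight, the differing LiDAR result prevails.
rt = fuse_results({"camera": "clear", "radar": "clear", "lidar": "object"},
                  {"camera": 1.0, "radar": 1.0, "lidar": 2.5})
```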
On the other hand, while the above description has been made in relation to the radar 210, the LiDAR 220, and the camera 230, the present disclosure is not limited thereto.
Referring to the drawings, the control part 100 may acquire signals from the communication part 300, the driving part 400, and the information acquisition part 200 (1001).
The control part 100 may determine the travelling situation of the vehicle and the recognition result based on the signals (1002). As described above, the travelling situation may represent a concept including a road situation around the vehicle 1 and a travelling situation of the vehicle 1. The control part 100 may change the object recognition performance of the information acquisition part 200 based on the travelling situation of the vehicle 1 and the recognition result (1003). The changing of the object recognition performance may include changing the recognition area of the radar 210, improving the classification performance of the camera 230, and improving the resolution of the LiDAR 220.
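The sequence 1001 to 1003 can be pictured as a single control step. The rules below are placeholders chosen for illustration, not the disclosed determination logic:

```python
def control_step(road_condition, travelling_info, recognition_result):
    """One pass of the sequence above: signals are assumed to have been
    acquired (1001); determine the travelling situation and the recognition
    result (1002); return a required performance used to change the object
    recognition performance (1003). All rules here are placeholders."""
    if recognition_result.get("degraded_module"):
        # a module reports degraded performance: change its recognition weight
        return ("reweight_module", recognition_result["degraded_module"])
    if road_condition.get("risk_level", 0.0) > 0.5:
        # risky road section: emphasize the near surroundings
        return ("focus_area", "front_near")
    if travelling_info.get("accelerator_pedal", 0.0) > 0.1:
        # likely driving straight: emphasize the distant front area
        return ("focus_area", "front_far")
    return ("keep_current_configuration", None)


required = control_step({"risk_level": 0.2},
                        {"accelerator_pedal": 0.4},
                        {"degraded_module": None})  # -> focus far front
```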
As should be apparent from the above, the vehicle according to an embodiment of the present disclosure can perform safe autonomous driving by selecting a recognition area of a sensor for performing autonomous driving and maximizing the performance of the sensor according to a situation.
Number | Date | Country | Kind
---|---|---|---
10-2020-0172549 | Dec. 10, 2020 | KR | national