This application claims priority under 35 U.S.C. § 119 from European Patent Application No. 22193590.1, filed Sep. 2, 2022, the entire disclosure of which is herein expressly incorporated by reference.
The present disclosure relates to a method for processing image data captured by a camera in a vertical image plane, and to a data processing apparatus configured to perform the method at least in part. Additionally or alternatively, a computer program and/or a computer-readable medium for carrying out the method is provided.
In the field of computer vision for automated driving in motor vehicles, 3D perception using single cameras as input (e.g., to detect and identify the objects in the scene in 3D or to find a 3D drivable path) is well known. For this purpose, for example, the Orthographic Feature Transform from Roddick (Roddick, Thomas, Alex Kendall, and Roberto Cipolla. “Orthographic feature transform for monocular 3d object detection.” arXiv preprint arXiv:1811.08188 (2018)) can be used, which transforms image features into a Bird's Eye View (BEV). More specifically, Roddick concerns an orthographic feature transform method for monocular 3D object detection. An approach for extracting 3D bounding boxes from monocular images is described. The algorithm used for this purpose includes five main components, namely (1) a front-end ResNet feature extractor that extracts multi-scale image-based feature maps from an input image, (2) an orthographic feature transform that converts the image-based feature maps at each scale into a bird's-eye view orthographic representation, (3) a top-down network consisting of a set of ResNet residual units that processes the bird's-eye view feature maps in a manner invariant to the perspective effects observed in the image, (4) a set of output heads that generate a confidence score, a position offset, a dimension offset and an orientation vector for each object class and each location on the ground plane, and (5) a non-maximum suppression and decoding stage that identifies peaks in the confidence maps and generates discrete bounding box predictions.

The first element of the proposed architecture (see (1) above: feature extraction) is a convolutional feature extractor that generates a hierarchy of multi-scale 2D feature maps from the raw input image. These features encode information about low-level structures in the image that form the basic components used by the top-down network to construct an implicit 3D representation of the scene. The front-end network is also responsible for deriving depth information based on the size of image features, since subsequent stages of the architecture aim to eliminate perspective scale effects.

In order to infer the 3D world without perspective effects, feature extraction is followed by a mapping of the feature maps extracted in image space to orthographic feature maps in 3D space, referred to as the orthographic feature transform (OFT). The goal of the OFT is to populate the 3D voxel feature map g(x, y, z)∈R^n with relevant n-dimensional features from the image-based feature map f(u, v)∈R^n extracted by the front-end feature extractor (see (1) above). The voxel map is defined over a uniformly spaced 3D grid G located at the ground plane at a distance y0 below the camera and having dimensions W, H, D and a voxel size of r. For a given voxel grid position (x, y, z)∈G, the voxel feature g(x, y, z) is obtained by accumulating features over the area of the image feature map f corresponding to the 2D projection of the voxel. In general, each voxel, which is a cube of size r, projects onto a hexagonal region in the image plane. This region is approximated by a rectangular bounding box with top-left and bottom-right corners (u1, v1) and (u2, v2), which are given by
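u1 = f·(x − r/2)/(z + r/2) + cu, u2 = f·(x + r/2)/(z − r/2) + cu,

v1 = f·(y − r/2)/(z + r/2) + cv, v2 = f·(y + r/2)/(z − r/2) + cv,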
where f is the focal length of the camera and (cu, cv) is the principal point. A feature can then be assigned to the corresponding position in the voxel feature map g by so-called average pooling over the bounding box of the projected voxel in the image feature map f:
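g(x, y, z) = (1/((u2 − u1)·(v2 − v1))) · Σ(u=u1 to u2) Σ(v=v1 to v2) f(u, v)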
The resulting voxel feature map g already provides a representation of the scene that is free from the effects of perspective projection. However, deep neural networks operating on large voxel grids tend to be extremely memory intensive. Since there is particular interest in applications, such as autonomous driving, where most objects are anchored on the 2D ground plane, the problem can be made more manageable by reducing the 3D voxel feature map to a two-dimensional representation called the orthographic feature map h(x, z). The orthographic feature map is obtained by summing the voxel features along the vertical axis after multiplication with a set of learned weight matrices W(y)∈R^(n×n):
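h(x, z) = Σ(y) W(y)·g(x, y, z)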
Converting to an intermediate voxel representation before collapsing it into the final orthographic feature map has the advantage of preserving information about the vertical structure of the scene. This proves essential for downstream tasks such as estimating the height and vertical position of object bounding boxes. A major challenge with the above approach is the need to aggregate features over a very large number of regions. A typical voxel grid generates about 150 k bounding boxes, which is far beyond the ~2 k regions of interest used by the Faster R-CNN architecture, for example. To facilitate pooling over such a large number of regions, a fast average pooling operation based on integral images is used. An integral image, or in this case an integral feature map F, is constructed from an input feature map f using the recursive relation
F(u, v) = f(u, v) + F(u−1, v) + F(u, v−1) − F(u−1, v−1)
Given the integral feature map F, the voxel feature g(x, y, z) corresponding to the bounding box with corners (u1, v1) and (u2, v2) (see the first equation above) is given by
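g(x, y, z) = (F(u1, v1) + F(u2, v2) − F(u1, v2) − F(u2, v1)) / ((u2 − u1)·(v2 − v1))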
The complexity of this pooling operation is independent of the size of each region, making it highly suitable for use in automated driving, where the size and shape of the regions vary significantly depending on whether the voxel is close to or far from the camera. The pooling operation is also fully differentiable with respect to the original feature map f and can therefore be used as part of an end-to-end deep learning framework.
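Purely for illustration, the integral-image pooling described above may be sketched as follows; this is a minimal example assuming a NumPy environment, and the function names as well as the image-axis convention (F indexed as F[v, u]) are chosen here only for the purpose of illustration.

import numpy as np

def integral_feature_map(f):
    # F(u, v) = f(u, v) + F(u-1, v) + F(u, v-1) - F(u-1, v-1), realized as a
    # cumulative sum over both image axes; f has shape (H, W, n). A leading
    # row and column of zeros are added so box sums need no special cases.
    F = np.cumsum(np.cumsum(f.astype(np.float64), axis=0), axis=1)
    return np.pad(F, ((1, 0), (1, 0), (0, 0)))

def box_average(F, u1, v1, u2, v2):
    # Average of f over the box [u1, u2) x [v1, v2) using four lookups,
    # independent of the box size.
    total = F[v2, u2] - F[v1, u2] - F[v2, u1] + F[v1, u1]
    return total / ((u2 - u1) * (v2 - v1))

Each call of box_average then requires only four lookups and one division, regardless of how large the projected voxel region is.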
If the approach described above or similar approaches are used in the motor vehicle, the models are usually quantized to improve runtime speed and reduce memory consumption by a large factor. This means that the parameters and operations are converted from floating-point precision (32 bits) to integers with 8-bit or 16-bit precision. The OFT uses integral images as an intermediate step; more specifically, it uses them to efficiently obtain the summed feature values contained in image regions of different sizes. This is because, when BEV cells are projected into the image, the projections have different sizes depending on the distance between a particular cell and the camera. Since integral images are an accumulated sum over two axes, once this accumulation exceeds the maximum value representable in int8 (8 bits) or int16 (16 bits), overflow occurs and artifacts and incorrect results are produced. This makes the OFT unusable when the models are quantized.
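The overflow can be illustrated with a small numerical example; the map size and feature value used below are hypothetical and serve only to show the wrap-around behaviour of a 16-bit accumulation.

import numpy as np

# Hypothetical quantized feature map: 8-bit activations with value 200.
f = np.full((128, 128), 200, dtype=np.int16)

# Accumulating over both axes exceeds the int16 range (maximum 32767)
# after roughly 164 summed entries, so the integral image wraps around.
F = np.cumsum(np.cumsum(f, axis=0, dtype=np.int16), axis=1, dtype=np.int16)
print(F[-1, -1])  # wrapped value instead of 128 * 128 * 200 = 3,276,800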
Against the background of this prior art, the object of the present disclosure is to provide an apparatus and/or a method, each of which is suitable for enriching the prior art.

The object is achieved by the features of the claimed invention.

Accordingly, the object is achieved by a method for processing image data recorded by a camera, optionally a single camera, in a vertical image plane.
The method may be a computer-implemented method, i.e., one, more, or all steps of the method may be performed at least in part by a data processing device or computer.
The method may also be referred to as a method for transforming image data of a vertical image plane into image data of a horizontal image plane. The vertical image plane may also be referred to as the image plane. The horizontal image plane may also be referred to as the bird's-eye view (BEV).
The method comprises extracting multiple feature maps with different resolutions from the image data captured by the camera in the vertical image plane using a feature extractor.
The feature extractor, which can also be referred to as the backbone, can be a Convolutional Neural Network (CNN). The extracted features can also be referred to as image-based features. The feature map corresponds to the image plane.
The method includes selecting one of the extracted feature maps as a function of a position of a portion of the image data in a horizontal image plane relative to the camera.
The method includes transforming the portion of image data captured in the vertical image plane from the vertical image plane to image data in the horizontal image plane based on the selected feature map.
That is, depending on where the portion of the image data lies in the horizontal plane, the resolution of the extracted feature map is adjusted. Adjusting can include decreasing the resolution or using the original resolution.
Transforming can also be called converting. This can be done in a single step or in several partial steps. It is conceivable that first the feature map is converted into a 3D environment and then a conversion into the horizontal image plane is performed from the 3D environment.
Possible further developments of the above process are explained in detail below.
The method may be performed for at least a portion of the image data acquired in the vertical image plane until a substantially true-to-scale image of the portion in the horizontal image plane is obtained.
Optionally, the entire captured image or all image data can be converted to bird's eye view.
In general, it should be noted that the term “part” can also be understood to mean a “partial area” or “cell”, which may have a predefined size or dimensions, optionally a rectangular shape. A “section” or “area” or “portion”, on the other hand, comprises or consists of several of these parts. All of the sections together make up the image data as a whole, i.e., the image data can be subdivided into the sections, which in turn can be subdivided into the parts. The parts can be arranged in a grid over the image data. Both the parts and the image data correspond to or represent a respective area in the real world.
The resolution may increase as the distance in the horizontal image plane between the camera and the portion of the image data captured in the vertical image plane increases.

In other words, those portions of the Bird's-eye view that are located close to the camera can be determined from a low-resolution feature map, whereas those portions of the Bird's-eye view that are located farther from the camera can be determined from a high-resolution feature map.

The resolution can be adjusted in steps depending on the position in the horizontal image plane, relative to the camera, of the part of the image data captured in the vertical image plane.
In other words, one feature map each of a certain resolution can be generated for certain sections of the Bird's-eye view, which are then used for all parts of the Bird's-eye view arranged in this section. This results in a pyramid-like structure with several feature maps of different resolutions.
The resolution can be adjusted so that only a single pixel access is required to transform the portion of the image data captured in the vertical image plane from the vertical image plane to the horizontal image plane.
The method may include controlling a driving assistance system of a motor vehicle based on the image data in the horizontal image plane.
The above method offers a number of advantages over the conventional methods described at the beginning. In detail, since it is often not possible to use integral images when quantizing models for the reasons mentioned above, the method can be used to create feature maps at multiple resolutions and thus replace the summed feature values within variable-size regions with pixel accesses in these feature maps. Pixel accesses to lower resolutions can thus be used to obtain the values of the closer parts or BEV cells (since the area of the cells projected into the horizontal plane is comparatively large), and higher resolutions can be used to obtain the values of the more distant BEV cells (since the area of the cells projected into the horizontal plane is comparatively small). In this way, the problem of integral image computation can be reformulated into a problem of image resizing and pixel access, which are handled by the available quantizable operations. This greatly facilitates the application in vehicles.
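Purely by way of illustration, one possible realization of this idea may be sketched as follows; the pooling factors, distance thresholds and helper functions are illustrative assumptions and do not restrict the method described above.

import numpy as np

def average_pool(f, k):
    # Reduce the resolution by an integer factor k using average pooling,
    # so that one pixel of the pooled map summarizes a k x k image region.
    H, W, n = f.shape
    f = f[:H - H % k, :W - W % k]
    return f.reshape(H // k, k, W // k, k, n).mean(axis=(1, 3))

def build_pyramid(f, factors=(1, 2, 4)):
    # Feature maps at several resolutions (the factors are illustrative).
    return {k: average_pool(f, k) if k > 1 else f for k in factors}

def bev_cell_feature(pyramid, u, v, distance, near=10.0, far=30.0):
    # Single pixel access per BEV cell: cells close to the camera project
    # onto large image regions and are read from a coarse map, distant
    # cells from a fine map. The distance thresholds are illustrative.
    k = 4 if distance < near else (2 if distance < far else 1)
    g = pyramid[k]
    return g[min(v // k, g.shape[0] - 1), min(u // k, g.shape[1] - 1)]

For each part of the image data, a single lookup of this kind replaces the four integral-image accesses and the division, using only operations that remain well-behaved under quantization.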
Furthermore, a data processing device, e.g. comprising a control unit for a motor vehicle, is provided, wherein the data processing device is adapted to at least partially execute the method described above.
The control unit may be configured to control automated driving of a motor vehicle based on the image data obtained with the method in the horizontal image plane. This may include path planning or determining a trajectory to be followed by the motor vehicle.
The control unit can be part of or represent a driving assistance system. The control unit can, for example, be an electronic control unit (ECU). The electronic control unit can be an intelligent processor-controlled unit that can communicate with other modules, e.g. via a central gateway (CGW), and that can form the vehicle electrical system, e.g. together with telematics control units, via fieldbuses such as the CAN bus, LIN bus, MOST bus and FlexRay or via automotive Ethernet. It is conceivable that the control unit controls functions relevant to the driving behavior of the motor vehicle, such as the engine control system, the power transmission, the braking system and/or the tire pressure control system. In addition, driver assistance systems such as a parking assistant, adaptive cruise control (ACC), a lane departure warning system, a lane change assistant, traffic sign recognition, light signal recognition, a start-up assistant, a night vision assistant and/or an intersection assistant can be controlled by the control unit.
Furthermore, a motor vehicle comprising the data processing device described above is provided. The motor vehicle can have a camera, in particular a front camera, which is connected to the data processing device and outputs image data recorded by the camera in a vertical image plane.
The motor vehicle may be a passenger car, especially an automobile, or a commercial vehicle, such as a truck.
The motor vehicle can be designed to take over longitudinal guidance and/or lateral guidance at least partially and/or at least temporarily during automated driving of the motor vehicle. Automated driving can be carried out in such a way that the movement of the motor vehicle is (largely) autonomous.
The motor vehicle can be an autonomy level 0 motor vehicle, i.e. the driver takes over the dynamic driving task, even if supporting systems (e.g. ABS or ESP) are present.
The motor vehicle can be an autonomy level 1 motor vehicle, i.e., have certain driver assistance systems that support the driver in operating the vehicle, such as adaptive cruise control (ACC).
The motor vehicle can be an autonomy level 2 vehicle, i.e. it can be partially automated in such a way that functions such as automatic parking, lane keeping or lateral guidance, general longitudinal guidance, acceleration and/or braking are performed by driver assistance systems.
The motor vehicle can be an autonomy level 3 motor vehicle, i.e., conditionally automated in such a way that the driver does not have to monitor the vehicle system at all times. The motor vehicle independently performs functions such as triggering the turn signal, changing lanes and/or keeping in lane. The driver can turn his attention to other things, but is prompted by the system to take over control within a warning time if necessary.
The motor vehicle can be an autonomy level 4 motor vehicle, i.e., so highly automated that the driving of the vehicle is permanently taken over by the vehicle system. If the system can no longer handle the driving tasks, the driver can be asked to take over.
The motor vehicle may be an autonomy level 5 motor vehicle, i.e., so fully automated that the driver is not required to perform the driving task. No human intervention is required other than setting the destination and starting the system. The motor vehicle can operate without a steering wheel or pedals.
What has been described above with reference to the method and the data processing device applies analogously to the motor vehicle and vice versa.
Further provided is a computer program comprising instructions which, when the program is executed by a computer, cause the computer to at least partially execute the method described above.
A program code of the computer program may be in any code, especially in a code suitable for control systems of motor vehicles.
What has been described above with reference to the method and the motor vehicle also applies analogously to the computer program and vice versa.
Further provided is a computer-readable medium, in particular a computer-readable storage medium. The computer-readable medium comprises instructions which, when the instructions are executed by a computer, cause the computer to at least partially execute the method described above.
That is, a computer-readable medium comprising a computer program defined above may be provided.
The computer-readable medium can be any digital data storage device, such as a USB flash drive, hard drive, CD-ROM, SD card, or SSD card.
The computer program does not necessarily have to be stored on such a computer-readable storage medium in order to be made available to the motor vehicle, but may also be obtained externally via the Internet or otherwise.
What has been described above with reference to the method, the data processing device, the motor vehicle and the computer program also applies analogously to the computer-readable medium and vice versa.
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of one or more preferred embodiments when considered in conjunction with the accompanying drawings.
As can be seen from
In a first step S1 of the method, a feature map 4 is extracted from the image data 21 recorded by the camera 2 in a vertical image plane by a feature extractor 31 executed by the control unit 3.
In a second step S2 of the method, a resolution of the extracted feature map 4 is adjusted by the control unit 3 so that two feature maps 5, 6 with reduced resolution are obtained. The resolution used gradually increases as the distance in the horizontal plane between the camera 2 and the partial area 41, 51, 61 of the image data 7 increases. For example, one of these two feature maps 5 may have a resolution of 66% of the original resolution of feature map 4, and the other of the two feature maps 6 may have a resolution of 50% of the original resolution of feature map 4.
In a third step S3 of the method, the control unit 3 selects one of the feature maps 4-6 as a function of a position of a part or a partial area 41, 51, 61 of the image data in a horizontal image plane 7 relative to the camera 2. For this purpose, an (optionally static) assignment of the (or all, only three being shown by way of example in
In a fourth step S4 of the method, the control unit 3 transforms the respective partial area 41, 51, 61 of the image data from the vertical image plane to the horizontal image plane 7 based on the respective selected feature map 4-6.
The third and fourth steps S3, S4 of the method are thereby carried out for a section 8 of the image data 21 recorded by the camera 2 in a vertical image plane until a substantially true-to-scale image of the section 8 in the horizontal plane 7 is obtained.
The resolution of the feature maps 4-6 is selected in such a way that only a single pixel access is required to transform the respective partial area 41, 51, 61 of the image data 21 from the vertical image plane to the horizontal image plane.
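Purely as an illustration of steps S2 to S4, a possible realization may be sketched as follows; only the resolutions of 66% and 50%, the optionally static assignment and the single pixel access are taken from the description above, whereas the nearest-neighbour resizing and the form of the assignment table are assumptions made for the sake of the example.

import numpy as np

def resize_nearest(f, scale):
    # Nearest-neighbour resize as a stand-in for any quantizable resizing
    # operation; it derives feature maps 5 and 6 from feature map 4.
    H, W, _ = f.shape
    ys = (np.arange(int(H * scale)) / scale).astype(int)
    xs = (np.arange(int(W * scale)) / scale).astype(int)
    return f[ys][:, xs]

def make_feature_maps(feature_map_4):
    # Step S2: two additional resolutions (66% and 50% of the original).
    return {4: feature_map_4,
            5: resize_nearest(feature_map_4, 0.66),
            6: resize_nearest(feature_map_4, 0.50)}

def bev_feature(maps, assignment, cell):
    # Steps S3 and S4: a static table assigns each partial area 41, 51, 61
    # to one feature map and one pixel position in it, precomputed from the
    # camera geometry (not shown); a single pixel access yields the value.
    map_id, v, u = assignment[cell]
    return maps[map_id][v, u]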
In an optional fifth step S5 of the method, the control unit 3 controls a driving assistance system (not shown) of the motor vehicle 1 based on the image data 7 obtained in the horizontal image plane by steps S1-S4.
The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.