The invention relates to a method for obstacle detection for a rail vehicle. The invention also relates to an obstacle detection device. In addition, the invention relates to a rail vehicle.
In the context of rail traffic, it occasionally occurs that objects such as e.g. people or road vehicles, shopping carts that have been thrown onto the rails, or even lumps of rock or fallen trees, are found on the track and therefore represent a threat to the safety of the rail traffic and, in the case of people and road vehicles, are themselves seriously endangered by the possibility of a collision with a moving rail vehicle. Furthermore, objects such as e.g. scree or mud which are not so easily identifiable and in particular cannot easily be assigned to an object class may be found on the rails or in a monitoring region of a rail vehicle. Some form of generic obstacle detection is therefore necessary in order to recognize such objects as obstacles in a timely manner, even if they cannot be directly identified, so that a braking operation can be initiated for an approaching rail vehicle and a collision between the rail vehicle and such an object can be prevented. One of the critical tasks of a locomotive engineer is consequently to observe the rail region continuously for possible obstacles or objects which could collide with the rail vehicle and to decide whether a reaction thereto, e.g. a braking maneuver, must be initiated. To this end, the locomotive engineer monitors not only the actual rail region on which the train is travelling, but also adjacent rail strings. If an obstacle is detected on an adjacent stretch of track, the driver notifies a monitoring center which forewarns other trains. This notification is particularly important in regions in which the view of the rails is impeded, e.g. due to curves, vegetation, buildings or walls, and the trains do not have enough braking distance to come to a standstill before the obstacle due to the inevitably late detection thereof.
Since the braking distance of a rail vehicle before coming to a standstill is in the kilometer range for high-speed trains, and a possible collision object must also be detected in darkness or under any sort of weather conditions, the sensory capabilities of the driver are often not sufficient for this task, even on straight sections. Therefore provision must additionally be made for a technical device by means of which an obstacle can also be detected under poor conditions of visibility and at greater distances.
In the case of driverless vehicle control, it is particularly important to have a system which automatically detects possible obstacles correctly and can decide whether they could represent a threat to the vehicle itself or to other road and rail users, since it is by definition not possible for a driver to monitor the stretch of track.
Automatic detection systems for potential obstacles have already been studied and developed experimentally. Infrastructure-based solutions in which a multiplicity of sensors are installed in order to monitor a specific region of a stretch of track are sometimes used to monitor regions in which accidents frequently occur, e.g. railroad crossings. It is however not feasible to equip the entire rail network with infrastructure-based obstacle detection systems. Therefore the requirement exists for an on-board system to perform the task of a driver to observe and monitor the stretch of track.
Most on-board systems use cameras in the form of monoscopic cameras or stereoscopic cameras. On the basis of the recorded images, the tracks are identified without the additional use of a map and solely by means of image analysis, this being based either on a conventional algorithm which functions in a similar manner to road marking detection, or on an approach that is based on machine learning. However, most of these approaches suffer from limited reliability in certain situations in which the visibility ahead and particularly in a monitoring region of a rail vehicle is limited.
For the purpose of approaches based on monoscopic cameras, use is generally made of an algorithm based on machine learning, said algorithm being applied in order to detect the relevant obstacles in the surroundings of the rail vehicle. The lower edge of the bounding box of the detected object can then be compared with the detected course of the line, in order to determine whether the object represents a possible collision obstacle. The interval between the object and the camera is usually determined on the basis of a flat-earth model. However, this has a limitation in that the detection is limited to classes of objects for which the algorithm has been trained, e.g. for vehicles and people only, and other objects are not detected or are ignored by the algorithm. Therefore generic obstacle detection is not possible using an approach which is based on machine learning. Safety certification for approaches based solely on machine learning has not been conceivable until now, since said approaches rely on random statistical methods and therefore do not satisfy the requirements of a safety certification.
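Purely by way of illustration, the following sketch shows how such a flat-earth estimate of the interval between the object and the camera could be obtained from the image row of the lower edge of the bounding box; the function name, camera parameters and numerical values are assumptions made for this example only and are not taken from any particular existing system.

```python
import math

def flat_earth_distance(v_bottom: float, img_height: int,
                        focal_px: float, cam_height_m: float,
                        cam_pitch_rad: float = 0.0) -> float:
    """Estimate the ground distance to an object from the image row of the
    lower edge of its bounding box, assuming a flat ground plane.

    v_bottom      -- pixel row of the bounding-box lower edge (0 = top of image)
    img_height    -- image height in pixels
    focal_px      -- focal length expressed in pixels
    cam_height_m  -- mounting height of the camera above the rail plane
    cam_pitch_rad -- downward pitch of the optical axis (0 = horizontal)
    """
    # Angle below the optical axis at which the bounding-box lower edge is seen.
    v_center = img_height / 2.0
    angle_below_axis = math.atan2(v_bottom - v_center, focal_px)
    # Total depression angle of the viewing ray relative to the horizontal.
    depression = cam_pitch_rad + angle_below_axis
    if depression <= 0.0:
        return float("inf")  # ray does not intersect the ground plane ahead
    return cam_height_m / math.tan(depression)

# Example: camera 2.5 m above the rails, 1000 px focal length, 1080 px image,
# bounding box ending 60 px below the image centre.
print(round(flat_earth_distance(600, 1080, 1000.0, 2.5), 1))
```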
In the case of approaches based on stereoscopic cameras, the images are converted into a depth image or into a point cloud, allowing generic objects to be detected. The 3D position of the detected object is compared with a detected course of the line in three dimensions in order to determine whether the object represents a potential danger or a potential obstacle.
Camera-based approaches have the disadvantage that they require suitable light conditions in order to function. At night in particular, RGB cameras are unable to provide usable results. Therefore alternative camera technologies are usually applied, e.g. thermal imaging cameras, in order to improve the performance of the system under all possible light conditions.
Active sensors such as e.g. lidar or radar systems are already in use. Radar sensors function under most weather conditions, but suffer from limited accuracy and resolution in comparison with lidar systems. Lidar-based solutions are likewise already used. However, these solutions do not function for object detection if the tracks are not visible, whether this is due to the course of the line or due to external visibility restrictions such as e.g. weather conditions, snow coverage, etc. In both cases, the 3D information supplied by the sensors is compared with the course of the line that has been detected by means of camera images, in order to determine whether obstacles are present.
One of the challenges faced by existing systems is reliably to detect the position of the stretch of track under all conditions. Camera-based detection of the course of the line functions well for straight sections and gentle curves, but relies on the rails being visible. Lidar-based rail detection functions on the basis of the highly reflective metallic surface of the rails and only at a short distance from a rail vehicle. It is moreover likewise dependent on the sensor being able to capture the course of the line.
Since optical rail detection only functions reliably for line sections with visible rails, e.g. approximately straight stretches of track, it has been proposed to deploy the on-board system only for stretches of track where the rails can be detected to a satisfactory degree, and to deploy approaches which are infrastructure-based or airborne-based in order to cover the other regions of the rail network.
The position of the rail vehicle can be estimated by means of location-finding algorithms and on the basis of a combination of sensors, e.g. GNSS, IMU, radar, odometry, balises, etc.
In other existing location-finding approaches, use is made of landmarks which are detected by cameras, lidar systems or radar systems in order to improve the location-finding. This option is useful when the GNSS coverage is poor. Landmarks can take the form of e.g. parts of the fixed rail infrastructure such as e.g. signals, signs or sets of points.
The implementation of map-based approaches, by means of which it is theoretically possible to increase the accuracy of position-finding, has failed in the past because the determination of the position, and in particular the orientation, of the rail vehicle relative to the map must be extremely accurate in order to determine whether a detected object is located on the stretch of track. For example, a slight error of e.g. 0.1 percent in the determination of the orientation of the rail vehicle results in a positional error or a positional deviation of 1 m at a range of 600 m.
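The sensitivity quoted here can be checked with the small-angle relation between an angular error and the resulting lateral offset (the symbols below are introduced only for this illustration):

```latex
\Delta y \approx d \cdot \Delta\varphi
\quad\Rightarrow\quad
\Delta\varphi \approx \frac{\Delta y}{d}
= \frac{1\,\mathrm{m}}{600\,\mathrm{m}}
\approx 1.7\,\mathrm{mrad} \approx 0.1^{\circ}
```

At a range of 600 m, a lateral deviation of 1 m thus corresponds to an angular error of only about 1.7 mrad, i.e. roughly one tenth of a degree.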
The object is therefore to provide a method and an apparatus for obstacle detection for rail vehicles which offer greater reliability, resilience and precision than previously proposed solutions.
This object is achieved by a method for obstacle detection for a rail vehicle according to claim 1, an obstacle detection device according to claim 10, and a rail vehicle according to claim 11.
According to the inventive method for obstacle detection for a rail vehicle, a monitoring region in front of the rail vehicle is determined as a function of a position and a course of a track on which the rail vehicle is travelling and a position and dimensions of a monitoring profile of the rail vehicle. The monitoring profile comprises a clearance gage of the rail vehicle. The clearance gage comprises a maximum vehicle cross section or a maximum vehicle gage as a function of the proper motion of the rail vehicle, which in turn depends on the curve radius and the true speed. The monitoring profile can also comprise an extended clearance gage, which includes e.g. a predefined safety zone around the actual core region or the actual clearance gage. The monitoring profile can also be divided into zones at different distances around the rail vehicle. For example, the clearance gage is surrounded by a so-called emergency profile and the emergency profile by a warning profile. According to the region into which an object intrudes, provision can be made for incremental countermeasures such as e.g. the output of warning signals, regular braking or emergency braking. The course of the track is understood to be the trajectory of the track and the spatial or geographical course thereof.
For the purpose of determining the monitoring region, sensor data for determining the position and orientation of the rail vehicle is captured and the position and spatial course of the track on which the rail vehicle is travelling are determined on the basis of an alignment of the determined position and orientation of the rail vehicle with digital map data. The captured sensor data is therefore used for auto-location-finding of the rail vehicle. If the position and orientation of the rail vehicle are known with sufficient precision, a position of the track section that the rail vehicle will be travelling on and a position of the monitoring profile of the rail vehicle can also be determined therefrom. If e.g. the position or orientation of the rail vehicle captured by the sensor data is only imprecisely known due to possible variations in the sensor data, this imprecise data can be aligned with the digital map data. If e.g. a position of the rail vehicle does not lie precisely on a track, the position can be corrected on the basis that the rail vehicle must be located on the track. The measured orientation of the rail vehicle can likewise vary from an orientation of a track section on which the rail vehicle, according to its sensor-based position data, must be located. In this case, as explained in detail below, information which is captured by on-board sensors, in particular positional information relating to stationary objects in the environment of the rail vehicle, e.g. landmarks or the rail section in front of the rail vehicle, can be aligned with map data in order thus to reduce inaccuracies when determining the position and the orientation of the rail vehicle.
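As a minimal illustration of this alignment step, the following sketch projects a noisy position estimate onto the nearest point of the mapped track centreline and takes the local track direction as the corrected heading; the map representation, function names and values are assumptions made for this example.

```python
import numpy as np

def snap_pose_to_track(position_xy, track_polyline_xy):
    """Align a noisy position/orientation estimate with the mapped track.

    position_xy       -- estimated vehicle position, shape (2,)
    track_polyline_xy -- mapped track centreline as ordered vertices, shape (N, 2)

    Returns the position projected onto the nearest track segment and the
    heading of that segment (the vehicle is constrained to lie on the track).
    """
    p = np.asarray(position_xy, dtype=float)
    pts = np.asarray(track_polyline_xy, dtype=float)
    a, b = pts[:-1], pts[1:]                    # segment start / end points
    ab = b - a
    seg_len_sq = np.einsum("ij,ij->i", ab, ab)
    # Parameter of the orthogonal projection onto each segment, clamped to [0, 1].
    t = np.clip(np.einsum("ij,ij->i", p - a, ab) / np.maximum(seg_len_sq, 1e-12),
                0.0, 1.0)
    proj = a + t[:, None] * ab
    dists = np.linalg.norm(proj - p, axis=1)
    i = int(np.argmin(dists))
    corrected_position = proj[i]
    corrected_heading = float(np.arctan2(ab[i, 1], ab[i, 0]))
    return corrected_position, corrected_heading

# Example: a GNSS fix lying 1.5 m beside a straight track section.
track = [(0.0, 0.0), (50.0, 0.0), (100.0, 5.0)]
pos, heading = snap_pose_to_track((30.0, 1.5), track)
print(pos, round(np.degrees(heading), 2))
```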
Furthermore, objects in an environment of the rail vehicle are captured by sensors of the rail vehicle. The sensors for capturing the environment are designed to provide images, i.e. they must depict the environment and offer sufficient resolution for obstacles to be captured and detected as such. The images can be realized as 2D images or 3D images. Specific embodiments are explained in detail below. Provision is also made for determining which objects are potential collision obstacles by checking whether and to what extent they overlap with the monitoring region.
Finally, a reaction or response to those objects categorized as potential obstacles is determined as a function of geometric and kinetic object attributes. The check whether and to what extent the captured objects overlap with the monitoring region, and the determination of the object attributes, can take place both as part of the object capture and after subsequent tracking of the detected objects. By means of said tracking, a situational evaluation can be e.g. confirmed or revised and the intersection of the tracked objects with the monitoring region can be determined more precisely or a change of the intersection can be captured on the basis of the dynamic proper motion of the objects.
The reaction or response is usually determined by specifications in the rail region, which specify e.g. minimum object sizes, overlap areas, etc. that must be reached in order to trigger a braking operation. The typical procedure is to calculate the service braking distance and the emergency braking distance of the rail vehicle as a function of the current speed of the rail vehicle and other information relating to the rail vehicle such as e.g. its weight and the incline or the gradient of the line section ahead. Objects of sufficient size and having a sufficiently large area of overlap with the monitoring region are then treated as collision obstacles and a response or reaction is determined on the basis of the interval to the collision obstacle and the braking distance of the rail vehicle. If e.g. the interval to a collision obstacle is less than the service braking distance, the emergency brake can be activated. In addition, an acoustic warning can be transmitted to the collision obstacle by means of the signal hooter of the rail vehicle. If the interval to the collision obstacle is in a range in which a standard braking maneuver is initiated, it is possible either to initiate such a braking maneuver or, if the interval to the collision obstacle is greater than the service braking distance by a predefined threshold value, to refrain from a braking maneuver initially and merely activate the signal hooter. The nature of the response can be modified on the basis of specific partial volumes with which the collision obstacle overlaps in the monitoring region, e.g. a warning volume or an emergency volume, and the detected properties of collision obstacles.
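The following sketch illustrates how such a graded response could be derived by comparing the interval to the obstacle with the service and emergency braking distances; the deceleration values, threshold and response names are illustrative assumptions, and the idealised braking-distance formula deliberately ignores gradient and train mass.

```python
def braking_distance_m(speed_mps: float, deceleration_mps2: float) -> float:
    """Idealised stopping distance v^2 / (2a); gradient and train mass are ignored."""
    return speed_mps ** 2 / (2.0 * deceleration_mps2)

def select_response(distance_to_obstacle_m: float, speed_mps: float,
                    service_decel: float = 0.7, emergency_decel: float = 1.2,
                    horn_margin: float = 1.5) -> str:
    """Grade the reaction by comparing the interval to the obstacle with the
    service and emergency braking distances of the rail vehicle."""
    d_service = braking_distance_m(speed_mps, service_decel)
    d_emergency = braking_distance_m(speed_mps, emergency_decel)
    if distance_to_obstacle_m <= d_service:
        # Closer than the service braking distance: emergency brake plus horn.
        return "emergency_braking_and_warning_signal"
    if distance_to_obstacle_m <= horn_margin * d_service:
        return "service_braking"
    # Far beyond the service braking distance: warn first, brake only if needed.
    return "warning_signal_only"

# Example: 100 km/h (~27.8 m/s) and an obstacle 500 m ahead.
print(select_response(500.0, 27.8))
```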
In addition to generating a response of the rail vehicle, the obstacle information can also be transmitted to a remote operations control center in order to draw general attention to the obstacle. This is beneficial if e.g. the adjacent track is also being monitored and other rail vehicles can be warned of possible collision obstacles which are not currently in their monitoring region. The operations control center can also issue additional instructions in situations in which an automatic rail vehicle is not allowed to make a decision independently. For example, a rail vehicle can send sensor data to the operations control center after coming to a halt in front of a detected collision obstacle. A person assesses the collision obstacle properly there and decides whether the rail vehicle can run over the collision obstacle without danger, or whether the collision obstacle has been moved aside sufficiently that the rail vehicle can continue onwards. If the collision obstacle in front of the rail vehicle must be removed before the rail vehicle can continue its journey, the sensor data transmitted to the operations control center can enable a human operator to establish which device could be used to clear the collision obstacle. Such a device may comprise e.g. a cutting tool for a fallen tree or a piece of construction machinery for a rock.
If on-board obstacle detection and on-board handling of the collision obstacle are not sufficient, the inventive method can be supplemented by the use of infrastructure-based obstacle detection systems. Since these systems are static, they can easily monitor a section of a rail string for collision obstacles. Such systems can advantageously be installed in regions with poor visibility such as e.g. regions with blind spots and regions with a high risk of collision such as e.g. railway stations. They can also be used to monitor the entrances of tunnels. In this case, they can not only monitor whether a potential collision obstacle is present on the line section but also whether an object such as e.g. a person or an animal has entered a tunnel, and can therefore transmit a warning that a collision obstacle might be present in the tunnel. This information can then be used to influence the strategy of a rail vehicle, e.g. to trigger a braking maneuver or select a lower speed for the rail vehicle.
As a result of the combined use of sensor data and map data, a collision danger is advantageously determined with greater precision and greater reliability than is possible using conventional procedures, thereby improving the safety and the effectiveness of automatic rail traffic. In particular, by virtue of the map-based approach and the consequently increased precision, the range of the collision obstacle detection is increased since this depends in particular on the accuracy of the measurement of the orientation of the rail vehicle and the orientation of the sensors that are used to detect the collision obstacle. Furthermore, this way of determining the course of the line can also be realized for line sections which are not straight or have reduced visibility of the course of the line, e.g. due to snow coverage or limited conditions of visibility. Furthermore, the inventive method allows the detection of generic objects on the basis of 3D data. This is advantageous in particular over methods which are based on machine learning and by means of which only specific classes of objects can therefore be detected and identified.
The inventive obstacle detection device has a monitoring region determination unit for determining a monitoring region in front of a rail vehicle. The determination of the monitoring region, in particular the position and extent thereof, is effected as a function of a position and a course of a track on which the rail vehicle is travelling, and a position and dimensions of a monitoring profile of the rail vehicle. As mentioned above, the determination of the monitoring region is effected by means of capturing sensor data, e.g. GNSS data, and determining the position and orientation of the rail vehicle on the basis of this sensor data. The position and the course of the track on which the rail vehicle is travelling are then determined on the basis of an alignment of the determined position and orientation of the rail vehicle with digital map data.
The inventive obstacle detection device also includes a sensor unit for the sensory capture of objects in the environment of the rail vehicle. Any suitable sensors for capturing, locating and possibly identifying objects can be used as sensor units, e.g. cameras, lidar systems or radar systems.
The inventive obstacle detection device further comprises an obstacle determination unit for determining which objects are possible collision obstacles, specifically by checking whether and to what extent these objects overlap with the monitoring region. In addition, the inventive obstacle detection device has a response determination unit for determining a reaction to the objects that are categorized as potential collision obstacles. The determination of a reaction is effected as a function of the previously cited geometric and kinetic attributes of the objects. The inventive obstacle detection device has the same advantages as the inventive method for obstacle detection for a rail vehicle.
The inventive rail vehicle has the inventive obstacle detection device and a control device for controlling a driving strategy of the rail vehicle as a function of a reaction which is determined by the obstacle detection device. The inventive rail vehicle has the same advantages as the inventive obstacle detection device.
Some components of the inventive obstacle detection device can, possibly after adding hardware systems such as e.g. a sensor unit, be embodied primarily in the form of software components. This relates in particular to the monitoring region determination unit, the obstacle determination unit and the response determination unit.
However, these components can in principle also be realized partly in the form of programmable hardware, e.g. FPGAs or similar, especially when particularly high-speed calculations are required. Likewise, the required interfaces can take the form of software interfaces, e.g. when only a transfer of data from other software components is required. However, they can also take the form of hardware-based interfaces which are activated by suitable software.
A largely software-based realization has the advantage that computing systems which are already present in a rail vehicle can also be upgraded easily by means of a software update after possibly adding further hardware elements, e.g. additional sensor units, in order to operate in the inventive manner. In this respect, the object of the invention is also achieved by a corresponding computer program product comprising a computer program which can be loaded directly into a storage device of such a computing system, said computer program having program sections for executing the inventive method steps that can be realized by means of software when the computer program is executed in the computing system.
Such a computer program product can, in addition to the computer program, optionally comprise additional elements such as e.g. documentation and/or additional components including hardware components such as e.g. hardware keys (dongles, etc.) for using the software.
For the purpose of transportation to the storage device of the computing system and/or for the purpose of storage on the computing system, it is possible to use a computer-readable medium, e.g. a memory stick, hard disc or other transportable or permanently installed data medium, on which are stored the program sections of the computer program that can be read in and executed by a computing unit. For this purpose, the computing unit can have e.g. one or more interworking microprocessors or similar.
The dependent claims and the following description each contain particularly advantageous embodiments and developments of the invention. In this case, the claims in one statutory class of claim can be developed in particular in a similar way to the dependent claims of another statutory class of claim and the specification parts thereof. Moreover, in the context of the invention, the various features of different exemplary embodiments and claims can also be combined to form further exemplary embodiments.
In the inventive method for obstacle detection for a rail vehicle, in order to achieve even greater precision when locating obstacles, provision is made for calibrating the position measurement and orientation measurement of the rail vehicle by determining positional data relating to landmarks or other distinctive features in an environment of the rail vehicle on the basis of sensor data from the environment. The positional data that is determined in relation to the landmarks is then compared with the corresponding positional data relating to the landmarks in the digital map data. If the values of the positional data based on the sensor data and on the digital map data differ from each other, this variation is used as a calibration value in order to make an adjustment to the position determination and orientation determination, i.e. the auto-location-finding of the rail vehicle and the orientation measurement thereof, with reference to the sensor data.
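One standard way of obtaining such a calibration value is a least-squares fit of a rigid transform between the landmark positions measured by the on-board sensors and the corresponding positions in the digital map; the following 2D sketch (function names and numerical values are assumptions for this example, and a Kabsch/Procrustes-style fit is only one possible choice) illustrates this.

```python
import numpy as np

def landmark_pose_correction(measured_xy, mapped_xy):
    """Estimate the rigid 2D transform (rotation, translation) that maps the
    landmark positions measured by the on-board sensors onto their positions
    in the digital map (least-squares fit over all landmark pairs).

    The result can be applied as a calibration value to the position and
    orientation estimate of the rail vehicle.
    """
    p = np.asarray(measured_xy, dtype=float)
    q = np.asarray(mapped_xy, dtype=float)
    p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)
    # Optimal rotation from the cross-covariance (2D Kabsch / Procrustes).
    h = p_c.T @ q_c
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = q.mean(axis=0) - r @ p.mean(axis=0)
    heading_correction = float(np.arctan2(r[1, 0], r[0, 0]))
    return r, t, heading_correction

# Example: landmarks measured 0.5 deg rotated and about 1 m offset from the map.
mapped = np.array([[100.0, 5.0], [180.0, -4.0], [260.0, 8.0]])
angle = np.radians(-0.5)
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
measured = (mapped @ rot.T) + np.array([1.0, 0.3])
_, _, dpsi = landmark_pose_correction(measured, mapped)
print(round(np.degrees(dpsi), 3))   # recovered heading correction, ~ +0.5 deg
```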
The precision of the position and orientation of the rail vehicle is advantageously improved further. The accuracy is particularly important when determining the position and orientation of the rail vehicle because, at greater distance, even a small inaccuracy when determining this data can cause a significant variation in the specification of a position of a possible collision obstacle.
The attributes of the objects preferably comprise at least one of the following attribute types: an object is characterized e.g. by its approximate dimensions and area, or by information indicating which partial volume it overlaps, or by the interval from the rail vehicle to the object, or by the current speed of the rail vehicle. The monitoring profile, in particular the extended clearance gage of the rail vehicle, can comprise e.g. a plurality of differently dimensioned partial profiles which are nested one within the other and are each assigned a different danger level. According to the partial profile, or the partial volume assigned to the respective partial profile, into which the detected object has already intruded, and as a function of the distance of the object from the rail vehicle, an incremental reaction of the rail vehicle can be triggered. For example, according to the speed of the rail vehicle and of the object, and as a function of the position of the object and the distance of the object from the rail vehicle, a warning signal can be emitted initially and, if the object intrudes into an inner volume or if the object moves closer to the rail vehicle, or in the case of a corresponding speed and direction of motion of the object and/or of the rail vehicle, a braking maneuver can be triggered.
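A compact sketch of how such nested partial profiles with different danger levels could be evaluated is given below; the zone names, half-widths and the mapping to reactions are purely illustrative assumptions.

```python
# Partial profiles of the monitoring profile, ordered from innermost (highest
# danger level) to outermost; half-widths are lateral distances from the track
# axis and are purely illustrative values.
PARTIAL_PROFILES = [
    ("clearance_gage", 1.8, "emergency_braking"),
    ("emergency_profile", 3.0, "service_braking"),
    ("warning_profile", 5.0, "warning_signal"),
]

def incremental_reaction(lateral_offset_m: float, distance_along_track_m: float,
                         service_braking_distance_m: float) -> str:
    """Pick a reaction from the innermost partial profile the object intrudes
    into, escalating if the object is already within the service braking distance."""
    for name, half_width, reaction in PARTIAL_PROFILES:
        if abs(lateral_offset_m) <= half_width:
            if distance_along_track_m <= service_braking_distance_m:
                return "emergency_braking"   # too close for a graded response
            return reaction
    return "no_reaction"

# Example: object 2.4 m beside the track axis, 900 m ahead, 600 m braking distance.
print(incremental_reaction(2.4, 900.0, 600.0))   # -> "service_braking"
```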
As mentioned above, in a step of the inventive method, a monitoring region is determined on the basis of a digital map or land map, and the position and orientation of the rail vehicle are determined in the digital map, preferably on the basis of a combination of sensor data. Sensor data which can return information about the environment of the rail vehicle, as well as intervals and orientations of objects relative to the rail vehicle, is advantageously combined with map data in order thereby to determine the position and orientation of the rail vehicle. If the rail vehicle is situated at a specific position relative to a position indicated in the digital map, the position of the rail vehicle can then be specified exactly.
For example, an estimated position and estimated orientation for the rail vehicle can initially be determined with the aid of sensor data which includes a degree of inaccuracy. Such sensor data preferably comprises at least one of the following data types:
In an embodiment of the inventive method for obstacle detection for a rail vehicle, the feature data comprises at least one of the following data types:
A known position of landmarks or the track can advantageously be used to determine a variation of a position measurement and/or orientation measurement by the sensors and to undertake a corresponding adjustment.
In the inventive method for obstacle detection for a rail vehicle, various sensor data or sensor data of various types is advantageously combined in order to determine the position and orientation of the rail vehicle in the digital map. A multisensor fusion method is therefore applied. By virtue of combining sensor data from different sensors, it is usually possible to reduce or compensate measurement inaccuracies of individual sensor types by means of averaging and/or a combination of sensor data which is suitably weighted for current environmental conditions, such that reliability is further improved during the obstacle detection.
In a variant of the inventive method for obstacle detection for a rail vehicle, said combination comprises filtering by means of a Kalman filter or particle filter and/or graph-based methods and/or sensor-specific filtering. In this case, the state of the rail vehicle is preferably restricted to its movement along the track. The accuracy of the position-finding and the determination of the orientation of the rail vehicle can thus be further improved. The position of the rail vehicle is restricted by the movement of the contact points, i.e. the wheels, along the track. The orientation of the fixed vehicle body can, due to the mounting on rotatable running gear and as a result of the spring suspension, vary to some extent from the orientation of the track at the contact points.
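As a minimal illustration of such a filter with the state restricted to the track, the following sketch implements a one-dimensional Kalman filter over the arc-length position and speed of the rail vehicle along the track, with a map-projected GNSS fix as the measurement; all noise parameters, the constant-velocity model and the names are assumptions made for this example.

```python
import numpy as np

class AlongTrackKalman:
    """1D Kalman filter with state [s, v]: arc length along the track and speed.
    Restricting the state to motion along the track is the constraint mentioned
    above; all noise parameters here are illustrative."""

    def __init__(self, s0=0.0, v0=0.0):
        self.x = np.array([s0, v0], dtype=float)        # state estimate
        self.P = np.diag([25.0, 4.0])                    # initial uncertainty
        self.Q = np.diag([0.5, 0.2])                     # process noise
        self.R = np.array([[9.0]])                       # GNSS measurement noise (m^2)
        self.H = np.array([[1.0, 0.0]])                  # only the position is measured

    def predict(self, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, s_measured):
        y = np.array([s_measured]) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + (K @ y)
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = AlongTrackKalman(s0=0.0, v0=20.0)
for step in range(5):
    kf.predict(dt=1.0)
    # GNSS fix projected onto the track (arc length), here with synthetic noise.
    kf.update(s_measured=20.0 * (step + 1) + np.random.normal(0.0, 3.0))
print(kf.x)   # estimated arc-length position and speed along the track
```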
In the inventive method for obstacle detection for a rail vehicle, provision is preferably also made for individual sensors to be calibrated with the digital map in order to correct effects of vibration or drift in the extrinsic calibration. In this case, the initially estimated position and orientation of the rail vehicle or the sensor can be used as a starting point, and the position and orientation of the sensor in the map can then be made more precise with reference to landmarks and/or other features that have been detected in the environment of the rail vehicle. For example, landmarks can be used to specify more accurately the position and orientation of the sensor, while the detected rail string can be used to refine the determination of the orientation of the sensor.
In the inventive method for obstacle detection for a rail vehicle, the calibration therefore also preferably comprises the following steps:
In the inventive method for obstacle detection for a rail vehicle, the step of the sensory capture of objects in the environment of the rail vehicle most preferably comprises determining a position and a size of the objects in the environment on the basis of sensor data. This data can advantageously be used for the purpose of a danger assessment and for the purpose of determining an adequate reaction to the obstacle that is present.
In the inventive method for obstacle detection for a rail vehicle, as part of the step for determining which objects represent possible collision obstacles, provision is also preferably made for initially generating a point cloud with object points on the basis of the sensor data that has been captured from the object. A point cloud can be generated in particular from raster-type scanning of the environment. Such an option is applied e.g. when a lidar system is deployed for the purpose of object detection. The point cloud is then broken down into individual objects by means of a so-called clustering method or a cluster analysis. This is understood to mean a method for detecting similarity structures in usually relatively large pools of data. The groups of measured points found in this way, which have similar properties, are referred to as clusters, and the group assignment as clustering. Unlike automatic classification, the cluster analysis is intended to identify new groups in the data. Such a clustering method based on the spatial distances of the points comprises e.g. a Euclidean distance clustering algorithm. The individual objects are then checked to determine whether they have an overlap region with the monitoring region of the rail vehicle. It is also advantageously possible to identify and separate generic objects that cannot be detected using a machine-learning-based method trained for specific object classes.
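A minimal sketch of such a Euclidean distance clustering is given below: points are grouped whenever they are connected by a chain of points whose pairwise distances do not exceed a threshold; the threshold, the minimum cluster size and the brute-force neighbour search are simplifications chosen for this example.

```python
import numpy as np
from collections import deque

def euclidean_clustering(points, max_gap=0.5, min_cluster_size=3):
    """Break a point cloud down into individual objects: two points belong to the
    same cluster if they are connected by a chain of points whose pairwise
    distances do not exceed max_gap. Brute-force neighbour search for clarity;
    a k-d tree would normally be used for larger clouds."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            remaining = list(unvisited)
            if not remaining:
                continue
            dists = np.linalg.norm(pts[remaining] - pts[i], axis=1)
            for j, dist in zip(remaining, dists):
                if dist <= max_gap:
                    unvisited.discard(j)
                    queue.append(j)
                    cluster.append(j)
        if len(cluster) >= min_cluster_size:
            clusters.append(pts[cluster])
    return clusters

# Example: two compact groups of points and one isolated point.
cloud = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.2, 0.2, 0.0],
                  [5.0, 5.0, 0.0], [5.2, 5.1, 0.0], [5.1, 4.9, 0.0],
                  [20.0, 0.0, 0.0]])
print([len(c) for c in euclidean_clustering(cloud)])   # -> [3, 3]
```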
In a particularly preferred variant of the inventive method, a closest object point within the monitoring profile is determined, said object point lying closest to the rail vehicle as measured along the course of the line. All object points are then combined in a single 2D plane by merging the projection planes which run through each object point and are oriented orthogonally to the track. In this case, a projection plane is determined which runs through the closest object point relative to the vehicle along the rail string and is oriented orthogonally to the track on which the rail vehicle is travelling. All object points of the cluster are then projected onto this projection plane. In this way, all detectable points of a detected object lie on a plane which represents the shortest interval from the object to the rail vehicle. The projection of the object thus generated can then be compared with a sectional area of the monitoring region lying in the projection plane. It is moreover possible to determine geometric object attributes for the object or for the point cloud. Such object attributes comprise the area, the dimensions and the position of the object or of the entire point cloud relative to the rail vehicle, and the area of the subset of the point cloud within the monitoring region or the partial volumes. For the purpose of specifying these object attributes, it is possible e.g. to form a bounding box around this point cloud, to calculate a convex envelope, or also to create a grid layout map.
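The projection and the comparison with the sectional area of the monitoring region can be illustrated as follows for a locally straight track; the function name, the axis conventions and the bounding-extent simplification are assumptions made for this example.

```python
import numpy as np

def projected_overlap_area(cluster_points, track_dir, track_point,
                           profile_half_width, profile_height):
    """Project all points of one detected object onto the plane through the
    closest object point, oriented orthogonally to the (locally straight) track,
    and return the overlap of their bounding extent with the monitoring
    cross-section in that plane.

    cluster_points     -- (N, 3) object points
    track_dir          -- unit vector of the track direction at the closest point
    track_point        -- point on the track axis at the arc length of the
                          closest object point
    profile_half_width -- half-width of the monitoring profile (m)
    profile_height     -- height of the monitoring profile above the rail plane (m)
    """
    pts = np.asarray(cluster_points, dtype=float)
    origin = np.asarray(track_point, dtype=float)
    d = np.asarray(track_dir, dtype=float)
    d = d / np.linalg.norm(d)
    up = np.array([0.0, 0.0, 1.0])
    lateral = np.cross(up, d)                    # horizontal, orthogonal to track
    lateral = lateral / np.linalg.norm(lateral)
    rel = pts - origin
    y = rel @ lateral                            # lateral coordinate in the plane
    z = rel @ up                                 # height above the rail plane
    # Overlap of the projected bounding extent with the monitoring cross-section.
    y_lo, y_hi = max(y.min(), -profile_half_width), min(y.max(), profile_half_width)
    z_lo, z_hi = max(z.min(), 0.0), min(z.max(), profile_height)
    return max(0.0, y_hi - y_lo) * max(0.0, z_hi - z_lo)

# Example: an object roughly 1 m wide and 1.5 m tall at the edge of the profile.
obj = np.array([[10.0, 1.5, 0.2], [10.3, 2.5, 1.7], [10.1, 2.0, 0.9]])
print(projected_overlap_area(obj, track_dir=(1.0, 0.0, 0.0),
                             track_point=(10.0, 0.0, 0.0),
                             profile_half_width=2.0, profile_height=4.0))
# -> approx. 0.75 m^2
```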
In the step for determining which objects represent potential collision obstacles, the dataset that must be processed can advantageously be limited to that part of an object which is situated in the vicinity of the rail region or the monitoring region. Furthermore, greater accuracy can be achieved by directly calculating an overlap volume or an overlap area on the initial point cloud than by first performing a modeling step for the complete object, e.g. forming a bounding box, and performing the calculation on the basis of the model.
As mentioned above, the dynamic behavior of a possible collision object can also be captured by tracking this object. In this case, measured data relating to the object, e.g. its position, speed, size and sectional areas can be processed with the aid of a filter, e.g. a Kalman filter or a particle filter. It should be noted here that the dynamic attributes of an object such as e.g. speed or acceleration do not have to be measured directly, but can be derived from a plurality of positions at different time points. An intersection of the tracked object with the monitoring region can also be determined during the tracking. The methods mentioned above in respect of object detection and determination of a collision danger can also be applied in the case of this variant. Object attributes such as e.g. the distance, the height and width, the sectional area of the projected point cloud with the monitoring region, but also typical tracking attributes such as e.g. speed and confidence, are also used to assess the collision danger for this variant.
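As an illustration of deriving dynamic attributes from a plurality of positions at different time points, the following sketch fits a constant velocity to timestamped positions of a tracked object in the least-squares sense; the function name and the data are purely illustrative.

```python
import numpy as np

def estimate_velocity(timestamps, positions_xy):
    """Derive a 2D velocity from timestamped positions of a tracked object by
    fitting position = p0 + v * t in the least-squares sense (per axis)."""
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions_xy, dtype=float)
    A = np.column_stack([np.ones_like(t), t])            # [1, t] design matrix
    coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)       # rows: [p0; v]
    return coeffs[1]                                      # velocity vector (m/s)

# Example: an object drifting towards the track at roughly 0.8 m/s laterally.
ts = [0.0, 0.5, 1.0, 1.5]
xy = [[50.0, 6.0], [49.9, 5.6], [50.1, 5.2], [50.0, 4.8]]
print(estimate_velocity(ts, xy))   # approx. [ 0.0, -0.8]
```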
The invention is explained again in greater detail below on the basis of exemplary embodiments and with reference to the appended figures, in which:
In the step 1.I, for a process which monitors the surroundings of the rail vehicle 2, information relating to a monitoring profile PR, e.g. a clearance gage, and in particular its position PS relative to the rail vehicle 2 is provided together with digital map data LK for obstacle detection, said information and data being required subsequently for the purpose of determining a position and dimensions of a monitoring region VU. The map data LK and the profile data PR can be stored in a database in combined form, but can also be obtained separately from different sources.
In the step 1.II, sensor data SDL is captured for the purpose of auto-location-finding of the rail vehicle 2. For example, GNSS data or data from an inertial measurement unit is captured for this purpose.
In the step 1.III, information relating to the surroundings of the rail vehicle 2, in particular sensor data from landmarks LM and sensor data from objects O, is captured by sensors. The surroundings or the environment comprises at least the monitoring region VU. It may however extend beyond the monitoring region VU, e.g. in order to detect moving objects O at an early stage. Objects O are detected within the sensor data, which depicts the environment of the rail vehicle. The objects O can be e.g. vehicles, people, or even trees, shopping carts or rocks that have fallen onto the track.
In the step 1.IV, the digital map data LK, the sensor data SDL for auto-location-finding of the rail vehicle 2, and the sensor data from the landmarks LM are processed in order to determine a position PSF and orientation OR of the rail vehicle 2 and, from this, the position PG of the current track section and possibly also the orientation thereof. The true speed of the rail vehicle together with odometry data can be used for the purpose of position-finding. In the step 1.IV, the knowledge of the position PS and dimensions of a monitoring profile of the rail vehicle 2 is also used to determine suitable dimensions, e.g. the required width, height and extent along the line section, as well as a suitable positioning of the monitoring region VU.
An alignment of the determined position PSF and orientation OR of the rail vehicle 2 with the digital map data LK also takes place in the step 1.IV. As mentioned above, this alignment increases the precision with which the values for the position PSF and orientation OR of the rail vehicle 2 are determined, making use of the constraint that the rail vehicle 2 can only travel along a track.
In the step 1.V, the monitoring region VU is determined on the basis of the determined position PSF and orientation OR of the rail vehicle 2, the position PG of the current track section, the knowledge of the position PS of the monitoring profile PR and the dimensions thereof, said monitoring region VU comprising the region which is captured in front of the rail vehicle 2 by the on-board sensors, and through which the rail vehicle 2 will next travel, and in which a collision could therefore occur. Adjacent tracks can also form part of the monitoring region VU.
In the step 1.VI, provision is made for determining which objects O represent potential collision obstacles KH. This determination takes the form of checking whether and to what extent they overlap with the monitoring region VU. If such an overlap exists, a collision danger is present. If a detected object O is sufficiently far outside the monitoring region VU and its anticipated trajectory also does not cross the monitoring region VU, which moves dynamically in the direction of travel, the object O can be categorized as not dangerous. Otherwise, the object O must be categorized as a potential collision object or potential collision obstacle KH.
In the step 1.VII, a reaction R of the rail vehicle 2 to the objects O categorized as potential collision obstacles KH is determined. The type of the reaction R is determined as a function of object attributes such as e.g. the approximate dimensions of the respective object O. In the case of small non-human objects such as e.g. small animals, it is better to simply drive over them, and full braking would not be a proportionate response to such an obstacle. In the case of people or large obstacles, by contrast, it is essential to avoid a collision in order to avoid personal injury or damage to the rail vehicle 2. The reaction R is also determined in the step 1.VII as a function of the volume with which, and the extent to which, the object O overlaps. If the potential collision object KH is situated in the peripheral region of the monitoring region VU, it might be possible to avoid a collision with the rail vehicle 2 even as it continues its run. However, if the potential collision object KH is in the center of the monitoring region VU, braking is essential when the potential collision object KH is above a certain size. Furthermore, the reaction R is also dependent on the interval from the rail vehicle 2 to the collision object KH. If the collision object KH is still very far away, relatively gentle braking and the emission of a warning signal might suffice if it is a living object or an object that is controlled by a person. In this way, the object O can be provoked into a reaction or induced to leave the danger region. However, if the potential collision object KH is relatively close, braking must be correspondingly hard in order to avoid a collision. If the potential collision object KH is too close to the rail vehicle to allow braking at an early stage, the instructions might prescribe reducing the speed of the rail vehicle in order at least to moderate the impact with the collision obstacle KH on the line section. The reaction R is also determined as a function of the current speed of the rail vehicle 2.
A first scenario 20a on the far left in
A second scenario 20b shows a map extract in which are marked the captured landmarks LM1, LM2, LM3 and the track GK on which the rail vehicle 2 is travelling. The rail vehicle 2 is also marked at the bottom edge of the map in the second scenario 20b.
In a third scenario 20c, only the monitoring region VU of the rail vehicle 2 is shown (dotted).
In a fourth scenario 20d, the first three scenarios 20a, 20b, 20c are shown superimposed on top of each other. In this case, it can be seen that the map positions of the landmarks LM1, LM2, LM3 vary from the positions of the landmarks LM1, LM2, LM3 as captured by the rail vehicle 2. In the fourth scenario 20d, it also appears that the detected motor vehicle O is situated outside the monitoring region VU of the rail vehicle 2. Conversely, one of the trees O3 appears to project into the monitoring region VU of the rail vehicle 2. The actual orientation of the rail vehicle 2 also varies slightly from the self-determined orientation, this being evident in that the orientation of the rail vehicle 2 in the map and the self-determined orientation do not correspond exactly. On the basis of the determined variation between the map positions and the positions determined by sensor measurement of the landmarks LM1, LM2, LM3, correction of the sensor measurement is now undertaken in an adjustment step so that the positions of the landmarks LM1, LM2, LM3 as determined by the sensor measurement and the map data correspond.
The result can be seen in the fifth scenario 20e. It can now be seen there that the motor vehicle O is situated in the monitoring region VU of the rail vehicle 2, and is even already on the tracks GK, and therefore represents a potential collision obstacle. Conversely, it is clear from the fifth scenario 20e that the tree O3 is not situated in the monitoring region VU of the rail vehicle 2 and therefore does not represent a danger to the rail vehicle 2.
On the basis of the knowledge of the positions of the landmarks LM1, LM2 and LM3 and a current measurement of the position and orientation of the landmarks LM1, LM2 and LM3, it is therefore possible to detect a variation of the determined orientation of the rail vehicle 2 and a variation of the capture of objects in the environment of the rail vehicle (e.g. due to drift in the extrinsic calibration of sensors or vibrations), and to use said variation for the purpose of calibrating or adjusting the orientation determination.
In
Concerning this,
Alternatively, that part of the point cloud CL which lies within the sectional area FU of the monitoring region VU, or the monitoring area FU, can also be included in a grid layout map.
Concerning this,
The obstacle detection device 80 also comprises a position determination unit 81 which, on the basis of known dimensions of the rail vehicle 2, the sensor data SDL for auto-location-finding, e.g. GNSS data or IMU data (IMU=inertial measurement unit), further feature data from the environment such as landmarks LM, and map data LK, determines a position PSF of the rail vehicle 2 and its orientation OR as well as a position PG and a course of that part of the rail string on which the rail vehicle 2 is currently running.
The obstacle detection device 80 also comprises a monitoring region determination unit 88 which, on the basis of the monitoring profile PR, the position PS of the monitoring profile PR, the position PSF and orientation OR of the rail vehicle 2, as well as a position PG and a course of that part of the rail string on which the rail vehicle 2 is currently running, determines a monitoring volume or the position and dimensions of a monitoring region VU.
The sensor data relating to the captured objects O or positions thereof is transmitted to an obstacle determination unit 83 which, on the basis of the positions of the objects O together with the knowledge of the position and extent of the monitoring region VU, determines which objects O are possible collision obstacles KH. Details of the procedure for this are explained with reference to
The captured data relating to the determined collision obstacles KH, e.g. their position, their approximate dimensions and the interval from the collision obstacles KH to the rail vehicle 2, are transmitted to a response determination unit 84. The response determination unit 84 is configured to determine a reaction R to the objects O that are categorized as potential obstacles KH, as a function of the approximate dimensions G of the respective object O, the size of the collision volume with which it overlaps, the interval to the object O, and the current speed of the rail vehicle.
It is again noted that the methods and apparatuses described in the foregoing are merely preferred exemplary embodiments of the invention, and that the invention can be modified by a person skilled in the art without departing from the scope of the invention as it is specified in the claims. For the sake of completeness, it is also noted that use of the indefinite article “a” or “an” does not preclude multiple instances of the features concerned. Likewise, the term “unit” does not preclude the unit in question consisting of a plurality of components, which can also be spatially distributed if applicable.
Number | Date | Country | Kind
---|---|---|---
10 2021 203 014.9 | Mar 2021 | DE | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2022/055657 | 3/7/2022 | WO |