The technical field generally relates to control systems for controlling partially or fully autonomous vehicles, and more particularly relates to a system, a vehicle and a method for adapting a driving condition of a vehicle upon detecting an event in an environment of the vehicle.
A driving scenario of a vehicle often includes many different situations in which the driver must intervene, for example, to avoid a collision between the vehicle and an object in the vehicle's surroundings. Such a driving scenario may involve the vehicle driving straight ahead, making a left or right turn, or performing a complex parking maneuver in which the vehicle successively changes between driving forward and backward. These driving maneuvers can be carried out by the vehicle partially or fully autonomously, in particular using a processor of the vehicle system configured to execute such maneuvers. However, complex and unexpected driving scenarios or events can occur which make it difficult for the autonomous vehicle system to predict possible changes of such a driving scenario or to determine whether an intervention of the vehicle system is required to avoid dangerous situations, for example a collision between the vehicle and a nearby object.
Accordingly, it is desirable to take into account and evaluate acoustic sources in the environment of a vehicle when controlling the vehicle in order to increase safety for the passengers and surrounding persons or objects. In addition, it is desirable to improve the redundancy of sensors for detecting environmental parameters used to control a vehicle. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
A system is provided for adapting a driving condition of a vehicle upon detecting an event in an environment of the vehicle. The system includes a non-transitory computer readable medium having stored thereon a pre-programmed driving maneuver of the vehicle, wherein the pre-programmed driving maneuver is indicative of a driving path of the vehicle. The system also includes a processor configured to obtain audio data of an acoustic source in the environment of the vehicle. The processor is further configured to determine a receiving direction of the acoustic source based on the audio data, the receiving direction being indicative of a direction of the acoustic source relative to the vehicle. The processor is further configured to determine whether the acoustic source is located within the driving path of the vehicle based on the pre-programmed driving maneuver and the determined receiving direction of the acoustic source. Furthermore, the processor is configured to determine a range between the vehicle and the acoustic source if it is determined that the acoustic source is located within the driving path of the vehicle. The processor is configured to control the vehicle based on the determined range, for example by changing the velocity or the driving direction of the vehicle. The controlling of the vehicle based on the determined range may include further processing of the determined range before a control action is initiated. The processor may further be configured to re-program the driving maneuver and/or to stop the vehicle. Thus, an updated driving maneuver can be generated which is indicative of the upcoming driving path of the vehicle. The update of the driving path can be carried out in real time.
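By way of a non-limiting illustration only, the following Python sketch outlines the decision flow summarized above. All names (Detection, path_contains, adapt_driving_condition), the angular-sector path test, and the threshold values are hypothetical simplifications introduced for this example and do not form part of the disclosed system.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    bearing_deg: float                # receiving direction, relative to the longitudinal axis
    range_m: Optional[float] = None   # determined only for in-path sources

def path_contains(bearing_deg: float, path_sector: Tuple[float, float]) -> bool:
    """Crude proxy for the in-path test: the receiving direction falls inside
    the angular sector swept by the pre-programmed driving path."""
    lo, hi = path_sector
    return lo <= bearing_deg <= hi

def adapt_driving_condition(det: Detection, path_sector, stop_range_m: float = 3.0) -> str:
    if not path_contains(det.bearing_deg, path_sector):
        return "continue"             # source is certainly outside the driving path
    if det.range_m is not None and det.range_m < stop_range_m:
        return "stop"                 # or re-program the driving maneuver in real time
    return "slow_down"

print(adapt_driving_condition(Detection(bearing_deg=5.0, range_m=2.0), (-20.0, 20.0)))  # -> stop
```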
In one embodiment, the processor is further configured to determine the range between the vehicle and the acoustic source only if it is determined that the acoustic source is located within the driving path of the vehicle.
In one embodiment, the non-transitory computer readable medium stores a set of audio models, each of the audio models being indicative of a respective acoustic scenario, wherein the processor is further configured to determine a type of the acoustic source based on a comparison between the obtained audio data of the acoustic source and the set of audio models.
In one embodiment, the non-transitory computer readable medium further stores a set of audio models, each of the audio models being indicative of a respective acoustic scenario, wherein the processor is further configured to determine an urgency estimation of a current driving scenario based on the obtained audio data of the acoustic source and the set of audio models, wherein a positive urgency estimation is indicative of an upcoming collision event of the acoustic source and the vehicle.
In one embodiment, the processor is further configured to obtain second audio data of the acoustic source if the result of the urgency estimation is indeterminable, and to subsequently determine a second urgency estimation of another driving scenario based on the obtained second audio data of the acoustic source and the set of audio models.
In one embodiment, the processor is further configured to obtain further sensor data of the acoustic source in the environment of the vehicle, and to provide fused data based on a fusion of the further sensor data of the acoustic source with the audio data of the acoustic source.
In one embodiment, the further sensor data is obtained from a camera, a Lidar and/or a radar.
In one embodiment, the processor is further configured to verify the urgency estimation of the current driving scenario based on the fused data.
In one embodiment, the processor is further configured to control the vehicle based on the verified urgency estimation of the current driving scenario, wherein controlling the vehicle by the processor includes changing the driving path of the vehicle or stopping the vehicle.
In one embodiment, the acoustic source is a person, another vehicle, an animal and/or a loudspeaker. However, it is noted that any other acoustic source emitting an acoustic noise, tone or signal may be considered as an acoustic source.
In one embodiment, the system further includes an audio sensor arrangement having a plurality of audio sensor arrays, wherein each of the plurality of audio sensor arrays in the audio sensor arrangement is located at a distinct location of the vehicle.
In one embodiment, the audio sensor arrangement comprises two audio sensor arrays, each of the two audio sensor arrays having two audio sensors, for example microphones.
In one embodiment, the processor is further configured to determine the range between the vehicle and the acoustic source based on triangulation using the two audio sensor arrays.
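As a non-limiting sketch of one way such a triangulation could be carried out, the following example derives the range from the bearings reported by two audio sensor arrays separated by a known baseline; the function name and the geometry conventions (bearings measured from the baseline joining the arrays) are assumptions made for this example.

```python
import math

def triangulate_range(baseline_m: float, bearing1_deg: float, bearing2_deg: float) -> float:
    """Range from the midpoint of two audio sensor arrays to a source, given
    the bearing each array measures from the baseline joining them."""
    a1 = math.radians(bearing1_deg)
    a2 = math.radians(bearing2_deg)
    gamma = math.pi - a1 - a2           # third angle of the triangle
    if gamma <= 0:
        raise ValueError("bearings do not intersect in front of the baseline")
    # Law of sines: distance from array 1 to the source.
    d1 = baseline_m * math.sin(a2) / math.sin(gamma)
    # Cosine rule: convert to the range from the baseline midpoint.
    h = baseline_m / 2
    return math.sqrt(d1**2 + h**2 - 2 * d1 * h * math.cos(a1))

# Two arrays 1.5 m apart, each seeing the source at 60 degrees from the baseline:
print(round(triangulate_range(1.5, 60.0, 60.0), 2))  # -> 1.3 (meters, directly ahead)
```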
In one embodiment, the processor is further configured to obtain audio data of a plurality of acoustic sources in the environment of the vehicle, to determine a receiving direction for each of the acoustic sources based on the audio data, the receiving directions being indicative of respective directions of the acoustic sources relative to the vehicle, and to determine for each of the acoustic sources whether the acoustic source is located within the driving path of the vehicle based on the pre-programmed driving maneuver and the determined receiving directions of each of the acoustic sources. It is possible that, for each acoustic source, two or more receiving directions may be determined using two or more audio sensor arrays, wherein the at least two receiving directions can then be used to localize the acoustic source.
In one embodiment, the processor is further configured to select the acoustic sources that are determined to be located within the driving path of the vehicle, and to determine a range between the vehicle and each of the selected acoustic sources, and to discard the acoustic sources that are determined not to be located within the driving path of the vehicle.
In one embodiment, the processor is further configured to determine a minimum range out of the determined ranges between the selected acoustic sources and the vehicle, and to select a single acoustic source from the plurality of acoustic sources, which is most proximal to the vehicle.
In one embodiment, the system further includes an audio sensor arrangement having a plurality of audio sensor arrays, wherein the processor is further configured to select one audio sensor array receiving a maximum signal-to-noise-ratio from the selected single acoustic source being most proximal to the vehicle, and to select an audio channel for an audio signal from an audio sensor of the selected audio sensor array.
In one embodiment, the processor is further configured to determine an urgency estimation of a current driving scenario based on the audio signal and a set of audio models stored on the non-transitory computer readable medium.
A vehicle is provided for adapting a driving condition upon detecting an event in an environment of the vehicle. The vehicle includes a non-transitory computer readable medium having stored thereon a pre-programmed driving maneuver of the vehicle, wherein the pre-programmed driving maneuver is indicative of a driving path of the vehicle. The vehicle further includes a processor configured to obtain audio data of an acoustic source in the environment of the vehicle, to determine a receiving direction of the acoustic source based on the audio data, the receiving direction being indicative of a direction of the acoustic source relative to the vehicle, to determine whether the acoustic source is located within the driving path of the vehicle based on the pre-programmed driving maneuver and the determined receiving direction of the acoustic source, and to determine a range between the vehicle and the acoustic source if it is determined that the acoustic source is located within the driving path of the vehicle. The processor is further configured to control the vehicle based on the determined range. The determination of the receiving direction of the acoustic source, the determination of whether the acoustic source is located within the driving path of the vehicle, and the determination of the range between the vehicle and the acoustic source can each be carried out using an updated driving maneuver in addition to, or instead of, the pre-programmed driving maneuver.
A method is provided for adapting a driving condition of a vehicle upon detecting an event in an environment of the vehicle. The method includes the step of storing, on a non-transitory computer readable medium, a pre-programmed driving maneuver of the vehicle, wherein the pre-programmed driving maneuver is indicative of a driving path of the vehicle. The method further includes the step of obtaining, by a processor, audio data of an acoustic source in the environment of the vehicle. The method further includes the step of determining, by the processor, a receiving direction of the acoustic source based on the audio data, the receiving direction being indicative of a direction of the acoustic source relative to the vehicle. The method further includes the step of determining, by the processor, whether the acoustic source is located within the driving path of the vehicle based on the pre-programmed driving maneuver and the determined receiving direction of the acoustic source. The method further includes the step of determining, by the processor, a range between the vehicle and the acoustic source if it is determined that the acoustic source is located within the driving path of the vehicle. The method further includes the step of controlling the vehicle based on the determined range.
The exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Embodiments of the present disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure may be practiced in conjunction with any number of systems, and that the systems described herein are merely exemplary embodiments of the present disclosure.
For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the present disclosure.
With reference to
In various embodiments, the vehicle 10 is an autonomous vehicle. The autonomous vehicle 10 is, for example, a vehicle that is automatically controlled to carry passengers from one location to another. The vehicle 10 is depicted in the illustrated embodiment as a passenger car, but it should be appreciated that any other vehicle including motorcycles, trucks, sport utility vehicles (SUVs), recreational vehicles (RVs), marine vessels, aircraft, etc., can also be used. In an exemplary embodiment, the autonomous vehicle 10 is a so-called Level Four or Level Five automation system. A Level Four system indicates “high automation”, referring to the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. A Level Five system indicates “full automation”, referring to the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver. It is appreciated, however, that the autonomous vehicle 10 may have any automation level from a Level Two system to a Level Five system.
As shown, the autonomous vehicle 10 generally includes a propulsion system 20, a transmission system 22, a steering system 24, a brake system 26, a sensor system 28, an actuator system 30, at least one data storage device 32, at least one controller 34, and a communication system 36. The propulsion system 20 may, in various embodiments, include an internal combustion engine, an electric machine such as a traction motor, and/or a fuel cell propulsion system. The transmission system 22 is configured to transmit power from the propulsion system 20 to the vehicle wheels 16 and 18 according to selectable speed ratios. According to various embodiments, the transmission system 22 may include a step-ratio automatic transmission, a continuously-variable transmission, or other appropriate transmission. The brake system 26 is configured to provide braking torque to the vehicle wheels 16 and 18. The brake system 26 may, in various embodiments, include friction brakes, brake by wire, a regenerative braking system such as an electric machine, and/or other appropriate braking systems. The steering system 24 influences a position of the vehicle wheels 16 and 18. While depicted as including a steering wheel for illustrative purposes, in some embodiments contemplated within the scope of the present disclosure, the steering system 24 may not include a steering wheel.
The sensor system 28 includes one or more sensing devices 40a-40n that sense observable conditions of the exterior environment and/or the interior environment of the autonomous vehicle 10. The sensing devices 40a-40n can include, but are not limited to, radars, Lidars (light detection and ranging), acoustic sensors, global positioning systems, optical cameras, thermal cameras, ultrasonic sensors, and/or other sensors. For example, the sensing devices include an acoustic sensor such as a microphone. The actuator system 30 includes one or more actuator devices 42a-42n that control one or more vehicle features such as, but not limited to, the propulsion system 20, the transmission system 22, the steering system 24, and the brake system 26. In various embodiments, the vehicle features can further include interior and/or exterior vehicle features such as, but not limited to, doors, a trunk, and cabin features such as air, music, lighting, etc. (not numbered).
The communication system 36 is configured to wirelessly communicate information to and from other entities 48, such as but not limited to, other vehicles (“V2V” communication), infrastructure (“V2I” communication), remote systems, and/or personal devices. In an exemplary embodiment, the communication system 36 is a wireless communication system configured to communicate via a wireless local area network (WLAN) using IEEE 802.11 standards or by using cellular data communication. However, additional or alternate communication methods, such as a dedicated short-range communications (DSRC) channel, are also considered within the scope of the present disclosure. DSRC channels refer to one-way or two-way short-range to medium-range wireless communication channels specifically designed for automotive use and a corresponding set of protocols and standards.
The data storage device 32 stores data for use in automatically controlling the autonomous vehicle 10. In various embodiments, the data storage device 32 stores defined maps of the navigable environment. In various embodiments, the defined maps may be predefined by and obtained from a remote system. For example, the defined maps may be assembled by the remote system and communicated to the autonomous vehicle 10 (wirelessly and/or in a wired manner) and stored in the data storage device 32. As can be appreciated, the data storage device 32 may be part of the controller 34, separate from the controller 34, or part of the controller 34 and part of a separate system.
The controller 34 includes at least one processor 44 and a computer readable storage device or media 46. The computer readable storage media 46 and/or the storage device 32 may store a pre-programmed driving maneuver of the vehicle 10, wherein the pre-programmed driving maneuver may be indicative of an upcoming driving path of the vehicle 10. The processor 44 can be any custom made or commercially available processor, a central processing unit (CPU), a graphics processing unit (GPU), an auxiliary processor among several processors associated with the controller 34, a semiconductor based microprocessor (in the form of a microchip or chip set), a macroprocessor, any combination thereof, or generally any device for executing instructions. The computer readable storage device or media 46 may include volatile and nonvolatile storage in read-only memory (ROM), random-access memory (RAM), and keep-alive memory (KAM), for example. KAM is a persistent or non-volatile memory that may be used to store various operating variables while the processor 44 is powered down. The computer-readable storage device or media 46 may be implemented using any of a number of known memory devices such as PROMs (programmable read-only memory), EPROMs (electrically PROM), EEPROMs (electrically erasable PROM), flash memory, or any other electric, magnetic, optical, or combination memory devices capable of storing data, some of which represent executable instructions, used by the controller 34 in controlling the autonomous vehicle 10.
The instructions may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The instructions, when executed by the processor 44, receive and process signals from the sensor system 28, perform logic, calculations, methods and/or algorithms for automatically controlling the components of the autonomous vehicle 10, and generate control signals to the actuator system 30 to automatically control the components of the autonomous vehicle 10 based on the logic, calculations, methods, and/or algorithms. Although only one controller 34 is shown in
In various embodiments, one or more instructions of the controller 34 are embodied. The controller includes the non-transitory computer readable medium 46 that stores a pre-programmed driving maneuver of the vehicle 10, which is indicative of a driving path of the vehicle 10 and, in particular indicates a driving path along which the vehicle 10 will travel. The controller also includes the processor 44 which is configured to obtain audio data of at least one acoustic source 41 in the environment of the vehicle 10. The acoustic source 41 may be any source emitting an acoustic wave or sound wave travelling, for example, through the air from the acoustic source 41 to the vehicle 10. The controller 34, in particular the functionality of the processor 44 will be described in more detail with reference to
In an exemplary embodiment, the processor 44 comprises an array processing module 44a configured to obtain audio data of the acoustic source 41 based on the acoustic signal data 41b from the audio sensor arrays 40a-40d. Based on these audio data, the processor 44 determines a receiving direction of the acoustic source 41, wherein the receiving direction is indicative of a direction of the at least one acoustic source 41 relative to the vehicle 10. The receiving direction may, for example, be measured with reference to a longitudinal axis of the vehicle 10. The receiving direction therefore indicates the location of the acoustic source 41 relative to the vehicle 10, for example using three-dimensional coordinates. In particular, two receiving directions may indicate the location of the acoustic source 41. Therefore, at least two audio sensor arrays 40a-40d may be used to determine a receiving direction of the acoustic source 41 for each of the at least two audio sensor arrays 40a-40d in order to determine the location of the acoustic source 41. Three-dimensional coordinates may be used to determine the location of the acoustic source 41, wherein acoustic sources 41 that are located above the road may be ignored and acoustic sources 41 that lie on the road may be considered further. In an exemplary embodiment, all audio sensor arrays 40a-40d are used to localize the acoustic source 41. This means that each audio sensor array 40a-40d determines one receiving direction for the acoustic source 41 so that, for each audio sensor array 40a-40d, one respective receiving direction of the acoustic source 41 is obtained. These receiving directions are then used to localize the acoustic source 41 and to estimate whether it is located in the driving path, i.e. to estimate whether it can be excluded that the acoustic source is located within the driving path. The localization is carried out by the localization module 44b of the processor 44. The localization may also use information from an inter-array energy difference which is calculated based on the intensity of the audio signals 41a received by each of the audio sensor arrays 40a-40d. In this manner, the acoustic source 41 may be localized such that its location relative to the vehicle 10 can be determined, for example using three-dimensional coordinates, although it may be preferred that the localization is determined based on two receiving directions. Energy differences may additionally be used to eliminate hypothetical directions. The described process may be referred to as a low latency, short-range, maneuver-dependent localization of acoustic sources 41.
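A minimal sketch of how receiving directions from all four audio sensor arrays could be combined into a single location estimate is given below. The least-squares ray intersection shown here is only one possible realization, and the function name and coordinate conventions (a 2-D road plane, global bearings) are assumptions made for illustration.

```python
import numpy as np

def localize_from_bearings(array_pos, bearings_deg):
    """Least-squares intersection of the bearing rays reported by several
    audio sensor arrays, in the 2-D road plane. A ray from position p with
    direction (cos t, sin t) satisfies n . x = n . p for normal n = (-sin t, cos t)."""
    p = np.asarray(array_pos, dtype=float)
    t = np.radians(np.asarray(bearings_deg, dtype=float))
    n = np.stack([-np.sin(t), np.cos(t)], axis=1)   # one normal per ray
    b = np.einsum("ij,ij->i", n, p)                 # right-hand sides n . p
    src, *_ = np.linalg.lstsq(n, b, rcond=None)
    return src

# Four arrays on the vehicle, all bearing toward a source at roughly (4, 1):
pos = [(0.0, 0.0), (1.5, 0.0), (0.0, 1.0), (1.5, 1.0)]
brg = [np.degrees(np.arctan2(1.0 - y, 4.0 - x)) for x, y in pos]
print(localize_from_bearings(pos, brg).round(2))    # -> [4. 1.]
```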
In an exemplary embodiment, the localization of the acoustic sources 41 can be carried out based on an inter-array energy level difference elimination by the localization module 44b of processor 44. This inter-array energy level difference elimination may include a localization based on an acoustic intensity or energy determination of the considered acoustic sources 41, i.e. finding the acoustic source 41 from which the strongest acoustic signal 41a is received by the different audio sensor arrays 40a-40d.
In an exemplary embodiment, the processor 44 estimates whether the acoustic source 41 lies within the driving path (not shown) of the vehicle 10 based on the pre-programmed driving maneuver or an updated driving maneuver and the determined receiving direction of the acoustic source 41. The acoustic source 41 lies within the driving path if the acoustic source 41 is localized such that a collision event between the vehicle 10 and the acoustic source 41 would occur if the acoustic source 41 did not move away and the vehicle 10 continued following the pre-programmed driving maneuver without control intervention. In such an event, i.e. when it is estimated that the acoustic source 41 is within or intersects the driving path of the vehicle 10, the processor 44 further determines a range between the vehicle 10 and the acoustic source 41 in order to confirm that the acoustic source 41 actually lies within the driving path of the vehicle 10. It is preferred that the processor 44 determines the range between the vehicle 10 and the acoustic source 41 only if it is estimated that the acoustic source 41 may lie within the driving path of the vehicle 10. The range determination may thus be used to confirm whether the acoustic source 41 really lies within the driving path of the vehicle 10, whereas the receiving directions determined beforehand may only indicate which acoustic sources 41 are certainly not within the driving path of the vehicle 10. Therefore, it is possible to consider in the range determination only those acoustic sources 41 which, after the localization using the receiving directions, cannot be excluded from lying within the driving path.
In an exemplary embodiment, the localization which is based on the determined receiving directions provides information about whether the acoustic source 41 can be excluded from lying within the driving path of the vehicle 10 or not. In other words, a first estimation is carried out as to whether there is a chance that the acoustic source 41 lies within the driving path. If so, the range between the vehicle 10 and the acoustic source 41 is determined in order to establish with certainty whether the acoustic source 41 lies within the driving path. If the receiving directions for the acoustic source 41 do not indicate that the acoustic source 41 may lie within the driving path, no range determination is carried out by the processor 44, i.e. some receiving directions can indicate that the acoustic source 41 is certainly not in the maneuver path, and such sources are not considered further. It may therefore only be possible to determine with certainty whether the source is in the driving path after the range has been determined.
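The exclusion test described above could, for instance, be realized as a corridor check around the planned path, as in the following hedged sketch; the polyline path representation, the corridor half-width, and the function name are illustrative assumptions, not a disclosed implementation.

```python
import numpy as np

def excluded_from_path(src_xy, path_xy, corridor_halfwidth_m=1.2):
    """First-stage screen: returns True only when the localized source is
    certainly outside the corridor swept by the planned path; otherwise the
    range determination must confirm or reject the hypothesis."""
    src = np.asarray(src_xy, dtype=float)
    pts = np.asarray(path_xy, dtype=float)
    dists = []
    for a, b in zip(pts[:-1], pts[1:]):          # distance to each path segment
        ab, ap = b - a, src - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        dists.append(np.linalg.norm(src - (a + t * ab)))
    return min(dists) > corridor_halfwidth_m

path = [(0, 0), (5, 0), (10, 2)]                  # planned maneuver polyline
print(excluded_from_path((4, 0.5), path))         # False: cannot be excluded
print(excluded_from_path((4, 5.0), path))         # True: certainly not in the path
```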
In an exemplary embodiment, the processor 44 determines the range between the vehicle 10 and the at least one acoustic source 41 based on triangulation using at least two of the audio sensor arrays 40a-40d. In this manner, the range, i.e. the distance between the vehicle 10 and the acoustic source 41, can be determined at a given point in time.
In an exemplary embodiment, the processor 44 can also obtain audio data of a plurality of different acoustic sources 41 in the environment of the vehicle 10 and determine a receiving direction for each of the plurality of acoustic sources 41 based on the audio data. In particular, the processor 44 determines the location of each acoustic source 41. The receiving directions are thus indicative of respective directions of the plurality of acoustic sources 41 relative to the vehicle 10 which provides the locations of each of the acoustic sources 41. The processor 44 then determines for each of the acoustic sources 41 whether it lies within the driving path of the vehicle 10 based on the pre-programmed driving maneuver and the determined receiving directions, i.e. locations, of each of the plurality of acoustic sources 41. The processor 44 selects those acoustic sources 41 that are determined to lie within the driving path of the vehicle 10 such that the processor 44 can then determine a range between the vehicle 10 and each of the selected acoustic sources 41. In particular, a range or distance between each acoustic source 41 that lies on the upcoming driving path (selected acoustic sources 41) and the vehicle 10 is determined. The other acoustic sources 41 that are not selected and thus do not lie within the driving path of the vehicle 10 are discarded. In other words, the processor 44 therefore selects all acoustic peaks in the maneuver direction, i.e. all acoustic sources 41 that lie within the driving path of the vehicle 10, and discards all other peaks, i.e. all acoustic sources 41 that do not lie within the driving path of the vehicle 10.
In an exemplary embodiment, the processor 44 determines a minimum range out of the determined ranges between the selected acoustic sources 41 and the vehicle 10. In other words, only the selected acoustic sources 41 which were determined to lie within the driving path of the vehicle 10 and for which a range has therefore been determined are compared according to their ranges such that a single acoustic source 41 from the plurality of acoustic sources 41, which is most proximal to the vehicle 10, is selected.
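A compact sketch of the select/discard and minimum-range logic of the two preceding paragraphs might look as follows; the source records, the in_path predicate and the range_to function are placeholders standing in for the localization and range-determination results described above.

```python
def most_proximal_in_path(sources, in_path, range_to):
    """Keep only acoustic sources that may lie within the driving path,
    determine a range for each, discard the rest, and return the most
    proximal surviving source (or None if all were discarded)."""
    candidates = [(range_to(s), s) for s in sources if in_path(s)]
    if not candidates:
        return None
    return min(candidates, key=lambda c: c[0])[1]   # minimum range wins

sources = [{"id": "pedestrian", "xy": (3.0, 0.0)},
           {"id": "horn",       "xy": (2.0, 6.0)},
           {"id": "dog",        "xy": (8.0, 0.5)}]
in_path  = lambda s: abs(s["xy"][1]) < 1.5                      # toy straight-ahead corridor
range_to = lambda s: (s["xy"][0]**2 + s["xy"][1]**2) ** 0.5
print(most_proximal_in_path(sources, in_path, range_to)["id"])  # -> "pedestrian"
```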
In an exemplary embodiment, the processor 44, for example the array selection module 44c of the processor 44, selects a single one of the audio sensor arrays 40a-40d, for example array 40c. The selection takes place by determining which of the audio sensor arrays 40a-40d receives a maximum signal-to-noise ratio from the single acoustic source 41 which has been selected as being most proximal to the vehicle 10. In other words, the audio sensor array 40c which receives the strongest acoustic signal with the lowest acoustic noise may be selected. The selected single audio sensor array 40c is then used by the processor 44 to beamform towards the selected acoustic source 41 which is most proximal to the vehicle 10, i.e. to select an audio channel for an audio signal from an audio sensor, e.g. from a single audio sensor, of the selected audio sensor array 40c. This may be carried out by the spatial object separation module 44d of the processor 44.
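One possible, simplified realization of the array selection and the subsequent beamforming toward the selected source is sketched below; the crude SNR estimate and the delay-and-sum beamformer are illustrative stand-ins, and all names, signal shapes, and delay values are assumptions of this example.

```python
import numpy as np

def select_array_and_beamform(frames, delays, fs):
    """Pick the array with the highest SNR estimate and delay-and-sum its
    channels toward the selected source. frames: list over arrays, each an
    (n_mics, n_samples) block; delays: per-array, per-mic steering delays (s)."""
    def snr_db(x):
        power = np.mean(x**2)
        noise = np.percentile(np.abs(x), 10) ** 2 + 1e-12    # crude noise-floor proxy
        return 10 * np.log10(power / noise)
    best = max(range(len(frames)), key=lambda i: snr_db(frames[i]))
    shifted = [np.roll(ch, -int(round(d * fs)))              # align each mic channel
               for ch, d in zip(frames[best], delays[best])]
    return best, np.mean(shifted, axis=0)                    # single beamformed channel

fs = 16_000
rng = np.random.default_rng(0)
frames = [rng.normal(0.0, 1.0, (2, fs)) for _ in range(4)]
frames[2] += 5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # array 2 hears the source best
i, beam = select_array_and_beamform(frames, [[0.0, 0.0]] * 4, fs)
print(i)  # -> 2
```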
In an exemplary embodiment, a non-transitory computer readable medium 46 stores pre-trained audio models in addition to the pre-programmed driving maneuver. The audio models may be descriptive of different acoustic scenarios or of the characteristics of different types or arrangements of acoustic sources. The processor 44, in particular the pattern recognition module 44e, is then able to allocate the selected audio signal to at least one of the pre-trained audio models stored on the non-transitory computer readable medium 46. This can be understood as a comparison between the selected audio signal and a pre-trained audio model which is carried out to obtain a certain probability based on which it can be assumed that the selected audio signal belongs to a specific pre-trained audio model. In other words, the selected audio signal is classified. This procedure can be carried out by the type and urgency classifier module 44f of the pattern recognition module 44e. A predetermined probability threshold can be applied, indicating which probability has to be achieved before the driving scenario, in particular the acoustic source 41, is considered correctly identified. The processor 44 can then determine a type of the at least one acoustic source 41 based on the comparison and the probability calculation. The processor 44 can also determine an urgency estimation of a current driving scenario, by analyzing the driving situation involving the vehicle 10 and the surrounding acoustic sources 41, based on the comparison and the probability calculation. A positive urgency estimation may indicate an upcoming collision event between the acoustic source 41 and the vehicle 10. The urgency estimation may thus depend on how urgently an intervention of the vehicle control system is required to avoid a collision event. This can be determined based on a comparison of the selected audio data and the audio models, which provides an indication of a probability for a certain current driving scenario, in particular of a current situation describing the positioning and movement of the vehicle 10 and the acoustic source 41. Based on this probability approach, it can be determined how urgently a change of the situation must be initiated to avoid an upcoming dangerous situation or even a collision between the acoustic source 41 and the vehicle 10. Both the non-transitory computer readable medium 46 and the type and urgency classifier module 44f are parts of the pattern recognition module 44e of the processor 44. The urgency estimation may involve an urgency-related processing of the audio data based on a pitch variation under Doppler impact.
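By way of illustration, the classification against the stored audio models could resemble the following sketch, in which cosine similarities against template vectors are turned into probabilities and compared with a predetermined threshold; the template-vector model representation, the temperature value, and the label strings are assumptions made purely for this example.

```python
import numpy as np

def classify_source(feature, models, p_min=0.8):
    """Score the selected audio signal (as a feature vector) against each
    pre-trained audio model and accept a type/urgency label only if the
    winning probability exceeds the predetermined threshold."""
    names = list(models)
    sims = np.array([feature @ models[n] /
                     (np.linalg.norm(feature) * np.linalg.norm(models[n]))
                     for n in names])
    scores = np.exp(sims * 8.0)                   # temperature-sharpened softmax
    probs = scores / scores.sum()
    k = int(np.argmax(probs))
    if probs[k] < p_min:
        return None, float(probs[k])              # indeterminable: acquire more audio
    return names[k], float(probs[k])

models = {"shout:urgent":  np.array([1.0, 0.0, 0.2]),
          "horn:urgent":   np.array([0.0, 1.0, 0.1]),
          "engine:benign": np.array([0.2, 0.1, 1.0])}
print(classify_source(np.array([0.9, 0.1, 0.15]), models))  # -> ('shout:urgent', ~0.99)
```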
In an exemplary embodiment, the above-described process is iteratively repeated by the processor 44 until a clear, i.e. determinable, type and/or urgency estimation is possible. A determinable urgency estimation is present if a positive urgency estimation or a negative urgency estimation can be made. A positive urgency estimation may indicate a likely upcoming collision event between the acoustic source 41 and the vehicle 10 and that further verification of this urgency estimation may be required before providing a control intervention for the vehicle 10. A negative urgency estimation may indicate that a collision event can be excluded. The processor 44 will obtain second audio data of the at least one acoustic source 41 if the result of the urgency estimation is indeterminable or unclear, for example when a classification of the same selected audio signal or another selected audio signal is necessary to carry out the urgency decision, i.e. to make a positive or negative urgency estimation. The decision whether an urgency estimation is positive, negative or indeterminable can be carried out by the urgency decision module 44g. If the result of the urgency estimation is indeterminable, another acoustic sensing of the environment can be carried out to receive further audio signals, and a second urgency estimation of another driving scenario is subsequently determined based on the obtained second audio data of the at least one acoustic source 41 and the set of audio models.
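The iterative acquisition loop described above might be expressed as follows; the iteration cap and the fail-safe default for a persistently indeterminable scene are assumptions of this sketch, not requirements of the disclosure.

```python
def decide_urgency(acquire_audio, classify, max_iter=5):
    """Repeat acquisition and classification until a positive or negative
    urgency estimation can be made (the 'indeterminable' branch above)."""
    for _ in range(max_iter):
        label, prob = classify(acquire_audio())
        if label is not None:                     # determinable result reached
            return label.endswith("urgent")       # True: verify, then intervene
    return True  # fail safe (assumed here): treat a persistently unclear scene as urgent

# Toy run: two indeterminable frames, then a confident urgent classification.
results = iter([(None, 0.55), (None, 0.60), ("shout:urgent", 0.95)])
print(decide_urgency(lambda: None, lambda _: next(results)))  # -> True
```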
In an exemplary embodiment, the processor 44 obtains further sensor data of the at least one acoustic source 41 in the environment of the vehicle 10. These further sensor data can be optical data from a camera 47a or Lidar sensor 47b or data from a radar sensor 47c. The processor 44 has a sensor data fusion module 44h which provides fused data based on a fusion of the further sensor data of the at least one acoustic source 41 with the selected audio data of the at least one acoustic source 41, in particular the selected audio signal from the selected audio channel. The data fusion may provide a verification of the correctness of the urgency estimation which is based on the audio data obtained from the audio sensors, i.e. the audio sensor array arrangement 40. The processor 44 thus verifies the urgency estimation of the current driving scenario based on the fused data. If the urgency estimation is confirmed by the further sensor data and the data fusion, then the processor 44 controls the vehicle 10 based on the verified urgency estimation of the current driving scenario.
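A deliberately simple voting rule can illustrate the fusion-based verification; an actual sensor data fusion module would combine far richer detections, so the boolean per-modality hits below are an illustrative abstraction only.

```python
def verify_urgency(audio_urgent, camera_hit, lidar_hit, radar_hit, k_required=1):
    """Confirm the audio-based urgency estimation against the other modalities:
    intervene only if at least k_required of them also report an object along
    the driving path (a simple voting stand-in for the data fusion)."""
    if not audio_urgent:
        return False
    return sum([camera_hit, lidar_hit, radar_hit]) >= k_required

print(verify_urgency(True, camera_hit=False, lidar_hit=True, radar_hit=False))  # -> True
```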
In an exemplary embodiment, the vehicle 10 is controlled or maneuvered by the processor 44, in particular by the AV controller of the maneuver system module 30. The controlling and maneuvering may include changing the driving path of the vehicle 10 or stopping the vehicle 10. Changing the driving path may include making a left turn or a right turn, etc.
In an exemplary embodiment, the system 1 provides improved redundancy for sensor-based environment detection and can compensate for unexpected events occurring due to failure of sensors or decision logic, obstructions, blind spots, and human behavior in general. The system 1 is adapted to handle unexpected events indicated by a shout or cry of a person, e.g. a pedestrian, upon maneuvering or parking the vehicle 10. However, further applications are possible, such as handling unexpected events indicated by a horn or a screech in the environment. The system 1 improves the safety in maneuvering AVs, in particular the safety of the drivers and passengers of the AVs, but also of the persons in the environment of the vehicle 10.
In an exemplary embodiment, the system 1 provides for a low latency classification including a successive evaluation of putative events based on a sensor array detection of a direction of acoustic sources 41 relative to the vehicle 10, a maneuver of the vehicle, a range between the acoustic sources 41 and the vehicle 10 and an urgency estimation being indicative of a requirement to change a driving condition of the vehicle 10. The duration of the evaluation can be incremented and iteratively carried out until a predetermined detection confidence is reached. This will be described in further detail with reference to
In an exemplary embodiment, the system 1 also provides a maneuver-dependent spatial scan. Therein, events may be incrementally evaluated only in the maneuver direction, i.e. only for the acoustic sources 41 which are located in the driving path of the vehicle 10. Afterwards, a range estimation is carried out only for acoustic sources located in the maneuver direction (DoAs in the maneuver direction). Furthermore, beamforming is applied only if the acoustic source 41 and its range are determined to be in the maneuver direction.
In an exemplary embodiment, the system 1 also provides an architecture for detecting a short-range event. A distributed microphone architecture is provided as, for example, described with reference to
In an exemplary embodiment, at step 200, n is set to “0” and SU is set to “0”. At step 200, the method starts collecting samples of acoustic signals from the environment of the vehicle. N audio sensor arrays are used to obtain these samples of acoustic signals from the environment. Therein, the number of audio channels provided to receive the acoustic signals from the environment is determined by multiplying N by M. At the beginning of the iterative process at step 210 shown in
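As a hedged sketch of this acquisition set-up, the following lines show the N x M channel bookkeeping and the initialization of the counters n and SU; the frame size, the sample rate, and the random stand-in for real microphone reads are assumptions made for the example.

```python
import numpy as np

N, M = 4, 2                  # N audio sensor arrays, M microphones per array
FS, FRAME = 16_000, 1024     # assumed sample rate and samples per iteration

def collect_frame(rng=np.random.default_rng()):
    """One iteration of the acquisition loop: N*M audio channels, returned
    as an (N, M, FRAME) block for the array processing module."""
    return rng.normal(0.0, 1.0, (N, M, FRAME))   # stand-in for real ADC reads

n, su = 0, 0                 # iteration counter and accumulated urgency state
frame = collect_frame()
assert frame.shape == (N, M, FRAME) and N * M == 8   # 8 audio channels in total
```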
In an exemplary embodiment, the vehicle 10 and/or system 1 of
In an exemplary embodiment, steps 200 to 310 as well as steps 380 to 400 of
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and the legal equivalents thereof.