The present application claims priority to European Patent Application 18164228.1 filed with the European Patent Office on Mar. 27, 2018, the entire contents of which are incorporated herein by reference.
The present disclosure generally pertains to the technical field of active noise control (ANC), in particular to an electronic device, a method, and a computer program for active noise control inside a vehicle.
Drivers in vehicles are often exposed to a lot of distracting and annoying noise. Such noise, subsequently called "unwanted" noise, can have multiple negative effects on drivers. It may annoy a driver and may even dangerously decrease a driver's concentration.
Active noise control is based on the phenomenon of “destructive interference”, in which a 180°-phase-shifted anti-noise signal is superimposed on the noise signal so that the noise is decreased significantly. Typically, one or more microphones detect the incoming noise, while a computer calculates a corresponding anti-noise signal which is emitted by one or more speakers to cancel out the incoming noise.
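The principle of destructive interference can be illustrated numerically with a pure-tone sketch (the 120 Hz tone and 16 kHz sample rate are illustrative assumptions, not the system's actual processing):

```python
import numpy as np

fs = 16000                              # assumed sample rate (Hz)
t = np.arange(fs) / fs                  # one second of time samples
noise = np.sin(2 * np.pi * 120 * t)     # a 120 Hz noise tone
anti_noise = -noise                     # 180-degree phase-shifted copy
residual = noise + anti_noise           # superposition at the listener

# With perfect amplitude and phase alignment, the residual is zero.
```

In practice cancellation is only this complete where amplitude and phase of the anti-noise match the noise exactly, which motivates the small cancellation volume discussed below.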
Noise cancellation inside a vehicle cannot be achieved over a large volume, i.e. the volume in which the noise is cancelled is small. This poses a problem for ordinary systems, since the area of most effective cancellation might not cover the driver's ears, in which case the noise cancellation is not perceived by the driver. In addition, for immersive audio rendering systems, in particular systems that make use of binaural cancellation, the sweet spot is very small, and therefore such systems cannot be used in a generic way inside the vehicle.
According to a first aspect, the disclosure provides an electronic device for active noise control inside a vehicle, the device comprising a processor configured to determine a noise wavefield within the vehicle based on noise signals captured by a microphone array; to determine the position of the ears of a passenger based on information obtained from a head tracking sensor; to capture a noise field inside the vehicle based on information obtained by the microphone array; to obtain a noise level at the ears of the passenger from the noise field; and to determine an anti-noise field based on the noise level at the position of the ears of the passenger.
According to a further aspect, the disclosure provides a system, comprising a microphone array, a loudspeaker array, and the electronic device as defined above, wherein the processor is configured to calculate an anti-noise field based on the noise level at the ears of a passenger and wherein the processor is further configured to render the anti-noise field with the loudspeaker array.
According to a further aspect, the disclosure provides a method for active noise control inside a vehicle, the method comprising determining a noise wavefield within the vehicle based on noise signals captured by a microphone array; determining the position of the ears of a passenger based on information obtained from a head tracking sensor; capturing a noise field inside the vehicle based on information obtained by the microphone array; obtaining a noise level at the ears of the passenger from the noise field; and determining an anti-noise field based on the noise level at the position of the ears of the passenger.
According to a further aspect, the disclosure provides a computer program for active noise control inside a vehicle, the computer program comprising instructions which when executed on a processor cause the processor to: determine a noise wavefield within the vehicle based on noise signals captured by a microphone array; determine the position of the ears of a passenger based on information obtained from a head tracking sensor; capture a noise field inside the vehicle based on information obtained by the microphone array; obtain a noise level at the ears of the passenger from the noise field; and determine an anti-noise field based on the noise level at the position of the ears of the passenger.
Embodiments are explained by way of example with respect to the accompanying drawings, in which:
Before a detailed description of the embodiments under reference of
The embodiments disclose an electronic device for active noise control inside a vehicle, the device comprising a processor configured to determine a noise wavefield within the vehicle based on noise signals captured by a microphone array; determine the position of the ears of a passenger based on information obtained from a head tracking sensor; capture a noise field inside the vehicle based on information obtained by the microphone array; obtain a noise level at the ears of the passenger from the noise field; and calculate an anti-noise field based on the noise level at the position of the ears of the passenger.
For spatial acoustic sensing, e.g. in an acoustic noise cancelling application, the position of the head can be used to determine, via wavefield interpolation over a number of acoustic sensors placed in front of, behind, and beside the passengers, the wavefield around a person's head, which then enables driving a standard Active Noise Control (ANC) system more accurately. In particular, such a system achieves improved noise cancellation performance at high frequencies, since virtual sensors are located much closer to the ears of the user than physical sensors can be.
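The virtual-sensor idea can be sketched as follows. The inverse-distance weighting used here is a deliberately simple stand-in (an assumption of this sketch) for the model-based wavefield interpolation an actual ANC system would use:

```python
import numpy as np

def virtual_sensor(signals, mic_pos, head_pos, p=2.0):
    """Interpolate the sound pressure at a virtual sensor near the head.

    signals:  (num_mics, num_samples) array of microphone signals
    mic_pos:  (num_mics, 3) microphone positions in metres
    head_pos: (3,) estimated head position from the head tracker
    """
    d = np.linalg.norm(mic_pos - head_pos, axis=1)  # mic-to-head distances
    w = 1.0 / np.maximum(d, 1e-3) ** p              # inverse-distance weights
    w /= w.sum()                                    # normalise to sum to 1
    return w @ signals                              # weighted mix per sample
```

A physically grounded implementation would interpolate complex pressure with a wavefield model rather than mix time-domain samples, but the interface (microphone signals and a tracked head position in, one virtual-sensor signal out) is the same.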
The processor may for example be a microcomputer, for example a microcomputer of an electronic control unit (ECU) that is integrated in a vehicle control system.
The embodiments relate to an adjustment of audio beam size/rendering dependent on user detection (or tracking). The embodiments disclose a noise cancellation technique which depends on estimation of a passenger's head position and orientation, in relative or absolute space.
The absolute position and orientation of the user's head can be determined by use of an acoustic (e.g. ultrasound) based sensing system, involving sophisticated signal processing and machine learning applied to signals bounced off and scattered by the user's/listener's head. Other means, such as camera-based determination or passive (pressure, IR, or capacitive proximity) sensors, can also be employed.
Based on the detected head position and orientation, a 3D rendering engine is employed for placing virtual audio sources around the passenger's head. Alternatively, a beam width and orientation adaptation is performed for optimal performance of the beam-formed audio signal. Adaptation of the orientation and width of the beam-formed signal can be achieved using an array of loudspeakers.
Active noise cancellation is for example enhanced by the determination of a wavefield around a passenger's head.
The processor may be further configured to determine the position and orientation of the head of the passenger based on information obtained from the head tracking sensor, and to determine the position and the orientation of the ears of the passenger based on the position and the orientation of the head of the passenger.
The processor may be further configured to determine the orientation of the ears of the passenger based on information obtained from a head tracking sensor.
The processor may be configured to calculate the anti-noise field by determining one or more virtual sound sources. These one or more virtual sound sources may for example be monopole sound sources. The anti-noise field may for example be modelled as at least one monopole sound source placed at a defined target position, e.g. close to an ear of the passenger. The anti-noise field may for example be modelled as one single target monopole, or as multiple target monopoles placed at respective defined target positions, e.g. at the positions of the passenger's ears. If multiple target monopoles are used to represent the target sound field, the methods of synthesizing the sound of a target monopole based on a set of defined synthesis monopoles may be applied for each target monopole independently, and the contributions of the synthesis monopoles obtained for each target monopole may be summed to reconstruct the target sound field. The simplest example of a monopole source would be a sphere whose radius alternately expands and contracts sinusoidally. It is also possible, by using multiple monopole sound sources or algorithms such as wavefield synthesis, to create a directivity pattern for virtual sound sources.
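The free-field pressure radiated by such a monopole can be sketched as the source signal delayed by the propagation time and attenuated by spherical spreading. The 1/r amplitude law is standard monopole behaviour; rounding the delay to whole samples is a simplification of this sketch:

```python
import numpy as np

def monopole_pressure(x, src_pos, obs_pos, fs, c=343.0, amp=1.0):
    """Discrete-time pressure of a point monopole observed at obs_pos."""
    r = np.linalg.norm(np.asarray(obs_pos) - np.asarray(src_pos))
    n = int(round(r / c * fs))          # propagation delay in samples
    out = np.zeros(len(x) + n)
    out[n:] = (amp / max(r, 1e-3)) * x  # 1/r spherical-spreading attenuation
    return out
```

Summing `monopole_pressure` over several sources at one observation point gives the interference field that the anti-noise rendering must shape at the passenger's ears.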
The head tracking sensor may for example be an acoustic based sensing system. The head tracking sensor may for example emit acoustic waves close to the back of the head of a driver while sensing the signal bounced back from the head. Alternatively or in addition, the head tracking sensor may comprise a camera-based determination system, or a passive sensor such as a pressure sensor, an IR sensor, or a capacitive proximity sensor, or the like.
The embodiments also disclose a system, comprising a microphone array, a loudspeaker array, and a device as described above, wherein the processor is configured to calculate an anti-noise field based on the noise level at the ears of a passenger and wherein the processor is further configured to render the anti-noise field with the loudspeaker array. An array of microphones or several acoustic sensors may for example be distributed throughout the vehicle cabin. The processor may be configured to render the anti-noise field based on 3D audio rendering techniques.
The embodiments also disclose a method, comprising determining a noise wavefield within the vehicle based on noise signals captured by a microphone array; determining the position of the ears of a passenger based on information obtained from a head tracking sensor; capturing a noise field inside the vehicle based on information obtained by the microphone array; obtaining a noise level at the ears of the passenger from the noise field; and determining an anti-noise field based on the noise level at the position of the ears of the passenger. The method may perform any of the processes described above and below in more detail.
The embodiments also disclose a computer program comprising instructions which when executed on a processor cause the processor to: determine a noise wavefield within the vehicle based on noise signals captured by a microphone array; determine the position of the ears of a passenger based on information obtained from a head tracking sensor; capture a noise field inside the vehicle based on information obtained by the microphone array; obtain a noise level at the ears of the passenger from the noise field; and calculate an anti-noise field based on the noise level at the position of the ears of the passenger.
The embodiments also relate to a tangible computer-readable medium storing the computer program defined above.
Vehicle with 3D Audio Rendering System
Also in the vehicle, there are installed four head tracking sensors HTS1-HTS4 as described in
Within the vehicle, an array of microphones M1-M10 (see also In-vehicle information detecting unit 7500 of
The 3D anti-sound field calculated by the microcomputer is rendered by an array of loudspeakers SP1-SP8 which are installed in vehicle 1. Loudspeakers SP1, SP2 are located at the instrument panel, loudspeakers SP3, SP4, SP5, SP6 are located at the doors, and loudspeakers SP7 and SP8 are located at the rear of the vehicle. By means of 3D audio rendering, the speakers SP1 to SP8 are driven to generate a virtual sound field comprising virtual sound sources V1, V2 which are located close to the ears of the driver. Here, the virtual sound sources V1, V2 are monopole sound sources which radiate sound equally in all directions. The virtual sound sources V1, V2 are configured such that the interference field of all anti-sound waves at the ears of driver P1 has the same amplitude as the noise field at the driver's ears, but with inverted phase (also known as antiphase). The anti-sound waves and the noise waves combine into a residual wavefield and effectively cancel each other out by destructive interference, so that driver P1 no longer hears the noise. Destructive interference works best in a very small volume (about 10-20 cm in diameter), so it is beneficial that the microcomputer knows the position and orientation of the head of driver P1, and thus the position of the driver's ears, with good accuracy (e.g. 1 cm or better).
For the immersive field generation (3D sound rendering), the exact position of the two ears of the user can be used to control the delay between the emitting speakers in such a way as to get the desired interference of the emitted waves exactly at the location of the ears of the listener. Methods that make use of interference to create 3D audio are called “cross-talk cancellation” or “multichannel decorrelation” in the literature.
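The delay control mentioned above can be sketched geometrically under free-field assumptions. Real cross-talk cancellation additionally requires inverse filtering of the acoustic paths, which is omitted here; this sketch only computes per-speaker delays so that all wavefronts arrive at one ear simultaneously:

```python
import numpy as np

def alignment_delays(speaker_pos, ear_pos, fs, c=343.0):
    """Per-speaker delays (samples) so all wavefronts reach the ear together."""
    d = np.linalg.norm(speaker_pos - ear_pos, axis=1)  # speaker-to-ear distances
    t = d / c                                          # times of flight
    return np.round((t.max() - t) * fs).astype(int)    # nearer speakers wait longer
```

The farthest speaker receives zero extra delay and nearer speakers are held back, so the emitted waves interfere as desired exactly at the tracked ear position.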
As an alternative to the rendering of virtual sound sources, a beam width and orientation adaptation may be performed for optimal performance of the beam-formed audio signal. Adaptation of the orientation and width of beam-formed signals can be achieved using an array of loudspeakers.
The head tracking sensors described with regard to
At S1, a passenger's head position and head orientation are determined by means of a head tracking sensor. Determining a passenger's head position and head orientation by means of a head tracking sensor may for example be performed as described with regard to
At S2, the position and orientation of the passenger's ears are determined based on the position and orientation of the passenger's head. Determining the position and orientation of the passenger's ears based on the position and orientation of the passenger's head may for example be based on a predefined head model of the passenger (e.g. information describing the relative position of the ears with regard to the center of the head), or it may be based on a predefined standard head model, or it may be based on data obtained from the head tracking sensor from which the shape of the passenger's head can be inferred. Information about the driver identity (obtained e.g. by image recognition, key identification, manual input, or the like) available to the in-vehicle processor may be used to identify the passenger and an appropriate head model.
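Step S2 can be sketched with a minimal rigid head model. The 7.5 cm ear half-spacing and the yaw-only rotation are assumptions of this sketch, not values from the disclosure; a full implementation would use the complete head orientation and a per-passenger model:

```python
import numpy as np

def ear_positions(head_center, yaw, half_width=0.075):
    """Left/right ear positions from head centre and yaw angle (radians)."""
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
    offset = Rz @ np.array([0.0, half_width, 0.0])  # left-ear offset, head frame
    return head_center + offset, head_center - offset
```

Rotating the ear offsets with the tracked head pose is what lets the system follow the ears as the driver turns their head.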
At S3, the 3D noise field within the vehicle is captured with a microphone array. Capturing the 3D noise field within the vehicle with a microphone array may be achieved with any 3D sound recording technique known to the skilled person. For example, for estimating the sound level at desired points in the car, the SRP-PHAT (Steered Response Power Phase Transform) algorithm can be used, which is described in “Microphone Arrays: Signal Processing Techniques and Applications”, Springer-Verlag, 2001, chapter “Robust Localization in Reverberant Rooms”, pp. 157-180. Wavefield interpolation techniques, as known to the skilled person, can be used to estimate phase information.
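A minimal frequency-domain SRP-PHAT sketch is given below; the weighting and search strategy in the cited reference are more elaborate. For one candidate point, it sums the phase-transform-weighted cross-spectra of all microphone pairs, steered to the time differences of arrival that the point implies:

```python
import numpy as np

def srp_phat(frames, mic_pos, point, fs, c=343.0):
    """Steered response power with PHAT weighting at one candidate point."""
    n = frames.shape[1]
    X = np.fft.rfft(frames, axis=1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    dists = np.linalg.norm(mic_pos - point, axis=1)
    power = 0.0
    for i in range(len(mic_pos)):
        for j in range(i + 1, len(mic_pos)):
            cross = X[i] * np.conj(X[j])
            cross /= np.abs(cross) + 1e-12          # PHAT: keep phase only
            tau = (dists[j] - dists[i]) / c         # expected TDOA for `point`
            power += np.real(np.sum(cross * np.exp(2j * np.pi * freqs * tau)))
    return power
```

Evaluating `srp_phat` over a grid of candidate points (or at the tracked ear positions) yields a power map of the in-cabin noise field.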
At S4, the noise level at the passenger's ears is obtained from the captured 3D noise field. Obtaining the noise level at the passenger's ears from the captured 3D noise field may be achieved by evaluating the captured 3D noise field at the position of the passenger's ears.
At S5, a 3D anti-noise field is calculated based on the noise level at the passenger's ears. Calculating a 3D anti-noise field based on the noise level at the passenger's ears may be achieved by determining the noise level at the passenger's ears and creating a respective anti-noise signal. For example, virtual sound sources that emit anti-noise directly at the passenger's ears may be produced with any 3D audio rendering technique known to the skilled person, such as digitalized monopole synthesis which is described below in more detail.
At S6, the 3D anti-noise field is rendered with the loudspeaker array based on 3D audio rendering techniques. The rendering of the 3D anti-noise field with the loudspeaker array may be based on any 3D audio rendering technique known to the skilled person, such as digitalized monopole synthesis which is described below in more detail.
The theoretical background of this system is described in more detail in patent application US 2016/0037282 A1 which is herewith incorporated by reference.
The technique which is implemented in the embodiments of US 2016/0037282 A1 is conceptually similar to wavefield synthesis, which uses a restricted number of acoustic enclosures to generate a defined sound field. The generation principle of the embodiments differs, however, in that the synthesis does not attempt to model the sound field exactly but is based on a least-squares approach.
A target sound field is modelled as at least one target monopole placed at a defined target position. In one embodiment, the target sound field is modelled as one single target monopole. In other embodiments, the target sound field is modelled as multiple target monopoles placed at respective defined target positions. For example, each target monopole may represent a noise cancellation source comprised in a set of multiple noise cancellation sources positioned at a specific location within a space. The position of a target monopole may be moving. For example, a target monopole may adapt to the movement of a noise source to be attenuated. If multiple target monopoles are used to represent a target sound field, then the methods of synthesizing the sound of a target monopole based on a set of defined synthesis monopoles as described below may be applied for each target monopole independently, and the contributions of the synthesis monopoles obtained for each target monopole may be summed to reconstruct the target sound field.
A source signal x(n) is fed to delay units labelled z^(-n_p) and to amplification units a_p.
In this embodiment, the synthesis is thus performed in the form of delayed and amplified components of the source signal x.
According to this embodiment, the delay n_p for a synthesis monopole indexed p corresponds to the propagation time of sound over the Euclidean distance r = R_p0 = |r_p - r_0| between the target monopole at r_0 and the generator at r_p.
Further, according to this embodiment, the amplification factor a_p is inversely proportional to the distance r = R_p0.
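Combining the delay and amplification rules above, the driving signals for the synthesis monopoles can be sketched as follows. Normalisation constants are omitted and the delays are rounded to whole samples, which are simplifications of this sketch:

```python
import numpy as np

def synthesis_driving_signals(x, target_pos, monopole_pos, fs, c=343.0):
    """Delayed, 1/R-scaled copies of x(n) for each synthesis monopole p."""
    R = np.linalg.norm(monopole_pos - target_pos, axis=1)  # distances R_p0
    n_p = np.round(R / c * fs).astype(int)                 # delays in samples
    a_p = 1.0 / np.maximum(R, 1e-3)                        # gains ~ 1 / R_p0
    out = np.zeros((len(monopole_pos), len(x) + n_p.max()))
    for p in range(len(monopole_pos)):
        out[p, n_p[p]:n_p[p] + len(x)] = a_p[p] * x        # delayed, amplified copy
    return out
```

Each row is one synthesis monopole's driving signal; for multiple target monopoles, the rows computed per target are summed, as described above.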
In alternative embodiments of the system, the modified amplification factor according to equation (118) of US 2016/0037282 A1 can be used.
In yet further alternative embodiments of the system, a mapping factor as described with regard to FIG. 9 of US 2016/0037282 A1 can be used to modify the amplification.
The technology according to an embodiment of the present disclosure is applicable to various products. For example, the technology according to an embodiment of the present disclosure may be implemented as a device mounted on any kind of mobile body, such as automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility vehicles, airplanes, drones, ships, robots, construction machinery, agricultural machinery (tractors), and the like.
Each of the control units includes: a microcomputer that performs arithmetic processing according to various kinds of programs; a storage section that stores the programs executed by the microcomputer, parameters used for various kinds of operations, or the like; and a driving circuit that drives various kinds of control target devices. Each of the control units further includes: a network interface (I/F) for performing communication with other control units via the communication network 7010; and a communication I/F for performing communication with a device, a sensor, or the like within and without the vehicle by wire communication or radio communication. A functional configuration of the integrated control unit 7600 illustrated in
The driving system control unit 7100 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs. For example, the driving system control unit 7100 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine, a driving motor, or the like, a driving force transmitting mechanism for transmitting the driving force to wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like. The driving system control unit 7100 may have a function as a control device of an antilock brake system (ABS), electronic stability control (ESC), or the like.
The driving system control unit 7100 is connected with a vehicle state detecting section 7110. The vehicle state detecting section 7110, for example, includes at least one of a gyro sensor that detects the angular velocity of axial rotational movement of a vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting an amount of operation of an accelerator pedal, an amount of operation of a brake pedal, the steering angle of a steering wheel, an engine speed or the rotational speed of wheels, and the like. The driving system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detecting section 7110, and controls the internal combustion engine, the driving motor, an electric power steering device, the brake device, and the like.
The body system control unit 7200 controls the operation of various kinds of devices provided to the vehicle body in accordance with various kinds of programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like. In this case, radio waves transmitted from a mobile device as an alternative to a key or signals of various kinds of switches can be input to the body system control unit 7200. The body system control unit 7200 receives these input radio waves or signals, and controls a door lock device, the power window device, the lamps, or the like of the vehicle.
The battery control unit 7300 controls a secondary battery 7310, which is a power supply source for the driving motor, in accordance with various kinds of programs. For example, the battery control unit 7300 is supplied with information about a battery temperature, a battery output voltage, an amount of charge remaining in the battery, or the like from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and performs control for regulating the temperature of the secondary battery 7310 or controls a cooling device provided to the battery device or the like.
The outside-vehicle information detecting unit 7400 detects information about the outside of the vehicle including the vehicle control system 7000. For example, the outside-vehicle information detecting unit 7400 is connected with at least one of an imaging section 7410 and an outside-vehicle information detecting section 7420. The imaging section 7410 includes at least one of a time-of-flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The outside-vehicle information detecting section 7420, for example, includes at least one of an environmental sensor for detecting current atmospheric conditions or weather conditions and a peripheral information detecting sensor for detecting another vehicle, an obstacle, a pedestrian, or the like on the periphery of the vehicle including the vehicle control system 7000.
The environmental sensor, for example, may be at least one of a raindrop sensor detecting rain, a fog sensor detecting fog, a sunshine sensor detecting a degree of sunshine, and a snow sensor detecting snowfall. The peripheral information detecting sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR device (Light detection and Ranging device, or Laser imaging detection and ranging device). Each of the imaging section 7410 and the outside-vehicle information detecting section 7420 may be provided as an independent sensor or device, or may be provided as a device in which a plurality of sensors or devices are integrated.
The outside-vehicle information detecting unit 7400 causes the imaging section 7410 to capture an image of the outside of the vehicle, and receives the captured image data. In addition, the outside-vehicle information detecting unit 7400 receives detection information from the outside-vehicle information detecting section 7420 connected to the outside-vehicle information detecting unit 7400. In a case where the outside-vehicle information detecting section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the outside-vehicle information detecting unit 7400 transmits an ultrasonic wave, an electromagnetic wave, or the like, and receives information of a received reflected wave. On the basis of the received information, the outside-vehicle information detecting unit 7400 may perform processing of detecting an object such as a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. The outside-vehicle information detecting unit 7400 may perform environment recognition processing of recognizing a rainfall, a fog, road surface conditions, or the like on the basis of the received information. The outside-vehicle information detecting unit 7400 may calculate a distance to an object outside the vehicle on the basis of the received information.
In addition, on the basis of the received image data, the outside-vehicle information detecting unit 7400 may perform image recognition processing of recognizing a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. The outside-vehicle information detecting unit 7400 may subject the received image data to processing such as distortion correction, alignment, or the like, and combine the image data imaged by a plurality of different imaging sections 7410 to generate a bird's-eye image or a panoramic image. The outside-vehicle information detecting unit 7400 may perform viewpoint conversion processing using the image data imaged by the imaging section 7410 including the different imaging parts.
The in-vehicle information detecting unit 7500 detects information about the inside of the vehicle. The in-vehicle information detecting unit 7500 is, for example, connected with a passenger state detecting section 7510 that detects the state of a passenger (e.g. a driver). The passenger state detecting section 7510 includes the head tracking sensors (HTS1-HTS4 of
The integrated control unit 7600 controls general operation within the vehicle control system 7000 in accordance with various kinds of programs. The integrated control unit 7600 is connected with an input section 7800. The input section 7800 is implemented by a device capable of input operation by an occupant, such, for example, as a touch panel, a button, a microphone, a switch, a lever, or the like. The input section 7800 comprises the microphone array (M1-M10 in
The storage section 7690 may include a read only memory (ROM) that stores various kinds of programs executed by the microcomputer and a random access memory (RAM) that stores various kinds of parameters, operation results, sensor values, or the like. In addition, the storage section 7690 may be implemented by a magnetic storage device such as a hard disc drive (HDD) or the like, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 7620 is a communication I/F used widely, which communication I/F mediates communication with various apparatuses present in an external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as global system for mobile communications (GSM (registered trademark)), world-wide interoperability for microwave access (WiMAX (registered trademark)), long term evolution (LTE (registered trademark)), LTE-advanced (LTE-A), or the like, or another wireless communication protocol such as wireless LAN (also referred to as wireless fidelity (Wi-Fi (registered trademark))), Bluetooth (registered trademark), or the like. The general-purpose communication I/F 7620 may, for example, connect to an apparatus (for example, an application server or a control server) present on an external network (for example, the Internet, a cloud network, or a company-specific network) via a base station or an access point. In addition, the general-purpose communication I/F 7620 may connect to a terminal present in the vicinity of the vehicle (which terminal is, for example, a terminal of the driver, a pedestrian, or a store, or a machine type communication (MTC) terminal) using a peer to peer (P2P) technology, for example.
The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol developed for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such, for example, as wireless access in vehicle environment (WAVE), which is a combination of institute of electrical and electronic engineers (IEEE) 802.11p as a lower layer and IEEE 1609 as a higher layer, dedicated short range communications (DSRC), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication as a concept including one or more of communication between a vehicle and a vehicle (Vehicle to Vehicle), communication between a road and a vehicle (Vehicle to Infrastructure), communication between a vehicle and a home (Vehicle to Home), and communication between a pedestrian and a vehicle (Vehicle to Pedestrian).
The positioning section 7640, for example, performs positioning by receiving a global navigation satellite system (GNSS) signal from a GNSS satellite (for example, a GPS signal from a global positioning system (GPS) satellite), and generates positional information including the latitude, longitude, and altitude of the vehicle. Incidentally, the positioning section 7640 may identify a current position by exchanging signals with a wireless access point, or may obtain the positional information from a terminal such as a mobile telephone, a personal handyphone system (PHS), or a smart phone that has a positioning function.
The beacon receiving section 7650, for example, receives a radio wave or an electromagnetic wave transmitted from a radio station installed on a road or the like, and thereby obtains information about the current position, congestion, a closed road, a necessary time, or the like. Incidentally, the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above.
The in-vehicle device I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various in-vehicle devices 7760 present within the vehicle. The in-vehicle device I/F 7660 may establish wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or wireless universal serial bus (WUSB). In addition, the in-vehicle device I/F 7660 may establish wired connection by universal serial bus (USB), high-definition multimedia interface (HDMI (registered trademark)), mobile high-definition link (MHL), or the like via a connection terminal (and a cable if necessary) not depicted in the figures. The in-vehicle devices 7760 may, for example, include at least one of a mobile device and a wearable device possessed by an occupant and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a path to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
The vehicle-mounted network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The vehicle-mounted network I/F 7680 transmits and receives signals or the like in conformity with a predetermined protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various kinds of programs on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. For example, the microcomputer 7610 may calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of the obtained information about the inside and outside of the vehicle, and output a control command to the driving system control unit 7100. For example, the microcomputer 7610 may perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS), whose functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like. In addition, the microcomputer 7610 may perform cooperative control intended for automatic driving, which makes the vehicle travel autonomously without depending on the operation of the driver, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the obtained information about the surroundings of the vehicle.
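A minimal sketch of such a control target calculation for following driving, assuming a simple time-gap policy with invented gains and acceleration limits (none of these parameter values are specified in the disclosure), might look as follows:

```python
def following_accel(gap_m, ego_speed, lead_speed,
                    time_gap=1.8, standstill=2.0,
                    kp=0.2, kv=0.5, a_min=-3.0, a_max=2.0):
    """Compute an acceleration target (m/s^2) for distance-keeping.

    The desired gap grows with ego speed (time-gap policy); the target
    combines a gap error term and a relative-speed term, clamped to
    actuator limits before being sent as a control command.
    """
    desired_gap = standstill + time_gap * ego_speed
    accel = kp * (gap_m - desired_gap) + kv * (lead_speed - ego_speed)
    return max(a_min, min(a_max, accel))
```

At the nominal gap with matched speeds the target is zero; when the gap closes rapidly, the clamped negative output would correspond to a braking command to the driving system control unit.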
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure, a person, or the like, and generate local map information including information about the surroundings of the current position of the vehicle, on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. In addition, the microcomputer 7610 may predict danger such as a collision of the vehicle, the approach of a pedestrian or the like, or entry onto a closed road on the basis of the obtained information, and generate a warning signal. The warning signal may, for example, be a signal for producing a warning sound or lighting a warning lamp.
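As one hedged illustration of such danger prediction (the time-to-collision criterion and the threshold value below are assumptions of this example, not requirements of the disclosure), a warning signal could be triggered when the predicted time to collision falls below a threshold:

```python
def collision_warning(distance_m, closing_speed, ttc_threshold=2.5):
    """Return True when a warning signal should be generated.

    Time to collision (TTC) is the current distance divided by the
    closing speed (m/s); a vehicle that is not closing poses no
    predicted collision danger under this simple criterion.
    """
    if closing_speed <= 0.0:
        return False
    return distance_m / closing_speed < ttc_threshold
```

The boolean output could then drive the warning sound or warning lamp described above.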
The sound/image output section 7670 transmits an output signal of at least one of a sound and an image to an output device capable of visually or auditorily notifying information to an occupant of the vehicle or the outside of the vehicle. In the example of
Incidentally, at least two control units connected to each other via the communication network 7010 in the example depicted in
Incidentally, a computer program for realizing the functions of the information processing device 100 according to the present embodiment described with reference to FIG. Z can be implemented in one of the control units or the like. In addition, a computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. In addition, the above-described computer program may be distributed via a network, for example, without using the recording medium.
In the vehicle control system 7000 described above, the information processing device 100 according to the present embodiment described with reference to FIG. Z can be applied to the integrated control unit 7600 in the application example depicted in
In addition, at least part of the constituent elements of the information processing device 100 described with reference to FIG. Z may be implemented in a module (for example, an integrated circuit module formed with a single die) for the integrated control unit 7600 depicted in
Aspects of the above-described technology are also the following:
[1] An electronic device for active noise control inside a vehicle, the device comprising a processor (7610) configured to determine a noise wavefield within the vehicle based on noise signals captured by a microphone array; to determine the position of the ears of a passenger based on information obtained from a head tracking sensor; to capture a noise field inside the vehicle based on information obtained by the microphone array; to obtain a noise level at the ears of the passenger from the noise field; and to determine an anti-noise field based on the noise level at the position of the ears of the passenger.
Number | Date | Country | Kind
---|---|---|---
18164228.1 | Mar. 27, 2018 | EP | regional