Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle

Information

  • Patent Application
    20240137721
  • Publication Number
    20240137721
  • Date Filed
    December 29, 2023
  • Date Published
    April 25, 2024
Abstract
A sound-making apparatus control method includes that a first device obtains position information of a plurality of areas in which a plurality of users is located. The first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.
Description
TECHNICAL FIELD

Embodiments of this application relate to the field of intelligent vehicles, and more specifically, to a sound-making apparatus control method, a sound-making system, and a vehicle.


BACKGROUND

With the improvement of people's living standards, vehicles have become an important means of transportation. People like to listen to music and the radio, and sometimes watch movies or browse short videos, while traveling or waiting. Therefore, the sound field effect in the vehicle has become an important concern, and good sound effect brings a comfortable experience.


Currently, a user needs to manually adjust the play intensity of each speaker to achieve an optimal sound field at a target position. If the driver manually adjusts the play intensities, the driver needs to shift attention to a screen, which is a safety hazard while driving. In addition, when a passenger in the vehicle changes position, or the quantity of passengers increases, the speakers need to be readjusted manually again and again. This leads to poor user experience.


SUMMARY

Embodiments of this application provide a sound-making apparatus control method, a sound-making system, and a vehicle. Position information of an area in which a user is located is obtained for adaptively adjusting a sound field optimization center, to help improve listening experience of the user.


According to a first aspect, a sound-making apparatus control method is provided. The method includes that a first device obtains position information of a plurality of areas in which a plurality of users is located. The first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.


In this embodiment of this application, the first device obtains the position information of the plurality of areas in which the plurality of users is located, and controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work, without a need for a user to manually adjust the sound-making apparatuses. This helps reduce the user's learning costs and spares the user complicated operations. In addition, this also helps the plurality of users enjoy good listening effect, and helps to improve user experience.


In some possible implementations, the first device may be a sound-making system in a vehicle, in a home theater, or in a karaoke television (KTV) room.


In some possible implementations, before the first device obtains the position information of the areas in which the plurality of users is located, the method further includes that the first device detects a first operation of a user.


In some possible implementations, the first operation is an operation of the user controlling the first device to play audio content; or the first operation is an operation of the user connecting a second device to the first device and playing audio content on the second device by using the first device; or the first operation is an operation of the user enabling a sound field adaptation switch.


With reference to the first aspect, in some implementations of the first aspect, that a first device obtains position information of a plurality of areas in which a plurality of users is located includes that the first device determines, based on collected sensing information, the position information of the areas in which the plurality of users is located. The sensing information may be one or more of image information, sound information, and pressure information. The image information may be collected by an image sensor, for example, a camera apparatus or a radar. The sound information may be collected by a sound sensor, for example, a microphone array. The pressure information may be collected by a pressure sensor, for example, a pressure sensor mounted in a seat. In addition, the sensing information may be data collected by a sensor, or may be information obtained based on data collected by a sensor.


With reference to the first aspect, in some implementations of the first aspect, that a first device obtains position information of a plurality of areas in which a plurality of users is located includes that the first device determines the position information of the plurality of areas based on data collected by the image sensor; or the first device determines the position information of the plurality of areas based on data collected by the pressure sensor; or the first device determines the position information of the plurality of areas based on data collected by the sound sensor.


In this embodiment of this application, based on the position information of the plurality of areas in which the plurality of users is located and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses is controlled to work. In this way, a calculation process in which the first device controls the plurality of sound-making apparatuses to work can be simplified, and the first device can control the plurality of sound-making apparatuses more conveniently.


In some possible implementations, the image sensor may include a camera, a lidar, and the like.


In some possible implementations, the image sensor may determine whether there is a user in an area by collecting image information of the area and determining whether the image information includes face contour information, human ear information, iris information, and the like.


In some possible implementations, the sound sensor may include a microphone array.


It should be understood that the sensor may be one sensor or a plurality of sensors, where the plurality of sensors may be sensors of a same type, for example, all image sensors. Alternatively, sensing information from a plurality of types of sensors, for example, image information and sound information collected by the image sensor and the sound sensor, may be used to determine the user's position.


In some possible implementations, the position information of the plurality of areas in which the plurality of users is located may include a center point of each of the plurality of areas, or a preset point of each of the plurality of areas, or a point of each area that is obtained according to a preset rule.


With reference to the first aspect, in some implementations of the first aspect, that the first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes that the first device determines a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and the first device controls, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.


In this embodiment of this application, before controlling the plurality of sound-making apparatuses to work, the first device may first determine the current sound field optimization center point, and control, based on information about the distance between the sound field optimization center point and the plurality of sound-making apparatuses, the sound-making apparatus to work. This helps the plurality of users enjoy good listening effect, and helps improve user experience.
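As an illustration, a minimal sketch of computing such an equidistant point follows, assuming the occupied-area center points are expressed in a cockpit-fixed two-dimensional coordinate frame (the coordinates and function names are illustrative, not taken from this application):

```python
import numpy as np

def sound_field_center(area_centers):
    """Find a point equidistant from all occupied-area center points.

    Equating squared distances to the first center gives one linear
    equation per remaining center:
        |x - p0|^2 = |x - pi|^2  =>  2(pi - p0) . x = |pi|^2 - |p0|^2.
    With more centers than coordinates, the system is solved in the
    least-squares sense, since an exactly equidistant point may not exist.
    """
    p = np.asarray(area_centers, dtype=float)
    a = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    center, *_ = np.linalg.lstsq(a, b, rcond=None)
    return center

# Four occupied seats whose center points form a rectangle:
# the computed point is the rectangle's center.
centers = [(0.0, 0.0), (1.4, 0.0), (0.0, 2.1), (1.4, 2.1)]
print(sound_field_center(centers))  # -> [0.7  1.05]
```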


In some possible implementations, that the first device controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes controlling, based on the position information of the plurality of areas and a mapping relationship, the plurality of sound-making apparatuses to work, where the mapping relationship is a mapping relationship between positions of the plurality of areas and play intensities of the plurality of sound-making apparatuses.
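Such a mapping relationship could, for example, be realized as a pre-calibrated lookup table keyed by the occupancy pattern of the areas. The following sketch uses placeholder values that are assumptions for illustration, not calibration data from this application:

```python
# Hypothetical pre-calibrated mapping: occupancy pattern of the areas
# (driver, front passenger, second-row left, second-row right) -> play
# intensity of each of the four speakers, as a multiple of a reference
# intensity p. The values are illustrative placeholders only.
INTENSITY_MAP = {
    (1, 0, 0, 0): (0.8, 1.0, 1.0, 1.2),  # driver only
    (1, 1, 0, 0): (0.9, 0.9, 1.1, 1.1),  # both front seats
    (1, 1, 1, 1): (1.0, 1.0, 1.0, 1.0),  # all four areas occupied
}

def intensities_for(occupancy, p):
    """Return per-speaker play intensities for an occupancy pattern."""
    scale = INTENSITY_MAP.get(tuple(occupancy))
    if scale is None:
        return None  # pattern not calibrated; fall back to computation
    return [s * p for s in scale]

print(intensities_for((1, 1, 0, 0), p=1.0))  # -> [0.9, 0.9, 1.1, 1.1]
```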


With reference to the first aspect, in some implementations of the first aspect, the method further includes that the first device notifies position information of the sound field optimization center point.


In this embodiment of this application, the position information of the sound field optimization center point is notified to the user such that the listening effect of the plurality of users is improved and the user can also learn the current sound field optimization center point.


In some possible implementations, that the first device notifies position information of the sound field optimization center point includes that the first device notifies the position information of the sound field optimization center point by using a human-machine interface (HMI) or a sound.


In some possible implementations, the first device may be a vehicle, and that the first device notifies position information of the sound field optimization center point includes that the vehicle notifies the position information of the sound field optimization center point by using an atmosphere light.


With reference to the first aspect, in some implementations of the first aspect, the plurality of areas are areas in a vehicle cockpit.


With reference to the first aspect, in some implementations of the first aspect, the plurality of areas includes a front-row area and a rear-row area.


With reference to the first aspect, in some implementations of the first aspect, the plurality of areas may include a driver area and a front passenger area.


In some possible implementations, the plurality of areas includes a driver area, a front passenger area, a second-row left area, and a second-row right area.


In some possible implementations, the first device may be a vehicle, and that a first device obtains position information of a plurality of areas in which a plurality of users is located includes that the vehicle obtains, by using pressure sensors under seats in the areas, the position information of the plurality of areas in which the plurality of users is located.


In some possible implementations, the first device includes a microphone array, and that a first device obtains position information of a plurality of areas in which a plurality of users is located includes that the first device obtains a voice signal in an environment by using the microphone array; and determines, based on the voice signal, the position information of the plurality of areas in which the plurality of users in the environment is located.


With reference to the first aspect, in some implementations of the first aspect, the method further includes that the first device notifies the position information of the plurality of areas in which the plurality of users is located.


In some possible implementations, that the first device notifies the position information of the plurality of areas in which the plurality of users is located includes that the first device notifies, by using the HMI or the sound, the position information of the plurality of areas in which the plurality of users is located.


In some possible implementations, the first device may be a vehicle, and that the first device notifies the position information of the plurality of areas in which the plurality of users is located includes that the vehicle notifies, by using an atmosphere light, the position information of the plurality of areas in which the plurality of users is located.


With reference to the first aspect, in some implementations of the first aspect, controlling the plurality of sound-making apparatuses to work includes adjusting a play intensity of each of the plurality of sound-making apparatuses.


In some possible implementations, the play intensity of each of the plurality of sound-making apparatuses is directly proportional to a distance between each sound-making apparatus and the user.
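In symbols (introduced here for illustration; this notation is not from this application): with a reference intensity $p$ defined at a reference distance $d_0$, and $d_i$ the distance from sound-making apparatus $i$ to the listening position, this proportionality reads

$$p_i = \frac{d_i}{d_0} \cdot p,$$

so an apparatus farther from the listener plays proportionally louder, offsetting the extra distance attenuation.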


With reference to the first aspect, in some implementations of the first aspect, the plurality of sound-making apparatuses includes a first sound-making apparatus, and that the first device adjusts a play intensity of each of the sound-making apparatuses includes that the first device controls a play intensity of the first sound-making apparatus to be a first play intensity. The method further includes that the first device obtains an instruction of a user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity; and the first device adjusts the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the instruction.


In this embodiment of this application, after adjusting the play intensity of the first sound-making apparatus to the first play intensity, if the first device detects an operation of the user adjusting the play intensity of the first sound-making apparatus from the first play intensity to the second play intensity, the first device may adjust the play intensity of the first sound-making apparatus to the second play intensity. In this way, the user can quickly adjust the play intensity of the first sound-making apparatus such that the play effect of the first sound-making apparatus better meets the user's listening needs.


According to a second aspect, a sound-making system is provided, where the sound-making system includes a sensor, a controller, and a plurality of sound-making apparatuses. The sensor is configured to collect data and send the data to the controller. The controller is configured to obtain, based on the data, position information of a plurality of areas in which a plurality of users is located; and control, based on the position information of the plurality of areas and position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.


With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to obtain the position information of the plurality of areas based on data collected by an image sensor; obtain the position information of the plurality of areas based on data collected by a pressure sensor; or obtain the position information of the plurality of areas based on data collected by a sound sensor.


With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.


With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to send a first instruction to a first prompt apparatus, where the first instruction instructs the first prompt apparatus to notify the position information of the sound field optimization center point.


With reference to the second aspect, in some implementations of the second aspect, the plurality of areas are areas in a vehicle cockpit.


With reference to the second aspect, in some implementations of the second aspect, the plurality of areas includes a front-row area and a rear-row area.


With reference to the second aspect, in some implementations of the second aspect, the front-row area includes a driver area and a front passenger area.


With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to send a second instruction to a second prompt apparatus, where the second instruction instructs the second prompt apparatus to notify the position information of the plurality of areas in which the plurality of users is located.


With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to adjust a play intensity of each of the plurality of sound-making apparatuses.


With reference to the second aspect, in some implementations of the second aspect, the plurality of sound-making apparatuses includes a first sound-making apparatus. The controller is further configured to control a play intensity of the first sound-making apparatus to be a first play intensity. The controller is further configured to obtain a third instruction of the user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity, and adjust the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the third instruction.


According to a third aspect, an electronic apparatus is provided, where the electronic apparatus includes a transceiver unit configured to receive sensing information; and a processing unit configured to obtain, based on the sensing information, position information of a plurality of areas in which a plurality of users is located. The processing unit is further configured to control, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.


With reference to the third aspect, in some implementations of the third aspect, that the processing unit is further configured to control, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: The processing unit is configured to determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.


With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to send a first instruction to a first prompt unit, where the first instruction instructs the first prompt unit to notify position information of the sound field optimization center point.


With reference to the third aspect, in some implementations of the third aspect, the plurality of areas are areas in a vehicle cockpit.


With reference to the third aspect, in some implementations of the third aspect, the plurality of areas includes a front-row area and a rear-row area.


With reference to the third aspect, in some implementations of the third aspect, the plurality of areas includes a driver area and a front passenger area.


With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to send a second instruction to a second prompt unit, where the second instruction instructs the second prompt unit to notify the position information of the plurality of areas in which the plurality of users is located.


With reference to the third aspect, in some implementations of the third aspect, the processing unit is further configured to adjust a play intensity of each of the plurality of sound-making apparatuses.


With reference to the third aspect, in some implementations of the third aspect, the plurality of sound-making apparatuses includes a first sound-making apparatus. The processing unit is further configured to control a play intensity of the first sound-making apparatus to be a first play intensity. The transceiver unit is further configured to receive a third instruction, where the third instruction is an instruction instructing to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity. The processing unit is further configured to adjust the play intensity of the first sound-making apparatus to the second play intensity.


With reference to the third aspect, in some implementations of the third aspect, the sensing information includes one or more of image information, pressure information, and sound information.


In some possible implementations, the electronic apparatus may be a chip or an in-vehicle apparatus (for example, a controller).


In some possible implementations, the transceiver unit may be an interface circuit.


In some possible implementations, the processing unit may be a processor, a processing apparatus, or the like.


According to a fourth aspect, an apparatus is provided. The apparatus includes units configured to perform the method in any implementation of the first aspect.


According to a fifth aspect, an apparatus is provided. The apparatus includes a processing unit and a storage unit. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit such that the apparatus performs the method in any possible implementation of the first aspect.


Optionally, the processing unit may be a processor, and the storage unit may be a memory. The memory may be a storage unit (for example, a register or a cache) in a chip, or may be a storage unit (for example, a read-only memory (ROM), or a random-access memory (RAM)) located outside the chip in a vehicle.


According to a sixth aspect, a system is provided. The system includes a sensor and an electronic apparatus. The electronic apparatus may be the electronic apparatus according to any possible implementation of the third aspect.


With reference to the sixth aspect, in some implementations of the sixth aspect, the system further includes a plurality of sound-making apparatuses.


According to a seventh aspect, a system is provided. The system includes a plurality of sound-making apparatuses and an electronic apparatus, where the electronic apparatus may be the electronic apparatus according to any possible implementation of the third aspect.


With reference to the seventh aspect, in some implementations of the seventh aspect, the system further includes a sensor.


According to an eighth aspect, a vehicle is provided, where the vehicle includes the sound-making system according to any one of the possible implementations of the second aspect, or the vehicle includes the electronic apparatus according to any one of the possible implementations of the third aspect, or the vehicle includes the apparatus according to any possible implementation of the fourth aspect, or the vehicle includes the apparatus according to any possible implementation of the fifth aspect, or the vehicle includes the system according to any possible implementation of the sixth aspect, or the vehicle includes the system according to any possible implementation of the seventh aspect.


According to a ninth aspect, a computer program product is provided. The computer program product includes computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the method according to the first aspect.


It should be noted that all or some of the computer program code may be stored in a first storage medium. The first storage medium may be encapsulated together with a processor, or may be encapsulated separately from a processor. This is not specifically limited in this embodiment of this application.


According to a tenth aspect, a computer-readable medium is provided. The computer-readable medium stores computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the method according to the first aspect.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic functional block diagram of a vehicle according to an embodiment of this application;



FIG. 2 is a schematic diagram of a structure of a sound-making system according to an embodiment of this application;



FIG. 3 is a schematic diagram of another structure of a sound-making system according to an embodiment of this application;



FIG. 4 is a top view of a vehicle;



FIG. 5 is a schematic flowchart of a sound-making apparatus control method according to an embodiment of this application;



FIG. 6 is a schematic diagram of positions of four speakers in a vehicle cockpit;



FIG. 7 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat, a front passenger seat, a second-row left area, and a second-row right area according to an embodiment of this application;



FIG. 8 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 9 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat, a front passenger seat, and a second-row left area according to an embodiment of this application;



FIG. 10 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 11 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 12 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 13 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat and a second-row left area according to an embodiment of this application;



FIG. 14 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 15 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 16 is a schematic diagram of a sound field optimization center in a vehicle when there are users in a driver seat and a front passenger seat according to an embodiment of this application;



FIG. 17 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 18 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 19 is a schematic diagram of a sound field optimization center in a vehicle when there are users in a driver seat and a second-row left area according to an embodiment of this application;



FIG. 20 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 21 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 22 is a schematic diagram of a sound field optimization center in a vehicle when there is a user in a driver seat according to an embodiment of this application;



FIG. 23 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 24 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 25 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIG. 26 is another schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application;



FIGS. 27A and 27B show a group of graphical user interfaces (GUIs) according to an embodiment of this application;



FIGS. 28A and 28B show another group of GUIs according to an embodiment of this application;



FIG. 29 is a schematic diagram in which a sound-making apparatus control method is applied to a home theater according to an embodiment of this application;



FIG. 30 is a schematic diagram of a sound field optimization center in a home theater according to an embodiment of this application;



FIG. 31 is another schematic flowchart of a sound-making apparatus control method according to an embodiment of this application;



FIG. 32 is a schematic diagram of a structure of a sound-making system according to an embodiment of this application; and



FIG. 33 is a schematic block diagram of an apparatus according to an embodiment of this application.





DESCRIPTION OF EMBODIMENTS

The following describes technical solutions of this application with reference to accompanying drawings.



FIG. 1 is a schematic functional block diagram of a vehicle 100 according to an embodiment of this application. The vehicle 100 may be configured to be in a full or partial automatic driving mode. For example, the vehicle 100 may obtain environment information around the vehicle 100 by using a sensing system 120, and obtain an autonomous driving policy based on analysis of the ambient environment information, to implement full-autonomous driving, or present an analysis result to a user, to implement partial autonomous driving.


The vehicle 100 may include various subsystems, such as an infotainment system 110, a sensing system 120, a decision control system 130, a propulsion system 140, and a computing platform 150. Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include a plurality of components. In addition, each subsystem and component of the vehicle 100 may be interconnected in a wired or wireless manner.


In some embodiments, the infotainment system 110 may include a communication system 111, an entertainment system 112, and a navigation system 113.


The communication system 111 may include a wireless communication system, and the wireless communication system may communicate with one or more devices in a wireless manner directly or by using a communication network. For example, the wireless communication system may use third generation (3G) cellular communication, for example, code-division multiple access (CDMA), Evolution-Data Optimized (EVDO), the Global System for Mobile Communications (GSM), or a general packet radio service (GPRS); or fourth generation (4G) cellular communication, for example, Long-Term Evolution (LTE); or fifth generation (5G) cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) through Wi-Fi. In some embodiments, the wireless communication system may directly communicate with a device by using an infrared link, BLUETOOTH, or ZigBee. Other wireless protocols may also be used. For example, the wireless communication system may include one or more dedicated short-range communications (DSRC) devices, and these devices may support public and/or private data communication between vehicles and/or roadside stations.


The entertainment system 112 may include a central control screen, a microphone, and a sound box. A user may listen to the radio and play music in the vehicle through the entertainment system. Alternatively, a mobile phone may be connected to the vehicle to project the screen of the mobile phone onto the central control screen. The central control screen may be a touchscreen, and the user may perform an operation by touching the screen. In some cases, a voice signal of the user may be obtained by using the microphone, and some control of the vehicle 100 by the user may be implemented based on analysis of the voice signal, for example, adjusting the temperature inside the vehicle. In other cases, music may be played for the user by using the sound box.


The navigation system 113 may include a map service provided by a map supplier, to provide navigation of a driving route for the vehicle 100. The navigation system 113 may be used together with a global positioning system 121 and an inertial measurement unit 122 of the vehicle. The map service provided by the map supplier may be a two-dimensional map or a high-precision map.


The sensing system 120 may include several types of sensors that sense the ambient environment information of the vehicle 100. For example, the sensing system 120 may include the Global Positioning System (GPS) 121 (the positioning system may be a GPS, a BeiDou system, or another positioning system), the inertial measurement unit (IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, and a camera apparatus 126. The sensing system 120 may further include sensors that monitor an internal system of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, and an oil temperature gauge). Sensor data from one or more of these sensors can be used to detect an object and corresponding features (a position, a shape, a direction, a speed, and the like) of the object. Such detection and recognition are key functions of safe operation of the vehicle 100.


The global positioning system 121 may be configured to estimate a geographical position of the vehicle 100.


The inertial measurement unit 122 is configured to sense a position and an orientation change of the vehicle 100 based on an inertial acceleration. In some embodiments, the inertial measurement unit 122 may be a combination of an accelerometer and a gyroscope.


The lidar 123 may sense, by using a laser, an object in an environment in which the vehicle 100 is located. In some embodiments, the lidar 123 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.


The millimeter-wave radar 124 may sense an object in an ambient environment of the vehicle 100 by using a radio signal. In some embodiments, in addition to sensing an object, the millimeter-wave radar 124 may further be configured to sense a speed and/or a moving direction of the object.


The ultrasonic radar 125 may sense an object around the vehicle 100 by using an ultrasonic signal.


The camera apparatus 126 may be configured to capture image information of the ambient environment of the vehicle 100. The camera apparatus 126 may include a monocular camera, a binocular camera, a structured light camera, a panorama camera, and the like. The image information obtained by using the camera apparatus 126 may include a static image, or may include video stream information.


The decision control system 130 includes a computing system 131 that performs analysis and decision-making based on information obtained by the sensing system 120. The decision control system 130 further includes a vehicle control unit 132 that controls a power system of the vehicle 100, and a steering system 133, a throttle 134, and a braking system 135 that are configured to control the vehicle 100.


The computing system 131 may operate to process and analyze various information obtained by the sensing system 120 to identify a target, an object, and/or a feature in the ambient environment of the vehicle 100. The target may include a pedestrian or an animal, and the object and/or the feature may include a traffic signal, a road boundary, and an obstacle. The computing system 131 may use technologies such as an object recognition algorithm, a structure from motion (SFM) algorithm, and video tracking. In some embodiments, the computing system 131 may be configured to map an environment, track an object, estimate a speed of an object, and the like. The computing system 131 may analyze the obtained various information and obtain a control policy for the vehicle.


The vehicle control unit 132 may be configured to coordinate and control a power battery and an engine 141 of the vehicle, to improve power performance of the vehicle 100.


The steering system 133 may be operated to adjust a moving direction of the vehicle 100. For example, in an embodiment, the steering system 133 may be a steering wheel system.


The throttle 134 is configured to control an operating speed of the engine 141 and control a speed of the vehicle 100.


The braking system 135 is configured to control the vehicle 100 to decelerate. The braking system 135 may slow down a wheel 144 by using a friction force. In some embodiments, the braking system 135 may convert kinetic energy of the wheel 144 into a current. The braking system 135 may also slow down a rotation speed of the wheel 144 by using other forms, to control the speed of the vehicle 100.


The propulsion system 140 may include a component that provides power for the vehicle 100 to move. In an embodiment, the propulsion system 140 may include the engine 141, an energy source 142, a drive system 143, and the wheel 144. The engine 141 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine formed by a gasoline engine and an electric motor, or a hybrid engine formed by an internal combustion engine and an air compression engine. The engine 141 converts the energy source 142 into mechanical energy.


Examples of the energy source 142 include gasoline, diesel, other oil-based fuels, propane, other compressed gas-based fuels, ethyl alcohol, solar panels, batteries, and other power sources. The energy source 142 may also provide energy for another system of the vehicle 100.


The drive system 143 may transmit mechanical power from the engine 141 to the wheel 144. The drive system 143 may include a gearbox, a differential, and a drive shaft. In an embodiment, the drive system 143 may further include another component, for example, a clutch. The drive shaft may include one or more shafts that may be coupled to one or more wheels 144.


Some or all functions of the vehicle 100 are controlled by the computing platform 150. The computing platform 150 may include at least one processor 151, and the processor 151 may execute instructions 153 stored in a non-transitory computer-readable medium such as a memory 152. In some embodiments, the computing platform 150 may alternatively be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.


The processor 151 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 151 may further include a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SOC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 1 functionally illustrates a processor, a memory, and other components of the computing platform 150 in a same block, a person of ordinary skill in the art should understand that the processor, the computer, or the memory may actually include a plurality of processors, computers, or memories that may or may not be stored in a same physical housing. For example, the memory may be a hard disk drive, or another storage medium located in a housing different from that of the computing platform 150. Thus, a reference to the processor or the computer should be understood to include a reference to a set of processors, computers, or memories that may or may not operate in parallel. Instead of using a single processor to perform the steps described herein, some components, such as a steering component and a deceleration component, may each include a respective processor that performs only computation related to a component-specific function.


In various aspects described herein, the processor may be located far away from the vehicle and wirelessly communicate with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed inside the vehicle, while others are performed by a remote processor, including performing steps necessary for a single maneuver.


In some embodiments, the memory 152 may include the instructions 153 (for example, program logics), and the instructions 153 may be executed by the processor 151 to perform various functions of the vehicle 100. The memory 152 may also include additional instructions, including instructions used to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110, the sensing system 120, the decision control system 130, and the propulsion system 140.


In addition to the instructions 153, the memory 152 may further store data, such as a road map, route information, a position, a direction, a speed, and other vehicle data of the vehicle, and other information. This information may be used by the vehicle 100 and the computing platform 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.


The computing platform 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the propulsion system 140, the sensing system 120, and the decision control system 130). For example, the computing platform 150 may utilize an input from the decision control system 130 to control the steering system 133 to avoid obstacles detected by the sensing system 120. In some embodiments, the computing platform 150 may operate to provide control over many aspects of the vehicle 100 and the subsystems of the vehicle 100.


Optionally, one or more of the foregoing components may be installed separately from or associated with the vehicle 100. For example, the memory 152 may be partially or completely separated from the vehicle 100. The foregoing components may be communicatively coupled together in a wired and/or wireless manner.


Optionally, the foregoing components are merely examples. During actual application, components in the foregoing modules may be added or removed based on an actual requirement. FIG. 1 should not be construed as a limitation on this embodiment of this application.


An autonomous driving vehicle traveling on a road, for example, the vehicle 100, may identify an object in an ambient environment of the autonomous driving vehicle, to determine whether to adjust a current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and features of each object, such as a current speed of the object, an acceleration of the object, and an interval between the object and the vehicle, may be used to determine the speed to be adjusted by the autonomous driving vehicle.


Optionally, the vehicle 100 or a sensing and computing device (for example, the computing system 131 and the computing platform 150) associated with the vehicle 100 may predict behavior of the identified object based on the features of the identified object and a state of the ambient environment (for example, traffic, rain, and ice on the road). Optionally, all identified objects depend on behavior of each other, and therefore all the identified objects may be considered together to predict behavior of a single identified object. The vehicle 100 can adjust the speed of the vehicle 100 based on the predicted behavior of the identified object. In other words, the autonomous driving vehicle can determine, based on the predicted behavior of the object, a stable state to which the vehicle needs to be adjusted (for example, acceleration, deceleration, or stop). In this process, another factor may also be considered to determine the speed of the vehicle 100, for example, a horizontal position of the vehicle 100 on a road on which the vehicle drives, curvature of the road, and proximity between a static object and a dynamic object.


In addition to providing an instruction for adjusting the speed of the autonomous driving vehicle, the computing device may further provide an instruction for modifying a steering angle of the vehicle 100 such that the autonomous driving vehicle follows a given track and/or maintains safe lateral and longitudinal distances between the autonomous driving vehicle and an object (for example, a car in an adjacent lane on the road) near the autonomous driving vehicle.


The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, a construction device, a trolley, a golf cart, a train, or the like. This is not specifically limited in embodiments of this application.


With the improvement of people's living standards, vehicles have become an important means of transportation. People like to listen to music and the radio, and sometimes watch movies or browse short videos, while driving or waiting. Therefore, the sound field effect in the vehicle has become an important concern, and good sound effect brings a comfortable experience.


Currently, a user needs to manually adjust the play intensity of each speaker to achieve an optimal sound field at a target position. If the driver manually adjusts the play intensities, the driver needs to shift focus to a screen, which is a safety hazard while driving. In addition, when a passenger in the vehicle changes position, or the quantity of passengers increases, the play intensities of the speakers need to be readjusted manually again and again. This leads to poor user experience.


Embodiments of this application provide a sound-making apparatus control method, a sound-making system, and a vehicle. Position information of an area in which a user is located is identified, and a sound field optimization center is automatically adjusted, so that each user can achieve good listening effect.


The following describes an in-vehicle sound-making system provided in an embodiment of this application by using FIG. 2 and FIG. 3. FIG. 2 is a schematic diagram of a structure of a sound-making system according to an embodiment of this application. The sound-making system may be a Controller Area Network (CAN) control system. The CAN control system may include a plurality of sensors (such as a sensor 1 and a sensor 2), a plurality of electronic control units (ECU), an in-vehicle entertainment host, a speaker controller, and speakers. The sensors include but are not limited to a camera, a microphone, an ultrasonic radar, a millimeter-wave radar, a lidar, a vehicle speed sensor, a motor power sensor, and an engine speed sensor. The ECU is configured to receive data collected by the sensors, execute a corresponding command, and obtain a periodic signal or an event signal after executing the corresponding command. Then, the ECU may send these signals to a public CAN network. The ECU includes but is not limited to a complete vehicle controller, a hybrid controller, an automatic transmission controller, and an automatic driving controller. The in-vehicle entertainment host is configured to capture a periodic signal or an event signal sent by each ECU on the public CAN network, and perform a corresponding operation or forward the signal to the speaker controller when the corresponding signal is recognized. The speaker controller is configured to receive the command signal from the in-vehicle entertainment host over a private CAN network and adjust the speakers accordingly. For example, in this embodiment of this application, the in-vehicle entertainment host may capture, from the CAN bus, image information collected by the camera. The in-vehicle entertainment host may determine, based on the image information, whether there are users in a plurality of areas in the vehicle, and send position information of the users to the speaker controller. The speaker controller may control a play intensity of each speaker based on the position information of the users.
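As a rough sketch of how such position information might be encoded for transmission between the in-vehicle entertainment host and the speaker controller, the following packs an occupancy bitmap into a one-byte CAN payload; the frame ID and payload layout are assumptions for illustration only:

```python
import struct

# Illustrative frame layout (an assumption, not from this application):
# a one-byte occupancy bitmap carried on a hypothetical arbitration ID,
# bit i set when area i is occupied (bit 0 = driver, bit 1 = front
# passenger, bit 2 = second-row left, bit 3 = second-row right).
OCCUPANCY_FRAME_ID = 0x3A0  # placeholder ID

def pack_occupancy_frame(occupied_areas):
    """Encode occupied area indices into a one-byte CAN payload."""
    flags = 0
    for area in occupied_areas:
        flags |= 1 << area
    return OCCUPANCY_FRAME_ID, struct.pack("B", flags)

def unpack_occupancy_frame(payload):
    """Decode the payload back into the set of occupied area indices."""
    (flags,) = struct.unpack("B", payload)
    return {i for i in range(4) if flags & (1 << i)}

frame_id, payload = pack_occupancy_frame({0, 1})  # driver + front passenger
assert unpack_occupancy_frame(payload) == {0, 1}
```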



FIG. 3 is a schematic diagram of another structure of an in-vehicle sound-making system according to an embodiment of this application. The sound-making system may be a ring network communication architecture. All sensors and actuators (components, such as a speaker, an atmosphere light, an air conditioner, and a motor, that obtain and execute commands) may be connected to a nearby vehicle integration unit (VIU). As a communication interface unit, the VIU may be deployed at a position in which sensors and actuators of a vehicle are dense, so that nearby sensors and actuators of the vehicle can connect to it. In addition, the VIU may have specific computing and driving capabilities (for example, the VIU may absorb the drive computing functions of some actuators). The sensors include but are not limited to a camera, a microphone, an ultrasonic radar, a millimeter-wave radar, a lidar, a vehicle speed sensor, a motor power sensor, an engine rotation speed sensor, and the like.


VIUs communicate with each other through networking. An intelligent driving computing platform/mobile data center (MDC), a vehicle domain controller (VDC), and an intelligent cockpit domain controller (CDC) are separately and redundantly connected to the ring communication network formed by the VIUs. After a sensor (for example, the camera) collects data, the sensor may send the collected data to the VIU, and the VIU can publish the data to the ring network. The MDC, VDC, and CDC collect the related data on the ring network, process the data, convert the data into a signal including position information of a user, and publish the signal to the ring network. A play intensity of a speaker is then controlled through the corresponding computing capability and driving capability in the VIU.


It should be understood that, as shown in FIG. 3, there may be a correspondence between different VIUs and speakers at different positions. For example, a VIU 1 is configured to drive a speaker 1, a VIU 2 is configured to drive a speaker 2, a VIU 3 is configured to drive a speaker 3, and a VIU 4 is configured to drive a speaker 4. An arrangement of the VIUs may be independent of the speakers. For example, the VIU 1 may be arranged at the left rear of the vehicle, but the speaker 1 may be arranged near a door on the driver side. A sensor or actuator can be connected to the nearby VIU, thereby reducing cable bundles. Because the MDC, VDC, and CDC have a limited quantity of interfaces, the VIU can be connected to a plurality of sensors and a plurality of actuators to implement interface and communication functions.


It should be further understood that, in this embodiment of this application, the VIU to which a sensor or an actuator is connected, and the controller that controls it, may be set before delivery of the sound-making system, or may be defined by the user, and hardware of the sound-making system may be replaced and upgraded.


It should be further understood that the VIU may absorb the drive computing functions of some sensors and actuators. In this way, when some controllers (for example, the CDC or the VDC) are faulty, the VIU may directly process the data collected by the sensor, to further control the actuators.


In an embodiment, the communication architecture shown in FIG. 3 may be an intelligent digital vehicle platform (IDVP) ring network communication architecture.



FIG. 4 is a top view of a vehicle. As shown in FIG. 4, a position 1 is a driver seat, a position 2 is a front passenger seat, positions 3 to 5 are rear-row areas, positions 6a to 6d are positions of four speakers in the vehicle, a position 7 is a position of an in-vehicle camera, and a position 8 is a position where a CDC and an in-vehicle central control screen are located. The speaker may be configured to play a media sound in the vehicle. The in-vehicle camera may be used to detect a position of a passenger in the vehicle. The in-vehicle central control screen may be used to display image information and an interface of an application. The CDC is used to connect peripherals and provide data analysis and processing capabilities.


It should be understood that FIG. 4 only uses an example in which the speakers shown at the positions 6a to 6d are located near the door of the driver seat, near the door of the front passenger seat, near the door of the second-row left area, and near the door of the second-row right area for description. In this embodiment of this application, the positions of the speakers are not specifically limited. The speakers may alternatively be located near the vehicle doors, near the large central control screen, on the ceiling, on the floor, or on the seats (for example, in the seat headrests).



FIG. 5 is a schematic flowchart of a sound-making apparatus control method 500 according to an embodiment of this application. The method 500 may be applied to a vehicle, the vehicle includes a plurality of sound-making apparatuses (for example, speakers), and the method 500 includes the following steps.


S501: The vehicle obtains position information of a user.


In an embodiment, the vehicle may obtain image information of each area (for example, a driver seat, a front passenger seat, and a rear-row area) in the vehicle by starting an in-vehicle camera, and determine, based on the image information of each area, whether there is a user in the area. For example, the vehicle may analyze, based on the image information collected by the camera, whether the image information includes an outline of a human face, so that the vehicle may determine whether there is a user in the area. For another example, the vehicle may analyze, based on the image information collected by the camera, whether the image information includes iris information of a human eye, so that the vehicle may determine whether there is a user in the area.
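One plausible way to implement such a per-area face check is to run a stock face detector on a crop of the camera image. The following sketch assumes OpenCV's bundled Haar cascade and per-area crop rectangles calibrated for the cockpit (both are assumptions, not details from this application):

```python
import cv2

# Stock frontal-face Haar cascade bundled with OpenCV.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def area_occupied(frame, region):
    """Return True if a face is detected inside one seating area.

    frame:  a BGR image from the in-vehicle camera.
    region: an (x, y, w, h) crop covering the area, calibrated per cockpit.
    """
    x, y, w, h = region
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    return len(faces) > 0
```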


In an embodiment, when the vehicle detects an operation of turning on a sound field adaptation switch by the user, the vehicle may start the camera to obtain the image information of each area in the vehicle.


For example, the user may select a setting option on a large central control screen to enter a sound effect function interface, and may choose to enable the sound field adaptation switch on the sound effect function interface.


In an embodiment, the vehicle may alternatively detect, by using a pressure sensor under a seat, whether there is a user in a current area. For example, when a pressure value detected by a pressure sensor under a seat in an area is greater than or equal to a preset value, it may be determined that there is a user in the area.


In an embodiment, the vehicle may alternatively determine position information of a sound source by using audio information obtained by a microphone array, to determine specific areas in which there are users.
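A minimal sketch of the sound-source idea follows, assuming a two-microphone array with synchronized sample streams and a known spacing; a real in-vehicle microphone array would use more channels and a more robust estimator:

```python
import numpy as np

def direction_of_arrival(sig_l, sig_r, fs, mic_spacing, c=343.0):
    """Estimate the bearing of a sound source from two microphone signals.

    Cross-correlates the two synchronized channels to find the inter-mic
    delay, then converts the delay into an angle via
    arcsin(delay * c / spacing); 0 degrees means broadside to the pair.
    """
    corr = np.correlate(sig_l, sig_r, mode="full")
    lag = np.argmax(corr) - (len(sig_r) - 1)  # delay in samples
    delay = lag / fs                          # delay in seconds
    sin_theta = np.clip(delay * c / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```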


In an embodiment, the vehicle may alternatively obtain the position information of the user in the vehicle by using one or a combination of the in-vehicle camera, the pressure sensor, and the microphone array.


It should be understood that in this embodiment of this application, data collected by the sensor (for example, the in-vehicle camera, the pressure sensor, or the microphone array) can be transmitted to a CDC, and the CDC can process the data to determine specific areas in which there are users.


For example, after processing the data, the CDC may convert the data into a flag bit. For example, the CDC may output 1000 when there is a user in only the driver seat. The CDC may output 0100 when there is a user in only the front passenger seat. The CDC may output 0010 when there is a user in only a second-row left area. The CDC may output 1100 when there are users in both the driver seat and the front passenger seat. The CDC may output 1110 when there are users in the driver seat, the front passenger seat, and the second-row left area.


It should be understood that a manner of outputting the position information by the CDC is merely described by using the flag bit as an example. This embodiment of this application is not limited thereto.


It should be further understood that the foregoing description uses an example in which sitting areas of the users in the vehicle are divided into the driver seat, the front passenger seat, the second-row left area, and the second-row right area. This embodiment of this application is not limited thereto. For example, areas in the vehicle may alternatively be divided into a driver seat, a front passenger seat, a second-row left area, a second-row middle area, and a second-row right area. For another example, for a 7-seat sports utility vehicle (SUV), areas in the vehicle may alternatively be divided into a driver seat, a front passenger seat, a second-row left area, a second-row right area, a third-row left area, and a third-row right area. For another example, for a passenger car, areas in the vehicle may be divided into a front-row area and a rear-row area. Alternatively, for a multi-passenger car, areas in the vehicle may be divided into a driving area, a passenger area, and the like.


S502: The vehicle adjusts a sound-making apparatus based on the position information of the user.


For example, the following provides description with reference to the speakers 6a to 6d in FIG. 4 by using an example in which the sound-making apparatus is a speaker. FIG. 6 shows positions of the four speakers. For example, a graph formed by connection lines of points at which the four speakers are located is a rectangle ABCD. A speaker 1 is disposed at a point A on the rectangle ABCD, a speaker 2 is disposed at a point B, a speaker 3 is disposed at a point C, and a speaker 4 is disposed at a point D. A point O is a center point of the rectangle ABCD (distances between the point O and the four points A, B, C, and D are equal). It should be understood that different automobiles have different speaker positions and quantities. In a specific implementation process, a specific adjustment manner may be designed based on a model of an automobile or a setting of speakers in the automobile. This is not limited in this application.



FIG. 7 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat, a front passenger seat, a second-row left area, and a second-row right area according to an embodiment of this application. Center points of all the areas may form a rectangle EFGH, and a center point of the rectangle EFGH may be a point Q. The point Q may be a current sound field optimization center point in the vehicle.


For example, the point Q may coincide with the point O. Since the distances between the center point Q of the rectangle EFGH and the four speakers are equal, the vehicle can control the play intensities of the four speakers to be the same (for example, the play intensities of the four speakers are all p).


For example, if the point Q and the point O do not overlap, the vehicle may control the play intensities of the four speakers based on the distances between the point Q and the four speakers.


For example, for the speaker 1 (located at the point A), the vehicle may control a play intensity of the speaker 1 to be (AQ/AO)·p.





For another example, for the speaker 2 (located at the point B), the vehicle may control a play intensity of the speaker 2 to be (BQ/AO)·p.





For another example, for the speaker 3 (located at the point C), the vehicle may control a play intensity of the speaker 3 to be (CQ/AO)·p.





For another example, for the speaker 4 (located at the point D), the vehicle may control a play intensity of the speaker 4 to be (DQ/AO)·p.
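
The four expressions above share one rule: each speaker plays at the baseline intensity p scaled by the ratio of its distance to the optimization center Q over the reference distance AO. A minimal sketch of that rule follows, with illustrative coordinates and names.

```python
# Intensity-scaling sketch: play intensity of each speaker is
# (distance from the speaker to Q) / AO * p, where AO is the distance
# from speaker 1 at point A to the center point O of rectangle ABCD.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def play_intensities(speakers, q, o, p):
    """speakers: dict of name -> (x, y); q: optimization center point."""
    ao = distance(speakers["speaker_1"], o)  # reference distance AO
    return {name: distance(pos, q) / ao * p
            for name, pos in speakers.items()}

# Rectangle ABCD with illustrative coordinates; when Q coincides with O,
# every ratio is 1 and all four speakers play at intensity p.
speakers = {"speaker_1": (0.0, 0.0), "speaker_2": (4.0, 0.0),
            "speaker_3": (4.0, 2.0), "speaker_4": (0.0, 2.0)}
o = (2.0, 1.0)
print(play_intensities(speakers, q=o, o=o, p=1.0))  # all 1.0
```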






FIG. 8 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat, a front passenger seat, a second-row left area, and a second-row right area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat, the front passenger seat, the second-row left area, and the second-row right area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located, a center point of an area in which the front passenger seat is located, a center point of the second-row left area, and a center point of the second-row right area.



FIG. 9 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat, a front passenger seat, and a second-row left area according to an embodiment of this application. Center points of the driver seat, the front passenger seat, and the second-row left area may form a triangle EFG, where a circumcenter of the triangle EFG may be a point Q. The point Q may be a current sound field optimization center point in the vehicle.
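
The circumcenter is the one point equidistant from the three area center points. A minimal sketch of computing it from those coordinates follows; the coordinates and names are illustrative.

```python
# Circumcenter sketch: the point at equal distances from the three area
# center points E, F, and G, via the standard closed-form expression.
def circumcenter(e, f, g):
    (ax, ay), (bx, by), (cx, cy) = e, f, g
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("area center points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Three corners of a rectangle: the circumcenter is the rectangle center.
print(circumcenter((0, 0), (4, 0), (4, 2)))  # (2.0, 1.0)
```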


For example, the point Q may coincide with the point O. Since distances between the center point Q and the four speakers are equal, the vehicle can control play intensities of the four speakers to be the same (for example, the play intensities of the four speakers are all p).


For example, the point Q and the point O may not overlap. In this case, for a manner in which the vehicle controls the play intensities of the four speakers, refer to the description in the foregoing embodiment. Details are not described herein again.



FIG. 10 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat, a front passenger seat, and a second-row left area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat, the front passenger seat, and the second-row left area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located, a center point of an area in which the front passenger seat is located, and a center point of the second-row left area.



FIG. 11 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat, a front passenger seat, and a second-row right area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat, the front passenger seat, and the second-row right area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located, a center point of an area in which the front passenger seat is located, and a center point of the second-row right area.



FIG. 12 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat, a second-row left area, and a second-row right area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat, the second-row left area, and the second-row right area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located, a center point of the second-row left area, and a center point of the second-row right area.


It should be understood that when there are users in the driver seat, the front passenger seat, and the second-row right area, or when there are users in the driver seat, the second-row left area, and the second-row right area, for a manner in which the vehicle controls play intensities of four speakers, reference may be made to the foregoing manner in which the vehicle controls the play intensities of the four speakers when there are users in the driver seat, the front passenger seat, and the second-row left area. Details are not described herein again.



FIG. 13 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat and a second-row right area according to an embodiment of this application. A connection line between a center point of the driver seat and a center point of the second-row right area is EG, where a midpoint of EG may be a point Q. The point Q may be a current sound field optimization center point in the vehicle.


For example, the point Q may coincide with the point O. Since distances between the midpoint of the line segment EG and the four speakers are equal, the vehicle can control play intensities of the four speakers to be the same (for example, the play intensities of the four speakers are all p).


For example, if the point Q and the point O do not overlap, the vehicle may control the play intensities of the four speakers based on the distances between the point Q and the four speakers. For a specific control process, refer to the description of the foregoing embodiment. Details are not described herein again.



FIG. 14 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat and a second-row right area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat and the second-row right area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located and a center point of the second-row right area.



FIG. 15 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a front passenger seat and a second-row left area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the front passenger seat and the second-row left area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the front passenger seat is located and a center point of the second-row left area.



FIG. 16 is a schematic diagram of a sound field optimization center in a vehicle when there are users in a driver seat and a front passenger seat according to an embodiment of this application. A connection line between center points of areas in which the driver seat and the front passenger seat are located is EF, where a midpoint of EF may be a point P. The point P may be a current sound field optimization center in the vehicle.



FIG. 17 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat and a front passenger seat in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat and the front passenger seat”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located and a center point of an area in which the front passenger seat is located.


For example, the vehicle may control play intensities of four speakers based on distances between the point P and the four speakers.


For example, for the speaker 1 (located at the point A), the vehicle may control a play intensity of the speaker 1 to be (AP/AO)·p.





For another example, for the speaker 2 (located at the point B), the vehicle may control a play intensity of the speaker 2 to be (BP/AO)·p.





For another example, for the speaker 3 (located at the point C), the vehicle may control a play intensity of the speaker 3 to be (CP/AO)·p.





For another example, for the speaker 4 (located at the point D), the vehicle may control a play intensity of the speaker 4 to be (DP/AO)·p.






FIG. 18 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a second-row left area and a second-row right area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the second-row left area and the second-row right area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of the second-row left area and a center point of the second-row right area.



FIG. 19 is a schematic diagram of a sound field optimization center in a vehicle when there are users in a driver seat and a second-row left area according to an embodiment of this application. A connection line between a center point of the driver seat and a center point of the second-row left area is EH, where a midpoint of EH may be a point R. The point R may be a current sound field optimization center in the vehicle.



FIG. 20 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a driver seat and a second-row left area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the driver seat and the second-row left area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the driver seat is located and a center point of the second-row left area.


For example, the vehicle may control play intensities of four speakers based on distances between the point R and the four speakers.


For example, for the speaker 1 (located at the point A), the vehicle may control a play intensity of the speaker 1 to be (AR/AO)·p.





For another example, for the speaker 2 (located at the point B), the vehicle may control a play intensity of the speaker 2 to be (BR/AO)·p.





For another example, for the speaker 3 (located at the point C), the vehicle may control a play intensity of the speaker 3 to be (CR/AO)·p.





For another example, for the speaker 4 (located at the point D), the vehicle may control a play intensity of the speaker 4 to be (DR/AO)·p.






FIG. 21 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there are users in a front passenger seat and a second-row right area in the vehicle, the large central control screen may notify a user that “Detect that there are persons in the front passenger seat and the second-row right area”, and notify the user that a current sound field optimization center point may be a point at equal distances from a center point of an area in which the front passenger seat is located and a center point of the second-row right area.



FIG. 22 is a schematic diagram of a sound field optimization center in a vehicle when there is a user in a driver seat according to an embodiment of this application. A center point of an area in which the driver seat is located is a point E, where the point E may be a current sound field optimization center point in the vehicle.



FIG. 23 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there is a user in a driver seat in the vehicle, the large central control screen may notify a user that “Detect that there is a person in the driver seat”, and notify the user that a current sound field optimization center point may be a center point of an area in which the driver seat is located.


For example, the vehicle may control play intensities of four speakers based on distances between the point E and the four speakers.


For example, for the speaker 1 (located at the point A), the vehicle may control a play intensity of the speaker 1 to be (AE/AO)·p.





For another example, for the speaker 2 (located at the point B), the vehicle may control a play intensity of the speaker 2 to be (BE/AO)·p.





For another example, for the speaker 3 (located at the point C), the vehicle may control a play intensity of the speaker 3 to be (CE/AO)·p.





For another example, for the speaker 4 (located at the point D), the vehicle may control a play intensity of the speaker 4 to be (DE/AO)·p.






FIG. 24 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there is a user in a front passenger seat in the vehicle, the large central control screen may notify a user that “Detect that there is a person in the front passenger seat”, and notify the user that a current sound field optimization center point may be a center point of an area in which the front passenger seat is located.



FIG. 25 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there is a user in a second-row left area in the vehicle, the large central control screen may notify the user that “Detect that there is a person in the second-row left area”, and notify the user that a current sound field optimization center point may be a center point of the second-row left area.



FIG. 26 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application. When the vehicle detects that there is a user in a second-row right area in the vehicle, the large central control screen may notify the user that “Detect that there is a person in the second-row right area” and notify the user that a current sound field optimization center point may be a center point of the second-row right area.


It should be understood that the foregoing description uses an example in which the position information of the user in S501 is the center point of the area in which the user is located. This embodiment of this application is not limited thereto. For example, the position information of the user may alternatively be another preset point of the area in which the user is located, or a point of the area that is obtained through calculation according to a preset rule (for example, a preset algorithm).


In an embodiment, the position information of the user may alternatively be determined based on position information of a human ear of the user. The position information of the human ear of the user may be determined based on image information collected by a camera apparatus. For example, the position information of the human ear of the user is a midpoint of a connection line between a first point and a second point, where the first point is a point on a left ear of the user, and the second point is a point on a right ear of the user. For another example, position information of a pinna of the human ear of the user may be determined based on image information collected by the camera apparatus. Position information of an area may be determined based on the position information of the human ear of the user or the position information of the pinna.

With reference to FIG. 27 and FIG. 28, the following describes a process in which a user manually adjusts a play intensity of a specific speaker after the vehicle adjusts play intensities of a plurality of sound-making apparatuses by using the position information of the user.



FIGS. 27A and 27B show a group of GUIs according to an embodiment of this application.


As shown in FIG. 27A, when there are users in a driver seat, a front passenger seat, a second-row left area, and a second-row right area, a vehicle may notify a user that “Detect that there are persons in the driver seat, the front passenger seat, the second-row left area and the second-row right area” on an HMI, and notify the user of a current sound field optimization center. As shown in FIG. 27A, smiley faces in the driver seat, the front passenger seat, the second-row left area, and the second-row right area indicate that there are users in the areas. When the vehicle detects an operation of touching and holding the smiley face in the second-row left area by the user, the vehicle may display an icon 2701 (for example, a garbage bin icon) on the HMI. When the vehicle detects that the user drags the smiley face in the second-row left area to the icon 2701, the vehicle may display, on the HMI, a GUI shown in FIG. 27B.


As shown in FIG. 27B, in response to the detected operation of the user dragging the smiley face in the second-row left area to the icon 2701, the vehicle may notify the user that “The volume of the speaker in the second-row left area has been reduced to 0” on the HMI.


In an embodiment, if there are users in the driver seat, the front passenger seat, the second-row left area, and the second-row right area, the play intensities of the current four speakers may each be p. When the vehicle detects an operation of dragging the smiley face in the second-row left area to the icon 2701 by the user, the vehicle may reduce a play intensity of a speaker in the second-row left area to 0, or decrease the play intensity of the speaker in the second-row left area from p to 0.1 p. This is not limited in this embodiment of this application.



FIGS. 28A and 28B show a group of GUIs according to an embodiment of this application.


As shown in FIG. 28A, when there are users in a driver seat, a front passenger seat, a second-row left area, and a second-row right area, a vehicle may notify a user that “Detect that there are persons in the driver seat, the front passenger seat, the second-row left area, and the second-row right area” on an HMI, and notify the user of a current sound field optimization center. When the vehicle detects, on the HMI, an operation of sliding a finger of the user upward in the second-row left area, a scroll bar 2801 of a play intensity may be displayed. The scroll bar 2801 of the play intensity may include a scroll block 2802.


As shown in FIG. 28B, in response to the detected operation of sliding the finger of the user upward in the second-row left area, the vehicle may increase a play intensity of a speaker near the second-row left area and display, on the HMI, that the scroll block 2802 moves upward. For example, the play intensity of the speaker near the second-row left area may be increased from p to 1.5 p. At the same time, the vehicle may notify the user that “The volume of the speaker in the second-row left area has been increased” on the HMI.


In this embodiment of this application, after adjusting a play intensity of a first sound-making apparatus to a first play intensity, if the vehicle detects an operation of the user adjusting the play intensity of the speaker from the first play intensity to a second play intensity, the vehicle may adjust the play intensity of the speaker to the second play intensity. In this way, the user can quickly adjust the play intensity of the speaker in the area, so that the speaker in the area better matches the listening preference of the user.


In an embodiment, the vehicle may further determine a status of a user in an area based on image information collected by a camera, so as to adjust a play intensity of a speaker near the area with reference to position information of the area and the status of the user. For example, when the vehicle detects that there is a user in the second-row left area and the user is resting, the vehicle may control the play intensity of the speaker near the second-row left area to be 0 or another value.
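
A minimal sketch of this status-based adjustment follows, assuming the camera pipeline already yields a per-area status label; the names and the muting rule are illustrative.

```python
# Status-based adjustment sketch: mute (or lower) the speaker near any
# area whose occupant is detected to be resting.
def adjust_for_status(intensities, statuses, resting_intensity=0.0):
    """intensities: area -> play intensity; statuses: area -> status label."""
    return {area: resting_intensity if statuses.get(area) == "resting" else v
            for area, v in intensities.items()}

print(adjust_for_status({"second_row_left": 1.0, "driver": 1.0},
                        {"second_row_left": "resting"}))
# {'second_row_left': 0.0, 'driver': 1.0}
```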


In an embodiment, the second play intensity may alternatively be a default play intensity (for example, the second play intensity is 0). When the vehicle detects a preset operation of the user in an area (for example, the second-row left area) on the large central control screen, the vehicle may adjust a play intensity of a speaker in the area from the first play intensity to the default play intensity.


In an embodiment, the preset operation includes but is not limited to a touch and hold operation of the user detected in the area (for example, touching and holding a seat in the second-row left area), or a sliding or tapping operation in the area.


It should be understood that the foregoing describes, with reference to FIG. 6 to FIG. 28, a case in which the sound-making apparatus control method provided in embodiments of this application is applied to an in-vehicle scenario. The control method may be further applied to another scenario, for example, a home theater scenario or a KTV scenario. FIG. 29 is a schematic diagram of a sound-making apparatus control method applied to a home theater according to an embodiment of this application. As shown in FIG. 29, the home theater may include a sound box 1, a sound box 2, and a sound box 3. A sound-making system in the home theater can adjust the three sound boxes by detecting a position relationship between a user and the three sound boxes.



FIG. 30 is a schematic diagram of a sound field optimization center in a home theater according to an embodiment of this application. For example, a graph including connection lines of points at which three sound boxes are located is a triangle ABC, where a sound box 1 is disposed at a point A on the triangle ABC, a sound box 2 is disposed at a point B, and a sound box 3 is disposed at a point C. A point O is a circumcenter of the triangle ABC.


When a center point of an area in which a user is located coincides with the point O, or when the point O is located in the area in which the user is located, a sound-making system may control the sound box 1, the sound box 2, and the sound box 3 to have a same play intensity (for example, the play intensities of the three sound boxes are all p).


When the center point of the area in which the user is located does not coincide with the point O, or when the point O is not located in the area in which the user is located, the sound-making system may adjust the play intensities of the three sound boxes based on a position relationship between the area in which the user is located and the three sound boxes.


For example, the center point of the area in which the user is located is a point Q. For the sound box 1 (located at the point A), the sound-making system may control a play intensity of the sound box 1 to be (AQ/AO)·p.





For another example, for the sound box 2 (located at the point B), the sound-making system may control a play intensity of the sound box 2 to be (BQ/AO)·p.





For another example, for the sound box 3 (located at the point C), the sound-making system may control a play intensity of the sound box 3 to be (CQ/AO)·p.






FIG. 31 is a schematic flowchart of a sound-making apparatus control method 3100 according to an embodiment of this application. The method 3100 may be applied to a first device. As shown in FIG. 31, the method 3100 includes the following steps.


S3101: A first device obtains position information of a plurality of areas in which a plurality of users is located.


Optionally, that a first device obtains position information of a plurality of areas in which a plurality of users is located includes obtaining sensing information; and determining the position information of the plurality of areas based on the sensing information, where the sensing information includes one or more of image information, pressure information, and sound information.


For example, the sensing information may include image information. The first device may obtain the image information by using an image sensor.


For example, the first device is a vehicle. The vehicle may determine, based on image information collected by an image shooting apparatus, whether the image information includes face contour information, human ear information, iris information, or the like. When the vehicle needs to determine whether there is a user in a driver area, the vehicle may obtain image information of the driver area that is collected by a driver camera. If the vehicle determines that the image information includes one or more of face contour information, human ear information, or iris information, the vehicle may determine that there is a user in the driver area.


It should be understood that an implementation process of determining that the image information includes one or more of the face contour information, the human ear information, or the iris information is not limited in this embodiment of this application. For example, the vehicle may input the image information into a neural network, to obtain a classification result indicating that the area includes a face of the user.


For another example, the vehicle may further establish a coordinate system for the driver area. When the vehicle needs to determine whether there is a person in the driver area, the vehicle may collect image information of a plurality of coordinate points in the coordinate system by using the driver camera, and further analyze whether there is feature information of a person at the plurality of coordinate points. If there is the feature information of the person, the vehicle may determine that there is the user in the driver area.


Optionally, the first device is a vehicle, and the sensing information may be pressure information. For example, a pressure sensor is included under each seat in the vehicle, and that a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The first device obtains, based on the pressure information (for example, a pressure value) collected by the pressure sensor, the position information of the plurality of areas in which the plurality of users are located.


Optionally, when the pressure value collected by the pressure sensor is greater than or equal to a first threshold, the vehicle determines that there is a user in the area corresponding to the pressure sensor. For example, when a pressure value detected by a pressure sensor under a seat at the driver area is greater than or equal to a preset pressure value, the vehicle may determine that there is a user in the driver area.


Optionally, the sensing information may be sound information. That a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The first device obtains, by using sound signals collected by a microphone array, the position information of the plurality of areas in which the plurality of users are located. For example, the first device may locate a user based on a sound signal collected by the microphone array. If the first device locates, based on the sound signal, that a user is located in an area, the first device may determine that there is the user in the area.


Optionally, the first device may further determine, with reference to at least two of the image information, the pressure information, and the sound information, whether there is a user in the area.


For example, the first device is a vehicle. When the vehicle needs to determine whether there is a user in the driver area, the vehicle may obtain image information collected by the driver camera and pressure information collected by the pressure sensor in the driver seat. If determining, based on the image information collected by the driver camera, that the image information includes face information and the pressure value collected by the pressure sensor in the driver seat is greater than or equal to the first threshold, the vehicle may determine that there is the user in the driver area.


For another example, when the first device needs to determine whether there is a user in an area, the first device may obtain image information in the area that is collected by the camera, and pick up sound information in an environment by using the microphone array. If determining, based on the image information of the area that is collected by the camera, that the image information includes face information, and determining, based on the sound information collected by the microphone array, that a sound comes from the area, the vehicle may determine that there is the user in the area.
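
A minimal sketch of these two fusion rules (camera with pressure, and camera with microphone array) follows, with an assumed pressure threshold; all names and values are illustrative.

```python
# Sensor-fusion sketch for per-area presence decisions.
PRESSURE_THRESHOLD = 50.0  # assumed first threshold for the seat sensor

def user_in_area_camera_pressure(face_detected, pressure_value):
    """Camera + pressure rule: both cues must agree."""
    return face_detected and pressure_value >= PRESSURE_THRESHOLD

def user_in_area_camera_sound(face_detected, sound_from_area):
    """Camera + microphone-array rule: face seen and sound localized there."""
    return face_detected and sound_from_area

print(user_in_area_camera_pressure(True, 63.0))  # True
print(user_in_area_camera_sound(True, False))    # False
```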


S3102: The first device controls, based on the position information of the plurality of areas in which the plurality of users are located and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.


Optionally, the position information of the plurality of areas in which the plurality of users are located may include a center point of each of the plurality of areas, or a preset point of each of the plurality of areas, or a point of each area that is obtained according to a preset rule (for example, a preset algorithm).


Optionally, that the first device controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes controlling, based on the position information of the plurality of areas and a mapping relationship, the plurality of sound-making apparatuses to work, where the mapping relationship is a mapping relationship between positions of the plurality of areas and play intensities of the plurality of sound-making apparatuses.


For example, the vehicle shown in FIG. 4 is used for description. Table 1 shows a mapping relationship between positions of a plurality of areas and play intensities of a plurality of sound-making apparatuses.










TABLE 1

Positions of a plurality of areas                                                 Play intensities of a plurality of sound-making apparatuses

Driver seat   Front passenger seat   Second-row left area   Second-row right area   Speaker 1   Speaker 2   Speaker 3   Speaker 4
Person        Person                 Person                 Person                  p           p           p           p
Person        Person                 Person                 No person               p           p           p           p
Person        Person                 No person              Person                  p           p           p           p
Person        No person              Person                 Person                  p           p           p           p
No person     Person                 Person                 Person                  p           p           p           p
Person        No person              No person              Person                  p           p           p           p
No person     Person                 Person                 No person               p           p           p           p
Person        Person                 No person              No person               0.6 p       0.6 p       1.8 p       1.8 p
No person     No person              Person                 Person                  1.8 p       1.8 p       0.6 p       0.6 p
Person        No person              Person                 No person               0.8 p       1.4 p       1.4 p       0.8 p
No person     Person                 No person              Person                  1.4 p       0.8 p       0.8 p       1.4 p
Person        No person              No person              No person               0.5 p       0.8 p       1.5 p       1.1 p
No person     Person                 No person              No person               0.8 p       0.5 p       1.1 p       1.5 p
No person     No person              Person                 No person               1.5 p       1.1 p       0.5 p       0.8 p
No person     No person              No person              Person                  1.1 p       1.5 p       0.8 p       0.5 p









It should be understood that the mapping relationship between the positions and the play intensities of the plurality of sound-making apparatuses shown in Table 1 is merely an example. An area division manner and a play intensity of a speaker are not limited in this embodiment of this application.
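
A minimal sketch of such a lookup follows, keyed by the occupancy pattern (driver seat, front passenger seat, second-row left area, second-row right area); only a few rows of Table 1 are reproduced, and the names are illustrative.

```python
# Mapping-relationship sketch: occupancy pattern -> per-speaker multiples
# of the baseline intensity p (a subset of Table 1).
INTENSITY_MAP = {
    # (driver, front passenger, rear left, rear right): (spk1..spk4)
    (1, 1, 1, 1): (1.0, 1.0, 1.0, 1.0),
    (1, 1, 0, 0): (0.6, 0.6, 1.8, 1.8),
    (0, 0, 1, 1): (1.8, 1.8, 0.6, 0.6),
    (1, 0, 1, 0): (0.8, 1.4, 1.4, 0.8),
    (1, 0, 0, 0): (0.5, 0.8, 1.5, 1.1),
}

def intensities_for(occupancy, p):
    """Return the play intensities of speakers 1-4 for an occupancy tuple."""
    return tuple(multiple * p for multiple in INTENSITY_MAP[occupancy])

print(intensities_for((1, 1, 0, 0), p=1.0))  # (0.6, 0.6, 1.8, 1.8)
```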


Optionally, that the first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: The first device determines a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and the first device controls, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.


Optionally, the method further includes that the first device notifies position information of the sound field optimization center point.


Optionally, the first device notifies the position information of the current sound field optimization center by using an HMI, a sound, or an ambient light.


Optionally, the plurality of areas are areas in a vehicle cockpit.


Optionally, the plurality of areas includes a front-row area and a rear-row area. Optionally, the plurality of areas may include a driver area and a front passenger area.


Optionally, the method further includes that the first device notifies the position information of the plurality of areas in which the plurality of users is located.


Optionally, that the first device controls the plurality of sound-making apparatuses to work includes:


The first device adjusts a play intensity of each of the plurality of sound-making apparatuses.


Optionally, the plurality of sound-making apparatuses includes a first sound-making apparatus, and that the first device adjusts a play intensity of each of the sound-making apparatuses includes that the first device controls a play intensity of the first sound-making apparatus to be a first play intensity. The method further includes that the first device obtains an instruction of a user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity; and the first device adjusts the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the instruction.


For example, as shown in FIG. 27A, there are users in the driver seat, the front passenger seat, the second-row left area, and the second-row right area in the vehicle. In this case, the vehicle may control play intensities of four speakers to be p. When the vehicle detects that the user drags a smiley face in the second-row left area on an HMI to the icon 2701, the vehicle may adjust a play intensity of a speaker near the second-row left area from p to 0.


For example, as shown in FIG. 28B, there are users in the driver seat, the front passenger seat, the second-row left area, and the second-row right area in the vehicle. In this case, the vehicle may control the play intensities of the four speakers to be p. When the vehicle detects an operation of sliding upward by the user in the second-row left area on the HMI, the vehicle may adjust the play intensity of the speaker near the second-row left area from p to 1.5 p.
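
A minimal sketch of this override flow follows, in which the automatically computed first play intensity is later replaced by the user's requested second play intensity; the controller class is an illustrative assumption, not the embodiment itself.

```python
# Override-flow sketch: automatic intensities first, user instruction wins.
class SpeakerController:
    def __init__(self):
        self.intensity = {}

    def apply_auto(self, computed):
        """Store the automatically derived (first) play intensities."""
        self.intensity.update(computed)

    def apply_user_instruction(self, speaker, second_intensity):
        """Replace one speaker's intensity with the user's requested value."""
        self.intensity[speaker] = second_intensity

ctrl = SpeakerController()
ctrl.apply_auto({"second_row_left": 1.0, "driver": 1.0})
ctrl.apply_user_instruction("second_row_left", 0.0)  # drag-to-icon gesture
print(ctrl.intensity)  # {'second_row_left': 0.0, 'driver': 1.0}
```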


In this embodiment of this application, the first device obtains the position information of the plurality of areas in which the plurality of users are located, and controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work, without a need for a user to manually adjust the sound-making apparatuses. This helps to reduce learning costs of the user and reduce complicated operations of the user. In addition, this also helps the plurality of users enjoy good listening effect, and helps to improve user experience.



FIG. 32 is a schematic diagram of a structure of a sound-making system 3200 according to an embodiment of this application. The sound-making system 3200 may include a sensor 3201, a controller 3202, and a plurality of sound-making apparatuses 3203.


The sensor 3201 is configured to collect data and send the data to the controller.


The controller 3202 is configured to obtain, based on the data, position information of a plurality of areas in which a plurality of users are located; and control, based on the position information of the plurality of areas and position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses 3203 to work. Optionally, the data includes at least one of image information, pressure information, and sound information.


Optionally, the controller 3202 is specifically configured to: determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.


Optionally, the controller 3202 is further configured to send a first instruction to a first prompt apparatus, where the first instruction instructs the first prompt apparatus to notify the position information of the sound field optimization center point.


Optionally, the plurality of areas are areas in a vehicle cockpit.


Optionally, the plurality of areas includes a front-row area and a rear-row area. Optionally, the plurality of areas may include a driver area and a front passenger area.


Optionally, the controller 3202 is further configured to send a second instruction to a second prompt apparatus, where the second instruction instructs the second prompt apparatus to notify the position information of the plurality of areas in which the plurality of users are located.


Optionally, the controller 3202 is further configured to adjust a play intensity of each of the plurality of sound-making apparatuses 3203.


Optionally, the plurality of sound-making apparatuses 3203 include a first sound-making apparatus. The controller 3202 is further configured to control a play intensity of the first sound-making apparatus to be a first play intensity. The controller 3202 is further configured to: obtain a third instruction of the user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity, and adjust the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the third instruction.



FIG. 33 is a schematic block diagram of an apparatus 3300 according to an embodiment of this application. The apparatus 3300 includes a transceiver unit 3301 and a processing unit 3302. The transceiver unit 3301 is configured to receive sensing information. The processing unit 3302 is configured to obtain, based on the sensing information, position information of a plurality of areas in which a plurality of users is located. The processing unit 3302 is further configured to control, based on the position information of the plurality of areas in which the plurality of users are located and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.


Optionally, that the processing unit 3302 is further configured to control, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: The processing unit 3302 is configured to: determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.


Optionally, the transceiver unit 3301 is further configured to send a first instruction to a first prompt unit, where the first instruction instructs the first prompt unit to notify the position information of the sound field optimization center point.


Optionally, the plurality of areas are areas in a vehicle cockpit.


Optionally, the plurality of areas includes a front-row area and a rear-row area.


Optionally, the plurality of areas includes a driver area and a front passenger area.


Optionally, the transceiver unit 3301 is further configured to send a second instruction to a second prompt unit, where the second instruction instructs the second prompt unit to notify the position information of the plurality of areas in which the plurality of users is located.


Optionally, the processing unit 3302 is specifically configured to adjust a play intensity of each of the plurality of sound-making apparatuses.


Optionally, the plurality of sound-making apparatuses includes a first sound-making apparatus. The processing unit 3302 is further configured to control a play intensity of the first sound-making apparatus to be a first play intensity. The transceiver unit 3301 is further configured to receive a third instruction, where the third instruction is an instruction instructing to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity. The processing unit 3302 is further configured to adjust the play intensity of the first sound-making apparatus to the second play intensity.


Optionally, the sensing information includes one or more of image information, pressure information, and sound information.


An embodiment of this application further provides an apparatus. The apparatus includes a processing unit and a storage unit. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the sound-making apparatus control method.


Optionally, the processing unit may be the processor 151 shown in FIG. 1, and the storage unit may be the memory 152 shown in FIG. 1. The memory 152 may be a storage unit (for example, a register or a cache) in a chip, or may be a storage unit located outside the chip in the vehicle (for example, a ROM or a RAM).


An embodiment of this application further provides a vehicle including the sound-making system 3200 or the apparatus 3300.


An embodiment of this application further provides a computer program product. The computer program product includes computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the method.


An embodiment of this application further provides a computer-readable medium. The computer-readable medium stores program code, and when the program code is run on a computer, the computer is enabled to perform the foregoing method.


In an implementation process, the steps in the foregoing method can be implemented by using an integrated logic circuit of hardware in the processor 151, or by using instructions in a form of software. The method disclosed with reference to embodiments of this application may be directly performed by a hardware processor, or may be performed by using a combination of hardware in the processor 151 and a software module. A software module may be located in a mature storage medium in the art, such as a RAM, a flash memory, a ROM, a programmable ROM, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 152, and the processor 151 reads information from the memory 152 and completes the steps in the foregoing method in combination with hardware of the processor 151. To avoid repetition, details are not described herein again.


It should be understood that, the processor 151 in embodiments of this application may be a CPU, or may be another general-purpose processor, a DSP, an ASIC, an FPGA, or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.


It should also be understood that in embodiments of this application, the memory 152 may include a read-only memory and a random-access memory, and provide instructions and data to the processor.


In embodiments of this application, “first”, “second”, and various numerals are merely used for distinguishing for ease of description and are not intended to limit the scope of embodiments of this application. For example, “first”, “second”, and various numerals are used for distinguishing between different pipes, through holes, and the like.


It should be understood that the term “and/or” in this specification describes only an association relationship between associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. In addition, the character “/” in this specification generally indicates an “or” relationship between the associated objects.


It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of this application. The execution sequences of the processes should be determined according to functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of embodiments of this application.


A person of ordinary skill in the art may be aware that, in combination with the examples described in embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.


It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again.


In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, division into the units is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.


The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.


In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units are integrated into one unit.


When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a Universal Serial Bus (USB) flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.


The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims
  • 1. A method implemented by a processor, wherein the method comprises: obtaining first position information of a plurality of areas in which a plurality of users is located; and controlling, based on the first position information and second position information of a plurality of sound-making apparatuses, the sound-making apparatuses to work.
  • 2. The method of claim 1, wherein controlling the sound-making apparatuses comprises: determining a sound field optimization center point, wherein first distances between the sound field optimization center point and center points of all of the areas are equal; and controlling, based on a second distance between the sound field optimization center point and each of the sound-making apparatuses, each of the sound-making apparatuses to work.
  • 3. The method of claim 2, further comprising notifying third position information of the sound field optimization center point to a user.
  • 4. The method of claim 1, wherein the areas are in a vehicle cockpit.
  • 5. The method of claim 4, wherein the areas comprise a front-row area and a rear-row area.
  • 6. The method of claim 4, wherein the areas comprise a driver area and a front passenger area.
  • 7. The method of claim 1, further comprising notifying the first position information to a user.
  • 8. The method of claim 1, wherein controlling the sound-making apparatuses comprises adjusting a play intensity of each of the sound-making apparatuses.
  • 9. The method of claim 8, wherein the sound-making apparatuses comprise a first sound-making apparatus, wherein adjusting the play intensity comprises controlling a first play intensity of the first sound-making apparatus, and wherein the method further comprises: obtaining an instruction of a user to adjust the first play intensity to a second play intensity; and adjusting, in response to the instruction, the first play intensity to the second play intensity.
  • 10. The method of claim 1, wherein obtaining the first position information comprises: obtaining sensing information comprising one or more of image information, pressure information, or sound information; and determining the first position information based on the sensing information.
  • 11. An electronic apparatus, comprising: a memory configured to store instructions; and at least one processor coupled to the memory and configured to execute the instructions to cause the electronic apparatus to: obtain first position information of a plurality of areas in which a plurality of users is located; and control, based on the first position information and second position information of a plurality of sound-making apparatuses, the sound-making apparatuses to work.
  • 12. The electronic apparatus of claim 11, wherein the at least one processor is configured to execute the instructions to cause the electronic apparatus to: determine a sound field optimization center point, wherein first distances between the sound field optimization center point and center points of all of the areas are equal; and control, based on a second distance between the sound field optimization center point and each of the sound-making apparatuses, each of the sound-making apparatuses to work.
  • 13. The electronic apparatus of claim 12, wherein the at least one processor is configured to execute the instructions to cause the electronic apparatus to notify third position information of the sound field optimization center point to a user.
  • 14. The electronic apparatus of claim 11, wherein the areas are in a vehicle cockpit.
  • 15. The electronic apparatus of claim 11, wherein the at least one processor is configured to execute the instructions to cause the electronic apparatus to notify the first position information to a user.
  • 16. The electronic apparatus of claim 11, wherein the at least one processor is configured to execute the instructions to cause the electronic apparatus to adjust a play intensity of each of the sound-making apparatuses.
  • 17. The electronic apparatus of claim 16, wherein the sound-making apparatuses comprise a first sound-making apparatus, and wherein the at least one processor is configured to execute the instructions to cause the electronic apparatus to: control a first play intensity of the first sound-making apparatus; obtain an instruction of a user to adjust the first play intensity to a second play intensity; and adjust, in response to the instruction, the first play intensity to the second play intensity.
  • 18. A system comprising: a sensor; and an electronic apparatus coupled to the sensor and comprising: a memory configured to store instructions; and at least one processor coupled to the memory and configured to execute the instructions to cause the electronic apparatus to: obtain first position information of a plurality of areas in which a plurality of users is located; and control, based on the first position information and second position information of a plurality of sound-making apparatuses, the sound-making apparatuses to work.
  • 19. A computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable storage medium and that, when executed by a processor, cause an electronic apparatus to: obtain first position information of a plurality of areas in which a plurality of users is located; and control, based on the first position information of the areas and second position information of a plurality of sound-making apparatuses, the sound-making apparatuses to work.
  • 20. A vehicle comprising: a plurality of sound-making apparatuses; and an electronic apparatus comprising: a memory configured to store instructions; and at least one processor coupled to the memory and configured to execute the instructions to cause the electronic apparatus to: obtain first position information of a plurality of areas in which a plurality of users is located; and control, based on the first position information and second position information of the sound-making apparatuses, the sound-making apparatuses to work.
Priority Claims (1)
Number Date Country Kind
202110744208.4 Jun 2021 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a continuation of International Patent Application No. PCT/CN2022/102818 filed on Jun. 30, 2022, which claims priority to Chinese Patent Application No. 202110744208.4 filed on Jun. 30, 2021. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.

Continuations (1)
Number Date Country
Parent PCT/CN2022/102818 Jun 2022 US
Child 18400108 US