The present disclosure generally relates to a mechanism for adjusting sound effects, and in particular, to a method for providing an occluded sound effect and an electronic device.
When sound is transmitted through a space, it is affected by the transmission distance along the transmission path, the size of the space, the environmental materials, and the occlusion of sound blockers, such that acoustic characteristics such as the volume, the timbre, and the frequency response curve may be changed.
When scene/game designers use a development engine to design scenes/games and need to add object occlusion detection and object occlusion ratio calculations, they may use built-in functions such as “Collider”, “collision event detection”, and “Raycast” to achieve the occlusion detection and the occlusion ratio calculation.
For a to-be-calculated object, a “Collider” that matches the shape of the object would be used based on the range of collision detection. In the space for detecting sound blockers, one or more rays may be set to detect occlusions, wherein each ray may be emitted from the sound source to the sound receiver (e.g., a listener). In addition, conditions such as the ray range and the maximum distance may be determined for each ray.
Next, whether a ray collides with the collider on the object may be detected based on the “collision event detection”, such that whether a sound blocker exists in the transmission path can be determined, and the occluding factor can be calculated based on the number of rays corresponding to the detected collision events.
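For illustration purposes only, the conventional ray-based approach described above may be sketched in Python as follows, wherein the sphere-shaped colliders, the number of rays, the jittering of the ray endpoints, and all function names are merely exemplary assumptions and do not correspond to the built-in functions of any particular engine:

    import random
    from dataclasses import dataclass

    @dataclass
    class SphereCollider:            # a simplified stand-in for an engine "Collider"
        center: tuple                # (x, y, z)
        radius: float

    def segment_hits_sphere(p0, p1, sphere):
        """Return True if the segment from p0 to p1 intersects the sphere."""
        d = [q - p for p, q in zip(p0, p1)]                 # segment direction
        f = [p - c for p, c in zip(p0, sphere.center)]      # offset from the sphere center
        a = sum(di * di for di in d)
        if a == 0.0:                                        # degenerate segment: a single point
            return sum(fi * fi for fi in f) <= sphere.radius ** 2
        b = 2.0 * sum(fi * di for fi, di in zip(f, d))
        c = sum(fi * fi for fi in f) - sphere.radius ** 2
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return False
        t1 = (-b - disc ** 0.5) / (2.0 * a)
        t2 = (-b + disc ** 0.5) / (2.0 * a)
        # hit if either root lies on the segment, or the segment lies inside the sphere
        return (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0) or (t1 < 0.0 < t2)

    def occluding_factor(source, receiver, colliders, num_rays=16, spread=0.2):
        """Cast several rays from the sound source toward the sound receiver and
        return the fraction of rays blocked by at least one collider."""
        blocked = 0
        for _ in range(num_rays):
            # jitter the receiver end slightly so the rays sample a small region around it
            target = tuple(r + random.uniform(-spread, spread) for r in receiver)
            if any(segment_hits_sphere(source, target, col) for col in colliders):
                blocked += 1
        return blocked / num_rays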
Since almost all behaviors related to physics state changes involve colliders, the calculations for the colliders consume a certain amount of processing resources. Moreover, due to the advancement of hardware specifications, the requirements for the details of scenes/games are getting higher and higher, such that the importance of computing performance and resource allocation also increases accordingly. Therefore, if the computational complexity imposed on the central processing unit and the graphics card can be reduced, it will be beneficial to scene/game development.
Accordingly, the disclosure is directed to a method for providing an occluded sound effect and an electronic device, which may be used to solve the above technical problems.
The embodiments of the disclosure provide a method for providing an occluded sound effect, adapted to an electronic device. The method includes: providing a virtual environment, wherein the virtual environment comprises a first object, and the first object is approximated as a second object; defining an object detection range of a sound source based on a sound ray originated from the sound source, wherein the object detection range extends from the sound source to a sound receiver; in response to determining that the first object enters the object detection range, defining a reference plane based on a reference point on the second object and the sound ray, wherein the reference plane has an intersection area with the object detection range; projecting the second object onto the reference plane as a first projection; determining a sound occluding factor based on the intersection area and the first projection; and adjusting a sound signal based on the sound occluding factor, wherein the sound signal is provided by the sound source to the sound receiver.
The embodiments of the disclosure provide an electronic device including a storage circuit and a processor. The storage circuit stores a program code. The processor is coupled to the storage circuit and accesses the program code to perform: providing a virtual environment, wherein the virtual environment comprises a first object, and the first object is approximated as a second object; defining an object detection range of a sound source based on a sound ray originated from the sound source, wherein the object detection range extends from the sound source to a sound receiver; in response to determining that the first object enters the object detection range, defining a reference plane based on a reference point on the second object and the sound ray, wherein the reference plane has an intersection area with the object detection range; projecting the second object onto the reference plane as a first projection; determining a sound occluding factor based on the intersection area and the first projection; and adjusting a sound signal based on the sound occluding factor, wherein the sound signal is provided by the sound source to the sound receiver.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the disclosure.
Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
In the embodiments of the disclosure, the electronic device may include a storage circuit 102 and a processor 104. The storage circuit 102 may store a program code and/or a plurality of modules accessible by the processor 104.
The processor 104 may be coupled with the storage circuit 102, and the processor 104 may be, for example, a graphic processing unit (GPU), a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
In the embodiments of the disclosure, the processor 104 may access the modules and/or the program codes stored in the storage circuit 102 to implement the method for providing an occluded sound effect provided in the disclosure, which would be further discussed in the following.
In the embodiments of the disclosure, the method for providing an occluded sound effect may include the following steps.
In step S210, the processor 104 may provide a virtual environment, wherein the virtual environment may include a first object. In various embodiments, the virtual environment may be the VR environment provided by the VR system, and the first object may be one of the VR objects in the VR environment, but the disclosure is not limited thereto.
In the embodiments of the disclosure, each VR object in the virtual environment may be approximated, by the developer, as a corresponding 3D object having simple texture, such as a sphere, a polyhedron, or the like. For example, a keyboard object may be approximated/represented as a cuboid with the corresponding size but without the texture of a keyboard, and a basketball object may be approximated/represented as a sphere with the corresponding size but without the texture of a basketball, but the disclosure is not limited thereto. Accordingly, the first object may be approximated as a second object as well, wherein the second object may be a sphere or a polyhedron with a size close to the size of the first object, but the disclosure is not limited thereto.
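For illustration purposes only, one possible way to approximate a detailed first object as a simple second object (e.g., a bounding sphere or a bounding cuboid) may be sketched as follows, wherein the data structure and function names are merely exemplary assumptions, but the disclosure is not limited thereto:

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class ProxyShape:
        """A simplified second object standing in for a detailed first object."""
        kind: str                                 # "sphere" or "cuboid"
        center: Vec3
        radius: float = 0.0                       # used when kind == "sphere"
        half_extents: Vec3 = (0.0, 0.0, 0.0)      # used when kind == "cuboid"

    def bounding_sphere(vertices: List[Vec3]) -> ProxyShape:
        """Approximate a detailed mesh by a sphere centered at the vertex centroid."""
        n = len(vertices)
        center = tuple(sum(v[i] for v in vertices) / n for i in range(3))
        radius = max(sum((v[i] - center[i]) ** 2 for i in range(3)) ** 0.5 for v in vertices)
        return ProxyShape(kind="sphere", center=center, radius=radius)

    def bounding_cuboid(vertices: List[Vec3]) -> ProxyShape:
        """Approximate a detailed mesh by an axis-aligned cuboid enclosing all vertices."""
        lo = tuple(min(v[i] for v in vertices) for i in range(3))
        hi = tuple(max(v[i] for v in vertices) for i in range(3))
        center = tuple((lo[i] + hi[i]) / 2.0 for i in range(3))
        half = tuple((hi[i] - lo[i]) / 2.0 for i in range(3))
        return ProxyShape(kind="cuboid", center=center, half_extents=half)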
Roughly speaking, by approximating/characterizing the first object as the second object, the subsequent calculation of the sound occluding factor of the first object may be simplified, and the details would be discussed in the following.
In step S220, the processor 104 may define an object detection range of a sound source based on a sound ray originated from the sound source. For better understanding the concept of the disclosure, the following scenario would be used as an example, wherein a sound source T1 provides a sound signal to a sound receiver R1, the sound ray SR is originated from the sound source T1 and directed to the sound receiver R1, and the first object is approximated as a second object 310, which is assumed to be a sphere, but the disclosure is not limited thereto.
In the embodiments of the disclosure, the object detection range DR may be a cone space having an apex A1 on the sound source T1 and centered at the sound ray SR. In other embodiments, the object detection range DR may be designed as other kinds of 3D space that extend from the sound source T1 along the sound ray SR, but the disclosure is not limited thereto.
In the embodiments of the disclosure, the processor 104 may determine whether an object enters the object detection range DR. If so, it indicates that this object may occlude the sound transmission between the sound source T1 and the sound receiver R1. For simplicity, the first object is assumed to be the object entering the object detection range DR, and the second object 310 would correspondingly enter the object detection range DR along with the first object, but the disclosure is not limited thereto.
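For illustration purposes only, a possible sketch of determining whether the second object 310 (approximated as a sphere) enters the cone-shaped object detection range DR is provided below, wherein the half-angle parameter and all helper names are merely exemplary assumptions, and the test is an approximate, conservative test rather than an exact one:

    import math
    from typing import Tuple

    Vec3 = Tuple[float, float, float]

    def _sub(a: Vec3, b: Vec3) -> Vec3:
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def _dot(a: Vec3, b: Vec3) -> float:
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def sphere_enters_cone(center: Vec3, radius: float,
                           source: Vec3, receiver: Vec3,
                           half_angle: float) -> bool:
        """Approximate test of whether a sphere proxy enters a cone-shaped
        detection range with its apex at the source, centered on the sound ray
        from the source to the receiver."""
        axis = _sub(receiver, source)
        length = math.sqrt(_dot(axis, axis))
        u = (axis[0] / length, axis[1] / length, axis[2] / length)
        v = _sub(center, source)
        h = _dot(v, u)                              # distance along the sound ray
        if h < -radius or h > length + radius:      # behind the source or past the receiver
            return False
        lateral = math.sqrt(max(_dot(v, v) - h * h, 0.0))   # distance from the axis
        cone_radius = max(h, 0.0) * math.tan(half_angle)    # cone radius at that height
        # conservative test: expand the cone boundary by the sphere radius
        return lateral <= cone_radius + radius / math.cos(half_angle)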
Accordingly, in step S230, in response to determining that the first object enters the object detection range DR, the processor 104 may define a reference plane RP based on a reference point 310a on the second object 310 and the sound ray SR. In the embodiments of the disclosure, the reference plane RP has an intersection area AR with the object detection range DR.
In step S240, the processor 104 may project the second object 310 onto the reference plane RP as a first projection P1. In the embodiment, since the second object 310 is assumed to be a sphere, the first projection P1 of the second object 310 on the reference plane RP may be a circle.
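For illustration purposes only, a possible sketch of obtaining the intersection area AR and the first projection P1 as two circles on the reference plane RP is provided below, wherein it is assumed, merely as an example, that the object detection range DR is a cone, that the reference plane RP is perpendicular to the sound ray SR, and that the reference point 310a is the center of the second object 310, but the disclosure is not limited thereto:

    import math
    from typing import Tuple

    Vec3 = Tuple[float, float, float]
    Circle2D = Tuple[float, float, float]     # (x, y, radius) in plane coordinates

    def plane_circles(center: Vec3, radius: float,
                      source: Vec3, receiver: Vec3,
                      half_angle: float) -> Tuple[Circle2D, Circle2D]:
        """Return (intersection area AR, first projection P1) as 2-D circles on a
        reference plane that is assumed to be perpendicular to the sound ray and
        to pass through the sphere center (used here as the reference point)."""
        axis = (receiver[0] - source[0], receiver[1] - source[1], receiver[2] - source[2])
        length = math.sqrt(sum(a * a for a in axis))
        u = tuple(a / length for a in axis)
        v = tuple(c - s for c, s in zip(center, source))
        h = sum(vi * ui for vi, ui in zip(v, u))            # height of the plane along the ray
        lateral = math.sqrt(max(sum(vi * vi for vi in v) - h * h, 0.0))
        # 2-D frame on the plane: origin on the sound ray, x-axis toward the sphere center
        intersection = (0.0, 0.0, max(h, 0.0) * math.tan(half_angle))   # cone cross-section: a disc
        projection = (lateral, 0.0, radius)                             # sphere projects to a circle
        return intersection, projection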
In step S250, the processor 104 may determine a sound occluding factor based on the intersection area AR and the first projection P1. In detail, the first projection P1 may have an overlapped area OA with the intersection area AR, and in one embodiment, the sound occluding factor may be determined based on the overlapped area OA and the intersection area AR.
In another embodiment, in the process of determining the sound occluding factor, the processor 104 may define a reference line RL based on the intersection area AR and the first projection P1, wherein the reference line RL may pass the intersection area AR and the first projection P1.
Next, the processor 104 may project the overlapped area OA onto the reference line RL as a first line segment L1 and project the intersection area AR onto the reference line RL as a second line segment L2, but the disclosure is not limited thereto. In addition, the processor 104 may determine the sound occluding factor as a first ratio of the first line segment L1 over the second line segment L2. More specifically, assuming that the length of the first line segment L1 is m and the length of the second line segment L2 is n, the sound occluding factor may be determined to be m/n, but the disclosure is not limited thereto.
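For illustration purposes only, a possible sketch of the line-segment ratio described above is provided below, wherein it is assumed, merely as an example, that the reference line RL passes through the centers of the intersection area AR and the first projection P1; the two circles may be, for example, the output of the previous sketch:

    import math

    def occluding_factor_circles(intersection, projection):
        """Determine the sound occluding factor from two 2-D circles (x, y, radius):
        the cone cross-section (intersection area AR) and the projected second
        object (first projection P1).  The reference line is taken, as an
        assumption, through the two circle centers."""
        (x1, y1, r1) = intersection
        (x2, y2, r2) = projection
        d = math.hypot(x2 - x1, y2 - y1)     # center distance along the reference line
        # both circles projected onto the reference line become intervals
        ar_lo, ar_hi = -r1, r1               # intersection area AR -> second line segment L2
        p1_lo, p1_hi = d - r2, d + r2        # first projection P1
        # the overlapped area OA projects to the overlap of the two intervals -> first line segment L1
        m = max(0.0, min(ar_hi, p1_hi) - max(ar_lo, p1_lo))
        n = ar_hi - ar_lo
        return m / n if n > 0.0 else 0.0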
After obtaining the sound occluding factor, in step S260, the processor 104 may adjust a sound signal based on the sound occluding factor, wherein the sound signal is provided by the sound source T1 to the sound receiver R1. In the embodiments of the disclosure, for how the processor 104 adjusts the sound signal based on the sound occluding factor, reference may be made to the relevant prior art, and the details would not be further provided herein.
Accordingly, the embodiments of the disclosure may obtain the sound occluding factor in a way with lower computation complexity, such that the computation resource of the VR system may be utilized more efficiently.
In a second embodiment, the first object may be approximated as a second object 410, wherein the second object 410 is assumed to be a cuboid, but the disclosure is not limited thereto.
Since the second object 410 is assumed to be a cuboid, the first projection P1a of the second object 410 on the reference plane RP may be a polygon with 6 edges.
Next, the processor 104 may determine a sound occluding factor based on the intersection area AR and the first projection P1a. In detail, the first projection P1a may have an overlapped area OAa with the intersection area AR, and the sound occluding factor may be determined based on the overlapped area OAa and the intersection area AR.
In another embodiment, in the process of determining the sound occluding factor, the processor 104 may define a reference line RL based on the intersection area AR and the first projection P1a, wherein the reference line RL may pass the intersection area AR and the first projection P1a.
Next, the processor 104 may project the overlapped area OAa onto the reference line RL as a first line segment L1a and project the intersection area AR onto the reference line RL as a second line segment L2a, but the disclosure is not limited thereto. In addition, the processor 104 may determine the sound occluding factor as a first ratio of the first line segment L1a over the second line segment L2a. More specifically, assuming that the length of the first line segment L1a is m and the length of the second line segment L2a is n, the sound occluding factor may be determined to be m/n, but the disclosure is not limited thereto.
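For illustration purposes only, a possible sketch of the polygon case is provided below, wherein the reference line direction (from the center of the intersection area AR toward the polygon centroid) and the use of the overlap of the one-dimensional projections (as an approximation of projecting the overlapped area OAa itself) are merely exemplary assumptions:

    import math
    from typing import List, Tuple

    Point2D = Tuple[float, float]

    def occluding_factor_polygon(polygon: List[Point2D],
                                 ar_center: Point2D, ar_radius: float) -> float:
        """Project the cone cross-section AR and the projected cuboid (a convex
        polygon on the reference plane) onto a reference line and take the ratio
        of the overlap over the AR segment."""
        cx = sum(p[0] for p in polygon) / len(polygon)
        cy = sum(p[1] for p in polygon) / len(polygon)
        dx, dy = cx - ar_center[0], cy - ar_center[1]
        length = math.hypot(dx, dy)
        if length == 0.0:
            dx, dy = 1.0, 0.0                       # degenerate case: pick any direction
        else:
            dx, dy = dx / length, dy / length
        # 1-D coordinates along the reference line (AR center as the origin)
        proj = [(p[0] - ar_center[0]) * dx + (p[1] - ar_center[1]) * dy for p in polygon]
        poly_lo, poly_hi = min(proj), max(proj)     # first projection P1a -> interval
        ar_lo, ar_hi = -ar_radius, ar_radius        # intersection area AR -> second line segment L2a
        m = max(0.0, min(ar_hi, poly_hi) - max(ar_lo, poly_lo))   # approximates first line segment L1a
        n = ar_hi - ar_lo
        return m / n if n > 0.0 else 0.0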
After obtaining the sound occluding factor, in step S260, the processor 104 may adjust a sound signal based on the sound occluding factor, wherein the sound signal is provided by the sound source T1 to the sound receiver R1. In the embodiments of the disclosure, for how the processor 104 adjusts the sound signal based on the sound occluding factor, reference may be made to the relevant prior art, and the details would not be further provided herein.
Accordingly, the embodiments of the disclosure may obtain the sound occluding factor in a way with lower computation complexity, such that the computation resource of the VR system may be utilized more efficiently.
In other embodiments, since the information of the height of the first projection P1a may be lost while projecting the first projection P1a onto the reference line RL, the disclosure further provides a mechanism for solving this issue.
In a third embodiment, the first object may be approximated as a second object that is similar to the second object 410 of the second embodiment but has a greater height.
In this case, if the processor 104 estimates the sound occluding factor of the third embodiment according to the teachings of the second embodiment, the sound occluding factor of the third embodiment may be estimated to be the same as the sound occluding factor of the second embodiment, even though the second object of the third embodiment is higher than the second object 410 of the second embodiment.
Therefore, in the third embodiment, after obtaining the first ratio of the first line segment L1a over the second line segment L2a, the processor 104 may correct the first ratio as the sound occluding factor based on a correcting factor. For example, the correcting factor may be determined based on the height of the first projection relative to the intersection area AR, such that the height information lost in the projection onto the reference line RL may be compensated.
After obtaining the correcting factor, the processor 104 may, for example, multiply the first ratio by the correcting factor to correct the first ratio as the sound occluding factor, but the disclosure is not limited thereto.
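For illustration purposes only, a possible sketch of applying the correcting factor is provided below; since the exact definition of the correcting factor is not restated here, the sketch assumes, merely as an example, that the correcting factor is the ratio of the height of the first projection over the height of the intersection area AR, capped at 1:

    def corrected_occluding_factor(first_ratio: float,
                                   projection_height: float,
                                   intersection_height: float) -> float:
        """Apply a correcting factor to the line-segment ratio.  The definition of
        the correcting factor here is an assumption: the height of the projected
        second object relative to the height of the intersection area AR, capped
        at 1 so that the corrected factor never exceeds full occlusion."""
        if intersection_height <= 0.0:
            return first_ratio
        correcting_factor = min(projection_height / intersection_height, 1.0)
        return first_ratio * correcting_factor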
In summary, the embodiments of the disclosure may obtain the sound occluding factor in a way with lower computation complexity, such that the computation resource of the VR system may be utilized more efficiently. In addition, by taking the correcting factor into consideration, the accuracy of the sound occluding factor would not be overly affected by the information loss that occurs in the process of projection.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.