Systems and Methods for Acoustic Simulation

Information

  • Patent Application
  • Publication Number
    20220319483
  • Date Filed
    May 29, 2020
  • Date Published
    October 06, 2022
  • Inventors
    • James; Doug Leonard (Stanford, CA, US)
    • Wang; Jui-Hsien (Stanford, CA, US)
Abstract
Systems and methods for acoustic simulation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for simulating acoustic responses, including obtaining a digital model of an object, calculating a plurality of vibrational modes of the object, conflating the plurality of vibrational modes into a plurality of chords, where each chord includes a subset of the plurality of vibrational modes, calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes, deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and storing each modal sound field in a far-field acoustic transfer (FFAT) map.
Description
FIELD OF THE INVENTION

The present invention generally relates to simulating acoustic responses, and more specifically to time-domain precomputation of acoustic transfer models and their efficient storage for use in runtime acoustic response synthesis.


BACKGROUND

Sound is a vibration which propagates as an acoustic wave through a transmission medium. However, the human sense of sound is experiential. That is, the human nervous system performs significant signal processing on received acoustic waves that are then perceived. The field of psychoacoustics is concerned with sound perception and audiology.


A “vibrational mode” (also “vibration mode,” or simply “mode”) of an object refers to a particular oscillation pattern. Objects often have many modes which tend to increase in number as the complexity of the object increases. When one “hears” an object in the real world, it is the result of the ear experiencing pressure waves generated by the object oscillating in one or many of its vibrational modes.


SUMMARY OF THE INVENTION

Systems and methods for acoustic simulation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for simulating acoustic responses, including obtaining a digital model of an object, calculating a plurality of vibrational modes of the object, conflating the plurality of vibrational modes into a plurality of chords, where each chord includes a subset of the plurality of vibrational modes, calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes, deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and storing each modal sound field in a far-field acoustic transfer (FFAT) map.


In another embodiment, the method further includes rendering the digital model of the object in a digital environment, receiving interaction data, where the interaction data describes an interaction between the rendered digital model and a second object in the digital environment, and playing back an acoustic response based on vibrations of the digital model of the object in response to the described interaction.


In a further embodiment, playing back the acoustic response includes selecting at least one FFAT map based on the vibrations of the digital model, determining a location of a listener in the virtual environment with respect to the digital model, summing amplitudes for each frequency generated by the object at the location of the listener based on the FFAT maps.


In still another embodiment, the second object in the digital environment is an avatar.


In a still further embodiment, the second object in the digital environment is a cursor.


In yet another embodiment, the FFAT map is stored as metadata to the digital object.


In a yet further embodiment, calculating the chord sound field includes solving the Helmholtz wave equation in the time domain.


In another additional embodiment, conflating the plurality of vibrational modes comprises utilizing a greedy algorithm to identify the subset of the plurality of chords separated by a gap parameter.


In a further additional embodiment, the FFAT map approximates a squared transfer amplitude at a plurality of coordinates using a real-valued expansion.


In another embodiment again, an acoustic simulator includes a processor, a graphics processing unit (GPU), and a memory, the memory containing an acoustic modeling application, where the acoustic modeling application directs the processor to obtain a digital model of an object, calculate a plurality of vibrational modes of the object, conflate the plurality of vibrational modes into a plurality of chords, where each chord includes a subset of the plurality of vibrational modes, calculate, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes using the GPU, deconflate each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and store each modal sound field in a far-field acoustic transfer (FFAT) map.


In a further embodiment again, the acoustic modeling application further directs the processor to render the digital model of the object in a digital environment, receive interaction data, where the interaction data describes an interaction between the rendered digital model and a second object in the digital environment, and play back an acoustic response based on vibrations of the digital model of the object in response to the described interaction.


In still yet another embodiment, to play back the acoustic response, the acoustic modeling application further directs the processor to select at least one FFAT map based on the vibrations of the digital model, determine a location of a listener in the virtual environment with respect to the digital model, sum amplitudes for each frequency generated by the object at the location of the listener based on the FFAT maps.


In a still yet further embodiment, the second object in the digital environment is an avatar.


In still another additional embodiment, the second object in the digital environment is a cursor.


In a still further additional embodiment, the FFAT map is stored as metadata to the digital object.


In still another embodiment again, to calculate the chord sound field, the acoustic modeling application further directs the GPU to solve the Helmholtz wave equation in the time domain.


In a still further embodiment again, to conflate the plurality of vibrational modes, the acoustic modeling application further directs the processor to utilize a greedy algorithm to identify the subset of the plurality of chords separated by a gap parameter.


In yet another additional embodiment, the FFAT map approximates a squared transfer amplitude at a plurality of coordinates using a real-valued expansion.


In a yet further additional embodiment, a method for rendering sound for a digital environment includes obtaining a plurality of far-field acoustic transfer (FFAT) maps, where the plurality of FFAT maps is associated with an object rendered in the digital environment, receiving interaction data describing an interaction with the object, selecting a portion of FFAT maps from the plurality of FFAT maps, where the selected portion of FFAT maps are associated with vibrational modes of the object activated by the interaction, determining an acoustic response signal to the interaction based on the FFAT maps, and playing back the acoustic response signal.


In yet another embodiment again, the plurality of FFAT maps are generated by obtaining a digital model of the object, calculating a plurality of vibrational modes of the object, conflating the plurality of vibrational modes into a plurality of chords, where each chord comprises a subset of the plurality of vibrational modes, calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes, deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and storing each modal sound field in a far-field acoustic transfer (FFAT) map.


Additional embodiments and features are set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the specification or may be learned by the practice of the invention. A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

The description and claims will be more fully understood with reference to the following figures and data graphs, which are presented as exemplary embodiments of the invention and should not be construed as a complete recitation of the scope of the invention.



FIG. 1 is a system diagram for an acoustic simulation system in accordance with an embodiment of the invention.



FIG. 2 is a block diagram for an acoustic simulator in accordance with an embodiment of the invention.



FIG. 3 is a flow chart for a pipeline for time-domain precomputed acoustic transfer process in accordance with an embodiment of the invention.



FIG. 4 illustrates an example object represented by a tetrahedral mesh in accordance with an embodiment of the invention.



FIG. 5 illustrates a number of vibrational modes of an object in accordance with an embodiment of the invention.



FIG. 6 illustrates a chord sound field for an arbitrary chord in accordance with an embodiment of the invention.



FIG. 7 illustrates two modal sound fields deconflated from a chord sound field in accordance with an embodiment of the invention.



FIG. 8 illustrates two FFAT maps for the two modal sound fields in accordance with an embodiment of the invention.



FIG. 9 is an indifference graph for generating chords using a greedy coloring approach in accordance with an embodiment of the invention.



FIG. 10 is a chart illustrating formalized pseudocode for a modified SortedBalance algorithm in accordance with an embodiment of the invention.



FIGS. 11A and 11B represent a rasterized object and a compact bitset representation of the rasterized object, respectively, in accordance with an embodiment of the invention.



FIG. 12 illustrates a process for estimating ψi in accordance with an embodiment of the invention.



FIG. 13 is a flow chart for a process for playing back the acoustic response of an object in accordance with an embodiment of the invention.





DETAILED DESCRIPTION

Turning now to the drawings, systems and methods for acoustic simulation using time-domain wave solvers are described. Making a digital object “sound” correct to a human listener is a difficult problem. In many cases, when the synthetic sounds coming from a digital object do not match a subconscious understanding of what a real-world version of the digital object should sound like, the listener can perceive the difference. With the rise of highly interactive digital technology such as (but not limited to) virtual reality (VR), augmented reality (AR), mixed reality (MR), and even more conventional computer interfaces such as 2D monitors, accurate acoustic representations of digital objects are becoming more critical and desirable. Indeed, immersion in a graphically perfect visualization can easily be broken by unnatural acoustics. For example, in a VR game where the user can pick up and manipulate objects in their hands, if the user strikes a wine glass at the stem and subsequently the upper rim, they will expect significantly different resulting sounds.


Current acoustic modeling techniques can yield a relatively accurate model which can be used to recreate a “sound field,” a representation of acoustic waves present in a given environment, which in turn can be used to synthesize audio for many of the above technologies. However, these current methods are extremely computationally intensive. Traditionally, modal sound models use linear vibration modes to effectively characterize the surface vibrations of struck objects. The sound amplitude of each vibrational mode is described by its acoustic transfer function, whose spatial structure accounts for perceptually important wave effects such as (but not limited to) diffraction and radiation efficiency.


Conventional methodologies precompute the acoustic transfer functions by solving the frequency-domain wave equation (Helmholtz equation) once for each vibrational mode. Even for simple objects, there can be hundreds of audible modes, meaning the Helmholtz equation must be solved hundreds of times. Conventional Helmholtz boundary element method (BEM) solvers require on the order of hours to days of transfer precomputation on large computing clusters for objects of even moderate size. Current digital environments have hundreds if not thousands of different types of objects, a number anticipated to increase as technical limitations are overcome. Consequently, conventional methods are impractical and, in many cases, infeasible, as solving the Helmholtz equation for each vibrational mode of each object may take weeks, months, or even years.


Systems and methods described herein greatly reduce the computational power needed to accurately model the vibrational modes of an object. In many embodiments, the vibrational modes can be conflated into a set of “chords” via a process referred to as “mode conflation.” The sound field for each chord (“chord sound field”) can be calculated in the time domain (as opposed to the conventional frequency domain, for reasons described in further detail below) in such a way that the sound field can be accurately decomposed back into separate sound fields for each constituent mode (“modal sound fields”) through a process of “transfer deconflation.” In numerous embodiments, transfer deconflation is achieved using a structure-exploiting QR solver. To further increase throughput, systems and methods described herein can leverage a graphics processing unit (GPU) vector wavesolver that attempts fine-grain load balancing by interleaving the different solves. Systems and methods described herein can simulate orders of magnitude faster (on the order of seconds to minutes), and are expected to speed up further as computing technology matures. To enable fast runtime transfer representation, a Far-field Acoustic Transfer (FFAT) map can be used that approximates transfer amplitudes at an accuracy suitable for sound rendering. Architectures for acoustic simulation systems are described in further detail below.


Acoustic Simulation Systems

Acoustic simulation systems as described herein utilize time-domain wave solving techniques to rapidly solve the acoustic modes of an object. In many embodiments, the object is a real-world object that has been digitized using a scanning process. However, in many embodiments, the object can be any arbitrary digitally modeled object. Acoustic simulation systems can further pack solved modes into FFAT maps for use in real-time rendering, and/or any other acoustic model storage format as appropriate to the requirements of specific applications of embodiments of the invention. Objects can be stored along with their associated FFAT maps in a repository.


Turning now to FIG. 1, an acoustic simulation system in accordance with an embodiment of the invention is illustrated. System 100 includes an acoustic simulator 110. The acoustic simulator is a computing platform capable of carrying out the acoustic simulation processes described herein. In numerous embodiments, the acoustic simulator controls distributed processing across a number of different acoustic simulators (not pictured). The system 100 further includes multiple object repositories. In many embodiments, the object repositories store digital objects. The acoustic simulator can obtain digital objects from the object repositories and generate an acoustic model. In numerous embodiments, the acoustic models can be stored as metadata to the digital object. However, acoustic simulators do not necessarily need to obtain digital objects from a repository, and instead can obtain them via input from any of a number of other sources including (but not limited to) object modeling on the acoustic simulator, direct input via a storage media, directly from a 3D scanner, via a network and/or via any other input method as appropriate to the requirements of specific applications of embodiments of the invention. One or more object repositories can be used, but are not required for the function of acoustic simulators or acoustic simulator systems.


System 100 further includes a 3D scanner 130. While 3D scanners are not a requirement of acoustic simulation systems, they can be used to directly obtain an accurate digital object version of a real-world object. The system 100 can further include an interface device 140 which enables user interaction with the system. In many embodiments, the interface device 140 is an interactive display such as, but not limited to, a smart phone, a personal computer, a tablet computer, a smart TV, and/or any other interactive display as appropriate to the requirements of specific applications of embodiments of the invention.


Components in system 100 are connected via a network 150. Networks can be composed of wired networks, wireless networks, or a mixture of wired and wireless networks. In many embodiments, the network is the Internet. As can be readily appreciated, any number of different networking solutions can be used as appropriate to the requirements of specific applications of embodiments of the invention. While a particular architecture in accordance with an embodiment of the invention is illustrated in FIG. 1, one can appreciate that any number of different architectures, including those that have additional or fewer components can be used without departing from the scope or spirit of the invention. Acoustic simulators are discussed in further detail below.


Acoustic Simulators

Acoustic simulators are computing devices capable of calculating vibrational modes of 3D objects. In numerous embodiments, acoustic simulators are implemented using a personal computer. However, in numerous embodiments, acoustic simulators are implemented in computing clusters and/or on server systems. Turning now to FIG. 2, an example acoustic simulator in accordance with an embodiment of the invention is illustrated.


Acoustic simulator 200 includes a processor 210. The processor 210 can be any type of logic processing unit such as, but not limited to, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate-array (FPGA), and/or any other circuit as appropriate to the requirements of specific applications of embodiments of the invention. In many embodiments, acoustic simulators have one or more processors. Acoustic simulator 200 further includes a graphics processing unit (GPU) 220. GPUs are specialized logic processing circuitry built for performing graphics calculations, and have architectures which enable parallelized processing. Acoustic simulators can utilize one or more GPUs in order to accelerate processing, but they are not required. In various embodiments, all processing takes place on the processor.


The acoustic simulator further includes an input/output (I/O) interface 230. The I/O interface can enable communications between the acoustic simulator and other computing devices or input devices (e.g. keyboards, computer mice, touch screens, controllers, etc.). I/O interfaces can communicate with more than one device at a time, and can enable network communications. The acoustic simulator contains a memory 240. The memory 240 can be implemented using volatile memory, nonvolatile memory, or a combination thereof. The memory contains an acoustic simulation application 242. Acoustic simulation applications contain instructions which direct the processing circuitry (including GPUs in many embodiments when available) to perform various acoustic simulation processes. In numerous embodiments, the memory further contains object data 244 and an object repository 246. Object data is any data describing a digital object such as (but not limited to), parameters describing the physical structure of the object, composition data describing what materials the object is made out of, vibrational modes of the object, object labels, and/or any other information about the digital object as appropriate to the requirements of specific applications of embodiments of the invention. In numerous embodiments, parameters describing the physical structure of the object include 3D vertex coordinates describing the surface of the 3D object. However, as can readily be appreciated, any number of different structures can be used to describe a physical object such as (but not limited to) point clouds, vertex information that includes description of interior cavity surfaces, and/or any other representation capable of describing an object's form. Object repositories do not need to be only stored on acoustic simulators, and may be stored elsewhere. In many embodiments, object repositories contain a list of objects and any associated acoustic models (e.g., FFAT maps).


While a particular acoustic simulator in accordance with an embodiment of the invention is illustrated in FIG. 2, one can appreciate that any number of different architectures, including those that have additional or fewer components, and/or additional or fewer data structures stored in memory, can be used without departing from the scope or spirit of the invention. Acoustic modeling processes in the time domain are described in further detail below.


Acoustic Modeling Processes

At a high level, acoustic modeling processes determine the modal sound field for all relevant vibrational modes of an object and store them such that they can be easily played back either alone or in combination in real time to simulate the acoustic response of the object. In this way, the simulated digital object will have a “natural” sounding acoustic response to manipulation in a virtual environment. As noted above, a severe bottleneck in conventional acoustic modeling processes is solving for every vibrational mode. Traditionally, each vibrational mode is “solved” in the frequency domain one at a time. In this context “solving” (or “wave solving”) a mode (or chord) refers to calculating the sound field for the given mode (or chord). A “solver” (or “wavesolver”) is a mathematical process that does the work of said calculation, often involving the Helmholtz equation. As noted above, conflation of multiple modes for simultaneous processing enables significant computational efficiency gains over conventional precomputed acoustic transfer (PAT) methodologies. A challenge with conflating multiple modes is later deconflating the resulting chord sound field into individual solutions for the original component modes. Acoustic modeling processes described herein can solve multiple vibrational modes at the same time in the time-domain in such a way that the combined solution can be easily broken down into solutions for each individual constituent vibrational mode. Further, acoustic modeling processes improve on FFAT map storage to enable more efficient real-time playback.


Systems and methods described herein conflate modes in a particular manner and subsequently process them in the time domain (rather than the frequency domain) in such a manner that easily enables accurate deconflation. Turning now to FIG. 3, a high-level flowchart describing a pipeline for time-domain PAT in accordance with an embodiment of the invention is illustrated.


Process 300 includes obtaining (310) object data. In many embodiments, object data is obtained from an object repository. In various embodiments, object data is obtained at least in part via a 3D scan of an object. Vibration modes are calculated (320) for the object described by the object data. In numerous embodiments, the object data already describes the object's vibrational modes, in which case they do not need to be calculated again. In numerous embodiments, vibration modes are calculated by generating a tetrahedral mesh of the object; however, any number of different methodologies can be used to calculate the vibrational modes of an object. An example object represented by a tetrahedral mesh in accordance with an embodiment of the invention is illustrated in FIG. 4.


As an example, in many embodiments, calculation of the vibrational modes of an object using a tetrahedral mesh involves, for each tetrahedron in the mesh, calculating a mass. Each vertex in the mesh can be assigned ¼ of the mass of each of its surrounding tetrahedra, such that each vertex has a mass. Stiffness can further be calculated for each vertex using a set of virtual springs. The mass and stiffness values can be used to calculate the vibration modes of the object using conventional means (e.g. solving a standard eigenvalue problem, as sketched below). In numerous embodiments, all vibration modes are calculated. In some embodiments, vibrational modes that are redundant (e.g. due to symmetry) are either not calculated or discarded. Example vibrational modes of the object illustrated in FIG. 4 in accordance with an embodiment of the invention are illustrated in FIG. 5. As can be readily appreciated, using tetrahedral meshes is but one of many ways to calculate the vibrational modes of an object; any number of methods to generate vibrational modes can be used, including (but not limited to) analytical, numerical/computational, and/or experimental methods as appropriate to the requirements of specific applications of embodiments of the invention.
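To make the final eigenvalue step concrete, the following is a minimal sketch assuming lumped mass and stiffness matrices M and K have already been assembled from the tetrahedral mesh as described above; the function name and the use of SciPy are illustrative assumptions, not part of the disclosed system.

```python
import numpy as np
from scipy.linalg import eigh

def vibration_modes(K, M, num_modes=50):
    """Solve the generalized eigenvalue problem K u = w^2 M u for the
    lowest vibration modes of a lumped-mass tetrahedral model.

    K: (3N, 3N) stiffness matrix; M: (3N, 3N) lumped-mass matrix.
    Returns natural frequencies f_i in Hz and mode shapes U (3N, num_modes).
    """
    # eigh solves the symmetric generalized problem; subset_by_index
    # selects the smallest eigenvalues (lowest-frequency modes).
    w2, U = eigh(K, M, subset_by_index=[0, num_modes - 1])
    w2 = np.clip(w2, 0.0, None)          # guard against tiny negative values
    freqs = np.sqrt(w2) / (2.0 * np.pi)  # omega_i = 2*pi*f_i
    return freqs, U
```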


The set of vibration modes for the object are then conflated (330) by “chord.” Each chord consists of a subset of modes that are sufficiently distinct from each other based on a separability constraint. Each chord is then processed to estimate (340) the resulting chord sound field in the time domain. A chord sound field for an arbitrary chord for the object illustrated in FIG. 4 in accordance with an embodiment of the invention is illustrated in FIG. 6. The chord sound field for each chord is then deconflated (350) into a modal sound field solution for each constituent mode. Modal sound fields for individual modes deconflated from the sound field solution of FIG. 6 in accordance with an embodiment of the invention are illustrated in FIG. 7. In many embodiments, the modal sound field information is stored by generating (360) FFAT maps which can be used to easily render sound at runtime. Example FFAT maps for the sound fields in FIG. 7 in accordance with an embodiment of the invention are illustrated in FIG. 8. While FIGS. 4-8 illustrate the outputs of different steps of process 300 with respect to a specific object, it can be readily appreciated that any arbitrary object can be modeled with significantly more modes, chords, and FFAT maps. In order to more fully understand the mathematics involved, the following sections walk through the mathematics and steps used to conflate the vibrational modes, process each chord, deconflate the resulting sound fields, and how to efficiently generate FFAT maps.


A. Base Definitions and Notations

Linear modal analysis is a standard process in engineering and acoustics. Linear modal sound synthesis models describe the displacement of N surface vertices using a linear superposition of m vibrational mode shapes, $\hat{u}_i \in \mathbb{R}^{3N}$, each associated with a vibration frequency $\omega_i = 2\pi f_i$. The surface displacement is given by the matrix-vector multiplication:

$$u(t) = [\hat{u}_1\ \hat{u}_2\ \cdots\ \hat{u}_m]\,q(t) = U q(t)$$

where $U \in \mathbb{R}^{3N \times m}$ is the surface eigenmode matrix, and $q(t) \in \mathbb{R}^m$ is the vector of modal coordinates $q_i(t) \in \mathbb{R}$. The dynamics of the modal coordinates are governed by m decoupled simple harmonic oscillators:

$$\ddot{q}_i(t) + (\alpha + \beta\omega_i^2)\,\dot{q}_i(t) + \omega_i^2\, q_i(t) = \hat{u}_i \cdot f(t)$$

where $(\alpha, \beta)$ are Rayleigh damping parameters, and $f(t)$ is a vector of applied surface forces. The oscillator dynamics for $q(t)$ can be time-stepped efficiently using an infinite impulse response (IIR) digital filter in a way suitable for real-time sound synthesis. $\hat{u}_i(x)$ is used to denote the ith mode's surface displacement interpolated at $x$, and $u_{n,i}(x) = n(x) \cdot \hat{u}_i(x) \in \mathbb{R}$ denotes the normal component of displacement of mode i at $x$.
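As one illustration of the IIR time-stepping mentioned above, the sketch below advances all modal coordinates by one audio sample using a standard two-pole resonator discretization of the damped oscillator. The patent does not specify the exact filter coefficients, so this impulse-invariant form is an assumption.

```python
import numpy as np

def iir_coefficients(omega, alpha, beta, dt):
    """Two-pole resonator coefficients, per mode, for
    q'' + (alpha + beta*omega^2) q' + omega^2 q = f_i(t),
    using one common impulse-invariant discretization (an assumption)."""
    xi = 0.5 * (alpha + beta * omega ** 2)                    # damping rate
    omega_d = np.sqrt(np.maximum(omega ** 2 - xi ** 2, 0.0))  # damped freq.
    e = np.exp(-xi * dt)
    a1 = 2.0 * e * np.cos(omega_d * dt)                       # feedback terms
    a2 = -e * e
    b = e * np.sin(omega_d * dt) / np.maximum(omega_d, 1e-12) * dt
    return a1, a2, b

def step_modal_coordinates(q_prev, q_prev2, f_modal, a1, a2, b):
    """One sample update: q[n] = a1*q[n-1] + a2*q[n-2] + b*(u_hat_i . f)."""
    return a1 * q_prev + a2 * q_prev2 + b * f_modal
```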


Continuing, acoustic transfer functions are another key concept that should be readily understood by one of ordinary skill in the art. Surface vibrations of an object O cause pressure fluctuations in the surrounding air, which radiate outwards as acoustic waves. The acoustic transfer function characterizes this radiation efficiency. More specifically, let $\Omega$ be the air medium containing O, and $\Gamma$ be the boundary of O. Then the acoustic pressure p caused by the undamped, harmonically vibrating ith mode $\hat{u}_i$ is governed by the acoustic wave equation with the following Neumann boundary condition dependent on the normal surface acceleration $a_{n,i}$:

$$\frac{\partial^2 p(x,t)}{\partial t^2} - c^2 \nabla^2 p(x,t) = 0, \quad x \in \Omega$$

$$\partial_n p(x,t) = -\rho\, a_{n,i}(x,t) = \rho\, \omega_i^2\, u_{n,i}(x)\cos\omega_i t, \quad x \in \Gamma \qquad \text{Equation 4}$$


where c is the speed of sound in air. The steady-state solution has only harmonic components











$$p(x,t) = c_i(x)\cos\omega_i t + d_i(x)\sin\omega_i t = \sqrt{c_i^2(x) + d_i^2(x)}\,\cos\!\big(\omega_i t + \varphi_i(x)\big)$$

where $(c_i, d_i)$ describes the pressure wave's amplitude $\sqrt{c_i^2 + d_i^2}$ and phase $\varphi_i$ at any point $x$, and are effectively the acoustic transfer function. The following text uses the shorthand notation $p_i(x) = \sqrt{c_i^2(x) + d_i^2(x)}$ for the acoustic amplitude of mode i, when not ambiguous.


Systems and methods described herein further enable runtime sound rendering. The runtime sound at any point x (in O's frame of reference) is approximated by the linear superposition of modal contributions,







$$\mathrm{sound}(x,t) = \sum_{i=0}^{m} |p_i(x)|\, q_i(t)$$

where $|p_i(x)|$ is mode i's transfer amplitude that is used to scale the $q_i(t)$ modal coordinate. Conventional PAT methods usually precompute fast representations for evaluating $p_i(x)$, and then compute its modulus. As described further below, in many embodiments acoustic simulation processes approximate the transfer amplitude directly using nonnegative representations. Building on these basic concepts, the concepts of mode conflation and transfer deconflation are introduced and formalized below.


B. Mode Conflation

Mode conflation is the process by which a set of modes for an object are packed into a single wave solve as a “chord.” Consider simulating the first m vibration modes ωi, i=1 . . . m, at the same time, like playing a chord with m frequencies or notes. By the linearity of the wave equation and acceleration dependence in the Neumann boundary conditions, if the normal surface acceleration an is a sum of the m modes' accelerations








$$a_n(x,t) = \sum_{i=1}^{m} a_{n,i}(x,t)$$






then the pressure solution is also the sum of m frequency components,











$$p(x,t) = \sum_{i=1}^{m} \big[c_i(x)\cos\omega_i t + d_i(x)\sin\omega_i t\big]$$

$$\phantom{p(x,t)} = \begin{bmatrix} \cos\omega_1 t & \sin\omega_1 t & \cdots & \cos\omega_m t & \sin\omega_m t \end{bmatrix} \begin{pmatrix} c_1(x) \\ d_1(x) \\ \vdots \\ c_m(x) \\ d_m(x) \end{pmatrix} = \tau(t)^T s(x)$$

where $\tau(t) \in \mathbb{R}^{2m}$ is the vector of trigonometric basis functions, and $s(x) \in \mathbb{R}^{2m}$ stacks the unknown transfer coefficients, $(c_i(x), d_i(x))$.


As such, provided that a chord's modal frequencies are distinct, the temporal basis functions, $\cos\omega_i t$ and $\sin\omega_i t$, are all linearly independent functions in time. Therefore, the 2m coefficients $\{(c_i, d_i)\}$, and thus the transfer amplitudes $p_i(x) = \sqrt{c_i^2 + d_i^2}$, can be least-squares estimated from simulated temporal samples of $p(x,t) = \tau(t) \cdot s(x)$. This is referred to as “transfer deconflation” and is discussed in further detail in the following section.


However, in many embodiments, it cannot be assumed that there are distinct frequencies for a given object, as many objects have repeated frequencies or very similar frequencies. In fact, (nearly) degenerate frequencies are common, and are often associated with geometric (near) symmetries (e.g. mirror, rotational, cylindrical, etc.). As similar eigenfrequencies lead to similar basis functions that are numerically ill-conditioned, in numerous embodiments, chords used in mode conflation are selected such that they only involve frequencies that satisfy a frequency separation condition. In many embodiments, the frequency separation condition is such that each pair of frequencies fi and fj in a chord satisfies |fi−fj|>ε>0, where ε is a predetermined gap parameter. The gap parameter can be selected based on the number of modes, the number of frequencies, the number of degenerate frequencies, the minimum distance between frequencies, and/or any other metric as appropriate to the requirements of specific applications of embodiments of the invention, including an arbitrary metric selected by a user to tune computational efficiency versus separation quality.


Optimizing chord construction to facilitate later deconflation is better understood with knowledge of transfer deconflation, and is discussed following the below discussion of transfer deconflation.


C. Transfer Deconflation

Given a temporal signal sampled from a superposition of harmonics with distinct, known frequencies, it is possible to estimate the amplitude (and phase) of the harmonics using least squares. Transfer deconflation is the process by which this is leveraged to decompose a chord's sound field into the sound fields for each individual mode. Formally, the goal of transfer deconflation is to temporally sample $p(x,t) = \tau(t) \cdot s(x)$ to reconstruct the amplitudes, with two additional concerns: first, the amplitudes must be estimated at many positions x, and second, the estimates must be repeated periodically, e.g. after time T, in order to monitor amplitude convergence while time-stepping the wave equation.


A least-squares method can be used to efficiently compute amplitudes. Consider n pressure samples at x taken at uniformly spaced times, $t_i = t_0 + i\,\delta t$, $i = 1,\dots,n$, where $\delta t$ is the temporal spacing, and $t_0$ is a waiting period since the time-stepping of the wave equation began at $t = 0$. Values for $\delta t$ and $t_0$ can be selected based on the particular requirements of an embodiment of the invention. In some embodiments, $\delta t = 4\Delta t$ and $t_0 = 0$, where $\Delta t$ is the wavesolver's timestep size, set to $1/(8 f_{\max})$. n conditions can be obtained on the coefficients $s(x)$:





$$\tau(t_i)^T s(x) = p(x, t_i), \quad i = 1,\dots,n.$$


In matrix form, this becomes:








$$\begin{bmatrix} \tau(t_1)^T \\ \vdots \\ \tau(t_n)^T \end{bmatrix} s(x) = \begin{pmatrix} p(x, t_1) \\ \vdots \\ p(x, t_n) \end{pmatrix} \quad \Longleftrightarrow \quad A s = p.$$






This linear least-squares problem $As = p$ has a unique solution if there are enough temporal samples, $n \geq 2m$, and it will be well conditioned provided that the frequencies are well separated (the sampling must also have sufficient resolution to avoid aliasing, which is achieved by construction). Only p and s depend on the position x, not the basis matrix A. Therefore, A's QR factorization can be constructed once at cost $O(nm^2)$ and reused to estimate the transfer coefficients s at arbitrarily many points x. In many embodiments, this can be computationally accelerated by leveraging a GPU, which is discussed in further detail below.
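A minimal NumPy sketch of this least-squares deconflation follows. The helper names and array shapes are illustrative assumptions, but the structure (build A once, factor once, back-substitute for every listening point) mirrors the description above.

```python
import numpy as np

def build_basis(freqs, t0, dt, n):
    """Trigonometric basis matrix A (n x 2m) for a chord with modal
    frequencies freqs (Hz), sampled at t_i = t0 + i*dt, i = 1..n."""
    t = t0 + dt * np.arange(1, n + 1)
    omega = 2.0 * np.pi * np.asarray(freqs)
    phase = np.outer(t, omega)               # (n, m) matrix of w_i * t
    A = np.empty((n, 2 * len(freqs)))
    A[:, 0::2] = np.cos(phase)               # columns cos(w_i t)
    A[:, 1::2] = np.sin(phase)               # columns sin(w_i t)
    return A

def deconflate(A, P):
    """Least-squares transfer coefficients for every listening point.
    P: (n, num_points) pressure samples. Returns per-mode amplitudes
    sqrt(c_i^2 + d_i^2) with shape (m, num_points)."""
    Q, R = np.linalg.qr(A)                   # factorize once, reuse for all x
    S = np.linalg.solve(R, Q.T @ P)          # s(x) for all points at once
    c, d = S[0::2], S[1::2]
    return np.sqrt(c * c + d * d)
```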


A second aspect of transfer deconflation is efficiently and periodically estimating transfer amplitudes. While timestepping the wave equation and sampling pressures at key positions x, a QR solver can be periodically invoked to obtain transfer amplitudes. However, these periodic estimates have different basis matrices, A, and would therefore require periodic QR factorizations. This can be mitigated, however, using basis factorization. Consider the trigonometric basis matrix A and one constructed after a period of time T later, named $A_T$. Due to trigonometric properties, these matrices are related by a rotation matrix:







$$A_T = A \begin{bmatrix} B_1 & & & \\ & B_2 & & \\ & & \ddots & \\ & & & B_m \end{bmatrix} \equiv A B$$





where B is block diagonal and orthogonal, and $B_i \in \mathbb{R}^{2 \times 2}$ are small rotation matrices







$$B_i = \begin{bmatrix} \cos(\omega_i T) & \sin(\omega_i T) \\ -\sin(\omega_i T) & \cos(\omega_i T) \end{bmatrix}.$$





Therefore, given the thin QR factorization of A=QR, the solution to the new least-squares problem is






$$s = (A_T)^{\dagger} p = (AB)^{\dagger} p = B^T A^{\dagger} p$$

where $(\cdot)^{\dagger}$ denotes the pseudoinverse, which amounts to back-substitution with the original QR factorization, followed by a rotation by the block-diagonal $B^T$ matrix. Therefore, periodic estimates of transfer amplitude can be performed with negligible additional overhead, and only one QR factorization per chord is needed.


The efficient periodic transfer estimation enables stopping the simulation whenever a stopping criterion is met, and continuing to iterate if the problem is more challenging. In numerous embodiments, this adaptive strategy increases the efficiency and robustness of transfer deconflation. In many embodiments, the stopping criterion is the relative $\ell^2$-norm of the least-squares error for $As = p$ above. A solver constructed as illustrated returns the latest transfer field after the error falls below the specified tolerance. In various embodiments, a check for convergence can be made in non-overlapping sliding windows (with hop size $T = n\,\delta t$), with an arbitrary error tolerance appropriate to the scenario (e.g., 1%, although errors may often fall well below this tolerance). With this understanding of mode conflation and transfer deconflation, efficient methods of constructing chords during mode conflation that lead to efficient and accurate transfer deconflation are discussed below.
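The sketch below combines the block-rotation reuse of the QR factors with the relative $\ell^2$-norm stopping criterion just described; the helper names are hypothetical. It uses the identity $A_T s = A y$ for $y = A^{\dagger} p$, so the residual of the shifted problem can be checked without ever forming $A_T$.

```python
import numpy as np
from scipy.linalg import block_diag

def block_rotation_T(freqs, T):
    """B^T for the block-diagonal rotation relating A and A_T = A B."""
    blocks = []
    for w in 2.0 * np.pi * np.asarray(freqs):
        c, s = np.cos(w * T), np.sin(w * T)
        blocks.append(np.array([[c, -s],
                                [s,  c]]))   # transpose of rotation B_i
    return block_diag(*blocks)

def periodic_estimate(Q, R, B_T, p_window, tol=0.01):
    """Amplitudes for a later window, reusing the original QR factors:
    s = B^T A^+ p (back-substitution plus one block-diagonal rotation)."""
    y = np.linalg.solve(R, Q.T @ p_window)            # y = A^+ p
    residual = np.linalg.norm(Q @ (Q.T @ p_window) - p_window)  # ||A y - p||
    converged = residual <= tol * np.linalg.norm(p_window)
    s = B_T @ y
    c, d = s[0::2], s[1::2]
    return np.sqrt(c * c + d * d), converged
```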


D. Constructing Chords

Systems and methods described herein attempt to generate K chords that simulate all modes but avoid closely spaced frequencies, so as to ensure that the trigonometric basis matrix A used in transfer deconflation is well-conditioned. Further, acoustic simulation processes attempt to minimize the number of chords, K, to minimize the number of time-domain wavesolves that need to be calculated. Formally stated, given the frequency distribution $\mathcal{F}$ (in the form of a sorted list)






$$\mathcal{F} \equiv \{f_i \mid \text{frequency of the } i\text{th vibrational mode}\}$$


then $\mathcal{F}$ should be partitioned into a minimal number K of partitions (chords) subject to the frequency separability constraint:








$$\min_{(i,j)\ \text{in same chord}} \left| f_i - f_j \right| > \varepsilon$$




for some gap parameter ε. In numerous embodiments, this parameter ε affects the accuracy and performance of the computations. If ε is set too low, the least-squares problem will have a poorly conditioned A matrix, which can lead to inaccurate transfer amplitude estimates. On the other hand, if ε is set too high, the number of chords K needed to partition $\mathcal{F}$ will become large and result in more wavesolves that need to be processed. ε values can be selected using a “time-bandwidth product” discussed further below.


In many embodiments, it is possible to compute an optimal solution to the chord optimization problem (with minimal K chords) using a linear-time algorithm. The chord optimization problem is an instance of a graph coloring problem for a special type of graph called the indifference graph. An indifference graph is an undirected graph where all vertices are assigned a real number (in this case, frequency), and an edge exists whenever two vertices are closer than a unit (the gap parameter, ε). The coloring problem arises since any two vertices connected by an edge cannot be assigned the same color, where here colors represent chords. An instance of an indifference graph in accordance with an embodiment of the invention is illustrated in FIG. 9.


A greedy coloring approach yields an optimal solution for indifference graphs in accordance with the following algorithm:

    • Initialize colors C={ }
    • Scan through the vertices in the sorted order of $\mathcal{F}$.
    • For each vertex v, find a color c∈C not used by the neighbors of v and assign it to v; if no such color exists, color v with a new color c′, and set C=C∪{c′}.


      The above algorithm can be implemented efficiently using a stack and runs in linear time, as in the sketch below.
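The following is a sketch of this greedy chord construction, assuming the frequencies arrive as an ascending sorted list. It is a straightforward first-fit scan; the simple inner loop here is O(mK) rather than the stack-based linear-time variant mentioned above.

```python
def conflate_modes(freqs_sorted, eps):
    """Greedy coloring of the indifference graph: assign each frequency,
    in ascending order, to the first chord whose most recently added
    frequency is more than eps away. Returns chords as lists of mode
    indices into freqs_sorted."""
    chords, last = [], []           # last[k] = highest frequency in chord k
    for i, f in enumerate(freqs_sorted):
        for k, fk in enumerate(last):
            # The scan is in sorted order, so only the most recently
            # added frequency of each chord needs checking: f >= fk.
            if f - fk > eps:
                chords[k].append(i)
                last[k] = f
                break
        else:                       # no existing chord fits: open a new one
            chords.append([i])
            last.append(f)
    return chords
```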


In the context of the overall processing pipeline, three parameters in mode conflation have significant impact on transfer deconflation: (n, δt, ε). Their effects on deconflation performance can be characterized using a single non-dimensional “time-bandwidth product” (TBP), defined as:







$$\mathrm{TBP} = n\,\delta t\,\varepsilon = \frac{T_{\text{window}}}{T_{\text{beat}}}$$

where $T_{\text{window}} \equiv n\,\delta t$ is the length of the sliding window, and $T_{\text{beat}} \equiv 1/\varepsilon$ is the inverse of the beat frequency caused by the two closest modal frequencies. The TBP directly affects the stability of the least-squares basis matrix A defined above. In numerous embodiments, a basis with a high TBP has better conditioning and the resulting solves are more stable, whereas one with a low TBP (especially when lower than 1) can cause conditioning problems. In many embodiments, a TBP of 1 is sufficient for many challenging problems, and an increase in TBP is indicative of additional computational cost.
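As a small worked instance of this relationship, one can solve for whichever parameter is free; the hypothetical helper below picks the smallest gap parameter achieving a target TBP.

```python
def gap_for_target_tbp(n, dt, tbp_target=1.0):
    """Smallest eps satisfying TBP = n * dt * eps >= tbp_target.
    E.g. n = 2000 samples at dt = 1/44100 s gives eps of roughly 22 Hz
    for a target TBP of 1."""
    return tbp_target / (n * dt)
```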


Using the above, a set of K chords can be generated from any arbitrary number of modes m in such a way as to lead to easy deconflation. As noted above, between the mode conflation and transfer deconflation steps, chord sound fields are generated. In contrast to traditional methods which operate in the frequency domain to solve for the sound field, chords are solved in the time-domain. Vector wavesolvers which can estimate chord sound fields are described below.


E. Estimating Sound Fields Using Vector Wavesolvers

To remove the bottleneck of estimating the sound field of each mode individually, systems and methods described herein instead perform K separate time-domain wavesolves, each with the conflated boundary condition (BC) of $a_n(x,t)$ defined above. Time-domain wave solving is not typically performed because, if one only needs to determine the sound field for a single mode, it is more computationally intensive than a frequency-domain wave solve. However, as discussed above, by conflating modes and solving each chord in the time domain, significant speed increases can be achieved. A second advantage of the time-domain wave solve is that it can be highly parallelized. Wavesolvers in accordance with embodiments of the invention are described below with particular respect to their operation on a GPU in order to leverage its parallel computing capacity; however, as can be readily appreciated, similar implementations can be achieved using CPUs, distributed systems, and/or other logic circuits.


In numerous embodiments, depending on the object, close frequencies can occur in the high-frequency range, resulting in several chords having similar highest frequencies, and thus similar step rates. To leverage this, in many embodiments, solves for chords with similar step rates can be run together using the same (minimum) step size and spatial discretization. This is mathematically equivalent to solving the discrete vector wave equation with BCs,












$$\frac{\partial^2 \mathbf{p}(x,t)}{\partial t^2} - c^2 \nabla^2 \mathbf{p}(x,t) = 0$$

$$\partial_n \mathbf{p}(x,t) = -\rho\, \mathbf{a}_n(x,t).$$







With specific respect to a GPU implementation, solving the vector wave equation has several practical advantages over the scalar one: (1) it can enable reuse of potentially expensive geometry rasterization and other overhead, such as (but not limited to) cell index calculation, across vector components; (2) it can increase per-thread compute density; (3) it can reduce kernel launch overhead; and (4) it often results in better caching behavior when fetching the modal matrix, which can be sparse for a small chord. To attain a more optimized load balance, a modified SortedBalance algorithm used for makespan minimization problems can be used. The instant problem is a makespan problem because each machine can be considered an instance of the wavesolver, each job a chord with conflated modes, and jobs assigned to the same machine can be vectorized.


Chords can be sorted based on the number of modes conflated, which is the measure of compute load, e.g., it accounts for the higher cost of the dense matrix-vector multiply Uq when computing the acceleration BC for larger chords. The list is then looped through in order, and a job is assigned to the lightest machine if the new processing time does not exceed the maximum processing time of any machine at that time, or the hardware resources. Otherwise, the job is assigned to a new machine. A formalized pseudocode version of modified SortedBalance in accordance with an embodiment of the invention is illustrated in FIG. 10. In many embodiments, each sound field to be processed can be put into one of the R, G, B, and alpha components of a GPU-ready data structure to further pack data more efficiently. However, any number of approaches for packing each sound field into a component of a vector for a vectorized wavesolve, not just methods that utilize an RGBA vector, can be used as appropriate to the requirements of specific applications of embodiments of the invention.
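A CPU-side sketch of this balancing rule follows. The four-lane limit modeling the RGBA packing and the exact tie-breaking are assumptions; the formal pseudocode is in FIG. 10.

```python
def sorted_balance(chord_sizes, max_lanes=4):
    """Assign chords (jobs) to vectorized wavesolver instances (machines).
    The load measure is the number of conflated modes per chord."""
    order = sorted(range(len(chord_sizes)), key=lambda i: -chord_sizes[i])
    loads, assignments = [], []
    for i in order:
        size = chord_sizes[i]
        if loads:
            k = min(range(len(loads)), key=loads.__getitem__)  # lightest
            makespan = max(loads)
            # Pack into the lightest machine only if it stays within the
            # current makespan and a free vector lane (e.g. RGBA) remains.
            if loads[k] + size <= makespan and len(assignments[k]) < max_lanes:
                loads[k] += size
                assignments[k].append(i)
                continue
        loads.append(size)              # otherwise open a new machine
        assignments.append([i])
    return assignments
```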


Acceleration is not restricted only to load balancing. In many embodiments, a hybrid discretization approach can further be applied for improved perfectly matched layers (PML). In many embodiments, hybrid wavesolvers are based on a finite-difference time-domain (FDTD) discretization scheme with an absorbing boundary layer optimized for GPU programming and for the transfer computation with a fixed object. In many embodiments, a voxelized boundary discretization in conjunction with controlling the staircasing boundary error by sampling finer than the Shannon-Nyquist bound can yield additional efficiency by reducing memory traffic while keeping threads busy.


Hybrid wavesolvers can ensure fast wave simulation for the majority of cells, and accurate grid-boundary absorption using PML, in order to minimize artifacts caused by spurious wave reflections that might corrupt transfer deconflation. The inner domain, which contains the object, can utilize a pressure-only collocated grid with a lightweight 7-point stencil, while the outer domain can use a pressure-velocity staggered grid to support accurate split-field PML. Split-field PML gives better absorption than a purely scalar pressure-only absorption model because it damps waves in different directions separately. In this way, the inner and outer domains can be combined seamlessly and time-stepped separately.
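For the inner collocated grid, the following is a minimal (non-GPU) sketch of one leapfrog step with the lightweight 7-point stencil. The boundary handling is only a placeholder: the real solver's Neumann boundary kernel and the PML outer domain are omitted, and np.roll's wrap-around at domain edges stands in for the absorbing layer.

```python
import numpy as np

def fdtd_step(p, p_prev, solid_mask, c, dt, h):
    """One leapfrog step of the scalar wave equation on a collocated grid.
    p, p_prev: (nx, ny, nz) pressure at t and t - dt; solid_mask marks
    rasterized object cells; c: speed of sound; h: cell size."""
    lap = (-6.0 * p
           + np.roll(p, 1, 0) + np.roll(p, -1, 0)
           + np.roll(p, 1, 1) + np.roll(p, -1, 1)
           + np.roll(p, 1, 2) + np.roll(p, -1, 2)) / (h * h)
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
    p_next[solid_mask] = 0.0   # placeholder; real solver applies Neumann BC
    return p_next, p           # new (p[t+dt], p[t])
```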


Finally, to further accelerate wavesolving, a compact object bitset representation can be used. In many embodiments, during solver initialization, the object is conservatively rasterized to the grid using a standard approach. However, the rasterized representation may be needed by various kernels to sample boundary conditions at every time step. Therefore, it can be beneficial in many embodiments to use a compressed representation to reduce memory traffic.


In numerous embodiments, a multi-pass algorithm compresses the rasterized representation into a bitset representation. Consider a list of size $|\mathcal{C}|$ of cell indices indicating rasterized cells. The goal is to compute a binary packing into $\lceil |\mathcal{C}|/\mathcal{W} \rceil$ strings, each of $\mathcal{W}$-bit length. First, a series of passes are run over $\mathcal{C}$ to determine the offset of each string. The passes consist of stream compaction, vectorized search, and adjacent differences. An optimal schedule can be generated, and each thread execution group can process the binary representation independently. To demonstrate this concept, a rasterized object in accordance with an embodiment of the invention is illustrated in FIG. 11A, and the compact bitset representation of the rasterized object in accordance with an embodiment of the invention is illustrated in FIG. 11B. Note that in this context, “warp” is synonymous with “thread group.”
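A simplified illustration of such a bitset compression follows, using NumPy on the CPU rather than a GPU multi-pass kernel and omitting the offset-computation passes; the word width and helper names are assumptions.

```python
import numpy as np

def pack_bitset(cell_indices, num_cells, word_bits=32):
    """One bit per grid cell, packed into word_bits-wide unsigned words."""
    words = np.zeros((num_cells + word_bits - 1) // word_bits,
                     dtype=np.uint32)
    idx = np.asarray(cell_indices, dtype=np.int64)
    bits = np.uint32(1) << (idx % word_bits).astype(np.uint32)
    np.bitwise_or.at(words, idx // word_bits, bits)   # tolerates duplicates
    return words

def is_rasterized(words, cell, word_bits=32):
    """Constant-time membership test used when sampling boundary conditions."""
    return bool((words[cell // word_bits] >> np.uint32(cell % word_bits)) & 1)
```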


Using any or all of these approaches (vector wave equation-based load balancing, hybrid discretization, and bitset representation) can yield additional accelerations, especially when operating on a GPU. However, as can be readily appreciated, even conventional time-domain wave solvers can be used on chords and still yield chord sound fields which can be transfer deconflated. It is not a requirement of any acoustic simulation system to utilize these acceleration steps. With this understanding of how to quickly determine the sound fields for all of an object's modes, methods for storing the sound fields for each mode are explored below.


F. Far-Field Acoustic Transfer (FFAT) Maps

In many embodiments, the acoustic transfer functions discussed above are computed in a small domain containing the object. However, in many situations, sound will need to be rendered outside of this small domain. For example, if a player's avatar in a VR game picks up a rock and throws it at a bowl far away from the position of the avatar, it is unlikely that the “ears” of the avatar are located within the domain of the computed acoustic transfer function of the bowl. As such, a practical way to extend these solutions into the acoustic far field is desirable for many applications. FFAT maps are conventionally designed to approximate $p(x) \in \mathbb{C}$. In many embodiments, a modified FFAT map ignores phase values and instead directly approximates the simpler squared transfer amplitude $p^2(x) = |p(x)|^2$ using a real-valued expansion. Specifically, in a variety of embodiments, $p^2$ is approximated using a positive polynomial in $1/r$ with direction-dependent coefficients








$$T(r) \equiv \left( \sum_{i=1}^{\tilde{M}} \frac{\psi_i(\hat{r})}{r^i} \right)^2 \approx p^2(r),$$

where the radial expansion is evaluated with respect to the object's bounding box center point, $x_0$. The functions $\psi_i$ capture the directionality of the radiating fields, and the radial expansion has the correct asymptotic behavior as $r \to \infty$.


The coefficient $\psi_i$ for a given angular direction is estimated using least squares by taking $\tilde{N}$ samples at the intersection points between a ray emitted from $x_0$ and concentric boxes aligned with the solver grid. To illustrate this concept, a series of concentric boxes around an object in accordance with an embodiment of the invention is illustrated in FIG. 12. The boxes are geometrically expanded from the bounding box of the object, e.g. $R_i = 1.25^i$ for the expansion ratio of the ith box. While any number of different values for $R_i$, $\tilde{M}$, and $\tilde{N}$ can be selected, in many embodiments $\tilde{M} = 1$ and $\tilde{N} = 3$ yield robust results while remaining computationally efficient, as the least-squares solve per ray only involves two 3-vector dot products and a division. As can be readily appreciated, these values can be modified as desired based on the object and the computational power available. The directional parametrization of $\psi_i$ can be chosen to coincide with the largest box mesh in order to reduce resampling issues. Pressure values on the other two boxes can be bilinearly interpolated to the appropriate sample locations. Because of the chosen box parametrization, the resulting $\psi_i$ field can be efficiently processed, compressed, stored, and looked up using standard image processing libraries.
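For the common configuration $\tilde{M} = 1$ and $\tilde{N} = 3$ described above, the per-ray fit collapses to exactly the two dot products and one division mentioned. A sketch, assuming the radii and interpolated amplitude samples along the ray have already been gathered:

```python
import numpy as np

def estimate_psi(radii, amplitudes):
    """Per-ray least-squares fit of the M=1 radial expansion
    psi / r ~ |p|(r) from N samples (N=3 on the concentric boxes)."""
    a = 1.0 / np.asarray(radii)          # basis values 1/r_k
    b = np.asarray(amplitudes)           # sampled transfer amplitudes
    return (a @ b) / (a @ a)             # argmin_psi sum_k (psi/r_k - p_k)^2
```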


As can be readily appreciated from the above, FFAT maps can be used not only to store the calculated modal sound fields, but to precompute acoustic responses at arbitrary distances outside of the calculated modal sound field. Further, FFAT maps are relatively small, and can be stored in association with an object in order to quickly synthesize any acoustic response needed by a simulation. Processes for using FFAT maps to render audio in digital environments are discussed in further detail below.


G. Audio Synthesis using FFAT Maps


Systems and methods described above are capable of quickly computing acoustic transfer for a given object, and storing the results for each mode in FFAT maps. FFAT maps, including the specialized FFAT maps described above, can be used to render audio at runtime in, for example, a video game, a VR experience, or any other digital experience. By storing FFAT maps as metadata or otherwise in association with an object, they can be rapidly accessed based on the vibrational modes triggered when an object in the simulation is interacted with to determine the appropriate acoustic output for the user based on their relative position to the object.


Turning now to FIG. 13, a process for playing back the acoustic response of an object in a simulation in accordance with an embodiment of the invention is illustrated. Process 1300 includes obtaining (1310) a set of FFAT maps for a given object. As previously noted, FFAT maps can be stored as metadata to object data, and/or otherwise be associated with the rendered model of the object. The object is rendered (1320) in a virtual environment. When an interaction occurs with the object (e.g. it is manipulated in some way that would trigger an acoustic response), interaction data describing the interaction is obtained (1330). In many embodiments, the interaction data contains a vector describing a force applied to the object at a particular point. However, interaction data can describe the interaction using any number of different data structures which can be used to determine which vibrational modes are triggered by the interaction. The appropriate FFAT maps describing the triggered vibrational modes are identified (1340) and used to calculate (1350) the acoustic response to the interaction appropriate to the listener's position with respect to the object. In many embodiments, the pressure responses at the listener's location for each triggered mode are summed to produce the appropriate output signal (an “acoustic response signal”). In various embodiments, multiple pressure responses (and multiple output signals) are generated if there are multiple listeners, and/or if multiple output channels are desired (e.g. for binaural output, surround sound speaker layouts, etc.). In various embodiments, the summation is formalized as









$$\sum_{i=1}^{m} \left| p_i(x) \right| q_i(t).$$






The output signal can then be played back (1360) via a loudspeaker and/or audio system as appropriate to the requirements of specific applications of embodiments of the invention.
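A runtime sketch of this summation follows, assuming each FFAT map exposes a directional lookup $\psi_i(\hat{r})$ as in the $\tilde{M} = 1$ expansion above; the lookup interface and names here are hypothetical.

```python
import numpy as np

def render_sample(listener_pos, obj_center, psi_lookup, q, excited_modes):
    """One output sample: sum_i |p_i(x)| q_i(t) over the excited modes,
    approximating each amplitude from its FFAT map as psi_i(r_hat) / r."""
    rel = np.asarray(listener_pos) - np.asarray(obj_center)
    r = np.linalg.norm(rel)
    r_hat = rel / r
    sample = 0.0
    for i in excited_modes:
        amplitude = psi_lookup[i](r_hat) / r   # hypothetical map lookup
        sample += amplitude * q[i]             # q[i]: current modal coord
    return sample
```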


While many systems and methods for acoustic simulation are discussed above, many different system architectures and methods can be implemented in accordance with many different embodiments of the invention. It is therefore to be understood that the present invention may be practiced in ways other than specifically described, without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.

Claims
  • 1. A method for simulating acoustic responses, comprising:
obtaining a digital model of an object;
calculating a plurality of vibrational modes of the object;
conflating the plurality of vibrational modes into a plurality of chords, where each chord comprises a subset of the plurality of vibrational modes;
calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes;
deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode; and
storing each modal sound field in a far-field acoustic transfer (FFAT) map.
  • 2. The method of claim 1, further comprising:
rendering the digital model of the object in a digital environment;
receiving interaction data, where the interaction data describes an interaction between the rendered digital model and a second object in the digital environment; and
playing back an acoustic response based on vibrations of the digital model of the object in response to the described interaction.
  • 3. The method of claim 2, wherein playing back the acoustic response comprises:
selecting at least one FFAT map based on the vibrations of the digital model;
determining a location of a listener in the virtual environment with respect to the digital model;
summing amplitudes for each frequency generated by the object at the location of the listener based on the FFAT maps.
  • 4. The method of claim 2, wherein the second object in the digital environment is an avatar.
  • 5. The method of claim 2, wherein the second object in the digital environment is a cursor.
  • 6. The method of claim 1, wherein the FFAT map is stored as metadata to the digital object.
  • 7. The method of claim 1, wherein calculating the chord sound field comprises solving the Helmholtz wave equation in the time domain.
  • 8. The method of claim 1, wherein conflating the plurality of vibrational modes comprises utilizing a greedy algorithm to identify the subset of the plurality of chords separated by a gap parameter.
  • 9. The method of claim 1, wherein the FFAT map approximates a squared transfer amplitude at a plurality of coordinates using a real-valued expansion.
  • 10. An acoustic simulator, comprising:
a processor;
a graphics processing unit (GPU); and
a memory, the memory containing an acoustic modeling application;
where the acoustic modeling application directs the processor to:
obtain a digital model of an object;
calculate a plurality of vibrational modes of the object;
conflate the plurality of vibrational modes into a plurality of chords, where each chord comprises a subset of the plurality of vibrational modes;
calculate, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes using the GPU;
deconflate each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode; and
store each modal sound field in a far-field acoustic transfer (FFAT) map.
  • 11. The acoustic simulator of claim 10, wherein the acoustic modeling application further directs the processor to:
render the digital model of the object in a digital environment;
receive interaction data, where the interaction data describes an interaction between the rendered digital model and a second object in the digital environment; and
play back an acoustic response based on vibrations of the digital model of the object in response to the described interaction.
  • 12. The acoustic simulator of claim 11, wherein to play back the acoustic response, the acoustic modeling application further directs the processor to:
select at least one FFAT map based on the vibrations of the digital model;
determine a location of a listener in the virtual environment with respect to the digital model;
sum amplitudes for each frequency generated by the object at the location of the listener based on the FFAT maps.
  • 13. The acoustic simulator of claim 11, wherein the second object in the digital environment is an avatar.
  • 14. The acoustic simulator of claim 11, wherein the second object in the digital environment is a cursor.
  • 15. The acoustic simulator of claim 10, wherein the FFAT map is stored as metadata to the digital object.
  • 16. The acoustic simulator of claim 10, wherein to calculate the chord sound field, the acoustic modeling application further directs the GPU to solve the Helmholtz wave equation in the time domain.
  • 17. The acoustic simulator of claim 10, wherein to conflate the plurality of vibrational modes, the acoustic modeling application further directs the processor to utilize a greedy algorithm to identify the subset of the plurality of chords separated by a gap parameter.
  • 18. The acoustic simulator of claim 10, wherein the FFAT map approximates a squared transfer amplitude at a plurality of coordinates using a real-valued expansion.
  • 19. A method for rendering sound for a digital environment, comprising:
obtaining a plurality of far-field acoustic transfer (FFAT) maps, where the plurality of FFAT maps is associated with an object rendered in the digital environment;
receiving interaction data describing an interaction with the object;
selecting a portion of FFAT maps from the plurality of FFAT maps, where the selected portion of FFAT maps are associated with vibrational modes of the object activated by the interaction;
determining an acoustic response signal to the interaction based on the FFAT maps; and
playing back the acoustic response signal.
  • 20. The method of claim 19, wherein the plurality of FFAT maps are generated by:
obtaining a digital model of the object;
calculating a plurality of vibrational modes of the object;
conflating the plurality of vibrational modes into a plurality of chords, where each chord comprises a subset of the plurality of vibrational modes;
calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes;
deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode; and
storing each modal sound field in a far-field acoustic transfer (FFAT) map.
CROSS-REFERENCE TO RELATED APPLICATIONS

The current application claims the benefit of and priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 62/854,037 entitled “Optimal Mode Conflation for Time-Domain Precomputations of Acoustic Transfer” filed May 29, 2019. The disclosure of U.S. Provisional Patent Application No. 62/854,037 is hereby incorporated by reference in its entirety for all purposes.

PCT Information

Filing Document: PCT/US2020/035247
Filing Date: 5/29/2020
Country: WO

Provisional Applications (1)

Number: 62/854,037
Date: May 2019
Country: US