Mid-Air Haptic Generation Analytic Techniques

Information

  • Patent Application
  • Publication Number
    20230215248
  • Date Filed
    January 02, 2023
  • Date Published
    July 06, 2023
Abstract
Mid-air ultrasonic haptic devices operate by manipulating an acoustic field to produce a haptic effect on a user. A way of addressing mid-air haptic devices which abstracts the most basic acoustic fundamental from that of a point to a “primitive” provides tools to adjust shape, location, and amplitude. A primitive can be designed to provide a haptic effect at the targeted location, removing the requirement that the designer understand methods to create a haptic sensation. Further, a control scheme for a set of dynamic acoustic phased-array solvers is presented which enables a distributed system to compensate for unwanted time-of-flight artifacts at low cost. This is achieved by recursively subdividing the system into subtrees of phased-array nodes whose output can be estimated and the desired field drive distributed amongst the nodes. Timings of the desired field drive requests submitted to individual phased-array node inputs are then modified to compensate for the differences between wave coalescence/convergence and wave emission times (the time-of-flight), resulting in a more accurate acoustic field.
Description
FIELD OF THE DISCLOSURE

The present disclosure relates generally to improved techniques in producing and altering haptic effects from a focused acoustic field.


BACKGROUND

A continuous distribution of sound energy, which we will refer to as an “acoustic field”, can be used for a range of applications including haptic feedback in mid-air, sound-from-ultrasound systems and producing encoded waves for tracking systems.


By defining one or more control points (or focus points) in space, the acoustic field can be controlled. Each point can be assigned a value equating to a desired amplitude at the control point. A physical set of transducers can then be controlled to create an acoustic field exhibiting the desired amplitude at the control points.


Mid-air ultrasonic haptic devices function by focusing ultrasonic energy to a point on the skin. This acoustic field induces two different forces: a dynamic force at the ultrasonic frequency, and a constant force from nonlinear acoustics. None of the mechanoreceptors in human skin are capable of detecting either of these. The most sensitive mechanoreceptors respond to vibratory stimulation of approximately 20-500 Hz. These are targeted by mid-air haptic devices through modulation of the acoustic field, either by varying amplitude (amplitude modulation, AM) or by tracing a repeating pattern (spatiotemporal modulation, STM).


The result of this is a disconnect between control of the ultrasonic array and haptic output. For example, if a device is to target a haptic effect on the palm of a user, directing a high-pressure point at a fixed location on the palm is not enough; this will not produce a haptic effect. Instead, the host processor must use AM or translate the point along a repeating path to induce a haptic sensation. To date, all mid-air haptic devices have been addressed by dictating point locations, with the disconnect between point location and haptics described above.


This disclosure proposes the use of a “haptic primitive” which is a fundamental unit of output which is more sophisticated than the simple high pressure point. One example of a haptic primitive is an AM point. Another example is an STM circle (a set of sequential points which draw a circle which is then repeated). Both of these generate a haptic sensation at the point they are directed. Using this concept, a haptic designer can direct a primitive to a location and deliver a haptic sensation at that location without needing to understand the subtleties of human mechanoreceptors. This primitive can then be translated or distorted to produce an infinite variety of haptic effects. This removes a level of complication between the desired haptic sensation and array output, enabling a broader class of users to program a mid-air haptic device successfully.


Further, by defining one or more control points in space, the acoustic field can be controlled. Each point can be assigned a value equating to a desired amplitude at the control point. A physical set of transducers can then be controlled to create an acoustic field exhibiting the desired amplitude at the control points. This is achieved by solving a complex-valued linear system, where a phased array system comprised of the physical set of transducers may be actuated using sinusoids defined by complex-valued excitations to generate the desired diffraction field that meets the required amplitudes at the locations of the control points.


SUMMARY

Mid-air ultrasonic haptic devices operate by manipulating an acoustic field to produce a haptic effect on a user. The primary method is to focus the acoustic energy to a compact point on a user's skin and modulate that energy at a frequency to which the human mechanoreceptors are most sensitive (typically 20-500 Hz). The modulation possibilities can be grouped into two broad categories: amplitude modulation (AM) and spatiotemporal modulation (STM). AM involves repeatedly attenuating the acoustic field at a sensitive frequency, which manifests as a small vibrating point to the user. STM involves translating a focus point along a repeated path, where the path is repeated at a skin-sensitive frequency. This manifests as a vibrotactile volume in the spatial locations of the path. AM and STM can also be combined to create a wide variety of haptic effects.


Creating a high-pressure point at a given location at a given amplitude is the fundamental problem confronted in designing a mid-air haptic system and is what enables mid-air haptic feedback. Today's devices expose that functionality to the haptic programmer and assume enough competency with the complexity of human skin response to design a modulation scheme which produces a haptic effect. This assumption fundamentally limits mid-air haptic programmers to those willing to invest considerable effort into the endeavor.


The disclosure presented here proposes a fundamentally new way of addressing mid-air haptic devices which abstracts the most basic acoustic fundamental from that of a point to a “primitive” and then provides tools to adjust shape, location, and amplitude. A primitive can be designed to provide a haptic effect at the targeted location, removing the requirement that the designer understand methods to create a haptic sensation: the primitive provides the haptic sensation, and the designer only needs to worry about the ‘where’ and not the ‘how’. In addition, this disclosure allows for a better distribution of computation between host and device. The mid-air haptic device can compute the points within a primitive instead of the host, allowing for the possibility of an imperfect connection between host and device without significant loss of haptic effect.


Further, a control scheme for a set of dynamic acoustic phased-array solvers is presented which enables a distributed system to compensate for unwanted time-of-flight artifacts at low cost. This is achieved by recursively subdividing the system into subtrees of phased-array nodes whose output can be estimated and the desired field drive distributed amongst the nodes. Timings of the desired field drive requests submitted to individual phased-array node inputs are then modified to compensate for the differences between wave coalescence/convergence and wave emission times, the time-of-flight, resulting in a more accurate acoustic field.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention and explain various principles and advantages of those embodiments.



FIG. 1 shows a block diagram of the basic transform pipeline.



FIG. 2 shows a block diagram of the transformation pipeline including animation transforms.



FIG. 3 shows a block diagram of the transformation pipeline including animation transforms having associated storage.



FIG. 4 shows a block diagram of the primitive generator that corresponds to the palm of a tracked user's hand.



FIG. 5 shows four tracking reference frames, each pinned to the palm of a tracked hand.



FIG. 6 shows a diagram illustrating states of the device moving through time that are constructed by interpolating together known states at known instants.



FIG. 7 shows a diagram similar to FIG. 6 when the user is not streaming data to the device and state data from the user is unavailable.



FIG. 8 shows the “Estimation Phase” of a method by which a device generates the desired acoustic field.



FIG. 9 shows the “Compute Phase” of a method by which a device generates the desired acoustic field.



FIG. 10 shows the “Apply Solution Phase” of a method by which a device generates the desired acoustic field.





Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.


The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.


DETAILED DESCRIPTION

I. Mid-Air Haptic Generation by Transformations on Primitives


A. Introduction


A primitive is defined as a series of coordinate locations with associated amplitudes. The primitive output is passed through transforms (covered below) and finally given to an acoustic solver to generate the specified field. For instance, an AM point primitive could consist of a single location at the origin with sinusoidal amplitude modulation at 100 Hz. An STM circle primitive could consist of a single point with unit amplitude, offset from the origin by a given amount, which over time travels around a circle, repeating 100 times per second (100 Hz). In these examples, the “origin” is a construct which allows transforms to distort the primitive in a controlled way before translating it to the haptic user.
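

By way of illustration only, an AM point primitive of this kind might be evaluated as in the following Python sketch; the helper name am_point and its defaults are assumptions for the example, not a required implementation.


    import numpy as np

    def am_point(t, am_freq=100.0, base_amplitude=1.0):
        # AM point primitive: a single location at the origin whose
        # amplitude is sinusoidally modulated at am_freq (Hz).
        location = np.array([0.0, 0.0, 0.0])
        amplitude = base_amplitude * 0.5 * (1.0 + np.sin(2.0 * np.pi * am_freq * t))
        return location, amplitude

    # Evaluated once per device update, e.g. at t = 0.25 ms:
    loc, amp = am_point(0.00025)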


Mid-air haptic systems capable of generating complex fields with more than one high-pressure point could receive primitives which have more than one coordinate simultaneously. For instance, an STM circle consisting of two different points traveling along the same circle path, can create a different haptic feel than one point alone and could be a separate primitive. From the perspective of programming consistency, an ideal primitive generates a haptic effect on its own, but this is not strictly required by this invention.


One way of implementing a primitive is by using a mathematical function to generate points and amplitudes over time. For instance, an STM circle in the x-y plane could be generated by






x(t) = R [cos(ωt), sin(ωt), 0],


where R is the radius of the circle, t is time, and ω is the STM angular frequency. The amplitude, which would need to be supplied in tandem, is a fixed value in this case. A particularly useful mathematical generator for STM curves is given by








x(t) = A cos(aωt + δ) cos(kωt),

y(t) = B cos(bωt) cos(k(ωt + π/2)),

z(t) = 0.





where t is time and all other values are adjustable constants. Adjusting δ to zero and k to 1, for instance, results in a series of curves named rose curves. Adjusting δ to π/2 and k to 0 similarly results in a class of curves named Lissajous curves. Other adjustments to the constants result in circles and lines. Generating the paths using trigonometric functions such as the above yields paths without sharp angles or large spacing between points, both of which are known to create unwanted noise in mid-air haptic devices.
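

For illustration, the generating function above can be evaluated directly, as in the following Python sketch; the default radii, the 100 Hz repetition frequency and the sample count are assumptions chosen for the example.


    import numpy as np

    def stm_generator(t, A=0.01, B=0.01, a=1.0, b=1.0, k=0.0,
                      delta=np.pi / 2, omega=2.0 * np.pi * 100.0):
        # Evaluate the parametric STM generator above; t may be a scalar
        # or an array of sample times in seconds.
        x = A * np.cos(a * omega * t + delta) * np.cos(k * omega * t)
        y = B * np.cos(b * omega * t) * np.cos(k * (omega * t + np.pi / 2))
        z = np.zeros_like(np.asarray(t, dtype=float))
        return np.stack([x, y, z], axis=-1)

    # One 100 Hz repetition sampled at 256 points:
    t = np.linspace(0.0, 0.01, 256, endpoint=False)
    lissajous = stm_generator(t)                 # delta = pi/2, k = 0
    rose = stm_generator(t, k=1.0, delta=0.0)    # delta = 0, k = 1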


Primitives expressed as generating functions have the added benefit of being easily adjustable in real-time. If the primitive parameters are exposed to the programmer, this gives a sophisticated designer the ability to adjust the primitive in controlled, and often smooth, ways. It must be noted that the time variable here can be evaluated at arbitrary time or linked to discrete points, such as once per ultrasonic acoustic cycle.


After a primitive is selected, a series of transforms is applied to modify the shape, size and location of the points being generated to create more sophisticated sensations.


In one embodiment of this invention, this takes place in the form of a series of cascaded affine transforms which are applied through a dot product with the coordinate vector. Affine transforms are implemented as an n×n matrix multiplication where n is one more than the dimension of the input coordinates. For instance, an affine transform on a 2-dimensional set of coordinates is a 3×3 matrix with the input coordinates padded with a 1 as their 3rd coordinate before multiplication. Likewise, 3-dimensional coordinates use 4×4 element transformation matrices. Transformation matrices do not need to be square and can increase or decrease dimensionality if needed. Affine transformations are capable of, but not limited to, translation, reflection, scaling, rotation, and shear, as well as simultaneous combinations of these.
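

As a non-limiting sketch of the padding-and-multiply step just described, the following Python fragment applies cascaded 4×4 affine transforms to 3-dimensional points; the function name and the sample transforms are assumptions for illustration.


    import numpy as np

    def apply_affine(T, points):
        # Apply an (n+1)x(n+1) affine transform T to an (m x n) array of
        # points: pad each point with a homogeneous 1, multiply, then
        # drop the padding from the result.
        m, n = points.shape
        padded = np.hstack([points, np.ones((m, 1))])
        return (padded @ T.T)[:, :n]

    # Cascade two 3-D transforms: scale x by 2, then translate along z.
    scale = np.diag([2.0, 1.0, 1.0, 1.0])
    translate = np.eye(4)
    translate[:3, 3] = [0.0, 0.0, 0.2]
    points = np.array([[0.01, 0.0, 0.0]])
    out = apply_affine(translate, apply_affine(scale, points))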


As with primitive parameters, transforms can be implemented in the host or on the device. If implemented on the device, the host could modify exposed coefficients intermittently when things need to be changed, or on a regular basis. It must be noted that the primitive output coordinate system needs to be controlled in order for the transformations to be easily understood. For instance, a standard scaling transformation given by







T = [ 2  0  0
      0  1  0
      0  0  1 ],




which in this example would be interpreted as scaling the x-dimension by a factor of 2 while leaving the y-dimension alone. For this transform to operate as intended (scaling x symmetrically by 2), the primitive curve to be transformed needs to be centered at the origin. If, for example, a circle were centered at a coordinate other than zero, this transform would result in both a scaling and a translation of the geometric center of the primitive. This could interfere with subsequent translation and targeting of the sensation. Therefore, for consistent functioning of transformations, the best practice of locating the geometric center of primitives at the origin should be followed.


Primitives with more than one simultaneous point specified will need to have each point run through the transformation pipeline independently. One method to accomplish this is to represent each point as a column in a location matrix and use the standard definition of the dot product to apply each affine transform. In that case the output transformation will produce the transformed coordinates in their respective columns.


Acoustic solvers which take the output of this invention to generate acoustic fields can take as input both location and amplitude. More sophisticated solvers will attempt to achieve the desired parameters within the constraints of the transducers available. Amplitude information can take the form of simple acoustic pressure (measured in pascals), or can be more sophisticated if allowed by the solver. This includes, but is not limited to, squared pressure, acoustic intensity, particle velocity, particle velocity in a particular direction, nonlinear acoustic force, and nonlinear acoustic force in a particular direction. Amplitude information must be specified during the generation of the primitive. This amplitude can be modified post-primitive by scaling that is synched with or independent of transforms. Similar to the primitive generator, the amplitude could be represented as a mathematical function with adjustable variables. Scaling above the maximum output value can be managed through clipping, scaling, dynamic scaling, or other methods common to signal processing.
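

For illustration, one of the management methods mentioned, dynamic scaling, might be sketched as follows (Python/NumPy; the function name is an assumption).


    import numpy as np

    def dynamic_scale(amplitudes, max_output):
        # If any amplitude in the frame exceeds the device maximum,
        # rescale the whole frame uniformly rather than clipping points
        # individually, preserving the relative amplitudes.
        peak = np.max(amplitudes)
        return amplitudes * (max_output / peak) if peak > max_output else amplitudes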


One particular primitive of note is one which contains points with a specified amplitude of zero. When a zero amplitude goal is given, a standard acoustic solver will attempt to formulate a field with zero (or very low) pressure at that location. A primitive which only contains points with zeros could therefore be seen as a “quiet” region in the field and placed in such a way as to shield potentially sensitive devices or animals. This quiet primitive can be transformed just as other primitives in order to track the sensitive device or animal. A primitive with zero or “null” points needn't have all its points specified at low pressure—a mix of zero and non-zero is also possible.


Turning to FIG. 1, shown is a block diagram 100 of the basic transform pipeline. After primitive generation 110 the output points proceed through a series of transforms (T1 120, T2 130, . . . Tn 140) before being passed to an acoustic solver 150 which drives the mid-air haptic device.


Turning to FIG. 2, shown is a block diagram 200 of the transformation pipeline starting with primitive generation 201 including animation transforms (T1a1 . . . T1am 202 to T1 203; T2a1 . . . T2 am 212 to T2 213; . . . Tna1 . . . Tnam 222 to Tn 223) and stored transforms (T1s 205, T2s 215, . . . Tns 225). Animation transforms are applied to their associated base transform at regular intervals dictated by logic not shown. At any point the base transform may be reset to a stored value in the stored transform before being sent to the acoustic solver 240.


The introduction of animation transforms in FIG. 2 provides additional capability. These transformations take the same form as the base transforms (T1 120, T2 130, . . . , Tn 140) but are applied to the base transformation repeatedly, at regular intervals. In one embodiment of this invention, these take the form of affine transforms. For example, if T1a1 were active and designed to be applied after every update of the primitive, then before T1 is applied to the output of the primitive generator, T1 203 is modified through the dot product (T1a1 T1) and the result stored into T1 203. This repeated application of the same transform can produce progressive animations such as steady rotation or translation. Multiple animation transforms can be associated with each transform slot. Each can be used independently, with logic designed to dictate when each should be applied: for example, run the first one for 1 second, then switch to the next animation transform, and so on.


Alternatively, multiple animation transforms could be cascaded with each evaluation, or some combination of both methods. This allows the system to independently apply transformations which change the output of the system without constant changes being dictated by the host. As an example, a line primitive can be made to rotate about its center by using a rotation transform in an animation transform slot. Upon each update interval, this animation transform applies a fixed rotation into the base transform, and since repeated applications are cumulative, the resulting base transform progressively rotates the output at a rate dictated by the animation transform and the frequency of animation application. Because the base transforms are continually modified, it is useful to provide storage for each transform to give the ability to return back to a known point. These are shown illustrated by T1s 205, T2s 215, and so on.
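

The cumulative application just described might be sketched as follows; the 3-degrees-per-update figure and the names are assumptions for illustration.


    import numpy as np

    def rotation_z(theta):
        # 4x4 affine rotation about the z axis.
        c, s = np.cos(theta), np.sin(theta)
        T = np.eye(4)
        T[0, 0], T[0, 1] = c, -s
        T[1, 0], T[1, 1] = s, c
        return T

    T1 = np.eye(4)                      # base transform
    T1s = T1.copy()                     # stored transform, for resets
    T1a1 = rotation_z(np.radians(3.0))  # animation transform: 3 deg/update

    for _ in range(10):                 # ten update intervals
        T1 = T1a1 @ T1                  # cumulative: 30 degrees in total

    T1 = T1s.copy()                     # return to the stored known point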


Transforms can be fixed, modified by some internal logic, or addressed externally. In the context of mid-air haptic feedback, it is useful to categorize one transform, usually at the end of the chain, as a “tracking transform”. Transforms up to this point typically assume the geometric center of the primitive is located at the origin of the coordinate system. Using this best practice, values in the transforms can be more easily interpreted as scale, shear, rotation, or other simple operations. But the haptic sensation at some point needs to be located on a user's hand or other body part, which exists in a coordinate system separately provided by some kind of tracking device such as a camera, time-of-flight sensor or the like. Transforming from the internal device-coordinate-system to the tracking-coordinate-system can be accomplished in one or more base transforms. Producing this transform can be done via a host or internally to the array system with feedback from an external or internal tracker.


Generating points using methods presented in this invention which are fed to an acoustic solver which adjusts the output of a mid-air haptic device can occur at different levels in the control chain with different benefits. Traditionally, a mid-air haptic device expects point locations with desired amplitudes from a controller device, delivered in a timely manner. The controller device could take the form of a computer running a display or some other device integrated into user-tracking of some kind. It is possible to implement this invention on this host device which would address and operate the mid-air haptic device like before, with a stream of focus locations or more granular phases and amplitudes or similar information.


In another embodiment of this invention, the haptic device itself generates primitive coordinate locations and amplitudes and accepts from the host options to modify the primitive such as generating constants in a functional primitive as presented above and/or transform coefficients (both base and animation) as well as other logic to control timing and amplitude (not shown). In this arrangement, primitives and transforms can operate on the device, generating haptically effective acoustic field values independently while waiting for updates from a host. This improves the consistency of haptic effects as possible communication interruptions from the host will be largely invisible to the user experiencing the haptic effect.


In the legacy arrangement where points were streamed to the haptic device, missing updates for even a few milliseconds could result in uneven, noisy, or even completely absent haptic sensations. With primitives and transforms being generated on the devices, consistent output is maintained, even when the host is unresponsive.


Changing the primitives or transforms can be done from an external device (such as a host, as described above) or from within the mid-air haptic device. One method of organizing these changes is by means of a sequencer. This takes the form of a set of haptic parameters (primitive specification and transforms) which have a playback time and a pointer to another set of haptic parameters (with their own associated playback time). Upon activation, the system would produce the first set of parameters for the specified time period and then switch to the next as specified by the pointer. That next set of parameters would have its own playback time and pointer to another set. That pointer could point to a unique set of parameters or to one previously used.


In addition, any one set of parameters could also include logic to change parameters in other sets. For example, a group of 4 sets could proceed from 1 to 4 in order, but after getting to 4, that set changes pointers in 2 and 3 so that the order is reversed (3 points to 2 and 2 points to 1). Upon getting back to 1, that set returns the pointers to the original ordering sequence. This creates a sophisticated, repeating pattern without any direct changes necessary. During this playback, the tracking transform would be continuously modified as needed to maintain targeting of the haptic on the user. In this arrangement, a control (or host) device could modify primitives, transforms, pointers, or playback time found in any of the parameter sets, instead of directly modifying currently used transforms.
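

A minimal sketch of such a sequencer follows; the parameter sets, playback times and pointer values are assumptions for illustration.


    # Each set holds haptic parameters, a playback time, and a pointer to
    # the next set; a set may also mutate other sets' pointers, as in the
    # 1-2-3-4 reversal example above.
    sets = {
        1: {"params": "circle", "time": 0.5, "next": 2},
        2: {"params": "line",   "time": 0.5, "next": 3},
        3: {"params": "point",  "time": 0.5, "next": 4},
        4: {"params": "spiral", "time": 0.5, "next": 1},
    }

    def run_sequencer(sets, start=1, steps=8):
        # Follow the pointers, producing each set's parameters for its
        # playback time; here the visiting order is simply recorded.
        order, current = [], start
        for _ in range(steps):
            entry = sets[current]
            order.append((entry["params"], entry["time"]))
            current = entry["next"]
        return order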


Switching between sets of parameters can be done immediately or over time via a smooth transition. Small changes, such as changing the translation speed in an animation transform, for instance, can be done immediately without unwanted noise. Some changes, however, such as switching between very different primitives, need to be smoothed to produce a better user experience.


In one embodiment, a transition period can be incorporated using interpolation or low-pass filtering. This can be implemented at the final point output before going into the acoustic solver or at various points in between base transforms. In this case, the x, y, z locations and amplitudes for each point would all be separately interpolated or low pass filtered for a certain period until the filtering or interpolating is deemed unnecessary. Another way to implement the transition is to interpolate or low-pass filter all primitive and transform values from one set of parameters to another.
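

As an illustrative sketch, the per-value filtering could be as simple as the following one-pole low-pass filter (Python; the filter coefficient is an assumption).


    def low_pass(previous, target, alpha=0.1):
        # One-pole low-pass filter applied separately to each x, y, z and
        # amplitude value during a transition; smaller alpha gives a
        # slower, smoother changeover.
        return (1.0 - alpha) * previous + alpha * target

    # Example: blending a point's x coordinate across a parameter switch.
    x = 0.0
    for _ in range(50):
        x = low_pass(x, 0.02)   # converges smoothly toward the new value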


In some implementations of this invention, more than one primitive with associated, independent transforms will need to be produced simultaneously. For example, a two-handed interaction needs separate haptics to be produced on each hand with each hand moving independently. As discussed above some acoustic solvers have the ability to produce complex pressure fields with more than one high-pressure point at the same time. For these systems, one solution is to maintain at least two independent primitive/transformation pipelines where the output of each is joined into a single request to the acoustic solver at regular intervals. This does have some drawbacks, however, as acoustic energy will be split among all of the desired high-pressure points in the system.


Another solution to produce multi-primitive sensations is to alternate between them. In systems with a multipoint acoustic solver, this can be achieved by only having the amplitude of one primitive on at any one time, with all others off. At a specified interval (for instance, after one complete primitive interval for a repeated path), the amplitude of one primitive cross-fades to another primitive, which is driven for a time before crossfading to another and eventually looping back to the first and repeating. This may reduce the total acoustic energy delivered to any one primitive but can increase the peak pressure for the time each primitive is being driven.


Another method to produce at least two primitives simultaneously is to alternate between them at regular intervals. In other words, produce one for a certain interval, and then switch to the other, then the next, and so on. While this will accomplish the desired sensations, large jumps between individual points (such as between primitives) can produce unwanted audible noise.


A solution to this is to include a set of transition points between each primitive. This transition will include an amplitude ramp towards zero during and/or immediately preceding the transition. New points, not part of any primitive, will then be fed to the acoustic solver which proceed from the last point produced by the previous primitive to the first desired point in the next one. The path for the transition can be a straight line path, or a more complex curve generated by smoothing methods such as filtering or polynomial curves. At or just preceding the arrival at the new primitive, amplitude can ramp back to the desired value. The intervening path to join multiple primitives together will be a compromise between speed and audibility. The faster the transition, the more audible noise will be produced, while the slower the transition, the less time will be spent drawing the haptics which is likely to reduce haptic feel. Ramps can be linear, polynomial, trigonometric, or other suitable curve. Depending on the headroom of acoustic energy available, creating multiple, simultaneous mid-air haptics can change the feel of an individual primitive and care must be taken to accommodate the possible change.
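

The transition just described might be sketched as follows (Python/NumPy); the straight-line path, raised-cosine amplitude ramp and point count are assumptions chosen for illustration.


    import numpy as np

    def transition_points(last_point, next_point, n=32):
        # Straight-line transition between two primitives, with a
        # raised-cosine amplitude ramp that falls to zero mid-transition
        # and rises again on arrival at the new primitive.
        s = np.linspace(0.0, 1.0, n)
        path = (1.0 - s)[:, None] * last_point + s[:, None] * next_point
        amplitude = 0.5 * (1.0 + np.cos(2.0 * np.pi * s))
        return path, amplitude

    path, amp = transition_points(np.array([0.01, 0.0, 0.2]),
                                  np.array([-0.01, 0.0, 0.2]))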



FIG. 3 shows a block diagram 300 of an example implementation of the invention. In this case, a primitive generation 302 interfaces with two independent base transforms (including up to 2 animation transforms T1a1, T1a2 304 to T1 305 and T2a1, T2a2 314 to T2 315, and storage T1s 304, T2s 317), which interface with T3 320. T3 is implemented as a tracking transform to tilt and translate output points to a user's tracked palm.


Further, FIG. 3 shows an example implementation of this invention. In this case there are 3 base transforms with the first two having 2 animation transforms with associated storage. The third transform T3 320 is reserved as a tracking transform. In this arrangement, all transforms are implemented as 4×4 affine transformation matrices and the primitive generator outputs in 3 dimensions. This arrangement of transforms is capable of a wide variety of animated mid-air haptics with a minimum of computation.



FIG. 4 shows a block diagram 400 of an example implementation of the invention. In this case, the primitive generation 401 outputs only 2-dimensional points as shown in graph form 402. Output of the primitive generation is fed into a base transform with 2 associated animation transforms (animation transforms T1a1, T1a2 403 to T1 404, and storage T1s 405), all still in 2 dimensions 406.


Tilt T2 410 is designed to take the 2D information and output a 3-dimensional set of points which matches the tilt of a tracked user's palm 412. T3 414 then translates the points onto the user's hand 416. The 3D points then interface to the acoustic solver 420.


Further, in this case the output of the primitive generator is 2-dimensional, simplifying the haptic design to a plane which corresponds to the palm of a tracked user's hand. The first base transform and its associated animation and storage transforms are all 3×3 affine transformation matrices, operating on 2-dimensional coordinates. More of these transformations can be cascaded if needed (not shown in the figure). The tilt transform (T2 410) takes the 2-dimensional information and adds a 3rd coordinate through the following matrix multiplication:






u = norm(n̂ × [0, 0, 1])

c = n̂_z

s = sqrt(n̂_x² + n̂_y²)

T = [ c + u_x²(1 − c)    u_x u_y(1 − c)
      u_x u_y(1 − c)     c + u_y²(1 − c)
      u_y s              −u_x s         ],




where n̂ is the normal of the tracked palm.


Depending on the definition of the normal, a reversal may be required (multiplying n̂ by −1). After the tilt transform, a regular offset transform (T3 414) is applied to translate from array coordinates to tracking coordinates. This formulation of the invention reduces the amount of computation required for the initial cascade of base transforms, as they operate on only 2 dimensions rather than 3. It is possible to group the tilt and offset transforms into one transform, but they are broken out into two in this case for readability.
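

By way of example only, the tilt transform above might be assembled as in the following Python sketch; the function name and the guard for a degenerate (already vertical) normal are assumptions added for illustration.


    import numpy as np

    def tilt_transform(n_hat):
        # Build the 3x2 tilt matrix above, lifting 2-D primitive points
        # into the plane whose unit normal is n_hat (the tracked palm
        # normal, reversed beforehand if necessary).
        u = np.cross(n_hat, [0.0, 0.0, 1.0])
        length = np.linalg.norm(u)
        if length < 1e-9:               # palm already horizontal
            return np.eye(3)[:, :2]
        u = u / length
        c = n_hat[2]
        s = np.sqrt(n_hat[0] ** 2 + n_hat[1] ** 2)
        return np.array([
            [c + u[0] ** 2 * (1.0 - c), u[0] * u[1] * (1.0 - c)],
            [u[0] * u[1] * (1.0 - c),   c + u[1] ** 2 * (1.0 - c)],
            [u[1] * s,                  -u[0] * s],
        ])

    # Lift two 2-D points onto a palm tilted 30 degrees from horizontal:
    n_hat = np.array([np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))])
    xyz = (tilt_transform(n_hat) @ np.array([[0.01, 0.0], [0.0, 0.01]]).T).T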


The above description includes language where an “acoustic solver” drives a system to produce a desired acoustic field. This can take the form of a phased array of ultrasonic transducers where each transducer can be independently controlled. In this case the acoustic solver produces phase and amplitude parameters for driving each transducer. Other arrangements of an acoustic solver could include the ability to move a given transducer or groups of transducers. In this case the solver would produce phase, amplitude and location information. Regardless of actual implementation, this invention will be applicable to any mid-air haptic device which can flexibly produce acoustic fields with specified locations and associated amplitudes.


Turning to FIG. 5, shown is an illustrative example 500 of four tracking reference frames 520, 525, 530, 535, each in this example pinned to the palm of a tracked hand, a single global reference frame (center) 502 and two emitter reference frames 505, 510. A ‘pin’ transform transforms spatial haptic information from a local tracking reference frame, into which the haptic effect is initially composited, into the global reference frame. An ‘emitter’ transform then transforms this spatial haptic information from the global reference frame to a local emitter reference frame in which its local processing is defined. This allows individual discrete devices to manage their own transducer locations, so the devices may be agnostic to their location in the global reference frame.


B. Transformations as Buffered States


Transformations applied to the control point data can be classified into three broad categories that are applied in a specific order. First, as described, chains of composition transforms take points and other primitives and transform them to build local haptic effects. Second, pinning transforms change the local control point spatial frame of reference in which the effect is built into a global frame of reference.


Finally, emitter transforms change the global spatial frame of reference into a frame of reference with respect to the tile of transducers. These are applied in sequence to take a control point as specified to a final control point for the system in control of a transducer array device to solve for. Equally, in some cases these may be omitted, for example if tracking is not necessary, or if a single system reference frame in which the transducer positions are defined is functionally equivalent to a global reference frame.


Each of these can be supplied as a data stream to the mid-air haptic device to realize a further light-weight approach to generating haptic effects. A number of streams of keyframed pinning transformations may be chosen between to allow different local reference frames to be transformed into the global frame. For example, each joint of each finger on the hand may be assigned a reference frame and control points may be moved to a reference frame by selecting that stream and thus pinning the control point to the reference frame selected. This allows for interactions for multiple hands, or multiple joints to be naturally created.


As tracking the user may be folded into the process of generating the pinning transformation stream, this can provide an abstract interface that can allow the process to consume transformations without regard for the origins of the data, so this allows for the use of any tracking system to provide source data. Equally, some compositional transforms may be merged with the tracking to generate a more primitive definition reference frame for the haptic effect, depending on the intent of the haptic designer.


C. Recycling Buffered States


An alternate method for generating haptic effects may be realized by recycling buffered keyframes of the control points through time as shown in FIG. 7, as an alternate mode of device processing when a stream is not being supplied from an external source as in the ring buffer operations shown in FIG. 6.


Turning to FIG. 6, shown is a diagram 600 illustrating states of the device moving through time that are constructed by interpolating together known states at known instants or ‘timepoints’ 610. The clocks 640 and times are given for illustration purposes, as in a real implementation the queue shown would generally be consumed in a period many orders of magnitude shorter. This diagram 600 shows the structure of the ring buffer 670 for the queue and how this maps onto the behavior of the array or arrays along the timeline when the user is actively streaming data to the device. Future states marked as ‘fresh’ (leftward slashes and horizontal lines 601g 601h 601i 601j) are added incrementally to the ring buffer at the write index 630, incrementing it and defining the future behavior of the device.


When the read pointer reaches a state obtained from a user 655, as it is currently in the future, it is mixed with the previous state that is known to be in the past 620 (rightward slashes 601a 601b 601c 601d) to create the state that is to be used now (these two states are shown shaded 601e 601f, straddling the dashed ‘now’ line 645). This is achieved by interpolating the keyframes represented by the two states.


Turning to FIG. 7, shown is a diagram 700 showing the structure of the ring buffer 795 for the queue and how this maps onto the behavior of the array or arrays along the timeline 710 when the user is not streaming data to the device and state data from the user is unavailable 720. The clocks 745 and times are given for illustration purposes, as in a real implementation the queue shown would generally be consumed in a period many orders of magnitude shorter. Three states are shown shaded 701e, 701f, 701g, straddling the dashed ‘now’ line 740.


In this example, future states that can no longer be written to 755 (horizontal lines) are locked because there is not enough time to replace them before they are used. If the unlocked future states (leftward slashes in FIG. 6) are exhausted before fresh states are obtained from the user, instead of locking and thus using one of the stale states (wide rightward slashes that contain invalid data 701a, 701b, 701c, 701d), policies will be used to copy data from past states 760 (narrow rightward slashes), often amalgamating timing information (shown as the two adjacent states 791) with other state data (shown as the other outlined states 792, 793) to generate new state data that extrapolates into the future the desired behavior of the array. As such, the ring buffer structure 795 may be used to create simple repeating haptic effects that may be composited with data from other interfaces and/or buffers, where these other types of data may include spatial transformations that are applied later in the haptic effect render pipeline.


When the stream of externally supplied haptic effect primitive data (such as for example, control points or parametric haptic curves) is exhausted, policies describe how to recycle past data to generate repeating effects. This is realized by modifying the times associated with the control point states in the past while copying into the future section of the input buffer. In this way haptic effects comprising repeating cycles of the same control point data may be generated. Parallel queues of transformations that run alongside and modify these states through the application of the transformations can modify the haptic effects, so as to allow them to follow tracking data. Each control point may be generated from separate individual queues of states, or a combined queue—by singling out control points in this way, individual applications may apply bespoke repeating patterns as input to the mid-air haptic system. By recycling the control point states, but allowing the composition and pin or tracking transformations associated with them to change, different effects may be generated. Without loss of generality, recycling data in queues may be performed on queued transformation data with or without recycling control point data, providing palettes of effects such as for example running the two cycles of control point data and transformation data out of phase to produce a rose curve or Lissajous figure. Equally, a further example could be realized as small circles created with the control point data while allowing the transformation data to change to provide a brush that is moved by pinning the small circles to a transformation stream.
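

A minimal sketch of this recycling policy follows; the keyframe representation and queue structures are assumptions for illustration.


    from collections import deque

    def recycle(past, future, cycle_period):
        # When the queue of fresh keyframes is exhausted, copy past
        # keyframes into the future with their timestamps advanced by one
        # cycle period, producing a repeating haptic effect.
        if not future:
            for t, state in past:
                future.append((t + cycle_period, state))
        return future

    past = deque([(0.00, "k0"), (0.01, "k1"), (0.02, "k2")])
    future = deque()
    recycle(past, future, cycle_period=0.03)
    # future now replays k0, k1, k2 at t = 0.03, 0.04, 0.05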


As the input queue is being fed by recycled data, further policies may be used to control how the haptic effect changes if an external haptic effect primitive data stream source is resumed. This may involve for example, blending the transform position or created minimum curvature paths so that the haptic effect of the recycling process may proceed smoothly into the haptic effect created by the resumed external stream.


Keyframe states of the haptic points as shown in FIGS. 6 and 7 may also have tangent information in order to construct Bezier and other types of curves in the interval between the keyframes. Bezier curves are especially effective, however, as by using the de Casteljau algorithm to generate Bezier curves through recursive linear interpolation, a linear interpolation or other interpolation scheme that is already present may be reused, allowing the number of keyframe states required to generate complex repeating haptic effects to be minimized.
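

For illustration, the de Casteljau evaluation that reuses a single linear-interpolation primitive might be sketched as follows (Python; the names are assumptions).


    def de_casteljau(control_points, s):
        # Evaluate a Bezier curve at parameter s in [0, 1] by recursive
        # linear interpolation, reusing the same lerp that keyframe
        # interpolation already requires.
        pts = [list(p) for p in control_points]
        while len(pts) > 1:
            pts = [[(1.0 - s) * a + s * b for a, b in zip(p0, p1)]
                   for p0, p1 in zip(pts[:-1], pts[1:])]
        return pts[0]

    # Quadratic Bezier between two keyframes with one tangent-derived
    # control point; at s = 0.5 this returns [0.5, 0.5].
    mid = de_casteljau([[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]], 0.5)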


While this particular approach has advantages for fixed-function hardware pipelines, the recycling of input buffers or queues of haptic effect primitive data may be implemented in any number of different ways, in both software and hardware systems, such as FPGAs or microcontrollers that contain sufficient buffering and queueing capabilities.


II. Distributed Time-Domain Phased-Array Solver for Applications in Ultrasonics


A. Introduction


As described earlier, the system may be split into many pieces which are responsible for local output on each local transducer array. Noting that for ‘small’ groups of transducer elements that are close in space and thus must be ‘close’ in time-of-flight, solving for the group does not incur noticeable time-of-flight artefacts or errors due to the locality of the group. Given this, solving independently for ‘small’ groups of transducer elements will each produce output that is largely free of time-of-flight error.


Therefore, splitting the system into many smaller solvers which are each responsible for a portion of the output is possible, but the independent nature of the groups is a problem. The inputs to each “solver” must be modified to enable the system to work as a whole, allowing these disparate units to cooperate on producing the acoustic field.


These are divided for the purposes of illustration into three categories, although the physical implementations of such devices may fall into more than one adjacent category.


Writing the problem definition in mathematics, α_q(x_j) may be used to describe a complex-valued scalar linear acoustic quantity α measured at a position offset from the transducer element q by the translation vector x_j, which may evaluate to be acoustic pressure or an acoustic particle velocity in a direction chosen for each j. The matrix A may be written:







A = [ α_1(x_1) ⋯ α_N(x_1)
         ⋮     ⋱     ⋮
      α_1(x_m) ⋯ α_N(x_m) ],




As this matrix A is not square, and the degrees of freedom number more than the constraints, this is termed a ‘minimum norm’ system. It is ‘minimum norm’ because, as there are infinitely many solutions, the most expeditious solution is the one which achieves the correct answer using the least ‘amount’ of x: the solution x with minimum norm. To achieve this, some linear algebra is used to create a square system from the minimum norm system Ax = b:






AᴴAx = Aᴴb,

(AᴴA)⁻¹AᴴAx = x = (AᴴA)⁻¹Aᴴb,


This AᴴA is now N columns by N rows, and given that the number of transducers is often very large, this is an equivalently large matrix; since any solution method must effectively invert it, this is not an efficient method. A more accessible approach is to create a substitution Aᴴz = x before applying a similar methodology:






Cz = AAᴴz = Ax = b,

z = C⁻¹b = (AAᴴ)⁻¹b,


This time around, as C = AAᴴ is a mere m columns by m rows, this result is a much smaller set of linear equations to work through. The vector z can be converted into x at any time so long as Aᴴ can be produced.
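

As an illustrative sketch only, the following Python/NumPy fragment carries out this substitution numerically for an unweighted minimum-norm system, with A and b as defined above.


    import numpy as np

    def minimum_norm_drive(A, b):
        # Solve Ax = b for the minimum-norm x via the small m x m system
        # Cz = b with C = A A^H, then expand with x = A^H z.
        C = A @ A.conj().T             # m x m, m = number of control points
        z = np.linalg.solve(C, b)      # drive coefficients, one per point
        return A.conj().T @ z          # N transducer excitations

    # Example with 2 control points and 64 transducers:
    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 64)) + 1j * rng.standard_normal((2, 64))
    x = minimum_norm_drive(A, np.array([1.0 + 0.0j, 0.5 + 0.0j]))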


This change of variables from the complex-valued vector that describes the drive of individual transducer elements x to the much lower dimensional z has further meaning. Each complex-valued component of z can be viewed as a complex-valued drive coefficient that pre-multiplies a focusing function which generates a focus from all of the individual transducer fields, wherein the focal point is co-located with each individual control point. For m control points therefore, there are m such focussing functions or ‘basis sets’; they can be viewed as defining a complex vector space ℂᵐ where points in this space correspond to possible configurations of these m ‘focus points’.


To account for the possibility that the optimal minimum norm solution is not used to form the C matrix, this may be expressed via an extra weighting for each transducer and control point as σ_{r,q}, with r representing the control point index and q representing the transducer index (which may be set to unity in the case that no weighting is desired; otherwise some transducers will be used preferentially in the creation of the control points, resulting in deviation from the minimum norm condition). This can be viewed as reweighting the final x vector used as the excitation vector for the transducer elements by substituting Bᴴz = x, where:







B = σ × A = [ σ_{1,1}α_1(x_1) ⋯ σ_{1,N}α_N(x_1)
                     ⋮        ⋱        ⋮
              σ_{m,1}α_1(x_m) ⋯ σ_{m,N}α_N(x_m) ],




and × here denotes component-wise multiplication. Defining α_c = [α_1(x_c), . . . , α_q(x_c), . . . , α_N(x_c)] and β_c = [σ_{c,1}α_1(x_c), . . . , σ_{c,q}α_q(x_c), . . . , σ_{c,N}α_N(x_c)], this adjusted C matrix may be expressed as:







C = [ α_1·β̄_1 ⋯ α_1·β̄_r ⋯ α_1·β̄_m
         ⋮          ⋮          ⋮
      α_r·β̄_1 ⋯ α_r·β̄_r ⋯ α_r·β̄_m
         ⋮          ⋮          ⋮
      α_m·β̄_1 ⋯ α_m·β̄_r ⋯ α_m·β̄_m ],




but the dot products for each element may be written as a summation, where for instance α_a·β̄_b = Σ_{q=1}^{N} α_q(x_a) σ_{b,q} ᾱ_q(x_b).


If there are multiple devices that have access to disjoint sets of transducer elements, so that there exist M devices such that the global transducer set may be numbered q ∈ {1, . . . , N_1, N_1+1, . . . , N_2, . . . , N_M = N}, where device 1 drives transducers 1, . . . , N_1, device 2 drives transducers N_1+1, . . . , N_2 and device M drives transducers N_{M−1}+1, . . . , N, then each dot product in the matrix C may be written:








β_a·ᾱ_b = (Σ_{q=1}^{N_1} α_q(x_a) σ_{a,q} ᾱ_q(x_b)) + ⋯ + (Σ_{q=N_{M−1}+1}^{N} α_q(x_a) σ_{a,q} ᾱ_q(x_b)).






This implies that the C matrix itself may be written in a per-transducer element form as:








C_tx,q = [ α_q(x_1)σ_{1,q}ᾱ_q(x_1) ⋯ α_q(x_1)σ_{1,q}ᾱ_q(x_r) ⋯ α_q(x_1)σ_{1,q}ᾱ_q(x_m)
                    ⋮                         ⋮                         ⋮
           α_q(x_r)σ_{r,q}ᾱ_q(x_1) ⋯ α_q(x_r)σ_{r,q}ᾱ_q(x_r) ⋯ α_q(x_r)σ_{r,q}ᾱ_q(x_m)
                    ⋮                         ⋮                         ⋮
           α_q(x_m)σ_{m,q}ᾱ_q(x_1) ⋯ α_q(x_m)σ_{m,q}ᾱ_q(x_r) ⋯ α_q(x_m)σ_{m,q}ᾱ_q(x_m) ],




yielding:






C = Σ_{q=1}^{N} C_tx,q = (Σ_{q=1}^{N_1} C_tx,q) + ⋯ + (Σ_{q=N_{M−1}+1}^{N} C_tx,q).







This implies that the C matrices for individual transducers may be collected together by a recursive or hierarchical process that exploits sum-reduction operators to construct successively more complete representations of a distributed system of transducers, to be solved in a central location (or the computation repeated in distributed locations for better fault tolerance). However, as the matrix B is required to take the z vector produced and reconstruct the transducer excitations, it is also necessary to express B as a disjoint set of matrices, where the B matrix for a given transducer q may be written:








B_tx,q = [ σ_{1,q}α_q(x_1)
                 ⋮
           σ_{m,q}α_q(x_m) ],




so the element of the excitation vector corresponding to transducer element q is written:






Bᴴ_tx,q z = x_q.


Therefore, since each portion of the x vector may be stored locally, as it is only required to drive the local transducer elements, no information regarding individual transducer elements or their weightings needs to be communicated globally to obtain the transducer excitations; only the C matrix for each subset of the system is required to be communicated.


The subsets of the C matrix may be generated locally to the transducer array in a functional unit which will be labelled a “tile”. The “central location” described will be labelled a “solver”. These “tiles” would then each generate a subset of the C matrix and communicate it to the “solver”, where the matrix would be solved to produce the z vector which is communicated back to the “tiles” and expanded into the x vector which informs how the transducers are driven.
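

For illustration, the tile/solver data flow just described might be sketched as follows; the tile dictionaries holding local blocks of A and B are assumptions for the example, the point being that the solver only ever sees m×m C contributions, never per-transducer data.


    import numpy as np

    def solve_distributed(tiles, b):
        # Each tile contributes its m x m piece of C (sum-reduction); the
        # solver solves once for z; each tile then expands z locally into
        # its own portion of the x vector.
        m = b.shape[0]
        C = np.zeros((m, m), dtype=complex)
        for tile in tiles:
            C += tile["A"] @ tile["B"].conj().T
        z = np.linalg.solve(C, b)
        return [tile["B"].conj().T @ z for tile in tiles]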


Incarnations of the A matrix may also be constructed to generate linear acoustic quantities, such as the acoustic pressure or the particle velocity of the acoustic medium in a known direction. As these are linear quantities, they may be calculated by applying disjoint portions of the A matrix and transmitting only the computed quantities. Then, via a similar sum-reduction process to that required to compute the C matrix for the system, any given acoustic quantity may be calculated from synchronised application of portions of x vectors, as any linear quantity may be found as:








A_tx,q = [ α_q(x_1)
              ⋮
           α_q(x_m) ],

ά_Ω,q = A_tx,q x_q,




where ά_Ω,q is the contribution of the transducer q to the final simulated linear acoustic quantity field. This is useful to give an added layer of control and certainty to the system, allowing the controlling device to accurately gauge the capabilities and constraints of the platform or platforms with which it may be communicating.


However, this has the significant limitation that it does not account for the time-of-flight: to realize the field, the transducers must be driven with the amplitude and phase given by the solution for all time, as the complex-valued transducer field functions are valid only for a single-frequency field, and an exact single frequency implies an infinite spread in time.


The reason why a phased array of transducers must be actuated for all time to achieve this is that the speed of sound causes any change in the transducer output to lag behind the creation of a control point in the field by the travel time of the acoustic wave. As a result, as the size of the transducer array or the separation of elements increases, causing greater differences in time-of-flight, errors accumulate in the created field.


A way to compensate for this is by offsetting the time of the control point so that the wave travel time is accounted for: by emitting the appropriate amplitude and phase of the wave from each transducer at a time corresponding to the time-of-flight subtracted from the convergence time of the point. However, as each transducer is a different distance from the point, each must contribute to any given control point at a fixed place and time from a different emission time, so their states cannot be solved for simultaneously. The trajectory of the control points must be intersected with the time evolution of the wave from each transducer, where each transducer contributes to each point along the timeline at a different time offset; then, to achieve given control point amplitudes at points in time, many transducers from different times must be controlled through time.


These issues result in having to extend each matrix to solve for the drive to all transducers at all times to create all control points at all times. This implies a very large matrix formulation that cannot be solved satisfactorily.


One way to approximate this is by considering which elements of this large matrix dominate and constructing a solution matrix at each point in time for only these elements. This can be considered as finding points on the control point trajectory with the maximum possible deposited output energy with emission from the actuated transducer at the current time step and adding only the rows and columns corresponding to these after having split up the output along the control point trajectory using a heuristic. Then, basis sets of transducers that are to actuate the control points along the trajectory are built with a temporal envelope function, so the further away in time the wave generated from the transducer activation is, the less the given transducer can contribute. This is described in more detail in a prior patent application.


This approach has some drawbacks. Having to manage and query the acoustic output data associated with the control point trajectory through time involves large data structures, which contribute to the cost and engineering complexity of the solution. Further, the total power is limited as only large groups of similar time-of-flight transducer elements are considered at each time step. Transducers with poorly matching time-of-flight are forced to contribute at greatly reduced power due to the temporal envelope function, reducing efficiency. While the temporal envelope function can be modified to provide a trade-off between high-efficiency and temporal accuracy, with this approach both cannot be achieved simultaneously. Given the high cost of transducers and high complexity of this approach, the implementation is hard to justify.


In this disclosure, a simple method using hierarchical subdivision to produce a distributed system will be described.


At the top level there is a “manager”, responsible for managing the data flow and demands from the user, optionally applying user-specified transformations and collating global data on the state of the produced and producible acoustic field.


Although there is notionally one single “manager”, some embodiments may use a “multihead” strategy where the same global data is accumulated in multiple locations for redundancy or other reasons. This device has notionally one or more children, each of which is a potential subtree or notional subgraph of “solver” elements.


The “solver” elements each represent a single “timezone” wherein larger-than-wavelength time-of-flight adjustments between transducer pairs, which cannot be made using phased-array control techniques, are deemed to be unnecessary, so each of these and its controlling units are considered to be a single phased-array system. It solves the control point problem to obtain coefficients using data gathered from its children's subtrees and adjusts timings on the input to ensure that the time-of-flight from some indicative central location for the timezone is accounted for. This device has notionally one or more children, each of which is a potential subtree or notional subgraph of “tile” elements.


The “tile” elements each have models for the collection of transducers under their control and can use these models to determine the coefficients of the transducers to estimate relative levels of acoustic quantities, determine entries for the local contribution to the control point relations matrix or C matrix and expand the solution vector coefficients back into coefficients for transducer drives. Each of these is either directly connected to the transducers or otherwise controls a set of transducers which may be instructed in time to emit an ultrasonic wave at the same frequency with a given amplitude and phase offset, although clearly changing this amplitude and phase offset in time yields frequency shifting output.


Firstly, the useful output that can be obtained from the marshalling of the transducer resources controlled by each “solver” unit must be estimated in order to determine how to split the generation of the acoustic field amongst them. Then each group, while having its available transducers close in space and thus close in time, has a single but different time-of-flight to each control point in the acoustic field. As a result, the control points upon input to each “solver”, in order to be expressed in the interference pattern at the correct time and also synchronized between all “solver” units, can be moved in time from the convergence time expressed by the user to an appropriate emission time by subtracting a time-of-flight indicative of the distance of the subset of transducer elements available to it. This may be a distance from some indicative central location of the subset of transducers; it may be a minimum distance from any single member transducer; or it may be computed from a global-to-solver-local transformation or using another heuristic. Consequently, the complex-valued drives for the subset of transducers addressed by the solver may be solved for by using the standard phased array paradigm on the control points having been modified in time in this way.
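

A minimal sketch of this retiming step follows (Python); the indicative reference location heuristic and the names are assumptions for illustration.


    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air

    def emission_time(convergence_time, point, solver_reference):
        # Shift a control point from its user-specified convergence time
        # to this solver's emission time by subtracting an indicative
        # time-of-flight from the solver's reference location.
        distance = np.linalg.norm(np.asarray(point) - np.asarray(solver_reference))
        return convergence_time - distance / SPEED_OF_SOUND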


However, as each of the control points changes in relative emission time between solving units, it is impossible to know ahead of time which control points will be sharing the resource on each emission time step. In this case, since only the proportional splitting of the problem need be conducted between solving units, the estimation may be conducted independently of other control points by computing the diagonal of the C matrix, that is:





$\alpha_r \propto \alpha^{\natural}_{rr} \cdot \beta_r,$


which estimates what proportion of the linear acoustic quantity can be produced by a unit drive of the available transducers with a fully focusing phase angle. As a result, this is an effective predictor, comparable between solving units, of the relative proportion of the output linear acoustic quantity that can be brought to bear.
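

As a minimal illustration, assuming the common phased-array formulation in which the C matrix is C = AA^H (so that its diagonal entry for a control point is the sum of squared magnitudes of the per-transducer basis amplitudes), this estimate might be computed as in the following Python sketch; the function and argument names are illustrative.

```python
import numpy as np

def diagonal_estimate(alpha, beta_r):
    """Estimate the linear acoustic quantity producible at one control
    point by a unit drive of the available transducers with a fully
    focusing phase angle.

    alpha  : per-transducer complex basis amplitudes at the control point
    beta_r : requested drive coefficient for the point

    Under the assumed C = A A^H model, the diagonal entry c_rr is the sum
    of squared amplitude magnitudes; scaling by |beta_r| gives a value
    proportional to the producible output, comparable between solvers.
    """
    alpha = np.asarray(alpha)
    c_rr = np.sum(np.abs(alpha) ** 2)
    return c_rr * abs(beta_r)
```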


To realize this extra functionality, which involves multiple “solver” functional units, an extra level in the hierarchy must be introduced: the “manager”. While this is for simplicity described as a single unit, duplicating the necessary data would enable decentralized approaches to function with equal effectiveness.


A directed acyclic graph (DAG) may then be generated, which for simplicity may also double as the communication links between nodes. In the example configuration, a single “manager” node connects to one or more subtrees of “solver” nodes, which then connect to one or more subtrees of “tile” nodes as described earlier. Together these links between nodes notionally form a DAG.
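

For concreteness, such a hierarchy might be represented as in the following Python sketch; the class and field names are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Tile:
    """Leaf element: models and drives a set of transducers."""
    transducer_positions: List[Tuple[float, float, float]]  # tile frame
    children: List["Tile"] = field(default_factory=list)


@dataclass
class Solver:
    """One "timezone": a subtree solved as a single phased array."""
    tiles: List[Tile]
    children: List["Solver"] = field(default_factory=list)


@dataclass
class Manager:
    """Root: collates global state and splits work among solver subtrees."""
    solvers: List[Solver]
```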


Taking only the manager-to-solver and solver-to-solver unit connections, a strategy for the implementation of the scheme may be described as follows. Initially, the user submits to the “manager” a request for an acoustic field consisting of keyframes of control points that define an acoustic field moving in time, which is expected to be temporally consistent. The “manager” enqueues these, dequeuing them based on their requested time but working ahead in time by a maximum time-of-flight. This maximum time-of-flight value may be defined using one or more of: a preset, a computation from the control points, a filtered value, a user specification or a heuristic, and may be defined on a per-control-point basis. On dequeuing, the subtree comprised of “solver” units is sent a request for linear acoustic quantity output estimates, which includes the control point location and normal vector (to ensure that no waves are added to the estimate that travel in a direction inappropriate to the acoustic field goal).
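

A sketch of this look-ahead behavior, under the assumption of a simple priority queue keyed on convergence time, is given below; ManagerQueue and its methods are hypothetical names, and the keyframe payload is left opaque.

```python
import heapq
import itertools


class ManagerQueue:
    """Release keyframes a maximum time-of-flight ahead of their
    requested convergence time, so downstream solvers have room to
    retard each point by their own indicative time-of-flight."""

    def __init__(self, max_time_of_flight):
        self.max_tof = max_time_of_flight
        self.heap = []
        self._tie = itertools.count()  # tiebreaker; keyframes never compared

    def enqueue(self, convergence_time, keyframe):
        heapq.heappush(self.heap, (convergence_time, next(self._tie), keyframe))

    def dequeue_ready(self, now):
        """Pop every keyframe whose convergence time is within
        max_time_of_flight of the current time."""
        ready = []
        while self.heap and self.heap[0][0] - self.max_tof <= now:
            t, _, keyframe = heapq.heappop(self.heap)
            ready.append((t, keyframe))
        return ready
```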


As the estimation results traverse up each solving unit subtree, each “solver” unit node aggregates estimation results by summing the results of its child subtrees. Each child subtree, alongside the current node, is tagged with the summation to effect a recursive decomposition of the set of solving units, effectively tagging each node link and the node itself with a “breadcrumb”: the estimated proportion of the linear acoustic quantity producible by the subtree and node at each control point. Then, when the aggregate estimation results reach the top of the tree, the control points that correspond to the estimates are pushed through the tree a second time, effecting the actual solution to the acoustic field generation problem.


The control point definitions in both the estimation and actual solution cases may be the same and simply flow both down and up a communication network tree; they may be queued twice in separate queues; or they may be dequeued from the same initial user-supplied control point queue. As the control points traverse the subtrees a second time, the output acoustic quantity requested is recursively split or “fissioned” using the “breadcrumbs”, the estimates set earlier, to determine what proportion of the initial user request to fulfil with each subtree.
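

The two traversals might be sketched as follows, assuming each node exposes hypothetical local_estimate and solve_share operations; the first pass leaves the “breadcrumbs” and the second splits the requested output quantity in proportion to them.

```python
def estimate(node, point):
    """First traversal: aggregate estimates up the subtree, leaving the
    per-child shares on the node as "breadcrumbs"."""
    node.own_crumb = node.local_estimate(point)        # hypothetical model call
    node.child_crumbs = [estimate(c, point) for c in node.children]
    return node.own_crumb + sum(node.child_crumbs)


def fission(node, point, requested):
    """Second traversal: recursively split the requested output quantity
    among the node and its children in proportion to the breadcrumbs."""
    total = node.own_crumb + sum(node.child_crumbs)
    if total <= 0.0:
        return
    node.solve_share(point, requested * node.own_crumb / total)  # hypothetical
    for child, crumb in zip(node.children, node.child_crumbs):
        fission(child, point, requested * crumb / total)
```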


Setting up the “breadcrumbs” can be achieved using queues to precisely match timings, although because the only time differences are due to communications up and down the tree, timings may be neglected and simple estimation values (which may or may not be low-pass filtered) can be read, as any errors incurred due to mismatches between time steps are likely to be negligible.


As the requested control points reach each “solver” unit, the amount of solution required of the unit is submitted. For both the estimation and the actual solution traversals, the time-of-flight compensation is computed to ensure that any optional transformations into the local solver spatial reference frame are completed with correct reference to the time. If the transducers are moving, it is important to resolve the circular dependency between the time-of-flight, which depends on distance and thus on transducer location, and any transformations between the global and solver-local spaces, which must be computed at an emission time that itself depends on the time-of-flight. The exemplar approximate solution given here, which may be simplified or equally made more involved, is to compute the transform based on the wave coalescence/convergence time, yielding an initial transformation and thus an initial time-of-flight, and then to use this to compute a final transformation and thus a final time-of-flight at emission time. This assumes that the movement of the transducer array over the time-of-flight may not be negligible in terms of spatial shifts at the control point; if it is known that such shifts will be negligible, the initial transformation step may instead be made authoritative and final.
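

The exemplar two-pass resolution might be sketched as follows, where frame_at is a hypothetical callback returning the 4x4 global-to-solver-local transform at a given time, and the solver-local origin stands in for the indicative central location (the economy noted under Variations below).

```python
import numpy as np

C_SOUND = 343.0  # m/s; an assumed nominal speed of sound


def two_pass_emission(point_global, convergence_time, frame_at):
    """Resolve the circular dependency between time-of-flight and a moving
    solver frame with a single fixed-point refinement.

    frame_at(t) is a hypothetical callback returning the 4x4 global-to-
    solver-local transform at time t.
    """
    p = np.append(np.asarray(point_global, dtype=float), 1.0)  # homogeneous
    # Pass 1: transform at the convergence time to get an initial ToF.
    local = frame_at(convergence_time) @ p
    tof = np.linalg.norm(local[:3]) / C_SOUND   # distance from frame origin
    # Pass 2: re-transform at the implied emission time for the final ToF.
    local = frame_at(convergence_time - tof) @ p
    tof = np.linalg.norm(local[:3]) / C_SOUND
    return convergence_time - tof, local[:3]    # emission time, local point
```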


Once these have been resolved, the control points incoming to the local “solver” unit are transformed into the local space, adjusted in time using the time-of-flight and queued in a “solver” local control point queue. The remainder of the solution steps, with the phase preset by the heuristic mechanism described, may then be computed with linear systems and output to the transducers as described previously, although it is assumed that there is a synchronized clock between all nodes to allow for synchronized output.


B. Variations


One way to define a solver-local time-of-flight is to use the distance from the origin of the solver-local spatial reference frame as computed by the global-to-solver-local transform. This is a way to economize on external data inputs as this indicative central location is then simply implied by the reference frame.


The estimate steps may be used to generate a target phase for the control points. The phase may be obtained from the average time-of-flight from each transducer, weighted by the output linear acoustic quantity producible by each. Equally, this could be generated as the shortest time-of-flight from any one transducer. It is possible that there are further heuristics that may be used.
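

A sketch of the weighted-average heuristic follows, assuming the estimation step returns per-transducer times-of-flight and producible-quantity magnitudes; substituting min(tofs) for the weighted mean would give the shortest time-of-flight alternative mentioned above.

```python
import numpy as np

def target_phase(tofs, producible, frequency):
    """Derive a target phase for a control point from estimation data.

    tofs       : per-transducer times-of-flight to the point, in seconds
    producible : per-transducer producible linear acoustic quantity
    frequency  : carrier frequency in Hz (e.g. 40 kHz is typical)
    """
    tofs = np.asarray(tofs, dtype=float)
    weights = np.asarray(producible, dtype=float)
    mean_tof = np.sum(weights * tofs) / np.sum(weights)
    # Sign convention is illustrative; only consistency between solvers matters.
    return (2.0 * np.pi * frequency * mean_tof) % (2.0 * np.pi)
```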


The estimation step can also be made to return information about cross-conversions between various acoustic quantities, including non-linear quantities, as these are often related to the linear acoustic quantities that are the target of the solving process. Further, array apodization information may be added to the estimate query to imply a certain apodization is in effect during the estimation of the other acoustic quantities. Other effects, including interference from other control points, are modelled out by considering them as a random noise function atop the initial estimation and so can be neglected, or otherwise may be added as extra data in the estimation query.


C. Figures

Turning to FIG. 8, shown is the “Estimation Phase” 800 of the method by which the device generates the desired acoustic field. The estimation phase may include the following steps:


1. The manager receives new control trajectory points from the user 802 with only a convergence time. These points are transformed by any pinning spatial transforms specified by the user and are sent to the solver elements 810, 804, 822.


2. Solver elements each subtract the point location from the center of the solver element group at emission time (ideally, although convergence time may be used). This distance then yields the time-of-flight offset. The points are forwarded to the tiles 808, 812, 814, 818, 816, 806, 820, 824, 826.


3. Tiles 808, 812, 814, 818, 816, 806, 820, 824, 826 each transform the point position and normal by the emitter transform at emission time.


4. Tiles 808, 812, 814, 818, 816, 806, 820, 824, 826 each compute the acoustic quantities formed by approximate basis functions.


5. Tiles 808, 812, 814, 818, 816, 806, 820, 824, 826 generate exportable estimation data, which may involve transformation of any acoustic particle velocity vectors computed by the inverse of the emitter transformation.


6. Tiles 808, 812, 814, 818, 816, 806, 820, 824, 826 return the estimation data to their parent elements while summing acoustic quantities, drive sums and drive-weighted distances, as per the desired phase heuristic in use.


7. A (potentially filtered) copy of the total potential solver output (a selected indicative linear acoustic quantity to use for the linear system solution) is left at each point on the interface inbound to the summed subtree as a “breadcrumb”.


8. Acoustic quantities, drive sums and drive-weighted distances from each solver element 810, 804, 822 are summed.


9. The manager uses the sum total of the limiting acoustic quantities to find conversion factors between the acoustic quantities for the user-given points, converting them to a problem posed in terms of an objective value in a linear acoustic quantity, and uses the drive sum and drive-weighted distance to find a suitable phase angle for each point, as in the sketch following this list.
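

This final step might be sketched as follows; the argument names and the choice of quantities are assumptions, with the conversion factor taken as the ratio between the selected linear quantity total and the limiting quantity total, and the phase derived from the drive-weighted mean distance as in the heuristic described above.

```python
import numpy as np

def pose_linear_problem(target_value, linear_total, limiting_total,
                        drive_sum, drive_weighted_distance,
                        frequency, c=343.0):
    """Convert a user target in a limiting acoustic quantity into a
    complex objective posed in the selected linear acoustic quantity."""
    objective = target_value * (linear_total / limiting_total)
    mean_tof = (drive_weighted_distance / drive_sum) / c
    # Sign convention on the phase is illustrative.
    phase = (2.0 * np.pi * frequency * mean_tof) % (2.0 * np.pi)
    return objective * np.exp(1j * phase)
```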


Turning to FIG. 9, shown is the “Compute Solution” 900 phase of the method by which the device generates the desired acoustic field via a manager 902. The compute solution phase may include the following steps:


1. Each “breadcrumb” is used to divide the acoustic output work amongst the solver element children of each parent element and the parent element itself (if it is a solver element), with the result that each solver element 903, 907, 930 receives work posed as a fractional problem.


2. Solver elements 903, 907, 930 each subtract the point location from the center of the solver element group at emission time (ideally, although convergence time may be used). This distance then yields the time-of-flight offset. The points are forwarded to the tiles.


3. The points are scheduled in queues within each solver 903, 907, 930 according to their emission time, being popped from the queue to be emitted at an appropriate time offset.


4. Tiles 910, 912, 915, 905, 920, 921, 922, 925, 927 each transform the point position and normal by the emitter transform at emission time.


5. Tiles 910, 912, 915, 905, 920, 921, 922, 925, 927 compute the core control point relations matrix for their transducers.


6. Tiles 910, 912, 915, 905, 920, 921, 922, 925, 927 return the core control point relations matrix for their transducers to their parents.


7. The element that is the parent of each tile 910, 912, 915, 905, 920, 921, 922, 925, 927 subtree sums the core control point relations matrix for each tile element subtree.


8. Each solver element 903, 907, 930 solves the specified fractional control point problem in the linear acoustic quantity, obtaining a complex weight for the drive and drive range of each basis set of transducers; a sketch of this solution step follows this list.
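

A sketch of this solution step under the standard minimum-norm phased-array formulation follows, where the control point relations matrix assembled from the tile contributions is inverted against this solver's fractional targets; nothing beyond that standard formulation is implied.

```python
import numpy as np

def solve_fraction(A, b_fraction):
    """Solve one solver's fractional control point problem.

    A          : (control points x transducers) matrix of per-transducer
                 basis amplitudes, summed up from the tile subtrees
    b_fraction : complex targets, pre-scaled to this solver's
                 breadcrumb share

    Uses the minimum-norm solution z = A^H (A A^H)^-1 b, where A A^H is
    the control point relations (C) matrix.
    """
    C = A @ A.conj().T
    sigma = np.linalg.solve(C, b_fraction)
    return A.conj().T @ sigma  # complex drive coefficients per transducer
```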



FIG. 10 shows the “Apply Solution” 1000 phase of the method by which the device generates the desired acoustic field via a manager 1030. The apply solution phase may include the following steps:


1. The solution vectors are transmitted to each tile 1001, 1002, 1012, 1023, 1025, 1035, 1040, 1042, 1045.


2. Tiles 1001, 1002, 1012, 1023, 1025, 1035, 1040, 1042, 1045 each expand the local transducer coefficients given by the solution vectors into transducer states, while recording the maximum transducer drive in the range and computing the current contribution to the acoustic field using simulated transducer behavior, where the non-local acoustic and maximum drive data comprise the feedback data.


3. Transducer coefficient data is queued to be emitted at the corresponding emission time.


4. Tiles 1001, 1002, 1012, 1023, 1025, 1035, 1040, 1042, 1045 return the feedback data.


5. The feedback data is accumulated from the tile elements 1001, 1002, 1012, 1023, 1025, 1035, 1040, 1042, 1045, summing the acoustic field contributions for this emission time and performing a maximum reduction on the transducer drive range.


6. The timestamps on the feedback data are changed from emission time to convergence time and interpolated to allow them to be synchronously summed and reduced to correctly reflect the effect of producing the points across multiple solver elements 1010, 1033, 1044 (see the sketch following this list).


7. Synchronized feedback data for each timestamp across solver elements 1010, 1033, 1044 is accumulated.


8. The manager unit receives a synchronous feedback stream to adapt and adjust output power and appropriately set the valid input range.


9. Optional monitoring and status data, including acoustic monitoring information, is presented to the user.
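

Steps 6 and 7 might be sketched as follows, re-timing one solver's feedback stream from emission to convergence time and resampling it onto a shared timestamp grid so that streams from different solver elements can be summed synchronously; all names here are illustrative.

```python
import numpy as np

def to_convergence_time(emission_times, values, tof, grid):
    """Shift a real-valued feedback stream from emission to convergence
    timestamps and linearly interpolate it onto a shared grid."""
    convergence = np.asarray(emission_times, dtype=float) + tof
    return np.interp(grid, convergence, np.asarray(values, dtype=float))

# Usage sketch: resample each solver's stream onto the grid, then sum.
# total = sum(to_convergence_time(t, v, tof, grid)
#             for (t, v, tof) in per_solver_feedback)
```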


III. CONCLUSION

In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.


Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way but may also be configured in ways that are not listed.


Further, in this specification the overbar operator, as used in expressions such as $\overline{\alpha(\chi_b)}$, is defined as having a real scaling factor on each vector component for prioritizing transducers, thus generating weighted basis vectors in the space of complex-valued transducer activations, in addition to the usual meaning of complex conjugation.


The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.

Claims
  • 1. A system comprising: a device comprising a plurality of acoustic transducers for creating an acoustic field exhibiting amplitudes at at least one focus point; a primitive for providing a haptic effect in mid-air at a targeted location using the plurality of acoustic transducers, wherein the primitive comprises: at least one coordinate location with an associated amplitude; at least one transform; and at least one acoustic solver.
  • 2. The system of claim 1, wherein the primitive comprises an amplitude modulation point.
  • 3. The system of claim 1, wherein the primitive comprises a set of sequential points that draw a circle.
  • 4. The system of claim 1, wherein the at least one transform comprises animation transforms and stored transforms.
  • 5. The system of claim 4, wherein the animation transforms are applied to their associated base transform at regular intervals.
  • 6. The system of claim 5, wherein the animation transforms comprise affine transforms.
  • 7. The system of claim 4, wherein the at least one transform generates outputs in 3 dimensions.
  • 8. The system of claim 1, further comprising a ring buffer for a queue that maps onto behavior of the plurality of acoustic transducers along a timeline when a user is actively streaming data to the device, thereby creating past states.
  • 9. The system of claim 8, further comprising future states defining future behavior of the device, that are added incrementally to the ring buffer at a write index.
  • 10. The system of claim 9, further comprising present states created by interpolating the future states with past states.
  • 11. The system of claim 1, further comprising a ring buffer for a queue that maps onto behavior of the plurality of acoustic transducers along a timeline, thereby creating past states.
  • 12. The system of claim 11, further comprising new state data generated by extrapolating desired future behavior of the device from the past states.
  • 13. The system of claim 12, wherein the new state data is extrapolated when a user is not actively streaming data to the device.
  • 14. A system comprising: a device comprising a plurality of acoustic transducers for creating an acoustic field exhibiting amplitudes at at least one focus point; a control scheme for the plurality of acoustic transducers to compensate for time-of-flight artifacts, achieved by recursively subdividing the control scheme into subtrees of phased-array nodes whose output is estimated where a desired field drive is distributed amongst the phased-array nodes.
  • 15. The system as in claim 14, further comprising: a manager unit; at least one solver associated with the manager unit; at least one tile associated with each of the at least one solver; solution vectors that are transmitted to each of the at least one tiles; wherein each of the at least one tiles expands local transducer coefficients given by the solution vectors into transducer states, while recording maximum transducer drive in range and computing current contribution to the acoustic field using simulated transducer behavior, wherein non-local acoustic and maximum drive data comprise the feedback data; wherein transducer coefficient data is queued to be emitted at a corresponding emission time; wherein at least one tile returns the feedback data; wherein the feedback data is accumulated from elements of the at least one tile, summing the acoustic field contributions for an applicable emission time and performing a maximum reduction on a transducer drive range; wherein timestamps on the feedback data are changed from emission time to convergence time and interpolated to be synchronously summed and reduced to correctly reflect the effect of producing points across elements of the at least one solver; wherein synchronized feedback data for each timestamp across elements of the at least one solver is accumulated; wherein the manager unit receives a synchronous feedback stream to adapt and adjust output power and appropriately set valid input range.
  • 16. The system as in claim 15, further comprising: monitoring and status data, including acoustic monitoring information for presentation to a user.
PRIOR APPLICATIONS

This application claims the benefit of the following two applications, both of which are incorporated by reference in their entirety: (1) U.S. Provisional Patent Application No. 63/266,331, filed on Jan. 2, 2022; and (2) U.S. Provisional Patent Application No. 63/268,198, filed on Feb. 18, 2022.

Provisional Applications (2)
Number Date Country
63268198 Feb 2022 US
63266331 Jan 2022 US