If an Application Data Sheet (“ADS”) has been filed on the filing date of this application, it is incorporated by reference herein. Any applications claimed on the ADS for priority under 35 U.S.C. §§119, 120, 121, or 365(c), and any and all parent, grandparent, great-grandparent, etc., applications of such applications, are also incorporated by reference, including any priority claims made in those applications and any material incorporated by reference, to the extent such subject matter is not inconsistent herewith.
The present application claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Priority Applications”), if any, listed below (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 U.S.C. §119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc., applications of the Priority Application(s)).
None.
If the listings of applications provided herein are inconsistent with the listings provided via an ADS, it is the intent of the Applicants to claim priority to each application that appears in the Priority Applications section of the ADS and to each application that appears in the Priority Applications section of this application.
All subject matter of the Priority Applications and the Related Applications and of any and all parent, grandparent, great-grandparent, etc., applications of the Priority Applications and the Related Applications, including any priority claims, is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
This application is related to U.S. patent application Ser. No. 14/203,401, filed on Mar. 10, 2014, titled SYSTEMS AND METHODS FOR ULTRASONIC POSITION AND MOTION DETECTION, and to U.S. patent application Ser. No. 14/280,463, filed on May 16, 2014, titled SYSTEMS AND METHODS FOR ULTRASONIC VELOCITY AND ACCELERATION DETECTION, which applications are hereby incorporated by reference in their entireties.
This disclosure relates to systems and methods for imaging an object within a region and inducing a haptic sensation on a site of the object using parametric ultrasound. Specifically, this disclosure provides systems and methods for generating a haptic sensation for use in combination with entertainment and/or infotainment devices.
A system may use parametric ultrasound to induce a haptic sensation in or on a user. For example, the haptic sensation may include a tactile stimulus at an identified site of a user. The system may direct multiple ultrasonic pulses toward a point proximate a user of the system. Mixing may occur between the ultrasonic pulses at the location where the ultrasonic pulses intersect. A new, lower frequency pulse may be generated at the site of the mixing ultrasonic pulses. The generated low frequency pulse may be at a beat frequency of the original ultrasonic pulses. A beat frequency may be generated at a sufficiently low frequency to induce tactile stimuli in human skin (e.g., 10 Hz to 500 Hz).
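The arithmetic of the beat frequency can be illustrated with a brief sketch (Python with NumPy is assumed; the specific carrier frequencies are hypothetical). Note that linear superposition alone merely produces an envelope at the difference frequency; it is the nonlinear mixing described herein that demodulates that envelope into an actual low frequency pressure wave.

```python
import numpy as np

# Hypothetical carrier frequencies for the two ultrasonic pulses.
f1 = 40_000.0   # 40 kHz pulse
f2 = 40_200.0   # 40.2 kHz pulse

fs = 1_000_000.0                     # 1 MHz sample rate
t = np.arange(0.0, 0.05, 1.0 / fs)   # 50 ms observation window

# Superpose the two pulses as they would overlap at the intersection point.
mixed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The envelope of the sum oscillates at the beat frequency |f1 - f2|,
# which here falls inside the 10-500 Hz tactile range noted above.
print(f"Beat frequency: {abs(f1 - f2):.0f} Hz")  # -> 200 Hz
```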
The type and/or quality of the induced tactile stimuli (or other haptic sensation) may be controlled or adjusted by altering the amplitude, frequency, and/or duration of the ultrasonic pulses. The system may be calibrated to induce a multiplicity of different stimuli (e.g., touch, stroke, brush, impulse, or motion). The induced tactile stimuli may also be directed toward an identified area of interest (i.e., a portion or site of a user), such as a person's hands, arms, legs, feet, lips, face, or head.
A system using parametric ultrasound to induce tactile stimuli may use compressed sensing, or other sensing and imaging techniques, to generate image data. The image data may be useful in directing the operation of the parametric ultrasound.
A system that uses parametric ultrasound to provide induced tactile stimuli may also utilize a dual modality sensor to generate image data. The system may use a first modality to generate coarse image data of an object or region of interest (e.g., a user of an entertainment system). The system may then use this coarse image data to identify portions of interest of the object. The portion of interest may, as a specific example, be a site on a user such as hands, arms, legs, or feet. The system may use a second modality to generate fine image data of the identified portions of interest. Compressed sensing may be used to generate or enhance either the coarse image data or the fine image data.
For example, in various embodiments, a system may include one or more ultrasonic transmitters and/or receivers to implement a first modality. In some embodiments the transmitter(s) and/or receiver(s) may be embodied as one or more transceivers. An ultrasonic transmitter may be configured to transmit ultrasound into a region bounded by one or more surfaces. The ultrasonic receiver may receive direct ultrasonic reflections from one or more objects within the region. As described in detail below, the system may use the ultrasonic transmitters and/or receivers, in combination with compressed sensing, to generate coarse image data of an object and identify, based on the coarse image data, portions of interest of the object.
For instance, in certain examples, a system may also be configured to receive, via an electromagnetic receiver, an electromagnetic reflection from an object within a region. The system may generate fine image data of identified portions of interest using the received electromagnetic reflection. For example, after a portion of interest has been identified via coarse image data, the system may receive electromagnetic radiation from the identified portion of interest and use compressed sensing techniques to generate image data with greater resolution than available in the coarse image data (referred to herein as fine image data).
Either of the two modalities discussed above (i.e., electromagnetic imaging and ultrasound) can be utilized by the system to generate either the fine image data or the coarse image data. For example, a first embodiment may utilize ultrasound to generate coarse image data and electromagnetic imaging to generate fine image data, whereas a second embodiment may utilize electromagnetic imaging to generate coarse image data and ultrasound to generate fine image data. Compressed sensing may be used to generate or enhance either or both modalities.
The image data may be useful in directing the operation of the parametric ultrasonic transceiver used to induce haptic sensation(s) such as tactile stimuli. In particular, the image data may be useful for determining the location of a user or target for providing an induced haptic sensation. For example, a system may utilize generated image data to determine the location of a user and/or site(s) of interest on the user for providing induced tactile stimuli. The system may then induce calibrated tactile stimuli at the identified site(s) of interest.
The foregoing summary is illustrative only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
Entertainment systems are often rated on their ability to both interact with and entertain their users. The interaction between the user and the entertainment system may take a multiplicity of forms. For example, the entertainment system may interact with the user visually, audibly, or haptically through physical contact with a controller or other device. The user, in turn, may interact with the entertainment system through use of a remote control or a video game controller, or through some other means. This interaction may be enhanced by the ability to provide haptic feedback to the user without the need for physical contact between the user and the entertainment system or some peripheral of the entertainment system (e.g., a video game controller). This type of haptic feedback may be generated using parametric ultrasound.
A system may use parametric ultrasound to generate a low frequency pulse at a point proximate the user of the system. Pulses at various frequencies (e.g., 20 Hz to 300 Hz) may be felt by the user as a haptic sensation (e.g., an induced tactile sensation or a touch). By altering the frequency, amplitude, or duration of the ultrasonic pulses, the form, type, and/or quality of the tactile sensation may be altered. It may be possible to create a multiplicity of different sensations through the manipulation of the ultrasonic pulses. For example, in some embodiments the system may be calibrated to generate a pulse that feels like a touch, a brush, a stroke, a pinch, an impulse, or even a kinesthetic sensation.
The low frequency pulse may be generated by directing a first ultrasonic pulse along a vector relative to the user of the entertainment system and a second ultrasonic pulse along a different vector relative to the user. The two ultrasonic pulses may intersect at a point proximate a location or site identified for receiving an induced tactile stimulus. Ultrasonic pulses behave nonlinearly in air and tissue. The nonlinear behavior of the ultrasonic pulses may cause mixing to occur at the point where the pulses intersect. The mixing of the ultrasonic pulses may generate a new low frequency pulse at a beat frequency of the original ultrasonic pulses.
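As a purely illustrative sketch of the aiming geometry (not a disclosed algorithm), the point where two pulse vectors cross may be computed as the closest point between two rays; the emitter positions and directions below are hypothetical.

```python
import numpy as np

def beam_intersection(p1, d1, p2, d2):
    """Closest point between two beam rays p_i + t_i * d_i (i = 1, 2).

    A sketch of how two ultrasonic pulses might be aimed so that they cross
    at a target site: returns the midpoint of the shortest segment joining
    the two rays, or None if the rays are parallel.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    if abs(denom) < 1e-12:           # parallel beams never intersect
        return None
    # Minimize |(p1 + t1*d1) - (p2 + t2*d2)|^2 over t1, t2.
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Example: two emitters at different corners aimed at roughly the same point.
print(beam_intersection(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]),
                        np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 1.0])))
```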
As used herein an ultrasonic pulse may be any duration and/or include any number of shorter pulses. For example, an ultrasonic pulse may be a continuous transmission of ultrasound. As another example, an ultrasonic pulse may be a continuous ultrasound transmission lasting for a finite period of time. As another example, an ultrasonic pulse may comprise a plurality of shorter pulses having the same or different frequencies, phases, and/or amplitudes.
A system may be developed to provide haptic feedback to a user of an entertainment system using parametric ultrasonic techniques. Such a system may include various subsystems or modules to facilitate the haptic feedback. For example, a system for providing haptic feedback to a user may include a radiation receiver, a position module, an identification module, an entertainment device interface, a parametric ultrasonic transmitter subsystem, and/or a haptic feedback selection module. The various modules and subsystems may be useful in determining characteristic data about the user that may influence the state of the entertainment system.
Accordingly, the state of the entertainment system may determine the type of haptic feedback to provide to the user. For example, the system may generate data relevant to the position, posture, motion, stance, or location of the user. The data may be generated for one user or many. The data may refer to sites of interest on or in the user. For example, the system may track the user's hands, feet, arms, legs, head, eyes, lips, or other regions of interest. The data relevant to these sites of interest may be instrumental in determining the state of the entertainment system. Depending on the state of the entertainment system the type and placement of the haptic feedback may be determined or adjusted.
For example, in one embodiment the system may track the movement of the user's hands. If the user waves at the entertainment display interface the entertainment system may move from a standby state to an active state. Then the system may provide haptic feedback to the user to confirm the change. From the user's perspective, this may involve turning on the entertainment display interface or displaying a selection screen. The haptic feedback may be felt by the user and interpreted as a touch on the user's hand, confirming that the system is now active. In an embodiment, the system may provide haptic feedback to confirm receipt or execution of a command or selection delivered by the user to the entertainment display interface.
The system may provide an induced haptic sensation to the user in any way that is useful for interacting with the user. In one embodiment the entertainment system may be used to display a video or movie. Then induced haptic sensations may be used to enhance the presentation of the video. For example, haptic sensations may be used to inform the user when the video is starting or stopping. In another embodiment the entertainment system may be used to play an exercise game. Then the system may use induced haptic sensations to inform the user when to step or move during a workout. The induced haptic sensations may be intended to encourage the user to pick up the pace or to confirm the completion of a workout.
There may be embodiments where multiple users interact virtually through the entertainment system display interface. The system may provide haptic sensations to each player if they score a point or may induce haptic sensations to provide feedback when a user loses a life or engages another player. It may even be possible for players to direct haptic sensations towards one another during the course of, or as part of, a game. Other embodiments may interact similarly with a single user.
In still another embodiment, a user may engage in a driving simulation on an entertainment system. Induced haptic sensations may be used to provide a sense of motion to the user. They may be used to simulate a turn, wind resistance or other sensations connected with driving. These examples are meant to be illustrative and not restrictive. They are meant to show the breadth of interaction where an entertainment system may use haptic feedback to enhance the experience of a user, convey information, induce user participation, etc.
A radiation receiver may be used to collect information about a bounded region such as a room. The receiver may receive electromagnetic radiation and/or ultrasonic radiation. The received radiation may be ambient radiation from the bounded region or it may be reflected radiation from a transmitter. The radiation receiver may convert the received radiation into an image or image data. The system may use various techniques for generating image data suitable for providing induced haptic sensations to a user.
A system may utilize compressed sensing techniques to generate image data suitable for providing induced haptic sensations to a user. For example, a system may use low resolution imaging (e.g., either ultrasound or electromagnetic radiation) to capture multiple overlapping images, where each image represents only a portion of the object of interest. The images may then be combined using compressed sensing techniques to approximate a single, higher resolution image of the entire object. A perfect approximation of the image may require, in some embodiments, thousands or millions of low resolution image captures, whereas a good or adequate approximation of the image may require only a small percentage of the required number of low resolution images for the perfect approximation. Accordingly, image data for an object may be adequately approximated by applying compressed sensing techniques to a relatively small number of low resolution images.
Examples of “compressed sensing,” including various applications of compressed sensing, methods of performing compressed sensing, uses for compressed sensing, and variations of compressed sensing that are applicable to many of the embodiments described herein, are described in N. P. Pitsianis et al., “Compressive Imaging Sensors,” Proc. of SPIE Vol. 6232, 62302A (2006); Mark A. Neifeld et al., “Optical Architectures for Compressive Imaging,” Applied Optics Vol. 46, No. 22, 5293 (2007); and Justin Romberg et al., “Compressed Sensing: A Tutorial,” IEEE statistical signal workshop (2007), available at http://users.ece.gatech.edu/justin/ssp2007/ssp07-cs-tutorial.pdf, all of which are incorporated herein by reference in their entireties.
As a specific example, an ultrasonic transmitter may be used to direct ultrasound at a small portion of a region of interest. The ultrasonic reflections may be recorded by an ultrasonic receiver. The reflections may be used to generate a very low resolution image of a small portion of the area of interest. This process may then be repeated multiple times for small, possibly overlapping, areas of interest until all or almost all parts of the larger region of interest are covered by one or more of the smaller, low resolution images. A processor may run a compressed sensing algorithm to combine the small low resolution images into an approximation of the region of interest.
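A minimal sketch of one such reconstruction step follows, assuming each low resolution capture is modeled as a row of a measurement matrix A and the scene x is sparse (directly or in some transform basis). The iterative soft-thresholding loop shown is a generic l1-regularized solver, not the specific algorithm of the incorporated references.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Minimal iterative soft-thresholding (ISTA) sketch for l1-regularized
    recovery: solves  min_x 0.5*||A @ x - y||^2 + lam*||x||_1.

    Each row of A encodes the averaging footprint of one low resolution
    capture; y holds the corresponding measured values; the returned x is
    the reconstructed (sparse) scene.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - y))      # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # shrinkage
    return x
```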
In some embodiments the low resolution images may be generated using electromagnetic reflections or ambient sources of electromagnetic radiation such as light or infrared radiation.
In some embodiments the images may be generated in color. In other embodiments the images may be generated in grayscale, infrared, UV, high dynamic range, or another false color representation.
A system may utilize a dual modality sensor system to generate image data. For instance, a system may utilize two imaging modalities for imaging an object at two different resolutions. The object may represent a user or site on a user of an entertainment system. That is, the system may utilize a first modality (e.g., either ultrasound or electromagnetic radiation) to generate image data of an object at a first resolution. The data may be generated directly or through compressed sensing techniques. The data at the first resolution may be used to determine the position of the object or user. The system may then utilize the other modality or compressed sensing to generate image data of portions of interest on the object (not necessarily the entire object) at a second resolution, where the second resolution is higher than the first resolution. The portions of interest may be sites of interest where the system could provide induced haptic sensation to the user (e.g., hands, feet, arms, legs, head, nose, chin, ears, lips). Accordingly, the dual modalities may be used to generate a coarse (i.e., lower resolution) image of the entire object using a first modality, identify portions of interest on the object, and then generate a fine (i.e., higher resolution) image of the portions of interest using a second modality.
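At a high level, the coarse-to-fine flow might be organized as in the following sketch; the sensor objects and function names are hypothetical placeholders rather than any API defined by this disclosure.

```python
# A high-level sketch of the coarse-to-fine, dual modality flow described
# above. The sensor objects and function names are hypothetical placeholders
# for whatever hardware interfaces a concrete embodiment provides.

def image_user(ultrasonic_sensor, em_sensor, identify_sites):
    # First modality: coarse, low resolution image data of the whole region.
    coarse = ultrasonic_sensor.capture_coarse()

    # Identify candidate sites of interest (hands, feet, lips, ...) from the
    # coarse image data; identify_sites may itself depend on device state.
    sites = identify_sites(coarse)

    # Second modality: fine, higher resolution data of only those sites.
    fine = {site: em_sensor.capture_fine(site) for site in sites}
    return coarse, fine
```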
As a specific example, a system may transmit ultrasound, via a first ultrasonic transmitter, into the region. An ultrasonic receiver may receive ultrasonic reflections of the transmitted ultrasound from a plurality of sites on the object within the bounded region. A processor may use compressed sensing to generate coarse image data of the object (e.g., a user of an entertainment system) at a first resolution based on the received ultrasonic reflections. The system may then identify a portion of interest on the object (e.g., a site of interest on the user) based on the coarse image data. Electromagnetic radiation may be received from the identified portion of interest on the object. Fine image data of the portion of interest on the object may be generated at a second resolution based on the received electromagnetic radiation. The second resolution may be greater than the first resolution, although in some embodiments the first resolution may be greater than the second resolution. The image data of the portion of interest may be used to provide induced haptic stimuli to sites of interest on the user.
In some embodiments, a kinematic value may be determined that is associated with the portion of interest on the object based on at least one of the received electromagnetic radiation and the received ultrasonic reflections. Similarly, in some embodiments, the state of an entertainment device may be modified based on the determined kinematic value associated with the portion of interest on the object.
As a specific example, a kinematic value may be assigned to the hand of a user of an entertainment system. Rapid changes in the kinematic value of the person's hand may trigger a change in state of the entertainment system. For example, the system may determine that the user is waving at the entertainment display interface and respond by inducing a tactile stimulus at a site of the user to acknowledge the user's wave.
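One hedged illustration of such a trigger follows: a wave might be flagged when the tracked hand's lateral velocity repeatedly exceeds a threshold with alternating sign. The threshold, reversal count, and tracking interface are all assumptions for illustration.

```python
# A hedged sketch of one way a "wave" gesture might be detected from tracked
# hand kinematics: fast lateral motion with repeated direction reversals.

def is_waving(hand_velocities, speed_threshold=0.5, min_reversals=3):
    """hand_velocities: sequence of lateral hand velocities in m/s."""
    reversals = 0
    for v_prev, v_next in zip(hand_velocities, hand_velocities[1:]):
        fast = min(abs(v_prev), abs(v_next)) > speed_threshold
        if fast and v_prev * v_next < 0:   # fast motion that changed direction
            reversals += 1
    return reversals >= min_reversals

print(is_waving([0.8, -0.9, 0.7, -0.8, 0.9]))  # -> True
```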
In some embodiments, the coarse image data described above may be generated based on the received electromagnetic reflections and the fine image data may be generated based on the received ultrasonic reflections. In some embodiments either the coarse image data or the fine image data or both may be generated using compressed sensing. In any of the various embodiments described herein, the received electromagnetic radiation may be generated by the system, by another system, or by an auxiliary electromagnetic radiation source, and/or may comprise ambient electromagnetic radiation, such as light.
In some embodiments, one imaging modality may be used to resolve an ambiguity, such as ghost images, in image data generated using another imaging modality. For example, an image generated using ultrasound imaging technologies may have a ghosting image ambiguity that can be resolved using an electromagnetic imaging technology (or even just an electromagnetic position/distance detection technology).
For example, a system may include one or more ultrasonic transmitters and/or receivers, as well as one or more electromagnetic transmitters and/or receivers. Each of these different modalities may capture image data at different resolutions. The system may generate non- or less-important image data at a lower resolution while capturing important image data at a higher resolution. In some embodiments, generating only a portion of the image at a higher resolution, using compressed sensing or other means, may allow the system to process the image data faster and in a more compressed manner while still providing high resolution of portions of interest.
For example, a low resolution image may be generated from ultrasonic reflections using compressed sensing techniques. Then a high resolution image may be generated of the specific areas of interest using an electromagnetic receiver.
In some embodiments, the transmitter(s) and/or receiver(s) may be embodied as one or more transceivers. The ultrasonic transmitter(s) and/or receiver(s) may be operated by the system concurrently with the electromagnetic receiver(s) or in sequential order before or after the electromagnetic receiver(s). The ultrasonic transmitter(s) and receiver(s) may be used in combination with the electromagnetic receiver to generate image data.
One or more of the electromagnetic and/or ultrasonic transmitters, receivers, and/or transceivers may comprise a piezoelectric transducer that may be part of a single transducer system or an array of transducers. In some embodiments, the transducers may comprise or be made from metamaterials. A flat sub-wavelength array of ultrasonic transducers may be used in conjunction with the embodiments described herein, such as those utilizing arrays of metamaterials.
The dual modality sensor system may be configured to utilize different frequency spectra. An ultrasonic transmitter used in such a system may be configured to transmit ultrasound into a region bounded by one or more surfaces. The ultrasound may be between 20 kHz and 250 kHz. In one embodiment, the ultrasound is specifically between 35 kHz and 45 kHz. An electromagnetic transmitter and/or receiver may also be used in such a system to transmit and/or receive a range of electromagnetic radiation frequencies. For example, a system may be configured to use microwave, terahertz, infrared, visible, and/or ultraviolet radiation. A dual modality sensor system may use the two modalities to produce more detailed image data and/or to correct ambiguities introduced by one of the modalities.
For example, the system may use a first modality to generate coarse image data of an object, and, to get more detailed data about an identified portion of interest, the system may use a second modality to generate fine image data of the identified portion of the object. For example, if the first modality produces an image with a low resolution of an object, the second modality may be used to provide a higher resolution of portions of interest on the object. Another embodiment may include a first modality that introduces an ambiguity into the image data. To correct the ambiguity, the system may use a second modality that is not susceptible to the same type of ambiguity.
For example, the system may include one or more ultrasonic transmitters and/or receivers. The system may use the ultrasonic receivers and/or transmitters to generate coarse image data. For instance, an ultrasonic transmitter may be configured to transmit ultrasound into a region. An ultrasonic receiver may receive ultrasonic reflections from one or more objects within the region. Based on these received ultrasonic reflections, the system may generate coarse image data of the one or more objects via a processor. The data may be generated directly or through compressed sensing techniques.
The system may identify portions of interest on the object using the coarse image data. For example, the system may identify a person's hand, finger, arm, leg, foot, toe, torso, neck, head, mouth, lip, and/or eye. The portion of interest identified may be based on a state of an entertainment device. Once one or more portions of interest have been identified, the system may use a second modality (e.g., electromagnetic radiation) to gather further details about the portion of interest.
For instance, the system may also use electromagnetic receiver(s) and/or transmitter(s). The system may generate fine image data of the identified portions of interest using received electromagnetic reflections. For example, after a portion of interest has been identified using the coarse image data, the system may receive electromagnetic radiation from the identified portion of interest and generate a higher resolution image of the identified portions of interest.
In some embodiments, a second modality may be used to resolve at least one ambiguity inherent in or caused by the usage of the first modality. For example, image data generated using the first modality may include an ambiguity. For example, image data generated via ultrasound may have ghost images inherent in the image data. In such an example, a second modality (e.g., electromagnetic radiation) may be utilized by the system to resolve the ambiguity introduced by the first modality. For instance, received electromagnetic radiation can be utilized by the system to remove ghost images in the image data generated using the ultrasonic reflections.
Either of the two modalities discussed above (i.e., electromagnetic imaging and ultrasound) may be utilized by the system to generate either fine image data or coarse image data. In addition, the image data in either modality may be generated directly or using compressed sensing. For example, a first embodiment may utilize ultrasound in combination with compressed sensing to generate coarse image data and electromagnetic imaging to generate fine image data, whereas a second embodiment may utilize electromagnetic imaging to generate coarse image data and ultrasound to generate fine image data.
A kinematic value associated with the object or a specific portion of interest on the object may be determined. The kinematic value of an object may comprise the position, velocity, and/or acceleration of the object. The kinematic values may be based on the received electromagnetic radiation and/or the received ultrasonic reflections.
In some embodiments, the direct ultrasound may be reflected from a first portion of an object and the rebounded ultrasound may be reflected from a second, different portion of the object. Positional data may be determined using the received ultrasonic reflections. Direct positional data may correspond to a first directional component of the position of the object and the rebounded positional data may correspond to a second directional component of the position of the object. Similarly, one or more direct and/or rebounded ultrasonic reflections may be used to determine velocity and/or acceleration. For example, velocity and/or acceleration information may be determined using a Doppler shift that corresponds to a motion of the reflecting object.
In some embodiments, received ultrasonic reflections (direct or rebounded) may be used to determine positional data. Positional data sampled at various times may be used to determine and/or estimate current and/or future velocity and/or acceleration information associated with an object. In other embodiments, as described herein, velocity and/or acceleration information may be calculated based on a detected shift in ultrasound reflected by an object. For example, a system may detect a Doppler shift in ultrasound reflected by an object relative to the transmitted ultrasound. A shift to a longer wavelength may indicate that the object is moving away from the ultrasonic receiver. A shift to a shorter wavelength may indicate that the object is moving toward the ultrasonic receiver. The detected shift may be related to a frequency shift, a wavelength shift, a phase shift, a time-shifted reflection, and/or other ultrasonic shift.
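For a stationary, co-located transmitter and receiver and speeds much smaller than the speed of sound, the two-way Doppler relation reduces to a simple expression for radial velocity, sketched below; the speed of sound value is an assumption for air at room temperature.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def radial_velocity_from_shift(f_tx, f_rx, c=SPEED_OF_SOUND):
    """Approximate radial velocity of a reflector from a two-way Doppler shift.

    For a stationary, co-located transmitter/receiver and |v| << c, the
    reflected frequency is f_rx ~= f_tx * (1 + 2*v/c), so
    v ~= c * (f_rx - f_tx) / (2 * f_tx). Positive v means the object is
    approaching (shift to higher frequency / shorter wavelength); negative
    v means it is receding.
    """
    return c * (f_rx - f_tx) / (2.0 * f_tx)

# Example: a 40 kHz pulse returning at 40.02 kHz implies roughly +0.086 m/s.
print(radial_velocity_from_shift(40_000.0, 40_020.0))
```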
Any number of direct and/or rebounded ultrasonic reflections may be obtained from one or more objects within a region to obtain velocity and/or acceleration data over a period of time and/or to obtain more accurate velocity and/or acceleration data with multiple data points. The transmitted ultrasound may be transmitted as directional or non-directional ultrasonic pulses, continuously, in a modulated (frequency, amplitude, phase, etc.) fashion, and/or in another format. The ultrasonic transmissions may be spaced at regular intervals, transmitted on demand, and/or based on the reception of a previously transmitted ultrasonic transmission. Direct and rebounded ultrasound pulses may be transmitted at the same time, or either one can be transmitted before the other.
Rebounded ultrasonic reflections may be defined as ultrasonic reflections that, in any order, reflect off at least one surface in addition to the object. For example, the rebounded ultrasonic reflections may be reflected off any number of surfaces and/or objects (in any order) prior to being received by the ultrasonic receiver.
A mapping or positioning system may generate positional data associated with one or more of the object(s) based on the direct ultrasonic reflection(s) and/or the rebounded ultrasonic reflection(s). Positional data may be generated using compressed sensing. The positional data may comprise a centroid of the objects, a two-dimensional mapping of the object, an image of the object, a false-color representation of the object, an information representation (blocks, squares, shadows, etc.) of the object, a three-dimensional mapping of the object, one or more features of the object, and/or other information.
The velocity and/or acceleration data may be defined with respect to one or more surfaces of the region, the ultrasonic velocity/acceleration system, a receiver of the system, and/or a transmitter of the system. The one or more objects within the region may comprise machinery, robots, furniture, household property, people in general, gamers, human controllers of electronic devices, electronic devices, fixtures, and/or other human or non-human objects.
The object may comprise a specific portion of a person, such as a hand, finger, arm, leg, foot, toe, torso, neck, head, mouth, lip, and/or eye. In some embodiments, rebounded ultrasonic transmissions may be reflected off an ultrasonic reflector disposed within the room. In some embodiments, the ultrasonic reflectors may be mounted and/or otherwise positioned within the region. In other embodiments, the ultrasonic reflectors may be held, worn, and/or otherwise in the possession of the user or operator of the ultrasonic positioning system. The ultrasonic reflectors may modify a characteristic of the reflected ultrasound, facilitating the identification of the received rebounded ultrasonic reflections.
Ultrasonic reflectors may comprise passive, active, and/or actively moved/pivoted ultrasonic reflectors for controlling the direction in which ultrasound rebounds and/or otherwise travels within the region. For example, the ultrasonic reflector may be configured to modify one or more of the frequency, phase, and/or amplitude of the rebounded ultrasound. The modified characteristic may facilitate the differentiation of the direct ultrasonic reflections and the rebounded ultrasonic reflections. In some embodiments the direct and rebounded signals can be differentiated using knowledge of the transmission or reception directions of the respective beams. In some embodiments, the direct and rebounded signals can be differentiated using knowledge of the time-of-flight of the respective beams. In some embodiments, the direction of a reflected beam (and hence directional characteristics of its delivered positional information) can be determined by knowledge of the orientation of the reflecting surface and its reflective characteristics. For example, ultrasonic reflection from a surface may be dominated by specular reflection, thereby allowing straightforward determination of the rebound geometry.
The mapping or positioning system may also generate velocity and/or acceleration data using the rebounded ultrasonic reflection of the object(s) from the one or more surfaces. It will be appreciated that a rebounded ultrasonic reflection from a surface may be rebounded off the surface first and then the object, or off the object first and then the surface.
The system may then generate enhanced velocity and/or acceleration data by combining the direct velocity and/or acceleration data and the rebounded velocity and/or acceleration data. The enhanced velocity and/or acceleration data may be a concatenation of the direct and rebounded velocity and/or acceleration data or a simple or complex function of the direct and rebounded velocity and/or acceleration data.
For example, in one embodiment, the direct and rebounded velocity and/or acceleration data may comprise only time-of-flight information, which, based upon the sound speed in air, can be converted to transit distance information for each beam. In such embodiments, the direct velocity and/or acceleration data provides a range from the transceiver to the object, leaving the velocity and/or acceleration undefined along a two-dimensional spherical surface. Each potential object position along this spherical surface leads, e.g., assuming specular reflections, to a distinct time-of-flight for the rebounded beam from one surface (wall, ceiling, floor); this restricts the locus of possible velocities and/or accelerations of the object to a one-dimensional arc along the spherical surface, thereby improving the velocity and/or acceleration estimate(s).
The system can further refine the velocity and/or acceleration data by analyzing rebound data from a second surface. In the current example, each potential object position along the spherical surface (obtained from the time-of-flight of the direct beam) defines a first time-of-flight for ultrasound rebounded from the first surface and a second time-of-flight for ultrasound rebounded from the second surface; knowledge of both times-of-flight determines the object's position. Time-of-flight data from other surfaces can, by "over-defining" the problem, further improve the positional estimate, e.g., by reducing sensitivity to measurement errors, to the effects of diffuse reflections, etc.
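The time-of-flight idea can be made concrete with a small 2-D sketch that mirrors the receiver across a wall to account for the rebound path; the co-located transducer, specular wall, and coarse angular scan are illustrative assumptions only, not the disclosed method.

```python
import numpy as np

def locate_from_tofs(t_direct, t_rebound, wall_x, c=343.0):
    """2-D illustrative sketch: estimate an object's position from one direct
    and one rebounded time-of-flight, assuming a co-located transmitter and
    receiver at the origin and specular reflection off a wall at x = wall_x.
    """
    r0 = c * t_direct / 2.0                  # direct beam: out and back
    mirror = np.array([2.0 * wall_x, 0.0])   # receiver mirrored across wall
    path = c * t_rebound                     # emitter -> object -> wall -> receiver
    best, best_err = None, np.inf
    for theta in np.linspace(0.0, np.pi, 3600):   # scan the direct-range circle
        p = r0 * np.array([np.cos(theta), np.sin(theta)])
        err = abs(r0 + np.linalg.norm(p - mirror) - path)
        if err < best_err:
            best, best_err = p, err
    return best

# Object actually at (1, 1) m, wall at x = 3 m:
c, p, wall = 343.0, np.array([1.0, 1.0]), 3.0
t_d = 2 * np.linalg.norm(p) / c
t_r = (np.linalg.norm(p) + np.linalg.norm(p - [2 * wall, 0.0])) / c
print(locate_from_tofs(t_d, t_r, wall))   # -> approximately [1., 1.]
```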
In other embodiments, the direct and rebounded velocity and/or acceleration data may comprise directional information. For example, directional information for direct ultrasound can identify that the object (or a specified portion of it) lies along a known ray, thereby providing two components of its velocity and/or acceleration. Information from rebounded ultrasound can then provide additional acceleration and/or velocity data sufficient to identify the third component of the object's velocity and/or acceleration, i.e., along the ray. The rebounded ultrasound may provide time-of-flight information; each object velocity and/or acceleration along the ray corresponds to a different time-of-flight for rebounded ultrasound from a surface, so the measured time-of-flight identifies the object's location, velocity, and/or acceleration. Alternatively, the rebounded ultrasound may provide directional information (for either transmission or reception); the intersection of this rebound ray with the direct ray serves to identify the object's location, velocity, and/or acceleration.
The enhanced velocity and/or acceleration data may be further enhanced or augmented using additional velocity and/or acceleration data obtained via direct or rebounded ultrasonic reflections and/or other velocity and/or acceleration data, such as velocity and/or acceleration data obtained via other means/systems/methods (e.g., laser detection, cameras, etc.). The direct and the rebounded velocity and/or acceleration data may provide velocity and/or acceleration data for the object at the same or different times, depending on the time at which they are reflected from the object. The enhanced data may be analyzed using a dynamical model, e.g., a Kalman filter, designed to combine velocity and/or acceleration data corresponding to different times or directional components, using them together with, and to improve, estimates of the object's present and/or future motion.
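As one hedged illustration of such a dynamical model, a minimal one-dimensional constant-velocity Kalman filter can fuse position measurements taken at different times into a running velocity estimate; the process and measurement noise levels below are arbitrary placeholders.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 1-D constant-velocity Kalman filter sketch for fusing position
    measurements taken at different times (direct, rebounded, or from other
    sensors, e.g., cameras) into position/velocity estimates.
    """
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)              # state: [position, velocity]
        self.P = np.eye(2)                # state covariance
        self.q, self.r = q, r             # process / measurement noise
        self.H = np.array([[1.0, 0.0]])   # we observe position only

    def step(self, z, dt):
        """Fold in one position measurement z taken dt seconds after the last."""
        F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
        self.x = F @ self.x                         # predict
        self.P = F @ self.P @ F.T + self.q * np.eye(2)
        S = self.H @ self.P @ self.H.T + self.r     # innovation covariance
        K = self.P @ self.H.T / S                   # Kalman gain
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P  # update
        return self.x                               # [position, velocity]
```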
In some embodiments, direct ultrasonic reflections may not be used. Rather, a first rebounded ultrasonic reflection and a second rebounded ultrasonic reflection may be used to generate velocity and/or acceleration data. It is appreciated that any number of direct or rebounded ultrasonic reflections may be used to identify a position, velocity, acceleration, and/or other movement information of an object within a region. In various embodiments, the velocity and/or acceleration data gathered using ultrasonic reflections may be combined with other velocity and/or acceleration data, such as velocity and/or acceleration data obtained via infrared sensing, manual input, echo location, sonar techniques, lasers, and/or the like.
In various embodiments, one or more local, remote, or distributed systems and/or system components may transmit ultrasound via an ultrasonic transmitter into a region. The received ultrasound may include both direct reflections and rebounded reflections. Velocity and/or acceleration data from both the direct reflections and the rebounded reflections may be used to obtain velocity and/or acceleration data that more accurately and/or more quickly describes the relative velocity and/or acceleration data of one or more objects within the region.
Embodiments may include various steps, which may be embodied in machine-executable instructions to be executed by a computer system. A computer system includes one or more general-purpose or special-purpose computers (or other electronic devices). The computer system may include hardware components that include specific logic for performing the steps or may include a combination of hardware, software, and/or firmware.
Embodiments may also be provided as a computer program product including a computer-readable medium having stored thereon instructions that may be used to program a computer system or other electronic device to perform the processes described herein. The computer-readable medium may include, but is not limited to: hard drives, floppy diskettes, optical disks, CD-ROMs, DVD-ROMs, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, solid-state memory devices, or other types of media/computer-readable media suitable for storing electronic instructions.
Computer systems and the computers in a computer system may be connected via a network. Suitable networks for configuration and/or use as described herein include one or more local area networks, wide area networks, metropolitan area networks, and/or Internet or IP networks, such as the World Wide Web, a private Internet, a secure Internet, a value-added network, a virtual private network, an extranet, an intranet, or even standalone machines which communicate with other machines by physical transport of media. In particular, a suitable network may be formed from parts or entireties of two or more other networks, including networks using disparate hardware and network communication technologies.
One suitable network includes a server and several clients; other suitable networks may contain other combinations of servers, clients, and/or peer-to-peer nodes, and a given computer system may function both as a client and as a server. Each network includes at least two computers or computer systems, such as the server and/or clients. A computer system may include a workstation, laptop computer, disconnectable mobile computer, server, mainframe, cluster, so-called “network computer” or “thin client,” tablet, smart phone, personal digital assistant or other hand-held computing device, “smart” consumer electronics device or appliance, medical device, or a combination thereof.
The network may include communications or networking software, such as the software available from Novell, Microsoft, Artisoft, and other vendors, and may operate using TCP/IP, SPX, IPX, and other protocols over twisted pair, coaxial, or optical fiber cables, telephone lines, radio waves, satellites, microwave relays, modulated AC power lines, physical media transfer, and/or other data transmission “wires” known to those of skill in the art. The network may encompass smaller networks and/or be connectable to other networks through a gateway or similar mechanism.
Each computer system includes at least a processor and a memory; computer systems may also include various input devices and/or output devices. The processor may include a general purpose device, such as an Intel®, AMD®, or other "off-the-shelf" microprocessor. The processor may include a special purpose processing device, such as an ASIC, an SoC, an SiP, an FPGA, a PAL, a PLA, an FPLA, a PLD, or other customized or programmable device. The memory may include static RAM, dynamic RAM, flash memory, one or more flip-flops, ROM, CD-ROM, disk, tape, magnetic, optical, or other computer storage medium. The input device(s) may include a keyboard, mouse, touch screen, light pen, tablet, microphone, sensor, or other hardware with accompanying firmware and/or software. The output device(s) may include a monitor or other display, printer, speech or text synthesizer, switch, signal line, or other hardware with accompanying firmware and/or software.
The computer systems may be capable of using a floppy drive, tape drive, optical drive, magneto-optical drive, or other means to read a storage medium. A suitable storage medium includes a magnetic, an optical, or other computer-readable storage device having a specific physical configuration. Suitable storage devices include floppy disks, hard disks, tape, CD-ROMs, DVDs, PROMs, RAM, flash memory, and other computer system storage devices. The physical configuration represents data and instructions which cause the computer system to operate in a specific and predefined manner as described herein.
Suitable software to assist in implementing the invention is readily provided by those of skill in the pertinent art(s) using the teachings presented here and programming languages and tools, such as Java, Pascal, C++, C, database languages, APIs, SDKs, assembly, firmware, microcode, and/or other languages and tools. Suitable signal formats may be embodied in analog or digital form, with or without error detection and/or correction bits, packet headers, network addresses in a specific format, and/or other supporting data readily provided by those of skill in the pertinent art(s).
Several aspects of the embodiments described will be illustrated as software modules or components. As used herein, a software module or component may include any type of computer instruction or computer executable code located within a memory device. A software module may, for instance, include one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that performs one or more tasks or implements particular abstract data types.
In certain embodiments, a particular software module may include disparate instructions stored in different locations of a memory device, different memory devices, or different computers, which together implement the described functionality of the module. Indeed, a module may include a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices. Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network. In a distributed computing environment, software modules may be located in local and/or remote memory storage devices. In addition, data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.
Much of the infrastructure that can be used according to the present invention is already available, such as: general purpose computers, computer programming tools and techniques, computer networks and networking technologies, digital storage media, authentication, access control, and other security tools and techniques provided by public keys, encryption, firewalls, and/or other means.
The embodiments of the disclosure are described below with reference to the drawings, wherein like parts are designated by like numerals throughout. The components of the disclosed embodiments, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Furthermore, the features, structures, and operations associated with one embodiment may be applicable to or combined with the features, structures, or operations described in conjunction with another embodiment. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of this disclosure.
Thus, the following detailed description of the embodiments of the systems and methods of the disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of possible embodiments. In addition, the steps of a method do not necessarily need to be executed in any specific order, or even sequentially, nor do the steps need to be executed only once.
The dual modality positioning system 110 may transmit the ultrasound 120 as directional ultrasonic pulses, continuously, in a modulated fashion (frequency, amplitude, phase, etc.), and/or in another format. The ultrasound 120 may be transmitted directly toward the persons 151, 152, and 153. The ultrasound 120 may be transmitted indirectly toward the persons 151, 152, and 153. The position of each of the persons may affect the state of an entertainment device used for providing induced tactile stimuli to the users.
In various embodiments, the dual modality positioning system 110 may be any shape or size and/or may comprise a plurality of distributed components. The illustrated embodiment is merely an example and is not intended to convey any information regarding shape, size, configuration, or functionality. In various embodiments, the dual modality positioning system 110 may include an array of transducers, such as piezoelectric transducers, configured to transmit and/or receive ultrasound and/or electromagnetic radiation. The dual modality positioning system 110 may be configured with a first plurality of transducers 112 (or a single transducer) for transmitting ultrasound and/or electromagnetic radiation and a second plurality of transducers 113 (or a single transducer) for receiving ultrasound.
As used herein, the terms rebound and rebounding may include any type of reflection, refraction, and/or repeating that may or may not include a phase, frequency, modulation, and/or amplitude change. Rebounding may be performed by an outer portion of the surface, an inner portion of the surface, or an object disposed on, in, or behind the surface (e.g., exterior paint, drywall, internal metal, studs, interior coatings, mounted panels, etc.).
The ultrasound may ultimately be rebounded 227 to reflect off persons 251, 252, and 253 at a different angle than that obtained in
As illustrated in
The ultrasonic and/or electromagnetic radiation reflectors may comprise passive, active, and/or actively moved/pivoted ultrasonic reflectors for controlling the direction in which ultrasound rebounds and/or otherwise travels within the region. For example, the ultrasonic and/or electromagnetic radiation reflector may be configured to modify one or more of the frequency, phase, and/or amplitude of the rebounded ultrasound and/or electromagnetic radiation. The modified characteristic may facilitate the differentiation of the direct ultrasonic and/or electromagnetic radiation reflections and the rebounded ultrasonic and/or electromagnetic radiation reflections.
The dual modality positioning system 310 may generate positional data associated with one or more of the object(s) based on the direct ultrasonic and/or electromagnetic radiation reflection(s) (e.g.,
The positional data may be defined with respect to one or more surfaces of the region, the dual modality positioning system 310, a receiver 312 of the positioning system 310, and/or a transmitter 313 of the positioning system 310. The one or more objects within the region may comprise machinery, robots, furniture, household property, people in general, gamers, human controllers of electronic devices, electronic devices, fixtures, and/or other human or non-human objects.
The object may comprise a specific portion or site of a person, such as a hand, finger, arm, leg, foot, toe, torso, neck, head, mouth, lip, and/or eye. As illustrated in
In some embodiments, pivot control 495 may change other reflective, absorptive, and/or refractive properties of the ultrasonic reflector 472, in addition to its direction. For example, the ultrasonic reflector 472 may have specific ultrasonic or other acoustic absorptive properties. A pivot control 495 may adjust the pivoting and/or acoustic and/or electrical properties.
An ultrasonic transmitter module 580 may be configured to transmit ultrasound in any of the various forms and/or methods described herein. An ultrasonic receiver module 582 may be configured to receive a direct ultrasonic reflection from an object within a region. Additionally, the ultrasonic receiver module 582 may be configured to receive a rebounded ultrasonic reflection from the object. As used herein, direct reflections and rebounded reflections refer to the various descriptions provided herein as well as the generally understood meanings and variations of these terms.
A mapping system module 584 generates direct positional data associated with the object based on one or more direct ultrasonic reflections. The mapping system module 584 may also generate rebounded positional data associated with the object based on one or more indirect (rebounded) ultrasonic reflections, as may be understood in the art.
A direct reflection module 586 may be configured to facilitate, manage, and/or monitor the transmission and/or reception of direct reflections. The rebounded reflection module 588 may be configured to facilitate, manage, and/or monitor the transmission and/or reception of rebounded reflections.
The positional data calculation module 589 may generate direct positional data associated with the object based on one or more direct ultrasonic reflections. The positional data calculation module 589 may also generate rebounded positional data associated with the object based on one or more rebounded ultrasonic reflections. The positional data calculation module 589 may also generate enhanced positional data by combining the direct positional data and the rebounded positional data.
The positioning system may receive 612 rebounded ultrasonic reflections from at least one object within the region. The rebounded ultrasonic reflections may reflect off the wall(s) first and/or off the object(s) first. The positioning system may generate 614 positional data based on the direct reflections from the object. The positioning system may generate 616 positional data based on the rebounded reflections from the object.
The positioning system may generate 618 enhanced positional data by combining the direct positional data and the rebounded positional data. In other embodiments, the positioning system may transmit the direct positional data and the rebounded positional data to another electronic or other processing device for usage.
Any of the various configurations of ultrasonic transmitters, receivers, reflectors, and/or other components described in conjunction with the detection of the position of an object may also be applied to the embodiments described herein with respect to the detection and/or calculation of velocity and/or acceleration data associated with an object or objects, including those embodiments described below with reference to
In Equation 1 above, it is assumed that the transmission medium (e.g., air) is relatively stationary; fr is the frequency of the received ultrasound, C is the velocity of the ultrasound in the medium (e.g., air), Vr is the velocity of the ultrasonic receiver relative to the medium, Vo is the velocity of the object relative to the medium, and ft is the frequency of the transmitted ultrasound. An acceleration of the object may be determined using velocity calculations at multiple discrete time periods and/or by detecting a change in the frequency of the received ultrasound, fr, over time.
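Equation 1 itself is not reproduced in this excerpt. A standard Doppler relation consistent with the variables defined above is fr = ft(C + Vr)/(C + Vo); the sketch below assumes that form and should be read as illustrative rather than as the disclosed equation.

```python
def received_frequency(f_t, v_r, v_o, c=343.0):
    """Illustrative form of the Doppler relation discussed above:

        f_r = f_t * (c + v_r) / (c + v_o)

    f_t : transmitted frequency (Hz)
    v_r : velocity of the ultrasonic receiver relative to the medium (m/s)
    v_o : velocity of the object relative to the medium (m/s)
    c   : velocity of ultrasound in the medium (m/s); 343 assumes air
    """
    return f_t * (c + v_r) / (c + v_o)

def accelerations_from_frequencies(f_r_samples, dt, f_t, c=343.0):
    """Estimate object acceleration by differencing per-sample velocities
    recovered from the received frequency (stationary receiver assumed)."""
    v = [c * (f_t / f_r - 1.0) for f_r in f_r_samples]
    return [(v2 - v1) / dt for v1, v2 in zip(v, v[1:])]
```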
As described herein, the ultrasonic system 710 may include one or more ultrasonic transmitters and/or ultrasonic receivers and the transmitters and receivers may be physically joined (as illustrated in
In one embodiment, the ultrasound may first be reflected by the object 830, and then rebounded by the reflector 850. In such an embodiment, it may be possible to determine velocity and/or acceleration information of the object 830 relative to the reflector 850.
It is understood that "determining a shift," "detecting a shift," "calculating a shift," and the like may not necessarily require an actual determination of the difference between, e.g., the frequencies of the transmitted and received ultrasound. That is, "detecting a shift" and similar phrases may be constructively performed during a Doppler calculation of velocity and/or acceleration. For example, "detecting a shift" may be constructively performed if a velocity of an object is determined using (1) a known/measured frequency of transmitted ultrasound and (2) a known/measured frequency of ultrasound reflected by the object. The system may or may not actually calculate the frequency difference between the transmitted and received ultrasound, as various derivative and equivalent algorithms for Doppler-based velocity calculations may be utilized.
In some embodiments, rebounded reflections from the object may be used to determine velocity and/or acceleration data based on a detected shift in the ultrasound, as provided in block 1250. Ultrasound may be transmitted 1205 into a region bounded by at least one surface. A receiver may receive 1212 rebounded ultrasonic reflections from at least one object or a site on an object within the region. A shift, such as a wavelength shift, frequency shift, or phase shift, may be determined 1213 between the transmitted ultrasound and the received ultrasound. The system may then generate 1216 velocity and/or acceleration data based on the detected shift. In various embodiments, velocity and/or acceleration data from direct reflections and rebounded reflections may be optionally combined 1218. Velocity and/or acceleration data from direct reflections and rebounded reflections may be used to determine two-dimensional vectors of velocity and/or acceleration information related to the object or a site on the object.
The dual modality positioning system 1310a may utilize its ultrasound modality in a manner similar to that discussed in relation to the previous figures. For example, the dual modality positioning system 1310a may transmit the ultrasound 1325a as directional ultrasonic pulses, continuously, in a modulated fashion (frequency, amplitude, phase, etc.), and/or in another format. The ultrasound 1325a may be transmitted directly toward the person 1362a and/or the ultrasound 1325a may be transmitted indirectly toward the person 1362a.
The portion of interest may be identified based in part on the state of an associated entertainment device. For example, the associated entertainment device may be in a state that utilizes a hand movement for a particular action, where the movement cannot be determined using coarse image data. In that situation, the dual modality positioning system 1310c may identify the hands of a person 1362c as a portion of interest for which fine image data (i.e., higher-resolution images) are desired. Whatever portions of interest are identified, the dual modality positioning system 1310c may use a second modality to receive additional, more detailed image information.
A system may use electromagnetic imaging capabilities to receive either coarse image data at a low resolution or fine image data at a higher, more detailed resolution.
In one embodiment, a sensor with multiple pixels may be used to capture the representative image of an area of interest 1900. For example, a sensor with six pixels may capture the representative image of the area of interest 1900. However, such multi-pixel sensors may be expensive and/or may create significant processing burdens.
Therefore, to create a less expensive system, another embodiment may utilize a one-pixel sensor. The one-pixel sensor may be used to capture each portion of the area of interest individually. For example, the one-pixel sensor may first capture pixel 1905, then pixel 1907, then pixel 1909, then pixel 1911, then pixel 1913, and finally pixel 1915. The system's processor may then combine the pixels to form the representative image. However, this process may be even more taxing on the processor than using a multi-pixel sensor.
Therefore, in order to use less processing power, the six pixel image of the area of interest 1900 may be approximated using compressed sensing techniques. This approximation may be completed using multiple smaller images at a lower resolution.
Each of the five single value image captures 1917 is directed at a specific portion of the region of interest. As shown, these specific portions may overlap. The first capture 1919 represents an average over the top left portion of the region of interest. The second capture 1921 represents an average over the top right portion of the region of interest. The third capture 1923 represents an average over the left two-thirds of the region of interest. The fourth capture 1925 represents an average over the right two-thirds of the region of interest. The fifth capture 1927 represents an average over the bottom half of the region of interest.
A perfect approximation of the six pixel image requires at least six lower resolution image captures—one capture for each of the six pixels of the region of interest. However, a good approximation may be achieved with fewer than six single value images using compressed sensing techniques. For example, the five single value image captures 1917 can be used to approximate the six pixels of the area of interest 1900.
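A minimal sketch of such an approximation follows, assuming the six pixels are arranged in a 2-by-3 grid and assuming one plausible assignment of pixels to the five captures (neither of which is specified here). The sketch uses a minimum-norm least-squares solve; a full compressed-sensing reconstruction would additionally exploit sparsity of the image.

# Illustrative sketch only: approximating six pixels [p0..p5] from five
# overlapping single value (averaged) captures. The pixel layout, the
# regions covered by each capture, and the measurement values are assumed.
import numpy as np

# Each row maps the six pixels to one averaged capture.
A = np.array([
    [1/2, 1/2, 0,   0,   0,   0  ],  # capture 1919: top-left portion
    [0,   1/2, 1/2, 0,   0,   0  ],  # capture 1921: top-right portion
    [1/4, 1/4, 0,   1/4, 1/4, 0  ],  # capture 1923: left two-thirds
    [0,   1/4, 1/4, 0,   1/4, 1/4],  # capture 1925: right two-thirds
    [0,   0,   0,   1/3, 1/3, 1/3],  # capture 1927: bottom half
])

measurements = np.array([0.30, 0.50, 0.35, 0.45, 0.40])  # illustrative values

# Minimum-norm least-squares estimate of the six pixel values.
pixels, *_ = np.linalg.lstsq(A, measurements, rcond=None)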
Each image capture shows the average value of a portion of the larger image 2102. Small single value image captures are taken in overlapping sections of the larger image 2102. In some embodiments, the image captures may not overlap. The average value of each portion of the image is combined to approximate the larger image 2102. The approximation of the larger image may be completed using relatively few lower resolution images. The small number of required single value or low resolution images allows a large image to be approximated quickly and efficiently while requiring only lower quality images for the reconstruction.
For example, the upper right corner of the larger image 2102 can be approximated using compressed imaging techniques. As illustrated, the upper right corner of the larger image 2102 is captured by two single value image captures (i.e., image capture 2110 and image capture 2112). As shown, image capture 2110 estimates the color saturation value as 5%, whereas image capture 2112 estimates the color saturation value as 10%. These image captures overlap in the upper right corner of the larger image 2102. Therefore, to estimate the color saturation value of the upper right corner of the larger image 2102, the two color saturation values may be averaged, yielding an estimate of 7.5%.
Other embodiments may use more single value image captures to approximate each pixel. Some examples may use different algorithms to approximate each pixel. For example, instead of simply averaging the single value image captures, some captures may be weighted more heavily than others in the approximation algorithm. For instance, if a sensing system were to capture a large area in one image and a small area in a second image, the second image may more closely reflect the actual pixel value. Therefore, the system may weight the approximation toward the second image.
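A weighting scheme of the kind described might, for instance, weight each capture by the inverse of the area it covers, as in the following sketch. The values and areas are hypothetical, and this disclosure does not prescribe this particular weighting rule.

# Illustrative sketch only: weighting overlapping single value captures by
# the inverse of the area each covers, so that a capture of a smaller
# region (closer to the actual pixel) counts more.
def weighted_pixel_estimate(captures):
    # captures: list of (value, covered_area); smaller area -> larger weight.
    weights = [1.0 / area for _, area in captures]
    total = sum(weights)
    return sum(v * w for (v, _), w in zip(captures, weights)) / total

# A capture of a large region (area 4) at 5% and a small region (area 1)
# at 10% yields an estimate of 9%, pulled toward the smaller capture.
estimate = weighted_pixel_estimate([(0.05, 4.0), (0.10, 1.0)])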
A more detailed image of an area of interest may be obtained by using additional low resolution images. In certain circumstances it may be advantageous to create a detailed picture. In other examples, a less detailed image, requiring less processing power, may be sufficient.
Further, compressed sensing may be used for the whole area of interest or for portions of the area of interest. For example, the associated entertainment device may be in a state that utilizes a hand movement for a particular action. In such an example, the compressed sensing system may determine the location of the user's hands with ultrasound or electromagnetic radiation. Upon that determination, the sensor system may use compressed sensing to create a more detailed image of the user's hands. In addition, the compressed sensing system may be adaptive.
For example, in certain embodiments, the compressed sensing system may detect the movement of a user's hand, identify where the hand has moved using ultrasound or electromagnetic radiation, and use compressed sensing to create a detailed image of the hand in its new location. Other embodiments may detect and capture a user's foot, leg, arm, body, and/or facial movements with the compressed sensing techniques described above.
In some instances, the generated pulse may be audible to the user of the system. In other instances, the pulse may only be felt, and not heard, by the user 2462. The induced tactile stimuli may provide the user with a feeling of enhanced interaction with the entertainment system. For example, if the user were interacting with a video game simulating boxing, the system may use induced tactile stimuli to inform the user 2462 that his or her opponent had scored a hit. In addition, the induced tactile stimuli may be calibrated to simulate various other tactile sensations (e.g., a touch, an impulse, a brush, or a kinesthetic sensation). The induced tactile stimuli may also be adjusted to target sites of interest on or in the user.
For example, the induced tactile stimuli may be directed toward the user's hands, fingers, arms, legs, toes, feet, head, ears, lips, nose, and/or some other region of interest depending on the application of the entertainment system. The variation in feedback may create a more diverse interaction between the user and the entertainment system. In some instances or embodiments the system may target the user's feet to inform the user when to step in an exercise game.
In some embodiments, the system may target a user's 2462 fingers to simulate playing a musical instrument. In still other embodiments, the system may use induced tactile stimuli to give the user 2462 a sense of motion during a driving simulation. These examples are intended to be illustrative and not limiting. That is, the examples provided herein are meant to demonstrate the multiplicity of applications that may utilize and/or benefit from induced haptic sensations, such as tactile stimuli, to enhance the interaction between an entertainment system and a user 2462.
The beat frequency of interest may, for example, be the magnitude of the difference between the frequencies of the two ultrasonic pulses 2425 and 2427. In addition, the amplitude, phase, frequency, and/or duration of the generated pulse 2477 may be adjusted by adjusting either the first 2425 or second 2427 ultrasonic pulse. If the generated pulse 2477 is at a frequency that can be felt by a human, e.g., via human skin, it may be used to provide induced haptic stimulation (e.g., tactile stimuli) to a user of an entertainment system.
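As a hypothetical numeric example, the frequency of the second pulse may be chosen so that the beat frequency of the pulse pair falls low enough to be felt by human skin. The 40 kHz carrier and 250 Hz target in the following sketch are illustrative values only.

# Illustrative sketch only: choosing a second ultrasonic frequency so
# that the beat (difference) frequency of the two pulses is tactile.
def second_pulse_frequency(f1_hz, beat_hz):
    # Frequency for the second pulse given the first pulse and the
    # desired beat frequency |f1 - f2|.
    return f1_hz + beat_hz

f1 = 40_000.0                           # first ultrasonic pulse (e.g., 2425)
f2 = second_pulse_frequency(f1, 250.0)  # second ultrasonic pulse (e.g., 2427)
beat = abs(f1 - f2)                     # 250 Hz, low enough to be felt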
For example, the user 2462 may engage in a driving simulation on an entertainment system. Induced haptic sensations may be used to provide a sense of motion to the user, e.g., to simulate a turn, wind resistance, and/or other haptic sensations connected with driving.
There may be embodiments where multiple users interact virtually through the entertainment system display interface. The system may provide a haptic sensation to each player who scores a point, or the system may induce haptic sensations to provide feedback when a user loses a life or engages another player. It may even be possible for players to direct haptic sensations toward one another during the course of, or as part of, a game.
Other embodiments may interact similarly with a single user. For example, if the user is engaged in a boxing simulation, the entertainment system may use haptic feedback to notify the user of contact by his or her opponent. This may give the user 2462 a heightened sense of interaction with the simulation presented by the entertainment system.
For example, the user 2505 may view a displayed image 2503 and interact with the system 2507 through gestures and other movements.
For example, the system 2507 may in some embodiments “wake up” from a standby mode when the user 2505 claps. The system 2507 may track the motion of the user's 2505 eyes to determine where the user 2505 is looking on the displayed image 2503. The system 2507 may at times interact with the user 2505 differently depending on whether the user 2505 is standing or sitting.
In some embodiments, the system 2507 may display a video or movie, and certain gestures, such as waving or pointing, may be used to pause, play, fast forward, rewind, or stop the video. During the course of a video game, similar actions may be mapped to swinging a baseball bat or engaging an opponent. Haptic sensations may be provided to the user as feedback and/or as part of the interaction.
The induced haptic sensation (e.g., tactile stimulation) may be generated using any of the various embodiments or combinations thereof that are discussed herein. As illustrated, two ultrasonic pulses 2510 and 2520 are transmitted and mix near or at a site (the hand) of the user. The mixing results in a lower frequency acoustic signal 2530 at a beat frequency of the two ultrasonic pulses 2510 and 2520. The lower frequency acoustic signal 2530 induces a haptic sensation at the hand of the user to provide contextual information relating to the on-screen display and/or the user's interaction therewith.
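As a sketch of the underlying acoustics, the sum of two closely spaced ultrasonic tones has an envelope that varies at their difference (beat) frequency, which is the component that may be felt. The sample rate and frequencies in the following illustration are assumed values.

# Illustrative sketch only: two summed ultrasonic tones produce an
# envelope oscillating at their beat frequency |f1 - f2|.
import numpy as np

fs = 200_000                     # sample rate, Hz (illustrative)
t = np.arange(0, 0.02, 1 / fs)   # 20 ms of signal
f1, f2 = 40_000.0, 40_250.0      # two ultrasonic pulses (illustrative)
mixed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# From sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), the slowly varying
# envelope is 2|cos(pi * (f2 - f1) * t)|, beating at |f1 - f2| = 250 Hz.
envelope = 2 * np.abs(np.cos(np.pi * (f2 - f1) * t))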
This disclosure has been made with reference to various exemplary embodiments, including the best mode. However, those skilled in the art will recognize that changes and modifications may be made to the exemplary embodiments without departing from the scope of the present disclosure. While the principles of this disclosure have been shown in various embodiments, many modifications of structure, arrangements, proportions, elements, materials, and components may be adapted for a specific environment and/or operating requirements without departing from the principles and scope of this disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure.
This disclosure is to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope thereof. Likewise, benefits, other advantages, and solutions to problems have been described above with regard to various embodiments. However, benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element. The scope of the present invention should, therefore, be determined by the following claims: