The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
Display technologies for artificial reality (e.g., augmented reality) displays may utilize diffractive waveguide couplers to project images to a user's eyes. However, such display systems may suffer from a limited field of view (FOV) caused by the limited range of guiding angles available inside conventional waveguide media. Using a higher-index medium is a common approach to increasing FOV. However, such higher-index media can be very expensive and may still result in a limited FOV, with only marginal increases in FOV being realized in many cases.
The present disclosure is generally directed to display systems, devices, and methods that include volume Bragg grating (VBG) coupling waveguides. According to at least one embodiment, a wide FOV can be achieved by using an optimized VBG coupler. The disclosed VBG couplers may provide wide FOVs using conventional low-index materials for the waveguides. According to at least one example, an optimized VBG coupler may enable delivery of a wide FOV using a relatively narrow guiding angle range. Such a wide FOV may be realized because the VBG has high spectral selectivity. Accordingly, multiple angles in the FOV can be delivered at a single guiding angle, as long as their wavelengths differ from each other. In one example, an FOV of approximately 120°×120° may be achieved with 1.5-refractive-index waveguides. Accordingly, the systems presented in this disclosure may provide low-cost, wide-FOV waveguides. Optimized light sources may be used in conjunction for maximum efficiency, providing angle (or pixel)-dependent wavelength control.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The following will provide, with reference to
In at least one example, the in- and out-couplers may be general diffractive gratings (e.g., surface relief gratings) with a single surface period of Λ. Then, the guiding angle θg may be determined by the following diffraction law equation:
The guiding angle θg must be greater than the critical angle to satisfy total internal reflection (TIR) conditions and less than a certain angle (e.g., 75°) to maintain a sufficient density of replicated exit pupils.
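As an illustration, the diffraction law and guiding window described above can be sketched numerically. The sketch below assumes the common first-order form of the grating equation, n·sin θg = sin θin + λ/Λ; the function names and sample values are illustrative and not taken from the disclosure.

```python
import math

def guiding_angle(theta_in_deg, wavelength_nm, period_nm, n=1.5, order=1):
    """First-order grating equation (assumed form):
    n * sin(theta_g) = sin(theta_in) + order * wavelength / period.
    Returns the guiding angle in degrees, or None if the order is evanescent."""
    s = math.sin(math.radians(theta_in_deg)) + order * wavelength_nm / period_nm
    if abs(s) > n:
        return None  # diffracted order does not propagate in the medium
    return math.degrees(math.asin(s / n))

def is_guided(theta_g_deg, n=1.5, max_angle_deg=75.0):
    """Check the guiding window described above: greater than the TIR
    critical angle but less than the maximum usable angle (e.g., 75 deg)."""
    critical = math.degrees(math.asin(1.0 / n))
    return critical < theta_g_deg < max_angle_deg

# Normal incidence, 525 nm light, an assumed 400 nm grating period, n = 1.5
theta_g = guiding_angle(0.0, 525.0, 400.0)  # ~61 deg for these sample values
```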
Using a higher-index medium is traditionally the most straightforward method to increase FOV. However, it can be a very expensive method, and it may still result in a limited FOV when utilized in a conventional waveguide structure.
In
This “zipping coupler” approach differs from previous studies in that a wide FOV can be achieved through coupler design without a high-index material. In other words, of the two causes of limited FOV, this method improves the coupling relation rather than the guiding angle limit.
As noted above, a zipping coupler with the coupling relation of Equation 2 cannot theoretically exist. However, the systems disclosed herein may implement a practically equivalent coupling relation using VBG waveguides.
Looking next at the combination of segments, the out-coupled angles at the left and right ends of each segment are in contact with each other, so all out-coupled angles within the 120° FOV are related to one guiding angle. This relationship is, of course, not a 1:1 mapping, but even out-coupled angles with the same guiding angle have different wavelengths, so they do not create cross-talk with each other. As a result, it is possible to cover a wide FOV using a limited guiding angle range, although the wavelength differs slightly depending on the out-coupled angle within the FOV.
In practice, this coupling relation curve also has a finite narrow bandwidth. So, the wavelength difference at two different points with the same guiding angle must be wider than this bandwidth. The design principle taking this into account is described in the following section.
λ(θ)=2n1p cos (θ−s) (Eq. 3)
For one wavelength, two θs create a Bragg-matching condition. Among each pair, the angle smaller than the critical angle becomes the out-coupled angle θo, and the other becomes the guiding angle θg. That is, when incident at an angle corresponding to point 502 in
Plotting this narrow band for a fixed angle near the vertical solid line 506 in
The disclosed zipping VBG couplers may be designed using Equations 3 and 4. Multiple pairs of grating pitch p and slant angle s can represent a multiplexed VBG.
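As a minimal sketch, the Bragg-matching condition of Equation 3 can be solved for the angle pair associated with one wavelength, as described above: λ = 2·n·p·cos(θ − s), so θ = s ± arccos(λ / 2np), with the solution below the TIR critical angle taken as the out-coupled angle. The pitch and slant values in the example are illustrative, not design values from the disclosure.

```python
import math

def bragg_wavelength(theta_deg, pitch_nm, slant_deg, n=1.5):
    """Bragg-matched wavelength per Eq. 3: lambda = 2*n*p*cos(theta - s)."""
    return 2.0 * n * pitch_nm * math.cos(math.radians(theta_deg - slant_deg))

def bragg_angle_pair(wavelength_nm, pitch_nm, slant_deg, n=1.5):
    """The two angles satisfying the Bragg condition for one wavelength.
    Returns (out-coupled angle, guiding angle) in degrees, where the
    out-coupled angle is the solution below the TIR critical angle,
    or None if no Bragg match exists."""
    c = wavelength_nm / (2.0 * n * pitch_nm)
    if c > 1.0:
        return None  # wavelength too long for this grating pitch
    delta = math.degrees(math.acos(c))
    critical = math.degrees(math.asin(1.0 / n))
    a, b = slant_deg - delta, slant_deg + delta
    if abs(a) < critical:
        return a, b  # a is out-coupled, b is guided
    return b, a

# Illustrative grating: 250 nm pitch, 30 deg slant, 525 nm green light
pair = bragg_angle_pair(525.0, 250.0, 30.0)
```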
In the guiding angle range, the curve of each grating should be positioned as densely as possible to utilize the limited guiding angle range efficiently. The first rule determines the minimum gap at this time. In the out-coupled angle range, the Bragg-matching curves of each grating should be arranged as sparsely as possible to cover a large FOV. The second rule sets the maximum gap at this time.
The zipping function becomes possible because the spacing of the Bragg-matching curves is different in the guiding angle range and in the out-coupled angle range. As a result, in the design represented in
In the above description, a 1D FOV along the x-axis has been considered. In the following section, a 2D angular space in the xy-plane will be considered.
As can be seen in
In
Since VBG zipping couplers use high spectral selectivity, if the spectral bandwidth of the light source is too wide, only a part of it can be transmitted. That is, the light efficiency will decrease. On the other hand, as shown in
A VBG zipping coupler, as described herein, necessarily introduces color nonuniformity because different wavelengths are transmitted differently depending on the angle within the FOV, even within one color channel. Therefore, post-compensation for such angular transmission differences may be necessary to achieve a desired output. Through this post-compensation, the output light will have a narrower color gamut than a system using a laser. However, the gamut can still be wide enough for a typical display system.
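Such post-compensation can be sketched as scaling the per-angle drive level so that the product of drive and transmission is uniform across the FOV. The transmission values below are made-up placeholders, not measured data.

```python
# Illustrative post-compensation for angular transmission differences:
# attenuate the drive at brighter angles so every angle's net output
# matches the dimmest (floor) transmission.
transmission = [0.9, 0.6, 0.75, 0.5, 0.8]  # relative transmission per FOV angle
floor = min(transmission)
gains = [floor / t for t in transmission]  # per-angle compensation gains
output = [g * t for g, t in zip(gains, transmission)]
# every element of `output` now equals `floor`, i.e., uniform across angles
```

This is a naive floor-matching scheme; a real system could instead compensate in the source drive current per pixel, at the cost of the narrower gamut noted above.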
In the previous sections, only a green channel (525 nm to 575 nm) was analyzed. In some embodiments, a single zipping VBG design may be utilized to cover all three RGB wavelengths. Additionally or alternatively, three separate VBG waveguides (one for each of the three RGB wavelength ranges) may be stacked.
Each color channel VBG may have the same slant angle as shown in
First, the added middle grating should cover a wide and irregular-shaped angular band as shown in
Second, the diffraction efficiency of the out-coupler should be intentionally made much lower. For this, a design change may be required to reduce the refractive index modulation Δn of the grating or the thickness t. Looking at Equations 3 and 4, there is no need to change the design for the slant angle or grating pitch because the Bragg matching condition and its bandwidth are independent of Δn or t. However, since maximum diffraction efficiency is reduced, bandwidth based on 2% efficiency may not be sufficient for a high SNR.
The designs described in the preceding sections as properly designed zipping VBG couplers include certain assumptions that may be varied as necessary. For example, the spectral bandwidth was set to 50 nm, the VBG thickness was set to 100 μm, and the maximum refractive index modulation was set to 0.03. The target FOV of 120°×120°, the refractive index of 1.5, and the usable guiding angle range under 75° are also arbitrarily selected values. All of these values may be changed or limited as needed.
As discussed herein, the present disclosure is generally directed to display systems, devices, and methods that include zipping VBG couplers. Optimized VBG couplers can deliver a wide FOV beyond conventional limits utilizing even relatively low-refractive-index waveguide materials. Wide FOVs (e.g., approximately 120°×120°) may be achieved with a waveguide having a refractive index as low as approximately 1.5. Low-cost, wide-FOV waveguides are therefore achievable with the presently disclosed systems. Optimized light sources may additionally be utilized for maximum system efficiency.
Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial-reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof. Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without near-eye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 1200 in
Turning to
In some embodiments, augmented-reality system 1200 may include one or more sensors, such as sensor 1240. Sensor 1240 may generate measurement signals in response to motion of augmented-reality system 1200 and may be located on substantially any portion of frame 1210. Sensor 1240 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented-reality system 1200 may or may not include sensor 1240 or may include more than one sensor. In embodiments in which sensor 1240 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 1240. Examples of sensor 1240 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
In some examples, augmented-reality system 1200 may also include a microphone array with a plurality of acoustic transducers 1220(A)-1220(J), referred to collectively as acoustic transducers 1220. Acoustic transducers 1220 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 1220 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in
In some embodiments, one or more of acoustic transducers 1220(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 1220(A) and/or 1220(B) may be earbuds or any other suitable type of headphone or speaker.
The configuration of acoustic transducers 1220 of the microphone array may vary. While augmented-reality system 1200 is shown in
Acoustic transducers 1220(A) and 1220(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 1220 on or surrounding the ear in addition to acoustic transducers 1220 inside the ear canal. Having an acoustic transducer 1220 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 1220 on either side of a user's head (e.g., as binaural microphones), augmented-reality device 1200 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 1220(A) and 1220(B) may be connected to augmented-reality system 1200 via a wired connection 1230, and in other embodiments acoustic transducers 1220(A) and 1220(B) may be connected to augmented-reality system 1200 via a wireless connection (e.g., a BLUETOOTH connection). In still other embodiments, acoustic transducers 1220(A) and 1220(B) may not be used at all in conjunction with augmented-reality system 1200.
Acoustic transducers 1220 on frame 1210 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 1215(A) and 1215(B), or some combination thereof. Acoustic transducers 1220 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 1200. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 1200 to determine relative positioning of each acoustic transducer 1220 in the microphone array.
In some examples, augmented-reality system 1200 may include or be connected to an external device (e.g., a paired device), such as neckband 1205. Neckband 1205 generally represents any type or form of paired device. Thus, the following discussion of neckband 1205 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
As shown, neckband 1205 may be coupled to eyewear device 1202 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 1202 and neckband 1205 may operate independently without any wired or wireless connection between them. While
Pairing external devices, such as neckband 1205, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 1200 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 1205 may allow components that would otherwise be included on an eyewear device to be included in neckband 1205 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 1205 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 1205 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 1205 may be less invasive to a user than weight carried in eyewear device 1202, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
Neckband 1205 may be communicatively coupled with eyewear device 1202 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 1200. In the embodiment of
Acoustic transducers 1220(I) and 1220(J) of neckband 1205 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of
Controller 1225 of neckband 1205 may process information generated by the sensors on neckband 1205 and/or augmented-reality system 1200. For example, controller 1225 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 1225 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 1225 may populate an audio data set with the information. In embodiments in which augmented-reality system 1200 includes an inertial measurement unit, controller 1225 may compute all inertial and spatial calculations from the IMU located on eyewear device 1202. A connector may convey information between augmented-reality system 1200 and neckband 1205 and between augmented-reality system 1200 and controller 1225. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented reality system 1200 to neckband 1205 may reduce weight and heat in eyewear device 1202, making it more comfortable to the user.
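The DOA estimation mentioned above can be illustrated with a simplified two-microphone, far-field sketch: the inter-microphone time delay is found at the peak of the cross-correlation, and the arrival angle follows from delay = (d/c)·sin θ. This is a generic textbook approach, not necessarily the algorithm used by controller 1225; the function names and values are illustrative.

```python
import math

def estimate_delay(sig_a, sig_b, fs):
    """Delay (seconds) of sig_b relative to sig_a, taken at the peak of
    the discrete cross-correlation r(lag) = sum_i a[i] * b[i + lag]."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        v = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                v += sig_a[i] * sig_b[j]
        if v > best_val:
            best_val, best_lag = v, lag
    return best_lag / fs

def doa_two_mic(delay_s, mic_spacing_m, c=343.0):
    """Far-field arrival angle from the inter-microphone delay:
    delay = (d / c) * sin(theta)  ->  theta = asin(c * delay / d)."""
    x = c * delay_s / mic_spacing_m
    x = max(-1.0, min(1.0, x))  # clamp numerical overshoot
    return math.degrees(math.asin(x))
```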
Power source 1235 in neckband 1205 may provide power to eyewear device 1202 and/or to neckband 1205. Power source 1235 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 1235 may be a wired power source. Including power source 1235 on neckband 1205 instead of on eyewear device 1202 may help better distribute the weight and heat generated by power source 1235.
As noted, some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual-reality system 1300 in
Artificial-reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, microLED displays, organic LED (OLED) displays, digital light projection (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay light (to, e.g., the viewer's eyes). These optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
In addition to or instead of using display screens, some of the artificial-reality systems described herein may include one or more projection systems. For example, display devices in augmented-reality system 1200 and/or virtual-reality system 1300 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial-reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
The artificial-reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
The artificial-reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
In some embodiments, the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
By providing haptic sensations, audible content, and/or visual content, artificial-reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various example methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the example embodiments disclosed herein. This example description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to any claims appended hereto and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and/or claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and/or claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and/or claims, are interchangeable with and have the same meaning as the word “comprising.”
Wearable biopotential measurement technologies (e.g., electromyography, electrocardiography) use dry electrodes to record biopotentials from the human body. Biopotential electrodes with low skin-electrode impedance values are desired for improved contact quality, noise performance, and signal quality. However, contact-based health sensing electrodes suffer from the presence of hair on the skin surface. Hair blocks the signal transmission from the skin to the electrode by creating a resistive layer, distorts biopotential signals, and contributes to the baseline noise.
The present disclosure is generally directed to systems and methods for human-computer interaction. This disclosure details the development of biopotential electrodes with surface microstructures for improving hair penetration as well as decreasing the skin-electrode contact impedance at hairy sites of the skin. As will be explained in greater detail below, embodiments of the present disclosure may measure from different angles, by at least one processor and using an imaging technique, skin hair coverage in a skin surface region of a user. Embodiments of the present disclosure may also determine, by the at least one processor and based on the measurements, specifications of biopotential electrodes including microstructures on a surface thereof configured to contact the skin surface region of the user and locations of the biopotential electrodes on a wearable device. Embodiments of the present disclosure may further perform human-computer interaction, by at least one processor, based on biopotential measurements provided by the biopotential electrodes that include microstructures on the surface thereof that are configured to contact the skin surface region of the user.
Embodiments of the present disclosure may perform the human-computer interaction in various ways. For example, the disclosed microstructured electrodes can be applied to any body-worn devices with biopotential recording or stimulation functionalities. In some implementations, the disclosed microstructured electrodes can be used as signal recording electrodes of electromyography (EMG) wristbands. Also, the electrodes on the hairy sites of the wrist (palmar, ulnar, or radial sites) can be replaced with the disclosed microstructured electrodes for improved skin-electrode coupling. Additionally, the disclosed microstructured electrodes can be used as electrodes of chest-worn straps or bands for wellness and fitness monitoring (electrocardiography monitoring, respiration monitoring, lung health monitoring with electrical impedance tomography). In these and other contexts, the microstructured electrodes may improve skin-electrode coupling on the hairy sites of the chest. Further, the disclosed microstructured electrodes can be used as electrodes of disposable or continuous signal recording or stimulation patches. Further, the disclosed microstructured electrodes can be used as electrodes of impedance plethysmography (IPG) devices for continuous blood pressure monitoring, electrodes of total or local skin hydration or perspiration monitoring, and/or electrodes of other wearable stimulation or therapeutic applications (e.g., cancer). Further, the disclosed microstructured electrodes can be used as biopotential recording electrodes of AR/VR/MR glasses and headsets. In some implementations, the microstructured electrodes can be integrated on the temple of eyeglasses or other parts of headsets to interface with the hairy sites on the head to record health biometrics.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The following will provide, with reference to
As illustrated in
At step 1420, one or more of the systems described herein may determine specifications and locations of biopotential electrodes. For example, electrode specification and location determination module 1406, as part of system 1500 in
At step 1430, one or more of the systems described herein may perform human-computer interaction. For example, human-computer interaction performance module 1408, as part of system 1500 in
A system for human-computer interaction may be implemented in any suitable manner. Turning to
Example system 1500 in
Referring to
Referring to
Referring to
Referring to
Comparing
Referring to
Referring to
Comparing
Referring to
Referring to
Referring to
Comparing
As set forth above, results from two subjects show that the presence of microstructures on the surface of biopotential electrodes (e.g., microstructures that do not penetrate the skin) has the potential of decreasing the skin-electrode impedance for subjects with significant skin hair coverage. In the study, micromachined metal electrodes with varying shape, pitch, and height of surface microstructures were used to measure skin-electrode impedance using a desktop impedance recording system. Results show that dense and tall microstructured electrodes may be effective in decreasing the skin-electrode impedance of a subject with at least 30% skin hair coverage. This improvement at the skin-electrode interface may be attributed to improved electrode penetration and increased electrode surface area in the presence of hair, with respect to electrodes without surface microstructures.
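The skin-electrode interface discussed above is often modeled, in textbook fashion, as a series resistance in series with a parallel combination of a charge-transfer resistance and a double-layer capacitance. The sketch below uses that generic equivalent circuit with illustrative component values; it is not a model fit to the study data.

```python
import math

def skin_electrode_impedance(freq_hz, r_series=1e3, r_ct=100e3, c_dl=50e-9):
    """|Z| of a common skin-electrode equivalent circuit: series resistance
    (electrolyte/gel and tissue) in series with a parallel RC formed by the
    charge-transfer resistance and double-layer capacitance.
    Component values are illustrative placeholders."""
    w = 2.0 * math.pi * freq_hz
    z_c = 1.0 / (1j * w * c_dl)            # double-layer capacitor impedance
    z_par = (r_ct * z_c) / (r_ct + z_c)    # parallel combination
    return abs(r_series + z_par)

# The model reproduces the expected trend: impedance magnitude falls
# as frequency rises, since the capacitor shunts the large resistance.
z_low, z_high = skin_electrode_impedance(10.0), skin_electrode_impedance(1000.0)
```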
The disclosed microstructured electrodes can be used in various ways. For example, microstructured biopotential recording electrodes with varying shapes, densities, and heights can be personalized based on the user's needs. For example, during the mechanical design of a wristband, wrist skin hair coverage of the user can be measured from different angles using an imaging technique. Then, the specifications of microstructured electrodes and the locations of microstructured electrodes on the wristband can be determined based on the wrist areas with significant hair coverage, such as palmar, ulnar, or radial wrist sites. Additionally, with the disclosed microstructured electrodes, low skin-electrode impedance can be achieved in the presence of significant skin hair coverage (>30%) with respect to benchmarks (e.g., electrodes without surface microstructures). The disclosed microstructured electrodes may improve the effectiveness of EMG or impedance-based gesture detection systems due to their improved skin-electrode coupling. Also, the disclosed microstructured electrodes can enable smooth wristband integration and improved user comfort. For example, the microstructured electrodes can be integrated in a manner that renders them flush with the surface of the wristband, which eliminates the need for protruding electrodes. Further, materials of the microstructured electrodes can be extended to include soft materials such as conductive polymers for improved user comfort, increased hair penetration, ease of fabrication, and cost effectiveness.
In some embodiments, a computer-implemented method may include measuring from different angles, by at least one processor and using an imaging technique, skin hair coverage in a skin surface region of a user; determining, by the at least one processor and based on the measurements, specifications of biopotential electrodes including microstructures on a surface thereof configured to contact the skin surface region of the user and locations of the biopotential electrodes on a wearable device; and performing human-computer interaction, by the at least one processor, based on biopotential measurements provided by the biopotential electrodes that include microstructures on the surface thereof that are configured to contact the skin surface region of the user.
In one embodiment, a system may include at least one physical processor and a computer-readable medium having instructions recorded thereon that, when executed by the at least one physical processor, cause the at least one physical processor to measure from different angles, using an imaging technique, skin hair coverage in a skin surface region of a user; determine, based on the measurements, specifications of biopotential electrodes including microstructures on a surface thereof configured to contact the skin surface region of the user and locations of the biopotential electrodes on a wearable device; and perform human-computer interaction based on biopotential measurements provided by the biopotential electrodes that include microstructures on the surface thereof that are configured to contact the skin surface region of the user.
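The measure-and-determine workflow described above can be sketched as follows. This is a simplified illustration only: the function names, the pre-reduced per-angle coverage values, and the electrode dimensions are assumptions for illustration, not the disclosed implementation; the 30% coverage figure mirrors the benchmark discussed earlier.

```python
# Hypothetical sketch of the electrode-placement workflow: estimate hair
# coverage per wrist site from multi-angle measurements, then select
# microstructured electrodes for sites with significant coverage.
HAIR_COVERAGE_THRESHOLD = 0.30  # 30% coverage, per the benchmark above

def measure_hair_coverage(coverage_by_site_and_angle):
    """Average per-angle coverage estimates into one fraction per site.
    (Real implementations would derive these fractions from images.)"""
    return {site: sum(vals) / len(vals)
            for site, vals in coverage_by_site_and_angle.items()}

def determine_electrode_specs(coverage_by_site):
    """Choose dense, tall microstructured electrodes for hairy sites and
    flat electrodes elsewhere. Pitch/height values are illustrative."""
    specs = {}
    for site, coverage in coverage_by_site.items():
        if coverage >= HAIR_COVERAGE_THRESHOLD:
            specs[site] = {"type": "microstructured",
                           "pitch_um": 200, "height_um": 500}
        else:
            specs[site] = {"type": "flat"}
    return specs

coverage = measure_hair_coverage({
    "palmar": [0.42, 0.38], "ulnar": [0.35, 0.31], "dorsal": [0.05, 0.08]})
specs = determine_electrode_specs(coverage)
```

A downstream layout step could then place the microstructured electrodes only at the flagged sites (here, palmar and ulnar), keeping the band surface flush elsewhere.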
Haptic feedback may be provided by interfaces positioned within a user's environment (e.g., chairs, tables, floors, etc.) and/or interfaces on articles that may be worn or carried by a user (e.g., gloves, wristbands, etc.). As an example,
One or more vibrotactile devices 2740 may be positioned at least partially within one or more corresponding pockets formed in textile material 2730 of vibrotactile system 2700. Vibrotactile devices 2740 may be positioned in locations to provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 2700. For example, vibrotactile devices 2740 may be positioned against the user's finger(s), thumb, or wrist, as shown in
A power source 2750 (e.g., a battery) for applying a voltage to the vibrotactile devices 2740 for activation thereof may be electrically coupled to vibrotactile devices 2740, such as via conductive wiring 2752. In some examples, each of vibrotactile devices 2740 may be independently electrically coupled to power source 2750 for individual activation. In some embodiments, a processor 2760 may be operatively coupled to power source 2750 and configured (e.g., programmed) to control activation of vibrotactile devices 2740.
Vibrotactile system 2700 may be implemented in a variety of ways. In some examples, vibrotactile system 2700 may be a standalone system with integral subsystems and components for operation independent of other devices and systems. As another example, vibrotactile system 2700 may be configured for interaction with another device or system 2770. For example, vibrotactile system 2700 may, in some examples, include a communications interface 2780 for receiving and/or sending signals to the other device or system 2770. The other device or system 2770 may be a mobile device, a gaming console, an artificial-reality (e.g., virtual-reality, augmented-reality, mixed-reality) device, a personal computer, a tablet computer, a network device (e.g., a modem, a router, etc.), a handheld controller, etc. Communications interface 2780 may enable communications between vibrotactile system 2700 and the other device or system 2770 via a wireless (e.g., Wi-Fi, BLUETOOTH, cellular, radio, etc.) link or a wired link. If present, communications interface 2780 may be in communication with processor 2760, such as to provide a signal to processor 2760 to activate or deactivate one or more of the vibrotactile devices 2740.
Vibrotactile system 2700 may optionally include other subsystems and components, such as touch-sensitive pads 2790, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 2740 may be configured to be activated for a variety of different reasons, such as in response to the user's interaction with user interface elements, a signal from the motion or position sensors, a signal from the touch-sensitive pads 2790, a signal from the pressure sensors, a signal from the other device or system 2770, etc.
Although power source 2750, processor 2760, and communications interface 2780 are illustrated in
Haptic wearables, such as those shown in and described in connection with
Head-mounted display 2802 generally represents any type or form of virtual-reality system, such as virtual-reality system 2800 in
While haptic interfaces may be used with virtual-reality systems, as shown in
One or more of band elements 2932 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 2932 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 2932 may include one or more of various types of actuators. In one example, each of band elements 2932 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.
Haptic devices 2710, 2720, 2804, and 2930 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 2710, 2720, 2804, and 2930 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 2710, 2720, 2804, and 2930 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's artificial-reality experience. In one example, each of band elements 2932 of haptic device 2930 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user.
Dongle portion 3120 may include antenna 3152, which may be configured to communicate with antenna 3150 included as part of wearable portion 3110. Communication between antennas 3150 and 3152 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radiofrequency signaling and BLUETOOTH. As shown, the signals received by antenna 3152 of dongle portion 3120 may be provided to a host computer for further processing, display, and/or for effecting control of a particular physical or virtual object or objects.
Although the examples provided with reference to
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive biopotential measurements to be transformed, transform the biopotential measurements, output a result of the transformation to perform human-computer interaction, use the result of the transformation to perform human-computer interaction, and store the result of the transformation to perform human-computer interaction. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure. Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”
Social networking systems provide many ways for users to engage with each other. For example, social networking systems enable users to create content, share content, comment on each other's shared content, and compose and send digital messages to each other. In some implementations, social networking systems also provide forums where groups of users may submit electronic messages to a group of social networking system users. These messages may be seen by any member of the group. In some implementations, the forum may also be public such that any social networking system user may view the messaging content within the forum.
While these public messaging forums enable discourse between a larger number of users, they also can give rise to an increase in adversarial behavior. For example, users can add messages to public messaging forums within a social networking system that include adversarial behavior such as hate speech, bullying, explicit language, and so forth.
In light of this, the present disclosure is generally directed to systems and methods for identifying and mitigating escalating behavior in public messaging forums. As will be explained in greater detail below, embodiments of the present disclosure may seed low confidence phrases and keywords in a repository and compare public messages against the low confidence phrases and keywords. The systems and methods may further send any matching messages for human review. If human review determines that an identified message includes adversarial behavior, the systems and methods described herein may mitigate this behavior by removing the message from the forum and/or by censuring the message sender. Over time, the systems and methods described herein may upgrade low confidence keywords and phrases that are consistently determined to be adversarial to high confidence keywords and phrases. The systems and methods described herein may automatically mitigate future messages that correspond to high confidence keywords and phrases without any human intervention.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
As mentioned above, escalation handling may be a critical integrity capability needed to respond to unanticipated increases in adversarial behavior and to mitigate harm swiftly and effectively. As popularity of and engagement with new public messaging platforms increase, this capability becomes even more critical, especially because building mature machine learning models takes time. For example, sophisticated machine learning models need ample data to learn patterns from and to make predictions with high precision. Public message forums and other messaging platforms that are not subject to end-to-end encryption may not have had the ability to automatically identify and mitigate adversarial behavior.
As such, an escalation handling system is described herein that leverages keyword and phrase matching. In some implementations, the escalation handling system may focus mainly on text as text is the primary modality in most public messaging forums. In other implementations, the escalation handling system may include features that focus on images, video, audio, and other means of communication.
In at least one implementation, the escalation handling system can seed a repository (e.g., a text bank) with low confidence keywords and phrases. In one or more implementations, low confidence keywords and phrases can include language that potentially intends harm but without enough certainty to directly cause mitigation. In at least one implementation, the escalation handling system can scan newly created public messages (e.g., associated with a particular group or forum) to determine if the content of the messages is similar to any of the low confidence keywords or phrases in the repository. For example, the escalation handling system can utilize string comparison or machine learning techniques to determine similarity. In response to determining that a message is similar to a low confidence keyword or phrase, the escalation handling system can send that message to a human reviewer.
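One possible form of the string-comparison approach mentioned above is sketched below using Python's standard-library `difflib`. The seeded phrases, the helper name, and the 0.8 similarity threshold are assumptions for illustration, not values from the disclosure.

```python
# Illustrative low-confidence matching: flag a message if it contains a
# seeded phrase verbatim, or if it is similar to one by string ratio.
from difflib import SequenceMatcher

LOW_CONFIDENCE_BANK = {"example harmful phrase", "another seeded phrase"}

def matches_low_confidence(message, bank, threshold=0.8):
    """Return True if the message matches any seeded phrase, either as a
    substring or by overall similarity ratio."""
    text = message.lower()
    for phrase in bank:
        if phrase in text:
            return True
        if SequenceMatcher(None, phrase, text).ratio() >= threshold:
            return True
    return False
```

A production system would likely tokenize messages and compare phrase-sized windows (or use learned embeddings) rather than whole-message ratios, but the flag-then-review flow is the same.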
In one or more implementations, the human reviewer may assess the message to determine whether the message includes or indicates adversarial behavior. In some implementations, the escalation handling system can filter the number of messages sent for human review when human review capacity is constrained. For example, the escalation handling system can filter the messages based on virality. To illustrate, the escalation handling system may send a message for human review when the message is similar to a low confidence keyword or phrase and the message has been read a number of times that surpasses a predetermined threshold (e.g., the message has been read by more than 100 forum members or more than 100 times). The escalation handling system can determine virality based on messaging activity surrounding a particular message (e.g., shares, responses, reads).
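The virality filter described above can be sketched in a few lines. The field names and the read-count threshold of 100 are illustrative assumptions mirroring the example in the text.

```python
# Minimal sketch of the virality filter: of the messages already flagged by
# low-confidence matching, only those read more than a threshold number of
# times are queued for constrained human review.
READ_THRESHOLD = 100

def filter_for_review(flagged_messages, read_threshold=READ_THRESHOLD):
    """Keep only flagged messages whose read count exceeds the threshold."""
    return [m for m in flagged_messages if m["reads"] > read_threshold]

queue = filter_for_review([
    {"id": 1, "reads": 250},  # viral: queued for review
    {"id": 2, "reads": 12},   # low reach: dropped while capacity is limited
])
```

A richer virality score could combine shares and responses with reads, as the text suggests, without changing the shape of this filter.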
In response to a human reviewer assessing a message and determining that the message includes adversarial behavior, the escalation handling system can take steps to mitigate the message. For example, the escalation handling system can remove the message from the messaging forum. In additional or alternative implementations, the escalation handling system can also take mitigation steps in connection with the message sender. For example, the escalation handling system can enter a strike against the message sender and maintain a record of that strike. In response to determining that the message sender has more than a threshold number of strikes, the escalation handling system may take further steps against the message sender such as removing the message sender from the forum, removing messaging privileges from the message sender, and so forth.
Over time, the escalation handling system may determine that certain low confidence keywords and/or phrases have a high probability of leading to mitigation steps upon human review. In response to this determination, the escalation handling system can move these low confidence keywords and/or phrases to a high confidence repository (e.g., text bank). At this point, the escalation handling system may automatically scan newly created public messages within the forum for content similar to keywords and phrases in the high confidence repository. In response to determining that a message is similar to a keyword or phrase in the high confidence repository, the escalation handling system may automatically take mitigation steps in connection with that message, without any human intervention. As such, the escalation handling system may remove the message and/or enter strikes against the message sender. In some implementations, the escalation handling system may automatically take additional mitigating steps against the message sender if the number of strikes against the sender exceeds a predetermined threshold.
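The promotion step described above can be sketched as follows. The minimum review count and the 0.9 violation ratio are assumed parameters; the disclosure specifies only that consistently violating keywords are upgraded.

```python
# Sketch of promoting keywords from the low- to the high-confidence bank once
# human review has confirmed them as violating often enough.
def promote_keywords(review_history, low_bank, high_bank,
                     min_reviews=10, promote_ratio=0.9):
    """review_history maps keyword -> (times_reviewed, times_found_violating).
    Keywords reviewed enough times with a high violation ratio move banks."""
    for kw, (reviewed, violating) in review_history.items():
        if (kw in low_bank and reviewed >= min_reviews
                and violating / reviewed >= promote_ratio):
            low_bank.discard(kw)
            high_bank.add(kw)

low = {"kw_a", "kw_b"}
high = set()
# kw_a was violating in 19 of 20 reviews; kw_b in only 5 of 20.
promote_keywords({"kw_a": (20, 19), "kw_b": (20, 5)}, low, high)
```

Matches against the high-confidence bank can then trigger enforcement directly, bypassing the review queue.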
In some implementations, in response to moving a low confidence keyword or phrase to the high confidence repository, the escalation handling system may retroactively identify messages previously entered into the forum for additional mitigation. For example, the escalation handling system may identify messages added to the messaging forum within a previous threshold amount of time (e.g., a week, a month) that are similar to keywords and phrases in the high confidence repository. The escalation handling system may then automatically take mitigation steps in connection with those identified messages as described above.
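A minimal version of this retroactive sweep might look like the following. The message fields, the day-based timestamps, and the seven-day window are illustrative assumptions (the text gives a week or a month as examples).

```python
# Sketch of the retroactive sweep: after a phrase is promoted to the
# high-confidence bank, re-check messages posted within a look-back window.
LOOKBACK_DAYS = 7

def retroactive_matches(messages, high_bank, now_day,
                        lookback_days=LOOKBACK_DAYS):
    """Return IDs of recent messages containing a high-confidence phrase."""
    return [m["id"] for m in messages
            if now_day - m["posted_day"] <= lookback_days
            and any(p in m["text"].lower() for p in high_bank)]

hits = retroactive_matches(
    [{"id": 1, "posted_day": 98, "text": "contains kw_a here"},   # recent hit
     {"id": 2, "posted_day": 50, "text": "contains kw_a too"},    # too old
     {"id": 3, "posted_day": 99, "text": "benign"}],              # no match
    {"kw_a"}, now_day=100)
```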
The escalation handling system may periodically seed the low confidence repository with keywords and phrases in connection with various types of harm. In this way, the escalation handling system can keep pace with current harmful language. Overall, the escalation handling system efficiently and effectively utilizes a hybrid approach to identify and mitigate messages that include harmful language and behavior.
The detection engine can further check for matches of the groups, threads, and/or messages against the low confidence signal bank (e.g., the low confidence repository) and the high confidence signal bank (e.g., the high confidence repository). In response to determining that a match exists with the low confidence signal bank, the detection engine can enqueue the group, thread, and/or message for human review. In response to determining that a match exists with the high confidence signal bank, the detection engine can automatically enforce one or more actions against the group, thread, and/or message.
When enforcement is needed, the escalation handling system can add strikes against a user or group. These strikes may accumulate based on the violation type. In response to the number of strikes exceeding a threshold amount, the escalation handling system can take down messages, disable or take down threads, and/or gate groups. Moreover, the escalation handling system can engage in bulk actioning based on message IDs. For example, the escalation handling system can retroactively analyze and potentially enforce against messages that were added to the platform in the past (e.g., within a threshold amount of time). Conversely, the escalation handling system can also retroactively undo a previous enforcement against a message, thread, or group based on additional analysis. Additionally, in some implementations, the escalation handling system can generate and show a visual indicator (e.g., a banner, popup, label) to give additional context to users once one or more enforcement actions have been taken.
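The strike accumulation described above can be sketched as a small ledger. The per-violation weights and the escalation threshold of 3 are assumptions for illustration; the disclosure says only that strikes accumulate based on violation type and trigger action past a threshold.

```python
# Sketch of weighted strike accumulation with threshold-based escalation.
from collections import defaultdict

STRIKE_WEIGHTS = {"hate_speech": 2, "explicit_language": 1}  # assumed weights
STRIKE_THRESHOLD = 3                                         # assumed threshold

class StrikeLedger:
    def __init__(self):
        self.strikes = defaultdict(int)

    def record(self, user_id, violation_type):
        """Add a weighted strike; return True if enforcement should escalate
        (e.g., take down messages, disable threads, gate groups)."""
        self.strikes[user_id] += STRIKE_WEIGHTS.get(violation_type, 1)
        return self.strikes[user_id] >= STRIKE_THRESHOLD

ledger = StrikeLedger()
first = ledger.record("user_1", "hate_speech")          # 2 strikes: no action
second = ledger.record("user_1", "explicit_language")   # 3 strikes: escalate
```

Keeping the ledger keyed by user (or group) ID also makes the bulk and retroactive actions above straightforward: undoing an enforcement is just subtracting the corresponding weight.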
Following enforcement, the escalation handling system can identify high confidence signals and add those signals to the high confidence signal banks. The escalation handling system can further log and monitor based on the previous enforcement. Moreover, in some implementations, the escalation handling system can use the high confidence signal bank to predict additional violating keywords. The escalation handling system can add or remove keywords from the high confidence signal bank based on these predictions.
Traditional methods of performing image color calibration for cameras suffer from accuracy drift across different luminance levels. Color correction accuracy can be negatively affected by an imaging system's vignette effect, the sensor's non-linear response to the camera's exposure time setting, and the off-axis effect of the device under test.
The present disclosure is generally directed to tensor-based cluster matching for optics system color matching. As will be explained in greater detail below, embodiments of the present disclosure may integrate additional variables into a five-dimensional tensor-based procedure using a geometric exposure-time look-up table and the conjugate of a camera vignette factor.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The following will provide, with reference to
As illustrated in
At step 3320, one or more of the systems described herein may estimate a color correction matrix. For example, at least one processor may estimate, based on the imaging results, a color correction matrix at least in part by integrating additional variables into a five-dimensional tensor-based procedure using a geometric exposure-time look-up table and the conjugate of a camera vignette factor.
At step 3330, one or more of the systems described herein may store the color correction matrix. For example, at least one processor may store the color correction matrix in a memory accessible to the at least one processor.
At step 3340, one or more of the systems described herein may modify the imaging results. For example, at least one processor may employ the color correction matrix to modify the imaging results.
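The apply step above can be illustrated with a heavily simplified per-pixel sketch: raw RGB values are compensated by the conjugate (here, the reciprocal) of the vignette factor and an exposure-time gain from a look-up table, then mapped through the stored correction matrix. This is not the five-dimensional tensor procedure itself; the LUT values, a plain 3x3 matrix, and the reciprocal interpretation of the conjugate are all assumptions for illustration.

```python
# Simplified per-pixel color correction: vignette conjugate and exposure-time
# gain first, then the stored color correction matrix.
import numpy as np

EXPOSURE_GAIN_LUT = {8: 1.10, 16: 1.00, 33: 0.95}  # exposure (ms) -> gain

def correct_pixel(rgb, vignette_factor, exposure_ms, ccm):
    """Apply the vignette conjugate, an exposure gain, and a 3x3 CCM."""
    gain = EXPOSURE_GAIN_LUT[exposure_ms] / vignette_factor
    return ccm @ (np.asarray(rgb, dtype=float) * gain)

# With an identity CCM, only the vignette/exposure compensation acts.
out = correct_pixel([0.2, 0.4, 0.6], vignette_factor=0.8, exposure_ms=16,
                    ccm=np.eye(3))
```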
Referring to
What is sought is the optimal linear transformation matrix A (e.g., 4 rows×3 columns) that best maps the processed color samples P into the corresponding original color samples O as in procedure 3908, where 1 is a column vector of N ones that provides a DC offset, or shift, in the brightness level. Thus, each transformed pixel color is a linear combination of a DC offset and the processed red, green, and blue samples. For example, the red color of the first transformed pixel can be determined according to procedure 3910, where the two subscripts on the A matrix elements denote their row and column positions, respectively. Given more than twelve independent RGB samples (i.e., more than the number of unknowns in the A matrix), the set of linear equations is over-determined, and the least-squares solution is given by procedure 3914, which is the fundamental equation used to estimate the A color correction matrix.
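Writing M = [P | 1] for the N×4 matrix of processed samples augmented with the column of ones, the over-determined system M·A ≈ O has the familiar normal-equations least-squares solution A = (MᵀM)⁻¹MᵀO. A hedged sketch of this estimate follows; the helper name is illustrative, and the exact form of procedures 3908/3914 is not reproduced here.

```python
import numpy as np

def estimate_ccm(P, O):
    """Estimate the 4x3 color correction matrix A such that [P | 1] @ A ~= O.

    P: N x 3 processed RGB samples; O: N x 3 original RGB samples.
    With more independent samples than the 12 unknowns in A, the system is
    over-determined and solved in the least-squares sense."""
    M = np.hstack([P, np.ones((P.shape[0], 1))])   # N x 4 design matrix
    # Normal equations: A = (M^T M)^-1 M^T O
    return np.linalg.solve(M.T @ M, M.T @ O)       # 4 x 3
```

Each column of the resulting A mixes the processed R, G, B samples and the DC offset into one output channel, matching the row/column subscript convention described above.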
The processed image normally contains spatial distortions as well as color calibration problems. In addition, the processed image may not be properly registered, or aligned, to the original image. These problems can create outliers (e.g., processed color samples that do not agree with the majority fit) that may unduly influence the estimation of the color correction matrix A. Utilizing an iterative least-squares solution with a cost function can help minimize the weight of outliers in the fit. This robust least-squares solution reduces the weight of outliers using a cost function that is inversely proportional to the error, or Euclidean distance, between the original sample O and the fitted processed sample Ô. If this error distance is large, then the associated cost of the fitting error will be small, and the outlier's influence on the estimate will be minimal. To implement the robust least-squares solution, procedure 3914 can be applied to the N matching original and processed RGB color samples. Procedure 3914 utilizes the Euclidean distance as the optimization function, with a tunable parameter corresponding to exposure time and the conjugate of the vignette factor. Meanwhile, a cost vector (C) can be generated according to procedure 3916. This cost vector can be an element-by-element reciprocal of the error vector (E) plus a small epsilon (ε), which is utilized to discount outliers from extremely over-exposed signals and dark-current noise that do not add value to the fit.
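The robust solution described above can be sketched as iteratively reweighted least squares: fit, measure each sample's Euclidean error E, form the cost vector C as the element-by-element reciprocal of E plus a small epsilon, and refit with those costs as weights. This is an illustrative sketch of the idea only; the exposure-time and vignette-factor parameters of the actual procedures 3914/3916 are omitted, and the function name is hypothetical.

```python
import numpy as np

def robust_estimate_ccm(P, O, iters=10, eps=1e-6):
    """Iteratively reweighted least squares for the 4x3 correction matrix A.

    Samples whose fitting error (Euclidean distance between the original
    sample O and the fitted sample O-hat) is large receive a small cost
    C = 1 / (E + eps), so outliers have minimal influence on the estimate."""
    N = P.shape[0]
    M = np.hstack([P, np.ones((N, 1))])        # N x 4 design matrix
    C = np.ones(N)                             # initial per-sample costs
    A = None
    for _ in range(iters):
        W = np.diag(C)
        # Weighted least squares: A = (M^T W M)^-1 M^T W O
        A = np.linalg.solve(M.T @ W @ M, M.T @ W @ O)
        E = np.linalg.norm(O - M @ A, axis=1)  # per-sample Euclidean error
        C = 1.0 / (E + eps)                    # cost vector, per procedure 3916
    return A
```

On data containing gross outliers (e.g., over-exposed samples), the reweighting drives the outliers' influence toward zero over a few iterations.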
Through continued optimization, this robust least-squares solution can yield a highly linear relationship between the processed signals P and the original signals O.
As set forth above, systems and methods of tensor-based cluster matching for optics system color matching are disclosed. For example, additional variables can be integrated into a 5-dimensional tensor-based procedure with a geometric exposure-time look-up table and a conjugate of a camera vignette factor.
The present disclosure describes an antenna system designed for mobile electronic devices. In one embodiment, a transparent uniplanar right-hand circularly polarized (RHCP) antenna may be provided with antenna feeding mechanisms for global positioning system (GPS) L1 band (1575.42 MHz-1609.31 MHz) communication. In another embodiment, a uniplanar antenna radiating structure constructed from a transparent conductive material (e.g., transparent metal mesh) may be provided. The transparent metal mesh may be divided into active and dummy/floating segments through a process referred to as “precise incision.” A denser, active metal mesh segment may be applied around the perimeter of the transparent metal mesh. The contours of the metal mesh segments may be designed such that the majority of the surface currents at each side are perpendicular to the other sides. In some cases, the antenna may be excited by another active metal mesh segment that is connected to a coplanar waveguide (CPW) feed and is capacitively coupled to the perimeter metal mesh segment.
In some cases, optically transparent conductors in the form of transparent metal mesh may allow visible light to pass through while simultaneously enabling conduction across the radio frequency (RF) spectrum. The implementations herein may have substantially lower sheet resistivity compared to other transparent conductors such as indium tin oxide (ITO) or aluminum zinc oxide (AZO). This renders transparent metal mesh a more suitable candidate for use as a conductor in high-frequency RF applications.
Additionally, the utilization of transparent metal mesh in the design of antennas may provide a greater degree of design freedom, as it enables the physical configuration of the antenna to be concealed in different active and dummy sections of the transparent metal mesh. These benefits of using transparent metal mesh may enable many different antenna designs on a given substrate, such as the lenses of a pair of augmented reality (AR) glasses. At least in some cases, the lenses are the single largest component within the glasses' form factor. As such, the use of transparent metal mesh may release fairly large portions of space within the AR glasses that were previously occupied by conventional laser direct structuring (LDS) antennas, flex antennas, or printed circuit board (PCB) antennas. The embodiments herein may utilize the added flexibility provided by the transparent metal mesh to design an optically transparent antenna while maintaining good antenna radiation efficiency. Furthermore, to minimize the complexity of integrating transparent metal mesh onto a lens through lamination, a uniplanar antenna with simple feeding may be provided.
In some cases, the RHCP antenna may be excited by the active metal mesh segment #2, which is connected to a coplanar waveguide (CPW) feed and capacitively feeds metal mesh segment #1. Active metal mesh segment #3 is electrically connected to active segment #1 and is located at the same-side corner as segment #2. The two sides of segment #3 are parallel with segment #1, such that the currents at the two open edges of segment #3 are also orthogonal to each other. Adjusting the dimensions of these two edges generates a 90-degree phase difference between the antenna's two orthogonal E-field components when the RHCP antenna is resonating at the desired frequency.
Additionally, a metalized border with a width of 1 mm, for example, may be applied to the perimeter of the transparent metal mesh. The metalized border's dimensions may be defined by L1 and W1. The metalized border may be hidden within the glass frame and may not be visible to the end users. The embodiments herein may reduce the sheet resistance on the edges, where the current concentration is highest, so as to enhance the radiating efficiency of the antenna. The RHCP antenna may be considered as a wide slot antenna that is excited by the active metal mesh segment L2×W2, which may be connected to a coplanar waveguide (CPW) feed. By adjusting the dimensions of the active metal mesh segment L3×W3 connected to the metalized border at the corner, a 90° phase difference may be achieved between two orthogonal electrical field (e-field) components without the need for external phase-shift networks or phase delay transmission lines, as may otherwise be required in dual-feed or single-feed circularly polarized patch or slot antennas.
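The effect of the 90° phase difference can be checked with the standard polarization-ellipse relationship: two orthogonal e-field components of magnitudes a and b with phase difference δ yield an axial ratio of 1 (0 dB, ideal circular polarization) exactly when a = b and δ = ±90°. The following is a small sketch of this textbook formula, not a simulation of the disclosed antenna.

```python
import math

def axial_ratio(a, b, delta_deg):
    """Axial ratio (major/minor axis) of the polarization ellipse formed by two
    orthogonal E-field components with magnitudes a, b and phase difference
    delta_deg (degrees). AR = 1 is circular; AR -> infinity is linear."""
    d = math.radians(delta_deg)
    s = math.sqrt(a**4 + b**4 + 2 * a**2 * b**2 * math.cos(2 * d))
    num = a**2 + b**2 + s
    den = a**2 + b**2 - s
    if den <= 0:
        return math.inf                     # degenerate: linear polarization
    return math.sqrt(num / den)

def axial_ratio_db(a, b, delta_deg):
    """Axial ratio expressed in dB (0 dB corresponds to perfect CP)."""
    ar = axial_ratio(a, b, delta_deg)
    return math.inf if math.isinf(ar) else 20 * math.log10(ar)
```

With equal magnitudes and δ = 90°, the axial ratio is exactly 1 (0 dB); any departure in either magnitude balance or phase degrades the CP purity, which is why the dimensions of the corner segment are tuned as described above.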
The simulated return loss and axial ratio for this transparent right-hand circularly polarized antenna are illustrated in
Furthermore, it should be noted that the CP band may be adjusted by altering the dimensions L3 and W3 of the active metal mesh segment, while keeping all other antenna parameters constant. Additionally, it should be noted that variations in the dimensions L2 and W2 of the active metal mesh segment and the gap g will not shift the resonance frequency of the GPS band but will affect the antenna's impedance matching. Ideally, GPS antennas should have good RHCP radiation over the entire upper hemisphere to efficiently receive incoming GPS signals. However, due to head blockage, most radiation is reflected in the forward direction.
In order to examine the slot region of the RHCP antenna in more detail,
In one specific embodiment, a system is provided. The system may include a substrate, a transparent conductive material applied to the substrate in a specified pattern that forms an antenna, and an electrically conductive border at least partially surrounding the substrate. In some cases, the transparent conductive material may be applied in at least two separate sections of the substrate. In such cases, the two separate sections of the substrate are separated by portions of non-conductive transparent material. This antenna may be a slot antenna or other type of antenna, which may be applied to the outer surface of a pair of AR glasses.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to any claims appended hereto and their equivalents in determining the scope of the present disclosure.
This application claims the benefit of priority under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 63/484,201 filed Mar. 2, 2023, Provisional Application No. 63/486,832 filed Feb. 24, 2023, Provisional Application No. 63/484,061 filed Feb. 9, 2023, Provisional Application No. 63/385,265 filed Nov. 29, 2022, and Provisional Application No. 63/481,363 filed Jan. 1, 2023, the contents of which are incorporated herein by reference in their entirety.
| Number | Date | Country |
| --- | --- | --- |
| 63484201 | Feb 2023 | US |
| 63486832 | Feb 2023 | US |
| 63484061 | Feb 2023 | US |
| 63385265 | Nov 2022 | US |
| 63481363 | Jan 2023 | US |