The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
The present disclosure is generally directed to object characterization using light scattering. As is explained in greater detail below, embodiments of the present disclosure may include an apparatus that includes a laser, a source fiber that delivers the laser radiation to an object, and a detector fiber that receives scattered laser radiation and illuminates a detector array with the scattered laser radiation to form speckles on the detector array. The detector array may include a plurality of detectors, and may be positioned to receive the scattered laser radiation from the end of the detector fiber. The distance between the detector array and the end of the detector fiber may be adjustable, and the fiber end and the detector array may be coupled optically by additional optical elements such as lenses, polarizers, filters, splitters, and combiners. A controller may be configured to receive detector data from the detector array, determine a time-dependent intensity autocorrelation function for each detector of a plurality of detectors, and determine an ensemble average autocorrelation function. The apparatus may provide information relating to dynamic processes within the object, such as fluid flow. In some examples, blood flow within a body part of a user may be characterized using multi-speckle diffuse correlation spectroscopy (mDCS).
Blood flow in specific areas of the brain may correlate with neuronal activity in those areas, indicating specific brain functions (such as the use of particular words, an emotion, an intent to select or interact with an object within a real or virtual environment, a desire to select an option such as a menu option, a desire to control a real or virtual device, a desire to operate a computer interface device such as a mouse, a desire to enter one or more alphanumeric characters, or other brain function). Thus, observing blood flow may provide the basis for a brain-computer interface. Light directed at a point of a person's head may penetrate and diffuse through that area of the head, creating a speckle field. Changes in the speckle field over time may provide information about blood flow in the targeted area, providing a non-invasive method for a brain-computer interface. An imaging array fed by a corresponding array of multi-mode fibers, which can penetrate hair, may collect light to observe the speckle field with minimal interference from a user's hair. In addition, correlating speckles in the speckle field to pixels in the imaging array on an N:1 speckle-to-pixel basis may provide a high signal-to-noise ratio. In some approaches, an apparatus may be configured to obtain an approximately 1:1 speckle-to-pixel ratio, and then the speckle diameter may be adjusted to at least approximately optimize the signal-to-noise ratio (SNR). This may lead to an N:1 speckle-to-pixel ratio, where N may be greater than 1 (e.g., approximately 2, 3, or 4, or within the range 1.5-5, such as between approximately 2 and approximately 4), depending on background noise. An example apparatus may be operated with a 1:1 speckle-to-pixel ratio at the detector array, or may operate with an N:1 speckle-to-pixel ratio at the detector array, where N≥1 (e.g., where N is between approximately 2 and approximately 4). In some examples, identification of a brain function may be correlated with an eye-tracking system to identify an object or virtual representation that the person desires to interact with.
In some examples, the imaging array may be large (e.g., 32×32, or N×M where N and M are both integers and N×M>1000). In some examples, an apparatus may include a larger detector array (e.g., a SPAD array), such as a 128×128 or a 512×512 detector array.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The following provides, with reference to the accompanying drawings, detailed descriptions of example apparatuses and methods for object characterization using light scattering.
Speckle correlation spectroscopy, such as diffuse correlation spectroscopy (DCS), measures the dynamics of scatterers deep within a scattering medium, such as blood-perfused tissue, by collecting the diffused laser light from the object illuminated by a source fiber and detecting the laser speckle fluctuations. The source fiber may have a source end configured to receive the laser radiation from the laser and a delivery end configured to illuminate the object with laser radiation. Since the penetration depth of laser radiation collected by the detector can be controlled by increasing the source-detector separation (ρ), DCS may enable the noninvasive monitoring of deep tissue dynamics such as cerebral blood flow. However, in some implementations, the sensitivity of DCS to cerebral hemodynamics may be limited by the low photon fluxes (Nph) detected at large ρ, since Nph decays exponentially with ρ. Values of ρ for DCS on the adult human head may typically not exceed 25-29 mm, corresponding to a mean sensitivity depth of roughly one-third to one-half of ρ (~10 mm), which may be insufficient to effectively probe through the scalp and skull. To address this issue, systems and methods of the present disclosure include high-sensitivity multi-speckle DCS (mDCS), which can extend ρ by detecting thousands of speckles in parallel to boost the signal-to-noise ratio (SNR).
In some examples, an apparatus may also include an optical arrangement configured to direct a portion of the unscattered laser radiation around the object. Scattered and unscattered radiation may be incident together on the detector array, and interference effects may further help increase the SNR. This is represented schematically in the accompanying drawings.
In some examples, the detector array 220 may detect 1024 speckles, or approximately one speckle per detector. This may offer a 32-fold (√1024) increase in signal-to-noise ratio over single-speckle DCS. The light source 202 may be a laser, such as a long-coherence (e.g., >9 m) continuous wave (CW) laser. The laser may have an emission wavelength within visible or IR wavelengths, such as a red or near-IR wavelength. In some examples, the laser emission wavelength may be in the range 700 nm-1200 nm, for example, 700 nm-900 nm, and example results were obtained using an emission wavelength of 785 nm. Source fiber 204 may direct laser radiation from the light source to the object 210, and the light may be coupled into the object 210 by delivery end 208 of source fiber 204. Light, such as scattered and/or diffused light, is collected from the object 210 by collector end 216 and directed along detector fiber 214 to detector end 218, where it emerges as light cone 222 to form speckles on the receiving surface of the detector array 220. The light is then detected by the detector array 220, and electronic signals from the detector array 220 may be provided to the controller 230.
In some experiments, the object 210 included an equilibrium homogeneous liquid phantom, and the detector array 220 was a 32×32 SPAD array detector. The object may be any object to be imaged, such as a human head, another bodily organ or component, or an animal. The source-detector separation (ρ) may be adjusted, and in some experiments was set to 11 mm. The fiber-SPAD distance (z) may also be adjusted, for example, by rotating an adjustable lens tube.
Since Nph decays by about a factor of 10 per 10 mm, the systems and methods described herein may give a ˜15 mm extension of ρ and a ˜6 mm increase in depth sensitivity over a single-speckle DCS. Greater improvements may also be achieved. This approach may be scalable to even more speckles (e.g., 10,000, 100,000, or more) with the use of large pixel count SPAD arrays, which may extend ρ to ˜50 mm or more and depth sensitivity to ˜21 mm or more, reaching the cortex.
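As an illustrative back-of-the-envelope sketch (ours, not part of the original disclosure), the ρ extension afforded by detecting M speckles in parallel may be estimated by assuming SNR ∝ √M and a photon flux decay of roughly a factor of 10 per 10 mm of ρ:

```python
import math

def rho_extension_mm(m_speckles: int, flux_decay_per_10mm: float = 10.0) -> float:
    """Estimate the extra source-detector separation (rho) afforded by detecting
    m_speckles in parallel, assuming SNR scales as sqrt(M) and the detected
    photon flux decays by flux_decay_per_10mm per 10 mm of rho."""
    snr_gain = math.sqrt(m_speckles)
    # The sqrt(M) SNR gain lets the detected photon flux drop by the same
    # factor, which translates into extra separation on the decay curve.
    return 10.0 * math.log(snr_gain, flux_decay_per_10mm)

print(round(rho_extension_mm(1024), 1))     # ~15.1 mm for a kilopixel array
print(round(rho_extension_mm(100_000), 1))  # ~25.0 mm for a ~100k-pixel array
```

Under these assumptions, a kilopixel array yields the ~15 mm extension noted above, and a ~100,000-pixel array would extend ρ by roughly 25 mm.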
The autocorrelation function is described in more detail below, along with an extensive discussion of the advantages of using a SPAD array.
Contrast values may be determined in laser speckle contrast imaging. For diffuse correlation spectroscopy (DCS), the correlation time may be determined and used to estimate the speed of the blood flow. In DCS, the variation in contrast values may affect the amplitude of the autocorrelation function (β), but not the correlation time (τ).
In some examples, a wearable device (such as the example devices described herein) may support one or more source fibers and one or more detector fibers.
In some examples, the distance between the detector end of a detector fiber and the detector array (and/or a lens configuration associated with the optical fiber) may be fixed (e.g., such that the average size of the speckles in the speckle fields is approximately one pixel). In some examples, the distance and/or the lenses may be adjustable to achieve a similar condition. The adjustment may be performed manually by a user and/or may be performed by the systems described herein during a calibration process (e.g., until the average speckle size is identified as approximately one pixel).
Further discussion of autocorrelation function characteristics now follows, including a detailed discussion of signal-to-noise ratio (SNR) characteristics. When an example apparatus injects coherent light into a dynamic scattering medium (such as that described above), the multiply scattered light forms a fluctuating speckle field, and the temporal intensity autocorrelation function (g2) of the detected light may be used to characterize the dynamics of the scatterers.
Even though the SNR of g2 may be increased by having longer integration times (the time over which the speckle intensity is recorded for each g2 calculation, Tint) and higher photon count rates (Nph), these parameters are limited by the time scales of the dynamics to be measured and the laser maximum permissible exposure (MPE) on skin. In one approach, a single-mode fiber (SMF) may be used as the detection fiber to ensure coupling of only one speckle onto a single-photon detector (called "single-speckle DCS"). In another approach, a multi-mode fiber (MMF) may be used to couple multiple (M) speckles onto the single-photon detector. However, although the detection count rate increases (Nph∝M), the magnitude of g2 decreases with the number of speckles arriving at the detector (β∝1/M), effectively resulting in no gain of the g2 SNR.
Multi-speckle DCS (mDCS) allows significant improvements in the SNR, compared to single-speckle DCS, using parallel DCS measurements of M>1 speckles to provide M independent photon-counting channels. Parallel DCS measurements of M=1024 speckles may be achieved by using a kilopixel SPAD array. While coupling one speckle on every pixel in the SPAD array gave an SNR gain of 32×, pulsing the laser source may give an additional SNR gain. For example, an experimentally obtained SNR gain of 3.2× from using pulsed laser radiation resulted in a total SNR gain of more than 100. The additional SNR gain due to pulsing the laser may be increased by reducing the duty cycle of the laser. For example, the SNR gain may be related to the inverse of the square root of the duty cycle. If the duty cycle is 1, the SNR gain may be 1 (corresponding to no SNR improvement). If the duty cycle is 0.25, then the SNR gain may be 2. The duty cycle may be adjusted to obtain a desired SNR gain. Systems and methods implementing this approach (e.g., pulsed laser operation) may provide a scalable implementation of DCS that allows both high SNR and high sensitivity to the cortex.
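A minimal sketch of this duty-cycle relationship (the function name is illustrative, not from the disclosure):

```python
import math

def pulsed_snr_gain(duty_cycle: float) -> float:
    """SNR gain from pulsing at a fixed (MPE-limited) average power.
    Peak count rate scales as 1/duty_cycle and effective integration time
    as duty_cycle; SNR is linear in count rate and square-root in
    integration time, so the net gain is 1/sqrt(duty_cycle)."""
    return 1.0 / math.sqrt(duty_cycle)

print(pulsed_snr_gain(1.0))     # 1.0 -- CW operation, no improvement
print(pulsed_snr_gain(0.25))    # 2.0
print(pulsed_snr_gain(1 / 15))  # ~3.87 theoretical (cf. ~3.2-3.5x measured)
```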
An example apparatus may include a laser, such as a semiconductor laser, for example, a visible (e.g., red-emitting) or near-IR CW laser diode. In some experiments, a 780 nm CW laser was coupled to an object using a source optical fiber (such as a single-mode fiber). The object may include a dynamic scattering medium. A detector optical fiber (e.g., a multimode fiber) may be used to direct scattered light to a detector, such as a SPAD array. The scattered light may include light diffusing out of the scattering medium. In some experiments, for evaluation purposes, the object included an equilibrium homogeneous liquid phantom. For example, a phantom may be used to determine the SNR for a particular apparatus configuration. In some examples, the object may be a static or a rotating diffuser, which may be used to determine the diameter of the speckles.
In some examples, the detector array may include 1024 SPADs arranged in a 32×32 array, with a pixel pitch of 50×50 μm and an active area of 6.95 μm in diameter at each pixel. In some examples, the pixels may be generally square, but other pixel shapes are possible, such as circular. Each pixel may have its own photon counting electronics that may run at, for example, greater than 250,000 frames per second. Operation at 625 kfps was obtained using a 32×32 detector array (in this example, a SPAD detector array that may be referred to as a SPAD camera or SPAD array). The SPAD detector array enables simultaneous detection of a plurality of speckles, and this results in an appreciable SNR improvement of the autocorrelation function (g2). In some examples, an apparatus may ensure that each SPAD pixel detects one or more speckles by adjusting the speckle size to be equal to or smaller than the pixel active area.
The average diameter of a speckle (d) obeys the relationship of Equation (1):
d=λz/D (Equation 1)
Here, λ is the wavelength of the light (785 nm), z is the distance between the detection fiber and the SPAD array (3.5-10 mm) and D is the fiber core diameter (e.g., 200-910 μm). In some examples, apparatus configurations may allow a reduction in d by decreasing z, and/or by using a larger fiber core diameter D.
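As a short illustrative sketch of Equation 1 (function and variable names are ours), the speckle diameter for the example values above, and the fiber-SPAD distance needed for a target speckle size, may be computed as follows:

```python
def speckle_diameter_um(wavelength_nm: float, z_mm: float, core_um: float) -> float:
    """Average speckle diameter d = lambda * z / D (Equation 1), in micrometers."""
    wavelength_um = wavelength_nm / 1000.0
    z_um = z_mm * 1000.0
    return wavelength_um * z_um / core_um

# 785 nm light, 3.5-10 mm fiber-SPAD distance, 400 um fiber core:
print(speckle_diameter_um(785, 3.5, 400))   # ~6.9 um (about one pixel active area)
print(speckle_diameter_um(785, 10.0, 400))  # ~19.6 um

def z_for_target_diameter_mm(wavelength_nm: float, target_d_um: float,
                             core_um: float) -> float:
    """Invert Equation 1: z = d * D / lambda, returned in millimeters."""
    return target_d_um * core_um / (wavelength_nm / 1000.0) / 1000.0

# Distance giving ~6.95 um speckles (one pixel active area) with a 400 um core:
print(z_for_target_diameter_mm(785, 6.95, 400))  # ~3.54 mm
```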
A high-contrast image of the speckles may be formed on the SPAD array, and the diameter of the speckles at varying distance z may be determined. This may use a high-throughput rotating diffuser phantom, such as the example configuration described below.
In other experiments, an equilibrium milk phantom in a reflection geometry with a ρ of 11 mm may be used in a configuration similar to that described above.
From each pixel (e.g., each detector of a detector array), the g2 function as a function of time lag τ may be calculated. Then a comparison may be made with the g2 curve obtained in a single run from a single pixel ("the pixel g2") as in Equation 2:

g2(τ)=⟨n(t)·n(t+τ)⟩/⟨n(t)⟩² (Equation 2)
The curve obtained from the ensemble average of all 1024 pixels ("the ensemble g2") is given by Equation 3:

g2,ens(τ)=(1/M)·Σi=1…M ⟨ni(t)·ni(t+τ)⟩/⟨ni(t)⟩² (Equation 3)
Here, n(t) is the number of photon counts in time bin t, the angle brackets ⟨ . . . ⟩ denote the average over an integration time Tint, and M is the number of independent speckle field measurements.
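A minimal numerical sketch of Equations 2 and 3 (a hedged illustration; the array shapes and names are ours, and no attempt is made to match the disclosed correlator hardware):

```python
import numpy as np

def pixel_g2(counts: np.ndarray, max_lag: int) -> np.ndarray:
    """Pixel g2 (Equation 2): g2(tau) = <n(t) n(t + tau)> / <n(t)>^2
    for one pixel's photon-count time series."""
    mean_sq = counts.mean() ** 2
    return np.array([
        (counts * counts).mean() / mean_sq if lag == 0 else
        (counts[:-lag] * counts[lag:]).mean() / mean_sq
        for lag in range(max_lag)
    ])

def ensemble_g2(frames: np.ndarray, max_lag: int) -> np.ndarray:
    """Ensemble g2 (Equation 3): average of the M pixel g2 curves.
    `frames` has shape (M pixels, time bins)."""
    return np.mean([pixel_g2(px, max_lag) for px in frames], axis=0)

# Synthetic Poisson counts with no real field correlations, so g2(tau>0) ~ 1:
rng = np.random.default_rng(0)
frames = rng.poisson(lam=2.0, size=(1024, 5000)).astype(float)
g2_ens = ensemble_g2(frames, max_lag=32)
```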
The SNR of g2 in single-speckle DCS is determined by a number of parameters, including bin time, integration time, count rate, decorrelation time, and coherence factor. In order to evaluate mDCS, a noise model that may be applied to single-speckle DCS may be extended to mDCS. Under the assumption of ergodicity, this may be accomplished by incorporating the number of speckles together with the integration time in the model. In this new model, the g2 standard deviation (STD) at each time lag may be estimated according to Equation 4 below:

σ(τ)≈√(T/(t·M))·[β²·((1+e^(−2ΓT))·(1+e^(−2Γτ))+2m·(1−e^(−2ΓT))·e^(−2Γτ))/(1−e^(−2ΓT))+2⟨n⟩⁻¹·β·(1+e^(−2Γτ))+⟨n⟩⁻²·(1+β·e^(−Γτ))]^(1/2), with τ=mT (Equation 4)
Here, T (=Tbin) is the correlator time bin interval, t (=Tint) is the averaging integration time, M is the number of detected speckles, β is the coherence factor g2(0), 2Γ is the decay rate, m is the delay time bin index, and ⟨n⟩ (=Nph×Tbin) is the count within Tbin per pixel. The multi-speckle factor M plays the same statistical role as Tint; this results from the ergodicity of a random process in the system. To validate the mDCS noise model, the measured g2 SNR obtained by increasing Tint or Nph may be compared to that obtained by increasing the ensemble size M at short Tint or low Nph.
This leads directly to Equation (6):

SNR(0)=β/σ(0)∝⟨n⟩·√(t·M/T)∝Nph·√Tint·√M (Equation 6)
The square-root dependence of SNR(0) on t (=Tint) and M arises from the ergodicity of the measured system. In this way, a higher SNR beyond the Tint and Nph limits is achieved by using larger M. DCS measurements may be configured to detect tissue dynamics at longer ρ (or larger penetration depths) at the expense of count rates. Therefore, this comparison validates the mDCS noise model in the low count rate regime. The ensemble averaging allows the recovery of some gains of the SNR even when the total count rate is approaching the SPAD's dark count rate (DCR) of <100 Hz per pixel.
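A small sketch of this shot-noise-limit scaling (illustrative only; constant factors such as β and Tbin are dropped, and the function name is ours):

```python
import math

def relative_snr(nph_hz: float, t_int_s: float, m_speckles: int) -> float:
    """Relative g2 SNR in the shot noise limit: SNR ~ Nph * sqrt(Tint * M)."""
    return nph_hz * math.sqrt(t_int_s * m_speckles)

baseline = relative_snr(nph_hz=1000, t_int_s=1.0, m_speckles=1)
kilopixel = relative_snr(nph_hz=1000, t_int_s=1.0, m_speckles=1024)
print(kilopixel / baseline)  # 32.0 -- the kilopixel ensemble gain

# Ergodicity makes M and Tint statistically interchangeable: a 1024x longer
# integration with a single speckle gives the same relative SNR.
long_run = relative_snr(nph_hz=1000, t_int_s=1024.0, m_speckles=1)
print(long_run / baseline)   # also 32.0
```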
In addition to capturing more speckles, the SNR may be further increased by pulsing the laser at higher peak power (Nph) at the expense of a shorter pulse width (Tint), as long as the average power is below the MPE on skin. This approach results in a net SNR gain because the SNR is linear in Nph and square-root in Tint. This approach may be validated by comparing the usual SNR from using a CW laser to the SNR that would be obtained from 15× laser input peak power at 1/15 duty cycle, which results in a 3.5× SNR gain. The combination of this pulsed mode (3.5× SNR gain) with the kilopixel SPAD array (32× SNR gain) leads to a total of >100× SNR gain.
Accordingly, examples of the present disclosure include a scalable approach for mDCS using a kilopixel SPAD array, and with a pulsed mode, to enhance the SNR gain by a factor of greater than 100 compared to single-speckle CW-DCS. This means that this technique may be used to measure signal changes on significantly faster time scales and/or at longer penetration depths. Thus, if a conventional technique would allow a ρ for DCS measurement on the adult human head as high as 29 mm using M=14 channels, the techniques discussed herein may allow an increase of ρ by ~15 mm or more and an increase in depth sensitivity by ~6 mm or more. A noise model for mDCS may be established by assuming speckle ergodicity (an assumption supported quantitatively by experimental results), where the SNR is approximately proportional to Nph×√Tint×√M in the shot noise limit. In addition, the mDCS model may be applied in the low photon limit. The kilopixel SPAD array offers a significant increase of channels (M=1024), by a factor of about 36 as compared to M=4-28.
With the advent of LIDAR technology, high-sensitivity kilopixel SPAD arrays with small dark count rates and high frame rates are commercially available, and detector arrays having larger numbers of pixels may be fabricated. This allows further increases in SNR using SPAD arrays having at least 10,000, at least 100,000, or in some examples at least 1 million pixels. With a larger number of pixels, a larger fiber core diameter may be used to accommodate more speckles, together with faster data transfer and processing rates for real-time mDCS measurements. As is explained in greater detail below, this technique can also be implemented in time-domain mDCS to enable enhanced depth sensitivity.
The approaches described herein for mDCS can also be implemented in the time domain to enhance sensitivity to deep tissue. As discussed above, the steady-state operation of mDCS may use a continuous wave (CW) laser light source. A challenge with steady-state DCS measurements is that the total signal includes the desired signals from deep tissues (e.g., the brain) in addition to the unwanted signals from the intervening superficial tissues (e.g., the scalp and skull). This problem occurs because, in steady-state DCS, all photons traveling from the source point that reach the detector point are detected, regardless of the path the photons took through the tissue. By employing a pulsed laser light source, improved multi-speckle time-domain diffuse correlation spectroscopy (mTD-DCS) may be achieved. The time-domain approach enables systems and methods described herein to selectively capture photons that have traveled along different path lengths through tissue. Photons are injected at the source point and the returning photons are captured at the detector point; as a rule of thumb, photons that have traveled through deep tissue have longer path lengths than photons that have traveled through superficial tissue. In other words, photons that have traveled through deep tissues may arrive a few hundred picoseconds to a few nanoseconds later than photons that have only traveled through superficial tissues. By applying time-gated strategies to the DCS signal, systems described herein can differentiate between short and long photon path lengths through the tissue and determine the tissue dynamics for different depths. This technique involves picosecond pulsed laser sources and time-correlated single-photon counting (TCSPC) to time-tag each detected photon with two values: the time-of-flight from the source to the detector point, used to obtain the temporal point-spread function (TPSF), and the absolute arrival time, used to calculate the temporal autocorrelation function for DCS. By evaluating the correlation functions over different time gates of the TPSF, TD-DCS may differentiate between early and late arriving photons and evaluate the dynamics at different depths within the tissue. The mDCS approach described herein, using kilopixel to megapixel SPAD arrays, may enable parallel independent measurements of TD-DCS signals and further increase the instrument sensitivity.
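A schematic sketch of the time-gating idea (a hedged illustration, not the disclosed implementation; the per-photon record layout and names are hypothetical):

```python
import numpy as np

def gate_photons(tof_ps: np.ndarray, arrival_s: np.ndarray,
                 gate_start_ps: float, gate_end_ps: float) -> np.ndarray:
    """Keep only photons whose time-of-flight (from the TPSF) falls within
    the gate; late gates preferentially select deep-tissue photons."""
    mask = (tof_ps >= gate_start_ps) & (tof_ps < gate_end_ps)
    return arrival_s[mask]

def bin_counts(arrivals_s: np.ndarray, t_bin_s: float, duration_s: float) -> np.ndarray:
    """Bin absolute arrival times into a count series for the g2 pipeline."""
    edges = np.arange(0.0, duration_s + t_bin_s, t_bin_s)
    counts, _ = np.histogram(arrivals_s, bins=edges)
    return counts

# Hypothetical photon records: time-of-flight and absolute arrival time.
rng = np.random.default_rng(0)
tof_ps = rng.exponential(scale=300.0, size=100_000)
arrival_s = np.sort(rng.uniform(0.0, 1.0, size=100_000))

# A late gate (here >600 ps) selects photons that likely probed deep tissue;
# the gated count series then feeds the same pixel/ensemble g2 computation.
late = gate_photons(tof_ps, arrival_s, gate_start_ps=600.0, gate_end_ps=3000.0)
counts = bin_counts(late, t_bin_s=1e-3, duration_s=1.0)
```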
The number of speckles that are projected on a SPAD array may determine the maximum SNR enhancement factor that can be achieved with the mDCS technique. However, the instantaneous number of speckles may change over time, because the speckles from dynamic scattering media are constantly changing (i.e., appearing and disappearing over time). Accordingly, an object detection technique may be used to locate every speckle and count the number of speckles per frame. The number of speckles per frame increases as the speckle size decreases.
An example configuration may use a rotating diffuser phantom in transmission geometry with a 785 nm CW laser source, a SMF source fiber end (4.4 μm core diameter, 0.13 NA), and an MMF detector fiber end (400 μm core diameter, 0.5 NA). The speckle diameter and the number of speckles can be adjusted by tuning the fiber-SPAD distance.
A pixel clustering method may be used to determine the diameter of the speckles. This method may use the decrease in magnitude of the coherence factor β as the photon counts from more speckles are summed up prior to calculating g2.
Here, β may be modeled as β(K)=βmax/K, where βmax=0.67 in this configuration and K is the number of speckles per cluster. β decreases as the cluster length is increased, and the crossing at β=βmax/2 corresponds to the cluster length at which two speckles per cluster are detected.
A pixel clustering method may determine the speckle size, or the number of speckles, even when a good visualization of the speckles is not available, a regime that is beyond the capability of conventional techniques.
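A sketch of the pixel clustering estimate (our own illustrative implementation of the idea described above): photon counts from L×L blocks of pixels are summed before computing g2, and the fitted β is tracked as the cluster length L grows.

```python
import numpy as np

def cluster_counts(frames: np.ndarray, cluster_len: int) -> np.ndarray:
    """Sum photon counts over (cluster_len x cluster_len) pixel blocks.
    `frames` has shape (rows, cols, time_bins), rows/cols divisible by cluster_len."""
    r, c, t = frames.shape
    blocks = frames.reshape(r // cluster_len, cluster_len,
                            c // cluster_len, cluster_len, t)
    return blocks.sum(axis=(1, 3)).reshape(-1, t)  # -> (clusters, time_bins)

def beta_estimate(counts: np.ndarray) -> float:
    """Estimate beta as g2(first lag) - 1, averaged over all clusters."""
    num = (counts[:, :-1] * counts[:, 1:]).mean(axis=1)
    den = counts.mean(axis=1) ** 2
    return float((num / den).mean() - 1.0)

# With real speckle data, beta falls as ~beta_max/K once each cluster spans
# K speckles; the cluster length where beta crosses beta_max/2 marks ~2
# speckles per cluster. (Synthetic Poisson data here gives beta ~ 0.)
rng = np.random.default_rng(0)
frames = rng.poisson(lam=1.0, size=(32, 32, 2000)).astype(float)
for length in (1, 2, 4, 8):
    print(length, beta_estimate(cluster_counts(frames, length)))
```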
Physical defects in SPAD pixels can effectively increase the dark count rate (DCR) due to trapped carriers and after-pulsing. Pixels with a high DCR (>100 Hz) may be referred to as “hot pixels” and pixels with a low DCR (<100 Hz) may be referred to as “cold pixels.”
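A minimal sketch of excluding hot pixels before ensemble averaging (the 100 Hz threshold comes from the text above; the array names and synthetic data are ours):

```python
import numpy as np

def cold_pixel_mask(dark_counts_hz: np.ndarray, dcr_limit_hz: float = 100.0) -> np.ndarray:
    """Boolean mask of 'cold' pixels (DCR below the limit), measured with the
    laser off; hot pixels are excluded from the ensemble g2 average."""
    return dark_counts_hz < dcr_limit_hz

# Example: keep only cold pixels of a 32x32 array for the ensemble average.
dcr = np.random.default_rng(1).exponential(scale=30.0, size=(32, 32))
mask = cold_pixel_mask(dcr)
print(int(mask.sum()), "cold pixels used for ensemble averaging")
```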
As discussed earlier, SNR is approximately proportional to Nph×√Tint×√M in the shot noise limit. Furthermore, Nph has an upper limit that is determined by the MPE for skin exposure, thus limiting the SNR. Although this limit can be overcome by capturing more speckles, the SNR can be increased further by changing the parameter combination between Nph and Tint, because the two parameters have different exponents (linear in Nph, square-root in Tint). In this way, Nph can be increased for a shorter effective Tint without exceeding the average power set by the MPE.
In some examples, a method may further include determining a characteristic time related to the object. The characteristic time of a dynamic physical process within the object may be determined based on the ensemble average autocorrelation function. The characteristic time may be related to fluid flow dynamics within the object, such as blood flow within a body part such as the head. In some examples, the controller may be configured to provide a controller output including a characteristic time determination (e.g., a characteristic of a dynamic process within the object) based on the ensemble average autocorrelation function.
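As one hedged illustration of how a characteristic time might be extracted from the ensemble g2 (the single-exponential model and names are ours; practical DCS analysis may use a diffusion-based model instead):

```python
import numpy as np
from scipy.optimize import curve_fit

def g2_model(tau, beta, gamma):
    """Single-exponential DCS model: g2(tau) = 1 + beta * exp(-2*gamma*tau)."""
    return 1.0 + beta * np.exp(-2.0 * gamma * tau)

def characteristic_time(taus_s: np.ndarray, g2_vals: np.ndarray) -> float:
    """Fit the ensemble g2 and return the decay time 1/(2*gamma), a simple
    proxy for the time scale of the dynamics (e.g., flow) in the object."""
    (beta, gamma), _ = curve_fit(g2_model, taus_s, g2_vals, p0=(0.5, 1e3))
    return 1.0 / (2.0 * gamma)

# Synthetic check: recover a 1 ms decay time from a noiseless curve.
taus = np.linspace(1e-6, 5e-3, 200)
g2 = g2_model(taus, beta=0.5, gamma=500.0)
print(characteristic_time(taus, g2))  # ~1e-3 s
```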
In some examples, a method may include receiving scattered radiation from an object, illuminating a detector array using the scattered radiation, and determining an ensemble average correlation function, for example, based on the time-dependent intensity correlation function for each of a plurality of detectors of the detector array. In some examples, the scattered radiation may be scattered laser radiation, for example, near-IR radiation having a wavelength between 700 nm and 900 nm.
In some examples, a method may include placing a wearable device on a user, where the wearable device is configured to irradiate a body part of the user using laser radiation and collect scattered laser radiation from the body part. The wearable device may include a band (e.g., a strap encircling a body part), wristband, ring, belt, helmet, spectacles, or other wearable device. The controller may be part of the wearable device. In some examples, the controller may be in wireless or wired communication with the wearable device. In some examples, some or all functions of the controller may be provided by a remote computer, such as a remote server.
In some examples, a computer-implemented method may include performing one or more steps of an example method as described herein. For example, a computer-implemented method may include receiving data from a plurality of detectors, determining a time-dependent autocorrelation function for each of the plurality of detectors, and determining an ensemble average autocorrelation function based on the time-dependent autocorrelation function for each of the plurality of detectors. An example method may further include control of a laser; for example, energizing the laser to produce pulsed laser radiation. An example method may further include adjusting a distance between a fiber end from which laser radiation emerges and a detector array, for example, so that a speckle dimension may at least approximately match a detector area dimension.
In some examples, an apparatus may include a laser, a source fiber configured to receive laser radiation from the laser at one end and deliver the laser radiation to an object from the other end, a detector fiber receiving scattered laser radiation at one end and illuminating a detector array with the scattered laser radiation from the other end, forming speckles on the light receiving surface of the detector array. The detector array may include a plurality of detectors positioned to receive the scattered laser radiation from the end of the detector fiber. The distance between the detector array and the end of the detector fiber may be adjustable. A controller may be configured to receive detector data from the detector array, and, for each detector of the plurality of detectors, determine a time-dependent intensity autocorrelation function. The controller may be further configured to determine an ensemble average autocorrelation function based on the time-dependent intensity autocorrelation functions determined for each detector.
In some examples, an apparatus may include a band (e.g., at least one strap or other support structure) configured to attach the apparatus to the body part of the user. In some examples, the support structure may be a component of a wearable device, such as a wrist-band, helmet, visor, spectacles, hat, other head-mounted device, glove, ring, or other wearable device. In some examples, a method may further include wearing the wearable device so that a body part is illuminated by laser radiation (e.g., from one or more source fibers), and scattered laser radiation is collected by one or more detector fibers. In some examples, the object may be directly illuminated by a laser. For example, a laser may be directed at an object, such as a body part, and the source fiber may be omitted. In some examples, the laser radiation may be collimated (e.g., using one or more lenses and/or collimators). For example, a head-mounted device may support one or more lasers, and one or more detector fibers configured to detect scattered laser radiation. In some examples, a device may be worn on other body parts, for example, as a smart watch or wristband, on a forearm or other limb, or worn on a hand or finger (e.g., as a glove or ring).
In some examples, additional optical elements (such as lenses, filters, polarizers, splitters, or combiners) may be located between the detector end of the detector fiber and the detector array. For example, the position of one or more optical elements (such as one or more lenses) may be adjusted to adjust the speckle diameter on the detector array.
In some examples, an apparatus may use an interferometric mDCS approach. In some examples, a fraction of the incident laser radiation (e.g., 10% of the laser source emission) is redirected using a splitter, bypasses the diffuse object, and is combined with the diffused light (also termed scattered light) at the end of the detector fiber, or is combined directly with the speckle field at the detector array. This interferometric approach may be used to obtain stronger speckle contrast and thereby increase the overall SNR of the mDCS.
In some examples, a laser may have a coherence length of at least 1 m, for example, at least 5 m, and in some examples at least 9 m. In some examples, the coherence length of the laser radiation may be longer than the source-detector distance, which may be approximately 1 cm for some applications. However, a longer coherence length may give a higher-amplitude autocorrelation function, which may give a higher SNR. The SNR may therefore be improved by using a laser having a longer coherence length, such as at least 1 m.
In some examples, a laser may be a pulsed laser or a continuous-wave (CW) laser. For some examples, the pulsed laser may provide an improved SNR. However, in some examples, the CW laser may be less expensive and may be commercially advantageous in some applications. Also, in some examples, a CW laser and associated circuitry may be readily miniaturized, so a CW laser may be used in applications with a smaller form factor.
In some examples, the laser wavelength may be in the range 700 nm-1200 nm, for example, in the range 700 nm-900 nm. For example, a laser having an emission wavelength of 1064 nm (or greater) may be used provided the detectors have sufficient sensitivity at the appropriate wavelength. In some examples, detectors may include superconducting nanowire single photon detectors (SNSPD). Any suitable laser wavelength may be used, depending on the object under study. If the object is a human head, a wavelength in the range 700 nm-1200 nm may be used. These example wavelengths may pass through a layer of hair, skin, and/or bone (e.g., the skull) and be scattered within the brain, allowing internal processes within the brain to be characterized. Similar wavelengths may be used for the characterization of other body parts.
In some examples, a source fiber may include at least one single-mode fiber, and a detector fiber may include at least one multi-mode fiber. In some examples, a source fiber may include at least one multi-mode fiber. In some examples, the detector fiber may include a plurality of fibers, such as a bundle of multimode fibers. In some examples, the detector fiber may include a plurality of multimode fibers to provide speckles for a large-pixel-count array, such as a detector array having at least 1 million pixels. In some examples, the singular terms source fiber and/or detector fiber may be used for convenience to refer to fiber bundles. For example, a source fiber may include a multi-mode fiber, a bundle of single-mode fibers, or a bundle of multi-mode fibers, as long as the overall coherence length upon delivery is longer than the photon path length between the source and the detector.
In some examples, an apparatus may include a support configured to direct laser radiation at an object, and receive scattered laser radiation from the object. The support may be provided by a band, for example, a strap that may extend around a portion of the object. For example, the object may include a body part, and the band may encircle the body part. In some examples, one or more source fibers may be configured to illuminate the object, and one or more detector fibers may be configured to receive scattered laser radiation. The apparatus may further include a distance adjuster configured to adjust the distance between the detector end of the detector fiber and the detector array. Scattered laser radiation may be collected by the collector end of the detector fiber and may emerge from the detector end of the fiber to form a plurality of speckles on the detector array.
In some examples, the apparatus may include a controller configured to provide an output including a characteristic time based on the ensemble average autocorrelation function. The characteristic time may convey information about dynamic properties within the object, such as fluid flow dynamics, including blood flow dynamics. The controller may be configured to provide a time determination (such as a characteristic time) related to a dynamic process within an object illuminated by the laser radiation from the source fiber, such as fluid flow dynamics within the object, such as blood flow within a body part.
In some examples, the apparatus may include a wearable apparatus configured to be worn by a user. The apparatus may be configured so that laser radiation from the source fiber (or, in some examples, directly from a laser) may be directed into a body part of a user when the apparatus is worn by the user. The collector end of the detector fiber may receive scattered radiation from the body part of the user. A wearable device may include a head-mounted device, which may include a band (such as a strap having the appearance of a head band), helmet, visor, spectacles, or other head-mounted device.
In some examples, a two-dimensional or three-dimensional image relating to blood flow dynamics within the brain (or other body part) of a user may be determined. An imaging apparatus may include one or more source fibers (and/or laser sources) and one or more detector fibers. The apparatus may have a plurality of different source-detector separations, and/or (in some examples) one or more source-detector separations may be varied during collection of image data. In some examples, one or more of the following may be adjusted during data collection: orientation of the apparatus (e.g., a band may at least partially rotate around a head of a user), source-detector separation, wavelength (or multiple wavelengths may be used), orientation of one or both of source and detector fibers, or other parameter.
In some examples, a system may include at least one physical processor, and physical memory including computer-executable instructions that, when executed by the physical processor, cause the physical processor to perform one or more steps of any example method described herein. In some examples, a non-transitory computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to perform one or more steps of any example method described herein. An example method may include receiving data from a plurality of detectors, determining a time-dependent autocorrelation function for each of the plurality of detectors, and determining an ensemble average autocorrelation function based on the time-dependent autocorrelation function for each of the plurality of detectors.
In conclusion, an example apparatus may include a laser, a source fiber configured to deliver the laser radiation to an object (e.g., illuminating the object using laser radiation emerging from the delivery end of the source fiber), and a detector fiber configured to receive scattered laser radiation and illuminate a detector array with the scattered laser radiation to form speckles on the detector array. The detector fiber may have a collector end configured to receive scattered laser radiation from the object (e.g., laser radiation scattered by a dynamic process within the object) and a detector end configured to illuminate the detector array and form a plurality of speckles on the detector array. The laser may provide red and/or near-IR radiation when energized. The detector array may include a plurality of detectors, such as SPADs or superconducting nanowire single-photon detectors (SNSPDs), and may be positioned to receive scattered laser radiation from the detector end of the detector fiber. The distance between the detector array and the end of the detector fiber may be adjustable. A controller, which may include one or more processors, may be configured to receive detector data from the detector array, determine a time-dependent intensity autocorrelation function for each detector of a plurality of detectors, and determine an ensemble average autocorrelation function. The apparatus may provide information relating to dynamic processes within the object, such as blood flow in examples where the object is a body part of a user.
Example 1: An apparatus may include a laser configured to provide laser radiation, a detector fiber having a collector end configured to receive scattered laser radiation and a detector end, a detector array including a plurality of detectors positioned to receive the scattered laser radiation from the detector end of the detector fiber, and a controller, configured to receive detector data for each detector of the plurality of detectors, determine a time-dependent intensity autocorrelation function for each detector of the plurality of detectors, and determine an ensemble average autocorrelation function based on the time-dependent intensity autocorrelation function for each detector of the plurality of detectors.
Example 2. The apparatus of example 1, further including a distance adjuster configured to adjust a distance between the detector end of the detector fiber and the detector array.
Example 3. The apparatus of any of examples 1 or 2, further including at least one optical element located between the detector end of the detector fiber and the detector array.
Example 4. The apparatus of any of examples 1-3, where the plurality of detectors includes an arrangement of single-photon avalanche diodes.
Example 5. The apparatus of any of examples 1-4, where the plurality of detectors includes at least 1000 detectors.
Example 6. The apparatus of any of examples 1-5, further including a source fiber, where the source fiber has a source end configured to receive the laser radiation from the laser and a delivery end, and the source fiber includes a single-mode fiber.
Example 7. The apparatus of any of examples 1-6, where the detector fiber includes a multimode fiber.
Example 8. The apparatus of any of examples 1-7, where the apparatus is configured so that the scattered laser radiation emerges from the detector end of the detector fiber to form a plurality of speckles on the detector array.
Example 9. The apparatus of any of examples 1-8, where the laser radiation has a wavelength of between 700 nm and 1200 nm.
Example 10. The apparatus of any of examples 1-9, where the laser radiation has a coherence length of at least 1 m.
Example 11. The apparatus of any of examples 1-10, further including a beam-splitter configured to direct unscattered laser radiation to the detector array.
Example 12. The apparatus of any of examples 1-11, where the controller is configured to provide a controller output including a time determination based on the ensemble average autocorrelation function.
Example 13. The apparatus of example 12, where the time determination is related to fluid flow dynamics within an object illuminated by the laser radiation.
Example 14. The apparatus of any of examples 1-13, where the apparatus is a wearable apparatus configured to be worn by a user, and the apparatus is configured so that the laser radiation is directed into a body part of the user when the apparatus is worn by the user, and the collector end of the detector fiber receives the scattered laser radiation from the body part of the user.
Example 15. The apparatus of any of examples 1-14, where the apparatus is a head-mounted device and the body part is a head of the user.
Example 16. The apparatus of any of examples 1-15, where the apparatus includes at least one band configured to attach the apparatus to the body part of the user.
Example 17. A method may include collecting scattered laser radiation using a detector fiber, illuminating a detector array using the scattered laser radiation to form a plurality of speckles on the detector array, and determining an ensemble average correlation function based on time-dependent intensity correlation functions for each of a plurality of detectors of the detector array.
Example 18. The method of example 17, further including adjusting a distance or a lens position between an end of the detector fiber and the detector array so that a speckle size on the detector array is approximately equal to a detector area within the plurality of detectors.
Example 19. The method of any of examples 17 or 18, where the scattered laser radiation has a wavelength of between 700 nm and 1200 nm.
Example 20. The method of any of examples 17-19, further including illuminating an object using laser radiation, and determining a characteristic time related to fluid flow within the object from the ensemble average correlation function.
Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof. Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial reality systems may be designed to work without near-eye displays (NEDs). Other artificial reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 2100 described below).
In some embodiments, augmented-reality system 2100 may include one or more sensors, such as sensor 2140. Sensor 2140 may generate measurement signals in response to motion of augmented-reality system 2100 and may be located on substantially any portion of frame 2110. Sensor 2140 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented-reality system 2100 may or may not include sensor 2140 or may include more than one sensor. In embodiments in which sensor 2140 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 2140. Examples of sensor 2140 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
In some examples, augmented-reality system 2100 may also include a microphone array with a plurality of acoustic transducers 2120(A)-2120(J), referred to collectively as acoustic transducers 2120. Acoustic transducers 2120 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 2120 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format).
In some embodiments, one or more of acoustic transducers 2120(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 2120(A) and/or 2120(B) may be earbuds or any other suitable type of headphone or speaker.
The configuration of acoustic transducers 2120 of the microphone array may vary.
Acoustic transducers 2120(A) and 2120(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 2120 on or surrounding the ear in addition to acoustic transducers 2120 inside the ear canal. Having an acoustic transducer 2120 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 2120 on either side of a user's head (e.g., as binaural microphones), augmented-reality device 2100 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 2120(A) and 2120(B) may be connected to augmented-reality system 2100 via a wired connection 2130, and in other embodiments acoustic transducers 2120(A) and 2120(B) may be connected to augmented-reality system 2100 via a wireless connection (e.g., a BLUETOOTH connection). In still other embodiments, acoustic transducers 2120(A) and 2120(B) may not be used at all in conjunction with augmented-reality system 2100.
Acoustic transducers 2120 on frame 2110 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 2115(A) and 2115(B), or some combination thereof. Acoustic transducers 2120 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 2100. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 2100 to determine relative positioning of each acoustic transducer 2120 in the microphone array.
In some examples, an augmented reality device may be configured to support one or more source fibers and one or more detector fibers.
A laser and/or detector array may be located within the frame of a head-mounted device, or within a module that may be attached to the head-mounted device.
In some examples, augmented-reality system 2100 may include or be connected to an external device (e.g., a paired device), such as neckband 2105. Neckband 2105 generally represents any type or form of paired device. Thus, the discussion of neckband 2105 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, hats, rings, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
In some examples, the paired device may include an mDCS apparatus, and may include, for example, a light source, detector fiber, detector array, and other components such as those described herein. A paired device may include a neck-band, watch, smart phone, wrist band, chest band, hat, ring, other jewelry item or bodily adornment, other wearable device, smart shoe, clothing item, hand-held controller, tablet computer, laptop computer, other external computer devices, etc. A chest band may include an apparatus configured to monitor cardiac function, for example, including one or more of an mDCS apparatus, pulse oximeter, electrocardiograph, or other component. A device component, such as a band (e.g., a strap or other support component), may encircle a limb or other body part, and monitor, for example, blood flow, blood velocity, and/or other circulatory parameters.
As shown, neckband 2105 may be coupled to eyewear device 2102 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 2102 and neckband 2105 may operate independently without any wired or wireless connection between them.
Pairing external devices, such as neckband 2105, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 2100 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 2105 may allow components that would otherwise be included on an eyewear device to be included in neckband 2105 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 2105 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 2105 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 2105 may be less invasive to a user than weight carried in eyewear device 2102, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial reality environments into their day-to-day activities.
Neckband 2105 may be communicatively coupled with eyewear device 2102 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 2100.
Acoustic transducers 2120(I) and 2120(J) of neckband 2105 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital).
Controller 2125 of neckband 2105 may process information generated by the sensors on neckband 2105 and/or augmented-reality system 2100. For example, controller 2125 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 2125 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 2125 may populate an audio data set with the information. In embodiments in which augmented-reality system 2100 includes an inertial measurement unit, controller 2125 may compute all inertial and spatial calculations from the IMU located on eyewear device 2102. A connector may convey information between augmented-reality system 2100 and neckband 2105 and between augmented-reality system 2100 and controller 2125. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 2100 to neckband 2105 may reduce weight and heat in eyewear device 2102, making it more comfortable to the user.
Power source 2135 in neckband 2105 may provide power to eyewear device 2102 and/or to neckband 2105. Power source 2135 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 2135 may be a wired power source. Including power source 2135 on neckband 2105 instead of on eyewear device 2102 may help better distribute the weight and heat generated by power source 2135.
As noted, some artificial reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual-reality system 2200.
In some examples, the band 2204 (or other portion of a head-mounted device) may be configured to support one or more source fibers and one or more detector fibers.
Artificial reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented-reality system 2100 and/or virtual-reality system 2200 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, microLED displays, organic LED (OLED) displays, digital light projection (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay (to, e.g., the viewer's eyes) light. These optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
In addition to or instead of using display screens, some of the artificial reality systems described herein may include one or more projection systems. For example, display devices in augmented-reality system 2100 and/or virtual-reality system 2200 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
In some examples, an augmented reality or virtual reality system may include a head-mounted device configured to detect scattered laser radiation from the head of a user, as described in detail herein. One or more source fibers may be arranged around the exterior of a person's head, and one or more detector fibers may be arranged to detect scattered laser radiation. In some examples, a near-eye device may be used as a support structure for one or more source fibers and/or one or more detector fibers.
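As a purely illustrative, hedged sketch (the class and field names below, such as ProbeGeometry, are hypothetical and not part of the disclosed apparatus), such a fiber arrangement might be described in software as follows:

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class ProbeGeometry:
    """Hypothetical description of a fiber layout on a head-mounted support."""
    wavelength_nm: float = 785.0  # assumed near-infrared source wavelength, common in DCS
    source_positions_mm: list = field(default_factory=list)    # (x, y) fiber-end locations
    detector_positions_mm: list = field(default_factory=list)  # (x, y) fiber-end locations

    def separations_mm(self):
        """All source-detector separations; larger separations tend to sample deeper tissue."""
        return [dist(s, d)
                for s in self.source_positions_mm
                for d in self.detector_positions_mm]

# Example: one source fiber and two detector fibers supported by a band
probe = ProbeGeometry(source_positions_mm=[(0.0, 0.0)],
                      detector_positions_mm=[(25.0, 0.0), (30.0, 0.0)])
print(probe.separations_mm())  # [25.0, 30.0]
```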
The artificial reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented-reality system 2100 and/or virtual-reality system 2200 may include one or more optical sensors, such as two-dimensional (2D) or three-dimensional (3D) cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
The artificial reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
In some embodiments, the artificial reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independently of other artificial reality devices, within other artificial reality devices, and/or in conjunction with other artificial reality devices.
By providing haptic sensations, audible content, and/or visual content, artificial reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial reality experience in one or more of these contexts and environments and/or in other contexts and environments.
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive light scattering data (such as detector data) to be transformed, transform the detector data (e.g., by forming an ensemble average autocorrelation function), output a result of the transformation (e.g., a characteristic time of a dynamic process), use the result of the transformation to provide a characterization of the dynamic process, and store the result of the transformation (e.g., to determine the time dependence of a dynamic process, such as a fluid flow process (e.g., blood flow)). Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
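By way of a purely illustrative sketch only (the function names compute_g2, ensemble_g2, and characteristic_time are hypothetical and do not denote the claimed implementation), such a transformation module might resemble the following, assuming per-detector intensity time series stored as NumPy arrays sampled at a fixed interval:

```python
import numpy as np

def compute_g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t + tau)> / <I>^2."""
    n = len(intensity)
    mean_sq = intensity.mean() ** 2
    g2 = np.empty(max_lag)
    for i in range(max_lag):
        tau = i + 1
        g2[i] = np.mean(intensity[: n - tau] * intensity[tau:]) / mean_sq
    return g2

def ensemble_g2(detector_data, max_lag):
    """Ensemble-average g2 over detectors (one row of detector_data per detector)."""
    return np.mean([compute_g2(row, max_lag) for row in detector_data], axis=0)

def characteristic_time(g2, dt):
    """First lag (in seconds) at which g2 - 1 decays to 1/e of its initial value."""
    decay = g2 - 1.0
    below = np.nonzero(decay <= decay[0] / np.e)[0]
    return (below[0] + 1) * dt if below.size else None

# Hypothetical usage: detector_data has shape (num_detectors, num_samples),
# sampled every dt seconds.
# g2_avg = ensemble_g2(detector_data, max_lag=200)
# tau_c = characteristic_time(g2_avg, dt=1e-6)
```

In practice, a correlator of this kind would more likely use a multi-tau (logarithmically spaced lag) scheme for efficiency, and the characteristic time could be obtained by model fitting rather than a 1/e threshold; the sketch above is meant only to make the data flow concrete.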
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to limit the present disclosure to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference may be made to the appended claims and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”
This application claims the benefit of U.S. Provisional Application No. 63/018,301, filed Apr. 30, 2020, the disclosure of which is incorporated, in its entirety, by this reference.
Number | Date | Country
---|---|---
63/018,301 | Apr. 30, 2020 | US