This set of inventions relates generally to the fields of visual and motion sensing, and more specifically to new and useful improvements in the design of optical and/or motion sensors and in the architecture and implementation of systems that contain such sensors, enabling the physical size of such sensors and systems to be dramatically reduced and the implementation of such systems to be dramatically simplified. This has applications in the design and manufacture of electronic devices containing such sensors, with direct impacts on many market segments, including, without limitation, augmented reality, virtual reality, autonomous vehicles, security, etc.
In current practice, many electronic devices are manufactured to contain visual and/or motion sensors.
The visual sensors are typically some form of discrete camera component, meaning a combination of optical elements (lenses and/or mirrors) and a sensor, such that the action of the optical elements is to modify the direction and/or divergence of photon wavefronts (light) entering the device, so that when those wavefronts arrive at the sensor, they are in an appropriate position and level of focus to produce a clear image on the sensor. Referring to
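As a non-limiting illustration of this focusing constraint, the thin-lens relation below shows why a refractive camera requires a certain physical spacing between its optics and its sensor; the focal length and object distance are hypothetical example values chosen only for illustration.

```python
# Illustrative only: the thin-lens relation 1/f = 1/d_object + 1/d_image shows
# why a refractive camera needs physical spacing between the lens and the sensor.
f = 0.004        # focal length of a hypothetical lens, in meters (4 mm)
d_object = 1.0   # distance from the lens to the scene, in meters

# Solve 1/f = 1/d_object + 1/d_image for the required lens-to-sensor distance.
d_image = 1.0 / (1.0 / f - 1.0 / d_object)

print(f"Required lens-to-sensor spacing: {d_image * 1000:.3f} mm")
```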
Motion sensors known as “MEMS” (or “Microelectromechanical Systems”) units, components, or devices may be utilized in various system configurations. For example, some motion and/or acceleration sensors may be known as “IMUs” (or “Inertial Measurement Units”) and configured to assist in the measurement and/or determination of accelerations and/or positions. These sensors typically are discrete components (
These components, when used together on a motion-sensitive device (such as a VR headset), often require extensive supporting architecture, including, but not limited to, data lines, power lines, mechanical vibration dampers, sizable transparent apertures in the outer shell of the device, and significant onboard compute power.
These components, when used together on a motion-sensitive device, also often require extensive calibration procedures to improve the accuracy and relevance of their outputs. These calibration procedures typically include intrinsic calibrations designed to improve the accuracy of the direct data outputs of these components, as well as extrinsic calibrations designed to identify the relative position and orientation offsets between the sensors and each other, or between the sensors and other components in the system (e.g., displays, etc.), so that their outputs may be properly aligned and analyzed together.
Since it is advantageous to produce smaller and lighter devices with the simplest possible calibration and system architecture requirements, the intelligent miniaturization and combination of visual and/or motion sensors are of high importance. This patent application discusses methods for the above, as well as systems and methods for otherwise reducing the calibration and/or system requirements of such integrated devices.
Various embodiments are directed to improvements in the design of optical and/or motion sensors and in the architecture and implementation of systems that contain such sensors, enabling the physical size of such sensors and systems to be dramatically reduced and the implementation of such systems to be dramatically simplified. This has applications in the design and manufacture of electronic devices containing such sensors, with direct impacts on many market segments, including, without limitation, augmented reality, virtual reality, autonomous vehicles, security, and the like.
This application claims priority to U.S. Provisional Patent Application Ser. No. 63/360,387, filed on Sep. 28, 2021, which is incorporated by reference herein in its entirety.
The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention. All specific descriptions herein should be considered particular, non-limiting examples of the general principles invented, described, and claimed here.
In particular, in the description herein, many preferred embodiments will refer to visual sensors in the form of “diffractive optics visual sensor(s)” (or “DOVS”). Such sensors typically comprise a diffractive element, usually in the form of an amplitude or phase grating, and a sensing element (such as a CCD or CMOS sensor), such that wavefronts incident on the entry surface of the diffractive element undergo a combination of constructive and destructive interference by the time they arrive at the sensing element. For example, referring to
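As a non-limiting illustration of this interference, the following sketch propagates a plane wavefront through a hypothetical binary phase grating to a sensing plane using scalar angular-spectrum diffraction; the grating pattern, wavelength, and distances are arbitrary example values, not an actual DOVS design.

```python
# A simplified scalar-diffraction sketch (angular-spectrum propagation): a plane
# wavefront passes through a hypothetical binary phase grating and propagates a
# short distance to the sensing element, where an interference pattern forms.
import numpy as np

wavelength = 550e-9          # visible light, in meters
pitch = 2e-6                 # sample pitch on the simulation grid, in meters
n = 512                      # grid size
z = 500e-6                   # grating-to-sensor distance (hypothetical)

# Hypothetical binary phase grating: alternating 0 / pi phase stripes.
x = np.arange(n) * pitch
phase = np.pi * ((x // (8 * pitch)) % 2)           # 1-D stripe pattern
field = np.exp(1j * np.tile(phase, (n, 1)))        # unit-amplitude wavefront

# Angular-spectrum propagation to the sensor plane.
fx = np.fft.fftfreq(n, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))
sensor_field = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

intensity = np.abs(sensor_field) ** 2              # what the CCD/CMOS records
print("peak/mean intensity contrast:", intensity.max() / intensity.mean())
```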
The advantage of such DOVS devices is that they can be made much smaller than typical refractive optics visual sensors because they do not require the extensive physical size and spacing of refractive elements such as lenses and mirrors.
However, while the preferred embodiments described below will reference DOVS devices, in general, they may also be achieved with traditional refractive optics devices, and so all following descriptions should be considered to encompass refractive optics devices as well.
Further, the preferred embodiments discussed below will reference “visual” sensors, meaning sensors designed to be sensitive to light amplitudes, wavelengths, and properties in a similar range to the sensitivity of the human eye. All of the preferred embodiments, however, may be implemented with other forms of electromagnetic sensors, sensitive to amplitudes, wavelengths, and properties of electromagnetic waves outside the sensitivity of the human eye. These may include, without limitation, infrared sensors, ultraviolet sensors, polarization sensors, microwave sensors, x-ray sensors, etc.
Various embodiments may comprise a system and method for producing a miniaturized component capable of producing relative pose information as a direct output from a combination of integrated visual and motion sensors.
In typical use, visual sensors and IMUs are separate components. However, to aid miniaturization, they may be combined, either as multiple devices within a single component package, or as multiple integrated devices on a single piece of substrate within a single component package. This single component package may also include an onboard resonator clock, or may include an external input for receiving a clock signal from another system. In the case of DOVS, the diffractive element may also be included directly on top of the sensor substrate, either as an active part of the component packaging, or as a sub-part within the component packaging, paired with an aperture in the component packaging to allow light to be incident on the diffractive element (
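Purely as an illustrative assumption, and not as a specification of any actual component interface, the direct output of such an integrated component might take a form like the following, combining a timestamp from the onboard or external clock, raw IMU samples, and the fused relative-pose estimate.

```python
# Hypothetical sketch of the direct output record such an integrated component
# might expose; field names and units are assumptions made for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImuSample:
    timestamp_us: int                 # ticks from the onboard or external clock
    accel: List[float]                # m/s^2, body frame (x, y, z)
    gyro: List[float]                 # rad/s, body frame (x, y, z)

@dataclass
class RelativePoseOutput:
    timestamp_us: int
    translation: List[float]          # meters, relative to the previous output
    rotation_quat: List[float]        # unit quaternion (w, x, y, z)
    covariance: List[float]           # flattened 6x6 uncertainty, row-major
    imu_window: List[ImuSample] = field(default_factory=list)
```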
Such a device may also include onboard algorithms to analyze the data from the DOVS and IMU components, and/or any other external data, in order to produce calibration coefficients relating either to the intrinsic operation of the individual sensing components or to the extrinsic relationship between them. Such algorithms may be implemented using Kalman filters, deep neural networks, or other techniques. Such calibration coefficients may be partially or wholly bootstrapped via initial estimations and measurements taken during the manufacturing process of the component. In particular, some of the calibration coefficients, such as the extrinsic translation offsets between components, may be taken directly from the high-accuracy and high-precision semiconductor manufacturing masks used in the manufacturing process.
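As a non-limiting sketch of such bootstrapping, the example below seeds a single extrinsic translation offset from a hypothetical mask-derived value and refines it with a scalar Kalman-style measurement update; the noise levels and observations are assumptions made only for illustration, not the component's actual firmware.

```python
# Scalar Kalman-style refinement of one extrinsic translation offset between the
# DOVS and the IMU, bootstrapped from a (hypothetical) manufacturing-mask value.
mask_offset_m = 0.0021        # initial estimate read off the semiconductor mask layout
estimate = mask_offset_m
variance = (50e-6) ** 2       # prior uncertainty on the mask-derived value (50 um)

measurement_noise = (200e-6) ** 2   # per-sample noise of the in-use observation

def update(estimate, variance, observation):
    """Standard scalar Kalman measurement update."""
    gain = variance / (variance + measurement_noise)
    new_estimate = estimate + gain * (observation - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Hypothetical in-use observations of the same offset, derived (for example) from
# comparing DOVS-observed motion against IMU integration over short windows.
for obs in (0.0023, 0.0020, 0.0022, 0.0021):
    estimate, variance = update(estimate, variance, obs)

print(f"refined offset: {estimate * 1000:.3f} mm, sigma: {variance ** 0.5 * 1e6:.1f} um")
```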
Various embodiments may comprise a system and method for integrating multiple miniaturized visual sensors into an electronic device while offloading the associated compute.
Multiple DOVS devices may be integrated as components into single electronic devices.
Various embodiments may comprise a system and method for analyzing the output of multiple independent sensors and dynamically computing their relative positions and orientations (extrinsic calibration).
An electronic device may contain a multitude of sensors, such as the integrated “relative pose direct output” sensors described in #1 above. The outputs of these sensors may be treated as individual estimates of the relative pose of each sensor over time. These relative poses may be treated as inputs into an algorithm, such as one implemented using the methods of Kalman filters, to produce a likelihood function, dynamically estimating the relative position and orientation of the multitude of sensors so as to maximize the computed likelihood of all of their results being simultaneously correct. Referring to
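The following is a minimal sketch of this kind of likelihood-based extrinsic estimation, assuming pure translation offsets between rigidly mounted sensors and synthetic motion data; it illustrates the general principle rather than the claimed analysis system. Under Gaussian noise, maximizing the joint likelihood reduces to least squares over the sensors' reported per-step translations.

```python
# Estimating each sensor's position (lever arm) relative to sensor 0 from the
# per-step translations the sensors report while the device moves and rotates.
import numpy as np

rng = np.random.default_rng(0)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Synthetic ground truth: three sensors rigidly mounted on one device (meters).
true_offsets = np.array([[0.0, 0.0, 0.0],
                         [0.10, 0.02, -0.01],
                         [-0.05, 0.08, 0.03]])
rotations = [rot_z(0.2), rot_x(-0.3), rot_z(0.4) @ rot_x(0.2), rot_x(0.5), rot_z(-0.3)]

# On a rigid body each sensor moves by t_step + (R - I) @ p_i over a step, so
# differencing any sensor's reported translation against sensor 0's isolates
# (R - I) @ (p_i - p_0) regardless of the unknown common motion t_step.
def observed_translation(R, p, t_step):
    return t_step + (R - np.eye(3)) @ p + rng.normal(0.0, 1e-3, 3)

estimated = [np.zeros(3)]                        # sensor 0 defines the reference frame
for i in (1, 2):
    A_rows, b_rows = [], []
    for R in rotations:
        t_step = rng.normal(0.0, 0.05, 3)        # unknown per-step device motion
        d0 = observed_translation(R, true_offsets[0], t_step)
        di = observed_translation(R, true_offsets[i], t_step)
        A_rows.append(R - np.eye(3))
        b_rows.append(di - d0)
    A, b = np.vstack(A_rows), np.concatenate(b_rows)
    p_i, *_ = np.linalg.lstsq(A, b, rcond=None)  # Gaussian maximum-likelihood estimate
    estimated.append(p_i)

print("estimated offsets relative to sensor 0:")
print(np.round(np.array(estimated), 3))
```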
Various embodiments may comprise a system and method for analyzing the output of multiple independent sensors and dynamically computing their intrinsic calibration coefficients.
As noted above, the analysis system and likelihood function may also encompass the estimation of intrinsic calibration parameters of the various sensors. These parameters may include such information as zero-reading bias, temperature dependencies, axis orthogonalities, or other measurements. These parameters may be transmitted back to the individual sensors for inclusion in any on-sensor computations, or may be stored and utilized at the system level. As before, these parameters may be computed in sub-batch form, used to identify malfunctioning or erroneous sensors, subjected to various constraints, bootstrapped by prior knowledge, etc.
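As a non-limiting illustration, the sketch below estimates one such intrinsic parameter set, namely a gyroscope's zero-reading bias and its linear temperature dependence, from synthetic samples taken while the device is assumed stationary; the sensor model and data are assumptions made only for the example.

```python
# Least-squares fit of a gyro's zero-reading bias and its temperature coefficient
# from stationary samples (true angular rate is zero, so any reading is error).
import numpy as np

rng = np.random.default_rng(1)

true_bias = 0.004            # rad/s at the reference temperature
true_temp_coeff = 2e-4       # rad/s per degree C away from reference

temps = rng.uniform(-10.0, 40.0, 200)                     # degrees C from reference
readings = true_bias + true_temp_coeff * temps + rng.normal(0.0, 5e-4, 200)

# Fit reading = bias + coeff * temperature by linear least squares.
A = np.column_stack([np.ones_like(temps), temps])
(bias, coeff), *_ = np.linalg.lstsq(A, readings, rcond=None)

print(f"estimated bias: {bias:.4f} rad/s, temperature coefficient: {coeff:.2e} rad/s/C")
```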
Various embodiments may comprise a system and method for controlling the manufacture of thin, transparent or semi-transparent layers on top of visual sensors.
In the manufacture of visual sensors, it may be advantageous to deposit, construct, wear away, or otherwise place, bond, and/or modify a transparent or semi-transparent layer of material in the optical path of the sensor itself. Such a process is, for example, necessary in the construction of DOVS, as any amplitude or phase diffractive element is such a transparent or semi-transparent layer of material.
For the proper function of the full system, it is advantageous for the properties of this layer to be controlled very precisely. Such properties may include thickness, positioning, microstructures, etc.
To ensure precise control of such properties, rather than blindly executing a manufacturing process and then attempting to measure and/or calibrate the end result, the sensor may be powered on, active, and transmitting output/debug/logging data during the manufacturing process itself. Such power supply and data output may be achieved through permanent traces or through temporary traces, which may be optimized for the manufacturing process but removed or otherwise separated once complete.
Such sensor output during the manufacturing process may be used as an input in a control algorithm for the manufacturing process itself. For example, if a known, desired property or performance has been determined, such as the sensor output when faced with a static target, then during manufacturing, the sensors being manufactured may be pointed at such a static target, and their output may be compared with the known, desired performance. Upon the output reaching some threshold relative to the known performance, the manufacturing system may be signaled to modulate the manufacturing parameters. Referring to
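A minimal sketch of such a control loop follows, assuming a simple mean-squared-error figure of merit against the static target's known response; the `read_sensor_frame` and `set_deposition_rate` callbacks and the threshold values are hypothetical placeholders for the real equipment interfaces and the in-process sensor's debug output.

```python
# Closed-loop sketch: deposit material while monitoring the powered-on sensor
# against a static target, slowing and then stopping as the response converges.
import numpy as np

def contrast_error(frame, reference_frame):
    """Scalar figure of merit: distance of the live frame from the desired response."""
    return float(np.mean((frame - reference_frame) ** 2))

def run_deposition(read_sensor_frame, set_deposition_rate, reference_frame,
                   stop_threshold=1e-3, slow_threshold=1e-2, max_steps=1000):
    """Modulate the (hypothetical) deposition rate based on live sensor output."""
    for step in range(max_steps):
        error = contrast_error(read_sensor_frame(), reference_frame)
        if error < stop_threshold:          # desired optical behavior reached
            set_deposition_rate(0.0)
            return step
        elif error < slow_threshold:        # close to target: slow down for fine control
            set_deposition_rate(0.1)
        else:
            set_deposition_rate(1.0)
    set_deposition_rate(0.0)                # safety stop if the target is never reached
    return max_steps
```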
Various exemplary embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described and equivalents may be substituted without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present inventions. All such modifications are intended to be within the scope of claims associated with this disclosure.
The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the “providing” act merely requires the end user obtain, access, approach, position, set-up, activate, power-up or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.
Exemplary aspects of the invention, together with details regarding material selection and manufacture have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.
In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value, between the upper and lower limit of that range and any other stated or intervening value in that stated range, is encompassed within the invention.
Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in claims associated hereto, the singular forms “a,” “an,” “said,” and “the” include plural referents unless specifically stated otherwise. In other words, use of such articles allows for “at least one” of the subject item in the description above as well as in claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only” and the like in connection with the recitation of claim elements, or use of a “negative” limitation.
Without the use of such exclusive terminology, the term “comprising” in claims associated with this disclosure shall allow for the inclusion of any additional element—irrespective of whether a given number of elements are enumerated in such claims, or the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.
The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with this disclosure.
This application claims benefit of U.S. Provisional application No. 63/360,387, filed Sep. 28, 2021, the contents of which are incorporated herein by reference in their entirety.