This patent application relates generally to optical lens design and configurations in optical systems, such as head-mounted displays (HMDs), and more specifically, to systems and methods for high-throughput testing and module integration of rotationally variant optical lens systems.
Optical lens design and configurations are part of many modern-day devices, such as cameras used in mobile phones and various optical devices. One such optical device that relies on optical lens design is a head-mounted display (HMD). In some examples, a head-mounted display (HMD) may be a headset or eyewear used for video playback, gaming, or sports, and in a variety of contexts and applications, such as virtual reality (VR), augmented reality (AR), or mixed reality (MR).
Some head-mounted displays (HMDs) rely on lens designs or configurations that are lighter and less bulky. For instance, rotationally variant optics, or “freeform” optics, is an emerging technology that uses lens and/or mirror surfaces that lack an axis of symmetry. This lack of symmetry can help spread light and ultimately create an optical device with a higher resolution and a smaller form factor. A camera lens for eye-tracking components or systems in a head-mounted display (HMD), for example, may be highly freeform or rotationally variant. However, there are notable challenges involving the manufacturing and integration of such freeform optical components.
Features of the present disclosure are illustrated by way of example and not limited in the following figures, in which like numerals indicate like elements. One skilled in the art will readily recognize from the following that alternative examples of the structures and methods illustrated in the figures can be employed without departing from the principles described herein.
For simplicity and illustrative purposes, the present application is described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. It will be readily apparent, however, that the present application may be practiced without limitation to these specific details. In other instances, some methods and structures readily understood by one of ordinary skill in the art have not been described in detail so as not to unnecessarily obscure the present application. As used herein, the terms “a” and “an” are intended to denote at least one of a particular element, the term “includes” means includes but not limited to, the term “including” means including but not limited to, and the term “based on” means based at least in part on.
There are many types of optical devices that utilize optical design configurations. For example, a head-mounted display (HMD) is an optical device that may communicate information to or from a user who is wearing the headset. For example, a virtual reality (VR) headset may be used to present visual information to simulate any number of virtual environments when worn by a user. That same virtual reality (VR) headset may also receive information from the user's eye movements, head/body shifts, voice, or other user-provided signals.
In many cases, optical lens design configurations seek to decrease headset size, weight, cost, and overall bulkiness. However, these attempts to provide a cost-effective device with a small form factor often limit the function of the head-mounted display (HMD). For example, while reductions in the size and bulkiness of various optical configurations in conventional headsets can be achieved, this often reduces the amount of space available for other built-in features of a headset, thereby restricting or limiting a headset's ability to function at full capacity.
With regard to rotationally variant (or “freeform”) optics, there are several challenges in manufacturing and integration of such optics. Manufacturers may typically rely on test data to iteratively tune manufacturing processes to meet performance specifications. Because rotationally variant optical components, by definition, have an asymmetrical geometry, it may be difficult to manufacture such components in a repeatable, reliable, and efficient fashion, especially in high volumes.
In addition to manufacturing, optical component integration may present another technical challenge as well. For instance, integrating a lens module housing with a sensor (e.g., in a camera assembly for eye-tracking in an augmented reality (AR) headset) may involve any number of specific and nuanced processes that may require accurate and repeatable execution, and again, especially at scale. More specifically, the lens module housing may need to be integrated with the sensor via any number of active alignment (AA) processes. Such alignment processes may require use of through-focus modulation transfer function (MTF) curves that are collected over a camera field of view (FOV) to position the sensor where the curves peak together.
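By way of a non-limiting illustration, the following minimal sketch (in Python) shows how through-focus modulation transfer function (MTF) curves collected at several field points might be reduced to a single sensor placement during active alignment (AA). All curves, field names, positions, and tolerance values here are hypothetical assumptions for illustration only, not part of any particular production process.

    import numpy as np

    # Hypothetical through-focus sweep of the sensor position (micrometers).
    z = np.linspace(-50, 50, 201)

    def example_curve(peak_z, peak_val, width=20.0):
        # Smooth, single-peaked stand-in for a measured through-focus MTF curve.
        return peak_val * np.exp(-((z - peak_z) / width) ** 2)

    # One curve per field point; for a "well-behaved" lens the peaks nearly coincide.
    curves = {
        "center": example_curve(0.0, 0.75),
        "mid_field": example_curve(2.0, 0.70),
        "corner": example_curve(-3.0, 0.65),
    }

    # Best focus per field = z-position where that field's MTF peaks.
    peaks = {name: float(z[np.argmax(c)]) for name, c in curves.items()}

    # One simple alignment criterion: place the sensor at the mean peak position
    # and check that the spread of the peaks is within an assumed tolerance.
    sensor_z = sum(peaks.values()) / len(peaks)
    spread = max(peaks.values()) - min(peaks.values())
    print(f"per-field peaks (um): {peaks}")
    print(f"proposed sensor position: {sensor_z:.1f} um, peak spread: {spread:.1f} um")
    if spread > 10.0:  # assumed tolerance
        print("peaks do not line up; the module may be out of specification")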
The systems and methods described herein may provide for high-throughput testing and module integration of rotationally variant optical lens systems. In this way, a new mass production (MP) metrology for freeform lens and/or camera modules may be provided. Among many key advantages and benefits, the systems and methods described herein may enable improved techniques for iterative tuning, which may be utilized during manufacturing by a manufacturer of rotationally variant or freeform optics, for example, to optimize lens/optics manufacturing processes and ultimately increase quality and yield. Moreover, the systems and methods described herein may also provide high-throughput testing and integration for camera and sensor modules. These and other examples may be provided in the detailed description below.
It should also be appreciated that the systems and methods described herein may be particularly suited for virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) environments, but may also be applicable to a host of other systems or environments that include optical lens assemblies, e.g., those using rotationally variant or freeform optics, or other similar optical configurations. These may include, for example, cameras or sensors, networking, telecommunications, holography, telescopes, spectrometers, or other optical systems, such as any system or method for forming images or projecting images. Thus, the optical configurations described herein may be used in any of these or other examples. These and other benefits will be apparent in the description provided herein.
Reference is made to
In some examples, the system 100 may include the head-mounted display (HMD) 105, an imaging device 110, and an input/output (I/O) interface 115, each of which may be communicatively coupled to a console 120 or other similar device.
While
The head-mounted display (HMD) 105 may communicate information to or from a user who is wearing the headset. In some examples, the head-mounted display (HMD) 105 may provide content to a user, which may include, but is not limited to, images, video, audio, or some combination thereof. In some examples, audio content may be presented via a separate device (e.g., speakers and/or headphones) external to the head-mounted display (HMD) 105 that receives audio information from the head-mounted display (HMD) 105, the console 120, or both. In some examples, the head-mounted display (HMD) 105 may also receive information from a user. This information may include eye movements, head/body movements, voice (e.g., using an integrated or separate microphone device), or other user-provided content.
The head-mounted display (HMD) 105 may include any number of components, such as an electronic display 155, an eye tracking unit 160, an optics block 165, one or more locators 170, an inertial measurement unit (IMU) 175, one or more head/body tracking sensors 180, a scene rendering unit 185, and a vergence processing unit 190.
While the head-mounted display (HMD) 105 described in
An example of the head-mounted display (HMD) 105 is further described below in conjunction with
The electronic display 155 may include a display device that presents visual data to a user. This visual data may be transmitted, for example, from the console 120. In some examples, electronic display 155 may also present tracking light for tracking the user's eye movements. It should be appreciated that the electronic display 155 may include any number of electronic display elements (e.g., a display for each eye of the user). Examples of a display device that may be used in the electronic display 155 may include, but are not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a micro light emitting diode (micro-LED) display, some other display, or some combination thereof.
The optics block 165 may adjust its focal length based on or in response to instructions received from the console 120 or other component. In some examples, the optics block 165 may include a multifocal block to adjust the focal length (i.e., the optical power) of the optics block 165.
The eye tracking unit 160 may track an eye position and eye movement of a user of the head-mounted display (HMD) 105. A camera or other optical sensor inside the head-mounted display (HMD) 105 may capture image information of a user's eyes, and the eye tracking unit 160 may use the captured information to determine interpupillary distance, interocular distance, a three-dimensional (3D) position of each eye relative to the head-mounted display (HMD) 105 (e.g., for distortion adjustment purposes), including a magnitude of torsion and rotation (i.e., roll, pitch, and yaw) and gaze directions for each eye. The information for the position and orientation of the user's eyes may be used to determine the gaze point in a virtual scene presented by the head-mounted display (HMD) 105 where the user is looking.
The vergence processing unit 190 may determine a vergence depth of a user's gaze. In some examples, this may be based on the gaze point or an estimated intersection of the gaze lines determined by the eye tracking unit 160. Vergence is the simultaneous movement or rotation of both eyes in opposite directions to maintain single binocular vision, which is naturally and/or automatically performed by the human eye. Thus, a location where a user's eyes are verged may refer to where the user is looking and may also typically be the location where the user's eyes are focused. For example, the vergence processing unit 190 may triangulate the gaze lines to estimate a distance or depth from the user associated with intersection of the gaze lines. The depth associated with intersection of the gaze lines can then be used as an approximation for the accommodation distance, which identifies a distance from the user where the user's eyes are directed. Thus, the vergence distance allows determination of a location where the user's eyes should be focused.
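As a non-limiting illustration of the triangulation described above, the following sketch (in Python) estimates a vergence point as the midpoint of closest approach between two gaze rays. The eye positions, gaze directions, and geometry are hypothetical assumptions for illustration only.

    import numpy as np

    def vergence_point(p_left, d_left, p_right, d_right):
        # Midpoint of closest approach between two gaze rays, each given by
        # an origin p and a direction d (standard closest-point formulation).
        d_left = d_left / np.linalg.norm(d_left)
        d_right = d_right / np.linalg.norm(d_right)
        w0 = p_left - p_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        d, e = d_left @ w0, d_right @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:
            return None  # gaze lines are (near) parallel; no vergence point
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return 0.5 * ((p_left + s * d_left) + (p_right + t * d_right))

    # Hypothetical eyes 64 mm apart, both verged on a point ~1 m straight ahead.
    p_l, p_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
    target = np.array([0.0, 0.0, 1.0])
    print(vergence_point(p_l, target - p_l, p_r, target - p_r))  # ~[0, 0, 1]

The z-component of the returned point (~1 m in this example) approximates the vergence depth, which may then serve as the accommodation distance described above.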
The one or more locators 170 may be one or more objects located in specific positions on the head-mounted display (HMD) 105 relative to one another and relative to a specific reference point on the head-mounted display (HMD) 105. A locator 170, in some examples, may be a light emitting diode (LED), a corner cube reflector, a reflective marker, and/or a type of light source that contrasts with an environment in which the head-mounted display (HMD) 105 operates, or some combination thereof. Active (or passive) locators 170 (e.g., an LED or other type of light emitting device) may emit light in the visible band (~380 nm to 750 nm), in the infrared (IR) band (~750 nm to 1 mm), in the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof.
The one or more locators 170 may be located beneath an outer surface of the head-mounted display (HMD) 105, which may be transparent to wavelengths of light emitted or reflected by the locators 170 or may be thin enough not to substantially attenuate wavelengths of light emitted or reflected by the locators 170. Further, the outer surface or other portions of the head-mounted display (HMD) 105 may be opaque in the visible band of wavelengths of light. Thus, the one or more locators 170 may emit light in the IR band while under an outer surface of the head-mounted display (HMD) 105 that may be transparent in the IR band but opaque in the visible band.
The inertial measurement unit (IMU) 175 may be an electronic device that generates, among other things, fast calibration data based on or in response to measurement signals received from one or more of the head/body tracking sensors 180, which may generate one or more measurement signals in response to motion of the head-mounted display (HMD) 105. Examples of the head/body tracking sensors 180 may include, but are not limited to, accelerometers, gyroscopes, magnetometers, cameras, other sensors suitable for detecting motion, correcting error associated with the inertial measurement unit (IMU) 175, or some combination thereof. The head/body tracking sensors 180 may be located external to the inertial measurement unit (IMU) 175, internal to the inertial measurement unit (IMU) 175, or some combination thereof.
Based on or in response to the measurement signals from the head/body tracking sensors 180, the inertial measurement unit (IMU) 175 may generate fast calibration data indicating an estimated position of the head-mounted display (HMD) 105 relative to an initial position of the head-mounted display (HMD) 105. For example, the head/body tracking sensors 180 may include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll). The inertial measurement unit (IMU) 175 may then, for example, rapidly sample the measurement signals and/or calculate the estimated position of the head-mounted display (HMD) 105 from the sampled data. For example, the inertial measurement unit (IMU) 175 may integrate measurement signals received from the accelerometers over time to estimate a velocity vector and integrate the velocity vector over time to determine an estimated position of a reference point on the head-mounted display (HMD) 105. It should be appreciated that the reference point may be a point that may be used to describe the position of the head-mounted display (HMD) 105. While the reference point may generally be defined as a point in space, in various examples or scenarios, a reference point as used herein may be defined as a point within the head-mounted display (HMD) 105 (e.g., a center of the inertial measurement unit (IMU) 175). Alternatively or additionally, the inertial measurement unit (IMU) 175 may provide the sampled measurement signals to the console 120, which may determine the fast calibration data or other similar or related data.
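As a non-limiting illustration of the double integration described above, the following sketch (in Python) integrates sampled accelerometer signals into a velocity vector and then into an estimated position of a reference point. The sample rate and acceleration data are hypothetical assumptions; gravity compensation, orientation handling, and drift correction are intentionally omitted.

    import numpy as np

    dt = 1.0 / 1000.0  # assumed 1 kHz sample rate
    n = 1000           # one second of samples

    # Hypothetical measurement signals: constant 0.5 m/s^2 along x.
    accel = np.tile(np.array([0.5, 0.0, 0.0]), (n, 1))

    velocity = np.zeros(3)
    position = np.zeros(3)
    for a in accel:
        velocity += a * dt         # integrate acceleration -> velocity vector
        position += velocity * dt  # integrate velocity -> estimated position
    print(velocity, position)      # ~[0.5 0 0] m/s and ~[0.25 0 0] m after 1 s

Because small measurement errors are integrated twice, the estimated position accumulates drift over time, which is one reason the calibration parameters and reference-point updates described below may be used.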
The inertial measurement unit (IMU) 175 may additionally receive one or more calibration parameters from the console 120. As described herein, the one or more calibration parameters may be used to maintain tracking of the head-mounted display (HMD) 105. Based on a received calibration parameter, the inertial measurement unit (IMU) 175 may adjust one or more of the IMU parameters (e.g., sample rate). In some examples, certain calibration parameters may cause the inertial measurement unit (IMU) 175 to update an initial position of the reference point to correspond to a next calibrated position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point may help reduce accumulated error associated with determining the estimated position. The accumulated error, also referred to as drift error, may cause the estimated position of the reference point to “drift” away from the actual position of the reference point over time.
The scene rendering unit 185 may receive content for the virtual scene from a VR engine 145 and may provide the content for display on the electronic display 155. Additionally or alternatively, the scene rendering unit 185 may adjust the content based on information from the inertial measurement unit (IMU) 175, the vergence processing unit 190, and/or the head/body tracking sensors 180. The scene rendering unit 185 may determine a portion of the content to be displayed on the electronic display 155 based at least in part on one or more of the tracking unit 140, the head/body tracking sensors 180, and/or the inertial measurement unit (IMU) 175.
The imaging device 110 may generate slow calibration data in accordance with calibration parameters received from the console 120. Slow calibration data may include one or more images showing observed positions of the locators 170 that are detectable by the imaging device 110. The imaging device 110 may include one or more cameras, one or more video cameras, other devices capable of capturing images including one or more locators 170, or some combination thereof. Additionally, the imaging device 110 may include one or more filters (e.g., for increasing signal-to-noise ratio). The imaging device 110 may be configured to detect light emitted or reflected from the one or more locators 170 in a field of view of the imaging device 110. In examples where the locators 170 include one or more passive elements (e.g., a retroreflector), the imaging device 110 may include a light source that illuminates some or all of the locators 170, which may retro-reflect the light towards the light source in the imaging device 110. Slow calibration data may be communicated from the imaging device 110 to the console 120, and the imaging device 110 may receive one or more calibration parameters from the console 120 to adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.).
The I/O interface 115 may be a device that allows a user to send action requests to the console 120. An action request may be a request to perform a particular action. For example, an action request may be to start or end an application or to perform a particular action within the application. The I/O interface 115 may include one or more input devices. Example input devices may include a keyboard, a mouse, a hand-held controller, a glove controller, and/or any other suitable device for receiving action requests and communicating the received action requests to the console 120. An action request received by the I/O interface 115 may be communicated to the console 120, which may perform an action corresponding to the action request. In some examples, the I/O interface 115 may provide haptic feedback to the user in accordance with instructions received from the console 120. For example, haptic feedback may be provided by the I/O interface 115 when an action request is received, or the console 120 may communicate instructions to the I/O interface 115 causing the I/O interface 115 to generate haptic feedback when the console 120 performs an action.
The console 120 may provide content to the head-mounted display (HMD) 105 for presentation to the user in accordance with information received from the imaging device 110, the head-mounted display (HMD) 105, or the I/O interface 115. The console 120 includes an application store 150, a tracking unit 140, and the VR engine 145. Some examples of the console 120 have different or additional units than those described in conjunction with
The application store 150 may store one or more applications for execution by the console 120, as well as other various application-related data. An application, as used herein, may refer to a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the head-mounted display (HMD) 105 or the I/O interface 115. Examples of applications may include gaming applications, conferencing applications, video playback applications, or other applications.
The tracking unit 140 may calibrate the system 100. This calibration may be achieved by using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determining position of the head-mounted display (HMD) 105. For example, the tracking unit 140 may adjust focus of the imaging device 110 to obtain a more accurate position for observed locators 170 on the head-mounted display (HMD) 105. Moreover, calibration performed by the tracking unit 140 may also account for information received from the inertial measurement unit (IMU) 175. Additionally, if tracking of the head-mounted display (HMD) 105 is lost (e.g., imaging device 110 loses line of sight of at least a threshold number of locators 170), the tracking unit 140 may re-calibrate some or all of the system 100 components.
Additionally, the tracking unit 140 may track the movement of the head-mounted display (HMD) 105 using slow calibration information from the imaging device 110 and may determine positions of a reference point on the head-mounted display (HMD) 105 using observed locators from the slow calibration information and a model of the head-mounted display (HMD) 105. The tracking unit 140 may also determine positions of the reference point on the head-mounted display (HMD) 105 using position information from the fast calibration information from the inertial measurement unit (IMU) 175 on the head-mounted display (HMD) 105. Additionally, the eye tracking unit 160 may use portions of the fast calibration information, the slow calibration information, or some combination thereof, to predict a future location of the head-mounted display (HMD) 105, which may be provided to the VR engine 145.
The VR engine 145 may execute applications within the system 100 and may receive position information, acceleration information, velocity information, predicted future positions, other information, or some combination thereof for the head-mounted display (HMD) 105 from the tracking unit 140 or other component. Based on or in response to the received information, the VR engine 145 may determine content to provide to the head-mounted display (HMD) 105 for presentation to the user. This content may include, but is not limited to, a virtual scene, one or more virtual objects to overlay onto a real world scene, etc.
In some examples, the VR engine 145 may maintain focal capability information of the optics block 165. Focal capability information, as used herein, may refer to information that describes what focal distances are available to the optics block 165. Focal capability information may include, e.g., a range of focus the optics block 165 is able to accommodate (e.g., 0 to 4 diopters), a resolution of focus (e.g., 0.25 diopters), a number of focal planes, combinations of settings for switchable half wave plates (SHWPs) (e.g., active or non-active) that map to particular focal planes, combinations of settings for SHWPs and active liquid crystal lenses that map to particular focal planes, or some combination thereof.
The VR engine 145 may generate instructions for the optics block 165. These instructions may cause the optics block 165 to adjust its focal distance to a particular location. The VR engine 145 may generate the instructions based on focal capability information and, e.g., information from the vergence processing unit 190, the inertial measurement unit (IMU) 175, and/or the head/body tracking sensors 180. The VR engine 145 may use information from the vergence processing unit 190, the inertial measurement unit (IMU) 175, and the head/body tracking sensors 180, other source, or some combination thereof, to select an ideal focal plane to present content to the user. The VR engine 145 may then use the focal capability information to select a focal plane that is closest to the ideal focal plane. The VR engine 145 may use the focal information to determine settings for one or more SHWPs, one or more active liquid crystal lenses, or some combination thereof, within the optics block 165 that are associated with the selected focal plane. The VR engine 145 may generate instructions based on the determined settings, and may provide the instructions to the optics block 165.
The VR engine 145 may perform any number of actions within an application executing on the console 120 in response to an action request received from the I/O interface 115 and may provide feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the head-mounted display (HMD) 105 or haptic feedback via the I/O interface 115.
At least one position sensor, such as the head/body tracking sensor 180 described with respect to
Based on the one or more measurement signals from one or more position sensors, the inertial measurement unit (IMU) 175 may generate calibration data indicating an estimated position of the head-mounted display (HMD) 105 relative to an initial position of the head-mounted display (HMD) 105. In some examples, the inertial measurement unit (IMU) 175 may rapidly sample the measurement signals and calculate the estimated position of the head-mounted display (HMD) 105 from the sampled data. For example, the inertial measurement unit (IMU) 175 may integrate the measurement signals received from the one or more accelerometers (or other position sensors) over time to estimate a velocity vector and integrate the velocity vector over time to determine an estimated position of a reference point on the head-mounted display (HMD) 105. Alternatively or additionally, the inertial measurement unit (IMU) 175 may provide the sampled measurement signals to a console (e.g., a computer), which may determine the calibration data. The reference point may be a point that may be used to describe the position of the head-mounted display (HMD) 105. While the reference point may generally be defined as a point in space, in practice, the reference point may be defined as a point within the head-mounted display (HMD) 105 (e.g., a center of the inertial measurement unit (IMU) 175).
One or more locators 170, or portions of locators 170, may be located on a front side 220A, a top side 220B, a bottom side 220C, a right side 220D, and a left side 220E of the front rigid body 205 in the example of
In some examples, the head-mounted display (HMD) 105 may be glasses comprising a front frame including a bridge to allow the head-mounted display (HMD) 105 to rest on a user's nose and temples (or “arms”) that extend over the user's ears to secure the head-mounted display (HMD) 105 to the user. In addition, the head-mounted display (HMD) 105 of
As further shown in
Although depicted as separate components in
The reflective element 306 may include any number of reflective materials, such as a glass plate, a waveguide, a holographic optical element (HOE), or combination thereof, or other element.
In some examples, the sensor element 308 may be any sensor or sensor-like element to receive photo-illumination or optical signals. In some examples, the sensor element may include any number of photodetectors or photodiodes. The at least one optical component 310 may include any number of optical components. In some examples, the at least one optical component 310 may be similar to the optics block 165 described with respect to
The at least one rotationally variant optical component 312 may include any number of freeform optical components. As described herein, the rotationally variant optical component 312 may have an asymmetrical folded geometry. In some examples, the asymmetrical surface of the rotationally variant optical component 312 may help provide greater spread or dispersion of the optical illumination 304. This, in turn, may provide enhanced performance, smaller packaging or form factor, and other various benefits in AR/VR/MR environments.
It should also be appreciated that the at least one rotationally variant optical component 312 may not be limited to only structurally rotationally variant optics or freeform optics, but may also include, for example, any off-center or off-axis portion of a rotationally symmetrical optical component (or surface) or rotationally symmetrical optical component that may be tilted. In other words, the at least one rotationally variant optical component 312, as described herein, may involve any asymmetrical surface/part or any symmetrical surface/part that is used in asymmetrical (or similar) ways to exhibit asymmetrical-like characteristics.
As described above, there may be manufacturing and integration challenges associated with rotationally variant (or freeform) optical components used in any number of AR/VR/MR headsets, cameras, or other similar optical systems. Manufacturers and suppliers of rotationally variant optical components, for example, may rely on test data to iteratively tune the various processes and techniques to provide optical components that meet one or more performance specifications.
It should be appreciated that for conventional rotationally invariant lenses, nominal performance is generally high and therefore straightforward to compare against in lens process tuning. For instance, through-focus modulation transfer function (MTF) curves associated with rotationally invariant lenses may generally have peaks that tend to be well-behaved. In addition, these through-focus modulation transfer function (MTF) curves may also line up with each other during active alignment (AA).
For rotationally variant, highly freeform, or complex geometrical lenses or similar optical components, nominal performance, in part due to intrinsically higher levels of distortion, may differ from that of conventional rotationally invariant optics. In some scenarios, the modulation transfer function (MTF) curves for rotationally variant or freeform lenses may be higher than those of a rotationally symmetrical lens attempting to perform the same or similar function (e.g., correcting asymmetric aberration content), because a rotationally symmetrical component may not be able to perform this function.
In particular, one of the functions of the optical system 302 may be to compensate for any aberration introduced by the reflective element 306, for example, and to work in concert with all relevant optical components to provide high quality imaging. However, lens manufacturers and/or sensor module integrators generally test the lens or optical system 302 by itself (e.g., without access to the reflective element 306). In many ways, this may introduce inherent challenges to the overall testing process. For example, through-focus modulation transfer function (MTF) curves may not line up even in the nominal design. It should be appreciated that even if every component were made perfectly and ideally, the curves would still not line up. Thus, this may necessarily create challenges, for example, in lens process tuning and/or camera module integration.
In
In order to optimize the lens process tuning and/or camera module integration, it may then be imperative to provide a way to generate through-focus modulation transfer function (MTF) curves for a rotationally variant lens that better resemble through-focus modulation transfer function (MTF) curves for a conventional rotationally invariant lens. However, there may be some challenges with this. First, it should be noted that a “best-focus” plane may not be straightforward to define. Second, in some scenarios, if significant sensor tilt is introduced in the process, a glue bond between the sensor and lens may also be uneven and thereby cause thermal and/or stability issues for a camera during use. Third, adding surface fitting techniques, e.g., via software or other algorithms, may also add time to the already time-consuming process and ultimately generate more cost for mass production (MP) of rotationally variant optics.
To address these and other issues, the systems and methods described herein may provide high-throughput testing and module integration of rotationally variant optical lens systems. In some examples, the systems and methods may provide a nulling apparatus. The nulling apparatus may be provided, for example, using a computer-generated hologram, a prism (e.g., a power prism), lens and mirror elements, phase plates, or other similar components. It should be appreciated that the nulling apparatus may be configured based on a wavefront aberration profile of any given lens module, such that the generated through-focus modulation transfer function (MTF) curves from a well-made rotationally variant lens, for example, may peak within close proximity to one another. It should also be appreciated that the nulling apparatus may also change the conjugate position of the object. For example, a freeform optical system may have originally been designed to work at a close conjugate (even a tilted conjugate plane), and the nulling apparatus may then allow the image plane to be conjugate to a larger object distance with a different tilt, thus allowing conventional modulation transfer function (MTF) curves and alignment stations to be used.
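By way of a non-limiting illustration of this wavefront-based configuration, the following sketch (in Python) shows the core idea of a nulling element: imposing approximately the opposite of a lens module's wavefront aberration profile so that the residual aberration approaches zero. The aberration terms, coefficients, and the assumed 2% fabrication error are hypothetical values for illustration only, not a statement of how any particular nulling apparatus is designed.

    import numpy as np

    # Hypothetical wavefront aberration of a freeform lens module over a unit
    # pupil, expressed with a few low-order asymmetric terms (units of waves).
    x = np.linspace(-1.0, 1.0, 101)
    X, Y = np.meshgrid(x, x)
    R2 = X**2 + Y**2
    pupil = R2 <= 1.0
    wavefront = 0.6 * (X**2 - Y**2) + 0.3 * X * R2  # astigmatism- and coma-like terms

    # The nulling element imposes (approximately) the opposite phase; a 2%
    # departure stands in for fabrication/tuning error.
    null_profile = -0.98 * wavefront
    residual = wavefront + null_profile
    rms = np.sqrt(np.mean(residual[pupil] ** 2))
    print(f"RMS residual wavefront error: {rms:.4f} waves")

With the residual wavefront error driven toward zero, the through-focus modulation transfer function (MTF) curves of a well-made unit may peak within close proximity to one another, as described above.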
Creating, tuning, and utilizing such a nulling apparatus may enable manufacturers, suppliers, and module integrators to achieve high-throughput lens and camera module builds with relatively complete mass production compatibility. In other words, manufacturers, suppliers, and module integrators may easily and readily insert the nulling apparatus while still using existing machinery, processes, techniques, and infrastructure to provide high performing rotationally variant optical components using the techniques described herein.
To illustrate this,
As shown in
As shown in
There may be any number of systems for production-level modulation transfer function (MTF) testing. By way of example, such systems may include, but are not limited to, a telescoping element, a light source (with or without collimation), a sample holder, an actuator for sample positioning (e.g., in x-, y-, and/or z-positioning), a controller, and various computing elements, such as a processor, input/output, etc. It should be appreciated that such modulation transfer function (MTF) testing systems may be dedicated machinery to provide modulation transfer function (MTF) testing functionality and features.
In order to create a nulling (or compensating) apparatus, there may be a number of design steps involved.
At block 510, an optical element (e.g., a compensator) may be inserted relatively far away from a unit under test (UUT). The unit under test (UUT) may include a lens, but may also include other components, such as a freeform prism, a mirror apparatus, a diffractive component, a metalens, or other similar unit or component. It should be appreciated that the distance may be determined by how much separation is needed between field points of interest, or by the field sampling required of the modulation transfer function (MTF) (or spatial frequency response (SFR)) test target. It should also be appreciated that at this step, much care and attention should be given to help ensure that a correct test target is used and that an image of the target is at the correct location on the camera sensor.
At block 520, a merit function that maximizes the modulation transfer function (MTF) values at nominal focus and minimizes the difference between modulation transfer function (MTF) values at either side of nominal focus may be built. As described above, the graph 400D of
It should be appreciated that a merit function, in general, may be described as a difference between a current state versus a desired state. As such, optimization techniques may generally seek to minimize the merit function, and thus the difference between current and desired states. Doing so would create an “optimized” condition.
In some examples of optimization, the merit function may be represented as a single number that captures one or more aspects of desired lens performance. Here, the merit function D may be constructed by taking a root mean square (RMS) of all identified operands, which may be provided, for example, in the following weighted form:

D = sqrt[ ( Σ_{i=1}^{m} w_i (c_i − t_i)^2 ) / ( Σ_{i=1}^{m} w_i ) ]
where m may represent the number of operands, wi may represent a weighting factor for operand i, ci may represent a current value for operand i, and ti may represent a target value for operand i. It should be appreciated that squaring each operand may serve to magnify the operands with the worst performance and ensure that positive and negative operand values do not offset each other in sum. It should also be noted that individual operands may be relatively weighted to emphasize their desired contribution to overall performance. A target value for most operands, for example, may be zero, as described above.
In this case, it may be desirable, for example, to have peak modulation transfer function (MTF) curves be above a certain number (e.g., 70%). It may also be desirable to have the difference between the through-focus modulation transfer function (MTF) values be minimized toward zero. In each case, this may be achieved using at least one weighting factor.
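As a non-limiting illustration, the following sketch (in Python) implements the weighted root mean square (RMS) merit function in the form given above, with one operand that penalizes peak modulation transfer function (MTF) values below an assumed 70% target and one operand that drives the through-focus asymmetry toward zero. All weights and current values are hypothetical.

    import numpy as np

    def merit(operands):
        # operands: list of (weight w_i, current value c_i, target value t_i);
        # weighted RMS per the form described above; smaller is better.
        w = np.array([op[0] for op in operands], dtype=float)
        c = np.array([op[1] for op in operands], dtype=float)
        t = np.array([op[2] for op in operands], dtype=float)
        return float(np.sqrt(np.sum(w * (c - t) ** 2) / np.sum(w)))

    peak_mtf = 0.66         # hypothetical current peak through-focus MTF
    target_peak = 0.70      # e.g., require peaks above ~70%
    left_right_diff = 0.08  # hypothetical MTF difference about nominal focus

    operands = [
        (1.0, min(peak_mtf, target_peak), target_peak),  # penalize only peaks below target
        (2.0, left_right_diff, 0.0),                     # drive asymmetry toward zero
    ]
    print(f"merit value: {merit(operands):.4f}")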
At block 530, variables associated with the compensator may be provided. In some examples, this may include position and orientation of the compensator. It should be appreciated that everything within the unit under test (UUT) should be kept fixed. For polynomials, it may be helpful to start with low order terms and incrementally add additional terms as needed. Example variables to be provided for the compensator may include, but are not limited to: radius of curvature, conic constant, polynomial terms changing a surface shape (e.g., XY polynomials, Zernike polynomials, Forbes/Q polynomials, Legendre polynomials, etc.), diffractive/hologram parameters, phase terms, etc. Variables for position and orientation may also be provided. These may include, but are not limited to: X, Y, Z, θx, θy, θz, or other variables. In addition, other variables to consider here may include, but are not limited to, the following: material of the compensator, thickness/wedge (i.e., the X/Y/Z/alpha/beta/gamma position of the compensator surfaces with respect to each other), and birefringence (for example, intentionally introduced stress birefringence).
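As a non-limiting illustration of such surface-shape variables, the following sketch (in Python) evaluates a compensator surface sag as a base conic (radius of curvature plus conic constant) plus XY polynomial terms, a standard conic-plus-polynomial parameterization; all coefficient values here are hypothetical.

    import numpy as np

    def sag(x, y, radius, conic, xy_coeffs):
        # Base conic sag plus XY polynomial terms, where xy_coeffs maps
        # exponent pairs (i, j) to coefficients A_ij for x**i * y**j.
        c = 1.0 / radius  # curvature
        r2 = x**2 + y**2
        base = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + conic) * c**2 * r2))
        poly = sum(a * x**i * y**j for (i, j), a in xy_coeffs.items())
        return base + poly

    # Hypothetical starting point: mild conic with low-order freeform terms,
    # to which higher-order terms may be incrementally added as needed.
    coeffs = {(2, 0): 1e-4, (0, 2): -8e-5, (2, 1): 2e-6}
    print(sag(1.0, 1.0, radius=100.0, conic=-0.5, xy_coeffs=coeffs))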
It should be appreciated that these variables may not necessarily be literal mathematical variables, but parameters that may be varied or adjusted to obtain the desired merit function. For example, in an optimization scenario, one or more of these parameters may be changed or adjusted, and these changes or adjustments may affect the value of the merit function that is determined and calculated, as described above. In some examples, if the merit function value goes down with some of these changes or adjustments, then this may indicate that such changes/adjustments are desirable and should be continued to bring down the merit function. If the merit function goes up (e.g., away from zero), then this may suggest that such changes/adjustments are undesirable and that course should be reversed to make the merit function go the other way (e.g., closer to zero).
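As a non-limiting illustration of this adjust-and-evaluate behavior, the following sketch (in Python) uses a simple random coordinate-descent loop: perturb one compensator parameter, re-evaluate the merit function, keep the change if the merit value goes down, and reverse course if it goes up. The evaluate_merit function is a hypothetical stand-in for a full ray-trace and modulation transfer function (MTF) evaluation, and all parameter names, values, and step sizes are assumptions; real lens-design optimizers use more sophisticated algorithms.

    import random

    def evaluate_merit(params):
        # Hypothetical stand-in for ray-tracing the UUT-plus-compensator
        # system; a smooth function with a known minimum keeps this runnable.
        goals = {"radius": 100.0, "conic": -0.5, "tilt_x_deg": 0.0}
        return sum((params[k] - goals[k]) ** 2 for k in params)

    params = {"radius": 120.0, "conic": 0.0, "tilt_x_deg": 1.0}  # compensator variables
    steps = {"radius": 0.5, "conic": 0.01, "tilt_x_deg": 0.02}

    best = evaluate_merit(params)
    for _ in range(20000):
        name = random.choice(list(params))
        delta = random.choice((-1.0, 1.0)) * steps[name]
        params[name] += delta
        trial = evaluate_merit(params)
        if trial < best:
            best = trial            # merit went down: keep the adjustment
        else:
            params[name] -= delta   # merit went up: reverse course
    print(params, best)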
At block 540, the nulling apparatus (or null element) may be iteratively optimized with the merit function until no significant further improvement is observed and the desired performance is achieved. In some examples, optimization may be considered complete when the through-focus modulation transfer function (MTF) curves peak together, are generally aligned, or are “well-behaved,” as described above. In other words, there may be a predetermined threshold, and optimization may be considered complete when each field point is operating at a diffraction limit. At this point, further improvement of geometric aberrations may not necessarily provide higher modulation transfer function (MTF). Thus, modulation transfer function (MTF) would then be limited by diffraction from a beam limiting aperture(s). It should also be appreciated that it may be desirable for the compensator element to be manufacturable, which generally means using available materials, making sure the null element is not too thin or too thick, making sure the surface variation (if using polynomials) is not too abnormal, and, if a computer generated hologram is used, making sure the fringe density is manufacturable with current technology (i.e., not too dense). These manufacturability constraints may be applied during any optimization process.
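As a non-limiting reference for the diffraction-limited stopping criterion described above, the following sketch (in Python) computes the well-known diffraction-limited modulation transfer function (MTF) of an ideal, aberration-free system with a circular pupil under incoherent illumination. The wavelength, f-number, and spatial frequency are assumed example values only.

    import numpy as np

    def diffraction_limited_mtf(freq_cyc_per_mm, wavelength_mm, f_number):
        # Ideal circular-pupil MTF: (2/pi) * (phi - cos(phi) * sin(phi)),
        # with phi = arccos(f / f_cutoff) and f_cutoff = 1 / (wavelength * N).
        cutoff = 1.0 / (wavelength_mm * f_number)
        nu = np.clip(freq_cyc_per_mm / cutoff, 0.0, 1.0)
        phi = np.arccos(nu)
        return (2.0 / np.pi) * (phi - np.cos(phi) * np.sin(phi))

    # Example: 850 nm illumination at f/2.4, evaluated at 50 cycles/mm.
    print(diffraction_limited_mtf(50.0, 850e-6, 2.4))  # ~0.87

Measured through-focus MTF peaks approaching this bound at each field point may indicate that performance is limited by diffraction rather than by residual geometric aberration, i.e., that optimization may be considered complete.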
To help illustrate,
As shown in
It should be appreciated that shaded/non-shaded areas and differing dotted lines, as shown in
As shown in
As shown in
As shown in
The optical configuration of
It should be appreciated that it may be important for the null apparatus (or null element) or compensating element to be manufacturable and usable. Accordingly, it may be important to create the null apparatus using generally available materials and to make sure it is not too extreme in size, thickness, weight, or other characteristics. Furthermore, it may be important to make sure the null apparatus has a surface variation (if using polynomials) that is not too “freeform”; if it is, it may be difficult to manufacture. For example, if a computer generated hologram is used, it may be important to make sure the fringe density is manufacturable with current technology (i.e., not too dense). Thus, one or more manufacturability constraints may, and generally should, be applied during one or more steps of the optimization process described herein as well. In some examples, at least one tolerance analysis on the null apparatus may be performed to ensure that it can be fabricated so as not to cause an improper detector focus of the lens under test (LUT) or unit under test (UUT).
It should be appreciated that the process to create the nulling apparatus, as described herein, may simply be an example to facilitate mass production (MP) metrology for rotationally variant optics. For instance, the example described above may be shown for a finite or infinity conjugate setup. It should be appreciated that a finite conjugate setup may refer to imaging of objects at a “finite” distance away from a lens/camera. In contrast, an infinity conjugate setup may image objects at “infinity” distance away (e.g., a photography camera pointing toward something very far away). In other words, an infinite conjugate may be where an object distance (Zobj) is many focal lengths away from a lens, e.g., Zobj>>EFL (effective focal length), where the EFL may be a distance from a principal point to a focal point.
Some lenses are designed for finite conjugates while others are designed for infinite conjugates. Infinity conjugate testing may generally be more straightforward, with an input beam being a collimated/planar wavefront, whereas with a finite conjugate, a lens manufacturer may have to make sure the spatial frequency response (SFR) target is positioned at the correct conjugate position/correct distance away from the lens. That said, it should be appreciated that the method or technique for creating the nulling apparatus may be applied to both infinity and finite conjugate testing configurations. Furthermore, it should be appreciated that, in the general sense, the modulation transfer function (MTF)/spatial frequency response (SFR) target position and orientation may also be used as variables in optimization.
Note that, in general, a lens designed for a finite conjugate may not have good performance if used at an infinity conjugate, and vice versa. The nulling compensator, described herein, may help with this as well. For example, even if a lens manufacturer only has an infinity conjugate tester, the compensation provided by the null element may still allow a finite conjugate lens to work with an infinity conjugate tester. In other words, the metrology equipment of the supplier may still be usable, and a separate or distinct fully custom test apparatus may not be required; incorporation of this null optic may be sufficient.
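As a non-limiting illustration of the finite/infinite conjugate distinction, the following sketch (in Python) applies the Gaussian thin-lens relation to show that as the object distance Zobj grows to many focal lengths (Zobj >> EFL), the image plane converges to the focal plane. The focal length and distances are assumed example values.

    def image_distance(efl_mm, z_obj_mm):
        # Gaussian thin-lens relation: 1/z_img = 1/f - 1/z_obj, with z_obj
        # the (positive) distance of the object in front of the lens.
        return 1.0 / (1.0 / efl_mm - 1.0 / z_obj_mm)

    efl = 10.0  # assumed effective focal length (mm)
    for z_obj in (50.0, 500.0, 5.0e6):  # finite conjugates vs. effectively infinite
        print(f"z_obj = {z_obj:>9.0f} mm -> image at {image_distance(efl, z_obj):.3f} mm")
    # The image distance approaches the 10 mm focal plane as z_obj >> EFL, which
    # is why an infinity conjugate tester may use a collimated (planar) input beam.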
Although examples described above are directed to using a fabricated null apparatus or corrector, it should be appreciated that the null apparatus may not be limited to only fabricated null correctors but may also include other similar components. For example, as shown in
It should be appreciated that using a deformable mirror (DM) or digital micromirror device (DMD) as the null-like apparatus 670, instead of or in combination with a uniquely fabricated null optic, may have several advantages. For instance, a deformable mirror (DM) or digital micromirror device (DMD) may be tuned for multiple unit under test (UUT) configurations, and not limited to any single design, which may offer a broader array of applications and flexibility. Providing a deformable mirror (DM) or digital micromirror device (DMD) may also remove or minimize any challenges that may be associated with null element fabrication error and/or metrology of the null element. These and other benefits may be realized as well.
The systems and methods described herein may provide a technique for creating and designing a nulling apparatus or element useful in mass production (MP) metrology of rotationally variant optical components, which, for example, may be used in a head-mounted display (HMD) or other optical applications.
The benefits and advantages of the techniques for mass production (MP) metrology of rotationally variant optical components described herein may include, among other things, providing manufacturers or suppliers with improved techniques for iterative tuning during, for example, manufacturing of rotationally variant or freeform optics to ultimately increase quality and yield. As described above, a manufacturer may continue to use existing modulation transfer function (MTF) metrology equipment and processing techniques and simply insert the nulling apparatus between the unit under test (UUT) and the spatial frequency response (SFR) target projection system. Moreover, the systems and methods described herein may also provide high-throughput testing and integration for camera and sensor modules, which in turn may have benefits in optical power customizability while minimizing overall lens assembly thickness, reducing power consumption, increasing product flexibility and efficiency, and improving resolution. This may be achieved in any number of environments, such as in virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) environments, or other optical scenarios.
As mentioned above, there may be numerous ways to configure, provide, manufacture, or position the various optical, electrical, and/or mechanical components or elements of the examples described above. While examples described herein are directed to certain configurations as shown, it should be appreciated that any of the components described or mentioned herein may be altered, changed, replaced, or modified, in size, shape, and numbers, or material, depending on application or use case, and adjusted for desired resolution or optimal results. In this way, other electrical, thermal, mechanical and/or design advantages may also be obtained.
It should be appreciated that the apparatuses, systems, and methods described herein may facilitate more desirable headsets or visual results. It should also be appreciated that the apparatuses, systems, and methods, as described herein, may also include or communicate with other components not shown. For example, these may include external processors, counters, analyzers, computing devices, and other measuring devices or systems. In some examples, this may also include middleware (not shown) as well. Middleware may include software hosted by one or more servers or devices. Furthermore, it should be appreciated that some of the middleware or servers may or may not be needed to achieve functionality. Other types of servers, middleware, systems, platforms, and applications not shown may also be provided at the back-end to facilitate the features and functionalities of the headset.
Moreover, single components described herein may be provided as multiple components, and vice versa, to perform the functions and features described above. It should be appreciated that the components of the apparatus or system described herein may operate in partial or full capacity, or it may be removed entirely. It should also be appreciated that analytics and processing techniques described herein with respect to manufacturing or sensor integration of rotationally variant optical components, for example, may also be performed partially or in full by these or other various components of the overall system or apparatus.
It should be appreciated that data stores may also be provided to the apparatuses, systems, and methods described herein, and may include volatile and/or nonvolatile data storage that may store data and software or firmware including machine-readable instructions. The software or firmware may include subroutines or applications that perform the functions of the measurement system and/or run one or more application that utilize data from the measurement or other communicatively coupled system.
The various components, circuits, elements, and/or interfaces may be any number of optical, mechanical, electrical, hardware, network, or software components, circuits, elements, and interfaces that serve to facilitate communication, exchange, and analysis of data between any number or combination of equipment, protocol layers, or applications. For example, some of the components described herein may each include a network or communication interface to communicate with other servers, devices, components, or network elements via a network or other communication protocol.
Although examples are generally directed to head-mounted displays (HMDs), it should be appreciated that the apparatuses, systems, and methods described herein may also be used in other various systems and other implementations. For example, these may include other various head-mounted systems, eyewear, wearable devices, optical systems, etc. in any number of virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) environments, or beyond. In fact, there may be numerous applications in various optical or data communication scenarios, such as optical networking, image processing, spectroscopy, telescoping technologies, etc.
It should be appreciated that the apparatuses, systems, and methods described herein may also be used to help provide, directly or indirectly, measurements for distance, angle, rotation, speed, position, wavelength, power, shape, transmissivity, and/or other related optical measurements. For example, the systems and methods described herein may allow for a higher optical resolution and increased system functionality using an efficient and cost-effective design concept. With additional advantages that include higher resolution, lower number of optical elements, more efficient processing techniques, cost-effective configurations, and smaller or more compact form factor, the apparatuses, systems, and methods described herein may be beneficial in many original equipment manufacturer (OEM) applications, where they may be readily integrated into various and existing equipment, systems, instruments, or other systems and methods. The apparatuses, systems, and methods described herein may provide mechanical simplicity and adaptability to small or large headsets. Ultimately, the apparatuses, systems, and methods described herein may increase resolution, minimize adverse effects of traditional systems/approaches, and improve visual efficiencies.
What has been described and illustrated herein are examples of the disclosure along with some variations. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the scope of the disclosure, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.