The present disclosure relates to mixed reality devices and, more particularly, relates to a mixed-reality visor device for selective control of a user's field of view.
This section provides background information related to the present disclosure, which is not necessarily prior art. This section provides a general summary of the disclosure and is not a comprehensive disclosure of its full scope or all of its features.
According to the principles of the present teachings, systems and methods are provided for modifying a view perceived by a user who is substantially contained within an enclosure. The present systems and methods provide benefits and applications in a wide variety of industries, activities, environments, and situations. In the interest of providing a robust disclosure illustrative of the unique contributions to the art, however, the present disclosure will be provided in connection with aircraft flight training applications. This description should not be regarded as limiting the potential uses, benefits, and/or claims, unless specifically stated.
In some embodiments according to the principles of the present teachings, a system is provided including the following: a view-blocking wearable user visor-headset having a display surface and a see-through camera; a distinguishing system configured to distinguish regions corresponding to an exterior of the enclosure from regions contained within an interior of the enclosure and to output a region signal; and a vision system configured to overlay graphical imagery upon the display surface of the view-blocking wearable user visor-headset based on the region signal. Details relating thereto will be provided herein.
Generally, according to the principles of the present teachings, a mixed reality device is provided that is to be worn by a user during flight training, particularly preparation and certification for flight in reduced-visibility conditions resulting from clouds, fog, haze, smoke, other adverse weather conditions, or lack of sunlight (night operations).
By way of non-limiting example, pilots completing basic flight training are initially qualified to fly only under conditions permitting substantial visibility outside the cockpit so that aircraft orientation relative to the ground or distant horizon is easily viewed. Having a visible ground reference enables the pilot both to control the aircraft and to visually detect obstructions and other air traffic. This initial condition or restriction of qualification is termed Visual Flight Rules (VFR) by the U.S. Federal Aviation Administration (FAA). In order to fly when visibility is restricted, such as by clouds or fog, a pilot must demonstrate proficiency at maintaining flight control with reference only to the instrument panel; this is termed flight under Instrument Flight Rules (IFR) and requires additional training and certification.
The FAA defines weather-related flight conditions for VFR and IFR in terms of specific values for cloud ceiling and visibility. U.S. Federal Regulations for VFR require a ceiling greater than 3,000 feet above-ground-level (AGL) and horizontal visibility of greater than 3 miles in most airspace (i.e., visual meteorological conditions (VMC)). VFR establishes that VMC is sufficient for pilots to visually maintain separation from clouds and other aircraft. When weather conditions or other factors limit or reduce visibility and/or cloud ceilings below VMC, then these conditions are generally referred to as instrument meteorological conditions (IMC) and require a pilot to fly under Instrument Flight Rules (IFR). By way of example, IMC may exist when cloud ceilings drop to less than 1,000 feet above ground level (AGL) and/or horizontal visibility reduces to less than 3 miles.
Due to these reduced visibility conditions and other factors that can result in pilot disorientation, a pilot trainee or pilot must complete specialized training in order to fly under IFR conditions because there may be little to no outward visibility from the cockpit to the exterior environment. Such training includes receiving specialized instruction from a certified flight instructor to simulate conditions where visibility outside the aircraft is limited. This is typically accomplished by the pilot trainee or pilot wearing simple view-limiting devices (VLDs), such as goggles, hoods, or visors (see
There are a number of relevant points regarding IFR vs VFR flight. For example, IFR flying challenges pilots with multi-tasking as they visually scan an array of instruments monitoring everything from equipment status to aircraft orientation to area navigation. Task-saturation occurs when the pilot becomes overwhelmed with information and can no longer keep up with flying the aircraft. Saturation may result from unexpected events such as equipment failures or inadvertent flight into compromised weather conditions. Such disorientation or confusion has led to loss of control accidents. It is therefore important that both new and veteran IFR pilots establish and maintain a high level of proficiency in IFR flying.
Visual Meteorological Conditions (VMC) generally require 3 statute miles visibility with aircraft remaining clear of clouds at a minimum of 500 feet below, 1000 feet above, and 2000 feet horizontally. These minimums may increase or decrease slightly based on the type of controlled airspace (near vs away from an airport for example). VMC is a regulatory prerequisite of VFR flying.
Separate from the aforementioned discussion, Mixed-Reality (MR)—not to be confused with Virtual-Reality (VR) or Augmented-Reality (AR)—is an interactive experience where computer-generated perceptual information is superimposed on a predominantly real-world environment. MR can be defined as a system that fulfills three basic features: a combination of real and virtual worlds, real-time interaction, and accurate three-dimensional (3D) registration of virtual and real objects. The overlaid sensory information can be constructive (i.e., additive to the natural environment) or destructive (i.e., masking of the natural environment). This experience is commonly implemented in the form of specialized goggle or visor hardware that the user wears to seamlessly interweave the real physical world with elements of computer-generated imagery. In this way, mixed reality only modifies a user's perception of a chiefly real-world environment, whereas virtual reality completely replaces the real-world environment with a simulated one.
The primary value of mixed reality is the way components of the digital world blend into a person's perception of the real world, not as a simple display of data, but through the integration of immersive sensations, which are perceived as natural parts of an environment. Commercial mixed reality experiences have been largely limited to entertainment and gaming businesses with some industrial applications in medicine and other areas.
Augmented Reality (AR) is associated with visors designed to project generated digital imagery upon a clear, see-through lens that permits users to directly view the remaining natural environment. Because a clear lens is essentially used as a computer screen in this case, the associated digital imaging overlay is characteristically translucent, such as with a Heads-Up-Display (HUD), and therefore cannot as effectively be used to fully block a user's view of the surroundings. For example, AR applications typically generate text data overlays in a work environment, such as during medical procedures where a surgeon prefers not to look away from the patient for any duration.
Widespread commercial use of MR technology for IFR flight training has not been pursued due in part to complexities involved with processing a dynamic environment such as an aircraft cockpit during flight operations. The present teachings describe materials and methods that enable implementation of streamlined MR hardware and software that offers improved cost-effectiveness, safety and quality of training.
Conventional IFR training employs long-standing View Limiting Devices (VLDs) to block views outside the aircraft's windows. Aircraft cockpit windows are typically placed above and to the sides of an instrument gauge panel. Industry standard VLD goggles are correspondingly shaped like blinders with opaque surfaces that inhibit views beyond the instrument panel. These IFR goggles, visors, or "hoods" are usually constructed from inexpensive plastic and are head-mounted using elastic or fabric straps. Some common types available to date are illustrated in
An accompanying flight instructor or safety pilot supervises the student wearing the visor or goggles to ensure it is worn properly while also monitoring aircraft motion and orientation with reference to external views. Such partial view blocking visors or goggles are also used during practical flight tests where a candidate is required to demonstrate proficiency in IFR flight to an FAA examiner.
Being essentially blinders, conventional VLDs pose shortcomings in effectively replicating IFR conditions. Often the fit and positioning of the formed view-blocking areas do not conform well to the span of the instrument panel and the user's height, requiring the pilot to maintain an unnatural and uncomfortable head-down position to prevent view of the aircraft exterior. Such head repositioning has a direct effect on how aircraft motion is sensed and interpreted by the user and thus presents potentially dissimilar effects to those that would be experienced under real IFR conditions. Furthermore, aircraft movements due to turbulence or maneuvering may cause inadvertent head movements that momentarily expose an exterior view to the user. Such glances, however brief, can provide enough information to reorient the pilot user, hence diminishing the value of the training session. VLDs also do not offer the capability to impose more complex IFR scenarios such as sudden transitions from clear to obscure weather conditions. One of the major risk factors in flight safety is inadvertent flight into IMC, such as clouds during night flying. In such cases there is a surprise factor that makes maintaining proper aircraft control a challenge. VLDs are worn and removed deliberately and therefore do not offer the possibility of replicating sudden and unintended flight into IFR conditions. Nor do they offer methods for gradual changes in exterior visibility.
The present teachings provide numerous advantages. For example, the present teachings provide improved safety, efficiency, and effectiveness of training for vehicular operations during adverse conditions such as poor visibility due to fog or rain. In the case of aircraft, particularly small general aviation aircraft, serious accidents resulting from pilots inadvertently flying from clear weather (VMC) into inclement weather (IFR or IMC) unfortunately continue to occur on a regular basis despite increased training and awareness. Such accidents frequently result in a loss of control of the aircraft or controlled flight into high-elevation terrain such as mountains or high-rise objects. Oftentimes, even experienced IFR-rated pilots encounter mishaps in IMC due to lapses in judgement and eroded skills. The rate of these loss-of-control accidents in IMC continues to be of concern to the FAA and the general aviation community.
A recognized contributor to these weather-related accidents is a lack of adequate primary or recurrent IFR flight training. Much of this training takes place in ground-based flight simulators or employs the use of VLD hoods or goggles to simulate instrument conditions during actual flight. These simple tools offer limited realism in terms of replicating instrument meteorological conditions as well as limited control over simulated training conditions. For example, although ground-based flight simulators used in primary flight training can block cockpit exterior views as desired, they typically do not incorporate motion, a major factor contributing to loss of spatial orientation leading to loss of aircraft control. Real-life instrument flight conditions remove visual reference to the earth's horizon, which normally provides a means for the pilot to maintain orientation and aircraft control. Losing this visual reference may lead to misinterpretation of aircraft movements, pilot disorientation, and subsequent loss of aircraft control.
In the case of actual flight with conventional view-limiting devices such as head-mounted visors or goggles, variations in the wearer's height, external lighting, and movements of the aircraft due to turbulence or maneuvering may unintentionally offer momentary glimpses of the aircraft exterior sufficient to reorient the pilot trainee. These unintended breaks in blocked visibility detract from the difficulty of practice conditions and so can lead to significant deficiencies in skill over time. Furthermore, trainees need to apply conventional IFR hoods or visors manually for IFR-training phases of flight, which removes the element of surprise that often accompanies actual encounters with IMC such as inadvertent flight into clouds. Pilots accidentally flying into IMC can experience significant anxiety and disorientation due to the sudden loss of outside visual reference combined with abrupt aircraft movements associated with turbulence and other convective activity associated with inclement weather.
An object of the present teachings is to enable simplified integration of the visualization control offered by computer-based simulation with real-life training conditions via a novel mixed-reality (MR) system and method. In some embodiments, the system is provided having an MR-visor headset worn by a pilot-user during actual in-flight IFR training. In some embodiments, the system utilizes a built-in viewer and internal and external radiation energy sources and sensors such that the user's view outside cockpit windows can be easily set and controlled during any phase of flight. In this way, the trainee can be subject to obstructed or altered views outside the cockpit regardless of head position and at the discretion of a flight instructor or examiner. An MR-visor for IFR offers a level of realism and control well beyond the simple conventional VLD headwear used to date. Enhanced realism during IFR training can better prepare new instrument pilots, help maintain proficiency for experienced IFR-rated pilots, and provide flight examiners more rigorous methods for assessing a candidate's capabilities.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
Example embodiments will now be described more fully with reference to the accompanying drawings.
Another variation of related art, termed Mixed-Reality (MR), lies between VR and AR. The transparency of the see-through lens screens of AR visors limits the opaqueness of computer-generated imagery on the resulting view of the environment. With MR, computer-generated imagery is combined with that of actual surroundings but without use of the clear, see-through LCD lens screen used for AR. Instead, MR employs a fully enclosed visor, similar to VR, that utilizes a built-in dual-lens camera to access 3D views of the actual surroundings. This type of hardware facilitates opaque computer-generated graphics (as with VR) that can now augment visuals of actual surroundings via processing of camera-based imagery. Hence, MR can offer a more immersive version of augmented reality that is not limited by the transparent nature of a clear lens display. For example, an MR display can add opaque three-dimensional (3D) objects, such as extra virtual solid walls, to a room, whereas an AR display would have difficulty preventing the user from seeing through such a virtual wall.
In accordance with some aspects of the present teachings, the basic MR-IFR visor utilizes standard components of a head-worn virtual reality (VR) display (i.e., a VR headset) that employs video see-through display technology for immersing the user in a digitally enhanced visual environment. Such standalone VR headsets typically include the following:
Additional sensors may be used for tracking extended head movements as well as specific objects in the surroundings.
The MR display is therefore similar to VR headsets in form, but now capable of adding precision-located holographic content to the actual surroundings by use of camera-assisted tracking and see-through technology. For example, this basic embodiment may include four (4) head-tracking cameras, two (2) directed forward 21 (above right and left eyes) and two (2) directed diagonally to the left side 22 and the right side 23. By using sensor fusion-based positional tracking methods, these cameras continuously track the position of the user's head in relation to the physical environment without need for any additional external measurement devices. Each of the head-tracking cameras contains an Inertial Measurement Unit (IMU) which in turn includes an accelerometer and a gyroscope that allow high-frequency measurement of headset orientation. Together the cameras and their IMUs enable precise and reliable positional tracking based on sensor fusion. Inside-out optical positional tracking utilizes Simultaneous Localization and Mapping (SLAM) algorithms applied to the image stream of the head-tracking cameras. This “inside-out” approach is contrary to the most common “outside-in” positional tracking approach employed in consumer-grade VR headsets. Inertial tracking methods based on the data stream produced by the IMUs supplement the optical positional tracking methods, which is particularly useful in the event of abrupt head movement.
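To make the sensor-fusion concept concrete, the following is a minimal Python sketch of how high-rate IMU orientation updates could be blended with lower-rate optical (SLAM-derived) poses using a complementary filter. The class name, the blend weight, and the roll/pitch/yaw representation are illustrative assumptions rather than a prescribed implementation; a production tracker would typically use quaternion-based fusion.

```python
import numpy as np


class HeadPoseFilter:
    """Complementary filter blending high-rate gyro integration with
    lower-rate absolute orientation fixes from the camera-based tracker.

    Orientation is kept as roll/pitch/yaw (radians) for clarity; a production
    tracker would normally use quaternions to avoid gimbal lock.
    """

    def __init__(self, blend: float = 0.98):
        self.blend = blend              # weight given to the gyro prediction
        self.orientation = np.zeros(3)  # roll, pitch, yaw estimate

    def predict(self, gyro_rates: np.ndarray, dt: float) -> np.ndarray:
        """Integrate angular rates from the headset IMUs (high frequency)."""
        self.orientation = self.orientation + gyro_rates * dt
        return self.orientation

    def correct(self, optical_orientation: np.ndarray) -> np.ndarray:
        """Blend in an absolute orientation from the optical (SLAM) tracker
        whenever a new camera-derived pose is available (lower frequency)."""
        self.orientation = (self.blend * self.orientation
                            + (1.0 - self.blend) * optical_orientation)
        return self.orientation


# Example: 200 Hz gyro updates corrected by 30 Hz optical poses.
tracker = HeadPoseFilter()
tracker.predict(np.array([0.01, 0.0, 0.02]), dt=1 / 200)
tracker.correct(np.array([0.0, 0.0, 0.0]))
```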
Two (2) forward-looking high-definition RGB cameras 24 are used for creating video see-through MR imagery. The cameras provide a live view of the actual surroundings while also permitting video recording and MR tracking of marker-less landmarks. Whereas conventional MR displays are typically designed to function only within close-range views such as a small room, the MR-IFR cameras and their variations are to provide for both near- and far-vision, thereby facilitating alternating views between the aircraft interior and far-off objects seen through cabin windows. The MR-IFR concept introduces the use of additional sets of physical lenses or high-speed auto-focusing lenses to provide rapid and reliable transition from near- to far-sight views. In one variation as shown in
A light sensor array 25 facing forward and to the sides of the headset allows measurement of luminous intensity of the natural light surrounding the user. Specifically, this sensor array provides detection and tracking of:
Natural lighting intensity and distribution, both for the interior and exterior of the aircraft, can vary significantly over the course of a flight as weather conditions and the relative position of the sun change over time and location. The present disclosure introduces MR hardware and an associated methodology, akin to radio signal modulation, in order to achieve accurate, consistent, and stable fixation of the visible and obstructed regions desired by the IFR pilot trainee. For example, the primary measures in radio receivers are gain, selectivity, sensitivity, and stability. In similar fashion, the present system can provide user parameters and software settings based on analogous measures to easily set and maintain the desired boundaries between viewable and unviewable areas provided by the MR-visor headset. Gain describes the amount of amplification a signal may require in order to be properly registered by a receiver or sensor. Adjusting gain may assist in defining an aircraft cabin's window areas by strengthening the signal from low-light external environmental conditions during such times as sunrise or when the sky is overcast. Selectivity is the ability to filter out certain frequencies of energy so that the receiver or sensor can tune in to a particular bandwidth of electromagnetic energy. Adjusting selectivity can assist in distinguishing outside natural light from interior lighting sources by tuning in to specific wavelengths that are not shared with interior artificial aircraft lighting. In this way, light sensors on the MR-visor can more easily distinguish interior and exterior views of the cabin. Relatedly, sensitivity is the ability of the receiving hardware or detectors to distinguish true signals from naturally occurring background noise. Users of the MR-visor can set the sensitivity level of detectors to assist in defining visibility boundaries as well. For example, nighttime or other low-light conditions may require users to increase the sensitivity of visor-mounted sensors in order to provide sufficient signal contrast for detecting the interior areas of the cabin. Finally, stability describes how well the desired signal is maintained over the duration of use. For embodiments of the present disclosure, stability translates to how well the MR-visor maintains the original visibility boundaries set by the user as external conditions such as lighting, head position, aircraft position, and acceleration forces change over time. Such hardware is to utilize manual user input settings and software-based control to easily and efficiently set and automatically maintain the signal-to-noise ratios required for fixing the desired visibility boundaries. The MR-visor hardware includes detectors or sensors that feed signal data to a computing unit that may reside on the headset or a nearby console. Software may also be designed to fix window overlay areas based only on initial user settings.
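By way of a non-limiting illustration, the following Python sketch shows how the gain, selectivity, sensitivity, and stability parameters described above might be grouped into user-adjustable settings and applied to raw light-sensor readings to produce an exterior/interior mask. All names, numeric ranges, and the thresholding scheme are hypothetical and serve only to make the radio-receiver analogy concrete.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class BoundaryDetectionSettings:
    """User-adjustable parameters mirroring the radio-receiver analogy."""
    gain: float = 1.0             # amplification applied to raw sensor readings
    band_low_nm: float = 700.0    # selectivity: lower accepted wavelength
    band_high_nm: float = 1100.0  # selectivity: upper accepted wavelength
    sensitivity: float = 0.2      # minimum signal-to-noise ratio counted as "exterior"
    stability: float = 0.9        # temporal smoothing of the detected boundary


def classify_exterior(intensity, wavelength_nm, noise_floor, prev_confidence,
                      s: BoundaryDetectionSettings):
    """Return (mask, confidence) where mask is True for sensor elements judged
    to be looking through a window at the exterior."""
    in_band = (wavelength_nm >= s.band_low_nm) & (wavelength_nm <= s.band_high_nm)
    snr = (s.gain * intensity) / max(noise_floor, 1e-6)
    raw = (in_band & (snr >= s.sensitivity)).astype(float)
    # Stability: exponentially smooth the boundary so momentary lighting
    # changes or head motion do not shift the fixed visibility regions.
    confidence = s.stability * prev_confidence + (1.0 - s.stability) * raw
    return confidence > 0.5, confidence
```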
With reference to
Additionally, a computer vision-based hand-tracking algorithm that utilizes a close-range depth camera 26 can track the user's hand in real-time, which allows calibration steps to be conducted without any programming or additional hardware. Before operation, the system is calibrated by manual steps as illustrated in
In case automatic detection fails or some of the edges of the window area 28 are not detected correctly, the user can "draw" window edges by using point and pinch gestures 29 recognized by the system's hand-tracking algorithm. The calibration steps are repeated for each window surface in the cockpit. After the process is completed, the system maintains the position of the anchors, which in turn allows MR content to be shown instead of the actual view seen through the windows. The system allows accurate and stable tracking of the cockpit window area so that digital imagery appears to replace the real environment outside the plane normally seen through the windshield and windows of the aircraft. Thus, IFR training scenarios that may include clouds, rain, snow, birds, other aircraft, and variable lighting effects (for instance strobe lights) can be generated via the headset's display. Computer-vision imagery may be turned off at any time to grant the user full view of the actual surroundings via the MR-visor's see-through cameras.
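As a simplified illustration of how user-confirmed or hand-drawn window anchors could be turned into an overlay region, the following Python sketch rasterizes an anchor polygon into a per-pixel mask using a dependency-free point-in-polygon test. The function name, the anchor format, and the frame size are illustrative assumptions, not a prescribed interface.

```python
import numpy as np


def window_mask_from_anchors(anchors_px, frame_shape):
    """Rasterize a calibrated window polygon into a per-pixel overlay mask.

    anchors_px are (x, y) corner points confirmed or "drawn" with point and
    pinch gestures; the returned mask marks pixels whose see-through video is
    to be replaced by synthetic imagery. Uses a ray-casting point-in-polygon
    test so no external geometry library is required.
    """
    h, w = frame_shape
    poly = np.asarray(anchors_px, dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = ((y1 > ys) != (y2 > ys)) & (
            xs < (x2 - x1) * (ys - y1) / (y2 - y1 + 1e-12) + x1)
        mask ^= crosses
    return mask


# Four anchors roughly outlining one window in a 480x640 camera frame.
mask = window_mask_from_anchors([(100, 80), (500, 90), (520, 300), (90, 280)],
                                (480, 640))
```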
Once the calibration is completed, stable tracking (i.e., anchors remain superimposed over only the cockpit window areas) is achieved by combining the visual data (camera image) as well as the inertial data from the sensors inside the headset and inertial data from an optional external gyro sensor 30.
The combination of these sensor data enables stable tracking even during extreme lighting and motion conditions. For example, conventional tracking may not be capable of keeping up with a combined scenario consisting of:
In such a case, typical hardware and software methods cannot maintain a proper fix on the defined window areas as at least one of the data sources (such as the RGB camera) is momentarily compromised. In contrast, as described in the proposed process flow (
As represented in
Compared to the prior art, particularly mechanical VLDs, the MR-IFR visor offers several significant improvements to in-situ flight training:
In some embodiments, the invention may additionally incorporate a novel arrangement of electromagnetic emitter(s) and receiver(s) in and around the aircraft structure and MR-visor 7 that provide supplemental data to the computer vision system to enable more accurate and consistent distinction between internal and external views from the cockpit. These additional emitter/receiver combinations permit significantly simplified user setup and operation under the highly variable conditions of actual flight training.
The visual data coming from the visor as well as from the external sensors would consist of a three-dimensional (3D) point-cloud. The 3D image from the stationary ToF camera is correlated with the 3D image from the stereo camera in the visor which allows object-tracking of the instruments to be stable regardless of lighting conditions inside the cockpit. The point-cloud represents the physical shape of the cockpit dashboard and flight instruments rather than the respective color image in which readings and numbers would dynamically change. Thus, the reliability and stability of tracking the flight instruments' position and window areas can be higher than with purely RGB-camera-based approaches.
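One way to correlate the stationary ToF camera's point cloud with the visor's stereo-camera point cloud is rigid registration, for example iterative closest point (ICP). The following Python sketch, assuming NumPy and SciPy are available, shows a basic point-to-point ICP with a closed-form (Kabsch) alignment step; it is illustrative only and not a registration method mandated by the present teachings.

```python
import numpy as np
from scipy.spatial import cKDTree


def icp_align(source, target, iters=20):
    """Estimate the rigid transform aligning the visor's stereo point cloud
    (source) to the stationary ToF camera's point cloud (target).

    Point-to-point ICP with a closed-form (Kabsch/SVD) solve per iteration.
    Returns R (3x3) and t (3,) such that R @ p + t maps source onto target.
    """
    R, t = np.eye(3), np.zeros(3)
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    for _ in range(iters):
        # Pair each source point with its nearest neighbour in the target cloud.
        _, idx = tree.query(src)
        matched = tgt[idx]
        # Closed-form rigid alignment of the matched, centred point sets.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```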
In some embodiments, the MR-IFR visor may employ gaze-tracking technology that can be useful in gathering data concerning the user's observation pattern during training exercises.
Said gaze data can be accessed wirelessly post-flight for review and analysis, as well as during the flight by the instructor sitting next to the pilot, thus enabling more informed, real-time feedback. For the instructor, real-time access to the pilot's gaze trail is a novel tool for teaching and becomes particularly useful when adherence to common teaching principles (such as "spend most time monitoring the attitude indicator") is quantified and measured automatically by the system.
Extending on eye-monitoring utility, another variant of the MR-IFR visor may contain similar inward-facing cameras for the right eye 56 and the left eye 57 that track additional metrics from a user's eyes such as changes in pupil diameter, blinks, saccades, and perceptual span. Such metrics can help assess the cognitive load on the pilot in terms of visual attention, alertness, fatigue, and confusion. This supplemental eye-tracking data may help the flight instructor better understand the level of difficulty experienced by the trainee during any exercise. With eye-tracking data available in real-time, the instructor is also able to quantify whether deliberate interventions created artificially in the training scenario produce the intended effect on the pilot. Examples of such interventions include sudden blinding lights from a simulated sun, lightning, or strobe lights, or other MR imagery simulating clouds, rain, birds, or aircraft traffic. Eye-tracking data can therefore help quantify the individual limits of cognitive overload for the pilot, thereby allowing the difficulty level to be optimized for each training session.
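To illustrate how gaze and blink data could be reduced to instructor-facing metrics, the following Python sketch computes per-instrument dwell fractions and a blink rate from a stream of labeled eye-tracker samples. The label names and the 60 Hz sample rate are assumptions made for the example only.

```python
from collections import Counter


def gaze_summary(gaze_samples, sample_hz=60):
    """Reduce labeled eye-tracker samples to dwell fractions and blink rate.

    gaze_samples holds one label per sample, e.g. "attitude_indicator",
    "altimeter", "airspeed", or "blink" (labels are illustrative).
    """
    counts = Counter(gaze_samples)
    non_blink = sum(n for label, n in counts.items() if label != "blink")
    dwell = {label: n / non_blink
             for label, n in counts.items() if label != "blink"}
    # Count blink events (transitions into the "blink" state), not samples.
    blink_events = sum(1 for prev, cur in zip(gaze_samples, gaze_samples[1:])
                       if cur == "blink" and prev != "blink")
    minutes = len(gaze_samples) / sample_hz / 60
    return dwell, blink_events / minutes


# Ten seconds of samples: dwell on the attitude indicator is about 0.74,
# consistent with the "spend most time monitoring the attitude indicator" rule.
dwell, blinks_per_min = gaze_summary(
    ["attitude_indicator"] * 420 + ["altimeter"] * 90
    + ["blink"] * 30 + ["airspeed"] * 60)
```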
In some aspects of the present disclosure, the MR-IFR visor may employ face-tracking technology to accumulate more data on user feedback.
These factors relate to the pilot's attention, memory, motivation, reasoning, and self-awareness. Face-tracking acts as a tool for the instructor to use in obtaining an objective assessment of the pilot's experience, which can be used for optimizing the training session in terms of difficulty and the current capabilities of the pilot.
According to various aspects of the present disclosure, the MR-IFR visor may include additional physiological measurement devices for the user/trainee. For example,
Second, electroencephalogram (EEG) sensors 62 record the electrical activity of the user's brain during the flight. EEG data recorded and shown to the instructor in real-time helps in verifying reaction times and other cognitive behavior. EEG can quantify various training situations and indicate whether the pilot's reaction time is normal given any training scenario. EEG can also indicate the level of cognitive load experienced by the pilot which is typically measured post-flight with well-established questionnaires such as the NASA Task Load Index (NASA-TLX). By making this EEG measurement available to the instructor in real-time, the complexity of the training session can be adjusted in-flight for each pilot trainee according to skill level.
Finally, Galvanic Skin Response (GSR) sensors 63 can be used for recording the change in the electrodermal activity of the user's skin due to sweating. GSR can provide additional useful real-time biofeedback information on the pilot trainee. As skin conductance is not under the voluntary control of a human being, it can reveal nervousness on the part of the trainee, even in cases where the subject may deliberately be attempting to hide emotional responses from the instructor for any reason.
Another optional feature for the MR-IFR visor is an embedded surround sound audio system.
According to yet additional aspects of the present disclosure, the MR-IFR visor may include a programmable Global Positioning System (GPS) tracking feature. A GPS-based tracking device 65 embedded into the visor shown in
Notably, a full virtual-reality (VR) implementation of the invention can be facilitated where all (100%) imagery supplied to the user is computer generated in synchronization with real time flight orientation data provided by GPS and/or other sensors. This approach leverages conventional flight simulation software by combining fully synthetic visuals with actual operating conditions that replicate real life scenarios such as those leading to spatial disorientation.
The MR-IFR visor may be implemented with an optical see-through display similar to augmented-reality (AR) visors in order to provide reduced hardware size, weight, and cost. Such hardware may be ideal for cases where the translucent nature of computer-generated overlay imagery applied over a see-through lens is not a critical factor. For example, certain flight training operations may be satisfied with replicating only partial obscurity of aircraft exterior views in return for reduced cost and weight of the visor system.
As with the primary MR-based visor embodiment, this AR version also comprises various cameras and sensors for tracking the orientation and position of the headset using the inside-out positional tracking approach, with four (4) head-tracking cameras: two (2) directed forward 41 (above the right and left eyes) and two (2) directed diagonally to the left side 42 and the right side 43. Each contains an IMU comprised of an accelerometer and a gyroscope. A light sensor array 44 facing forward and to the sides of the headset for measuring the luminous intensity of the natural light may also be included, as well as a close-range depth camera 45 for tracking the user's hand in real-time. The software concerning positional tracking, AR imagery, and calibration is also similar to that of the primary MR embodiment.
While the present disclosure has been described in terms of potential embodiments, it is noted that the inventive concept can be applied to a variety of head-mounted VR, MR and AR designs for use in IFR flight training and other applications. For example, embodiments of the present disclosure can assist with training in handling ground vehicles and marine craft during adverse weather or lighting conditions. Furthermore, certain hardware and software embodiments may incorporate items like optimized design features or artificial intelligence. It will be apparent to those skilled in the art that various changes may be made without departing from the scope of the invention.
A computer vision-based technique is proposed for mixed reality (MR) visor-based instrument flight rules (IFR) pilot training. This requires emulating a supervised flight practice scenario wherein the trainee is presented with poor visibility conditions due to clouds, fog, other adverse weather, or night-time operations, in order to train them to fly the aircraft by reference to onboard instruments and sensor readings. It is thus critical that the video feed of the surrounding cockpit area, presented to the trainee pilot via the MR visor, is augmented/overlaid with emulated outdoor conditions on the windows that are well-registered with the 6 DOF pose of the MR visor in real time.
The system according to the present teachings works by exploiting the fact that an airplane cockpit is a small workspace within which the MR visor needs to operate and, as such, a 3D point cloud mapping of the workspace need only be done once. As opposed to typical robotics or AR use cases, where simultaneous localization and mapping (SLAM), known to the AR community as parallel tracking and mapping (PTAM), must be done at frame rate in order to explore a large unknown environment, our application can safely assume that the environment is known and mapped beforehand. Subsequently, only localization of the MR visor needs to be done with respect to the pre-mapped 3D point cloud, and computationally expensive map updates need not be done frequently. The following steps are included:
1. Offline Map Building: Mapping involves building a 3D point cloud of the cockpit interior using monocular or stereo cameras integrated within the visor [1], or via sensor fusion approaches involving camera(s), LiDAR and/or inertial measurement units (IMUs) [2]. However, 3D LiDARs popular in self-driving cars can be prohibitively expensive, as they typically cost upward of $10,000 for a reasonable vertical resolution (with the horizontal resolution achieved by electromechanical spinning of the LiDAR beam internally). On the other hand, optical cameras or image sensors are considerably cheaper, and visual SLAM has been shown to achieve robust and real-time performance for indoor environments [1, 3]. Insufficient lighting within the cockpit might pose challenges to optical cameras. However, instrument panel backlights can potentially present a feature-rich environment for achieving reliable SLAM. Alternatively, infrared image sensors may be used. Furthermore, a sensor fusion of cameras and IMU sensors—i.e., visual inertial SLAM—can potentially enhance the accuracy of visual SLAM alone, particularly under low-light conditions, occlusions, and poor texture, as well as increase throughput [7, 8].
Mapping may be done in an entirely offline manner, so that speed may be traded off for accuracy. This can be done using a front-end interface on the trainer's tablet device, possibly by the trainer themselves, by moving a standalone stereoscopic camera or a sensor rig consisting of the aforementioned sensors (pre-calibrated in the factory) within the scene, thereby acquiring a one-time, fixed point-cloud 3D reconstruction of the entire cockpit. Note that a typical cockpit features instrument panels and other objects including seating, windscreen and window edges, indoor paneling, etc. This presents a highly feature-rich environment for successful visual SLAM and pose estimation. Creating a complete and accurate map of a given cockpit before flight training begins has the advantage that computationally expensive and iterative algorithms such as bundle adjustment [1] need not be run at run-time.
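A minimal Python sketch of this offline accumulation step is given below, assuming each keyframe already provides triangulated 3D points together with its estimated pose (as produced by a stereo or visual-inertial reconstruction). The voxel size and data layout are illustrative; because this step is offline, simplicity is favored over speed.

```python
import numpy as np


def build_cockpit_map(keyframes, voxel_size=0.01):
    """Accumulate per-keyframe point clouds into one fixed cockpit map.

    Each keyframe supplies (points, R, t): 3D points in the camera frame plus
    the camera pose in the map frame, as produced by the offline
    reconstruction. Points are transformed into the map frame and
    de-duplicated with a simple voxel grid; accuracy is favored over speed
    because this step is performed once, offline.
    """
    transformed = [points @ R.T + t for points, R, t in keyframes]
    cloud = np.vstack(transformed)
    # Voxel downsample: keep one representative point per occupied cell.
    keys = np.floor(cloud / voxel_size).astype(np.int64)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return cloud[unique_idx]
```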
2. Offline Segmentation of Overlay Region: The cockpit windscreen and window region needs to be precisely segmented. A naïve approach would attempt to perform said segmentation in every frame, as is typical in marker-based or marker-less AR, where exploration of the environment and therefore mapping must be done as an online process. By contrast, our scenario merely requires the regions to be overlaid once with high accuracy, as long as these regions are clearly delineated within the 3D point cloud, which is a straightforward process as the point cloud is fixed and known beforehand. Additionally, a pre-segmentation as described above also helps to identify and discard any points within the 3D point cloud that arise due to features on the windscreen and windows (except along the edges), as these happen to be due to the objects/scenery outside the airplane and thus cannot be relied upon when localizing the visor with respect to the map in step #4 (since these features change as the plane moves).
We can either use robust and invariant classical machine learning based approaches (such as CPMC [4]), or modern deep learning methods (such as Mask R-CNN [5]). This step may be done interactively using the trainer's tablet so as to achieve a precise segmentation that is well-registered with the point cloud. Furthermore, provided the processing platform (which is not necessarily embedded into the visor to keep it lightweight, and may be placed in the vicinity, or be a wearable device, and may use WiFi or wired communication with the visor) is connected to the cloud, the human input obtained for different airplanes as described above, may be used to improve the pre-trained models for segmentation so as to be more robust and adaptive to a wide range of airplane models. Note that, similar to the previous step, this step is not time-critical.
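The following Python sketch illustrates how an offline window segmentation could be used to prune the fixed map: map points whose projection falls on the segmented glass region of a reference keyframe are discarded, since such features belong to the exterior scenery. The pinhole projection model and all parameter names are assumptions made for illustration, not the specific segmentation pipeline described above.

```python
import numpy as np


def drop_window_points(map_points, window_mask, K, R, t):
    """Remove map points whose projection lands on the segmented glass region.

    window_mask is the offline segmentation (True over windscreen/windows) for
    a reference keyframe, K the 3x3 camera intrinsics, and (R, t) the pose
    mapping map coordinates into that keyframe's camera frame. Points seen
    "through" the glass belong to the outside scenery and would corrupt
    localization, so they are excluded from the fixed map.
    """
    cam = map_points @ R.T + t                      # map frame -> camera frame
    in_front = cam[:, 2] > 0
    pix = cam @ K.T
    pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-9, None)
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    h, w = window_mask.shape
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    on_glass = np.zeros(len(map_points), dtype=bool)
    on_glass[visible] = window_mask[v[visible], u[visible]]
    return map_points[~on_glass]
```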
3. Real-Time Aircraft Pose Localization: There are three different frames of reference to be considered.
First, the world/global 3D frame of reference. This is the frame of reference within which the aircraft moves and flies.
Second, the aircraft/cockpit/map 3D frame of reference. This is the frame of reference within which the pilot/visor moves, and may be considered as the frame of the point cloud map that would be built to perform visor localization (step #1 above). The origin of the aircraft's frame of reference might as well be the tip of the aircraft nose, but that would essentially introduce a non-zero translation vector (at least) between the aircraft and the cockpit (i.e. map) frame of reference. Hence, the most appropriate choice of origin for this frame of reference is some arbitrarily chosen point that is visible in the point cloud of the cockpit. This can be any feature point detected as part of the corner/interest point detection algorithm used for the visual SLAM process such as FAST or ORB (c.f. [7, 8]).
When the aircraft is stationary (e.g., when the map is being built or when the training session has not yet started), the world frame of reference and the aircraft/cockpit frame of reference may be considered aligned. That is, the translation vector between the two is a null vector and there is no rotation between them. When the aircraft is in motion (either on the ground or in the air), the rotation between the two frames of reference may be measured via IMU sensors or accelerometer-gyroscope modules placed in the cockpit [6]. This relative pose between the 3D world and the aircraft frame of reference is needed, along with the relative pose of the pilot/visor with reference to the aircraft/cockpit, in order to render/augment the synthetic imagery/video on the cockpit windscreen such that it is well-registered.
Third, the trainee/visor 3D frame of reference. This is the frame of reference of the trainee pilot whose origin is essentially the optical center of one of the camera(s) mounted on the visor. Augmenting a well-registered virtual overlay in this frame of reference (as the resulting video feed is viewed by the pilot) requires that the pose of this frame of reference (i.e., translation and rotation of its origin) with respect to the cockpit/map frame of reference be computed for every incoming video frame. This problem is the subject of step #4 below. Further, the frames of reference of all other camera(s) and sensor(s) on the visor should be known with respect to the “master” camera, a process called calibration.
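The relationship among the three frames of reference can be summarized as a composition of rigid transforms, as in the following Python sketch: the pose of the visor in the world frame is obtained by chaining the aircraft-in-world pose (from the cockpit IMU/AHRS) with the visor-in-cockpit pose (from step #4). The numeric values are illustrative only.

```python
import numpy as np


def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Aircraft/cockpit frame expressed in the world frame, e.g. from the
# cockpit-mounted IMU/AHRS (illustrative 10-degree bank, zero translation).
bank = np.radians(10)
R_world_aircraft = np.array([[1.0, 0.0, 0.0],
                             [0.0, np.cos(bank), -np.sin(bank)],
                             [0.0, np.sin(bank), np.cos(bank)]])
T_world_aircraft = se3(R_world_aircraft, np.zeros(3))

# Visor frame expressed in the aircraft/cockpit (map) frame, from the SLAM
# localization of step #4 (illustrative head offset from the map origin).
T_aircraft_visor = se3(np.eye(3), np.array([0.0, -0.2, 0.4]))

# Composition gives the trainee's viewpoint in the world frame, which the
# renderer needs in order to draw a well-registered exterior scene.
T_world_visor = T_world_aircraft @ T_aircraft_visor
```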
4. Real-Time Visor Pose Localization: At run-time, the incoming video feed from the visor and other sensory input (LiDAR and/or IMUs) need only be localized (tracked) with reference to the pre-built 3D point cloud map. Once the visor is able to localize itself in the environment, i.e., the 6 DOF pose is known, the visor feed is, in essence, well registered with the pre-built 3D map of the cockpit, and thus the windows can easily be overlaid/masked out, as desired. Note that this step is highly time-critical and needs to be done at a minimum frame rate of 60 FPS. A delay of even a single frame can present a poorly registered augmentation with respect to the actual windows and windscreen, inadvertently giving rise to disorientation and compromising MR ergonomics. Hence it is imperative that this step be optimized for real-time performance in addition to accuracy. While open-source libraries such as [7, 8] exist that demonstrate real-time SLAM, we propose to adapt them for our stringent application demanding fast 60 FPS localization via hardware-accelerated feature extraction. Optimized hardware implementation, for instance on a GPU, is all the more important as 3D synthetic imagery/video must also be rendered at a high frame rate (see step #5 below). This hardware and associated software is to provide real-time pose tracking on an embedded platform at a high frame rate for the specific use case of IFR training (ref
A system-level overview of the visor pose localization process is shown in
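A simplified per-frame control loop for step #4 is sketched below in Python. The camera, imu, localizer, and renderer objects are placeholder interfaces rather than a specific library; the sketch only illustrates enforcing the 60 FPS budget and falling back to inertial prediction when the optical refinement is late or fails.

```python
import time

FRAME_BUDGET_S = 1.0 / 60  # step #4 requires localization at 60 FPS or better


def localization_loop(camera, imu, localizer, renderer):
    """Per-frame visor tracking against the pre-built cockpit map.

    camera, imu, localizer, and renderer are placeholder interfaces. Each
    frame, a cheap inertial prediction is refined optically against the map;
    if the refinement fails or overruns the frame budget, the prediction is
    kept so the overlay never stalls.
    """
    pose = localizer.initial_pose()
    while True:
        start = time.perf_counter()
        frame = camera.read()
        pose = localizer.predict(pose, imu.read())   # inertial prediction
        refined = localizer.track(frame, pose)       # optical refinement vs. map
        if refined is not None and (time.perf_counter() - start) < FRAME_BUDGET_S:
            pose = refined
        renderer.submit(pose)                        # hand pose to step #5
```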
5. Pose-Aware MR Overlay: The last step in the process is the actual overlay where synthetic video feed needs to be augmented to replace the original scene visible through the cockpit windscreen or windows. Computer generated imagery/video may be developed using 3D game engines such as Unreal Engine or Unity. At run-time, the synthetic environment is rendered in real-time with the viewpoint determined in accordance with the estimated pose (steps #3 and #4 above).
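The final compositing operation can be expressed as a per-pixel blend between the see-through camera image and the pose-aware synthetic render, as in the following Python sketch. It assumes the synthetic frame has already been rendered from the estimated visor pose so that the two images are pixel-aligned; the opacity parameter is an illustrative means of producing partial obscuration such as thin haze.

```python
import numpy as np


def composite_overlay(camera_frame, synthetic_frame, window_mask, opacity=1.0):
    """Blend the pose-aware synthetic render over the window region of the
    see-through camera image.

    Both images are HxWx3 arrays assumed to be pixel-aligned because the
    synthetic frame was rendered from the estimated visor pose. An opacity
    below 1.0 yields partial obscuration such as thin haze.
    """
    mask = window_mask[..., None].astype(float) * opacity
    blended = (camera_frame.astype(float) * (1.0 - mask)
               + synthetic_frame.astype(float) * mask)
    return blended.astype(camera_frame.dtype)
```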
As discussed above, Instrument Flight Rules (IFR) training involves flying by reference to instruments and sensors on board the plane. In this regard, the method described above can also be used to augment additional information (text/image/video) on designated areas on the instrument panel for instance in order to provide added guidance and instruction to the trainee pilot. This may involve integrating additional algorithms into the processing platform for object detection and recognition.
The following references are cited in the preceding paragraphs, and are incorporated herein by reference in their entirety. [1] G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces," 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007, pp. 225-234, doi: 10.1109/ISMAR.2007.4538852. [2] C. Debeunne and D. Vivet, "A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping," Sensors 2020, 20, 2068. https://doi.org/10.3390/s20072068. [3] G. Klein and D. Murray, "Parallel Tracking and Mapping on a camera phone," 2009 8th IEEE International Symposium on Mixed and Augmented Reality, 2009, pp. 83-86, doi: 10.1109/ISMAR.2009.5336495. [4] J. Carreira and C. Sminchisescu, "CPMC: Automatic Object Segmentation Using Constrained Parametric Min-Cuts," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1312-1328, July 2012, doi: 10.1109/TPAMI.2011.231. [5] K. He, G. Gkioxari, P. Dollár and R. Girshick, "Mask R-CNN," 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980-2988, doi: 10.1109/ICCV.2017.322. [6] https://invensense.tdk.com/smartmotion/ [7] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM," arXiv, 2020 (https://arxiv.org/abs/2007.11898). [8] R. Mur-Artal and J. D. Tardós, "Visual-Inertial Monocular SLAM With Map Reuse," in IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 796-803, April 2017, doi: 10.1109/LRA.2017.2653359. [9] P. Furgale, J. Rehder, and R. Siegwart, "Unified Temporal and Spatial Calibration for Multi-Sensor Systems," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 2013. https://github.com/ethz-asl/kalibr
The present disclosure introduces materials and methods for replicating instrument meteorological conditions (IMC) during flight under any actual weather conditions. By blocking and/or modifying exterior views from the cockpit in-situ, a pilot can more accurately experience the physical and psychological effects of actual IFR conditions. The present disclosure provides hardware, software, and methods for providing a mixed-reality (MR) headset that vastly improves realism compared to flight simulators and existing conventional IFR flight training hardware, which consists mainly of simple blinder-type IFR hoods, visors, and goggles.
With reference to IFR pilot training as a primary application, advantages of the present disclosure include, but are not limited to, the following:
1) In-situ training: IFR pilot training is most effective when conducted during actual flight conditions where flight dynamics and associated directional forces can lead to physiological misinterpretation and disorientation. The present disclosure provides hardware that is used during actual flight, thereby exposing trainees to these real-life dynamic conditions. The fully enclosed view limitation guarantees no "peeking" of the type that frequently occurs with conventional VLDs, whether intentional or unintentional. It also offers instructors a high degree of control over external visibility for the pilot trainee irrespective of actual weather conditions, enabling effective replication of challenging instrument meteorological conditions (IMC) scenarios during any phase of flight.
2) Improved view control: embodiments of the present disclosure utilize a next-generation design approach where sensors located on an MR headset are used to obstruct outside views from the cockpit to various degrees with simple adjustments to hardware sensitivity parameters such as signal gain. Furthermore, graphical replication of exterior views can be generated by an image processor to enhance awareness of and reaction to such scenarios. This headset may include a single- or multi-lens camera for viewing the true external environment. Sensors fitted onto one or more locations on the headset are used to distinguish exterior from interior lighting such that built-in software can rapidly and reliably define the window areas of the cockpit in three dimensions. This window area can then be blocked or altered in the user's view regardless of aircraft or head position. Software with adaptive mapping algorithms is used to maintain definition of cockpit window positions relative to the user.
3) Enhanced setup features: embodiments of the present disclosure may additionally incorporate electromagnetic radiation sources such as infra-red (IR) emitters located inside and/or outside the aircraft in order to assist visor headset sensing of exterior and interior views of the cockpit. External lighting can change significantly during a flight due to weather changes, sun position, and aircraft orientation. These lighting dynamics may impose challenges for the pattern recognition capabilities of the MR headset sensors and computing hardware. Supplementing exterior lighting with fixed and stable IR lighting can help maintain a more consistent contrast between exterior and interior regions, thereby further minimizing any errors in sizing and positioning of window areas relative to the user's perspective.
4) Reduced weight and form factor: embodiments of the present disclosure provide optimized hardware and replication methods that reduce system size and weight compared to conventional VR headsets. An ongoing concern for VR and MR headset products has been the bulkiness and weight of the product, which contribute directly to fatigue and potential muscle strain for the user. The present disclosure describes methods that take advantage of sensor technology and software to minimize the size and weight of the hardware required by the MR headset. Hardware systems may incorporate wireless or wired data connections to a separate computing unit in order to offload weight and volume from the wearable headset, resulting in more ease and comfort for the user. This results in a product that can be used for extended durations without adding significantly to pilot fatigue.
5) Enhanced imagery: embodiments of the present disclosure may incorporate single- or multi-lens camera(s) within the headset in order to provide the user with external viewing along with mixed reality components. A dual-lens camera provides the user with three-dimensional views of the environment upon which computer-generated imagery can be overlaid. Imagery may be of clouds, fog, rain, or other objects representing instrument meteorological conditions (IMC) and/or other visual elements.
6) Simplified equipment setup: software for the MR-IFR headset of embodiments of the present disclosure is optimized to require minimal programming, initialization routines, and inputs from the user in order to establish and maintain the desired dimensional bounds defining cockpit window areas. For example, this software may reduce user inputs to a single gain slide setting that establishes boundaries for window areas over which mixed-reality elements are used to vary visibility outside said window areas. Or it may implement artificial intelligence to adapt to dynamic environmental conditions.
In sum, the MR-IFR visor invention offers a long overdue, modern upgrade to the simple molded plastic IFR hoods, visors, and goggles that continue to be used today for IFR training. Advancements in electronics miniaturization and mixed-reality (MR) software development enable a low-cost and effective means for more accurately replicating IFR conditions during training flights under any weather conditions. By ensuring full control of exterior views and real-time variable transparency settings, pilot trainees can benefit from dramatically improved realism that better acquaints them with real-world scenarios, thus enhancing safety while reducing costs associated with extended flight training under actual IMC. The present disclosure also provides a means for IFR pilots to maintain a high level of proficiency when using this hardware for recurrent training, as well as a means for improved skills assessment and examination.
The present invention further relates to a method and system for dynamically overlaying computer-generated imagery on regions defined by electromagnetic energy, such as infra-red (IR) energy, via a transparent medium, such as the glass of an aircraft cockpit window. During flight, an aircraft cockpit environment poses challenging lighting and image distortion effects that complicate implementation of infra-red region signaling. Such scenarios are distinct from conventional approaches involving static environments where infra-red energy is reflected from stable opaque surfaces to define specific regions.
The present invention provides methods for utilizing electromagnetic energy transmission, reflection, or a combination thereof, via a transparent medium such as glass, to define a region signal corresponding to window areas of an enclosure. Two key categories of transparent materials that exhibit distinct optical properties are isotropic and birefringent materials.
An isotropic material is one that has uniform properties in all directions. This means that its refractive index, a measure of how much light bends when entering the material, is the same regardless of the direction of the incoming light. Common examples of isotropic materials include standard glass and many plastics. These materials do not exhibit birefringence and are characterized by their predictable behavior when interacting with light. For instance, when infra-red energy passes through isotropic glass, it primarily undergoes refraction and partial reflection. The refractive index remains constant, simplifying the calculations for light transmission and reflection.
In contrast, birefringent materials have different refractive indices depending on the polarization and direction of the incoming light. This anisotropy causes light entering the material to split into two rays, known as the ordinary and extraordinary rays, each following different paths and traveling at different speeds. Crystalline materials, such as calcite and quartz, commonly exhibit birefringence. When infra-red energy interacts with a birefringent material, it experiences more complex behavior due to the varying refractive indices. This can result in significant challenges for accurate light detection and processing, as both the direction and polarization of the light must be carefully managed.
Considering the example of glass used in aircraft cockpit windows, if it is isotropic, like standard window glass, infra-red energy passing through it will primarily refract at a consistent angle based on Snell's Law, with some portion being reflected. But if the cockpit window glass were birefringent, the infra-red energy would split into two rays upon entering the material, each following a different path due to the differing refractive indices. This would complicate the signal detection process, requiring more sophisticated algorithms to differentiate and accurately capture both rays. Additionally, the birefringent glass would introduce polarization effects, necessitating the use of specialized filters to ensure accurate signal analysis.
Transmitting infra-red energy through both isotropic and birefringent transparent materials presents distinct challenges that must be addressed for effective system performance. When infra-red energy encounters a transparent medium, such as glass, the behavior of the light depends significantly on whether the material is isotropic or birefringent.
For isotropic transparent materials, such as standard glass, the infra-red energy primarily undergoes transmission and refraction. The energy bends as it passes through the medium due to the change in speed, requiring calculations using Snell's Law to determine the angles of incidence and refraction. This ensures that sensors are positioned appropriately to capture the refracted infra-red energy effectively. Additionally, a portion of the infra-red energy is reflected off the surface of the isotropic medium. Anti-reflective coatings designed to enhance transparency and reduce glare can alter the reflectivity for infra-red wavelengths, requiring sensors to be calibrated to account for these specific properties. Reflected infra-red energy from an isotropic transparent medium can also exhibit polarization effects, necessitating polarization filters to accurately capture and analyze the reflected infra-red energy without distortion.
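As a worked illustration of the isotropic case, the following Python sketch applies Snell's law to estimate the refraction angle of infra-red energy entering a glass pane; the refractive indices are nominal values assumed for the example and would need to be calibrated for the actual window material and coatings.

```python
import numpy as np


def refraction_angle(theta_incident_deg, n1=1.0, n2=1.5):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).

    Returns the refraction angle (degrees) of a ray entering an isotropic
    pane, e.g. from air (n1 about 1.0) into glass (n2 about 1.5); the indices
    are nominal. Returns None past the critical angle (total internal
    reflection, only possible when n1 > n2).
    """
    s = n1 * np.sin(np.radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return float(np.degrees(np.arcsin(s)))


# An IR ray striking the pane 40 degrees from the normal refracts to roughly
# 25 degrees inside the glass, informing where a detector should be placed.
angle_inside = refraction_angle(40.0)
```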
In contrast, birefringent transparent materials, which have different refractive indices for light polarized in different directions, split the infra-red energy into two rays upon entering the medium. These rays, known as the ordinary and extraordinary rays, travel at different speeds and along different paths due to the material's anisotropic nature. This splitting and varying behavior complicate the detection process, as sensors must differentiate between the two rays and account for their distinct refractive indices. Birefringent materials can also create additional polarization effects, requiring more sophisticated filtering and analysis techniques to ensure accurate detection.
Both isotropic and birefringent materials present challenges related to environmental factors. Temperature fluctuations, ambient light variations, and stress-induced birefringence in otherwise isotropic materials can all affect infra-red energy transmission. These environmental changes necessitate continuous calibration and dynamic adjustment of sensor parameters to maintain accurate detection. High-sensitivity sensors equipped with infra-red filters help mitigate ambient light interference, ensuring that the sensors capture the intended infra-red signals effectively.
The differences between isotropic and birefringent materials also influence advanced signal processing algorithms. For isotropic materials, algorithms must account for straightforward refraction and reflection, while for birefringent materials, they must handle the more complex behavior of split rays and varying refractive indices. Real-time environmental sensors integrated into the system provide crucial data to adjust signal processing parameters dynamically, ensuring consistent and accurate overlay of computer-generated imagery.
Understanding these technical distinctions allows the system to be optimized for accurate and reliable infra-red energy detection and processing across both isotropic and birefringent transparent materials. By considering the unique properties and challenges of each type of material, the system can effectively adapt to handle diverse real-world scenarios. This includes precise sensor placement, advanced calibration techniques, and the use of appropriate filters and algorithms to ensure robust and accurate infra-red detection, enhancing the system's reliability and effectiveness in various applications.
Reflecting infra-red energy off a transparent medium such as glass significantly differs from reflecting it off an opaque surface such as a colored panel. When infra-red energy encounters a transparent medium, such as glass, a portion of the energy is transmitted through the medium while another portion is reflected off its surface. The transmitted energy undergoes refraction, bending as it passes through the medium due to the change in propagation speed. This dual-path nature requires sensors to account for both the reflected and refracted components of the infra-red energy. Snell's Law is used to relate the angle of incidence to the angle of refraction, ensuring that sensors are positioned appropriately to capture the infra-red energy effectively.
Transparent media like glass may also have anti-reflective coatings designed to enhance transparency and reduce glare. These coatings can alter the reflectivity of the surface for infra-red wavelengths, requiring sensors to be calibrated to account for the specific reflectivity properties of the coated glass. Variations in coatings can affect the amount of infra-red energy that is reflected and detected by the sensors. Additionally, reflected infra-red energy from a transparent medium can exhibit polarization effects, where the reflected light becomes polarized based on the angle of incidence. Polarization filters may be necessary to accurately capture and analyze the reflected infra-red energy, ensuring that the sensors detect the intended signal without distortion.
In contrast, reflecting infra-red energy off an opaque surface, such as a black colored panel, involves no transmission through the medium; the incident energy is either reflected or absorbed at the surface. The reflected portion behaves predictably according to the law of reflection, where the angle of incidence equals the angle of reflection. Smooth surfaces provide specular reflection, in which the infra-red energy is reflected in a single, predictable direction. Additionally, opaque surfaces generally provide stable and uniform reflection characteristics that are unaffected by changes in material properties, unlike transparent media. This stability simplifies the sensor calibration process, allowing for more straightforward and reliable infra-red detection.
By considering the unique properties of transparent surfaces, the system can be effectively adapted to handle the challenges presented by this scenario. This includes accounting for the dual-path nature of infra-red energy with transparent media, managing the effects of coatings and polarization, and distinguishing between reflected energy and thermal emissions. Through precise sensor placement, advanced calibration techniques, and the use of appropriate filters and algorithms, the system ensures robust and accurate infra-red detection and processing across different types of transparent surfaces, enhancing reliability and effectiveness in real-world applications.
Infra-Red Light Interaction with Transparent Medium
The invention accounts for the refractive index of the transparent medium, such as glass, to ensure accurate sensor positioning and detection. The refractive index governs the bending of infra-red rays as they pass through the medium, necessitating precise calculations to optimize sensor placement. By employing Snell's Law to relate the angle of incidence to the angle of refraction, sensors can be positioned appropriately to capture the infra-red energy effectively.
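By way of a worked, non-limiting illustration (assuming entry from air with \(n_1 \approx 1.0\) into glass with a nominal index \(n_2 \approx 1.5\); actual cockpit glazing will differ):

\[
n_1 \sin\theta_1 = n_2 \sin\theta_2
\quad\Longrightarrow\quad
\theta_2 = \arcsin\!\left(\frac{n_1}{n_2}\sin\theta_1\right),
\]

so an infra-red ray arriving at \(\theta_1 = 40^\circ\) refracts to approximately \(\arcsin\!\left(\tfrac{1.0}{1.5}\sin 40^\circ\right) \approx 25.4^\circ\) inside the glass, and a sensor intended to intercept that ray is positioned accordingly.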
To mitigate the loss of infra-red signal strength as it passes through the transparent medium, high-sensitivity sensors can be used to detect weaker signals. Additionally, anti-reflective coatings can be applied to the medium to minimize signal loss and enhance the efficiency of infra-red transmission through the transparent material.
Varying environmental conditions, such as temperature fluctuations and ambient light, pose further challenges for infra-red detection. Calibration routines can be implemented to adjust sensor sensitivity in real-time based on environmental data. Infra-red filters can be used to block out ambient light interference, such as sunlight, ensuring infra-red signals are accurately detected and processed.
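By way of non-limiting illustration, the following sketch outlines one way such a real-time sensitivity adjustment could be structured; the read_ambient_lux() and set_ir_gain() interfaces, the reference illuminance, and the gain limits are hypothetical placeholders rather than features of any particular hardware:

```python
# Minimal sketch of a real-time sensitivity adjustment step.
# read_ambient_lux() and set_ir_gain() are hypothetical stand-ins for actual
# sensor-driver calls; constants are illustrative only.

NOMINAL_GAIN = 1.0
LUX_REFERENCE = 500.0   # assumed baseline ambient illuminance


def read_ambient_lux():
    # Placeholder for an ambient-light sensor read.
    return 800.0


def set_ir_gain(gain):
    # Placeholder for writing the infra-red sensor gain setting.
    print(f"IR gain set to {gain:.2f}")


def calibration_step():
    lux = read_ambient_lux()
    # Scale gain with ambient light so the infra-red signal stays above the
    # interference floor, clamped to an assumed usable gain range.
    gain = max(0.5, min(4.0, NOMINAL_GAIN * (lux / LUX_REFERENCE)))
    set_ir_gain(gain)


calibration_step()
```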
The invention optimizes sensor placement for comprehensive coverage and accurate detection. Sensors can be arranged in a grid or circular pattern around the transparent medium, with overlapping detection zones to eliminate blind spots. This arrangement ensures that infra-red energy passing through the entire surface of the transparent medium is effectively captured. Periodic recalibration of the sensors is performed to maintain accuracy and adapt to any changes in the environment.
To accurately define the area of the transparent medium, an embodiment employs complex calibration routines that account for the varying environmental conditions and optical properties of the medium. Advanced algorithms are developed to compensate for distortions caused by refraction, ensuring precise detection. Machine learning techniques are incorporated to improve detection accuracy over time by learning from calibration data, enabling the system to adapt and refine its performance continually.
Infra-red light scattering and distortion are inherent challenges when transmitting infra-red energy through a transparent medium. The invention can address these issues through the use of adaptive optics, which involve real-time adjustments to the sensor array to correct wavefront distortions. Wavefront distortions refer to deviations from the ideal propagation of a light wave as it passes through a medium. In the context of this invention, wavefront distortions occur when infra-red energy traverses a transparent medium, such as glass, causing irregularities in the wavefront due to variations in the medium's refractive index, surface imperfections, and environmental factors. These distortions can lead to inaccuracies in the detection and overlay of computer-generated imagery, as the infra-red signals are altered from their intended paths, resulting in degraded image quality and misalignment of the overlaid data. Correcting wavefront distortions involves compensating for these deviations to restore the intended wavefront shape, ensuring accurate and reliable sensor data processing. Wavefront sensors are implemented to detect these distortions, providing data that is used to adjust the overlaying of computer-generated imagery. Calibration algorithms are developed to consider the refractive index and thickness of the transparent medium, continuously adjusting sensor parameters to mitigate scattering effects.
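As a non-limiting sketch of the simplest such correction, the following example averages hypothetical wavefront-sensor slope measurements into a first-order tip/tilt estimate and converts it to an overlay pixel offset; a full adaptive-optics implementation would reconstruct and correct higher-order distortion as well:

```python
# First-order (tip/tilt) correction from wavefront-sensor slope data.
# The slope arrays, propagation distance, and pixel scale are assumed values.
import numpy as np


def tip_tilt_pixel_offset(slopes_x, slopes_y, propagation_mm, mm_per_pixel):
    """Average local wavefront slopes (radians) and convert the resulting
    tip/tilt into an overlay pixel offset (dx, dy)."""
    tilt_x = float(np.mean(slopes_x))
    tilt_y = float(np.mean(slopes_y))
    dx_mm = propagation_mm * np.tan(tilt_x)
    dy_mm = propagation_mm * np.tan(tilt_y)
    return dx_mm / mm_per_pixel, dy_mm / mm_per_pixel


# Synthetic slope data standing in for real wavefront-sensor output.
rng = np.random.default_rng(0)
sx = rng.normal(0.002, 0.0005, size=64)    # ~2 mrad mean tilt in x
sy = rng.normal(-0.001, 0.0005, size=64)   # ~-1 mrad mean tilt in y
print(tip_tilt_pixel_offset(sx, sy, propagation_mm=50.0, mm_per_pixel=0.1))
```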
In dynamic environments such as moving cockpits, factors like ambient lighting from the sun, interior lights, and the dynamic movement of lighting and shadows can impact detection and overlay accuracy. The invention provides for ambient light sensors to detect real-time changes in lighting conditions, adjusting infra-red sensor sensitivity and overlay parameters accordingly. Predictive algorithms can be implemented to anticipate changes in lighting and shadows based on aircraft movement and position, allowing pre-adjustments to maintain accuracy. Multi-spectral sensors can be deployed to differentiate between infra-red signals and ambient light, ensuring that the infra-red data is not corrupted by visible light changes. Multi-spectral sensors are devices capable of detecting and measuring light across multiple wavelengths or spectral bands, beyond the visible spectrum. In the context of this invention, multi-spectral sensors can be used to differentiate between infra-red signals and other types of light, such as visible or ultraviolet light. These sensors capture data from various spectral bands simultaneously, allowing for the precise identification and isolation of infra-red energy used in the overlay of computer-generated imagery versus that from ambient sources such as the sun. By analyzing the distinct spectral characteristics of different light sources, multi-spectral sensors enhance the accuracy and reliability of the system, ensuring that infra-red data is not contaminated by ambient light or other environmental interferences.
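As a non-limiting, highly simplified sketch of this differentiation, the following example subtracts a scaled visible-band reading from the raw infra-red-band reading; the scaling factor k is a hypothetical calibration constant, and a practical system would use a richer multi-band model:

```python
# Simplistic ambient-rejection sketch using two spectral bands.
# k is a hypothetical calibration constant relating ambient response in the
# visible band to its leakage into the infra-red band.
def ambient_corrected_ir(ir_band_reading, visible_band_reading, k=0.15):
    corrected = ir_band_reading - k * visible_band_reading
    return max(corrected, 0.0)   # clamp so noise cannot drive the estimate negative


print(ambient_corrected_ir(ir_band_reading=1.80, visible_band_reading=4.0))  # 1.2
```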
Defining and maintaining a sharp edge for the windowpane region is crucial for the accurate overlay of computer-generated imagery. Advanced edge detection algorithms can be used to precisely identify the boundary of the windowpane, accounting for distortions and dynamically updating the boundary in real-time. High-resolution infra-red sensors provide detailed data for accurate edge detection, capturing fine details even in challenging conditions. Image stabilization techniques, both hardware (e.g., stabilized sensor mounts) and software (e.g., digital image stabilization), can be used to maintain a sharp edge despite vibrations and movements of the aircraft cockpit.
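By way of non-limiting illustration, the following sketch shows one conventional edge-detection pipeline (here using OpenCV on a grayscale infra-red frame); the blur kernel and Canny thresholds are illustrative and would be tuned, or adapted at runtime, for the actual sensor:

```python
# Windowpane boundary detection from a grayscale infra-red frame (OpenCV 4.x).
import cv2
import numpy as np


def windowpane_contour(ir_frame_gray):
    blurred = cv2.GaussianBlur(ir_frame_gray, (5, 5), 0)   # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)                    # illustrative thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Treat the largest detected contour as the windowpane boundary.
    return max(contours, key=cv2.contourArea)


# Synthetic frame with a bright rectangular "pane" for demonstration.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(frame, (60, 40), (260, 200), 255, thickness=-1)
print(windowpane_contour(frame) is not None)   # True
```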
To ensure reliable performance in real-world scenarios, embodiments may include shielding from electromagnetic interference (EMI), which is common in aircraft cockpits. EMI shielding involves the use of materials or techniques to block or attenuate electromagnetic fields, preventing unwanted interference with electronic components and systems. In the context of this invention, EMI shielding is employed to protect the sensor array and associated electronics from electromagnetic interference common in aircraft cockpits. This shielding ensures that the infra-red sensors and processing units operate accurately and reliably, free from distortions or disruptions caused by external electromagnetic sources. Effective EMI shielding can involve the use of conductive or magnetic materials, enclosures, and grounding techniques designed to absorb or reflect electromagnetic waves, thereby safeguarding the integrity of the sensor data and the overall performance of the system. Temperature compensation mechanisms are also deployed to account for variations in sensor performance due to temperature changes, maintaining consistent accuracy.
The invention incorporates low-latency data processing pipelines to ensure real-time updates to the overlay, critical for applications where even minor delays can impact accuracy and usability. Redundant sensor systems are implemented to ensure continued operation in case of sensor failure, enhancing reliability and robustness in critical applications.
The user interface of the described systems can provide adjustable display parameters, allowing users to manually adjust brightness, contrast, and overlay sensitivity as needed. Feedback mechanisms can be incorporated to alert users to any issues with the sensor array or overlay accuracy, enabling immediate corrective actions and ensuring optimal performance.
The sensor array arrangement can be optimized for reliable operation through proper placement and configuration of the sensors. In one embodiment, the sensors are strategically positioned in a hexagonal grid around the transparent medium. This hexagonal grid configuration is chosen to maximize coverage and minimize blind spots, ensuring that the entire surface of the transparent medium is effectively monitored.
A hexagonal grid offers several advantages over other grid configurations. The close packing of sensors in a hexagonal pattern provides the most efficient coverage, reducing the likelihood of any areas being left unmonitored. This configuration also allows for overlapping detection zones, which enhances the accuracy and reliability of infra-red energy capture.
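As a non-limiting sketch, sensor positions for such a layout can be generated as a staggered (triangular-lattice) grid over the region bounding the transparent medium; the region dimensions and sensor pitch below are assumed example values:

```python
# Generate sensor positions on a hexagonal (staggered) grid covering a
# rectangular region; dimensions and pitch are illustrative.
import math


def hex_grid(width_mm, height_mm, pitch_mm):
    positions = []
    row_spacing = pitch_mm * math.sqrt(3) / 2.0   # vertical spacing between rows
    row, y = 0, 0.0
    while y <= height_mm:
        x = 0.0 if row % 2 == 0 else pitch_mm / 2.0   # stagger alternate rows
        while x <= width_mm:
            positions.append((x, y))
            x += pitch_mm
        y += row_spacing
        row += 1
    return positions


print(len(hex_grid(300.0, 200.0, pitch_mm=50.0)))   # number of sensor positions
```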
To determine the optimal placement of sensors within this hexagonal grid, mathematical formulas such as Snell's Law are employed. Snell's Law describes how light waves, including infra-red energy, bend when passing from one medium to another with a different refractive index. By applying Snell's Law, the system can calculate the expected angles of refraction for infra-red rays entering the transparent medium at various incident angles. These calculations guide the precise placement of sensors, ensuring that they are positioned to effectively capture the refracted infra-red energy.
For example, if an infra-red ray enters the transparent medium at a certain angle, Snell's Law can predict how much the ray will bend as it passes through the medium. This information is used to position sensors at locations where the refracted rays are expected to travel, ensuring that the sensors can detect the infra-red energy accurately. By accounting for the refractive properties of the medium, the system can maintain high detection accuracy and avoid errors that might arise from unaccounted refraction effects.
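By way of a worked, non-limiting numeric sketch (the indices, incidence angle, and pane thickness are assumed values), the refraction angle and the resulting lateral offset of the ray through a flat pane can be computed as follows, with the offset indicating how far a sensor should be displaced from the un-refracted line of sight:

```python
# Refraction angle and lateral ray offset through a flat pane (assumed values).
import math


def refraction_angle_deg(theta_incident_deg, n1=1.0, n2=1.5):
    theta1 = math.radians(theta_incident_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))


def lateral_shift_mm(theta_incident_deg, thickness_mm, n1=1.0, n2=1.5):
    """Sideways displacement of the ray on exiting a flat pane of the given thickness."""
    t1 = math.radians(theta_incident_deg)
    t2 = math.radians(refraction_angle_deg(theta_incident_deg, n1, n2))
    return thickness_mm * math.sin(t1 - t2) / math.cos(t2)


print(round(refraction_angle_deg(40.0), 1))        # ~25.4 degrees inside the glass
print(round(lateral_shift_mm(40.0, 10.0), 2))      # ~2.8 mm offset for a 10 mm pane
```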
Additionally, the hexagonal grid arrangement and the use of Snell's Law for sensor placement help in addressing challenges posed by variations in the medium's thickness or imperfections on its surface. These factors can alter the path of the infra-red rays, but the calculated sensor positions ensure that such variations are accounted for, providing robust detection capabilities.
This optimized sensor array arrangement is crucial for applications where precise detection and overlay of computer-generated imagery are required. In dynamic environments, such as aircraft cockpits or automotive head-up displays, maintaining accurate detection of infra-red energy through a transparent medium is essential for reliable operation. The hexagonal grid configuration, combined with the precise calculations based on Snell's Law, ensures that the system can deliver consistent and accurate performance even under challenging conditions.
Furthermore, the hexagonal grid allows for scalability and flexibility in sensor deployment. Additional sensors can be easily integrated into the grid to enhance coverage or improve resolution as needed. This adaptability makes the system suitable for a wide range of applications, from small-scale implementations in portable devices to large-scale deployments in advanced vehicular systems.
High-sensitivity infra-red sensors with low noise could be key components of the system, selected specifically to detect weaker infra-red signals passing through the transparent medium. These sensors are designed to be highly responsive to infra-red energy, ensuring that even the faintest signals are captured accurately. The selection of high-sensitivity sensors addresses the challenge of signal attenuation, which can occur as infra-red energy passes through the medium and encounters various optical properties that may diminish its strength.
One of the primary technical requirements for these sensors is their ability to operate with low noise. Noise in sensor readings can obscure the true signal and lead to inaccuracies in the data being captured. By using sensors with low intrinsic noise levels, the system ensures that the infra-red signals are detected with high fidelity, preserving the integrity of the data. This is particularly important in applications where precision is critical, such as in overlaying computer-generated imagery on real-world scenes.
To further enhance the accuracy of infra-red signal detection, these sensors are equipped with infra-red filters. These filters are designed to selectively allow infra-red wavelengths to pass through while blocking other types of ambient light, such as visible and ultraviolet light. Ambient light can introduce significant interference, especially in environments with varying lighting conditions. For instance, in an aircraft cockpit or an automotive head-up display, changes in sunlight, artificial lighting, and reflections can all impact sensor performance.
Infra-red filters mitigate this interference by ensuring that the sensors are primarily responsive to the intended infra-red signals. This selective filtering is essential for maintaining the accuracy and reliability of the system. By eliminating the ambient light interference, the sensors can focus on capturing the relevant infra-red energy, providing clean and precise data for further processing.
Additionally, these high-sensitivity, low-noise infra-red sensors are integrated into the system in a way that maximizes their effectiveness. The positioning of the sensors, guided by the optimized sensor array arrangement, ensures that they are placed at strategic locations where infra-red signals are expected to be strongest and most accurate. This careful placement, combined with the sensors' technical capabilities, enhances the overall performance of the system.
The use of high-sensitivity infra-red sensors with low noise and infra-red filters is particularly advantageous in dynamic environments where lighting conditions can change rapidly. For example, in an aircraft cockpit, the lighting environment can vary dramatically as the aircraft maneuvers, with shifts in sunlight and shadows. The robust sensor design ensures that these variations do not compromise the detection accuracy, allowing for reliable operation even under challenging conditions.
Moreover, the integration of these advanced sensors contributes to the system's ability to perform real-time adjustments and calibration. As the sensors continuously monitor the infra-red signals, they provide real-time data that feeds into the dynamic calibration routine and advanced signal processing algorithms. This integration allows the system to adapt to changing conditions, maintain high detection accuracy, and deliver precise overlay of computer-generated imagery.
Another aspect of the system is the implementation of a dynamic calibration routine designed to periodically adjust the sensitivity and positions of the sensors based on real-time detected signal strength and environmental data. This calibration routine ensures that the system maintains optimal performance despite varying conditions that may affect the infra-red energy transmission and detection. The dynamic calibration routine serves to continuously fine-tune the system, compensating for factors such as changes in ambient lighting, temperature fluctuations, and potential obstructions or alterations in the transparent medium. By regularly recalibrating, the system can adapt to transient and long-term changes, ensuring consistent and accurate overlay of computer-generated imagery.
The process begins with the sensors continuously monitoring the strength of the infra-red signal passing through the transparent medium. Variations in signal strength can indicate changes in the medium or environmental conditions that need to be addressed. The system collects and analyzes environmental data, including ambient light levels, temperature, and other relevant factors that could impact sensor performance. This data is used to adjust the sensor parameters dynamically. Based on the collected data, the system can optimize sensor settings for current conditions. This may involve fine-tuning the angle or distance of the sensors relative to the transparent medium to ensure optimal detection.
Machine learning algorithms are integrated into the system to enhance the dynamic calibration routine. These algorithms enable the system to learn from calibration data over time, improving detection accuracy and allowing the system to adapt to changing conditions more effectively. The system collects extensive data during each calibration cycle, including sensor readings, environmental conditions, and the results of any adjustments made. This data forms the basis for machine learning analysis. Machine learning algorithms analyze the collected data to identify patterns and correlations between environmental factors, signal strength variations, and sensor performance. By recognizing these patterns, the system can predict how different conditions will impact sensor accuracy. Using insights gained from pattern recognition, the machine learning algorithms can make predictive adjustments to sensor parameters. For instance, if the system anticipates a drop in signal strength due to an approaching weather change, it can preemptively adjust sensor sensitivity and positions to maintain accurate detection. The algorithms continually refine their models based on new calibration data, improving their predictive accuracy over time. This ongoing learning process enables the system to become increasingly robust and reliable in diverse operating conditions.
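As a non-limiting sketch of the predictive-adjustment idea, the following example fits a simple linear model to hypothetical past calibration records and predicts a gain to pre-apply for forecast conditions; the feature set, data values, and choice of model are illustrative stand-ins for whatever the system actually learns:

```python
# Predictive gain adjustment learned from hypothetical past calibration cycles.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical records: [temperature_C, ambient_lux] -> gain that worked best.
X = np.array([[15.0,  200.0],
              [20.0,  500.0],
              [25.0,  900.0],
              [30.0, 1500.0]])
y = np.array([0.9, 1.0, 1.3, 1.8])

model = LinearRegression().fit(X, y)

# Gain to pre-apply for forecast conditions (e.g., anticipated haze and lower light).
forecast = np.array([[22.0, 1100.0]])
print(round(float(model.predict(forecast)[0]), 2))
```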
The integration of a dynamic calibration routine with machine learning algorithms offers several key advantages: enhanced accuracy through continuous adjustments and learning from real-time data, adaptability to sudden and gradual changes in the environment, robust performance through proactive predictive adjustments, and reduced maintenance requirements by automating calibration. In practical implementation, the dynamic calibration routine might be executed at regular intervals, such as every few minutes, depending on the stability of the environment and the specific application requirements. The system could use a combination of fixed and movable sensors to optimize coverage and flexibility. Machine learning models could be hosted on an onboard processing unit, with the capability to update models periodically based on new data trends and insights. By integrating dynamic calibration and machine learning, the system achieves a level of precision and reliability necessary for applications requiring real-time, accurate overlay of computer-generated imagery, such as in aviation, automotive head-up displays, and mixed reality systems in dynamic environments.
Advanced signal processing algorithms can be employed to address and correct distortions caused by refraction and transmission loss as infra-red energy passes through the transparent medium. These algorithms are essential for maintaining the accuracy and quality of the overlaid computer-generated imagery. The system continuously monitors and compensates for the distortions introduced by the varying optical properties of the medium and environmental conditions. This ensures that the infra-red signals are accurately interpreted, and the resulting imagery is precisely aligned and clear.
To further enhance the accuracy of the overlay, real-time environmental sensors are integrated into the system. These sensors continuously gather data on environmental factors such as ambient light levels, temperature, and other relevant conditions that could affect infra-red signal transmission. The data collected by these sensors is used to dynamically adjust the signal processing parameters, allowing the system to respond in real-time to any changes in the environment. This dynamic adjustment capability ensures that the overlay remains consistent and accurate despite fluctuations in environmental conditions.
The advanced signal processing algorithms work in tandem with the real-time environmental sensors to provide a robust solution for maintaining the fidelity of the overlay. By continuously analyzing the infra-red signals and the environmental data, the system can make immediate corrections to account for any distortions or losses. This integration of real-time data processing and environmental monitoring allows for a high degree of precision and reliability in the overlay of computer-generated imagery.
Moreover, these algorithms are designed to learn and adapt over time. As the system operates, it gathers extensive data on how different environmental conditions affect signal transmission and processing. Machine learning techniques can be applied to this data to further refine the algorithms, improving their ability to predict and correct for distortions in future operations. This continuous improvement process ensures that the system becomes increasingly accurate and reliable, adapting to new conditions and challenges as they arise.
In practical applications, the integration of advanced signal processing algorithms and real-time environmental sensors ensures that the system can deliver high-quality, accurate overlays in dynamic environments. Whether used in aviation, automotive head-up displays, or mixed reality systems, this approach provides the precision and consistency necessary for effective and reliable operation.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
This application is a continuation patent application of U.S. patent application Ser. No. 17/919,304 filed on Oct. 17, 2022, which is a U.S. National Phase Application under 35 U.S.C. 371 of International Application No. PCT/US2021/064043, filed Dec. 17, 2021, which claims the benefit of the following: U.S. Provisional Application No. 63/128,163, filed on Dec. 20, 2020; U.S. Provisional Application No. 63/180,040, filed on Apr. 26, 2021; and U.S. Provisional Application No. 63/190,138, filed on May 18, 2021. The entire disclosures of each of the above applications are incorporated herein by reference.
Provisional applications from which priority is claimed:

Number | Date | Country
--- | --- | ---
63/190,138 | May 2021 | US
63/180,040 | Apr 2021 | US
63/128,163 | Dec 2020 | US

Continuation data:

Relationship | Number | Date | Country
--- | --- | --- | ---
Parent | 17/919,304 | Oct 2022 | US
Child | 18/791,998 | | US