The present application claims benefit of prior filed Indian Provisional Patent Application No. 202011006205, filed Feb. 13, 2020, which is hereby incorporated by reference herein in its entirety.
The present disclosure generally relates to controlling searchlights for aerial vehicles.
Search and Rescue (SAR) operations are often performed using aircraft with specialized rescue teams and equipment to assist people in real or likely distress. These include mountain rescue, ground search and rescue, air-sea rescue, etc. Generally, rotary wing or fixed wing aircraft are used for aerial SAR operations.
Systems such as InfraRed (IR) cameras and Forward-Looking InfraRed (FLIR) systems, which detect thermal radiation, and night vision goggles are used in performing SAR during night operations.
Night search and rescue is a challenging task. Either the On Scene Coordinator (OSC) or the pilot carries a considerable workload when performing SAR operations at night. Among the multiple factors that govern the effectiveness of SAR at night, illumination from search, navigation and other lights plays a considerable role.
Operations performed by the coordinator include stow, deploy, filter change, zoom and searchlight dimming, along with pointing, scanning and coordinating. Available systems include hand grip controllers, searchlight control panels and filter selectors that interface with a searchlight to carry out these activities. Such interfaces with the searchlight are quite cumbersome and put a considerable amount of load on the person conducting the search. Existing hand grip controllers limit a degree of freedom of movement of a search coordinator or pilot, which can limit the effectiveness of the search operation.
Accordingly, it is desirable to provide methods and systems to improve user interfaces with search and rescue equipment including searchlight control. In addition, it is desirable to provide user interfaces that allow a user to wholly focus on the search tasks and to minimize distraction of how to operate control interfaces. Furthermore, it is desired to reduce workload of a person conducting search operations and to facilitate focus on executing the mission. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
Systems and methods described herein provide one or more processors configured to execute program instructions to cause the one or more processors to receive camera images of a scene including a portion being lit by a searchlight, receive terrain data from a terrain database, generate an augmented/virtual reality scene based on the camera images and the terrain data, display, via a headset display, the augmented/virtual reality scene, track gaze and head movement using headset sensors, output tracking data based on tracked gaze and head movement, and output instructions for controlling at least one function of the searchlight based on the tracking data.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
A more complete understanding of the subject matter may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
Techniques and technologies may be described herein in terms of functional and/or logical block components and/or modules, and with reference to symbolic representations of operations, processing tasks, and functions that may be performed by various computing components or devices. Such operations, tasks, and functions are sometimes referred to as being computer-executed, computerized, software-implemented, or computer-implemented. It should be appreciated that the various block components and modules shown in the figures may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.
When implemented in software or firmware, various elements of the systems described herein are essentially the code segments or instructions that perform the various tasks. In certain embodiments, the program or code segments or programming instructions are stored in a tangible processor-readable medium, which may include any medium that can store or transfer information. Examples of a non-transitory and processor-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, or the like.
Disclosed herein is a searchlight control system offering improvements in searchlight technology. The present disclosure proposes the use of multimodal interactions with the searchlight to provide superior performance and increase overall mission efficiency. The disclosed searchlight control system is interfaced with multimodal control interactions, which can lead to more autonomous mission operations and reduce previously laborious steps for a pilot or mission coordinator. The present disclosure describes a head mounted display that displays a search scene captured by a camera mounted to an aerial vehicle, processes the captured images to provide an augmented or virtual reality display and utilizes gaze tracking as one input for controlling a searchlight. In embodiments, additional inputs include speech recognition for controlling at least one function of the searchlight.
Systems and methods described herein reduce the load of a person conducting search operations and facilitate focus on executing the mission. The searchlight control system enables a searchlight connected with multiple modalities to improve accessibility and efficiency of the search mission. The multiple modalities include voice and/or gesture control in conjunction with an augmented display that will aid the coordinator in accomplishing the mission with ease. The herein described searchlight control system allows a greater degree of freedom of movement for an on-scene coordinator, improved focus on mission aspects, enhanced situational awareness and facilitates quick decision making.
Referring to
In embodiments, searchlight control system 10 additionally includes voice recognition unit 30 to provide a further mode of input for the searchlight control system. A voice activated searchlight allows a greater degree of freedom of movement of a searchlight coordinator. Microphone 88 of headset 14 is connected, wirelessly or directly, to voice recognition module 32, or a separate microphone 88 is provided. Voice recognition module 32 is trained for a list of possible search and rescue vocabulary. This vocabulary includes searchlight movement commands (such as turn, tilt, move, rotate), searchlight property commands (such as increase/reduce brightness, focus, slave to camera), filter selection commands and tracking commands (such as track object (human/vehicle) in focus, record). In some embodiments, gaze slaving control for direction of searchlight 12 is combined with voice activated control of other searchlight features such as searchlight property commands, filter selection and/or tracking commands. Voice recognition module 32 generates voice control commands 90 for a given speech command and searchlight controller 42 is configured to react to voice control commands 90 to control searchlight 12.
In embodiments, searchlight control system 10 includes a display device 24 as an additional control input. Display device 24 is configured to display output from camera 46 remotely in a monitor/handheld system such as a tablet. Display device 24 is configured to allow synchronous operation of viewing and searching in poor weather/visibility conditions. That is, live images from the camera 46 can be viewed and touchscreen controls can be used to adjust searchlight functions (such as beam direction, searchlight properties, filter selection and tracking). In this way, a control-on-the-go process is provided by which searchlight 12 can be adjusted while viewing the scene 52. In embodiments, a current state/orientation of the searchlight 12 is sent from searchlight controller 42 to display device 24. The view stream is coupled with a touchscreen interface provided for the coordinator to re-direct the searchlight 12 and includes a set of soft switches to change the orientation of the searchlight 12 and change the light parameters by sending corresponding commands to the searchlight controller 42. Control of the searchlight 12 may be passed between headset 14 and display device 24 based on mode selections and may be distributed between them so that some searchlight functions are controlled by headset 14 and some by display device 24. Display device 24 can also work in conjunction with voice activated searchlight 12, wherein the coordinator can use voice to control some searchlight functions (e.g. tracking), some searchlight functions are controlled through display device 24 (e.g. searchlight properties) and some searchlight functions are controlled through headset 14 (e.g. gaze slaved beam direction).
In embodiments, display device 24 includes an interactive map module 26 providing a graphical map allowing touch control to select areas of interest and to generate searchlight and/or aerial vehicle commands to move the aerial vehicle 40 and the searchlight 12 to the area of interest through autopilot system 80 and searchlight controller 42. In some embodiments, the interactive map is displayed in the augmented/virtual reality world of headset 14 and a similar area of interest selection can be made by gesture control, which is detected by gesture determination module 22.
Having summarized the overall functions of searchlight control system 10 in the foregoing, more detailed information will now be provided with reference to
In embodiments, camera 46 includes one or more cameras that operate (primarily) in the visible spectrum, in the infrared spectrum, or a combination thereof. In embodiments, camera 46 is part of an Enhanced Vision System (EVS). An EVS camera is an airborne system that captures a forward-looking scene so as to provide a display that is better than unaided human vision. The EVS camera includes one or more imaging sensors such as a color camera and an infrared camera or radar. The EVS camera includes, in embodiments, a millimeter wave radar (MMW) based imaging device, a visible low light television camera, one or more InfraRed cameras (possibly including more than one infrared camera operating at differing infrared wavelength ranges) and any combination thereof to allow sufficient imaging in poor visibility conditions (e.g. because of night time operation or because of inclement weather).
The processing system 36 is configured to receive terrain data 58 from a terrain database 38. Terrain database 38 includes data elements describing ground terrain and some buildings. Thus, slopes, hills, mountains, buildings and even trees can be described in terrain database 38. Terrain database 38 allows terrain features to be included in displays generated by processing system 36, particularly three-dimensional perspective displays provided through headset display 16 of headset 14.
The processing system 36 is configured, via augmented/virtual reality generation module 64, to generate an augmented/virtual reality scene based on image data 56 from camera 46 and terrain data 58 from the terrain database 38. Other synthetic feature data sources in addition to camera 46 and terrain database 38 can be provided. In embodiments, augmented/virtual reality generation module 64 is configured to receive at least image data 56 from camera 46, position and orientation data 66 and terrain data 58. Position and orientation data 66 defines position and orientation of camera 46 and camera parameters in order to be able to localize the scene 52 being viewed by the camera 46. Position and orientation data 66 is generated based on global positioning and orientation data obtained from sensor system 68 of aerial vehicle 40. Sensor system 68 includes a global positioning receiver that determines position of aerial vehicle 40 based on satellite signals received from at least three satellites. Further, sensor system 68 includes an Inertial Measurement Unit (IMU) to allow orientation of the aerial vehicle (yaw, pitch and roll) to be determined. Yet further, sensor system 68 includes sensors (optionally associated with searchlight actuators 62) to allow relative orientation and position of camera 46 and aerial vehicle 40 to be determined. Other camera parameters including zoom level may be determined in sensor system 68. The position and orientation data 66 from sensor system 68 is fused by augmented/virtual reality generation module 64 to precisely localize the scene being captured by the camera 46 in real world space. Position and orientation data 66 can be used to positionally register synthetic terrain features defined in terrain data 58 (and optionally other synthetic features from other data sources) using known transformation functions between real space and image space, thereby generating an augmented or virtual reality scene.
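By way of a non-limiting illustration, the registration of a synthetic terrain point into the camera image can be sketched with a simplified pinhole camera model. The function, frame conventions and parameters below are hypothetical simplifications for illustration only, not part of the disclosed system:

```python
import math

def world_to_image(point_w, cam_pos, yaw_deg, focal_px, cx, cy):
    """Project a world point (x: east, y: north, z: up) into pixel
    coordinates of a camera at cam_pos heading yaw_deg, assuming a
    level camera and a simple pinhole model (focal length in pixels,
    principal point at (cx, cy))."""
    # Translate the world point into a camera-centered frame.
    dx = point_w[0] - cam_pos[0]
    dy = point_w[1] - cam_pos[1]
    dz = point_w[2] - cam_pos[2]
    # Rotate by the heading so the camera boresight is the depth axis.
    yaw = math.radians(yaw_deg)
    fwd = dx * math.sin(yaw) + dy * math.cos(yaw)    # depth along boresight
    right = dx * math.cos(yaw) - dy * math.sin(yaw)  # lateral offset
    if fwd <= 0:
        return None  # point is behind the camera
    u = cx + focal_px * right / fwd
    v = cy - focal_px * dz / fwd
    return (u, v)
```

A full implementation would additionally account for camera pitch and roll, lens distortion and zoom level, all of which are carried in position and orientation data 66 in the embodiments above.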
Augmented/virtual reality generation module 64 is configured to generate display data 70 representing the generated augmented/virtual reality scene. Thus, a live feed from camera 46 is perfectly registered via position and orientation data 66 on top of a 3-D graphical view of the terrain (and possibly other synthetic features), creating a blended image that gives pilots and search coordinators enhanced situational awareness. Augmented reality adds digital elements from terrain database 38 and other data sources to a live view from camera 46. Virtual reality implies a complete immersion experience such that live view from camera 46 is wholly replaced by corresponding synthetic features. A virtual reality view may be intermittent to allow an operative greater visibility when orienting in the search scene 52 before reverting to augmented reality to locate a potential target in the scene 52.
The headset display 16 of headset 14 is configured to display the virtual/augmented reality scene based on the display data 70 generated by augmented/virtual reality generation module 64. Headset display 16 includes a stereoscopic head-mounted display that provides separate images for each eye.
Headset 14 includes headset motion sensors 18 configured to track head movement. Headset motion sensors 18 include an IMU including one or more accelerometers and gyroscopes to allow position and orientation (yaw, pitch and roll) of the head to be calculated. In some embodiments, headset 14 includes an outwardly facing headset camera (not shown) or a cluster of ranging devices to allow for highly accurate determination of position and orientation of headset 14 through Simultaneous Localization And Mapping (SLAM) techniques. Headset motion sensors 18 are configured to output position and orientation data as part of gaze and head tracking data 72. In some embodiments, headset 14 includes one or more inwardly facing headset gaze cameras 20 to allow eye gaze of a wearer to be tracked according to known techniques. Headset 14 includes an eye tracking module that receives images of the eyes from headset gaze camera(s) 20 and outputs an eye tracking vector as part of gaze and head tracking data 72. Accordingly, headset 14 is configured to generate gaze and head movement tracking data 72 that is used to control a variety of features of searchlight control system 10.
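For illustration only, the fusion of gyroscope and accelerometer measurements into a head orientation estimate can be sketched as a single step of a complementary filter, a common IMU technique; the function name, axis conventions and gain below are hypothetical and far simpler than a full IMU/SLAM solution:

```python
import math

def complementary_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """One update step of a complementary filter for head pitch (degrees):
    integrate the gyroscope rate, then correct long-term drift with the
    accelerometer's gravity-derived pitch estimate."""
    ax, ay, az = accel
    # Pitch implied by the gravity vector measured by the accelerometer.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Pitch propagated from the previous estimate by the gyro rate.
    gyro_pitch = pitch_prev + gyro_rate * dt
    # Blend: trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Analogous filters (or a quaternion-based equivalent) would run for roll, with yaw typically stabilized by the SLAM or magnetometer data mentioned above.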
Searchlight control system 10 includes a searchlight controller 42 that is configured to output searchlight control data 64 for controlling at least one function of the searchlight 12 based on the gaze and head movement tracking data 72. In embodiments, the eye tracking vector (which is part of gaze and head movement tracking data 72) includes data concerning a gaze point and fixations, amongst other gaze metrics. In embodiments, searchlight controller 42 is configured to gaze slave movement of the searchlight 12 through searchlight actuators 62 and searchlight control data 64. That is, searchlight controller 42 is configured to determine a point or area of interest according to the gaze of the wearer of the headset 14 as defined in the gaze vector, to transform that point or area of interest in image space into real world space (using an inverse of a similar transform to that described above with respect to augmented/virtual reality generation module 64) and to generate searchlight control data 64 accordingly. In some embodiments, gaze and head movement tracking data 72 is utilized by augmented/virtual reality generation module 64 to execute foveated rendering by rendering parts of the scene 52 that a wearer is looking at in higher resolution than peripheral parts.
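The image-space-to-real-world transform underlying gaze slaving can be sketched, under strong simplifying assumptions (level camera, flat ground at z = 0, hypothetical pinhole parameters), as a ray cast from the fixated pixel to the ground plane:

```python
import math

def gaze_to_ground(gaze_px, cam_alt, focal_px, cx, cy, cam_yaw_deg=0.0):
    """Cast a ray through the pixel the wearer fixates and intersect it
    with a flat ground plane at z = 0 to obtain a world point (east,
    north) to illuminate. Simplifications: the camera is level at
    altitude cam_alt and faces along its yaw heading."""
    u, v = gaze_px
    # Direction of the gaze ray in the camera frame (pinhole model).
    right = (u - cx) / focal_px
    down = (v - cy) / focal_px
    if down <= 0:
        return None  # ray is at or above the horizon; never hits ground
    t = cam_alt / down            # range multiplier to reach z = 0
    x_cam, y_cam = right * t, t   # ground offset in the camera frame
    # Rotate the offset into world (east, north) coordinates.
    yaw = math.radians(cam_yaw_deg)
    east = x_cam * math.cos(yaw) + y_cam * math.sin(yaw)
    north = -x_cam * math.sin(yaw) + y_cam * math.cos(yaw)
    return (east, north)
```

In the embodiments above, a real implementation would intersect the gaze ray with the terrain surface from terrain database 38 rather than a flat plane.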
In accordance with various embodiments, position and orientation data representing head movements of a user are output from headset 14 as part of gaze and head movement tracking data 72, which is used as a control input for augmented/virtual reality generation module 64. Specifically, as the head orientation moves (e.g. pitch, roll and/or yaw), this movement is captured in gaze and head movement tracking data 72 and the part of the scene 52 that is displayed is updated accordingly. Augmented/virtual reality generation module 64 is configured to generate display data 70 to show part of the scene 52 according to head movements of a wearer of headset 14 (as defined in gaze and head movement tracking data 72). This mapping of head movement to the part of scene 52 to be displayed is determined based on a transform of a view of the wearer in image space (which is known from headset display parameters and orientation of headset 14 included in gaze and head movement tracking data 72) into real world space of the scene 52 (which is known based on position and orientation data 66 including global position of aerial vehicle 40, orientation of aerial vehicle 40, orientation of camera 46 and other parameters), and the augmented/virtual reality generation module 64 generates display data 70 spatially corresponding to the required part of the scene 52. When head movement indicated by gaze and head movement tracking data 72 is outside of the current field of view of camera 46, processing system 36 is configured to generate scene capture data 76 defining an area of scene 52 that is to be captured (e.g. a defined geographical area). Based on scene capture data 76, camera controller 46 of aerial vehicle 40 is configured to generate camera control data 78, which is used by camera actuators 74 to capture the area of scene 52 defined by scene capture data 76.
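The decision of whether a head-slewed view can be served from the current camera field of view, or whether scene capture data must re-aim the camera, can be sketched as follows. This is a hypothetical yaw-only simplification of the behavior described above:

```python
def view_request(head_yaw_deg, cam_yaw_deg, cam_fov_deg):
    """Decide whether a head-commanded view direction falls inside the
    camera's current horizontal field of view. Returns ('crop', offset)
    when the view can be rendered from the buffered frame, or
    ('re-aim', heading) when a scene capture request is needed."""
    # Wrap the angular difference into (-180, 180] degrees.
    offset = (head_yaw_deg - cam_yaw_deg + 180) % 360 - 180
    if abs(offset) <= cam_fov_deg / 2:
        return ("crop", offset)        # serve from the buffered scene
    return ("re-aim", head_yaw_deg)    # generate scene capture data
```

A complete version would test pitch as well and would command a geographical area rather than a bare heading, matching the scene capture data 76 described above.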
In some embodiments, searchlight control data 64 and/or camera control data 78 is useful for automatically controlling, through an autopilot system 80, a position of the aerial vehicle 40 as well as, or alternatively to, utilizing searchlight actuators 62 and/or camera actuators 74 to control position and orientation of searchlight 12 and/or to control position and orientation of camera 46.
The present disclosure provides a searchlight control system 10 that allows a wearer of headset 14 to control direction and orientation of beam 60 by where the wearer is looking in a display of the scene shown on headset display 16. An augmented or virtual reality view is provided through headset display 16 that allows the wearer to be fully immersed in the search mission and to intuitively control the beam of the searchlight 12. Further, which parts of scene 52 are being viewed is controlled by head movement (and optionally additionally eye tracking), which can be used to control aim of camera 46. Thus, the scene 52 is more clearly viewed and the searchlight 12 can be more effectively controlled than with prior art control systems.
Searchlight 12 is controlled by additional control inputs to gaze slaving, in accordance with various embodiments. In embodiments, headset 14 includes a gesture determination module 22 configured to determine hand gestures of a wearer of headset 14. In some embodiments, searchlight control system 10 includes hand controllers (not shown) that have light emitting devices that allow gesture determination module 22 to determine position and orientation of the hands through constellation tracking methods. In other embodiments, hand controllers are not required and headset 14 includes a camera (not shown) for imaging the hands of the wearer. The gesture determination module 22 is configured to analyze the imaging of the hands of the wearer to determine hand gestures and to correspondingly provide gesture control data 82 that is processed by processing system 36 and used by processing system 36 and searchlight controller 42 to determine searchlight control data 64 for controlling one or more features of searchlight 12. For example, gestures may be defined by gesture determination module 22 for selecting operating parameters of searchlight 12 such as filter or focus. In other embodiments, gaze control mode may be exited and gesture control mode may be entered such that direction of beam 60 is controlled through gesture control. Gesture control data 82 may be used to control display by headset display 16. For example, a zoom function may be invoked by gesture control, which can be performed via a zoom of camera 46 or through image processing by augmented/virtual reality generation module 64. In some embodiments, a field of view shown by headset display 16 is controlled by a combination of gestures and head movements whereby coarse control is achieved through gestures (e.g. by gesture selecting a particular part of a currently displayed scene such as a floor of a building) with head movements providing detailed control of which part of the scene is centrally displayed.
Alternatively, head motion control mode can be exited and a gesture control mode can be entered in which field of view in headset display 16 is controlled primarily through gesture control.
In accordance with various embodiments, searchlight control system 10 includes a display device 24 including an interactive map module 26 and a touchscreen control module 28. Interactive map module 26 is configured to generate an interactive map of the search scene and to synchronously display live images from camera 46. Although display device 24 is shown as a separate display in
In accordance with various embodiments, searchlight control system 10 includes a voice recognition unit 30 including a microphone 88 and a voice recognition module 32. Microphone 88 can be included in headset 14 or provided as a separate device. Voice recognition module 32 is configured to process audio data received from microphone 88 to recognize vocabulary for which the voice recognition module 32 is trained or otherwise specially configured using known speech recognition algorithms. In some embodiments, vocabulary data 86 is received from a vocabulary database 34 such as a search and rescue focused vocabulary database 34. Voice recognition module 32 is configured to process the audio data to identify any of a list of vocabulary in vocabulary data 86 and to output corresponding voice control commands 90 to searchlight controller 42. Searchlight controller 42 is responsive to voice control commands 90 to control at least one function of the searchlight 12 including direction of beam 60, filter, focus and other functions. Exemplary voice commands to be included in vocabulary data 86 include any one or a combination of:
In embodiments, voice recognition unit 30 is configured to receive voice commands relating to display shown by headset display 16 including zoom and camera aiming functions, which can be executed through augmented/virtual reality generation module 64 or through camera controller 46.
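As a non-limiting illustration, the mapping of recognized utterances to searchlight commands described above may be sketched as a simple phrase lookup. The vocabulary entries and command identifiers below are hypothetical examples and not the actual trained vocabulary of vocabulary database 34:

```python
def parse_voice_command(utterance, vocabulary):
    """Match a recognized utterance against a trained SAR vocabulary and
    return a (command_class, command) pair, or None if no phrase matches."""
    text = utterance.lower().strip()
    for command_class, phrases in vocabulary.items():
        for phrase, command in phrases.items():
            if phrase in text:
                return (command_class, command)
    return None

# Hypothetical illustrative vocabulary, grouped by command class.
SAR_VOCABULARY = {
    "movement": {"tilt up": "TILT_UP", "rotate left": "ROTATE_LEFT"},
    "property": {"increase brightness": "BRIGHTNESS_UP",
                 "slave to camera": "SLAVE_CAMERA"},
    "filter": {"infrared filter": "FILTER_IR",
               "amber filter": "FILTER_AMBER"},
    "tracking": {"track object": "TRACK_OBJECT", "record": "RECORD"},
}
```

A deployed system would typically use a grammar-constrained speech recognizer rather than substring matching, but the class/command mapping would be analogous.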
In embodiments, voice control through voice recognition unit 30 is used in association with gaze and head movement controls of headset 14. That is, some functions of searchlight 12 are controlled through voice control commands 90 and some functions are controlled through gaze and head tracking data 72. For example, movement of beam 60 is controlled based on gaze and head tracking data 72, while filter selection, tracking or searchlight property commands are controlled through voice control commands 90. In other embodiments, a combination of voice, gesture and gaze/head movement controls is provided. For example, searchlight movement commands are controlled based on gaze and head tracking data 72, while voice control commands 90 and gesture control data 82 are used to control different functions selected from searchlight property commands, filter selection commands and tracking commands.
Various modes have been described herein including gaze control mode, gesture control mode, head motion control mode, headset control mode and touchscreen control mode. These modes may be selected by voice, gesture or touch control. For example, a gesture may be defined to open a menu of selectable modes that is viewed through headset display 16. In this way, a wearer of the headset 14 can transition between controlling a direction of searchlight 12 by gaze control and by touchscreen control. Similarly, position of aerial vehicle 40 can be controlled through gesture control or touchscreen control of the interactive map depending on the selected mode. When the modes are not in conflict with each other, some searchlight controls can be controlled by at least two of voice control, gesture control and touchscreen control (e.g. searchlight focus). Similarly, depending on the selected mode, field of view in headset display 16 is controlled by head motion control mode, gesture control mode or touchscreen control mode. The present disclosure thus provides multimodal control of a searchlight 12, a camera 46 (or at least field of view in headset display 16) and optionally an aerial vehicle 40 through any combination of gesture, voice, gaze, head motion and touchscreen inputs.
In the embodiment of
In step 210, processing system 36 receives image data 56 captured by camera 46 mounted to aerial vehicle 40. Camera 46 is configured to capture video of scene 52, which is lit by searchlight 12 and which potentially includes a target of the search mission (such as a human or vehicular target). In step 220, terrain data 58 defining synthetic terrain features (and their location in the world) is received by processing system 36, in addition to optionally receiving further data defining further synthetic features such as graphical representations of certain objects (e.g. humans or vehicles). In step 230, processing system 36 is configured to generate, via augmented/virtual reality generation module 64, an augmented/virtual reality scene defined by display data 70. Augmented/virtual reality generation module 64 simulates binocular vision by generating different images (included in display data 70) for each eye, giving the illusion that a two-dimensional picture is a three-dimensional environment.
In step 240, the augmented/virtual reality scene defined in display data 70 is displayed through headset display 16. A wearer of headset 14 is able to look up and down, left and right to traverse a view of the augmented/virtual reality scene. Eye (gaze) and head movements are tracked in step 250 by headset gaze camera 20 and headset motion sensors 18 to generate gaze and head tracking data 72, which is fed back to processing system 36 to update a position of a field of view defined by display data 70. The field of view of scene 52 may be changed by augmented/virtual reality generation module 64 sending a different portion of a buffered augmented/virtual reality scene or by adjusting focus and/or aiming of camera 46 by sending scene capture data 76 to camera controller 46. Furthermore, a function of searchlight 12 is controlled based on gaze and head tracking data 72 in step 260. That is, a gaze slaving algorithm is implemented so that an area of interest according to what the eyes of the wearer of headset 14 are looking at in image space is transformed to an area of illumination by searchlight 12 in the real world. Searchlight controller 42 calculates searchlight control data 64 so that searchlight 12 illuminates that area of interest in the real world. In embodiments, the calculation is based on the target real world coordinates, the global position and orientation of the aerial vehicle 40 and searchlight properties (e.g. position on vehicle and current orientation) to determine an angular (and possibly focus) adjustment required to illuminate the target area.
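The angular adjustment calculation described above can be sketched, for illustration, under the simplifying assumptions of a level vehicle and a searchlight located at the vehicle reference point; the function name and frame conventions are hypothetical:

```python
import math

def searchlight_angles(target, vehicle_pos, vehicle_yaw_deg):
    """Compute the pan and tilt (degrees, vehicle-relative) needed for a
    searchlight to illuminate a world target, given vehicle position
    (east, north, up) and heading. Simplifications: level vehicle,
    searchlight at the vehicle reference point."""
    de = target[0] - vehicle_pos[0]
    dn = target[1] - vehicle_pos[1]
    du = target[2] - vehicle_pos[2]
    bearing = math.degrees(math.atan2(de, dn))           # world bearing
    pan = (bearing - vehicle_yaw_deg + 180) % 360 - 180  # relative pan
    ground = math.hypot(de, dn)
    tilt = math.degrees(math.atan2(du, ground))          # negative = down
    return (pan, tilt)
```

A full implementation would also apply the searchlight's mounting offset and the vehicle's pitch and roll from the IMU before commanding searchlight actuators 62.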
In step 270, at least one additional mode of input is received for controlling the searchlight 12 and/or the aerial vehicle 40. In embodiments, the additional mode of input is gesture control. Gestures of the hands of a wearer are detected by gesture determination module 22 processing vision images of the wearer's hands (from an outwardly facing camera of headset 14) or by detecting a constellation of light emitting devices on hand controllers and using a constellation detection algorithm. Gesture determination module 22 outputs gesture control data 82 to control one or more functions of searchlight 12 such as brightness, focus and type of filter being used. Various filters are possible including different color filters and infrared filters (multispectral filters), smoke/fog filters (for optimal use in fog or smoke conditions, generally an amber filter), a peripheral vision filter having a band of light extending around a center spot, etc. Searchlight 12 may include a filter wheel and a brightness and/or focus setter that is operated by searchlight actuators 62 under instruction from searchlight controller 42 based on gesture control data 82.
Another mode of input according to step 270, which may be additional to or alternative to gesture control, is voice control. Voice recognition unit 30 detects voice commands in audio data received from microphone 88 (which may be part of headset 14) and generates corresponding voice control commands 90 to control a function of searchlight 12. Voice control commands 90 are processed by searchlight controller 42 to output searchlight control data 64 for implementing the commanded function. For example, brightness, focus and filter selection may be controlled by voice control commands 90. Voice control commands 90 may also invoke a tracking function of searchlight controller 42 (and camera controller 46) whereby a target in the augmented/virtual reality display scene shown in headset 14 is identified and the target is automatically tracked by searchlight 12 (and camera 46) by way of known tracking algorithms. The target may be identified by a combination of gaze being directed at the target in the augmented/virtual reality display scene paired with a tracking voice command. Processing system 36 is, in such an embodiment, configured to combine the identified target in tracking data 72 and the tracking command in voice control commands 90 and to send a searchlight tracking invocation command to searchlight and camera controllers 42, 46.
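The pairing of a gaze-designated target with a spoken tracking command can be sketched as a simple guard; the command identifier and data names below are hypothetical illustrations:

```python
def maybe_start_tracking(gaze_target_id, voice_command):
    """Pair a gaze-designated target with a spoken tracking command and,
    when both are present, emit a tracking invocation for the
    searchlight/camera controllers; otherwise do nothing."""
    if voice_command == "TRACK_OBJECT" and gaze_target_id is not None:
        return {"action": "TRACK", "target": gaze_target_id}
    return None
```

In practice the two inputs arrive asynchronously, so an implementation would also bound the time window within which a fixation and a voice command are considered paired.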
Another mode of input according to step 270, which may be additional or alternative to gesture and voice control, is touchscreen control, whereby searchlight controls (e.g. searchlight movement commands, searchlight property commands, filter selection commands and tracking commands) and/or aerial vehicle controls are selected by a user through display device 24. Display device 24 may be operated by a search coordinator different from the wearer of headset 14. As such, processing system 36 may include rules to arbitrate control commands of searchlight 12, particularly if controls from different modes of input conflict with one another. Different modes of operation may be set by users to avoid control conflicts (e.g. selecting a gaze control mode or a touchscreen control mode). In embodiments, display device 24 is configured to display an interactive graphical map by interactive map module 26. A user may select an area of interest on the interactive map, which is resolved by touchscreen control module 28 into aerial vehicle commands 84. The interactive map is of such a scale as to encompass a region significantly outside the range of view of camera 46, and provides a convenient way to command aerial vehicle 40 to fly to the selected area of interest and to illuminate the area of interest with searchlight 12. The interactive map displays a region around aerial vehicle 40 as well as a graphical indicator of current vehicle location, which is based on global position data obtained from sensor system 68. At the same time as displaying the interactive map, display device 24 displays live video (or augmented video as described elsewhere herein) of the scene 52, thereby providing an overview of the location of search beam 60 via the interactive map and also the actual area of illumination of search beam 60 in the video feed.
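Resolving a map touch into an aerial vehicle command might look like the following. This sketch assumes a north-up map described by its center latitude/longitude and a meters-per-pixel scale, and uses a flat-earth approximation valid at map scales far smaller than the Earth's radius; the function and field names are illustrative, not from the disclosure.

```python
# Hypothetical sketch: converting a touch at pixel (px, py) on the
# interactive map into a geographic fly-to command for aerial
# vehicle 40. Names and the map parameterization are assumptions.
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

@dataclass
class FlyToCommand:
    lat: float
    lon: float

def map_touch_to_fly_to(px: float, py: float,
                        center_lat: float, center_lon: float,
                        meters_per_pixel: float,
                        width_px: int, height_px: int) -> FlyToCommand:
    """Local flat-earth conversion from map pixel to lat/lon."""
    dx_m = (px - width_px / 2) * meters_per_pixel    # east offset
    dy_m = (height_px / 2 - py) * meters_per_pixel   # north offset (screen y grows downward)
    dlat = math.degrees(dy_m / EARTH_RADIUS_M)
    dlon = math.degrees(dx_m / (EARTH_RADIUS_M * math.cos(math.radians(center_lat))))
    return FlyToCommand(lat=center_lat + dlat, lon=center_lon + dlon)
```

The resulting command would then be passed to the autopilot in whatever waypoint format the vehicle expects; the longitude term scales by the cosine of latitude so that east-west pixel distances remain metrically correct away from the equator.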
In other embodiments, the interactive map is shown on command by the headset display 16 at the same time as the augmented/virtual reality scene. A command to show the interactive map can be issued by a wearer of the headset through gesture or voice control. Further, an area of interest in the interactive map can be selected by the wearer through gesture interactions, whereby a pointer or other selection device shown in the virtual world is used to make the selection. Alternatively, an area of interest in the virtual interactive map may be selected by gaze detection or by a combination of gaze and voice commands. Headset 14 can respond by issuing aerial vehicle commands to aerial vehicle 40 in the same way as when selections are made on the interactive map through display device 24 (e.g. by transforming image space coordinates to real world coordinates for use by autopilot system 80).
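Gaze-based selection on the virtual map reduces to intersecting the gaze ray with the plane on which the map is rendered. The following sketch assumes the headset exposes a gaze origin and direction and that the map plane is given by a point and normal; the vector representation and names are assumptions for illustration.

```python
# Hypothetical sketch: intersecting a headset gaze ray with the plane
# of the virtual interactive map to find the gazed-at map point.
# Vector names and the plane representation are assumptions.
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

def gaze_map_intersection(origin: Vec3, direction: Vec3,
                          plane_point: Vec3, plane_normal: Vec3) -> Optional[Vec3]:
    """Return the point where the gaze ray meets the map plane, or
    None if the ray is parallel to the plane or points away from it."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-9:
        return None  # gaze parallel to the map plane
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal)) / dot
    if t < 0:
        return None  # map plane is behind the wearer
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The intersection point, expressed in map-plane coordinates, could then feed the same pixel-to-world transform used for touchscreen selections.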
The present disclosure thus allows an intuitive, immersive experience for controlling the searchlight and aerial vehicle during a search and rescue mission. Accuracy and speed of control of the searchlight control system are significantly enhanced as compared to prior art hand grip controls, allowing the search coordinator to focus wholly on the search task.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.