Aspects of the present disclosure involve real-time identification of critical force capability effectiveness zones and of occlusion or unknown zones near those forces. Personnel, vehicles, ships, submarines, airplanes, or other vessels are often occluded by terrain surfaces, buildings, walls, or weather, and sensor systems may be incapable of identifying objects on the other side of the occlusions, or objects may simply be outside the range of sensor or weapons capabilities. Users, such as field commanders, may use the system described herein to identify the occlusion zones, track targets amongst occlusions, and determine threat ranges from these occlusion zones, in advance of force actions, and to share the data between systems in real-time to make better, more informed decisions.
One example of this problem of individual human perception is well illustrated by the 1991 Battle of 73 Easting during the first Gulf War, which was fought in adverse weather conditions that severely restricted aerial scouting and cover operations. Although the engagement was successful for the U.S. side, asymmetrical force risk was higher than necessary: although the battlefield appeared to be a flat, featureless desert, a subtle slope of the terrain was not initially recognized by tank commander H.R. McMaster as occluding visual battlefield awareness. In the absence of advanced aerial reconnaissance due to the severe weather, this subtle land-slope occlusion prevented real-time awareness of critical data regarding enemy numbers, positions, and capabilities.
Aspects of the present disclosure enable users to be more acutely aware of sloped or other terrain or regions that are outside their field of visual, perceptual, or sensory awareness and which can contain fatal hazards, particularly when these zones have not been scouted for hazards in real-time. Users can then adjust their actions to eliminate or avoid the hazards of the occlusion zones. The limitation of the perceptual capability of one pair of human eyes and one pair of human ears on an individual or mobile unit can be reduced by having multiple users remotely tap into one user's omni-directional sensor system(s), thereby maximizing the perceptual vigilance and capability of the one user or unit through remote robotic control and feedback of the individual- or unit-carried sub-systems. Maximized perceptual vigilance can be achieved by tapping into near-full-immersion sensors, which can include three dimensional (3D) vision display from depth cameras (optics), temperature, stereo, surround, or zoom-able microphone systems, pinching, poking, moisture, vestibular balance, and body/glove sensation, producing an emulated effect of nearly full sensory immersion for the remote user. Tracking, history, force capability, prediction, as well as other data can be augmented onto the display system to augment reality and to further enhance operations.
Various aspects of the present disclosure allow for identifying the real-time range capability of a force or forces, their weapons, the real-time orientation (pointing direction) of weapons (with integrated orientation sensors on the weapons) and weapons ranges, equipment or other capabilities, as well as sensor and visual ranges during night and day and varying weather conditions. From identified real-time zone limitations based on weapons ranges, occlusions, terrain, terrain elevation/topographical data, buildings, ridges, obstructions, weather, shadows, and other data, field commanders can be made more acutely aware of potential hazard zones, so that those zones can be avoided, made un-occluded, or better prepared for, thereby reducing operational risks. The system can be designed to implement real-time advanced route planning by emulating future positions and clarifying occlusions and capabilities in advance, thus allowing for optimal advanced field positioning to minimize occlusion zones, avoid hazards from them, and maximize situational awareness.
The regions that are occluded, and that are also not in real-time view of any extra-sensory perception sharing system 12, need to be clearly identified so that all participating systems are made well aware of the unknown zones or regions. These unknown regions can be serious potential hazards in war zones or other situations and need to be avoided or be brought within real-time view of a unit using a three dimensional (3D) sensor system, which can be an omni-camera, stereoscopic camera, depth camera, “Zcam” (Z camera), RGB-D (red, green, blue, depth) camera, time of flight camera, radar, or other sensor device or devices, with the data shared into the system. In order to share the data, the unit need only have the extra-sensory perception sharing system 12 and does not need an integrated onboard display, because such units can be stand-alone or remote-control units.
From the “x-ray like” vision perspective of person 12A (“x-ray like” meaning not necessarily actual X-ray, but having the same general effect of allowing one to see through what is normally optically occluded from a particular viewing angle), the viewable layers of occlusion L2 through L11 are bounded by the planar left and right HUD viewing angles and the center of the Field Of View (FOV) of the HUD display, shown by 38A, 38B, and 22A, respectively.
The “x-ray like” vision of person 12A of the occluded layers L2 through L11 can be achieved by other extra-sensory perception sharing system 12 units that are within communications range of person 12A, or within the network, such as via a satellite network, with which person 12A can communicate using extra-sensory perception sharing system 12 (
A field commander can, out of consideration of potential snipers or a desire to enhance knowledge of unknown space 2C, call in another drone 12D to provide real-time sensor coverage of space 2C and transfer data to other extra-sensory perception sharing systems 12, thus potentially making space 2C less of an unknown to other extra-sensory perception sharing systems 12 in the area, and the space can be marked accordingly. Since in
This map of
Unit 12A on the left of
In
Shown in
Unknown regions of
The occlusion regions are clearly marked in real-time so that personnel can clearly know what areas have not been searched or what is not viewable in real-time. The system is not limited to a single floor, but can include multiple floors; thus a user can look up and down and see through multiple layers of floors, or even the floors of other buildings, depending on what data is available to share wirelessly in real-time and what has been stored within the distributed system. A helicopter with the extra-sensory perception sharing system 12 hovering overhead can eliminate occluded regions 2E and 2H in real-time if desired. Multiple users can tap into the perspective of one person, for example inside person 12H, where different viewing angles can be viewed by different people connected to the system so as to maximize the real-time perceptual vigilance of person 12H. To extend the capability of inside person 12H, robotic devices, which can be tools or weapons capable of being manipulated, pointed, and activated in different directions, can be carried by person 12H and remotely activated and controlled by other valid users of the system, thus allowing remote individuals to “watch the back” of, or cover, person 12H. Alternatively, a stereographic spherical camera may be triggered or otherwise remotely activated by various users of the system to “watch the back” of person 12H.
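As a non-limiting illustration, the following Python sketch shows one hypothetical way the horizon around a carried omni-directional sensor could be divided among remote users so that each user covers a different viewing sector; the function name, user identifiers, and even split are assumptions for illustration only.

```python
def assign_watch_sectors(remote_users):
    """Divide the 360-degree horizon of one shared omni-directional sensor
    among remote users so that each user "watches the back" of the carrier
    over one sector. The even split and identifiers are illustrative only.
    """
    if not remote_users:
        return {}
    sector = 360.0 / len(remote_users)
    return {
        user: (round(i * sector, 1), round((i + 1) * sector, 1))
        for i, user in enumerate(remote_users)
    }

# Three remote users together covering person 12H:
print(assign_watch_sectors(["user_A", "user_B", "user_C"]))
# -> {'user_A': (0.0, 120.0), 'user_B': (120.0, 240.0), 'user_C': (240.0, 360.0)}
```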
In
As illustrated, process 500 begins with obtaining a plurality of data feeds that identify an object and/or region of a real-world environment that is occluded from view at an interface (operation 502).
Referring to
The data feeds may be obtained from various types of sensors, such as an omni-camera, stereoscopic camera, depth camera, “Zcam” (Z camera), RGB-D (red, green, blue, depth) camera, time of flight camera, radar, or other type of sensor. The obtained data feeds may be captured in a variety of formats. For example, the data feeds may include audio, video, three-dimensional video, images, multimedia, and/or the like, or some combination thereof. In one particular embodiment, one or more of the data feeds may be obtained from an airborne warning and control system (AWAC), that is, a mobile, long-range radar surveillance and control center for air defense (e.g., drone 12C), and according to the AWAC data format, as is generally understood in the art.
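By way of illustration only, the following Python sketch shows one hypothetical way such heterogeneous data feeds could be represented in software; the field names, format labels, and example values (including the coordinates) are assumptions for illustration and do not correspond to any particular embodiment.

```python
from typing import List, Optional, Tuple, TypedDict

class Feed(TypedDict, total=False):
    """One sensor data feed reporting on an occluded object and/or region.
    Field names are illustrative, not taken from the disclosure."""
    source: str          # e.g. "drone_12C", "satellite_12E"
    fmt: str             # "video", "spherical_video", "image", "infrared",
                         # "depth", "radar", or "awac"
    line_of_sight: bool  # sensor has a potential direct view into the region
    target_pos: Optional[Tuple[float, float, float]]  # reported lat, lon, elevation, if any

# Hypothetical feeds loosely matching the scenario in the text (placeholder values):
example_feeds: List[Feed] = [
    {"source": "drone_12C", "fmt": "video", "line_of_sight": True, "target_pos": None},
    {"source": "drone_12D", "fmt": "infrared", "line_of_sight": True, "target_pos": (33.1, 44.2, 120.0)},
    {"source": "satellite_12E", "fmt": "image", "line_of_sight": False, "target_pos": None},
    {"source": "awac_1", "fmt": "awac", "line_of_sight": True, "target_pos": (33.1, 44.2, 118.0)},
]
```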
Referring again to
According to one embodiment, the data feeds from the drones 12C and 12D, when compared to the data feed obtained from the satellite 12E, may be more relevant to identifying specific objects included within the occluded region 2C because the drones have a potential direct line of sight to the region and the satellite 12E does not. Thus, the data feeds corresponding to the drones 12C and 12D may be identified, and not the satellite 12E data feed. In another embodiment, since the data feeds are in different formats, some data may be more useful in uniquely identifying the occluded object than others. For example, data feeds that include high-resolution images may be more useful in uniquely identifying an object than a data feed that only provides geographical coordinates. As another example, if the format of the data feed is video, it may be more useful in identifying the actual object occluded from view and the movement of the object, but not as useful when attempting to determine the specific geographic location of the object. In yet another example, if the data feed is of the AWAC format, the data may be useful in providing a specific location of the occluded object, but not when attempting to uniquely identify the occluded object itself. For example, video may be more accurate in determining the exact types of weapons and ordnance that may be carried. Additionally, video may allow for a more accurate count of ground troops. Spherical video images allow users to view the same data in different directions to obtain more accurate real-time coverage. In comparison, AWAC data allows for precise latitude and/or longitude positioning, which would allow precision location that may be used to create velocity vectors for each individual target. Given a location identified via AWAC data, terrain position and velocity vector predictions could be created for when the target reaches a particular position, thus providing the user with a tactical edge.
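Purely as an illustrative sketch, and continuing the hypothetical feed representation above, the following Python code shows one way feeds could be filtered by purpose, preferring line-of-sight imagery for identification and track-style data for location; the format groupings and function name are assumptions, not features of any particular embodiment.

```python
# Formats assumed useful for recognizing *what* an occluded object is versus
# pinning down *where* it is (groupings follow the examples in the text).
IDENTITY_FORMATS = {"video", "spherical_video", "image", "infrared"}
LOCATION_FORMATS = {"awac", "radar", "depth"}

def select_feeds(feeds, purpose):
    """Pick the feeds most relevant to an occluded region for a given purpose,
    either "identify" or "locate". Feeds with no potential line of sight and
    no track-style location data (the satellite 12E in the example) are dropped.
    """
    wanted = IDENTITY_FORMATS if purpose == "identify" else LOCATION_FORMATS
    return [
        feed for feed in feeds
        if feed["fmt"] in wanted
        and (feed["line_of_sight"] or feed["fmt"] in LOCATION_FORMATS)
    ]
```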
Referring back to
According to one embodiment, to generate the enhanced data, each of the selected data feeds may be weighted (e.g., assigned a value) based upon various characteristics of the occluded region and/or the occluded object, and the accuracy of the data feed in identifying the occluded region and/or occluded object. Further, the assigned weighting may, optionally, depend upon the current tactical mode in which a user is engaged. For example, if a user is looking to determine troop strength and weapons, the user may assign a higher weighting to video data, because the video data may be more easily processed by stopping and/or stepping through frames of the video to get an accurate count and tag the group with the appropriate strength/range attributes.
As another example, video may be more accurate in determining the exact types of weapons and ordnance that may be carried by soldiers in combat because the video data actually includes real images of the weapons and/or ordnance. Thus, the video data feed may be assigned a higher weight than other data feeds in such contexts. In another embodiment, video may allow for a more accurate count of ground troops than infra-red data, and thus would be assigned a higher weight than an infra-red data feed. In yet another embodiment, spherical video images allow users to view the same data in different directions to obtain more accurate real-time coverage. Such data may be weighted higher than static image data feeds. In one embodiment, AWAC data allows for precise latitude and/or longitude positioning, which would allow precision location that may be used to create velocity vectors and corresponding time stamps for each individual occluded object and/or region. Thus, AWAC data may be assigned a higher weighting when compared to video when attempting to precisely locate an occluded object and/or region. In another embodiment, infra-red data feeds may be more accurate at identifying occluded objects and/or regions in wooded areas, as the data provides thermal images of objects that may not be visible in regular video data. In such contexts, the infra-red data feed would be assigned a higher weight than a video data feed, image data feed, or other data feeds.
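For illustration purposes only, the following Python sketch shows one hypothetical weighting scheme keyed to a tactical mode, consistent with the examples above; the mode names and numeric weights are assumptions rather than parameters of any particular embodiment.

```python
# Illustrative weights only; a fielded system would tune these per mission and
# let the user adjust them.
WEIGHTS_BY_MODE = {
    "assess_strength":  {"video": 0.9, "spherical_video": 0.85, "infrared": 0.6, "awac": 0.3},
    "locate_precisely": {"awac": 0.95, "radar": 0.8, "video": 0.4},
    "search_woods":     {"infrared": 0.9, "video": 0.5, "image": 0.4},
}

def weight_feeds(feeds, tactical_mode):
    """Attach a weight to each feed for the user's current tactical mode;
    formats not listed for that mode receive a small default weight."""
    table = WEIGHTS_BY_MODE.get(tactical_mode, {})
    return [(feed, table.get(feed["fmt"], 0.1)) for feed in feeds]
```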
The assigned weightings of the various data feeds may change with time. For example, if a highly accurate and/or highly weighted sensor becomes unavailable, then the next best sensor data is used and the user is notified of an accuracy degradation. If more accurate sensors become available, the user is notified of an accuracy upgrade. The most accurate position would be a triangulation of two (2) or more sensors identifying the exact same location. This is downgraded to one sensor, and downgraded further for sensors with less accuracy.
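As a non-limiting sketch, the following Python code shows one hypothetical way positional accuracy could be graded along these lines and accuracy changes reported to the user; the numeric levels and the 0.5 threshold are assumptions for illustration.

```python
def position_accuracy(weighted_feeds):
    """Grade positional accuracy as the text describes: two or more sensors
    reporting the same location rank highest, a single sensor is a downgrade,
    and a low-weighted single sensor is downgraded further. A real
    implementation would also check that the reported coordinates agree
    within some tolerance; the numeric levels here are illustrative.
    """
    located = [(feed, w) for feed, w in weighted_feeds if feed.get("target_pos") is not None]
    if len(located) >= 2:
        return 3                                   # triangulated by two or more sensors
    if len(located) == 1:
        return 2 if located[0][1] >= 0.5 else 1    # single sensor, possibly low confidence
    return 0                                       # no positional data available

def notify_accuracy_change(previous_level, current_level):
    """Notify the user when the available sensor mix degrades or improves."""
    if current_level < previous_level:
        print("accuracy degraded")
    elif current_level > previous_level:
        print("accuracy upgraded")
```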
Once the data feeds have been weighted, the data may be enhanced by combining one or more of the weighted data feeds into an aggregate data feed and/or other type of display that clearly identifies an occluded region and/or an occluded object. In one embodiment, data that meets a weight threshold signifying a certain accuracy level and/or accuracy measure may be combined to generate the enhanced data. For example, video data feeds may be enhanced with actual terrain data (e.g., the terrain data may be overlaid with the video) to help identify potential critical traffic routes and bottlenecks, allowing for strategic troop placement or demolition. It is contemplated that any number of data feeds satisfying the weighting threshold may be combined to generate the enhanced data.
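By way of illustration only, the following Python sketch shows one hypothetical way weighted feeds meeting a threshold could be combined into an enhanced data record with an optional terrain overlay; the record fields and default threshold are assumptions, and a real system would fuse the underlying imagery, tracks, and terrain rather than simply collect source references.

```python
def enhance(weighted_feeds, threshold=0.5, terrain_overlay=None):
    """Combine every feed whose weight meets the threshold into one "enhanced
    data" record, optionally attaching a terrain overlay as described in the
    text. A sketch only."""
    kept = [(feed, w) for feed, w in weighted_feeds if w >= threshold]
    return {
        "sources": [feed["source"] for feed, _ in kept],        # which sensors contributed
        "formats": sorted({feed["fmt"] for feed, _ in kept}),   # what kinds of data were fused
        "terrain_overlay": terrain_overlay,                     # e.g., elevation tiles for the region
    }
```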
The generated enhanced data, including data uniquely identifying the occluded object and/or region and data identifying a location of the occluded object and/or region, may be provided to an interface for display (operation 508). In one particular embodiment, the enhanced data may be rendered or otherwise provided in real-time in a three-dimensional stereographic space as a part of a virtual spherical HUD system. More particularly, the three-dimensional stereographic space of the HUD system may be augmented with the enhanced data (or any data extracted from the obtained data feeds) to enable a user interacting with the HUD device to view the object and/or region that was initially occluded from view.
Given unit position and orientation (such as latitude, longitude, elevation, and azimuth) from accurate global positioning systems or other navigation/orientation equipment, as well as data from accurate and timely elevation, topographical, or other databases, three dimensional layered occlusion volumes can be determined, displayed in three dimensions in real-time, and shared amongst units, where fully occluded spaces can be identified; weapons capabilities, weapons ranges, and weapon orientation determined; and each marked with a weighted confidence level in real-time. Advanced real-time adaptive path planning can be tested to determine lower risk pathways or to minimize occlusion of unknown zones through real-time unit-shared perspective advantage coordination. Unknown zones of occlusion and firing ranges can be minimized by avoidance, or by bringing other units into different locations in the region of interest or moving units in place to minimize unknown zones. Weapons ranges from unknown zones can be displayed as point ranges along the perimeters of the unknown zones, whereby a pathway can be identified so as to minimize the risk of being affected by weapons fired from the unknown zones.
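As a non-limiting illustration, the following Python sketch shows a basic line-of-sight test against a gridded elevation model, of the kind that could contribute to identifying occluded spaces from unit position and terrain data; the grid representation, observer height, and example terrain are assumptions rather than any particular embodiment's data model.

```python
import numpy as np

def is_occluded(elevation, observer_rc, observer_height, target_rc):
    """Return True when terrain blocks the line of sight from an observer cell
    to a target cell on a regular elevation grid (a basic viewshed test)."""
    r0, c0 = observer_rc
    r1, c1 = target_rc
    eye = elevation[r0, c0] + observer_height
    tgt = elevation[r1, c1]
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, steps):
        t = i / steps
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight_line = eye + t * (tgt - eye)      # height of the sight line over this cell
        if elevation[r, c] > sight_line:
            return True                         # terrain rises above the sight line
    return False

# Example: a low ridge between two units occludes the far side, much as the
# subtle slope at 73 Easting did on apparently flat ground.
terrain = np.zeros((100, 100))
terrain[:, 45:55] = 5.0                         # ridge running across the grid
print(is_occluded(terrain, (50, 0), 2.0, (50, 99)))   # True: the ridge blocks the view
```

Running such a test over every grid cell from a unit's position yields an occlusion mask that could be shared among units and intersected with weapons-range perimeters for path planning, consistent with the approach described above.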
As illustrated, the computer node 600 includes a computer system/server 602, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 602 may include personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
Computer system/server 602 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server 602 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media, including memory storage devices.
As shown in
Bus 608 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. Such architectures may include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
Computer system/server 602 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 602, and it includes both volatile and non-volatile media, removable and non-removable media.
System memory 606 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 610 and/or cache memory 612. Computer system/server 602 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 613 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 608 by one or more data media interfaces. As will be further depicted and described below, memory 606 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program/utility 614, having a set (at least one) of program modules 616, may be stored in memory 606, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 616 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
Computer system/server 602 may also communicate with one or more external devices 618, such as a keyboard, a pointing device, a display 620, etc.; one or more devices that enable a user to interact with computer system/server 602; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 602 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 622. Still yet, computer system/server 602 can communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet), via network adapter 624. As depicted, network adapter 624 communicates with the other components of computer system/server 602 via bus 608. It should be understood that, although not shown, other hardware and/or software components could be used in conjunction with computer system/server 602. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data archival storage systems, and the like.
The embodiments of the present disclosure described herein are implemented as logical steps in one or more computer systems. The logical operations of the present disclosure are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit engines within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system implementing aspects of the present disclosure. Accordingly, the logical operations making up the embodiments of the disclosure described herein are referred to variously as operations, steps, objects, or engines. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope of the present disclosure. From the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustrations only and are not intended to limit the scope of the present disclosure. References to details of particular embodiments are not intended to limit the scope of the disclosure.
This application is a continuation-in-part of and claims benefit to U.S. patent application Ser. No. 13/385,039, filed on Jan. 30, 2012, which claims benefit to U.S. provisional application Ser. No. 61/629,043, filed on Nov. 12, 2011, and U.S. provisional application Ser. No. 61/626,701, filed on Sep. 30, 2011, as well as U.S. patent application Ser. No. 14/271,061, filed on May 6, 2014, which claims benefit to U.S. patent application Ser. No. 12/460,552, filed on Jul. 20, 2009, which claims benefit to U.S. patent application Ser. No. 12/383,112, all of which are herein incorporated by reference in their entirety.
Number | Date | Country
--- | --- | ---
61626701 | Sep 2011 | US
61629043 | Nov 2011 | US
Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 13385039 | Jan 2012 | US
Child | 14480301 |  | US
Parent | 14271061 | May 2014 | US
Child | 13385039 |  | US
Parent | 12460552 | Jul 2009 | US
Child | 14271061 |  | US
Parent | 12383112 | Mar 2009 | US
Child | 12460552 |  | US