Claims
- 1. A process for dynamic human visualization of events occurring within a volume having varying spatial and temporal gradients, said process providing readily adjustable scale and resolution, and initiating activities internal thereto, comprising:
acquiring data, wherein said data represents imagery, geometric and time relationships to be used for generating motion paths, stored maps, location, and activity, and wherein said data is acquired from standard sources;
integrating said data, wherein said integrating uses full external network connectivity, wherein said data is acquired from simulations, actual events or standard sources, and wherein said data includes multi-source satellite and aerial imagery available in various wavelengths and formats;
developing at least one database having a software architecture from which at least one model is generated;
generating at least one display containing at least one depiction from said at least one model and said data, wherein said depiction is displayed in real time; and
controlling said at least one display.
- 2. The process of claim 1 further comprising enabling accurate and rapid visualization of an area via orienting position based on a geographical coordinate system to at least one eyepoint,
wherein said geographical coordinate system is fully compatible with standard navigation systems, wherein included within said area are events having a range of spatial and temporal gradients, and wherein systems operating to said geographical coordinate system permit navigation systems to connect, register, and synchronize within said process.
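As an illustrative aside, not part of the claim language: registering positions from diverse navigation feeds to a single geographical coordinate system is commonly done by converting geodetic latitude/longitude/altitude to Earth-centred, Earth-fixed (ECEF) coordinates. The sketch below uses the standard WGS-84 ellipsoid constants; the function name is hypothetical and the patent does not specify this particular conversion.

```python
import math

# WGS-84 ellipsoid constants (standard published values)
WGS84_A = 6378137.0                    # semi-major axis, metres
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m=0.0):
    """Convert geodetic latitude/longitude/altitude to ECEF XYZ (metres)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime-vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```

Because every feed is expressed in the same Earth-fixed frame, separately sourced tracks can be connected, registered, and synchronized without pairwise conversions.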
- 3. The process of claim 2 further comprising enabling generation and control of at least one large-scale depiction on said at least one display while permitting use of said data,
wherein said at least one depiction is provided in a two-dimensional display, wherein said at least one depiction is provided in a fully stereoscopic display, wherein said at least one display is adapted for use by at least one person as a virtual image display, wherein terrain, depicted features or modeled objects display at resolution levels related to primary object resolution, said at least one eyepoint distance, and display surface capability, and wherein said at least one eyepoint and said at least one display's parameters are controllable.
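An illustrative sketch, not the patent's algorithm: one common way to relate display resolution to eyepoint distance is to derive a level-of-detail index in which each halving of the distance selects the next finer level. The base distance and clamping limit below are arbitrary assumptions.

```python
import math

def select_lod_level(eyepoint_distance_m, base_distance_m=10_000_000.0,
                     max_level=31):
    """Pick a level-of-detail index from eyepoint distance.

    Level 0 is the coarsest; each halving of the distance steps to the
    next finer level, clamped to the available range.
    """
    if eyepoint_distance_m >= base_distance_m:
        return 0
    level = int(math.log2(base_distance_m / eyepoint_distance_m))
    return min(level, max_level)
```

Driving terrain and object resolution from a single function of eyepoint distance keeps the depiction consistent as the eyepoint and display parameters are adjusted.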
- 4. The process of claim 3 wherein said at least one display is adapted for use by more than one person as a theater display.
- 5. The process of claim 1 further comprising:
running said process on at least one multiprocessor computer, having memory, upon which a portion of said data is processed; incorporating fast file compression and decompression, wherein said fast file compression and decompression reduce requirements for said memory, thus enabling development of database files representing large geographic areas; and wherein said process accepts streaming information to update or replace said data, providing timely updates for said depiction displayed in real time.
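As a minimal illustration of the compression step, assuming nothing about the patent's actual codec: a lossless round trip over a database tile with Python's standard `zlib` module. Repetitive terrain or imagery data of this kind typically compresses well, which is what reduces the memory needed to hold large geographic areas.

```python
import zlib

def compress_tile(raw_bytes, level=6):
    """Compress one database tile before it is written to memory/disk."""
    return zlib.compress(raw_bytes, level)

def decompress_tile(packed_bytes):
    """Restore the original tile bytes at load time."""
    return zlib.decompress(packed_bytes)

# A synthetic, highly repetitive "terrain tile" compresses well.
tile = bytes(range(256)) * 64           # 16 KiB of repeating data
packed = compress_tile(tile)
assert decompress_tile(packed) == tile  # lossless round trip
assert len(packed) < len(tile)          # memory footprint reduced
```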
- 6. The process of claim 1 in which said at least one model is a terrain model, wherein said terrain model contains terrain imagery and geometry data,
wherein said at least one model retains the positional accuracy inherent in said data as originally acquired, wherein retention of the positional accuracy enables an accurate depiction of an object's location and dynamic replay of events occurring within said volume, wherein said at least one model is geo-specific, geo-referenced, and universally scalable and provides an accurate depiction representative of a round world, wherein cultural features are added to said software architecture with negligible impact on response time of said process, wherein types and instances of mobile objects are added having appearance, location, and dynamics established by external sources, and wherein said software architecture and said process enable multiple scenarios to be modeled or displayed while maintaining fast update rates.
- 7. The process of claim 6 further comprising employing database software to convert data files from said at least one model into database products,
wherein said data files consist of a portion of said terrain imagery and a portion of said geometry data contained in said terrain model, wherein said terrain imagery combined with said geometry data incorporating terrain elevation is generated from more than one source in at least one pre-selected degree of resolution, wherein said database products are terrain models, fixed and mobile object models, weather or visibility effects, or map materials with multiple layers of information, wherein cultural features are added to said software architecture with negligible impact on response time of said process, wherein many types and instances of mobile objects are added, said instances having appearance, location, and dynamics established by external sources, and wherein said software architecture and said process enable multiple scenarios to be modeled or displayed while maintaining update rates that facilitate real time display.
- 8. The process of claim 1 further comprising:
interfacing to outside events;
defining objects and events to be displayed using said at least one model; and
providing two-way communications with external events;
wherein said interfacing is accomplished via a Master Object Manager module having software architecture, wherein said Master Object Manager collects communication and control processes, wherein said Master Object Manager can interact with standards-based processes selected from the group consisting of: distributed interactive simulation (DIS), DoD systems under High Level Architecture (HLA), Defense Information Infrastructure Common Operating Environment (DII-COE) formats for the Global Command and Control System (GCCS), and commercial computer network communications protocols, wherein said software architecture of said Master Object Manager achieves update rates facilitating real time viewing on said display and permitting a user's areas of interest to be embedded at a pre-selected resolution, and wherein said data is in a format selected from the group consisting of: DII-COE messages in GCCS-M, Combat Command and Control System, HLA, DIS, military LINK, and air traffic control radar, or any combination thereof.
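A hypothetical sketch of the interfacing idea, not the patent's actual interfaces: messages arriving in different feed formats are routed to per-format normalizers that produce one common object record. The format tags echo the claim's list (DIS, GCCS), but every field name and the normalizer table are illustrative assumptions.

```python
def from_dis(msg):
    # DIS-style entity-state message (field names assumed for illustration)
    return {"id": msg["entity_id"], "pos": msg["location"], "src": "DIS"}

def from_gccs(msg):
    # GCCS-M/DII-COE-style track message (field names assumed)
    return {"id": msg["track_id"], "pos": msg["position"], "src": "GCCS"}

# Dispatch table mapping a source-format tag to its normalizer.
NORMALIZERS = {"DIS": from_dis, "GCCS": from_gccs}

def normalize(fmt, msg):
    """Route a raw message to the normalizer for its source format."""
    try:
        return NORMALIZERS[fmt](msg)
    except KeyError:
        raise ValueError(f"unsupported message format: {fmt}")
```

Collecting the per-protocol handling in one module is what lets the rest of the display pipeline see a single, uniform stream of object updates regardless of source.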
- 9. The process of claim 8 employing at least one specialized file structure in CTL World software architecture, world geometry, and at least one specialized operation to organize said data,
wherein said world geometry is provided by CTL World software's display generation process, wherein said CTL World software incorporates flexible user interface provisions, various input device drivers for position and motion control, and broadly functional application programmer interface (API) features, and wherein said CTL World display software is written for, and adapts itself to, multi-processor CPUs, multi-channel video outputs, and multi-pipe computer systems.
- 10. The process of claim 7 wherein said at least one database is populated with clip texture files, said clip texture files stored separately from geometry files,
wherein storing said clip texture files and said geometry files separately until run time eliminates at least some computation prior to window content selection for said at least one display.
- 11. The process of claim 10 wherein said geometry files are used to generate triangulated irregular network (TIN) files,
wherein said TIN files comprise polygons assembled to approximate the surface shape of terrain.
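For illustration only: the sketch below tessellates a regular height grid into triangles, two per grid cell. A true TIN is built from irregularly spaced sample points (e.g. by Delaunay triangulation), but the idea of assembling triangles that approximate the terrain surface is the same; the function and its interface are assumptions, not the patent's file format.

```python
def grid_to_triangles(heights):
    """Tessellate a rectangular height grid into triangles.

    heights: 2-D list where heights[r][c] is the elevation at grid
    point (r, c). Returns a list of triangles, each a tuple of three
    (row, col, elevation) vertices; two triangles per grid cell.
    """
    rows, cols = len(heights), len(heights[0])
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            v00 = (r, c, heights[r][c])
            v01 = (r, c + 1, heights[r][c + 1])
            v10 = (r + 1, c, heights[r + 1][c])
            v11 = (r + 1, c + 1, heights[r + 1][c + 1])
            tris.append((v00, v01, v11))  # upper triangle of the cell
            tris.append((v00, v11, v10))  # lower triangle of the cell
    return tris
```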
- 12. The process of claim 11 wherein said clip texture files and said TIN files have an associated data density that indicates the degree of precision in representing actual terrain,
wherein said clip texture files and said geometry files are retained and processed separately until combined immediately prior to said generating said at least one display.
- 13. The process of claim 10 further comprising applying dual quad tree architecture to said clip texture files and said geometry files,
wherein management of both position and resolution variations within said clip texture files and said geometry files facilitates the population of at least one worldwide database, wherein resolution of said at least one display can be adjusted for varying eyepoints, a first adjustment possibly defining a first level of a plurality of levels within said quad tree architecture, wherein each succeeding level of said plurality of levels may consist of four subsectors, each depicting a quarter of the area of said depiction of an immediately preceding level but containing the same amount of image data as said depiction of the immediately preceding level, thus providing higher resolution than any of said preceding levels, and wherein moving through said plurality of levels, in either direction, provides a resolution required by a user.
- 14. The process of claim 13 in which said dual quad tree architecture is expandable,
wherein said dual quad tree architecture comprises 32 levels that can represent any location on the earth's surface at a resolution of two centimeters.
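The two-centimeter figure can be checked arithmetically: each quad-tree level quarters the area (and so halves the linear extent) of a cell at the level above, doubling linear resolution per level. Starting from the earth's equatorial circumference at level 0, the 32nd level (index 31) lands at roughly two centimeters, consistent with the claim. The constant and function below are for this check only.

```python
EARTH_CIRCUMFERENCE_M = 40_075_017.0  # equatorial circumference, metres (approx.)

def level_resolution_m(level):
    """Linear ground extent covered per cell edge at a quad-tree level."""
    return EARTH_CIRCUMFERENCE_M / (2 ** level)

# 32 levels span indices 0..31; the finest is ~1.9 cm.
assert 0.018 < level_resolution_m(31) < 0.020
```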
- 15. The process of claim 9 wherein said Master Object Manager assembles and tracks locations, orientation, types, activities and depiction-relevant factors for objects,
wherein said Master Object Manager refines an object list for said display by incorporating various sorting, filtering and aggregation algorithms, wherein some aspects of selection for visibility and said display's required level of detail are conducted within said Master Object Manager to reduce computational demands in said CTL World display generator, thereby conserving memory resources for graphics processes while ordering data traffic between graphics processing and external systems, and wherein said Master Object Manager may feed multiple copies of said CTL World software to match various extended visualization generation demands.
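An illustrative sketch of the sorting/filtering step, under assumptions of my own (the thresholds, field names, and LOD labels are arbitrary): cull objects beyond a visibility range, sort the survivors nearest-first, and tag each with a coarse level of detail before handing the list to the display generator.

```python
import math

def prepare_object_list(objects, eyepoint, max_range_m=50_000.0):
    """Refine an object list for display.

    objects: iterable of dicts, each with a 'pos' (x, y, z) in metres.
    Returns the in-range objects sorted nearest-first, each tagged
    with a coarse 'lod' label.
    """
    def dist(obj):
        return math.dist(obj["pos"], eyepoint)

    visible = [o for o in objects if dist(o) <= max_range_m]  # range cull
    visible.sort(key=dist)                                    # nearest first
    for o in visible:
        d = dist(o)
        o["lod"] = "high" if d < 5_000 else "medium" if d < 20_000 else "low"
    return visible
```

Doing this culling and level-of-detail assignment before the graphics stage is what spares the display generator from processing objects that would never be drawn.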
- 16. The process of claim 1 further comprising storing said motion paths as track history,
wherein said storing includes said motion paths that are from external sources and said activities initiated internally thereto.
- 17. The process of claim 16 further comprising providing for replay of said events,
wherein said replay combines external sources and said activities initiated internally thereto for replaying at least parts of said at least one depiction on said at least one display.
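A minimal sketch of replaying a stored motion path, assuming a simple sample format of my own devising ((time, position) pairs): positions between recorded samples are linearly interpolated, and times outside the recorded interval clamp to the endpoints.

```python
from bisect import bisect_right

def replay_position(track, t):
    """Interpolate a recorded motion path at replay time t.

    track: list of (time, (x, y, z)) samples sorted by time.
    Clamps to the first/last sample outside the recorded interval.
    """
    times = [s[0] for s in track]
    if t <= times[0]:
        return track[0][1]
    if t >= times[-1]:
        return track[-1][1]
    i = bisect_right(times, t)           # first sample after t
    (t0, p0), (t1, p1) = track[i - 1], track[i]
    w = (t - t0) / (t1 - t0)             # fractional position in segment
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))
```

The same lookup works for externally sourced tracks and internally initiated activities alike, since both are stored as timestamped samples in the track history.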
- 18. The process of claim 17 in which said track history and a GVP Replay Controller are used to reconstruct and manipulate said at least parts of said at least one depiction.
- 19. A system, having inputs and outputs, that enables a process for dynamic human visualization of a volume, including events having varying spatial and temporal gradients that are occurring within the volume, said system providing readily adjustable scale and resolution and initiating activities internal thereto, comprising:
at least one data generator as at least one source of data,
wherein said data represents imagery, geometric and time relationships to be used for generating motion paths, stored maps, location, and activity, and wherein said data is acquired from standard sources; memory for storing and accessing at least a portion of said data; at least one interface for communication between said system and external devices; at least one visualization device, having inputs and outputs, for displaying at least one depiction,
wherein said depiction may be derived at least in part from at least one model, having at least one input and at least one output, and is displayed in real time; at least one record and playback device for provision of at least some inputs to said visualization device; software for manipulating said process,
wherein said software is used to generate at least one database, wherein said software is used at least in part to create said at least one model from said database, wherein said software is used to control said inputs to and said outputs from said at least one model for inputs to said at least one visualization device, wherein said software is used to control said outputs from said record and playback device and said interface; and at least one controller for controlling said inputs and outputs to said system.
- 20. The system of claim 19 wherein said data generator comprises at least one device selected from the group consisting of: a real time data collection system, a GCCS system, a scenario generator, a device simulator, and a cockpit simulator.
- 21. The system of claim 19 wherein said memory is provided within at least one computer, wherein said computer incorporates multiprocessors.
- 22. The system of claim 19 wherein said at least one interface for communication between said system and external devices is a Master Object Manager, comprising at least one software module and at least one hardware connection sufficient to interface said system to at least one source external to said system.
- 23. The system of claim 19 wherein said visualization device is selected from the group consisting of: a CRT or flat-panel display, a single user display for unobtrusive wear upon the human body, a large scale projector with screen, a helmet-mounted display, a display built in to wearable optics, a volumetric display, a vehicle-mounted display, shutter glasses for left and right eye view control, a cockpit-mounted display, a heads-up display, a device that supports binocular stereopsis for true 3-D, and dual optics virtual displays.
- 24. The system of claim 19 wherein said at least one record and playback device is a global visualization process playback controller.
- 25. The system of claim 19 wherein said software for manipulating said process comprises at least a CTL World software module,
wherein said CTL World software module outputs active stereo, and wherein said CTL World software module supports binocular stereopsis for true 3-D displays as well as helmet, head-mounted, and custom 3-D visualization products.
- 26. The system of claim 25 wherein said at least one controller for controlling said inputs and outputs to said system incorporates hardware devices that interface using said CTL World software, said hardware devices selected from the group consisting of: a mouse, a trackball, a pointer, a joystick, a keyboard, a microphone, a device employing a capacitive sensor, and a touch screen.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0001] The invention described herein may be manufactured and used by or for the government of the United States of America for governmental purposes without the payment of any royalties thereon or therefor.