Embodiments of the present disclosure relate generally to systems and methods for spacecraft mission technologies and specifically to systems and methods to facilitate space object control and simulation.
Mission technologies are essential to monitor and control space objects. Improved mission technologies, as well as improved training capabilities for operators of the technologies, will allow for better monitoring and control of space objects.
A mission control center (sometimes referred to as a flight control center or operations center) is a facility that manages space flights using systems adapted for controlling spacecraft, referred to herein as “mission technologies.” Mission technologies may also include systems for simulating spacecraft control. Commercial entities may be involved in mission technologies; for example, they may provide training, monitoring, planning, and other operations related to space flights. Spacecraft security and spacecraft sustainability are important aspects of mission technologies for protecting national and international interests, and systems and methods that can provide improved capabilities for on-orbit decision making are needed to support these aspects.
Systems and methods described herein may provide a common platform and interface for space mission operations, training, and simulation. They may feature a high-fidelity modeling and simulation backend tailored for Rendezvous and Proximity Operations (RPO) (including mission planning, analysis, and trade studies) but may be extensible for general-purpose astrodynamics simulation. The systems and methods may provide a graphical front-end for input and visualization, including inertial frames, relative frame orthographic projections, waypoint trajectory planning, and information displays. Application Programming Interfaces (APIs) may be available for batch-scripting astrodynamics analysis. The systems and methods described herein may be cloud-based, enabling improved access to “Simulation-as-a-Service”: a high-performance computer architecture supporting massively parallel Monte-Carlo-type analyses and satellite operations from anywhere. This environment may advantageously provide analyses and planning tools for both space defense and commercial Orbital Servicing, Assembly, and Manufacturing (OSAM) applications.
In some aspects, the techniques described herein relate to a method for viewing on-orbit operations, the method including: requesting, using a pilot vehicle interface, a view of the on-orbit operations, wherein the view includes at least one object; receiving, from a simulation engine executing on a virtual machine, scenario data describing a status of the on-orbit operations; receiving object information about how the at least one object interacts with the on-orbit operations; integrating the scenario data with the object information to obtain the on-orbit operations; and providing, via the pilot vehicle interface, the view of the on-orbit operations.
In some aspects, the techniques described herein relate to a method, wherein the at least one object includes at least one of a ground station and a satellite.
In some aspects, the techniques described herein relate to a method, wherein the scenario data includes launch information, and wherein the object information includes a calculation of orbital data that is based on the launch information.
In some aspects, the techniques described herein relate to a method, wherein the at least one object is not on-orbit, and wherein the view of the on-orbit operations is a simulation of at least two scenarios of the at least one object being on-orbit.
In some aspects, the techniques described herein relate to a method, wherein after the view of the simulation, the at least one object is on-orbit and a second view is provided that displays on-orbit information for the at least one object after it is on-orbit.
In some aspects, the techniques described herein relate to a method, wherein the view of the on-orbit operations includes a first simulation that models a first on-orbit maneuver of the at least one object, and further including: providing a second view of the on-orbit operations that includes a first simulation that models a first on-orbit maneuver of the at least one object; requesting the view of the on-orbit operations a second time; providing the view, wherein the view reverts back in time to remove the first simulation; requesting a third view of the on-orbit operations that includes a second simulation that models a second on-orbit maneuver of the at least one object; and providing the third view of the on-orbit operations that includes the second simulation that models the second on-orbit maneuver of the at least one object.
In some aspects, the techniques described herein relate to a method, wherein the view of the on-orbit operations is a view of simulated operations, and further including requesting a second view of the on-orbit operations, wherein the second view includes real-time operations of the at least one object; and providing the second view of the real-time operations.
In some aspects, the techniques described herein relate to a method, further including requesting an on-orbit maneuver of the at least one object; and sending, via the scenario manager, a command to execute the on-orbit maneuver to the at least one object.
In some aspects, the techniques described herein relate to a method, further including receiving, from an AI agent, an action to maneuver the at least one object.
In some aspects, the techniques described herein relate to a method, wherein the action is provided as a simulation in the view by the pilot vehicle interface.
In some aspects, the techniques described herein relate to a method, wherein the at least one object is on-orbit in real-life, and further including receiving a confirmation of the action and sending a command to execute the action in real-life to the at least one object.
In some aspects, the techniques described herein relate to a method, wherein the command is executed faster than in real-time.
In some aspects, the techniques described herein relate to a method, further including sending, by the scenario manager, a command to the at least one object that is on-orbit that causes the at least one object to execute a maneuver.
In some aspects, the techniques described herein relate to a method, further including receiving, by the scenario manager, telemetry data from the at least one object, wherein the telemetry data is based on the command.
In some aspects, the techniques described herein relate to a system for viewing on-orbit operations, the system including: a simulation engine executing on a virtual machine, the simulation engine providing scenario data describing a status of the on-orbit operations; and a pilot vehicle interface requesting a view of the on-orbit operations, receiving the scenario data from the simulation engine, receiving object information about how at least one object interacts with the on-orbit operations, integrating the scenario data with the object information to obtain the on-orbit operations; and providing the view of the on-orbit operations.
In some aspects, the techniques described herein relate to a system, wherein the simulation engine executes in at least five modes, the modes including: command and control, space battle management, battlespace, tactical decision aids, and digital space range.
In some aspects, the techniques described herein relate to a system, wherein the at least one object is viewed as a launch vehicle analysis in each of the five modes.
In some aspects, the techniques described herein relate to a system, wherein a state estimation library is used in the battlespace mode to provide a white cell.
In some aspects, the techniques described herein relate to a system, wherein the pilot vehicle interface executes on the virtual machine.
In some aspects, the techniques described herein relate to a system, wherein the system relies on a single clock.
These and other features and characteristics of the present technology, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. As used in the specification and in the claims, the singular forms of ‘a’, ‘an’, and ‘the’ include plural referents unless the context clearly dictates otherwise.
As used herein, a communication may also be referred to as a “message” or a “communication session,” and may include one or multiple electronic records, text, rich media, or data structures that are transmitted from one communication device to another communication device via a communication network. A communication may be transmitted via one or more data packets and the formatting of such data packets may depend upon the messaging protocol used for transmitting the electronic records over the communication network.
As used herein, a data model may correspond to a data set that is useable in an artificial neural network and that has been trained by one or more data sets that describe conversations or message exchanges between two or more entities. The data model may be stored as a model data file or any other data structure that is useable within a neural network or an Artificial Intelligence (AI) system. The term “Artificial Intelligence” as used herein generally refers to machine intelligence that includes a computer model or algorithm that may be used to provide actionable insight, make a prediction, and/or control actuators. The AI may be a machine learning algorithm. The machine learning algorithm may be a trained machine learning algorithm, e.g., a machine learning algorithm trained from data. Such a trained machine learning algorithm may be trained using supervised, semi-supervised, or unsupervised learning processes. Examples of machine learning algorithms include neural networks, support vector machines and reinforcement learning algorithms.
As used herein, the phrases “at least one,” “one or more,” “or,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising,” “including,” and “having” can be used interchangeably.
The term “automatic” and variations thereof, as used herein, refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.”
The term “computer-readable medium” as used herein is used interchangeably with the terms “computer-readable media” and “storage media” and refers to any tangible storage and/or transmission medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media may be referred to herein as storage media and includes, for example, NVRAM, or magnetic or optical disks. Volatile media may be referred to herein as storage media and includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
A “computer readable signal” medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The terms “determine,” “calculate,” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.
It shall be understood that the term “means” as used herein shall be given its broadest possible interpretation in accordance with 35 U.S.C., Section 112, Paragraph 6. Accordingly, a claim incorporating the term “means” shall cover all structures, materials, or acts set forth herein, and all of the equivalents thereof. Further, the structures, materials or acts and the equivalents thereof shall include all those described in the summary of the disclosure, brief description of the drawings, detailed description, abstract, and claims themselves.
Aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Illustrative hardware that can be used for the disclosed embodiments, configurations, and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
Examples of the processors as described herein may include, but are not limited to, at least one of Qualcomm® Snapdragon® 800 and 801, Qualcomm® Snapdragon® 610 and 615 with 4G LTE Integration and 64-bit computing, Apple® A7 processor with 64-bit architecture, Apple® M7 motion coprocessors, Samsung® Exynos® series, the Intel® Core™ family of processors, the Intel® Xeon® family of processors, the Intel® Atom™ family of processors, the Intel Itanium® family of processors, Intel® Core® i5-4670K and i7-4770K 22 nm Haswell, Intel® Core® i5-3570K 22 nm Ivy Bridge, the AMD® FX™ family of processors, AMD® FX-4300, FX-6300, and FX-8350 32 nm Vishera, AMD® Kaveri processors, Texas Instruments® Jacinto C6000™ automotive infotainment processors, Texas Instruments® OMAP™ automotive-grade mobile processors, ARM® Cortex™-M processors, ARM® Cortex-A and ARM926EJ-S™ processors, other industry-equivalent processors, and may perform computational functions using any known or future-developed standard, instruction set, libraries, and/or architecture.
In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer, such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
Methods described or claimed herein can be performed with traditional executable instruction sets that are finite and operate on a fixed set of inputs to provide one or more defined outputs. Alternatively, or additionally, methods described or claimed herein can be performed using artificial intelligence (AI), machine learning (ML), neural networks, or the like. In other words, a system is contemplated to include finite instruction sets and/or artificial intelligence-based models/neural networks to perform some or all of the steps described herein.
The term “agent” as used herein includes the terms user, operator, player, AI agent, and human agent, among others. The term “agent” may be an AI agent or a human agent and also includes a human-agent team (e.g., a system composed of one or more interacting humans and AI systems).
The term “object” as used herein may refer to any space object or ground object, including vehicles, targets, ground stations, assets (e.g., high value assets (HVAs), Jackals, satellites, etc.), weapons, and service providers, among others. The term “object” may refer to various types of infrastructure and equipment (e.g., civilian, private, public, commercial, and/or military), including objects at fixed locations (e.g., launch sites that are not mobile) and objects at mobile locations (such as mobile ground launch locations and vehicles (manned and unmanned)). Further examples of objects include landers, launchers, capsules, and shuttles. Objects may be friendly objects (e.g., sometimes called “blue” objects that are not considered threats) or threat objects (e.g., sometimes called “red” objects that are considered threats).
As used herein, the term “time window” is used interchangeably with the terms timing of windows, timing windows, windows of time, and windows of timeframes. The term “time window” may be described by a type of window that is associated with an objective of the timing (e.g., a launch window, a visibility window, etc.).
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various embodiments disclosed herein. It will be apparent, however, to one skilled in the art that various embodiments of the present disclosure may be practiced without some of these specific details. The ensuing description provides illustrative embodiments only and is not intended to limit the scope or applicability of the disclosure. Furthermore, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claims. Rather, the ensuing description of the illustrative embodiments will provide those skilled in the art with an enabling description for implementing an illustrative embodiment. It should however be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific detail set forth herein.
While the illustrative aspects, embodiments, and/or configurations illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a Local Area Network (LAN) and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the following description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system.
Various additional details of embodiments of the present disclosure will be described below with reference to the figures. While the flowcharts will be discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configuration, and aspects.
Referring initially to
In various aspects, data bus adaptors 121 refers to an ability to ingest or export data by interfacing with a provider. The bus itself may be agnostic and can communicate over whatever protocol and transport layer is required (e.g., HTTP and JSON). Mission data processing 122 may refer to telemetry that is downlinked and used for mission planning. For example, telemetry data may be GPS data that shows the location of a satellite, and this data can have various uses, e.g., orbit determination. Images that have been downlinked from the satellite may also be stored. Data hygienist 123 may refer to standardization and sanitization of data that does not follow a consistent schema, e.g., data received from different providers. Supervisors/workers 124 may refer to having a worker for each data provider, where the worker fetches and enters the data into the database at a per-provider configured interval. A supervisor manages all of the workers, and if a worker fails, the supervisor can restart the worker without human intervention.
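The supervisor/worker arrangement described above can be sketched as follows. This is a minimal illustrative sketch in Python; the class names, the provider names, and the result shape are hypothetical, and the "flaky" provider simply simulates a worker failure so the supervisor's restart behavior can be seen.

```python
class Worker:
    """Fetches data for a single provider at a per-provider configured interval (illustrative)."""

    def __init__(self, provider, interval_s):
        self.provider = provider
        self.interval_s = interval_s

    def fetch(self):
        # A real worker would call the provider's API and insert records into the database.
        if self.provider == "flaky":
            raise RuntimeError(f"worker for {self.provider} failed")
        return {"provider": self.provider, "records": 1}


class Supervisor:
    """Manages all workers; if a worker fails, it is restarted without human intervention."""

    def __init__(self, configs):
        self.configs = configs  # provider name -> polling interval (seconds)
        self.workers = {p: Worker(p, i) for p, i in configs.items()}
        self.restarts = {p: 0 for p in configs}

    def poll_all(self):
        results = []
        for provider, worker in list(self.workers.items()):
            try:
                results.append(worker.fetch())
            except RuntimeError:
                # Replace the dead worker with a fresh instance and record the restart.
                self.workers[provider] = Worker(provider, self.configs[provider])
                self.restarts[provider] += 1
        return results
```

In a production system this role is often filled by an OS process manager or an OTP-style supervision tree; the sketch only shows the restart-on-failure contract.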
In various embodiments, the modeling and simulation API 110 is embedded into a SIM API service (e.g., unallocated SIM API instances 170). A SIM API instance may be considered a virtual machine instance. Embodiments disclosed herein may run on multiple virtual machines. The GUI 130 may communicate with AI pilots 140, which may include RPO COA (“course-of-action”) generation 141, pattern of life characterization 142, constellation optimization 143, and early-warning change detection 144. The AI pilots 140 and GUI 130 may communicate with the scenario manager 101. Data from the unallocated SIM API instances 170 and a mission manager 150 (which includes, for example, mission planning 151, comms scheduling 152, constraint checking 153, and opportunity generation 154, among others) may be input to the scenario manager 101. The mission manager 150 may be a module in the physics engine and may be exposed through a C API that the Elixir application, SIM API, then exposes through a REST, gRPC (e.g., a framework for Remote Procedure Calls), or WebSocket API. Mission planning 151 may include comms scheduling 152, constraint checking 153, and opportunity generation 154, among others. These modules may be used to validate a plan when planning a mission. For example, if any of the modules throws a warning or returns a violation, the mission plan may be aborted.
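The plan-validation behavior described above (abort the plan if any module reports a warning or violation) can be sketched as a simple aggregation loop. This is an illustrative sketch only; the module interface, finding shape, and severity levels are assumptions, not the actual Mosaic API.

```python
def validate_plan(plan, modules):
    """Run each planning module (e.g., comms scheduling, constraint checking) over the plan.

    Each module is assumed to be a callable returning a list of findings, where a
    finding is a dict with at least a "severity" key. If any module reports a
    warning or a violation, the plan is aborted, mirroring the behavior described
    in the text.
    """
    findings = []
    for module in modules:
        findings.extend(module(plan))
    if any(f["severity"] in ("warning", "violation") for f in findings):
        return {"status": "aborted", "findings": findings}
    return {"status": "valid", "findings": []}
```

A module that finds nothing simply returns an empty list, so a plan passing every module is reported as valid.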
Comms scheduling 152 may enable an operator to schedule comms and ensure that the lighting constraints are correct and that the satellite is pointed in a sufficient, improved, or an optimal direction in order to have a successful contact (e.g., a contact that is desired or defined by a mission plan). Constraint checking 153 may include checking whether a mission plan violates any constraint, where constraints can include lighting, license constraints (e.g., Federal Communications Commission (FCC), National Oceanic and Atmospheric Administration (NOAA), etc.), and distance constraints, among others. Opportunity generation 154 refers to showing when a sufficient, improved, or optimal time is in order to take an action regarding a target (e.g., engage a target, image a target, etc.).
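As one hedged illustration of the distance constraints mentioned above, the sketch below flags planned waypoints that fall inside a keep-out range of a target. The function name, waypoint format (time paired with a 3-D position in kilometers), and threshold are hypothetical, not part of the actual constraint checking 153 implementation.

```python
import math

def check_distance_constraint(waypoints, target, min_range_km):
    """Return a violation record for each waypoint closer to the target than min_range_km.

    waypoints: list of (time_s, (x_km, y_km, z_km)) tuples along the planned trajectory.
    target:    (x_km, y_km, z_km) position of the constrained object.
    """
    violations = []
    for t, position in waypoints:
        range_km = math.dist(position, target)  # Euclidean distance (Python 3.8+)
        if range_km < min_range_km:
            violations.append({"time_s": t, "range_km": round(range_km, 3)})
    return violations
```

A lighting or license constraint would follow the same pattern: evaluate the plan point by point and return a list of violations for the mission planner to act on.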
A telemetry and commanding service 160 may communicate with scenario manager 101, modeling and simulation API 110, mission manager 150, and other infrastructure/platform services 182. The Mosaic system may further include constellation management services 178 (which include, for example, heterogeneous constellation 176 and homogeneous constellation 177, each of which provides information about elements of the configuration of its respective constellation) that receive input from modeling and simulation API 110 and also receive input from flight dynamics 171 (which includes, for example, orbit determination 172, maneuver planning 173, orbit management 174, and conjunction analysis 175). Flight dynamics may communicate with mission manager 150 and data manager 120. Mission manager 150 may communicate with ground command and control (“C2”) 159. The data manager service 120 can import and export data, and one or more identification and authorization components may provide identification and authorization systems and methods for the Mosaic system. For example, elements may communicate over an exposed C API or over a message queue such as ZeroMQ. As a further example, scenario manager 101 elements may communicate over gRPC or directly through a Python API (which may, in some instances, be used instead of a web interface). In some aspects, mission engineering may work with the Python API and a space operations team, and external customers may use scenario manager 101 through gRPC, REST, and WebSocket APIs that may be exposed as web interface APIs. In some aspects, the systems and methods disclosed herein primarily include four services: the scenario manager 101, the modeling and simulation (“M&S”) API 110, the data manager service 120, and the PVI 130.
In some aspects, APIs may be exposed to C and Python where necessary, and code may be written in C++ for advantageous speed. Various embodiments of system 100 enable universal and plug-and-play mission control software (“MCS”) APIs. In such embodiments, one or more of the components of system 100 comprise services that have interchangeable parts that can be used with different vendor software or custom software components. In one such example, the system may enable different flight software or custom command and telemetry databases for distinct spacecraft. In some embodiments, custom and extensible data providers can be added through, for example, a provider behavior. Advantageously, systems and methods disclosed herein may be agnostic to encryption schemes, and the system may be scalable across various RPO missions. Additionally, the systems and methods disclosed herein may support many (e.g., hundreds of) concurrent satellites, and each satellite's command and control may be isolated through specific channels, providing fault tolerance and isolated failure.
In some aspects, the Mosaic system (and/or components thereof) may communicate with other elements. For example, the modeling and simulation API 110 may communicate with vehicle JSON config 180. Communications with the JSON configs may comprise internal communications. In various aspects, an internally defined JSON schema may be used. In further aspects, an actual JSON can be imported by an external user conforming to the JSON schema being used internally. For example, an internal JSON schema may be used for internal purposes and also by external parties in order to create a satellite or a ground station via the JSON schema. These communications, and others, may be stored in a Git system repository. Alternatively, internal communications such as, but not limited to, the JSON config communications, may be included in a service communication such as, but not limited to, a REST API communication. Config files and communications, such as, but not limited to, those of the modeling and simulation (M&S) API 110, may be received from external sources (e.g., for specific missions). It is also contemplated that such communication may be loaded as a JSON file. A user Space Operations Center (SOC) 181 may also communicate with the Mosaic system. The user SOC may be internal or external to the system. Ground C2 159 may communicate with an antenna broker 186 as well as with infrastructure/platform 182, which may include encode/decode 183, encrypt/decrypt 184, and packetization 185 modules, respectively. Antenna broker 186 may communicate with Satellite Control Network (SCN) 187, Commercial (COMM) 188, and other services 189. The SCN may comprise the Space Force's network of antennas for commanding satellites. Commercial may comprise commercial ground control equipment and/or satellites.
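Creating a satellite or ground station from an imported JSON config, as described above, might involve validating the JSON against the internal schema before use. The sketch below shows the general idea with a hypothetical, greatly simplified set of required fields; it does not reflect the actual internal JSON schema.

```python
import json

# Illustrative required fields only; the real vehicle schema is internal to the system.
REQUIRED_FIELDS = {
    "name": str,
    "dry_mass_kg": (int, float),
    "orbit": dict,
}

def load_vehicle_config(text):
    """Parse a vehicle JSON config and check it against the (simplified) schema.

    Raises ValueError with a descriptive message when a required field is missing
    or has the wrong type, so a mission plan is never built from a malformed config.
    """
    cfg = json.loads(text)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in cfg:
            raise ValueError(f"missing field: {field}")
        if not isinstance(cfg[field], expected_type):
            raise ValueError(f"bad type for field: {field}")
    return cfg
```

A production system would more likely use a formal JSON Schema document and an off-the-shelf validator, but the contract is the same: external configs must conform to the internally defined schema.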
The Mosaic system 100, via data manager 120, may communicate with government data lakes and applications 190, which may include a Unified Data Library (“UDL”), such as, but not limited to, https://unifieddatalibrary.com; commercial data providers 191 such as ExoAnalytics 191a, LeoLabs (https://leolabs.space/) 191b, CelesTrak (https://celestrak.org/) 191c, and OurSky (oursky.ai) 191d, to name a few; and contact scheduler API 192. Libraries may include, but are not limited to, a Numerical Algorithms Library (e.g., containing general numerical algorithms, similar to what would be found in Python's scientific libraries), an Astrodynamics Library (e.g., a general astrodynamics library for handling time, high-fidelity propagation, computing maneuvers, etc.), a Discrete Event Simulation (DES) library (e.g., a custom implementation of a discrete event simulation), an Attitude Dynamics Library (e.g., containing logic for propagating the attitude of a satellite forward in time), a Filtering Library (e.g., a general filtering library, built to generate measurements from instruments and process them in filters), an RPO Mission Planning Library (e.g., a library built to allow RPO missions to be scheduled on the DES through Finite State Machines (FSMs), allowing complex missions to be built, where it may also define metrics for triggering events and logging information), and/or a State Estimation Library (e.g., a library built to allow state estimation and relative navigation to be performed within the DES), among others.
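To illustrate the role of the Discrete Event Simulation library listed above, the following is a minimal event-queue sketch, not the actual DES implementation: pending events are kept in a priority queue ordered by time, and an event's action may schedule follow-on events (the mechanism an FSM-based mission plan would use to chain states).

```python
import heapq
import itertools

class DiscreteEventSim:
    """Minimal discrete event simulation loop (illustrative of the DES library's role)."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker so same-time events stay FIFO
        self.time = 0.0
        self.log = []  # (time, event name) pairs, in execution order

    def schedule(self, when, name, action=None):
        """Queue an event; `action`, if given, is called with the sim when the event fires."""
        heapq.heappush(self._queue, (when, next(self._counter), name, action))

    def run(self, until):
        """Pop and execute events in time order until the queue is empty or `until` is reached."""
        while self._queue and self._queue[0][0] <= until:
            self.time, _, name, action = heapq.heappop(self._queue)
            self.log.append((self.time, name))
            if action is not None:
                action(self)
```

For example, a "burn" event can schedule its own "burn-complete" event a few seconds later, so the simulation advances purely by processing the event queue rather than by fixed time stepping.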
Embodiments of the M&S environment described herein include illustrative algorithms and capabilities. In various embodiments, astrodynamics algorithms may depend on robust and efficient numerical algorithms. Modeling high-fidelity propagation may require numerical integration of ordinary differential equations (ODEs). Computing visibility intervals and range metrics may require efficient root solvers. In various aspects, optimizing maneuver plans may require optimization algorithms having improved flexibility and efficiency. In some aspects, the numerical algorithms (e.g., numalg) library is designed to meet these needs, and algorithms are only implemented when they support the astrodynamics-related libraries. Various libraries and algorithms may be incorporated with the systems and methods disclosed herein, as described in Appendix A. Appendix A is incorporated fully by reference.
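As a non-limiting illustration of the numerical integration of ODEs mentioned above, the following sketch propagates a planar two-body orbit with a classical fourth-order Runge-Kutta step; this is a textbook method shown for illustration, not the numalg library's actual implementation:

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def two_body(state):
    """Derivative of [x, y, vx, vy] under point-mass gravity (planar)."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -MU_EARTH * x / r3, -MU_EARTH * y / r3]

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Circular orbit at 7000 km radius: speed = sqrt(mu / r)
r0 = 7000.0
state = [r0, 0.0, 0.0, math.sqrt(MU_EARTH / r0)]
for _ in range(600):
    state = rk4_step(two_body, state, 1.0)  # propagate 600 s
radius = math.hypot(state[0], state[1])  # should stay near 7000 km
```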
In various embodiments, scenario manager service 101 works with other components of the system to execute various modes. These modes may include (i) spacecraft command and control, which may include the pilot vehicle interface used to operate live spacecraft, plan missions, and receive telemetry, with decision-support from AI, (ii) space battle management, which may include systems and AI necessary to make sense of activity in an identified domain to make spacecraft decisions and control tactical and/or other units, (iii) battlespace, which may comprise a multiplayer wargaming environment for exercises and wargames utilizing realistic asset models and user-specific perspectives across military space missions, (iv) tactical decision aids, which may include systems to automate specific processes and synthesize complex data into actionable information for system users, which may be referred to as space operators, and (v) digital space range, which may comprise digital representations of spacecraft, sensors, and payloads to test orbital activities and validate tactics.
In some aspects, scenario manager service 101 may communicate with the modeling and simulation API 110 to execute a modeling and simulation engine (also referred to as a simulation engine). In some embodiments, the modeling and simulation API 110 may also be referred to as a SIM API, and it may use Elixir with a REST API and modeling and simulation engine information injected at compile time in order to calculate data sets based on selection of a mode and corresponding input. The data sets may be sent over a REST API, a WebSocket API, or via gRPC to the pilot vehicle interface (PVI) 130. In some embodiments, cloud services may enable the delivery aspects of the system 100, including the modeling and simulation API 110. In some advantageous aspects, because some or most of the functionality (e.g., the physics, persistence, etc.) resides on the backend, it is possible to leverage the backend in whole or in part for different applications, such as those described herein. For example, a headless API may be used with a PVI having an AstroUX-inspired frontend. The modeling and simulation API 110 may also include various APIs, such as an API for physics and weather modeling data (e.g., via a REST API, WebSocket API, etc.). Thus, various data may be input into the modeling and simulation engine at compile time. For example, new data sets may be added into the physics engine, which would then be added during compilation. It is contemplated that user data may be added after the application has compiled. In some embodiments, the modeling and simulation API 110 may contain any number of different modes that may be executed. One example of a mode is simulation. Another example of a mode is battlespace (e.g., a wargaming scenario). Yet another mode may be on-orbit operations. In some embodiments, battle management may be a simulated or real scenario, whereas on-orbit operations may be with actual satellites that could be engaged in one or more operations.
Further modes may include tactically responsive space and digital range or other aspects described above related to the scenario manager service 101.
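By way of a non-limiting illustration of mode selection as described above, a simple registry-and-dispatch pattern may be sketched as follows; the mode names mirror the disclosure, but the registry, handlers, and config shapes are hypothetical:

```python
# Hypothetical sketch of how a scenario manager might dispatch among modes
# (simulation, on-orbit operations, battlespace, etc.).
MODES = {}

def register_mode(name):
    """Decorator registering a mode handler under a given name."""
    def wrap(fn):
        MODES[name] = fn
        return fn
    return wrap

@register_mode("simulation")
def run_simulation(config):
    # Simulated scenario: no live spacecraft are commanded.
    return {"mode": "simulation", "live": False, "config": config}

@register_mode("on_orbit_operations")
def run_on_orbit(config):
    # On-orbit operations: actual satellites may be engaged.
    return {"mode": "on_orbit_operations", "live": True, "config": config}

def execute(mode_name, config):
    """Look up and run the handler for the selected mode."""
    if mode_name not in MODES:
        raise KeyError(f"unknown mode: {mode_name}")
    return MODES[mode_name](config)

result = execute("simulation", {"scenario": "training-1"})
```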
In some embodiments, the systems and methods disclosed herein may reuse data to execute scenarios (where the term “scenarios” includes exercises) and repeat one or more portions of a scenario. For example, sometimes the system may repeat aspects of some or all of a specific scenario. The scenario manager 101 may be responsible for persisting data that is used in some scenarios. As a further example, in a space battle management scenario, an operator may create objects within the scenario (e.g., satellites) and configure other information, as well as perform any number of maneuvers (e.g., launch one or more objects such as satellites). Any or all of this data may be persisted in the scenario manager 101. Each scenario, regardless of the type, may have a SIM API that is configured and/or executed and attached to the scenario, or the scenario manager 101, for the life of that scenario. In some systems, all of the data in the SIM API is held in memory only (e.g., not persisted to one or more databases). However, because data may be persisted in the scenario manager 101, the data may persist through different instances or different scenarios (e.g., the data may persist even if an API crashes, is disconnected, or reboots). This advantageously allows the Mosaic system to execute actions such as replay, as well as replay together with changing to a different maneuver during the replay (e.g., such as implementing counterfactuals at some point in time).
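As a non-limiting illustration of replay with counterfactuals, persisted scenario data can be treated as an event log that is re-applied, optionally substituting a different maneuver at some point in time; the event shapes and function names below are hypothetical:

```python
# Sketch of scenario persistence and replay with a counterfactual branch,
# assuming scenario state is a simple dict updated by recorded events.

def apply_event(state, event):
    """Apply one recorded event (e.g., spawn or maneuver) to scenario state."""
    kind, payload = event
    new_state = dict(state)
    if kind == "spawn":
        new_state[payload["id"]] = payload["orbit"]
    elif kind == "maneuver":
        new_state[payload["id"]] = payload["new_orbit"]
    return new_state

def replay(events, upto=None, counterfactual=None):
    """Rebuild state from the persisted event log. `counterfactual`
    optionally replaces the event at index `upto` to branch the replay."""
    state = {}
    for i, event in enumerate(events):
        if counterfactual is not None and i == upto:
            event = counterfactual
        state = apply_event(state, event)
    return state

log = [
    ("spawn", {"id": "sat-1", "orbit": "LEO"}),
    ("maneuver", {"id": "sat-1", "new_orbit": "GEO"}),
]
original = replay(log)
branched = replay(log, upto=1,
                  counterfactual=("maneuver", {"id": "sat-1", "new_orbit": "MEO"}))
```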
In some aspects, the systems and methods disclosed herein advantageously allow operators to train on the same system that they could also execute operations (e.g., run missions, fly, etc.) on. For example, the same code path(s) and/or exercise(s) may be used by an operator in a training scenario as are used in an operations scenario. This may be possible by using a SIM API to store created scenarios so that planning a mission can occur in the same system that the training occurred on. In one such example, the scenario manager 101 may store the data and subsequent planning or loading of the stored scenario may occur on different SIM API instances, which may be assigned randomly at scenario start, load, or reattachment.
In various aspects, a difference between planning and training is the use of a command and control (C2) component. Various software may be used for command and control, such as the MAX Ground Data System (GDS) by Rocket Lab ASI, Ball Aerospace Open C3, or other custom-developed C2 software. Advantageously, the ability to use different software systems allows the system to have modular components. For example, in various embodiments, it is possible to swap out one encryption scheme for another, and/or to add on new backends or versions of backends (e.g., simulated flight software, hardware, etc.). Further, additional data sources and/or providers can be added with advantageous ease, so that it is possible to ingest and/or export additional data. Further, it is advantageously possible to add in controls for satellites that are external to (e.g., not part of) the system.
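As a non-limiting illustration of C2 modularity, the swappable-backend idea can be sketched with a common adapter interface; the class names are hypothetical and the "cipher" is a toy stand-in for any real encryption scheme:

```python
from abc import ABC, abstractmethod

class C2Adapter(ABC):
    """Illustrative interface for swappable command-and-control backends
    (e.g., a ground data system or custom C2 software)."""
    @abstractmethod
    def send_command(self, command: str) -> str: ...

class SimulatedC2(C2Adapter):
    """Backend that routes commands into simulated flight software."""
    def __init__(self):
        self.sent = []
    def send_command(self, command):
        self.sent.append(command)
        return "SIM-ACK"

class EncryptedC2(C2Adapter):
    """Wrapper showing one encryption scheme swapped in around any backend.
    The XOR scramble is a toy cipher for illustration only."""
    def __init__(self, inner: C2Adapter, key: int):
        self.inner, self.key = inner, key
    def send_command(self, command):
        scrambled = "".join(chr(ord(c) ^ self.key) for c in command)
        return self.inner.send_command(scrambled)

backend = SimulatedC2()
c2 = EncryptedC2(backend, key=0x2A)
ack = c2.send_command("BURN 5S")
```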
In further embodiments, other components may advantageously be modular (such as the use of different command and telemetry databases). For example, for each mission, each spacecraft may have unique command and telemetry data. In some aspects, if a third-party company were using MAX flight software, the presently disclosed system advantageously allows the command and telemetry database to be imported for use within the presently disclosed system. Furthermore, the modularity provides benefits of reduced friction for implementation and changing of various components, and ease of ability to use this system to control various satellites even if they are new to this system. Further advantages include that, when the operator makes plans to maneuver, the operator may confirm the maneuver and then the maneuver may be executed faster than in real-time. For example, the system may send the commands out through scenario manager 101 into the command-and-control component (e.g., Ground C2 159), which would manage the commands as necessary. In some aspects, the simulation engine can run much faster than real-time because it contains a model of the universe with objects (e.g., celestial bodies, satellites, ground stations, rockets, etc.). A software-in-the-loop (SIL) application, referred to as FTRT SIL, may run MAX Flight Software faster than real-time. In various aspects, FTRT SIL may be an improved representation, or the most faithful representation, of the flight software. Operators may run a maneuver or mission plan through FTRT SIL before executing the plan with the real satellites. The command(s) may communicate directly to the spacecraft (e.g., if it is a basic command) or the command may be a sequence of commands. In some embodiments, the command may be a sequence of commands that use a scripting language such as FlightJAS and are communicated to the ground stations (e.g., the service provider).
Still further, it is contemplated that the commands may be sent out over WebSockets and may then be communicated to a satellite. Furthermore, the commands may be communicated to another system and translated into the relevant protocol and then communicated to the satellite. The satellite may then decode and decrypt the commands, which are then read by the satellite and telemetry data may be communicated.
Telemetry data may comprise any and all information about an object (e.g., a satellite); it describes the state of the object. The information may include information provided by the satellite about its date and time, position, velocity, and any or all of the subsystems (e.g., all of the subsystems that compose the satellite). The ability to communicate telemetry data is advantageous in various aspects because external imagery may not show as much information about the object, such as whether solar arrays have deployed, whether it is sun-pointing, and whether it is tumbling, among other information.
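As a non-limiting illustration, a telemetry record covering time, position, velocity, and per-subsystem status can be sketched as follows; the field names are hypothetical, since real missions use per-spacecraft command and telemetry databases:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Telemetry:
    """Illustrative telemetry record describing the state of a satellite."""
    timestamp: str
    position_km: Tuple[float, float, float]
    velocity_km_s: Tuple[float, float, float]
    subsystems: Dict[str, str] = field(default_factory=dict)

    def is_healthy(self) -> bool:
        """True only if every reported subsystem is nominal."""
        return all(status == "OK" for status in self.subsystems.values())

tlm = Telemetry(
    timestamp="2024-01-01T00:00:00Z",
    position_km=(7000.0, 0.0, 0.0),
    velocity_km_s=(0.0, 7.5, 0.0),
    # e.g., arrays deployed and attitude control nominal (not tumbling)
    subsystems={"solar_array": "OK", "adcs": "OK"},
)
healthy = tlm.is_healthy()
```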
As described herein, systems and methods may incorporate one or more ground stations. Information about ground stations may be obtained using systems such as one or more of Azure Orbital and/or ViaSat, among others, and any of these could advantageously be swapped out for modularity. Aspects of the systems and methods disclosed herein have advantageous modularity by using connections at the edge of the system (e.g., at the edge of the Mosaic system, such as vehicle JSON config 180, user SOC 181, Government Data Lakes and Applications 190, Commercial Data Providers 191, Contact Scheduler API 192, Antenna Broker 186, and other infrastructure/platform services 182). For example, data communications for components such as infrastructure and/or command and control and/or ground stations, among other components, may be changed with more ease for the systems and methods disclosed herein (e.g., using an API). This may advantageously provide additional redundancy and fault tolerance to the systems and methods.
Turning to the data manager service 120, data may be imported from any number of commercial and government sources. In various embodiments, components herein may use languages that make them easier to interface with other components. For example, scenario managers (e.g., scenario manager 101) and an API data manager may use Elixir. In various embodiments, a construct called a behaviour in Elixir may provide an interface that is easier and faster to connect with other languages. Advantageously, this may allow a faster and easier ability to ingest data from any number of providers. One example of this functionality is the use of Space-Track so that an operator can import objects in the sky when using the systems and methods described herein to execute various tactics, development, training, and/or wargaming, among other functionality to ingest data. Further advantages include that these systems and methods (including the communications functionality described herein) improve integration with any classified environment, e.g., by making it faster and easier, such as by being able to add a new provider as opposed to re-architecting part of the systems. Still further, regarding other components such as bus adapters, services such as GMSEC (Goddard Mission Services Evolution Center), UCI (Universal Command and Control Interface), etc. may advantageously be incorporated.
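As a non-limiting illustration of provider-agnostic ingest (analogous to an Elixir behaviour, sketched here in Python), each provider adapter maps its own payload onto a common record shape; the provider names reference real services, but the payload field names shown are invented for illustration:

```python
# Sketch of a provider-agnostic ingest layer: adding a provider means
# adding one adapter branch, not re-architecting the data manager.

def normalize(provider: str, raw: dict) -> dict:
    """Map hypothetical provider-specific fields onto a common record."""
    if provider == "space-track":
        # Illustrative field names only, not the actual API contract.
        return {"object_id": raw["catalog_id"], "tle_line1": raw["line1"]}
    if provider == "leolabs":
        return {"object_id": raw["catalogNumber"], "tle_line1": raw["tle1"]}
    raise ValueError(f"no adapter registered for provider: {provider}")

rec = normalize("space-track", {"catalog_id": 25544, "line1": "1 25544U ..."})
```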
In various embodiments, the PVI may include access to live system status and control of satellites, modeling and simulation for future architecture studies, a gaming platform for multiplayer orbital warfare training, and AI-assisted maneuver planning. AI pilots, in various embodiments, may be developed using deep reinforcement learning (RL) agents. The AI agents may be trained on any number of scenarios (e.g., runs, which may number in the hundreds of millions), and the scenarios may be exported into a data set, such as a set of matrices.
AI agents may be developed using an in-house developed RL framework. They may be built to be lightweight, with improved modularity and speed, and to be easier to debug. Systems and methods disclosed herein may train AI agents as they play many (e.g., thousands or millions of) individual RPO engagements, and the AI agents may be trained in a set of OpenAI Gym compliant environments, which can serve to wrap the simulator in such a way as to expose task-specific functionality. In some aspects, the agents are integrated into the simulator (PVI backend) and have tasks that include cooperative inspection, 1v1 (e.g., one player versus one player) OEM (e.g., dogfighting), and orbital intercept (ingress); learned weights may be extracted directly from the network and loaded into C++ code. This may advantageously lower overhead because embodiments may not require additional C++ packages to execute. Advantageously, this may allow the AI to function as a tactical decision aid for operators, where the agents autonomously map out waypoint plans that the operator then only needs to verify. Automation may be enabled, at least in part, via Expert Systems using Finite State Machines (FSM). States may specify various semi-autonomous behaviors, where goals are defined parametrically and maneuvers are calculated on the fly, and transitions between states may be controlled by user-defined criteria using real-time data. State machines may allow canonical RPO primitive behaviors (e.g., Forced Motion Circumnavigation, linear drift) to be strung together into a complete mission plan, and pre-constructed state machines may enable automated behaviors of both adversaries and allies for training purposes. In some embodiments, each of these behaviors may be context-dependent (e.g., where they are in space and in relation to something else), but they can be run independently or combined to achieve desired effects.
In various aspects, the behaviors can also be conditionally triggered; for example, a condition of coming within x kilometers of a specified target and then performing a forced motion circumnavigation (FMC), or, if y amount of delta-V is used, then performing another maneuver. In some advantageous aspects, this ability provides an operator a more open space to conduct an operation, especially if the target is non-cooperative or there is uncertainty about the location of a target, regardless of whether the target is friendly or otherwise.
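As a non-limiting illustration of conditionally triggered behaviors in an FSM, the following sketch transitions between parametric states on user-defined criteria over real-time data; the state names, thresholds, and class names are hypothetical:

```python
class RpoStateMachine:
    """Toy finite state machine for RPO behaviors: approach a target,
    start forced motion circumnavigation within a range threshold, and
    break off when the delta-V budget is exhausted."""
    def __init__(self, fmc_range_km=10.0, delta_v_budget=5.0):
        self.state = "approach"
        self.fmc_range_km = fmc_range_km
        self.delta_v_budget = delta_v_budget

    def step(self, range_km, delta_v_used):
        """Advance one tick given current range to target and delta-V spent."""
        if delta_v_used >= self.delta_v_budget:
            self.state = "retreat"   # budget exhausted: break off
        elif self.state == "approach" and range_km <= self.fmc_range_km:
            self.state = "fmc"       # within x km: begin forced motion circumnavigation
        return self.state

fsm = RpoStateMachine()
# (range_km, delta_v_used) telemetry samples over four ticks
states = [fsm.step(r, dv) for r, dv in [(50, 0.1), (12, 0.5), (9, 1.0), (8, 6.0)]]
```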
Advantageously, the exported data may be used to perform variations to the scenario(s) to determine improved actions to take within the scenarios. For example, the matrices may be compiled into a bar model for use with a physics engine to be used in a maneuver, which may include various pre-defined maneuver types (such as, but not limited to, wheel or perch) or any algorithmically determined maneuvers, and the deep reinforcement learning agents may also be used in such a manner, with provided, or otherwise applied, criteria. Thus, in various embodiments, a target may be performing a maneuver or some action and the systems may determine, based on the conditions, what decision to make (e.g., the agent may determine what decision(s) to make). Using the systems and methods disclosed herein, it is possible to project the choices forward in time to create potential scenarios and determine, for example, how the agent would perform. It is possible to view potential results and select specific scenarios to save and execute. Further advantageous embodiments allow the ability to analyze backward and forward in time in the scenario(s). For example, it is possible to analyze between two or more different maneuvers based on different criteria, e.g., based on what an operator thinks a target may do. This provides advantages with methods and modes such as training, tactics development, gaming, etc., where an operator may better understand what an object does and how different actions and decisions (including the details and effects of taking a discrete number of steps and/or the details and effects of taking each step within the number of steps) affect outcomes of scenarios and maneuvers.
In some examples, the systems and methods may take a specified number of steps where, at a predetermined step, each instance of the scenario is going to execute a different decision (e.g., a different maneuver or a same maneuver with different conditions) to determine what the outcome of the decision would be. Advantageously, this provides functionality to see how the different decisions could change the results. Various components of the systems and methods disclosed herein enable this functionality.
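As a non-limiting illustration of branching a scenario at a predetermined step, each instance below substitutes a different decision at that step and the outcomes are compared; the one-dimensional dynamics are a toy stand-in for the full physics engine, and all names are hypothetical:

```python
def rollout(decisions, branch_step=None, branch_decision=None):
    """Run a toy scenario over `decisions` (burn magnitudes per step);
    at `branch_step`, optionally substitute `branch_decision`."""
    position, fuel = 0.0, 10.0
    for i, burn in enumerate(decisions):
        if branch_decision is not None and i == branch_step:
            burn = branch_decision
        position += burn        # toy kinematics
        fuel -= abs(burn)       # toy fuel accounting
    return {"position": position, "fuel": fuel}

plan = [1.0, 1.0, 1.0, 1.0, 1.0]
# Branch the same plan at step 2 with three counterfactual burns and compare.
outcomes = {burn: rollout(plan, branch_step=2, branch_decision=burn)
            for burn in (0.0, 1.0, 2.0)}
```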
As described herein, scenario manager service 101 works with other components of the system to execute various modes (e.g., spacecraft command and control (the pilot vehicle interface used to operate live spacecraft, plan missions, and receive telemetry, with decision-support from AI), space battle management (the systems and AI necessary to make sense of activity in the domain to make decisions and control tactical units), battlespace (a multiplayer wargaming environment for exercises and wargames utilizing realistic asset models and user-specific perspectives across all of the military space missions), tactical decision aids (systems created to automate specific processes and synthesize complex data into actionable information for space operators), and digital space range (digital representations of spacecraft, sensors, and payloads to test orbital activities and validate tactics)).
These modes may advantageously share backend architecture for simulations, including storage media, systems, and methods. However, spacecraft command and control controls spacecraft that are in orbit. In some aspects, the architecture is a service-oriented architecture (SOA) where each service has an exposed API that one or more services may communicate with. Roughly, each service provides a set of capabilities or functionality that can be combined with other services to achieve one or more desired effects. The SIM API is the embedded physics and simulation engine, and it communicates with scenario manager 101. Scenario manager 101 may maintain the state data for any or all satellites, a scenario, and the routing to that particular scenario and satellite(s). The data manager service 120 may be a repository of observations, satellites, satellite states, etc. In various embodiments, the frontend can advantageously combine these different services to produce a compelling narrative (e.g., to create one or more scenarios, to import satellite(s) and state data from the data manager service 120, and then inject that data into the SIM API 170 instance to execute a scenario either for training, tactics development, or an actual mission with real satellites).
Digital space range includes characterization and configuration of operators and/or resident space objects, among other components, using sensors, payloads, observations, and/or other elements that have been developed and simulated in a simulation environment. Advantageously, it is possible (e.g., using the shared simulation environment and its components) to run the simulations anywhere geographically, including within an ITAR-approved location, and have the differing locations interact. Different locations may have different teams (e.g., white, red, and blue, which may be referred to as cells) and they may compete together in a shared environment.
With respect to the multiplayer capabilities, this functionality may include 1v1 or n v n (e.g., multiplayer versus multiplayer) teams from any ITAR-compliant location in the world, and a scenario/game browser may be provided that allows custom scenario creation and private/public hosting. The battlespace may advantageously provide network-synchronized time (e.g., a client clock may be auto-synced on scenario load) and a team-based database (e.g., where each team is assigned read/write access to separate authoritative sources of truth, where only the white cell has override ability over the truth and team databases). A physics engine may be provided that is a team-based physics engine (e.g., each team may be assigned a different propagator to emulate team/country fidelity) and there may be networked objectives (e.g., each team/player may be assigned custom objectives). Additionally, team and player progress/completion may be tracked digitally and may use a leaderboard. Success criteria may be dynamically assigned to global and team-based objectives, and scenario performance reporting may be automated. Advantageously, embodiments may include player-versus-computer capabilities for multiplayer features, where team (e.g., red team) behaviors at the constellation/unit level are driven by AI. Player roles may be assigned per team, and replay functionality may allow return to a previous state in the scenario to branch alternative decisions, as described herein.
In some embodiments, a white team may be referred to as the white cell, and it may be an administrative cell that has the ability to view and set all criteria in the environment, including state estimation (e.g., the white cell may view and control state estimation and state estimation libraries). One or more teams, e.g., the red cell, may have better technology in terms of types of objects and properties of objects (e.g., satellites) and more accurate state estimation, or less accurate state estimation. Advantageous embodiments of the present disclosure may include aerospace applications, where the simulation environments include the functionality and components to run aerospace simulations (including the wargaming simulations) across geographically diverse locations. In some advantageous embodiments, because the backend of the system disclosed herein handles the state in a centralized place, it is unnecessary to sync clocks or state from any individual clients that may be dispersed in diverse locations. This is advantageous because it is difficult to manage clocks on different servers (e.g., it is difficult to determine whether one thing happened before or after another, or even in relation to another, even with schemes such as Lamport clocks, vector clocks, hybrid logical clocks, etc.). The ability to have a single clock that pushes updates, as disclosed herein, provides improved solutions to these problems.
In various embodiments, only one clock may be relied upon (e.g., using the server running the simulation(s), which is managed by the white cell). For example, using clock sync, the white cell may enable a player to join the environment and for the player's scenario to be synced with whatever clock the master timekeeper has. In some aspects, the different cells may have varying permissions and/or varying roles, and even different operators or players (e.g., individual team members) may have different permissions and/or roles. For example, if an engagement is simulated with 50 Jackal objects and there are 75 adversarial satellites, control may be varied for some sets of like objects (e.g., control of five satellites) where other sets of like objects are under a different control. Thus, some advantageous embodiments enable improved granularity of control within the simulation and/or granularity of information within the scenarios (e.g., statistics such as a number of vehicles that were lost or destroyed, the amount of delta-V, and/or the amount of fuel spent, among others). The statistics may include agent and human component statistics as well, including if using an immersive digital facility. For example, a dedicated facility may be provided where operators can train or tactics can be developed. Advantageously, a distinct facility may be used that has a dedicated network for high reliability and performance. Illustrative examples of statistics may include a leaderboard to encourage competition, or more detailed statistics that show an operator's skill development over time through various metrics like delta-V used, faces of a satellite (including an adversary satellite) imaged, number of maneuvers to accomplish a goal, time to accomplish a goal, number of assets protected, number of adversaries destroyed or disabled, time to respond to a launch, time to launch, etc.
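As a non-limiting illustration of syncing a joining player to the single authoritative simulation clock, a client can follow the server by storing an offset rather than adjusting its own clock; the class names below are hypothetical:

```python
import time

class SimClock:
    """Authoritative scenario clock (server side), optionally time-accelerated
    via `rate` (e.g., rate=2.0 runs the scenario at twice real time)."""
    def __init__(self, epoch=0.0, rate=1.0):
        self.epoch, self.rate = epoch, rate
        self._start = time.monotonic()
    def now(self):
        return self.epoch + self.rate * (time.monotonic() - self._start)

class ClientClock:
    """Client clock that follows the server via an offset, so dispersed
    players never need their local clocks adjusted."""
    def __init__(self):
        self.offset = 0.0
    def sync(self, server_now):
        self.offset = server_now - time.monotonic()
    def now(self):
        return time.monotonic() + self.offset

server = SimClock(epoch=1000.0)
client = ClientClock()
client.sync(server.now())   # on scenario load, auto-sync to the master timekeeper
drift = abs(client.now() - server.now())  # should be near zero
```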
Thus, advantageously, when training, it is possible to collect and summarize the data at a granular level where it is advantageous to monitor statistics and progress (e.g., for use with a scoring mechanism). Such advantageous embodiments can improve training by providing improved information to users being trained in order to improve their knowledge and skill level (e.g., to a higher skill level, to provide greater knowledge, to improve judgment, to train faster, etc.).
Turning now to space data management, the system may comprise tactically responsive space or tactical command and control. For example, an adversarial nation may launch a rocket with satellites, and space data management may help determine how to respond to that while advantageously using a simulation environment. Thus, a model for launching at launch sites may be simulated. The model may simulate an incident response for a timing of when the object(s) (e.g., a satellite) is on orbit, and the model may help determine how to respond (in some embodiments, determining whether there are satellite-derived assets on orbit that have the ability to respond to the launch, or whether it is necessary to launch other assets). Part of the simulation can determine the speed (e.g., how quickly) at which the response can occur. In some advantageous aspects, this is helpful; e.g., if an adversarial nation makes a threatening maneuver towards a high-value asset, then it is possible to determine how to respond. Thus, advantageous applications of the systems and methods disclosed herein may include scenarios of responding to an adversarial nation, as well as scenarios including on-orbit objects (including combat or defensive operations). These scenarios may occur in real life, may be simulations of real life, or may be gaming only.
Turning to types of tactical decision aids, these may include, for example, residuals, visibility windows/access times, transfer/intercept porkchop plots, covariance, point of closest approach, camera-to-sun angle, range rate, and reachability analyses and visualization. In some embodiments, tactical decision aids may include aspects such as covariance tools (including determinations of likelihoods for locations of objects, including in-orbit objects) and other tools that may help operators (e.g., pilots and AI pilots) to determine improved decisions. Tactical decision aids may include not only software but also visual overlays and tools that perform calculations in the background in real time in order to advantageously update information provided to the operator (e.g., a heads-up display). Tactical decision aids may include charts as well as other tools that are static and/or dynamic in nature. For example, some tools may show, for a given airframe, an energy maneuverability plot that displays details of the airframe, such as whether it is possible to achieve a certain airspeed at a certain altitude. Tactical decision aids may include providing information on range rate, which refers to the rate at which a range to another object is changing. Range rate information, in some aspects, may include a tool that, for a given object, provides the curves of range rates that an operator may want to consider for a safe approach to an object, which may be overlaid with other information such as the range to the object and how to adjust a range for a desired result. Tools such as range rate may be advantageously useful for reachability analyses and launch window analyses.
Transfer analyses may be another tactical decision aid. Transfer analysis information provides information such as how to visualize an amount of fuel needed to perform a certain maneuver at a certain time.
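As a non-limiting illustration of the kind of quantity a transfer-analysis aid would visualize, the following sketch computes the total delta-V of a textbook two-impulse Hohmann transfer between circular, coplanar orbits; this standard formula is shown for illustration and is not the disclosed system's transfer-analysis implementation:

```python
import math

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def hohmann_delta_v(r1_km, r2_km):
    """Total delta-V (km/s) of a two-impulse Hohmann transfer between
    circular coplanar orbits of radii r1_km and r2_km."""
    a_t = 0.5 * (r1_km + r2_km)                  # transfer ellipse semi-major axis
    v1 = math.sqrt(MU / r1_km)                   # circular speed at departure
    v2 = math.sqrt(MU / r2_km)                   # circular speed at arrival
    v_p = math.sqrt(MU * (2 / r1_km - 1 / a_t))  # transfer speed at periapsis
    v_a = math.sqrt(MU * (2 / r2_km - 1 / a_t))  # transfer speed at apoapsis
    return (v_p - v1) + (v2 - v_a)

dv = hohmann_delta_v(6678.0, 42164.0)  # LEO (~300 km altitude) to GEO
```

Sweeping such a function over candidate departure times and target orbits is one way fuel cost can be visualized against maneuver timing.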
Various embodiments of the present disclosure provide tools that advantageously involve and/or model live operations. For example, the battlespace mode may be a multiplayer type of environment for humans to interact within, and space battle management may have capabilities similar to an Airborne Warning and Control System (AWACS) aircraft. In various aspects, when managing an entire battlespace with many units moving around the digital range, the environment may be more similar to a machine simulation environment. For example, if there is a satellite model to incorporate and evaluate against other satellite models, it is possible to use an offline analysis (e.g., including Monte Carlo analyses). Advantageously, this may provide information about many trajectories and the tradeoffs of performing different maneuvers at different timings. It is possible to perform analyses on maneuvers across an entire envelope of possibilities. Many of the tools described herein (such as battlespace, digital range, launch window modeling, and satellite modeling) are available across all the modes (e.g., environments). For example, launch window modeling may be used in the space battle management mode and also in other modes if needed. Advantageously, these tools may be chosen and used based on the focus of the simulation(s) and the arrangement of the default interfaces that are needed when accessing various modes.
Various modes, or portions of modes, may exist completely within the scenario manager and data manager and use one or more of the same APIs. Some modes may use additional components, such as ground station as a service and command and control. Some modes may use the same backend components and have different features that are exposed. Various modes may embed or use AI pilots, as may some or all of the tactical decision aids.
In various embodiments, loading a scenario may mean that certain information is requested and sent to the backend (e.g., to the scenario manager service) in order to create a new record in one or more databases based on the name of the scenario and some other metadata. The scenario manager service may then call to an unallocated instance of the SIM API (e.g., in the pool of SIM APIs), which is then assigned to the scenario for the life of that scenario. At this time, the simulation may be initialized in the environment. After this time, it is advantageously possible to adjust the scenarios (including capacity, conditions, astrodynamics information and measurements, and/or states of object(s), among others) as disclosed herein. For example, it is possible to continue to spawn satellites while a scenario (e.g., a mission) is running. Also advantageously, it is possible to speed up the scenario or increase the speed of the simulation. By way of example, it is contemplated that a user may be able to project forward in time and scrub time back and forth in that projection.
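As a non-limiting illustration of allocating an unallocated SIM API instance from a pool for the life of a scenario, the following sketch uses hypothetical class, field, and instance names:

```python
class SimApiPool:
    """Toy pool of SIM API instances: a scenario binds one free instance
    at creation and releases it when the scenario ends."""
    def __init__(self, instance_ids):
        self.free = list(instance_ids)
        self.assigned = {}  # scenario name -> instance id

    def create_scenario(self, name, metadata=None):
        """Create a scenario record and bind it to a free SIM API instance."""
        if not self.free:
            raise RuntimeError("no unallocated SIM API instances")
        instance = self.free.pop(0)
        self.assigned[name] = instance
        return {"scenario": name, "sim_api": instance, "metadata": metadata or {}}

    def end_scenario(self, name):
        """Release the bound instance back to the pool."""
        self.free.append(self.assigned.pop(name))

pool = SimApiPool(["sim-0", "sim-1"])
rec = pool.create_scenario("training-1", {"owner": "white-cell"})
```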
Various embodiments provide different frames (e.g., different views) such as a UCI view, an Earth centered inertial (ECI) frame, and a latitude, longitude and altitude frame. Additionally, it is possible to use commanding and telemetry interfaces, which may be the same or similar to what is needed for real-life satellite operations. These may be implemented using, at least in part, various external tools as discussed herein.
Aspects of the present disclosure are provided through the GUI. For example, if a user wanted to perform a launch window analysis, the user would proceed through the menus to select options in the GUI and perform the selected options. However, the modeling and simulation libraries (e.g., components available in the back end) could be available to a user who wants to work in a Python scripting environment. Thus, the user could call all of the same functions that the GUI can call but from the script environment. For example, if they wanted to perform a batch analysis of thousands of launch window simulations, they could advantageously speed up the process by batch scripting the analysis in a selected language (e.g., Python).
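A batch-scripted launch window sweep of this kind might look like the following sketch, where `launch_window_open` is a toy stand-in for the backend analysis call (the real M&S API functions are not shown here):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product


def launch_window_open(site_lon_deg: float, target_raan_deg: float) -> bool:
    """Toy stand-in for the backend launch-window call: a window is 'open'
    when the site longitude is within 10 degrees of the target RAAN."""
    return abs((site_lon_deg - target_raan_deg + 180) % 360 - 180) <= 10.0


sites = [0.0, 45.0, 120.0]   # hypothetical launch-site longitudes
targets = [44.0, 200.0]      # hypothetical target RAANs
cases = list(product(sites, targets))

# Batch the analysis across a worker pool instead of clicking through the GUI.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda c: launch_window_open(*c), cases))

open_windows = [c for c, ok in zip(cases, results) if ok]
```

The same pattern scales to thousands of cases by enlarging the parameter grid, which is the kind of batch analysis the scripting interface is intended to accelerate.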
The view of
A white cell may be the exercise controller (e.g., coordinator, referee, etc.) who has managing privileges. Thus, the white cell may set up the scenarios, control one or more clocks (e.g., the system clock and/or player clocks), and have access to the absolute truth of the system including knowing the existence and locations of everything in the environment. As discussed herein, the white cell may be controlled by scenarios managed by the M&S API 110 in a cloud-based system, which advantageously provides improved speed of functionality and ability to control clocks of the system. Thus, although each player has only their own view of the environment because they each have access only to information from their own sensors and are limited to their individual state estimation process, the white cell may have access to all information, including every player's information. The white cell may have privileges to modify any data in any of the databases in order to change objects, team/player capabilities, and any other information, if necessary.
In various embodiments, to provide the functionality shown in
In various embodiments, to provide the functionality and view of
Thus, in
In
In some instances, a bar may be selected (e.g., with a right click) and a user may also select an action (e.g., to simulate a launch). For the simulated launch, the system could inject another object in the simulation in plane with any target that was selected. In various embodiments, the M&S API 110 may input data at a compile time to simulate the changes to the scenario (e.g., injecting an additional object) that is stored in the scenario manager 101 using a SIM API instance 170 associated with the relevant scenario. In some aspects, operators may view and interact with the components of
In some examples, the analysis shown in this illustrative view may be used to determine and show an intercept analysis (e.g., how to get an object to a desired plane, how to get one object within a certain distance of another object, and/or how to have one object intercept another object, among other objectives). In further examples, an operator may configure the launch window analysis to show different objects that are in orbit, and may select a launch site, such as a ground system, to determine which launch windows may direct-inject a target into the same plane as the object that is in orbit. In some aspects, the information provided in this illustrative view shows launch windows; in further embodiments, it shows a simulated launch and the injection of objects. Various results of this view show inertial results of launching objects, and the results can advantageously be shown in different types of frames.
In some embodiments, as a method of the launch window analysis, a timespan is assigned, one or more target units are assigned, one or more launch sites are selected, and the analysis is conducted. In some aspects, multiple launch sites may be selected. Launch sites may be selected based on certain properties (e.g., all launch sites within a specified geographic area may be selected based on their geographic location within the geographic area). In some embodiments, a specified geographic area may be selected, and the launch sites located within that area are automatically selected for inclusion in the launch window analysis. The results of the analysis show, for example, the time window in which an object can be launched from the one or more sites in order to meet any specific objective(s) desired, such as launching an object during a certain timeframe into a particular orbit while avoiding other objects.
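The automatic geographic selection of launch sites could be sketched as follows, using an assumed bounding-box filter and invented site names and coordinates:

```python
from dataclasses import dataclass


@dataclass
class LaunchSite:
    name: str
    lat: float
    lon: float


def sites_in_area(sites, lat_min, lat_max, lon_min, lon_max):
    """Automatically select every launch site whose location falls inside
    the specified geographic bounding box."""
    return [s for s in sites
            if lat_min <= s.lat <= lat_max and lon_min <= s.lon <= lon_max]


sites = [
    LaunchSite("Site A", 28.5, -80.6),
    LaunchSite("Site B", 34.7, -120.6),
    LaunchSite("Site C", 5.2, -52.8),
]
# All sites inside the specified box are auto-selected for the analysis.
selected = sites_in_area(sites, 25.0, 50.0, -125.0, -65.0)
```

Each selected site would then be fed into the launch window analysis over the assigned timespan and target units.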
The launch window analysis may also be useful from a threat perspective. For example, if there are launch sites associated with threats (e.g., launch sites located in countries that may be considered adversarial), it may be desirable to determine when one or more assets are at risk of a launch from that geographic area. In some instances, it may be desirable to determine when it is possible for the adversarial site(s) to inject an object into the same plane as another (e.g., friendly) object. These windows could be determined using the launch window analysis as described herein. With respect to
As one illustrative example of a use of the launch window analysis, multiple analyses could be performed. For example, using the M&S API 110 together with the relevant scenarios saved in the one or more SIM API instances 170, the system can execute a simulated launch of a red threat object into a plane with a communication satellite (which could be an actual communication satellite or one that has not yet been launched). The analysis could be performed using the system to model that object in space (e.g., in orbit) with the analysis data shown visually by the PVI 130. More than one object may be launched so that there are multiple objects (e.g., multiple red and/or blue and/or other objects) in orbit and all objects (and various associated properties such as their domain, object type, condition, identity, and country) may be displayed by the PVI 130. It is also possible to run another analysis from a friendly launch site to determine when it is possible to launch, and where to launch from, in order to rendezvous with any one or more objects (e.g., the red threat object). Other properties associated with the objects and launches may be determined by the analysis, such as how much fuel it may take to rendezvous with one or more threats. Thus, in various embodiments it is possible to combine multiple launches into one or more scenarios and into one or more views.
In further embodiments, it is possible to run one launch at a time. For example, a red launch would be simulated, and then after the red object is launched, a blue launch would be simulated to launch a blue object. The system can provide sets of features. For example, the system can provide features that show timeframes that are a launch window analysis of timing for when to launch objects to achieve a certain objective (e.g., location on a specified plane in orbit, a desired interception of one or more objects, etc.). Additionally, in some embodiments, the system provides features that simulate the launch (and any additional launches) and injects the object or objects in orbit. These sets of features can be connected together (e.g., using the scenario saved in one or more SIM API instances 170), then the system may put the object(s) in orbit (e.g., by processing in the M&S API 110), and a user is able to see the object(s) in the frames using the PVI 130, such as the frame shown in the screenshot of
As a further example,
In some aspects of the state estimation process, measurements are taken, and the object calculates one or more estimations of the state data of the specific target that it is monitoring. The object may have both a model onboard (e.g., a mathematical model of the specific target) that provides one or more model estimates, as well as actual real-life observations of the specific target. The real-life observations may include data that is monitored in real-time, such as by external sensors. The systems of the object may subtract the real-time measurements from the model estimate(s) to determine what the specific target should be doing or other properties of the specific target. In various embodiments, this calculated difference is called the residual. As shown in
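As a minimal illustration of the residual computation described above (the numbers are toy values, and the sign convention is an assumption, since implementations differ on whether the measurement or the model estimate is subtracted):

```python
def residual(model_estimate, measurement):
    """Residual for the state-estimation step described above: the
    difference between the onboard model's prediction and the live
    measurement, element by element."""
    return [m - z for m, z in zip(model_estimate, measurement)]


# Predicted vs. observed range (km) and range-rate (km/s) for a tracked
# target; values are invented for illustration only.
predicted = [1000.0, -2.5]
observed = [998.0, -2.4]
r = residual(predicted, observed)
```

A residual near zero suggests the target is behaving as the model predicts; a persistently large residual may indicate an unmodeled maneuver by the target.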
In various aspects,
In various embodiments,
For example, the system may send the commands out through scenario manager 101 into the command-and-control component (e.g., Ground C2 159), which would manage the commands as necessary. The command(s) may communicate directly to the spacecraft (e.g., if it is a basic command) or the command may be a sequence of commands. In some embodiments, the command may be a sequence of commands that use a scripting language such as FlightJAS and are communicated to the ground stations (e.g., the service provider). Still further, it is contemplated that the commands may be sent out over WebSockets and may then be communicated to a satellite. Furthermore, the commands may be communicated to another system and translated into the relevant protocol and then communicated to the satellite. The satellite may then decode and decrypt the commands, which are then read by the satellite and telemetry data may be communicated
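One possible shape for such a command envelope, covering both a single basic command and a sequence relayed to a ground station, is sketched below; the field names, command names, and the `RUN_SEQUENCE` convention are assumptions for illustration, not the actual protocol:

```python
import json


def build_command(target, name, args=None, sequence=None):
    """Build an illustrative command envelope: either a single basic
    command sent directly to the spacecraft, or a named sequence relayed
    to a ground station for step-by-step execution."""
    msg = {"target": target, "command": name}
    if args:
        msg["args"] = args
    if sequence:
        msg["sequence"] = sequence
    # Serialized payload as it might be pushed over a WebSocket link.
    return json.dumps(msg)


single = build_command("SAT-1", "SAFE_MODE")
seq = build_command("SAT-1", "RUN_SEQUENCE",
                    sequence=["POINT_NADIR", "OPEN_SHUTTER", "DOWNLINK"])
```

A downstream relay would translate such a payload into the relevant ground-station or spacecraft protocol before uplink.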
In some aspects, the tool shown in this view helps an operator understand which of the vehicles shown on the outer ring 902 may be targeted by vehicle 901. For instance, when the simulation initially starts, the results are not very clear because there is time for maneuverability (e.g., there is some capability to retarget the objects 902); however, as the vehicle 901 gets closer, the window of possibilities narrows, thereby providing results of the transfer analysis. In various aspects, this transfer analysis may be referred to as a reachability analysis. Additional information may be shown in this view, such as chart 905 showing timing windows for reachability of each of vehicles 902. In some embodiments, the elements shown in
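The narrowing behavior of such a reachability screen can be illustrated with a deliberately simplified one-dimensional toy model (all names, values, and the linear cost model are invented for illustration):

```python
def reachable_targets(own_pos, targets, max_dv, time_to_go):
    """Toy reachability screen: a target is still reachable while its
    offset can be closed with the remaining delta-v over the remaining
    time. As time_to_go shrinks, the reachable set narrows."""
    reachable = []
    for name, offset in targets.items():
        # Crude required-rate proxy: distance divided by remaining time.
        required_dv = abs(offset - own_pos) / max(time_to_go, 1e-9)
        if required_dv <= max_dv:
            reachable.append(name)
    return reachable


targets = {"T1": 0.0, "T2": 50.0, "T3": 200.0}
# Early in the engagement, everything on the ring is still in play...
early = reachable_targets(0.0, targets, max_dv=1.0, time_to_go=300.0)
# ...but as the vehicle closes in, the window of possibilities narrows.
late = reachable_targets(0.0, targets, max_dv=1.0, time_to_go=20.0)
```

Rerunning such a screen at each time step yields the shrinking set of candidate targets that the view conveys to the operator.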
Advantageously, in the example embodiments shown with aspects related to reachability (e.g., in
The learning module 1628 may utilize machine learning and have access to training data and feedback 1639 to initially train behaviors of the learning module 1628. Training data and feedback 1639 contains training data and feedback data that can be used for initial training of the learning module 1628. The learning module 1628 may also be configured to learn from other data, such as scenario results, feedback, etc., which may be provided in an automated fashion (e.g., via a recursive learning neural network) and/or provided by a human. The learning module 1628 may additionally have access to one or more data model(s) 1649. The data model(s) 1649 may be built and updated by the learning module 1628 based on the training data and feedback 1639. The data model(s) 1649 may be provided in any number of formats or forms. Non-limiting examples of data model(s) 1649 include Decision Trees, Support Vector Machines (SVMs), Nearest Neighbor, and/or Bayesian classifiers.
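For instance, a nearest-neighbor data model, one of the forms listed above, can be sketched in a few lines; the feature choices and behavior labels here are invented for illustration:

```python
def nearest_neighbor_predict(training, query):
    """1-nearest-neighbor classifier over (feature_vector, label)
    training pairs: return the label of the closest training point."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, label = min((dist2(feat, query), lab) for feat, lab in training)
    return label


# Toy training data: (range_km, closing_rate) -> assessed behavior label.
training = [
    ((100.0, -0.5), "approach"),
    ((500.0, 0.0), "station-keep"),
    ((100.0, 0.5), "depart"),
]
pred = nearest_neighbor_predict(training, (120.0, -0.4))
```

A learning module could maintain several such models (trees, SVMs, Bayesian classifiers) and refresh them as new training data and feedback arrive.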
The learning module 1628 may also be configured to access information from an agent actions decision database 1659 for purposes of building a historical agent actions database 1658. The agent actions decision database 1659 stores data related to agent decisions, including but not limited to historical agent actions, agent processing, agent notes, and other information related to agent actions. Agent actions information within the historical agent actions database 1658 may constantly be updated, revised, edited, or deleted by the learning module 1628 as the agent engine 1609 processes additional agent actions.
In some embodiments, the agent engine 1609 may include an agent actions engine 1629 that has access to the historical agent actions database 1658 and selects appropriate agent processing decisions (e.g., presented in agent information 1689) based on input from the historical agent actions database 1658 and based on inputs 1679 (inputs 1679 may include information about scenarios, agents, events, and actions, real-time information, historical information, and may include information that is internal or external to system 100). Non-limiting specific examples of information in inputs 1679 include information about any agent action(s) recently executed for various agents and variations in actions by agents. The agent actions engine 1629 may make decisions regarding what information from inputs 1679 to provide to update agent actions based on any of the criteria described herein, including but not limited to information provided by the learning module 1628, information about current status of agent actions, feedback about agent actions (e.g., from agents, supervisors, agent training information, etc.), and performance metrics related to agent actions or any other aspects of agent actions. To enhance capabilities of the agent actions engine 1629, the agent actions engine 1629 may constantly be provided with information from training data and feedback 1639. Therefore, it may be possible to train an agent actions engine 1629 to have a particular output or multiple outputs. In various embodiments, the output of an AI application (e.g., learning module 1628) is an updated AI agent action (e.g., a decision) that is sent, with appropriate information conveying how to apply the updated AI agent action(s) within system 100, via the agent actions engine 1629 and from agent information 1689, to the agent API 1619.
Using the inputs 1679 and the historical agent actions database 1658, the agent engine 1609 may be configured to provide agent information 1689 (e.g., one or more AI agent actions) to the agent API 1619 so that the agent API 1619 can publish one or more events based on the agent information 1689. In various embodiments, the events may be published via the M&S API 110 to the scenario manager 101 to update one or more AI agent actions. An event of an updated AI agent action may be provided as an output of the agent actions engine 1629 (e.g., processed through the agent API 1619). For example, the AI agent(s) may advantageously function as a tactical decision aid for operators, where the AI agents autonomously map out waypoint plans that the operator then only needs to verify. The API may be a readily visible API that is exposed to an external system (e.g., a system external to the system 100 such as the agent engine 1609), and the API injects an event (e.g., an AI agent action event) into the system to modify an AI agent action on an ad-hoc basis based on the determinations (e.g., the agent information 1689) of the external system. AI agent actions become event-based data items, which can be determined in real-time and exposed for authorized applications to influence. In some aspects, there can be little or no manual configuration of agent actions and no static value for agent actions because the system 100 may use only AI agents and the AI agent actions may be set on an ad hoc basis. In some aspects, an AI or machine learning application may be enabled to integrate with the systems and methods disclosed herein in order to advantageously determine and implement updated and customizable AI agent actions. Such embodiments are advantageous in that they automate and quickly adjust (with little or no manual configuration) agent actions so that outcomes are improved and resources are saved.
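The event-based publication path can be illustrated with a minimal publish/subscribe sketch; the topic name and payload fields are assumptions, not the actual agent API:

```python
from collections import defaultdict


class EventBus:
    """Minimal publish/subscribe bus standing in for the event path from
    the agent API into the scenario manager (names are illustrative)."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler subscribed to this topic.
        for handler in self._subs[topic]:
            handler(payload)


received = []
bus = EventBus()
# The scenario-manager side listens for updated AI agent actions...
bus.subscribe("agent.action.updated", received.append)
# ...and the agent side publishes a proposed waypoint plan for review.
bus.publish("agent.action.updated",
            {"agent": "ai-pilot-1", "plan": ["wp1", "wp2"], "verified": False})
```

In this arrangement the operator's role reduces to verifying the published plan (flipping `verified` to true) rather than authoring it manually.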
Illustrative and non-limiting examples of advantages of using AI agents as disclosed herein include the following: an improved ability to assist maneuver planning; AI agents may be easily trained as they play many (e.g., thousands or millions of) individual RPO engagements; AI agents may advantageously be trained in a set of OpenAI Gym compliant environments; and the AI agents may be built to be lightweight and modular, with advantageous speed and improved ease of debugging.
At step 1706, object information is received. The object information may be about how the at least one object interacts with the on-orbit operations. The on-orbit operations may be obtained in step 1708, where the on-orbit operations are an integration of the scenario data with the object information. At step 1710, the view of the on-orbit operations is provided. The view may be provided via the pilot vehicle interface.
The present disclosure, in various aspects, embodiments, and/or configurations, includes components, methods, processes, systems, and/or apparatus substantially as depicted and described herein, including various aspects, embodiments, configurations, sub-combinations, and/or subsets thereof. Those of skill in the art will understand how to make and use the disclosed aspects, embodiments, and/or configurations after understanding the present disclosure. The present disclosure, in various aspects, embodiments, and/or configurations, includes providing devices and processes in the absence of items not depicted and/or described herein or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease, and/or reducing cost of implementation.
The foregoing discussion has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as any claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, any claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Moreover, though the description has included description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
Systems and methods provided in the present disclosure enable effective performance of RPO mission planning, analysis, and trade studies.
In various embodiments, libraries are provided for performing RPO mission planning and analysis; implementations of algorithms from textbooks, journal and conference papers, and technical documents are used; and all code is written in C++ for speed, with APIs exposed to C and Python where necessary.
This appendix describes, for example, embodiments of the modeling and simulation (M&S) environment, including illustrative algorithms and capabilities.
In various embodiments:
numalg—Math Library
Astrodynamics Tool-Kit (astrotk) Library
Astrotk Library Overview
Astronomy Library
Astronomy Library—IERS EOP Data
Mosaic is a common platform and interface for space mission operations, training, and simulation. It features a high-fidelity modeling and simulation backend tailored for Rendezvous and Proximity Operations (RPO), but extensible for general purpose astrodynamics simulation. Mosaic also features a graphical front-end for input and visualization, including inertial frames, relative frame orthographic projections, waypoint trajectory planning, and information displays. Application Programming Interfaces (APIs) are available for batch-scripting astrodynamics analysis. Mosaic is cloud-based, enabling easy access to “Simulation-as-a-Service”, a high-performance compute architecture supporting massively parallel Monte-Carlo-type analyses, and satellite operations from anywhere. This environment provides analyses and planning tools for both space defense and commercial Orbital Servicing, Assembly, and Manufacturing (OSAM) applications.
The Pilot Vehicle Interface (PVI) will include the following features:
The agents (AI) are developed using an in-house developed RL framework.
The present Application for Patent claims priority to U.S. Provisional Application No. 63/509,001, entitled “Systems, Methods, and Storage Media for Controlling and Simulating Spacecraft Maneuvers,” filed Jun. 19, 2023, assigned to the assignee hereof, the contents of which are incorporated herein by reference in their entirety and for all proper purposes.
| Number | Date | Country |
|---|---|---|
| 63509001 | Jun 2023 | US |