The present disclosure is generally directed to distributed processing systems and distributed processing nodes.
The various embodiments and configurations of the present disclosure address needs in the related art.
The present disclosure provides, among other things, distributed digital nodes and virtualization techniques.
The present disclosure can provide a number of advantages depending on the particular configuration.
These and other advantages will be apparent from the disclosure contained herein.
A system for a satellite according to at least one embodiment of the present disclosure comprises: at least one resource that communicates with a payload of the satellite, the at least one resource including a first abstracted physical communication interface; and a controller coupled to the at least one resource and including a second abstracted physical communication interface that communicates with the first abstracted physical communication interface over a communications link.
Any of the features herein, wherein the communications link comprises a USB connection, a UART connection, an Ethernet connection, an SPI connection, a PCI connection, a physical interface, a wireless interface, and/or an I2C connection.
Any of the features herein, wherein the at least one resource includes a payload server, an edge server, a compute resource, and/or a compute node.
Any of the features herein, wherein the compute node is a client node or a server node.
Any of the features herein, wherein the controller includes a third abstracted physical communication interface that communicates with a remote node.
Any of the features herein, wherein the at least one resource comprises an IPCC message encoder/decoder module, an IPCC message payload security module, and a bridge module that routes a message received over the communications link to the controller.
Any of the features herein, wherein a remote node is accessible by a primary user and by a secondary user.
Any of the features herein, wherein the bridge module manages the accessibility of the primary user and the secondary user, and wherein the primary user comprises one or more users.
Any of the features herein, wherein the at least one resource comprises a second remote node, wherein the bridge module associates the primary user with the remote node, and wherein the bridge module associates the secondary user with the second remote node.
Any of the features herein, wherein the remote node and the second remote node are remote to the at least one resource and the controller.
Any of the features herein, wherein a first simulation of the system comprises the at least one resource and the controller, wherein a second simulation of the system comprises at least one second resource and a second controller, wherein the primary user accesses the first simulation, and wherein the secondary user accesses the second simulation.
Any of the features herein, wherein the at least one resource comprises a simulation adaption layer.
Any of the features herein, wherein an Internet Protocol (IP) connectivity of the simulation adaption layer is configurable.
Any of the features herein, wherein the first abstracted physical communication interface and the second abstracted physical communication interface communicate with one another according to at least one of Internet Protocol (IP), TCP, UDP, and a direct API.
Any of the features herein, wherein the at least one resource includes two or more of a payload server, an edge server, a compute resource, and a compute node.
Any of the features herein, wherein the edge server and the payload server communicate with one another according to at least one of Internet Protocol (IP), TCP, UDP, and a direct API.
Any of the features herein, wherein the controller and/or the at least one resource selects an application programming interface (API) used for presenting data to a user.
Any of the features herein, wherein the controller and/or the at least one resource convert a request for an API into a JSON file and/or convert a JSON file into an API.
Any of the features herein, wherein the controller and/or the at least one resource convert a second request for a second API into a second JSON file and/or convert a second JSON file into a second API.
Any of the features herein, wherein the request and the second request are delivered asynchronously.
Any of the features herein, wherein the second request is delivered synchronously after a confirmation of the conversion of the request is received at the controller and/or at the at least one resource.
Any of the features herein, wherein the controller and/or the at least one resource adds API execution timing constraints in the JSON file while converting.
Any of the features herein, wherein the controller and/or the at least one resource convert a request for an API into a file and/or convert a file into an API, and wherein the controller and/or the at least one resource adds API execution timing constraints in the file while converting.
Any of the features herein, wherein the API execution timing constraints specify that the API is to be executed a predetermined time after a second API is executed.
Any of the features herein, wherein the API execution timing constraints specify that the API is to be executed a predetermined time before a second API is executed.
Any of the features herein, wherein the API execution timing constraints specify that the API is to be executed at the same time as a second API.
Any of the features herein, wherein the API execution timing constraints specify that the API is to be executed at a predetermined time.
Any of the features herein, wherein the converting is performed at least partially by a simulation adaption layer.
Any of the features herein, wherein the file comprises a JSON file, a text file, a Protocol Buffer, or a csv file.
Any of the features herein, wherein the file specifies an anticipated response format from the payload.
Any of the features herein, wherein the anticipated response format includes an expected time for response from the payload.
Any of the features herein, wherein an error message is generated when a payload response time is greater than the expected time.
Any of the features herein, wherein the at least one resource executes the API a predetermined number of times when a payload response time is greater than the expected time.
Any of the features herein, wherein the file comprises a plurality of parameters specifying the API execution.
Any of the features herein, wherein the plurality of parameters comprises two or more of a data type, a data length, a relative execution time, an API type, an API name, a number of arguments, and an execution mode.
Any of the features herein, wherein the API execution uses data previously sent to and stored on the at least one resource.
Any of the features herein, wherein the controller and/or the at least one resource adds one or more dependency API details in the JSON file while converting.
Any of the features herein, wherein the at least one resource executes one or more applications for interacting with the payload.
Any of the features herein, wherein each of the one or more applications is associated with a different address.
Any of the features herein, wherein each different address comprises an IP address.
Any of the features herein, wherein communication between the first abstracted physical communication interface and the one or more applications comprises IP communication and/or an inter process communication (IPC) message.
Any of the features herein, wherein communication between the second abstracted physical communication interface and one or more applications running on the controller comprises inter thread communication.
Any of the features herein, wherein the at least one resource further comprises: an encoder/decoder module, a security module, a registration module, a router module, and a listener/sender module.
Any of the features herein, wherein each application executable at the at least one resource is assigned a unique application ID.
Any of the features herein, wherein messages exchanged between the at least one resource and the controller comprise a header that includes the application ID.
Any of the features herein, wherein the header for each message further comprises a source hardware device ID, a destination hardware device ID, a field that enables and disables encryption/decryption, a field that enables authentication, and/or a packet sequence number.
Any of the features herein, wherein, for incoming messages from the controller to the at least one resource, the at least one resource processes the header before sending the incoming messages to an application of the at least one resource.
Any of the features herein, wherein, for outgoing messages from the at least one resource to the controller, the at least one resource adds the header to the outgoing messages before sending to the controller.
Any of the features herein, wherein application hardware sub-systems of the payload are remote to the at least one resource and the controller.
Any of the features herein, wherein data from the application hardware sub-systems is stored locally before being sent to the at least one resource and/or the controller.
Any of the features herein, wherein the application hardware sub-systems are tested in a virtual environment using the at least one resource.
Any of the features herein, wherein the application hardware sub-systems, the at least one resource, and the controller are cloud-based.
Any of the features herein, wherein the at least one resource and the controller are cloud-based.
Any of the features herein, further comprising: the satellite, wherein the satellite includes the at least one resource and the controller.
Any of the features herein, wherein the at least one resource comprises a computing node simulated in a cloud server, a computing resource simulated in a cloud server, a desktop computer, and a high-performance computing (HPC) device.
Any of the features herein, wherein the at least one resource corresponds to a computing node, computing resource execution, a sensor hardware subsystem simulated in a cloud server, a sensor hardware subsystem simulated in a desktop computer, a sensor hardware subsystem simulated in a high-performance computing (HPC) device, and/or a satellite payload hardware subsystem.
A method of simulating collection of satellite data according to at least one embodiment of the present disclosure comprises: providing at least one cloud-based computing node for controlling a payload of a satellite; providing at least one remote computing node including an application hardware sub-system that simulates the payload of the satellite; and simulating operation of the payload of the satellite using the application hardware sub-system based on signals received by the at least one remote computing node from the at least one cloud-based computing node.
Any of the features herein, wherein the application hardware sub-system comprises a sensor that is simulated by software of the at least one remote computing node.
Any of the features herein, wherein the application hardware sub-system comprises a physical sensor that is separate from but in communication with the at least one remote computing node.
A system according to at least one embodiment of the present disclosure comprises: three or more distributed computes that interact with one another in a cloud environment and at least one of which is in communication with a remote resource via a cloud connection, wherein each of the three or more distributed computes are configured to receive information from the remote resource and/or provide commands executable by the remote resource.
Any of the features herein, wherein the remote resource comprises a simulated resource.
Any of the features herein, wherein the simulated resource comprises a simulated sensor.
Any of the features herein, wherein the simulated sensor comprises a sensor configured for use on a satellite.
Any of the features herein, wherein the simulated resource comprises at least one of simulated software, simulated hardware, and a simulated compute node.
Any of the features herein, wherein the cloud environment is executed on a satellite.
Any of the features herein, wherein the cloud environment is executed on a terrestrial server.
Any of the features herein, wherein at least one of the three or more distributed computes is virtualized.
Any of the features herein, wherein all of the three or more distributed computes are virtualized.
Any of the features herein, wherein the three or more distributed computes comprise at least one of a micro controller, a micro-processor, a payload server, and an edge server.
Any of the features herein, wherein communication with the remote resource is facilitated by at least one of a physical input output hardware abstraction layer (IO-HAL) and an application hardware abstraction layer (AH-AL).
Any of the features herein, wherein the IO-HAL comprises an abstracted physical communication interface.
Any of the features herein, wherein the AH-AL comprises an abstracted application layer of a physical communication interface.
Any of the features herein, wherein the communication with the remote resource is facilitated by transmitting one or more JSON files.
Any of the features herein, wherein the one or more JSON files define at least one of a current API execution time, a timing dependent API, a relative execution time, an execution mode, and a number of arguments.
Any of the features herein, wherein the one or more JSON files are converted into one or more API calls.
Any aspect in combination with any one or more other aspects.
Any one or more of the aspects disclosed herein.
Any one or more of the features as substantially disclosed herein.
Any one or more of the features as substantially disclosed herein in combination with any one or more other features as substantially disclosed herein.
Any one of the aspects/features/implementations in combination with any one or more other aspects/features/implementations.
Use of any one or more of the aspects or features as disclosed herein.
It is to be appreciated that any aspect or feature described herein can be claimed in combination with any other aspect(s) or feature(s) as described herein, regardless of whether the features come from the same described implementation.
The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
Embodiments of the present disclosure will be described in connection with a virtual FlatSat and distributed digital nodes.
The on-board controller 104 may include a micro controller, a micro-processor, a processor, and/or the like that can be connected to the core AHS 124A-124N to regulate or manage the core AHS 124A-124N. The core AHS 124A-124N may be or comprise, for example, an Attitude Determination and Control System (ADCS), Inertial Measurement Units (IMUs), sensors (e.g., temperature sensors, pressure sensors, etc.), and/or the like. The on-board controller 104 may be connected to the core AHS 124A-124N through a Physical Interface (PI). The PI may be or comprise, for example, a Universal Serial Bus (USB), a Universal Asynchronous Receiver-Transmitter (UART), Ethernet, Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), a Peripheral Component Interconnect (PCI), and/or the like.
The payload server 108 comprises processing circuitry capable of supporting multiple missions with resource demands of the payload server AHS 116A-116N. For example, the payload server 108 may perform tasks such as running payload applications for on-orbit operations, data processing, data communication, security, other onboard non-mission critical operations such as offloading one or more tasks from the edge server 112 and/or the on-board controller 104, and/or the like.
The edge server 112 comprises processing circuitry capable of supporting multiple missions with resource demands of the edge server AHS 120A-120N. For instance, the edge server 112 may enable computing through the use of one or more artificial intelligence (AI) and/or machine learning (ML) data models, algorithms, and/or the like. Such AI and/or ML models may enable and/or assist with station keeping, resource optimization, debris avoidance (e.g., when navigating a satellite), decommissioning management, combinations thereof, and/or the like.
The computing nodes such as the on-board controller 104, the payload server 108, and the edge server 112 may be respectively connected to the core AHS 124A-124N, the payload server AHS 116A-116N, and the edge server AHS 120A-120N. In some examples, the computing nodes and the AHS are real target hardware platforms that are interconnected to each other with physical IO buses such as USB, UART, Ethernet, I2C, and/or any other similar communication IO bus. The physical IO bus connections between any of the on-board controller 104, the payload server 108, and the edge server 112 may include low level IO drivers. The low level IO drivers may, in some examples, be abstracted using an IO Hardware Abstraction Layer (HAL).
In some cases, in addition to the IO abstraction of the low level IO drivers, Internet Protocol (IP) based protocols are used over the IO bus interface drivers. For example, Remote Network Driver Interface Specification (RNDIS) drivers or Abstract Control Model (ACM) drivers and/or similar protocols can be used with USB, Point-to-Point Protocol (PPP) drivers and/or similar protocols can be used with UART, and Internet Protocol over Ethernet (IPoE) drivers and/or similar protocols can be used with Ethernet. As a result, the interconnection between the three computing hardware/computing nodes (e.g., the on-board controller 104, the payload server 108, the edge server 112) effectively becomes an IP-based interconnection. This may enable an International Packet Communications Consortium (IPCC) header and frame formatting and application-level security to be built on top of the IP layer.
In some cases, the IO bus interface connections between the on-board controller 104 and the core AHS 124A-124N may implement an IO bus abstraction framework. The IO abstraction may be implemented because the software running on the on-board controller 104 (as well as other software and applications) accesses the core AHS 124A-124N for the purposes of reading and writing data. By abstracting the IO bus, changes to the core AHS 124A-124N will not affect the application running in the on-board controller 104 and accessing the core AHS 124A-124N.
In some cases, the physical IO bus interface connection between the payload server 108 and the payload server AHS 116A-116N, as well as the physical IO bus interface connection between the edge server 112 and the edge server AHS 120A-120N, may or may not implement an IO bus abstraction framework. For example, IO bus abstraction framework may not be implemented in instances where the IO bus driver and the application that access the hardware to read/write data are offered by a hardware vendor. In this example, the application may be treated like a black box, such that the system 100 does not directly access the hardware sub-system for reading and/or writing data. As a result, the abstraction of the IO bus driver may be unnecessary. In some examples, the AHS is changed and the corresponding driver and application to be run in the payload server 108 will also change. Additionally, the AHS may be provided by the application hardware vendor. As a result, in some cases the system 100 may not implement hardware abstraction between the payload server 108 and the payload server AHS 116A-116N and/or between the edge server 112 and the edge server AHS 120A-120N. Additionally or alternatively, the connectivity between the payload server 108 and the payload server AHS 116A-116N and the connectivity between the edge server 112 and the edge server AHS 120A-120N may not require IP-based interconnections (e.g., the PI interfaces may be sufficient). It is to be understood, however, that various embodiments of the present disclosure may implement IO bus abstractions and/or IP-based interconnections between the payload server 108 and the payload server AHS 116A-116N and/or between the edge server 112 and the edge server AHS 120A-120N.
With reference to
With reference to
Each application 304A, 304B may include an application data unit, IPCC header information, and a listener/sender. The listener/sender may enable the applications 304A-304B to interface with the IPCCFW, as illustrated in
The IPCCFW of the payload server 108 and/or the edge server 112 comprises a plurality of layers. The IPCCFW comprises an input output vendor driver (IO-VD) 308, an input output vendor driver porting layer (IO-VDP) 312, an input output hardware abstraction layer (IO-HAL) 316, a network stack 320, and an IPCC library 324. The IO-VD 308 may operate as a physical IO bus interface to enable communication between the on-board controller 104 and the payload server 108 and/or the edge server 112. The IO-VDP 312 may include a wrapper layer and an abstraction layer. On top of the IO-VDP 312, the network stack 320 may be added. The network stack 320 may be RNDIS in the case of a USB interface, PPP in the case of a UART interface, and Point-to-Point Protocol over Ethernet (PPPoE) in the case of an Ethernet interface. The IPCC library 324 may be positioned on top of the network stack 320 in the stack.
The IPCC library 324 may contain various sub-modules, such as an encoder/decoder 328, security 332, a bridge/router/application registration service 336, an IPCC payload 340, IPCC information 344, and a listener/sender 348. The encoder/decoder 328 may encode and/or decode messages or other information communicated between the payload server 108 and/or the edge server 112 and the on-board controller 104. The security 332 may perform authentication of messages or other information communicated with the IPCCFW, and may additionally or alternatively perform ciphering and de-ciphering. The bridge 336 may perform routing and/or packet forwarding of messages or other information communicated via the IPCCFW. In some cases, the bridge 336 may perform application registration services, such as when one or more of the applications 304A-304B are initialized and run on the payload server 108 and/or the edge server 112. The IPCC payload 340 may comprise information being communicated via the IPCCFW. The IPCC information 344 may comprise information pertaining to IPCC or, more generally, to the IPCCFW. For example, the IPCC information 344 may comprise information about the bandwidth capabilities of the IPCCFW.
The listener/sender 348 may be web socket-based (e.g., TCP/IP, UDP/IP) or native Linux IPC message queue. The listener/sender 348 may be similar to or the same as the listener/sender present in one or more of the applications 304A-304B, such that the applications 304A-304B can communicate with the IPCC library 324 (and vice versa). In some cases, the listener/sender 348 and the listener/sender of the applications 304A-304B may be used to establish, maintain, and/or terminate communication between the IPCC library 324 and the applications 304A-304B.
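As a rough sketch of the socket-based listener/sender role described above, the following Python example shows one application-side endpoint receiving a byte stream from another over a local TCP connection. The class and method names are illustrative assumptions, not part of the disclosed IPCCFW.

```python
import socket
import threading


class ListenerSender:
    """Minimal TCP listener/sender sketch (hypothetical names).

    The listener binds to an OS-assigned local port and passes each
    received byte stream to a caller-supplied handler, mirroring the
    listener/sender role that connects an application to the IPCC library.
    """

    def __init__(self, on_message):
        self._srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
        self._srv.listen()
        self.port = self._srv.getsockname()[1]
        self._on_message = on_message
        self._thread = threading.Thread(target=self._serve, daemon=True)
        self._thread.start()

    def _serve(self):
        # Accept a single connection and hand its payload to the handler.
        conn, _ = self._srv.accept()
        with conn:
            data = conn.recv(4096)
            if data:
                self._on_message(data)
        self._srv.close()

    @staticmethod
    def send(port, payload: bytes) -> None:
        """Sender side: open a connection to the listener and transmit."""
        with socket.create_connection(("127.0.0.1", port)) as conn:
            conn.sendall(payload)

    def join(self, timeout=None):
        self._thread.join(timeout)
```

A UDP or native Linux IPC message-queue variant would replace only the transport calls; the listener/handler structure stays the same.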
In instances where the applications 304A-304B are written as virtual machines or otherwise containerized, the communication between the applications 304A-304B and the IPCC library 324 may benefit from IP-based communications. In other cases where the applications 304A-304B are written as native libraries that support standard IPC communications (e.g., message queues), the communication between the applications 304A-304B and the IPCC library 324 may benefit from non IP-based communications. As a result, both Linux IPC messaging and IP-based interfaces may be provided.
When the applications 304A-304B communicate via IPCC, the applications 304A-304B may undergo service registration, and the IPCCFW assigns each application 304A, 304B an application ID. This may occur at the time of Power-on Initialization (PoI) or whenever the application begins initialization. In some cases, the registration may occur through a pre-configuration provided at the build time of the application. The IPCCFW then provides one or more Application Programming Interfaces (APIs) for registration and de-registration. The APIs may be or comprise interfaces rendered on a display for a user to view and/or with which the user may interact. The application may use the APIs for registering and de-registering, and in some cases an initial registration of the application may be required to access the computing nodes (e.g., the payload server 108, the edge server 112, etc.). The application ID may be used for the purposes of communication between the registered application and other applications, as well as for IPCC session maintenance for the application. In some examples, the application ID may be used by the IPCCFW for further IPCC message mapping/forwarding, as well as to restrict application access for security purposes.
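The registration/de-registration flow above can be sketched as a small registry that hands out unique application IDs. This is an illustrative assumption of how such a service could be structured, not the actual IPCCFW registration API.

```python
import itertools


class AppRegistry:
    """Hypothetical sketch of an IPCCFW-style registration service.

    Each application registers and receives a unique application ID,
    which can later be used for message mapping/forwarding and for
    restricting access by unregistered applications.
    """

    def __init__(self):
        self._next_id = itertools.count(1)  # monotonically increasing IDs
        self._apps = {}                     # application ID -> application name

    def register(self, app_name: str) -> int:
        app_id = next(self._next_id)
        self._apps[app_id] = app_name
        return app_id

    def deregister(self, app_id: int) -> None:
        self._apps.pop(app_id, None)

    def is_registered(self, app_id: int) -> bool:
        return app_id in self._apps
```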
When any of the applications 304A-304B wish to send application data unit(s) (ADUs) to the on-board controller 104, the application may provide IPCC header information along with the ADU. The IPCC header information may include a variety of parameters. In one example, the IPCC header includes a source hardware device ID (e.g., originating from the on-board controller 104, the payload server 108, or the edge server 112), a source application or session ID, a destination hardware device ID (e.g., intended for the on-board controller 104, the payload server 108, or the edge server 112), a destination application or session ID, a packet sequence number, an IPCCFW level encryption or decryption (which may be or comprise a binary indicator of whether encryption/decryption is desired at the IPCCFW level), and/or an IPCCFW level message authentication (which may be or comprise a binary indicator of whether message authentication is desired at the IPCCFW level). Such parameters may have different data sizes (e.g., 1 bit, 1 byte, 2 bytes, etc.). It is to be understood that the above parameter information may be the base fields for the IPCCFW, and that additional or alternative IPCC information may be added to support various scenarios and applications.
Once the IPCCFW receives the IPCC header information and the ADU from the application, the IPCC header information may be used to perform several functions. For example, the security 332 of the IPCCFW may use the header information to enable encryption and/or authentication. As another example, the bridge 336 may perform registration services, such as generating an application ID, performing sequence number maintenance, combinations thereof, and/or the like. As yet another example, the IPCC library 324 may perform encoding/decoding of the IPCC header (e.g., using the encoder/decoder 328). The encoder/decoder 328 may treat the data unit as a raw byte stream, generate IPCC data packet(s), and send the IPCC data packet(s) across to the destination hardware device (e.g., if the IPCC designated the on-board controller 104 as the destination hardware device, the IPCCFW sends the packet to the on-board controller 104). The IPCCFW may also remove the IPCC header information and map/forward the ADU to the respective application. The mapping and forwarding may be performed by the bridge 336. For example, when one or more of the applications 304A-304B running on the payload server 108 wants to send a message and/or data to the on-board controller 104, the applications 304A-304B may provide the ADU to communicate with the on-board controller 104 and may send the IPCC header along with the ADU. In some cases, the IPCC header information can be another raw byte stream, but follows the byte order specified by the IPCCFW. The byte stream may enable any one or more of the applications 304A-304B to be containerized such that, for example, the applications 304A-304B do not need to depend on any particular programming language. Additionally or alternatively, such a configuration may enable the applications 304A-304B to be agnostic to the IPCC packet format, IPCC message/header encoding/decoding, combinations thereof, and/or the like.
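The header-plus-ADU framing described above can be sketched with a fixed binary layout over the base header fields. The field widths and ordering below are illustrative assumptions; the actual IPCCFW byte order and field sizes may differ.

```python
import struct

# Hypothetical fixed layout for the base IPCC header fields described above
# (widths are illustrative assumptions, not the actual IPCCFW format):
#   source device ID, source app/session ID,
#   destination device ID, destination app/session ID -> 1 byte each
#   packet sequence number                            -> 2 bytes, big-endian
#   encryption flag, authentication flag              -> 1 byte each
_HEADER = struct.Struct(">BBBBHBB")


def encode_packet(src_dev, src_app, dst_dev, dst_app,
                  seq, encrypt, auth, adu: bytes) -> bytes:
    """Prepend the IPCC header to the ADU, treating the ADU as a raw byte stream."""
    header = _HEADER.pack(src_dev, src_app, dst_dev, dst_app,
                          seq, int(encrypt), int(auth))
    return header + adu


def decode_packet(packet: bytes):
    """Split a received packet back into its header fields and the ADU."""
    fields = _HEADER.unpack_from(packet)
    return fields, packet[_HEADER.size:]
```

Because the framing operates on raw bytes, an application written in any language or container can produce the same packet as long as it follows the specified byte order.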
As previously mentioned, while the discussion of applications 304A-304B is made with respect to the payload server 108, the same features may be applied to the applications 304A-304B run on the edge server 112, as well as with respect to any computing node or computer that uses IPCC interfaces for communication across processing nodes.
With reference to
With reference to
In some cases, one or more application hardware sub-systems (e.g., the payload server AHS 116A-116N, the core AHS 124A-124N, and/or the edge server AHS 120A-120N) can be connected to the main software simulated computing nodes. The application hardware sub-system may be simulated (e.g., a simulated sensor such as a temperature sensor, an IMU, etc.) in a remote computer, with the remote computer connected to the main software simulated computing nodes (e.g., on-board controller 104, payload server 108, or edge server 112) through a TCP/IP socket connection, as illustrated in
With reference to
The input or output data from the application hardware 644 is accessed or retrieved by the AH-VD 636, which invokes the AH-VDP 632. The AH-VDP 632 then invokes an AHAL API which in turn invokes the AH-SAL 628. The AH-SAL 628, after conversion, invokes a socket interface such as a TCP/IP interface 624. As a result, data from the application hardware 644 is transmitted via a socket interface to the soft simulated on-board controller 104 running in a virtual machine in a cloud platform.
With reference to
Turning to
In some cases such as those depicted in
With reference to
The AH-SAL discussed herein (e.g., the AH-SAL 612, the AH-SAL 628, the AH-SAL 812, the AH-SAL 828, etc.) converts the API into a JSON file format (or, more generally, another file format). The conversion may include converting the API name to an API ID with comma separation, and converting API arguments into comma separated arguments with a name followed by a value. The JSON file may then be transmitted via the TCP/IP socket. Similarly, the AH-SAL may parse a received JSON file formatted byte stream and extract the API ID, the number of arguments, the argument values, and/or the like. The AH-SAL may then invoke the respective AHAL API with the extracted parameters. For pointer type arguments, the AH-SAL may allocate memory accordingly, update the data into the allocated memory, and update the address in the pointer variable.
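The AH-SAL conversion described above can be sketched as a pair of serialize/parse helpers: an API call becomes a JSON document carrying the API ID, the number of arguments, and name/value pairs, and the receiving side extracts those fields before invoking the corresponding AHAL API. The field names used here are illustrative assumptions, not the actual AH-SAL schema.

```python
import json


def api_to_json(api_id: int, args: list) -> str:
    """Serialize an AHAL API call into a JSON-formatted string.

    `args` is a list of (name, value) pairs; the frame carries the API ID,
    the argument count, and each argument's name followed by its value.
    """
    return json.dumps({
        "api_id": api_id,
        "num_args": len(args),
        "args": [{"name": n, "value": v} for n, v in args],
    })


def json_to_api(frame: str):
    """Parse a received JSON frame and extract the API ID and arguments.

    The caller can then invoke the respective AHAL API with the
    extracted parameters.
    """
    doc = json.loads(frame)
    assert doc["num_args"] == len(doc["args"])  # sanity-check the frame
    return doc["api_id"], [(a["name"], a["value"]) for a in doc["args"]]
```

For pointer-type arguments, the real AH-SAL additionally allocates memory for the transported data and rewrites the pointer, a step a language with managed memory does not need to model.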
Turning next to
In some cases, the configuration shown in
With reference to
With reference to
In some cases, the configuration shown in
With reference to
With reference to
In some cases, the configuration shown in
Within the remote computer 820, data from the application hardware 1404 can be fetched by the IO-SAL 1028. The IO-SAL 1028 then, after conversion, invokes the TCP/IP 824 to send the data to the payload server 108 and/or the edge server 112 in the cloud platform.
With reference to
Turning to
In some cases, the configuration shown in
With reference to
In some cases, the entire simulation is run as a single instance in the simulation environment, such that one application hardware sub-system (e.g., a temperature sensor) can be tested with the simulation environment. As a result, to test another application hardware sub-system (e.g., an IMU) with the simulation environment, a different instance of the entire simulation may be run. In other cases, there may be a single instance where some or all components will be simulated. In these cases, two, three, or more application hardware sub-systems (e.g., a temperature sensor and an IMU) can be connected in parallel to the same instance of simulation for testing. In some cases, there may be a single generic instance of flight software simulation, and any application hardware sub-system can be tested with the simulation environment. In these instances, the number of application hardware sub-systems may be restricted (e.g., a maximum of six temperature sensors, a maximum of two IMUs, a maximum of two reaction-wheels, etc.).
The IO-SAL (e.g., the IO-SAL 1016, the IO-SAL 1028) may convert API calls to a JSON-file-formatted byte stream such that the API name is converted to an API ID with comma separation, and the API arguments are converted to a comma-separated argument count followed by the argument values. The JSON file is then transmitted via the TCP/IP socket. At the receiving end, the IO-SAL parses the JSON-file-formatted byte stream, extracts the API ID and the arguments, and then invokes the respective IO-HAL API with the appropriate parameters. For pointer-type arguments, the IO-SAL may allocate memory accordingly, write the data into the allocated memory, and update the address in the pointer variable.
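The receiving-end parse-and-dispatch step can be sketched as follows, complementing the encoding side. The dispatch table, the handler name, and the message fields are hypothetical placeholders, not IO-HAL APIs named in the disclosure.

```python
import json

# Hypothetical IO-HAL handler and dispatch table mapping API IDs
# back to local hardware-facing functions.
def io_hal_gpio_write(pin, value):
    return f"gpio[{pin}]={value}"

IO_HAL_DISPATCH = {201: io_hal_gpio_write}

def decode_and_invoke(byte_stream):
    """Parse a JSON-formatted byte stream received over the TCP/IP
    socket, extract the API ID and argument values, and invoke the
    corresponding IO-HAL API with those parameters."""
    msg = json.loads(byte_stream.decode("utf-8"))
    handler = IO_HAL_DISPATCH[msg["api_id"]]
    kwargs = {a["name"]: a["value"] for a in msg["args"]}
    return handler(**kwargs)

raw = json.dumps({"api_id": 201, "num_args": 2,
                  "args": [{"name": "pin", "value": 4},
                           {"name": "value", "value": 1}]}).encode()
result = decode_and_invoke(raw)
```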
Turning to
For the connections between each of the soft simulated main computing hardware (e.g., on-board controller 104, the payload server 108, the edge server 112), IPCCFW may be used on top of an IP connection, with all connections being IP-based. In the full simulation mode, the on-board controller 104 may omit the use of physical IO drivers, and the IO-HAL may bridge the communication with the IPCCFW. The on-board controller 104 may communicate with the payload server 108 and/or the edge server 112 via IP communication without any physical IO drivers. The soft simulated on-board controller may include the IO-HAL and the IO-VDP API to provide TCP/IP socket interfaces that can be used to communicate with AHS as depicted, for example, in the call flow diagram of
The software may be under certain timing constraints or requirements to ensure that the hardware operates correctly. When the driver software tries to remotely interface with the hardware devices, there may be timing issues that arise. To address these issues, the IO-SAL and the AH-SAL APIs and the JSON schemas may be defined to account for the timing differences. For example, each IO-SAL API translation may specify the timing requirements of the current API (e.g., API 1) and a subsequent API (e.g., API 2) execution time. There may be different hardware interface timing requirements that can be addressed using the general schema introduced in the JSON file (or, more generally, any other readable file).
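One way the general schema could carry per-API timing requirements is sketched below. The field names (`min_gap_ms`, `max_gap_ms`) and the builder function are assumptions for illustration; the disclosure does not fix an exact schema.

```python
import json

def with_timing(api_id, args, min_gap_ms=None, max_gap_ms=None):
    """Build one API entry for the JSON schema, annotated with the
    minimum (and optionally maximum) delay, relative to the previous
    API in the chain, that the remote SAL must honor."""
    entry = {"api_id": api_id, "args": args}
    if min_gap_ms is not None:
        entry["min_gap_ms"] = min_gap_ms
    if max_gap_ms is not None:
        entry["max_gap_ms"] = max_gap_ms
    return entry

# Example: API 2 must execute at least 10 ms after API 1.
chain = [with_timing(1, []), with_timing(2, [], min_gap_ms=10)]
doc = json.dumps({"apis": chain})
```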
Turning to
As depicted in
With reference to
As another example, API 2 may have a timing dependency on API 1, such as when API 1 is invoked at a first time and API 2 is to be invoked after a second time, where the time interval in between (e.g., the second time minus the first time) is the minimum interval that must elapse before API 2 is invoked for the application hardware to perform. Such an execution may be depicted by the call flow diagram of
As yet another example, API 2 may have a timing dependency on API 1, such as when API 1 is invoked at a first time and API 2 is to be invoked after a second time but before a third time, where the interval between the first time and the second time is the minimum that must elapse before invoking API 2, and where API 2 is to be invoked within the interval given by the third time minus the first time, for the application hardware to perform.
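The two timing dependencies above amount to a window check: API 2 may only run once the minimum gap has elapsed, and (in the second example) before the maximum window closes. A minimal sketch, with hypothetical names and times in seconds:

```python
def may_invoke_api2(t1, now, min_gap, max_gap):
    """Check whether API 2 may be invoked at time 'now', given that
    API 1 was invoked at t1: at least min_gap must have elapsed, and
    no more than max_gap (the third time minus the first time) may
    have passed."""
    elapsed = now - t1
    return min_gap <= elapsed <= max_gap

# Within the window: 15 ms after API 1, window is [10 ms, 50 ms].
ok = may_invoke_api2(t1=0.0, now=0.015, min_gap=0.010, max_gap=0.050)
# Outside the window: 60 ms after API 1 is too late.
too_late = may_invoke_api2(t1=0.0, now=0.060, min_gap=0.010, max_gap=0.050)
```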
With reference to
In some examples of timing requirements for API execution, the hardware may require an immediate or time-constrained acknowledgement or response from the software. In these examples, the IO-SAL/AH-SAL running in the SaaS platform can send an acknowledgement/response API ahead of time to the IO-SAL/AH-SAL running on the remote computer. The information sent may include the expected hardware event requiring the acknowledgement/response, and an API ID with the acknowledgement/response encoded in the JSON file with the timing constraints. As depicted in
With reference to
In some embodiments, API 1 and API 2 may both be sending APIs (e.g., APIs that send information to the application hardware), may both be receiving APIs (e.g., APIs that receive information from the application hardware), or one may be a sending API while the other is a receiving API. In some cases, one or more APIs may be time-dependent on other APIs, and the APIs may be chained one after another, in the respective order, in the same JSON file at the sender-side IO-SAL/AH-SAL. The receiving-side IO-SAL/AH-SAL then handles the timing reference required to execute the APIs in the given order and with the given time reference. In some examples, the IO-SAL and/or the AH-SAL on the remote node/computer may be built with a timer-based sequencer and scheduler to handle the back-to-back and timing-dependent API executions.
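A timer-based sequencer of the kind described can be sketched as a priority queue of (offset, call) pairs executed against a monotonic clock. The class and its interface are hypothetical, chosen only to illustrate ordered, timing-referenced execution.

```python
import heapq
import time

class ApiSequencer:
    """Minimal timer-based sequencer: executes chained API calls in
    order, honoring each call's time offset from the chain start."""
    def __init__(self):
        self._queue = []  # entries: (due_offset_seconds, seq_no, callable)
        self._seq = 0
    def schedule(self, offset_s, fn):
        # seq_no breaks ties so equal offsets keep insertion order.
        heapq.heappush(self._queue, (offset_s, self._seq, fn))
        self._seq += 1
    def run(self):
        start, results = time.monotonic(), []
        while self._queue:
            due, _, fn = heapq.heappop(self._queue)
            delay = due - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)  # wait until this API's time reference
            results.append(fn())
        return results

seq = ApiSequencer()
seq.schedule(0.0, lambda: "api1")
seq.schedule(0.02, lambda: "api2")  # api2 runs ~20 ms after api1
order = seq.run()
```

In the described system, the callables would be the locally invoked IO-HAL/AH-HAL APIs extracted from the chained JSON file.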
In some cases, such as in a remote connectivity environment or a simulation environment, network transfer latency may be estimated and pre-configured in the IO-SAL and AH-SAL, or may be dynamically monitored and measured by the SaaS platform and dynamically configured/changed. If the network latency is configured to be greater than or equal to the timing requirements (e.g., the latency is greater than or equal to the difference between the second time at which API 2 is executed and the first time at which API 1 is executed), then the IO-SAL and/or AH-SAL running on the SaaS platform may sequence the timing-dependent APIs in a single JSON file with the required timing details and expect the IO-SAL and/or AH-SAL running in the remote node/computer to handle the timing constraints required to run the APIs locally while interfacing with the application hardware. If the network latency is configured to be less than the timing requirements (e.g., the latency is less than the difference between the second time at which API 2 is executed and the first time at which API 1 is executed), the network latency may be treated as being in a safe latency zone, and, if the timing requirement can be met by remotely sequencing the APIs from the SaaS platform, the IO-SAL and/or the AH-SAL running on the SaaS platform may maintain and manage the timing reference for the APIs. The IO-SAL and/or the AH-SAL running on the SaaS platform may sequence the APIs from the SaaS platform itself. In other words, the low network latency may enable the APIs to be managed without needing to send a single JSON file and expect the local SAL to manage the timing requirement. In other examples, a user may be able to configure the SaaS platform such that the timing requirements are satisfied. In some cases, the delivery of the JSON files may be done asynchronously (e.g., a first request in a first JSON file is delivered at a first time, and a second request in a second JSON file is delivered later).
In other cases, the delivery of the JSON files may be done synchronously.
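The latency comparison above reduces to a single decision: when measured latency meets or exceeds the inter-API gap, chain the APIs in one JSON file and let the remote SAL enforce timing locally; otherwise the SaaS-side SAL may keep the timing reference itself. A hypothetical sketch (function and mode names are illustrative):

```python
def choose_sequencing(latency_s, gap_s):
    """Decide where timing-dependent APIs should be sequenced,
    per the latency comparison described in the text."""
    if latency_s >= gap_s:
        # Latency too high to time APIs remotely: send one JSON file
        # with the timing details and enforce the gap on the remote node.
        return "remote_local_sequencing"
    # Safe latency zone: the SaaS-side SAL maintains the timing
    # reference and sequences the APIs from the platform itself.
    return "saas_side_sequencing"

mode_far = choose_sequencing(latency_s=0.050, gap_s=0.010)
mode_near = choose_sequencing(latency_s=0.002, gap_s=0.010)
```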
The SaaS and/or cloud platforms discussed herein may be configured to permit multiple users (e.g., a primary user, a secondary user, etc.) to access the SaaS and/or cloud platform. For a multi-user connection with the SaaS platform, the connections between nodes may be peer-to-peer and real-time (or near real-time). The real-time interactions and peer-to-peer connectivity requirements between the nodes may use peer-to-peer TCP/IP or UDP/IP socket connections. The SaaS platform may enable the user to log in (e.g., to a remote node and/or directly on to the SaaS platform), and, for each user session, an IP session may be maintained for the connection.
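The per-user-session bookkeeping implied above can be sketched as follows. Socket setup is omitted, and the class, field names, and user labels are hypothetical; the sketch only shows one session record maintained per logged-in user.

```python
class SessionManager:
    """Track one IP session per logged-in user, as described for
    multi-user access to the SaaS platform."""
    def __init__(self):
        self._sessions = {}
    def login(self, user, transport="tcp"):
        # One peer-to-peer TCP/IP or UDP/IP session per user login.
        self._sessions[user] = {"transport": transport, "active": True}
        return self._sessions[user]
    def logout(self, user):
        self._sessions.pop(user, None)
    def active_users(self):
        return sorted(self._sessions)

mgr = SessionManager()
mgr.login("primary_user", "tcp")
mgr.login("secondary_user", "udp")
users = mgr.active_users()
```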
Any of the steps, functions, and operations discussed herein can be performed continuously and automatically.
The exemplary systems and methods of this disclosure have been described in relation to a virtual FlatSat. However, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claimed disclosure. Specific details are set forth to provide an understanding of the present disclosure. It should, however, be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific detail set forth herein.
Furthermore, while the exemplary embodiments illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a server or communication device, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system.
Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire, and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
While the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configuration, and aspects.
A number of variations and modifications of the disclosure can be used. It would be possible to provide for some features of the disclosure without providing others.
In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, a special purpose computer, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Exemplary hardware that can be used for the present disclosure includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
Although the present disclosure describes components and functions implemented in the embodiments with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present disclosure.
The present disclosure, in various embodiments, configurations, and aspects, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, sub-combinations, and subsets thereof. Those of skill in the art will understand how to make and use the systems and methods disclosed herein after understanding the present disclosure. The present disclosure, in various embodiments, configurations, and aspects, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments, configurations, or aspects hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease, and/or reducing cost of implementation.
The foregoing discussion of the disclosure has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the disclosure are grouped together in one or more embodiments, configurations, or aspects for the purpose of streamlining the disclosure. The features of the embodiments, configurations, or aspects of the disclosure may be combined in alternate embodiments, configurations, or aspects other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment, configuration, or aspect. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Moreover, though the description of the disclosure has included description of one or more embodiments, configurations, or aspects and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights, which include alternative embodiments, configurations, or aspects to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges, or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges, or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
The phrases “at least one,” “one or more,” “or,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more,” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising,” “including,” and “having” can be used interchangeably.
The term “automatic” and variations thereof, as used herein, refers to any process or operation, which is typically continuous or semi-continuous, done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.”
Aspects of the present disclosure may take the form of an embodiment that is entirely hardware, an embodiment that is entirely software (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Any combination of one or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The terms “determine,” “calculate,” “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.
The present application claims the benefits of and priority, under 35 U.S.C. § 119(e), to U.S. Provisional Application Ser. No. 63/389,320, filed on Jul. 14, 2022, entitled “VIRTUAL FLATSAT AND DISTRIBUTED DIGITAL NODES.” The entire disclosure of the application listed above is hereby incorporated by reference, in its entirety, for all that it teaches and for all purposes.
Number | Date | Country
---|---|---
63389320 | Jul 2022 | US