Wireless connectivity continues to evolve to meet demands for ubiquity, convenience, reliability, speed, responsiveness, and the like. For example, each new generation of cellular communication standards, such as the move from 4G/LTE (fourth generation long-term evolution) networks to 5G (fifth generation) networks, has provided a huge leap in capabilities along with new and increasing demands on the infrastructures that enable those networks to operate. For example, 5G supports innovations, such as millimeter-wave frequencies, massive MIMO (Multiple Input Multiple Output), and network slicing, which enhance connectivity for unprecedented numbers of devices and data-intensive applications.
More recently, innovations in 5G networking (and its successors) have expanded from terrestrial-based communication infrastructures to so-called non-terrestrial network (NTN) infrastructures. NTN infrastructures leverage satellites and high-altitude platforms to extend 5G coverage and capabilities, such as to serve remote and otherwise underserved areas. Effective deployment of NTN solutions can help support connectivity and applications for rural users, emergency responders, global Internet-of-Things (IoT) deployments, etc.
However, non-terrestrial communications carry complexities and design concerns that are not present in terrestrial-based communications, which can add significant technical hurdles to NTN deployments. For example, effective ground-to-satellite communication involves accounting for orbital dynamics, handovers and/or other transitions between satellites, path loss, propagation delay, atmospheric conditions, inter-satellite and/or inter-beam interference, spectrum and regulatory constraints, and other considerations. New approaches continue to be developed to find technical solutions for overcoming, or at least mitigating, these and other technical hurdles.
Systems and methods are described herein for providing network and protocol architectures to achieve efficient high speed data services in an integrated terrestrial-non-terrestrial network (iTNTN). As used herein, an iTNTN can include at least a non-geostationary orbit (NGSO) satellite system and a satellite radio access network that uses terrestrial (e.g., 5G) standards and protocols. Embodiments specially configure packet-based routing and dynamic cell-CU-DU (cell to centralized unit to distributed unit) association to accommodate dynamically changing LEO satellite locations and other iTNTN characteristics. These and other configurations are used to enable features, including end-to-end IP data and Layer 2 data services, integrated LEO-GEO (low-Earth orbit and geosynchronous Earth orbit) and LEO-MEO (low-Earth orbit and medium-Earth orbit) services, direct UT-UT (user terminal to user terminal) services, and resource efficient multicast services.
A further understanding of the nature and advantages of various embodiments may be realized by reference to the following figures. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
Turning first to
The network architecture 100 may include one or more user terminals in designated cells 102 illuminated by a satellite network 104 including a plurality of satellites (104-1, 104-2, 104-3 . . . 104-N) communicatively coupled with a ground network. The ground network includes a satellite radio access network (RAN, SRAN); a global network operations center (GNOC) 108; a global resource manager (GRM) 110 (which can include at least a route determination function (RDF) module); and a core network (CN). The SRAN can include one or more satellite network nodes (SNNs) 106 (also referred to herein as SNN sites), such as SNN-A 106-1 and SNN-B 106-2, and an anchor node 112 (also referred to herein as AN). The CN can include an access and mobility function (AMF) module 114, one or more user plane function (UPF) modules 116, a session management function (SMF) 120, and a multicast gateway (MCG) 122. For example, a first UPF 116-1 is in a first country, and a second UPF 116-2 is in a second country.
The illustrated SNNs 106 can be implemented by any suitable network component for facilitating communications and data exchange between the satellites of the satellite network 104 and the ground network infrastructure. For example, the SNNs implement functions relating to relaying data between the satellite network 104 and the ground network, including managing uplink and downlink communications. The SNNs 106 can also help to ensure compatibility between satellite communication protocols and terrestrial network protocols.
User terminals in the cells 102 communicate with the ground network through the satellite network 104. At any given instant of time, the user terminals may communicate on a Ku user link of a satellite, and the ground network/node may communicate on a Ka/V/Q feeder link of a satellite in the satellite network 104. Other implementations can use any feasible spectrum bands for communications.
As illustrated, the satellites of the satellite network 104 can be implemented as a constellation of satellites. The satellite (for example, SAT3 104-3) with which the ground network/node communicates may be different from the satellite (for example, SAT1 104-1) with which the user terminal (controlled by that ground node) may be communicating. For example, the user terminals are communicating with the SAT1 104-1 to reach the ground node (for example, SNN-A 106-1 or SNN-B 106-2) that is communicating with the SAT3 104-3. Inter-satellite links (ISLs) may be used to establish connectivity between the satellites (104-1, 104-2, 104-3 . . . 104-N) in the satellite network 104. In an example, a lightweight software defined satellite networking concept may be used to find the best route between two satellites in the satellite network 104.
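To make the routing concept concrete, the following is a minimal, hypothetical sketch (in Python) of one way a software-defined routing function might select a lowest-cost path across ISLs. The graph, link costs, and function name are illustrative assumptions, not part of any standard or of the architecture 100:

```python
import heapq

def best_isl_route(isl_graph, src, dst):
    """Dijkstra shortest-path over an inter-satellite-link graph.

    isl_graph: dict mapping satellite id -> {neighbor id: link cost}, where
    the cost might model propagation delay, load, or both (an assumption).
    Returns (total cost, satellite sequence) from src to dst.
    """
    frontier = [(0.0, src, [src])]
    visited = set()
    while frontier:
        cost, sat, path = heapq.heappop(frontier)
        if sat == dst:
            return cost, path
        if sat in visited:
            continue
        visited.add(sat)
        for neighbor, link_cost in isl_graph.get(sat, {}).items():
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Illustrative three-satellite constellation with ISL delays in milliseconds.
graph = {
    "SAT1": {"SAT2": 4.0, "SAT3": 9.0},
    "SAT2": {"SAT1": 4.0, "SAT3": 3.5},
    "SAT3": {"SAT1": 9.0, "SAT2": 3.5},
}
print(best_isl_route(graph, "SAT1", "SAT3"))  # -> (7.5, ['SAT1', 'SAT2', 'SAT3'])
```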
The SNN-A 106-1 and the SNN-B 106-2 may communicate with the SAT3 104-3 through active feeder links, such as the two active feeder links shown in
A person of ordinary skill in the art will understand that there may be any number of user terminals, satellites, or other components in the network architecture 100. As used herein, the user terminal may refer to a wireless device and/or a user equipment (UE). The terms “computing device,” “wireless device,” “user device,” and “user equipment (UE)” may be used interchangeably throughout the disclosure. A user device or the UE may include, but not be limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, etc.
In an example, the user devices may communicate with the satellite network 104 and/or the ground network and/or the core network via a set of executable instructions residing on any operating system. In an example, the user devices may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid such as a microphone, a keyboard, and input devices for receiving input from a user such as a touch pad, a touch-enabled screen, an electronic pen, etc. A person of ordinary skill in the art will appreciate that the user devices are not restricted to the mentioned devices, and various other devices may be used.
The satellite network 104 may be communicatively coupled to the user devices in the cell 102 via a network. The satellite network 104 may communicate with the user devices in a secure manner via the network. The network may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, and/or process one or more messages, packets, signals, some combination thereof, or so forth. The network may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. In particular, the network may be any network over which the user devices communicate with the satellite network 104.
Although
In the illustrated end-to-end protocol stack for user plane 200, packet data convergence protocol (PDCP) and service data adaptation protocol (SDAP) layers of the protocol stack (e.g., the 5G protocol stack) are implemented between the UT 202 and an anchor node 206-2 of a satellite radio access network (SRAN) 206. In earlier generations of mobile networks, the RAN was implemented as a GPRS (general packet radio service) RAN. In 5G and next-generation mobile networks, the RAN is sometimes referred to as an NG-RAN (next-generation RAN), and typically includes a base station (e.g., a gNodeB, or gNB), an evolved NodeB (eNB), a next-generation evolved NodeB (ng-eNB), or the like. These components manage communication between the network and mobile devices using new radio (NR) technologies. The SRAN 206 can also implement one or more satellite network nodes (SNNs) 206-1. The illustrated SNN 206-1 may be an implementation of the SNN-A 106-1 or the SNN-B 106-2 of
The PDCP layer implements access stratum encryption, integrity protection, and header compression. Both IP header compression and Layer-2 (L2) header compression are supported by the anchor node 206-2. The interface between the PHY, MAC, and RLC layers in the satellite user-link and the PDCP and SDAP layers in the anchor node 206-2 is based on 3GPP "F1" interface specifications (e.g., as defined by 3GPP TS 38.470). Typically, the PHY, MAC, and RLC layers are implemented in a distributed unit (DU) of a 5G architecture, while the PDCP and SDAP layers are implemented in a central unit (CU) of the 5G architecture. As such, the illustrated end-to-end protocol stack for user plane 200 effectively splits the CU and DU functions between the satellite and ground portions of the network.
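As a compact illustration of this split, the following hypothetical Python mapping records which layers ride on the satellite DU and which remain in the ground-based CU (anchor node); the names are illustrative only, not part of any specification:

```python
# Hypothetical placement table for the CU/DU split described above.
LAYER_PLACEMENT = {
    "PHY": "satellite DU",
    "MAC": "satellite DU",
    "RLC": "satellite DU",
    "PDCP": "ground CU (anchor node)",
    "SDAP": "ground CU (anchor node)",
}

def placement(layer: str) -> str:
    """Return where a given protocol layer is implemented in this split."""
    return LAYER_PLACEMENT[layer.upper()]

assert placement("pdcp") == "ground CU (anchor node)"
assert placement("mac") == "satellite DU"
```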
As illustrated, the SNN 206-1 and the anchor node 206-2 may be connected via a network, such as a LAN or WAN. For example, as illustrated in
The anchor node 206-2 may be connected to multiple UPFs 208. To avoid overcomplicating the Figure, only a single UPF 208 is shown. For example, separate UPFs 208 can be implemented in different countries to permit user terminal position-based legal interception. The anchor node 206-2 may route sessions belonging to the user terminal 202 to an appropriate UPF 208 based on a location of the user terminal 202. Further, the UPF 208 may be connected to a server 210 to provide appropriate services to the user terminal 202.
For example, a protocol for a management plane between the user terminal 202 and the device management server 210 may use the end-to-end protocol stack for user plane 200. This may be carried over a separate data network name (DNN) between the user terminal 202 and the device management server 210. Air interface protocols may permit the user terminal(s) 202 to establish IP connections to multiple DNNs, and one of these DNNs may be for the management plane. Management plane protocol stacks can also be provided between satellites and ground elements, such as a route determination function (RDF). In addition to determining routes for normal operation, the RDF can be augmented to deal with inter-constellation interference proactively. For example, the RDF can determine and execute alternate routes when it is predicted that there will be strong in-line interference with a satellite from a different constellation using the same band. To further improve this function, machine learning techniques can be employed whereby the signature of interference from an interferer is learned over time and applied in the re-routing algorithm.
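For illustration only, the following sketch shows one way an augmented RDF might prefer routes that avoid predicted in-line interference; the data structures, threshold, and learned forecast are assumptions, not a description of any deployed algorithm:

```python
def plan_route(candidate_routes, interference_forecast, t, threshold=0.8):
    """Return the first candidate path whose satellites are all predicted to
    be below the interference threshold at time t; otherwise fall back to the
    default route.

    candidate_routes: ordered list of satellite-id lists (best route first).
    interference_forecast: dict mapping (satellite_id, time_bucket) -> score,
    e.g., a score learned over time from observed interferer signatures.
    """
    for route in candidate_routes:
        if all(interference_forecast.get((sat, t), 0.0) < threshold for sat in route):
            return route
    return candidate_routes[0]

routes = [["SAT1", "SAT3"], ["SAT1", "SAT2", "SAT3"]]
forecast = {("SAT1", 10): 0.95}  # SAT1 predicted in-line with an interferer at t=10
print(plan_route(routes, forecast, 10))       # both routes impaired -> default route
print(plan_route(routes, {("SAT1", 10): 0.1}, 10))  # clean -> ['SAT1', 'SAT3']
```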
As illustrated, the radio resource control (RRC) and PDCP layers are between a UT 302 and a SRAN 306. The PHY, MAC, and RLC layers are between the UT 302 and a user-link of each satellite of a satellite network 304. The interface between the PHY, MAC, and RLC layers of the satellite user-link and the PDCP and SDAP layers in an anchor node 306-2 is based on 3GPP F1-AP interface specifications. The interface between the SRAN 306 and core network functions (e.g., the 5G core network) is based on standard terrestrial NG-AP protocols, such as defined in 3GPP TS 38.413 standards. For example, the SRAN 306 communicates with the AMF 308 using a standard N2 interface, and the AMF communicates with a session management function 310 in the 5G core network using a standard N11 interface. The Ka-band feeder link can be standards-based, such as based on a standard DVB-S2X feeder link. The SRAN 306 can be implemented to accommodate other variants of forward error correction (FEC), such as Consultative Committee for Space Data Systems (CCSDS) FEC for optical communications, or 5G-NR FEC.
In some implementations, an Ethernet preamble and start-of-frame delimiter may not be transmitted over the satellite network 406. For uplink traffic, the user terminal 404 may strip the preamble, start-of-frame delimiter, and frame check sequence (FCS) from the Ethernet frame. For downlink traffic, the UPF 410, acting as a packet data unit (PDU) session anchor, may strip the preamble, start-of-frame delimiter, and FCS from the Ethernet frame.
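A minimal sketch of the framing arithmetic, assuming standard Ethernet framing (7-octet preamble, 1-octet start-of-frame delimiter, 4-octet FCS); the function names are illustrative:

```python
import zlib

PREAMBLE = b"\x55" * 7   # seven preamble octets
SFD = b"\xd5"            # start-of-frame delimiter
FCS_LEN = 4              # CRC-32 trailer

def strip_for_satellite_transport(raw_frame: bytes) -> bytes:
    """Drop the preamble, SFD, and FCS so that only the MAC header and
    payload need to cross the satellite network."""
    return raw_frame[len(PREAMBLE) + len(SFD):-FCS_LEN]

def restore_on_egress(stripped: bytes) -> bytes:
    """Reattach the preamble/SFD and a recomputed FCS for local delivery.
    Software framing commonly appends the CRC-32 least-significant byte
    first (an assumption matching typical Ethernet practice)."""
    fcs = zlib.crc32(stripped).to_bytes(4, "little")
    return PREAMBLE + SFD + stripped + fcs
```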
As illustrated, in accordance with Metro Ethernet specifications, a customer service connection may be made at a user-network interface (UNI). The satellite network 406 between UNIs may be made through an Ethernet Virtual Connection (EVC). The EVC may maintain an Ethernet MAC address and unaltered frame contents, which can enable establishing layer 2 connectivity between locations. This can also ensure that traffic moves between the intended UNIs.
The direct discovery function (DDF) module 608 may be implemented as a 5G Direct Discovery Name Management Function (DDNMF), if the 5G core network supports it. DDNMF is defined in 3GPP as a proximity services feature. A primary use case for direct UT-to-UT sessions is to prevent the ability to intercept communications between two user terminals on the ground. However, there are use cases where these communications must be allowed to be intercepted on the ground. In such cases, the ingress and egress satellites are provided with a second route such that the satellites replicate packets from a UT to two routes (one to the destination UT and another to the SRAN). In addition, the security key used for the UT-to-UT route and the UT-to-SRAN route will be the same. The control plane traffic for a direct UT-to-UT session may still be through the SRAN and may use the end-to-end protocol stack for control plane 300 illustrated in
The flow 600 is illustrated as including steps "A1"-"A13." At steps "A1," a source user terminal (UTo) 602 and a destination user terminal (UTt) 612 may each attach and establish a PDU session with a core network (CN) 606. For example, at step "A1o," the UTo 602 may attach and establish a PDU session with the CN 606 (or a particular CN 606o, not explicitly shown) via a source satellite (SAT1) 604, and the UTt 612 may attach and establish a PDU session with the CN 606 (or a particular CN 606t, not explicitly shown) via a destination satellite (SATk) 610. For example, in this step, each terminal can establish a radio link that allows it to begin data communication and can then establish a PDU session for specific data services. At steps "A2o" and "A2t," the UTo 602 and UTt 612 may register with the network, respectively. This can involve each terminal identifying and authenticating itself.
At step "A3," the UTo 602 and UTt 612 may perform registration with the DDF 608. In some cases, registration with the DDF 608 can occur at a different time (e.g., at an earlier step). In the illustrated case, registration with the DDF 608 occurs directly after establishing a PDU session, such as to facilitate utilization of network services by the user terminals prior to engaging in local direct communications and/or for other reasons. At step "A4," the UTo 602 may perform UTt discovery via the DDF 608. The DDF 608 may send a UTt discovery response at step "A5," if the DDF 608 has up-to-date information on the UTt 612. Further, at step "A6," the DDF 608 may send a presence query for the UTt 612 to the core network 606. In an example, if the UTt 612 is in an idle mode, the core network 606 may page the UTt 612 at step "A7." At step "A8," the UTt 612 may send a paging response to the core network 606. Based on the paging response, at step "A9," the core network 606 may send a presence query to the UTt 612.
Further, at step "A10," in response to the presence query at step "A9," the UTt 612 may update its contact information at the DDF 608. Based on the updated contact information of the UTt 612, the DDF 608 may send a UTt discovery response to the UTo 602 at step "A11." At step "A12," the UTo 602 may establish a security association with the UTt 612 by sharing security keys. At step "A13," direct UT-to-UT connectivity may be established, and the UTo 602 and UTt 612 may begin to perform data transfers.
Some embodiments described herein provide integration of low Earth orbit (LEO) and geosynchronous Earth orbit (GEO) operation with user terminals that are capable of receiving on both GEO and LEO links. Some such embodiments assume that the user terminals only transmit on LEO links. Some such embodiments implement integrated LEO/GEO operations using dual connectivity features based on those defined in 3GPP TS 37.340. One technical difficulty with such an integrated LEO/GEO approach is that, when a user terminal is receiving on the GEO link, the network will expect the user terminal to transmit feedback to the GEO gateway. For example, receipt on the GEO link may require the user terminal to provide feedback to the GEO gateway at MAC, RLC, and RRC layers of the 5G control plane protocol stack in the return link. Since the user terminal does not have a direct return link to the GEO gateway (assuming the user terminal is only capable of transmitting on LEO links), embodiments provide protocol architecture support for achieving the return link to the GEO gateway.
In the illustrated architecture 700, the user terminal 702 includes a GEO-UT control plane protocol stack 706 and a LEO-UT data plane protocol stack 708. The GEO-UT control plane protocol stack 706 can be an implementation of the corresponding portion of
In the forward-link direction, signals (e.g., GEO control signals, data signals, etc.) are sent from the GEO-GN 704 up to a GEO satellite and down to the user terminal 702. For example, the GEO-GN control plane protocol stack 712 can transmit through forward-link components (e.g., forward-link RLC, MAC, and PHY layers), and the GEO-UT control plane protocol stack 706 can receive through corresponding forward-link components (e.g., forward-link PHY, MAC, and RLC layers).
In the return-link direction, it is assumed that signals cannot be transmitted from the user terminal 702 back to the GEO-GN 704. Instead, return-link signals (e.g., LEO control signals, data signals, etc.) are transmitted from the user terminal 702 via a LEO system data plane. Return-link signals can be transmitted through return-link components of the GEO-UT control plane protocol stack 706 (e.g., return-link RLC and MAC layers) and into and through components of the LEO-UT data plane protocol stack 708 (e.g., IP, SDAP/PDCP, RLC, MAC, and PHY layers). The return-link signals can be received through corresponding components of the satellite user-link 714 and LEO-GN 710 (e.g., return-link PHY, MAC, RLC, SDAP/PDCP, and IP layers). The return-link signals can be forwarded from the LEO-GN 710 to the core network 718 (e.g., at the IP layer).
The core network 718 can then forward return-link control plane signals to the GEO-GN 704, where it can be received by return-link components of the GEO-GN control plane protocol stack 712 that correspond to those of the GEO-UT control plane protocol stack 706 (e.g., return-link MAC and RLC layers). In this way, a return-link control plane path is established for the GEO system via the LEO data path. As illustrated, instead of a PHY layer in the return-link portions of the GEO-UT control plane protocol stack 706 and the GEO-GN control plane protocol stack 712, each can include a control application (Control-App).
Bandwidth limits tend to impose significant constraints on the capacity of a communication network. One technique for increasing bandwidth efficiency is to support multicast communications. Multicast is a network communication technique whereby data is simultaneously sent from one source to multiple destinations (e.g., destinations that have opted to receive the multicast stream), thereby efficiently distributing information to multiple receivers using reduced network bandwidth. Many Internet Protocol (IP) networks support so-called IP multicast, by which IP datagrams are efficiently sent to groups of interested receivers with a single transmission. For example, special IP address ranges can be designated for multicast, such as 224.0.0.0 to 239.255.255.255 in IPv4.
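For example, Python's standard ipaddress module can verify whether an address falls in that IPv4 multicast range (224.0.0.0/4):

```python
import ipaddress

for addr in ("224.0.0.1", "239.255.255.255", "192.0.2.10"):
    print(addr, "is multicast:", ipaddress.ip_address(addr).is_multicast)
# 224.0.0.1 is multicast: True
# 239.255.255.255 is multicast: True
# 192.0.2.10 is multicast: False
```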
In the context of satellite networks, multicast can be used to send a same transmission to multiple user terminals serviced by a same beam of a same satellite (i.e., in a same beam coverage area), thereby effectively sharing the same bandwidth across multiple user terminals. Satellite multicast has long been a part of GEO satellite networks. For example, in many GEO satellite networks, stationary user terminals point at a designated GEO satellite, which communicates with a designated one or more GEO gateways. Because the satellite is in geosynchronous orbit, forward- and return-link communications with the user terminal typically remain serviced by a same GEO satellite and a same one or more GEO gateways (e.g., except in relatively rare cases of gateway failures, or the like). Even with mobile user terminals, the satellite-to-gateway links remain relatively static, and handoffs of the user link tend to be relatively infrequent. For example, an aircraft making a transcontinental flight would tend to remain within the coverage area of a single GEO satellite for most or all of its flight.
Thus, in GEO satellite network contexts, beam assignments for user terminals tend to remain mostly constant (static). It can be relatively straightforward to establish multicast groups in the context of static (or mostly static) beam assignments. For example, for a given transmission, multicast groups can be determined by determining which user terminals may be interested in receiving that transmission and which groups of those user terminals share a same beam assignment. Subsequently, that transmission can be assigned to a particular GEO-gateway for transmission via a particular GEO satellite to a particular group of interested user terminals sharing a particular beam. Even if there is some overhead involved with setting up the multicast groups and corresponding multicast streams, those groups and streams will remain static, or relatively static, over the course of the transmission.
Multicast implementations can be much more challenging in the context of satellite networks having a constellation of LEO satellites, because the positions of LEO satellites are constantly changing with respect to the surface of the Earth as they traverse non-geosynchronous orbits. As the LEO satellites' positions change, so do their beam coverage areas, such that user terminals are continuously being serviced by different satellites and by different LEO gateways. Similar to the GEO context, establishing a multicast group involves determining which groups of user terminals sharing a same user beam may be interested in a same transmission. However, in the LEO context, this becomes a highly dynamic determination. At any given time, transmitting a stream to a particular user terminal may involve transmitting that stream from a different gateway, through a different LEO satellite, and/or to a different beam, which can change which groups of user terminals can be grouped together into a multicast group.
IP multicast has been proposed in 3GPP 5G-NR standards as part of Multicast Broadcast Services (MBS), such as defined in 3GPP TS 23.247. The approach proposed in the 3GPP standards implements a multicast architecture in the core network. To date, this approach has gained very little traction, at least because the approach involves adding special new network functions on the radio side of the core network to be compatible with multicast. Those functions can add complexity and cost to the network deployment. For example, the 3GPP standards propose a new multicast broadcast UPF (MB-UPF) and a new MB-SMF, along with new interfaces to support those new functions. The roles of the new network functions and interfaces include determining and managing a multicast solution that accounts for the dynamically updating locations and assignments of user terminals, satellites, gateways, beams, cells, etc.
Embodiments described herein provide an efficient IP multicast approach that can be successfully deployed in a LEO-based satellite communication network. The approach described herein is transparent (agnostic) to the core network. As such, a customer can provide its own core network, and the IP multicast-enabled LEO network deployment can transparently be attached thereto. Embodiments of the described approach use unicast bearers all the way to the gateway, and the gateway then determines which unicast bearers to fuse together into fewer multicast bearers. Thus, rather than attempting a static determination in the core network, the described approach can make dynamic determinations at the gateways.
Turning first to
The SRAN 808 can have a tunneled connection with the UPF 810 and can forward the IGMP membership report to the UPF 810 via the tunneled connection as a unicast communication (e.g., using the GPRS tunneling protocol, or other suitable protocol). The UPF 810 can forward the unicast message containing the IGMP membership report to the multicast gateway 812. In some implementations, the message is sent from the UPF 810 to the multicast gateway 812 using a combination of the user datagram protocol (UDP) and IP, or UDP/IP. UDP is generally a connectionless transport layer protocol that allows for sending of datagrams without first establishing a connection between a sender and receiver. UDP is fast and efficient, but it does not guarantee reliable delivery, ordering, or error checking of packets. IP can be made responsible for addressing and routing packets of data to help ensure that they travel across networks and arrive at their correct destinations. The combined UDP/IP protocol can use IP to handle the delivery of UDP datagrams with minimal overhead and without relying on pre-establishment of reliable sessions or connections.
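As a sketch of that last hop, a UDP/IP forward needs no session setup; the host, port, and function name below are illustrative assumptions:

```python
import socket

def forward_to_multicast_gateway(igmp_report: bytes, mcg_host: str, mcg_port: int) -> None:
    """Forward an IGMP membership report to the multicast gateway as a single
    UDP/IP datagram; no connection is established first (UDP is connectionless)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(igmp_report, (mcg_host, mcg_port))

# e.g., forward_to_multicast_gateway(report_bytes, "198.51.100.7", 5000)
```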
As illustrated, in the control plane shown by
The multicast gateway 812 can communicate at least some of the MMI to the multicast content server 814. In some implementations, communications between the multicast gateway 812 and the multicast content server 814 use Protocol Independent Multicast-Sparse Mode (PIM-SM). For example, PIM-SM can be used to construct a multicast tree (e.g., a source-based tree).
Turning to
The multicast gateway 812 receives the multicast content, replicates it, encapsulates it in unicast headers, and sends a point-to-point (PTP) stream to each group member over the core network (e.g., including UPF 810). For example, the multicast gateway 812 sends unicast transmissions according to the UDP/IP protocol to the UPF 810, and the UPF 810 forwards the unicast transmissions to the SRAN 808 via a tunneled connection (e.g., using UDP/IP and the GPRS tunneling protocol). For N user terminals 802, N corresponding unicast bearers are used to send signals from the multicast gateway 812 to the SRAN 808 via the UPF 810.
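A minimal sketch of the replication step, assuming the gateway addresses each group member by an (IP, port) tuple; the names are illustrative:

```python
import socket

def replicate_to_unicast(content: bytes, member_addrs) -> int:
    """Replicate one multicast payload into per-member unicast datagrams:
    N group members yield N point-to-point streams toward the SRAN via the UPF."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for addr in member_addrs:
            sock.sendto(content, addr)  # one unicast header per member
    return len(member_addrs)
```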
The SRAN 808 (e.g., a gateway node in the SRAN) determines which sets of PTP (i.e., unicast) streams in each beam can be consolidated onto a single point-to-multipoint (PTM) stream for delivery as a multicast stream to a group of user terminals 802 that are multicast group members. Each consolidated PTM stream can be transmitted using a single PTM radio bearer in a downlink carrier in a cell 804. For example, a single downlink transmission can be transmitted to each cell 804 for each multicast session. As illustrated, for N (N=7) user terminals 802 in M (M=3) cells 804, the N PTP streams are fused into M PTM streams. The determination and creation of PTM streams at the SRAN 808 involve novel RRC communications between the SRAN 808 and the user terminals 802. Such RRC communications are described in more detail below.
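The fusing decision itself reduces to grouping the PTP streams by serving cell, as in this illustrative sketch (matching the N=7, M=3 example above; identifiers are hypothetical):

```python
from collections import defaultdict

def fuse_ptp_to_ptm(members):
    """Group unicast (PTP) streams by serving cell so that each group can ride
    a single point-to-multipoint (PTM) radio bearer in that cell's downlink.
    members: iterable of (ut_id, cell_id) pairs. Returns {cell_id: [ut_ids]}."""
    ptm = defaultdict(list)
    for ut_id, cell_id in members:
        ptm[cell_id].append(ut_id)
    return dict(ptm)

members = [("UT1", "C1"), ("UT2", "C1"), ("UT3", "C1"), ("UT4", "C2"),
           ("UT5", "C2"), ("UT6", "C3"), ("UT7", "C3")]
streams = fuse_ptp_to_ptm(members)
assert len(streams) == 3  # N=7 PTP streams fuse into M=3 PTM streams
```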
As noted above, the described IP multicast approach is transparent to the core network (e.g., the 5G packet core). For example, the multicast gateway 812 can interface with the core network functions (e.g., the UPF 810) at an "N6" reference point, similar to any data network (DN), as defined in 5G specifications, and the like. The approach does not rely on special or dedicated signaling interfaces at the UPF 810, and the approach does not place additional burdens on the satellite, as compared to unicast sessions.
There are many possible use cases for the described IP multicast approach. One example use case is using multicast to more efficiently transmit the same video content to multiple sites. For example, suppose the President of the United States is giving a State of the Union address, which is being transmitted live to people across the United States and across the globe. Depending on whether user terminals receiving the speech are in the same or different beams at any particular time, one or more SRANs 808 can dynamically determine which streams can be fused together onto single PTM radio bearers for transmission to multicast groups of user terminals.
Another example use case is using multicast to efficiently handle point-to-multipoint push-to-talk services. In such contexts, a group of users is moving around for a common purpose, such as to respond as emergency first responders to an incident, to engage in a military campaign or exercise, etc. When a team leader, commander, or the like pushes "talk," the communication terminals are set up so that all receivers concurrently receive the same stream. Thus, multicast can be used to send the same stream to all receivers in the same beam via a single PTM radio bearer.
At stage 4308, embodiments can forward the join messages by the ground node to a multicast gateway. At stage 4312, embodiments can generate, responsive to receiving the join messages forwarded in stage 4308, multicast membership information for the multicast session. At stage 4316, embodiments can communicate the multicast membership information from the multicast gateway to a multicast content server that hosts the multicast content associated with the multicast session. In some embodiments, the communicating at stage 4316 includes coordinating between the multicast gateway and the multicast content server to construct a multicast tree for the multicast session (e.g., a Protocol Independent Multicast-Sparse Mode (PIM-SM) multicast tree). At stage 4320, embodiments can receive the multicast content by the multicast gateway from the multicast content server. Embodiments can further replicate the multicast content and can encapsulate the replicated multicast content into N point-to-point (PTP) (unicast) streams. Each PTP stream is indicated (e.g., in header information) as destined for a corresponding one of the N UTs.
At stage 4332, embodiments can receive (e.g., by the ground node from the multicast gateway) the N PTP streams of replicated multicast content and can fuse the N PTP streams into M point-to-multipoint (PTM) streams. For example, at stage 4324, embodiments can determine M cells as serving the N UTs (M is a positive integer; multicast increases transmission resource efficiency when M is less than N). At stage 4328, embodiments can construct, for each of the M cells, a corresponding one of M multicast radio bearers, each multicast radio bearer to carry a corresponding one of the M PTM streams to a corresponding one of the M cells. At stage 4336, embodiments can send the M PTM streams to the N UTs in the M cells via the M multicast radio bearers.
Each SNN site 1002 consists of several radio frequency terminals (RFTs) 1008, such as RFTs 1008-1 . . . 1008-K. Each RFT 1008 contains the equipment used to track and maintain a radio connection with a satellite 1010. Each SNN site 1002 can also include Feederlink Convergence Appliance (FCA) nodes 1012 that implement modem processing and payload control channel functions. A signal processing framework (e.g., a field-programmable gate array (FPGA) based signal processing framework) allows for advanced low-density parity-check (LDPC) codes to be implemented on the feeder link and also allows for any ranging waveforms and functions used to assist in positioning, navigation, timing, and/or other similar features. The FCA nodes 1012 also implement the Digital Video Broadcasting-Satellite-Second Generation Extension (DVB-S2x) protocol functions for the feeder-link channel between the SNN site 1002 and the satellite 1010 with which it has contact.
In some implementations, the RFTs 1008 are outdoor equipment, the FCA nodes 1012 are indoor equipment, and remaining components of the SNN site 1002 are also indoor equipment, which may be housed in rack containers. The remaining components can include an SNN switching infrastructure 1014 (e.g., a 25G/10G switching infrastructure), a pair of timing reference units, a pool of servers implementing SNN functions, element management units, antenna management units, resource management units, NAS units, and any other platform functional entities, such as cloud platform orchestrators, statistics collectors, logging and debugging processors, LDAP authentication clients, etc. At least some of the timing reference units (i.e., in at least a few of the GRANs) are cesium frequency reference units, while the rest may be rubidium-based frequency reference units.
As illustrated, the FCA nodes 1012 are coupled with the SNN switching infrastructure 1014 by the fiber links. The SNN switching infrastructure 1014 connects to a site network infrastructure 1016 (e.g., a customer front end (CFE) WAN infrastructure), via which it can communicate with other portions of the terrestrial infrastructure, including the POP sites 1004.
Each POP site 1004 houses an anchor node 1018 and core user plane equipment 1020. The anchor node 1018 performs upper layer protocols of the NodeB (e.g., 5G gNB) functions for a set of administrative regions (ARs) in coordination with the SNN sites 1002 with which it communicates. The core user plane equipment 1020 provides a connection to external IP networks, such as the Internet and private networks. In some implementations, components of the anchor node 1018 are housed in rack containers. This can include a respective portion of the 10 Gigabit Ethernet switching infrastructure, a pool of anchor nodes (e.g., implementing PDCP, SDAP, RRC, 5G Next Generation Application Protocol (NGAP), and GTP protocol layers), element management units, NAS units, and platform functional entities, such as cloud platform orchestrators, statistics collectors, logging and debugging processors, LDAP authentication clients, etc.
Embodiments of the anchor node 1018 can include several components and can perform several functions. One example anchor node 1018 component is a central unit (CU) control plane (CP) processor. The CU CP manages signaling and control-related tasks, such as session management, mobility management, and establishing connections between the network and user equipment. Embodiments of the CU CP processor can perform some or all of the following: set up an N2 interface with an AMF for assigned cells, implement label edge router (LER) functions (described below), coordinate with distributed units (DUs) in satellites to authenticate (e.g., using Internet key exchange version 2 (IKEv2) procedures), set up IPSec tunnels, set up stream control transmission protocol (SCTP) connections, set up F1 interfaces, receive satellite ephemeris data from a cell ephemeris processor and generate cell system information (SIB), propagate cell system information and configuration (e.g., multi-operator core network (MOCN) radio resource sharing policy) to the satellite DUs, perform RRC connection establishment with user sessions, perform AMF selection for user equipment-selected slices (e.g., for public land mobile network (PLMN) and/or MOCN) and coordinate UE registration and/or PDU session establishment between user equipment and a selected AMF, perform CU UP instance assignment and E1 bearer setup, set up user equipment context in the DU for the bearers (e.g., along with the QoS, network slice, user equipment aggregated maximum bit rate (UE AMBR), etc.), coordinate with other CU UP instances over the Xn interface for handovers, execute global resource manager (GRM) commanded handovers (e.g., coordinating with DUs, other CU CPs, CU UPs, user terminals, AMF, and/or UPF), perform cell and/or satellite selection for paging and paging dilation, coordinate with the GEO ground node (e.g., gNB over the Xn interface) and user terminal for implementing LEO/GEO NR dual connectivity (DC) (e.g., as described herein), etc.
Another example anchor node 1018 component is a central unit (CU) user plane (UP) processor. The CU UP manages actual transmission of user data, such as by being responsible for forwarding and routing of user data packets to and from the SRAN and the CN. Embodiments of the CU UP processor can perform some or all of the following: set up IPSec tunnels with the satellite DUs using the CU CP-provided IPSec derived keys, set up GTP tunnel and F1-U interfaces with the satellite DUs upon E1 bearer setup from the CU CP for user sessions, set up GTP sessions with the assigned GPF for the user sessions, implement SDAP and/or PDCP functions (e.g., QoS flow mapping, header compression/decompression, ciphering/de-ciphering, PDCP sequence numbering, PDCP reordering/duplication detection), perform data transport and/or flow control and/or retransmission over the F1-U interface with satellite DUs, implement label edge router (LER) functions (described below), monitor data inactivity and coordinate with the CU CP for RRC operations (e.g., inactivity, suspend, paging, resume, etc.), perform data forwarding to other CU UP instances over the Xn interface for user session handovers, perform data forwarding to the GEO gNB for LEO/GEO NR DC, etc.
Another example anchor node 1018 component is one or more cell ephemeris processors, which can receive satellite ephemeris files (e.g., from a SOC), compute neighbor satellite geometry and corresponding system information (e.g., system information block 19 (SIB19) information), propagate information to relevant CU CP instances for each of the configured cells, etc. Another example anchor node 1018 component is one or more element management components, which can coordinate with a cloud orchestrator for component software image repository and upgrades, coordinate with the GNOC and/or GRM to receive static and dynamic configurations (e.g., antenna definitions, neighbor site relationships, AR-TAC-POP relationships, AR boundary definitions, contact schedules, etc.), implement FCAPS functionality for the site, implement a web-based local/remote graphical interface and ReST-based management interface, etc. Another example anchor node 1018 component is one or more on-premises cloud platform orchestration components, which can auto-discover nodes and maintain centralized registries; perform component application profile assignments and container orchestration; perform image caching and deployment of software and configuration on nodes; set up container interconnect virtual networking, routing, security, and policy enforcement; perform node health monitoring, fault handling, and reconfigurations, including required redundancy; maintain site install configurations, etc. Another example anchor node 1018 component is one or more site support components (e.g., NAS, LDAP, Log, LUI, etc.), which can collect SRAN component statistics and push them to cloud storage, coordinate with central active directories to authorize and authenticate SRAN users and their roles, perform component diagnostics log collection and provide tools for visualization and/or filtering, etc. Embodiments can include any or all of these and/or other anchor node 1018 components, and the anchor node 1018 components can perform any or all of these and/or other anchor node-related functions.
Embodiments can be architected in a modular fashion to support scalability. For example, additional satellites 1010 can be served by adding RFTs 1008 incrementally to the SRAN. Data processing functions in the SRAN can be implemented using a load distributed architecture with a pool of traffic carrying processor instances (e.g., anchor nodes, etc.). Additional capacity can be served by adding additional processor instances, as needed. An on-premises cloud architecture is employed to allow the system to dynamically instantiate and configure the system functions as needed, without requiring hardcoded installs, thereby reducing inefficiencies of hardware resource usage.
Some embodiments of the architecture 1000 are configured to be compatible with a legacy architecture.
On a first side of the combined architecture 1100, “Gen2” components include salient portions of the architecture 1000 of
On a second side of the combined architecture 1100, “Gen2” components include other salient portions of the architecture 1000 of
As described with reference to
In an example deployment case, a majority of Gen2 SNN sites 1002 are located at legacy Gen1 SNN sites 1102. As such, it may be desirable to reuse as much of the existing Gen1 SNN site 1102 infrastructure as possible for implementing the Gen2 SNN sites 1002. It may also be desirable to share legacy resources with new Gen2 deployments, including dynamic sharing of Gen1 RFTs 1108 between the Gen1 and Gen2 systems. To reuse Gen1 RFTs 1108 for the Gen2 system, the Gen2 SNN 1002 can provision the Gen2 FCA nodes 1012 for corresponding Gen1 RFTs 1108 (e.g., as described above). Gen1 changes may be limited to software and FPGA image upgrades to support Gen2 operations. For example, to minimize changes in the Gen1 system, fibers from RCUs (in the Gen1 RFTs 1108) that are coming indoors can terminate at Gen2 FCA nodes 1012, instead of at Gen1 RBNs 1112. The Gen2 FCA nodes 1012 can set up fiber interfaces with Gen1 RBNs 1112 and can largely mimic a legacy Gen1 RCU to Gen1 RBN 1112 interface, so that the Gen1 RBN 1112 operates as if it is directly communicating with a Gen1 RCU. In this way, a Gen2 FCA node 1012 can effectively relay control signaling and IQ samples between a Gen1 RCU and a Gen1 RBN 1112.
As illustrated, the Gen2 SNN site 1002 can include a Gen2 resource management system (RMS) 1122, and the Gen1 SNN site 1102 can include a Gen1 RMS 1124. One feature of such RMSs is coordination of an "RFT sharing mechanism." Continuing the example deployment case, the Gen2 RMS 1122 may coordinate with the Gen1 RMS 1124 to borrow a Gen1 RFT 1108. When a Gen1 contact is assigned, the FCA 1012-3 commands the RCU of the Gen1 RFT 1108 to switch the configuration to use Gen1 feeder link physical layer channelization and packet formatting between the RCU and an RBN 1112-2. The FCA 1012-3 also enables a packet relay function within the RBN 1112-2 to relay all the packets (both control and traffic) between the Gen1 RCU and the Gen1 RBN 1112-2.
When Gen2 contact is assigned, an appropriate FCA node 1012 (e.g., FCA node 1012-2) commands the RCU of its connected Gen2 RFT 1008 to switch its configuration to use Gen2 feeder link physical layer channelization and packet formatting (e.g., according to Digital IF Interoperability (DIFI) standards) between the RCU and a connected RBN 1112 (e.g., RBN 1112-1). Feeder-link physical layer processing can terminate at the RBN 1112-1, and the upper layer packets can be sent directly from the RBN 1112-1 via switches of the Gen2 SNN switching infrastructure 1014. In some cases, the Gen1 and Gen2 systems may use different time references. For example, the Gen1 system may use a GPS-based time reference, while the Gen2 system may use an internal system time. In such cases, there may be small offsets (e.g., nanoseconds), so that use of the RBN 1112-1 may involve synchronizing its operation to the Gen2 time reference.
The RFT sharing mechanism can be extended to using Gen2 RFTs 1008 for the Gen1 system. As an example use case, when a Gen1 RFT 1108 is to be replaced (e.g., due to faulty HW, HW obsolescence, etc.), it can be replaced with a Gen2 RFT 1008 and augmented with an associated FCA node 1012. The FCA node 1012 can, in turn, connect to the RBN 1112 of the corresponding RFT (i.e., formerly a Gen1 RFT 1108). In such a use case, the Gen1 RMS 1124 would borrow the Gen2 RFT 1008 from the Gen2 RMS 1122 when needed.
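The borrow-and-reconfigure logic might look like the following self-contained sketch; the classes, method names, and mode strings are hypothetical stand-ins for the RMS coordination described above:

```python
from dataclasses import dataclass, field

@dataclass
class RFT:
    name: str
    idle: bool = True
    mode: str = "gen1-legacy"

@dataclass
class Gen1RMS:
    pool: list = field(default_factory=list)

    def borrow_rft(self):
        """Lend an idle Gen1 RFT to the requesting (Gen2) RMS."""
        for rft in self.pool:
            if rft.idle:
                rft.idle = False
                return rft
        raise RuntimeError("no Gen1 RFT available to borrow")

def assign_gen2_contact(gen2_pool, gen1_rms):
    """Prefer a native Gen2 RFT for the contact; otherwise borrow a Gen1 RFT
    via the Gen1 RMS and switch it to Gen2 (e.g., DIFI) channelization."""
    rft = next((r for r in gen2_pool if r.idle), None) or gen1_rms.borrow_rft()
    rft.idle = False
    rft.mode = "gen2-difi"  # Gen2 feeder-link physical layer / packet format
    return rft

gen1 = Gen1RMS(pool=[RFT("G1-RFT-1")])
print(assign_gen2_contact([], gen1).name)  # borrows G1-RFT-1 and reconfigures it
```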
Embodiments can be designed to interface with satellites (e.g., Gen2 satellites 1010 of
As illustrated, for each polarization orientation, the RFT 1200 includes a BUC-AMP 1204, an LNB 1206, and an RCU 1208. Each BUC-AMP 1204 is a combination of a block upconverter (BUC) and an amplifier in a single integrated unit. The BUC portion of the BUC-AMP 1204 takes lower-frequency signals (e.g., baseband or intermediate frequency (IF) signals) from a corresponding one of the RCUs 1208 and upconverts the signal to a higher frequency for transmission by the tracking antenna 1202. The amplifier portion of the BUC-AMP 1204 increases the power (gain) of the RF signals prior to transmission by the tracking antenna 1202 to help ensure that the signals are strong enough to reach the satellite while overcoming any losses that occur during transmission (e.g., atmospheric loss). In some embodiments, the BUC portion of the BUC-AMP 1204 up-converts IF signals to Ka-band frequencies (27-30 GHz), and the power amplifier portion of the BUC-AMP 1204 amplifies the composite signal to a level suitable for transmission to the satellite while also providing uplink power control based on a beacon receiver and ephemeris data.
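The frequency plan itself is simple arithmetic for a non-inverting block upconverter: the output is the IF plus the local-oscillator frequency. The LO and IF values below are illustrative assumptions, chosen only to land inside the 27-30 GHz range cited above:

```python
def buc_output_hz(if_hz: float, lo_hz: float) -> float:
    """Non-inverting block upconversion: RF out = IF in + LO."""
    rf = if_hz + lo_hz
    assert 27e9 <= rf <= 30e9, "outside the Ka-band uplink range cited above"
    return rf

# E.g., a 1.5 GHz IF mixed with a hypothetical 27.05 GHz LO lands at 28.55 GHz.
print(round(buc_output_hz(1.5e9, 27.05e9) / 1e9, 2), "GHz")
```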
The LNB 1206 includes a Ka-band low-noise amplifier (LNA) and a block down-converter (BDC) integrated into a single component. The LNA portion of the LNB 1206 allows the satellite gateway to receive RF signals from the satellite and provides low noise amplification. The BDC portion of the LNB 1206 provides non-inverting block down-conversion from Ka-band to a receive IF. The down-converted signal can also include a beacon used for satellite tracking and/or for uplink power control.
The RCUs 1208 are implemented as a pair of RCUs 1208 (labeled “RCU-2”), each for a respective polarization orientation (e.g., one for RHCP and one for LHCP). The RCU-2 (the pair of RCUs 1208) is connected to the modems (e.g., indoor modem components, as part of the FCA in
Embodiments of the tracking antenna 1202 (e.g., a LEO tracking antenna) operate in the Ka band, covering a complete 360-degree azimuth range and a 5-degree to 90-degree elevation range. The tracking antenna 1202 can track the satellite per programmed ephemeris, with support for automatic tracking with the help of embedded beacon tracking receivers and an antenna control unit (ACU). The antenna control units can interface with the AMS (e.g., the antenna management subsystem shown in the SNN site 1002 of
A single FCA can handle modem functions for both polarizations of the RFT 1200. For each feeder-link channel, the FCA can implement feeder-link air interface physical layer functions based on DVB-S2x, including: FEC encoding and/or decoding; interleaving and/or scrambling functions; π/2-BPSK, QPSK, 8PSK, 16APSK, 32APSK and 64APSK modulation and/or demodulation; SNR control; frequency and/or timing estimation; transmit and/or receive frame timing; multiplexing and/or demultiplexing; fragmentation and/or reassembly of upper layer data (e.g., AN, SOC TT&C) into GSE packets; adaptive coding modulation for feeder-link channels; and coordination with an ephemeris processor and RCU to apply delay and/or doppler compensation. For each feeder-link channel, the FCA can also receive satellite contact allocation and configure the feeder link channels, perform system time transfer and ranging functions (for positioning, navigation, timing, etc.), perform label switch routing functions selecting the feeder-link channel based on a load balancing hash in the label in the forward direction and AN/SOC selection in the return direction, etc.
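The forward-direction channel selection can be as simple as a stable hash over the routing label, as in this illustrative sketch (the hash choice and label format are assumptions, not part of the FCA design):

```python
import hashlib

def select_feeder_channel(label: bytes, num_channels: int) -> int:
    """Map a routing label to a feeder-link channel index. A stable hash keeps
    a given flow pinned to one channel while spreading flows across channels."""
    digest = hashlib.sha256(label).digest()
    return int.from_bytes(digest[:4], "big") % num_channels

print(select_feeder_channel(b"lsp-42|flow-7", 4))  # deterministic index in [0, 3]
```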
Embodiments of the architectures described herein can include several service access control (SAC) components. One example of SAC components is one or more feeder link ephemeris processors, which can receive Gen2 satellite ephemeris files (e.g., from a security operations center, SOC), compute feeder link delay and/or doppler compensation for active contacts for propagation to RFTs, provide satellite ephemeris to antenna systems (e.g., AMS and/or ACU) for active contacts, etc. Another example of SAC components is one or more antenna management components, which can monitor RFT and/or ACU status and coordinate with the RMS for appropriate RFT assignment for satellite contacts, coordinate with the ephemeris processor to feed the assigned satellite ephemeris to an ACU and command the ACU to point and/or acquire and track the satellite, etc. Another example of SAC components is one or more resource management components, which can perform periodic SRAN component status monitoring and reporting to the GRM, perform satellite contact assignment to RFTs based on the GNOC-provided satellite contact schedule adhering to priority and least-recently-used RFT constraints, perform coordination with the Gen1 RMS for implementing the RFT sharing mechanism (e.g., as described above), etc. Another example of SAC components is one or more element management components, which can coordinate with a cloud orchestrator for component software image repository and upgrades, coordinate with the GNOC and/or GRM to receive static and dynamic configurations (e.g., antenna definitions, neighbor site relationships, AR-TAC-POP relationships, AR boundary definitions, contact schedules, etc.), implement FCAPS functionality for the site, implement a web-based local/remote graphical interface and ReST-based management interface, etc. Another example of SAC components is one or more on-premises cloud platform orchestration components, which can auto-discover nodes and maintain centralized registries; perform component application profile assignments and container orchestration; perform image caching and deployment of software and configuration on nodes; set up container interconnect virtual networking, routing, security, and policy enforcement; perform node health monitoring, fault handling, and reconfigurations, including required redundancy; maintain site install configurations, etc. Another example of SAC components is one or more site support components (e.g., NAS, LDAP, Log, LUI, etc.), which can collect SRAN component statistics and push them to cloud storage, coordinate with central active directories to authorize and authenticate SRAN users and their roles, perform component diagnostics log collection and provide tools for visualization and/or filtering, etc. Embodiments can include any or all of these and/or other SAC components, and the SAC components can perform any or all of these and/or other SAC-related functions.
The cell and CU configuration is pushed from the GRM 1302 (e.g., and GNOC) to the POP anchor node CUs 1312 through configuration files. Upon instantiation of CU instances, the POP anchor node CUs 1312 provide health information to the GRM 1302 periodically. The RMS at the SNNs (SNN EMS/RMS 1310) also periodically provides SNN health information to the GRM 1302 to aid the GRM 1302 in contact planning. After the contact planning, the plan is pushed, via the SOC 1304, to the SNN EMS/RMS 1310 and to the satellites 1306. To push the contact plan to the satellites 1306, the SOC 1304 may use an out-of-band telemetry, tracking, and command (TT&C) channel (or an in-band TT&C channel, if one exists via some other SNN). The GRM 1302 also pushes the default routing labels to be used by the edge routers in the satellite 1306 and POP anchor node CUs 1312 for CU-DU communication via the SNNs (and optionally intermediate satellites).
For the assigned contacts, the SNN EMS/RMS 1310 can allocate an RFT (SNN RFT 1308). The satellite 1306 and SNN RFT 1308 set up DVB-S2x channels after compensating for delay and/or doppler. The RFT-satellite assignment information is propagated to the POP anchor node CUs 1312 to aid in their label routing. If an IPSec (e.g., Internet Security Association and Key Management Protocol, ISAKMP) security association is not already set up with the CUs (corresponding to the assigned cells) at the satellite 1306, the satellite 1306 can initiate an appropriate setup procedure (e.g., IKEv2). To reduce the number of IPSec associations at the satellite 1306, a single exchange may be used for all the CUs in a POP, as opposed to using an individual exchange with every CU in that POP.
Over the IPSec tunnel, using the derived keys (e.g., from IKEv2), the satellite 1306 can initiate an SCTP/F1 setup procedure. Upon receiving the F1 setup from the satellite 1306 (i.e., the satellite DUs), if an NG interface is not already set up with the AMF(s) 1314, the POP anchor node CUs 1312 set up the NG interface. In some cases, there may be more than one AMF 1314, depending on the number of MOCNs and/or PLMNs supported in the cell. The POP anchor node CUs 1312 can then generate the system information pertaining to the activated cells in the DUs, including satellite ephemeris, system information schedule, network sharing configuration, etc. The satellites 1306 can start broadcasting information blocks according to the schedule in the cells, which can also be synchronized with a beam hopping schedule. For example, the satellites 1306 can begin broadcasting master information blocks (MIBs), which contain essential information for initial cell selection and synchronization in cellular networks (e.g., system bandwidth, system frame number, etc.), and system information blocks (SIBs), which provide detailed operational parameters of the network (e.g., configuration and access protocols).
Embodiments described herein can associate cells with CUs and DUs in any feasible manner.
The cells can be polygons defined on the surface of the earth. Since this is a geographically static configuration, the cells are associated with the POPs responsible for the corresponding geographic area. Mapping of cells to the CU instances in the POP can also be done through static configuration based on capacity estimates of the CUs in terms of number of cells that each can support. An example set of cell-to-CU mappings can be as follows:
An example set of cell-to-beam mappings can be as follows:
In accordance with the above mappings, an example set of cell-DU-CU mappings can be as follows:
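The referenced example mapping tables are not reproduced here. Purely for illustration, the shape of such mappings might be sketched as follows (all identifiers are hypothetical and not taken from the example tables):

```python
# Hypothetical identifiers only; a sketch of the mapping shapes, not the
# actual example tables referenced above.

# Static cell-to-CU mapping: cells (geographic polygons) assigned to CU
# instances in the responsible POP, partitioned by CU capacity.
cell_to_cu = {
    "cell_001": "POP1-CU1",
    "cell_002": "POP1-CU1",
    "cell_003": "POP1-CU2",
    "cell_004": "POP2-CU1",
}

# Dynamic cell-to-beam mapping for one mapping timeframe: which satellite
# beam currently carries each cell.
cell_to_beam = {
    "cell_001": ("SAT1", "beam_0"),
    "cell_002": ("SAT1", "beam_0"),
    "cell_003": ("SAT1", "beam_1"),
    "cell_004": ("SAT2", "beam_0"),
}

# Derived cell-DU-CU mapping: one DU per (satellite, CU) pair, since a DU
# cannot maintain F1 interfaces toward multiple CUs.
cell_du_cu = {
    cell: (f"{sat}-DU-{cell_to_cu[cell]}", cell_to_cu[cell])
    for cell, (sat, _beam) in cell_to_beam.items()
}
```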
As described above, in contrast to terrestrial architectures (and also to some GEO-based deployments), the association of cells to the satellites and DUs is very dynamic in a LEO-based satellite system. Given that a satellite may be carrying cells serviced by multiple POP CUs, and one DU cannot have F1 interfaces with multiple CUs (this is not allowed by the 5G architecture), the satellites must instantiate individual DUs for each of the CUs mapped to the cells carried by the satellite. The list of cells contained in a specific DU (mapped to a specific CU) in the satellite changes dynamically as cells are assigned to and removed from the satellite. Since a GRM can be aware of the overall picture, the GRM can maintain the cell-to-DU mapping and can provide the corresponding configuration to the satellite. For example, there may be no predefined mapping between a cell and a beam. Instead, this mapping can also be handled by the GRM based on estimated traffic on each of the cells, the frequencies assigned to the cells, and the number of available beams in the satellite.
At stage 1508, embodiments can determine (e.g., by the global resource manager) a cell set carried by a beam of a satellite during a mapping timeframe. The cell set is a subset of the cells that dynamically changes as the satellite traverses a non-geosynchronous orbital path (i.e., the satellite is an NGSO satellite, such as a LEO satellite). In some embodiments, determining the cell set includes selecting the cell set to assign to a beam of the satellite based at least on estimated traffic on the plurality of cells and the frequencies assigned to the plurality of cells.
At stage 1512, embodiments can determine (e.g., by the global resource manager), based on the cell-to-CU mapping, a CU set for the mapping timeframe as those of the CUs that are servicing the cell set.
At stage 1516, embodiments can transmit a configuration to the satellite (e.g., by the global resource manager). The configuration directs the satellite to instantiate a distributed unit (DU) set in the satellite having a one-to-one correspondence with the CU set, such that each instantiated DU is configured to interface with a corresponding one of the CUs of the CU set during the mapping timeframe, thereby defining a cell-DU-CU mapping for the mapping timeframe. In some embodiments, the transmitting at stage 1516 includes pushing at least the cell-to-CU mapping from the global resource manager to an anchor node that communicatively couples a satellite radio access network (SRAN) portion of the iTNTN with a core network (CN) portion of the iTNTN. Some embodiments can also push the configuration from the global resource manager to one or more terrestrial satellite network node (SNN) sites in the SRAN portion of the iTNTN for transmission to the satellite. For example, the SNN sites can have radio frequency terminals (RFTs), such that the transmission to the satellite is via one of the RFTs of one of the SNN sites.
In some such embodiments, the transmitting at stage 1516 further includes pushing default routing labels from the global resource manager to the AN (inside the POP). For example, the routing label stacks for each LSP (label switched path) between edge routers can change dynamically as the topology of the network changes due to the moving constellation. The labels in each stack identify the next hop in the path. The labels are tagged with start and end times, so that the receiving edge router knows when to retire a current label stack and start using the next one. As described herein, the default routing labels enable label-based routing of communications between a POP edge router and a satellite edge router via one of the SNN sites in accordance with the cell-DU-CU mapping (i.e., the satellite edge router is in the satellite, and the POP edge router is in a point of presence (POP) in which the anchor node is disposed).
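As a minimal sketch of how such time-tagged label stacks might be represented and selected (times and identifiers are hypothetical; for brevity, only start times are shown, with each entry's start implicitly ending the previous entry):

```python
import bisect

# Time-ordered (start_time, label_stack) entries for one LSP between edge
# routers; times are illustrative epoch seconds.
lsp_schedule = [
    (0.0,   ["SAT2", "SNN1", "AN1.1"]),
    (420.0, ["SAT3", "SNN1", "AN1.1"]),  # transit satellite changes
    (900.0, ["SAT3", "SNN2", "AN1.1"]),  # feeder link moves to another SNN
]

def active_stack(now: float) -> list[str]:
    """Select the label stack whose validity window covers `now`."""
    starts = [start for start, _ in lsp_schedule]
    index = bisect.bisect_right(starts, now) - 1
    return lsp_schedule[max(index, 0)][1]
```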
At stage 1520, embodiments can direct communications, during the mapping timeframe, between the plurality of CUs and the plurality of cells via the DUs instantiated in the satellite based on the cell-DU-CU mapping. For example, the directing can use label-based routing, as described herein.
In some embodiments, the mapping timeframe is one of a sequence of timeframes, each associated with a corresponding location of the satellite along its non-geosynchronous orbital path. In such embodiments, the determining at stage 1508 can include determining a sequence of cell sets comprising a corresponding cell set for each of the sequence of timeframes, the determining at stage 1512 can include determining a sequence of CU sets comprising a corresponding CU set for each of the sequence of cell sets. Further, in such embodiments, the configuration can direct the satellite, for each timeframe of the sequence of timeframes, to instantiate a DU set in the satellite having a one-to-one correspondence with the CU set for the timeframe, thereby defining a cell-DU-CU mapping for each timeframe of the sequence of timeframes.
In some embodiments, the satellite is one of a constellation of satellites, each traversing the non-geosynchronous orbital path in a different corresponding location distributed along the non-geosynchronous orbital path (i.e., they form a NGSO constellation, or a portion thereof). In such embodiments, the determining at stage 1508 can include determining, for each satellite of the constellation, a corresponding cell set as those of the plurality of cells being carried by the satellite during the mapping timeframe. Further, the determining at stage 1512 can include determining, for each satellite of the constellation, a corresponding CU set as those of the plurality of CUs servicing the corresponding cell set for the satellite during the mapping timeframe. Further, the transmitting at stage 1516 can include transmitting the configuration to the constellation, such that the configuration directs each satellite of the constellation to instantiate a corresponding DU set, thereby defining a plurality of cell-DU-CU mappings including a corresponding cell-DU-CU mapping for each satellite during the mapping timeframe.
In some embodiments, each satellite produces multiple beams, each illuminating a corresponding geographic coverage area at the mapping timeframe. In such embodiments, the determining at stage 1508 can include determining, for each beam, a corresponding cell set as those of the plurality of cells being carried by the beam during the mapping timeframe; and the determining at stage 1512 can include determining, for each beam of the constellation, a corresponding CU set as those of the plurality of CUs servicing the corresponding cell set for the beam during the mapping timeframe. Further, in such embodiments, the configuration can direct the satellite to instantiate a corresponding DU set for each of the plurality of beams, thereby defining a plurality of cell-DU-CU mappings including a corresponding cell-DU-CU mapping for each beam during the mapping timeframe.
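The per-timeframe logic of stages 1508-1516 can be summarized in a brief sketch (function and variable names are illustrative assumptions, not part of the described embodiments):

```python
from collections import defaultdict

def plan_mapping_timeframe(timeframe, cell_to_cu, beams_carrying):
    """Sketch of stages 1508-1516 for one mapping timeframe.

    cell_to_cu:     static cell-to-CU mapping
    beams_carrying: {(satellite, beam): set of cells carried at `timeframe`}
    Returns a configuration directing each satellite to instantiate one DU
    per CU servicing the cells it carries (one-to-one DU-CU correspondence).
    """
    per_sat = defaultdict(lambda: defaultdict(set))  # satellite -> CU -> cells
    for (sat, _beam), cell_set in beams_carrying.items():
        for cell in cell_set:                          # stage 1508: cell sets
            per_sat[sat][cell_to_cu[cell]].add(cell)   # stage 1512: CU sets
    return {                                           # stage 1516: configuration
        "timeframe": timeframe,
        "satellites": {
            sat: {cu: sorted(cells) for cu, cells in cu_map.items()}
            for sat, cu_map in per_sat.items()
        },
    }
```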
Beginning with the feeder-link, the physical layer generally follows DVB-S2x standards. The following table provides a general overview of exemplary features for both the forward and return directions:
The user link physical layer can be based on the 5G NR air interface release 17/18. The following table provides a general overview of exemplary features for both the forward and return directions.
The following table summarizes example forward physical layer channels and signals, as adopted from NR standards.
In the above table, “PDSCH/DM-RS” is the physical downlink shared channel/demodulation reference signal, “PDCCH/DM-RS” is the physical downlink control channel/demodulation reference signal, “PBCH/DM-RS” is the physical broadcast channel/demodulation reference signal, “PSS” is the primary synchronization signal, “SSS” is the secondary synchronization signal, “PTS” is the phase tracking signal, “CSI-RS” is the channel state information reference signal, and “PRS” is the positioning reference signal. A synchronization signal block (SSB), which consists of the PSS, SSS, and PBCH, can be used for the acquisition burst. The PBCH carries the system information that UTs need for RACH (random access channel) communications and login to the system.
The following table summarizes example return physical layer channels and signals, as adopted from NR standards.
In the above table, “PUSCH/DM-RS/PT-RS” is the physical uplink shared channel/demodulation reference signal/phase tracking reference signal, “PUCCH/DM-RS” is the physical uplink control channel/demodulation reference signal, “PRACH” is the physical random access channel, “SRS” is the sounding reference signal, “ZC” sequence is the Zadoff-Chu sequence used for signal processing, and “LDPC” is the low-density parity-check method for error correction. The PHY layer can support delay-efficient, two-step PRACH transmission. In the two-step PRACH transmission, a UT transmits a PRACH preamble/PUSCH (e.g., MsgA), and the UT receives the PDCCH/PUSCH bursts (e.g., MsgB) as a response from the network.
High capability terminals are assumed to receive two channels simultaneously to increase the downlink peak user throughput by a factor of two (e.g., to above 1.4 Gbps). Similarly, the peak user PHY throughput in the return link can be more than 540 Mbps with aggregation of two carriers, with 39 dBW of EIRP on each carrier (42 dBW total). Minimum capability terminals are assumed to operate in a half-duplexing mode (i.e., they cannot transmit and receive at the same time), and their peak throughput is expected to be about 40% of the full-duplexing mode throughput: 120 Mbps (downlink) and 23 Mbps (uplink).
The following table summarizes feeder link (Ka-band) impairment and mitigation techniques for use in channel impairment and interference management.
The following table summarizes user link (Ku-band) impairment and mitigation techniques for use in channel impairment and interference management.
The following table shows expected throughput during an in-line interference event, relative to interference-free scenarios, for various SNR and interference-to-noise (I/N) combinations. The normalized throughput (bits/sec/Hz) is derived from Shannon capacity with a 2.5 dB margin, without any adjustment for protocol overheads. As the most robust MCS is designed to support an SINR of around -9 dB, the normalized throughput will be much lower than the values in the table when the SINR falls below -9 dB.
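The derivation behind such a table can be sketched as follows, assuming interference is treated as additional noise (SINR = S/(N+I)) and the 2.5 dB margin is applied as a back-off inside the Shannon formula:

```python
import math

def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

def normalized_throughput(snr_db: float, inr_db: float, margin_db: float = 2.5) -> float:
    """Shannon spectral efficiency (bits/sec/Hz) with margin, treating
    interference as additional noise: SINR = S / (N + I)."""
    sinr_lin = db_to_lin(snr_db) / (1.0 + db_to_lin(inr_db))
    return math.log2(1.0 + sinr_lin / db_to_lin(margin_db))

# Example: relative throughput during an in-line interference event at 10 dB SNR.
clean = normalized_throughput(snr_db=10.0, inr_db=-100.0)  # effectively no interference
event = normalized_throughput(snr_db=10.0, inr_db=0.0)     # I/N = 0 dB
print(f"relative throughput: {event / clean:.2f}")
```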
Another physical layer consideration is synchronization. A synchronization reference point can be defined at the satellite. For example, the satellite can derive system timing and frequency synchronization reference from the global navigation satellite system (GNSS). Similarly, the ground nodes and user terminals can be equipped with a GNSS receiver. The timing and frequency reference for ground nodes and user terminals can be based on timing and frequency derived from the GNSS receiver and the knowledge of their position relative to the satellites. Satellites can periodically broadcast satellite ephemeris data to the user terminals.
Embodiments can also support using other sources for the system timing and frequency synchronization reference. The satellite motion can introduce Doppler effects for fixed terminals on the order of 20 ppm at an elevation angle of 20 degrees. The Doppler from the motion of aero terminals can be as high as 1.7 ppm. Pre-compensation of Doppler and delay, using satellite ephemeris and the relative positions of user terminals and satellites, can significantly reduce the timing and frequency uncertainty introduced by the motion of the LEO satellites to no more than 0.1 ppm, such that the receiver needs to handle only the residual Doppler.
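The Doppler magnitudes quoted above follow directly from the radial velocity between terminal and satellite (Δf/f = v/c); a quick check with illustrative velocities:

```python
C = 299_792_458.0  # speed of light (m/s)

def doppler_ppm(radial_velocity_mps: float) -> float:
    """Fractional Doppler shift, in parts per million."""
    return radial_velocity_mps / C * 1e6

# A ~6 km/s radial velocity component at low elevation yields roughly 20 ppm,
# consistent with the figure quoted for fixed terminals at 20 degrees.
print(f"{doppler_ppm(6000):.1f} ppm")  # ~20.0
# An aero terminal moving at ~500 m/s contributes on the order of 1.7 ppm.
print(f"{doppler_ppm(500):.2f} ppm")   # ~1.67
```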
Another physical layer consideration is beam hopping. The following table shows exemplary beam hopping parameters that can support flexible and efficient beam hopping for user link traffic channels and access channels.
The dwell duration for individual cells and the downlink/uplink allocation will be dynamically scheduled by the MAC, based on instantaneous traffic demand. It is expected that burst transmission and arrival time uncertainty due to synchronization errors and beam switching time is within the CP (cyclic prefix) duration. Terminals that are only semi- or coarsely synchronized (timing error more than the CP duration but less than 8 µs in downlink and 16 µs in uplink) may have to de-puncture (neutralize) soft bits associated with the first or the last OFDM symbol received during the dwell period. Similarly, the satellite may choose not to send or use the first or last OFDM symbol, at the expense of a capacity reduction of around 3.5% (downlink) to 7% (uplink).
PRACH opportunities for each cell occur 3.3 times per second, and the PRACH receiver can detect more than 10 preambles per opportunity, resulting in more than 33 RACH opportunities per second. Assuming 50 users in a cell and a RACH request rate of 1/120 per second from each user, the RACH load is 50/120 requests per second, and the resulting RACH utilization is (50/120)/33 ≈ 1.3%. As the collision probability is correspondingly low, the random access channel can easily support random-access completion of all UTs within a satellite coverage area within 5 seconds, and of at least 20% of UTs within 1 second.
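The loading arithmetic above can be restated compactly (values taken from the text):

```python
# RACH loading arithmetic from the text.
opportunities_per_sec = 3.3         # PRACH opportunities per cell per second
preambles_per_opportunity = 10      # detectable preambles per opportunity
capacity = opportunities_per_sec * preambles_per_opportunity  # >33 per second

users = 50
request_rate = 1 / 120              # RACH requests per user per second
load = users * request_rate         # ~0.42 requests per second

print(f"RACH utilization: {load / capacity:.1%}")  # ~1.3%
```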
The physical layer can be designed to support integrated LEO/GEO operation (e.g., as described above with reference to
The following section addresses the service-link Radio Link Control (RLC), Medium Access Control (MAC), and scheduler at the satellite. Flow control between the centralized units (CUs) and distributed units (DUs) is briefly discussed, assuming an F1 (CU-DU) interface and functionality and an NR user plane protocol. Packet processing performed by the MAC and RLC layers in the forward and return directions is also described. In some implementations, the packet data processing can follow 5G-NR frameworks and can interface with a 5G core network. In such implementations, the service link physical layer is assumed to be based on 5G-NR as well.
As described herein, each satellite can include a DU implementation that includes MAC and RLC entities. Embodiments of the satellite-resident MAC entities can provide several functions, including: facilitating data transfer of different logical channels, including broadcast, common, paging, and multicast channels; multiplexing and/or demultiplexing of MAC service data units from different logical channels (radio bearers) to and/or from the physical layer; dynamic scheduling of user terminals and their associated logical flows; handling retransmission (e.g., using hybrid automatic repeat request, HARQ); and performing power control and link adaptation functions. Embodiments of the satellite-resident RLC entities can provide several functions, including handling transfer of PDCP PDUs and RRC control messages, handling segmentation and reassembly of packets, and providing retransmission and recovery mechanisms (when configured in acknowledged mode, “AM” or “AM mode”). Corresponding RLC and MAC entities can also reside at the user terminals, with some differences in their respective functions, particularly at the MAC.
One Layer 2 function is quality of service (QoS) support.
Each user terminal can have multiple PDU sessions, and packets associated with those PDU sessions can map to different radio bearers. In other words, a single radio bearer would not carry traffic from different PDU sessions. The QoS flow identifier (QFI) is used to map each packet to the appropriate radio bearer (e.g., data radio bearer, DRB) that reflects the flow's QoS characteristics.
The scheduler can allocate resources based on the radio bearer attributes. The radio bearer attributes reflect the QoS flow attributes or are derived therefrom. For example, setting up the radio bearer attributes from the QoS flows may involve GBR aggregation. A radio bearer can reflect the QoS attributes of the QoS flow's 5QI. An example approach for 5QI-to-QoS characteristics mapping can be found in 3GPP TS 23.501. For example, the QoS attributes can include: resource type (GBR, non-GBR, or delay critical GBR), default priority level, packet delay budget (PDB), packet error rate, guaranteed flow bit rate (GFBR) for both uplink and downlink, and maximum flow bit rate (MFBR) for both uplink and downlink. GFBR and MFBR apply only to GBR flows.
Such QoS parameters can provide all the salient information used by the MAC layer, RLC, and scheduler for either configuring or handling packets associated with a specific radio bearer. Regarding the MAC and RLC, embodiments include a 5G-based RLC/MAC. RLC and MAC run between the user terminals and the satellite. AM mode may be configured to provide additional reliability when desired. RLC mode selection is based on the traffic flow and the desired target packet error rate. The RLC AM configuration applies to non-GBR flows with a low FER target (e.g., below 10−3). For example, configurations operate with a target 10−3 frame error rate over the service link, and with HARQ, the frame error rate can be even lower. Therefore, a 5QI mapping to a packet error rate of 10−6 may rely on operating in RLC AM mode. GBR flows may also be mapped to RLC AM mode if they have a low packet error rate and are delay tolerant. For example, “5QI 4” maps to a packet error rate of 10−6 and has a packet delay budget of 300 milliseconds. Both unacknowledged mode (UM) and AM modes provide segmentation and reassembly capabilities. Mapping to RLC AM or UM mode can be predefined or based on a set of rules.
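Such a rule set for RLC mode selection might be sketched as follows (thresholds and function names are illustrative assumptions consistent with the examples above, not a normative mapping):

```python
def select_rlc_mode(resource_type: str, packet_error_rate: float,
                    packet_delay_budget_ms: float) -> str:
    """Illustrative RLC mode selection: AM suits flows whose target error
    rate is below what the service link (~1e-3 FER target, lower with HARQ)
    can deliver, provided the flow tolerates retransmission delay."""
    needs_low_per = packet_error_rate < 1e-3
    delay_tolerant = packet_delay_budget_ms >= 300
    if resource_type == "non-GBR" and needs_low_per:
        return "AM"
    if resource_type == "GBR" and needs_low_per and delay_tolerant:
        return "AM"
    return "UM"

# Example: "5QI 4" (GBR, PER target 1e-6, PDB 300 ms) maps to AM mode.
print(select_rlc_mode("GBR", 1e-6, 300))  # -> "AM"
```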
Regarding the scheduler, embodiments can support different traffic types for different applications, services, etc. The following table shows examples of different traffic types supported by the scheduler and example applications using those services. The examples reflect the QoS characteristics of a wide range of services.
Embodiments of the scheduler allocate resources for the radio bearers based on the QoS characteristics of their associated flows. This can also apply to multicast bearers. Embodiments can group scheduled data flows into three main categories; within each category, additional treatment and configuration can be applied (a sketch of the selection order follows this paragraph). A first category can be a “strict priority” scheduler that allocates resources for signaling radio bearers, radio bearers carrying signaling traffic, RLC status PDUs, and MAC control messages. The flows assigned to this scheduler carry signaling packets and are delay sensitive. A second category can be a “GBR scheduler,” which can rely on a credit-based scheduler to track the resources given to a specific bearer so that guaranteed rates are met. In addition to the GBR rate, the delay budget can be considered, while also trying to use the channel most efficiently in order to reduce the overhead associated with each transmission, including processing, control channel signaling, and HARQ feedback. Use of semi-persistent scheduling and configured (uplink) grants can offload dynamic slot allocation and related processing at the scheduler. This can also reduce overhead associated with signaling of downlink and uplink grants on the PDCCH. A third category can be a “weighted fairness” type of scheduler. For non-GBR traffic, such a scheduler can be used to provide throughput/resource fairness and to allocate resources to GBR flows beyond the GBR and up to the MBR.
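A minimal sketch of the selection order across the three categories (the bearer fields and selection rules are simplified assumptions, not the full scheduler):

```python
from dataclasses import dataclass

@dataclass
class Bearer:
    name: str
    category: str            # "strict", "gbr", or "fair"
    weight: float = 1.0      # weighted-fairness share (non-GBR)
    gbr_credit: float = 0.0  # positive when behind the guaranteed rate

def pick_next(bearers: list[Bearer]) -> Bearer:
    """One scheduling opportunity: signaling first, then GBR bearers behind
    on their guaranteed rate, then weighted fairness among the rest."""
    strict = [b for b in bearers if b.category == "strict"]
    if strict:
        return strict[0]
    behind = [b for b in bearers if b.category == "gbr" and b.gbr_credit > 0]
    if behind:
        return max(behind, key=lambda b: b.gbr_credit)
    rest = [b for b in bearers if b.category == "fair"]
    if not rest:
        raise ValueError("no eligible bearer")
    return max(rest, key=lambda b: b.weight)
```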
Several features can be included in the uplink schedulers in the user terminals for providing desired QoS. One such feature is that the user terminal can continuously report backlog for its flows using a logical channel group. A logical channel group may contain multiple flows with similar QoS requirements. Another feature is that the uplink scheduler can allocate terminals according to logical channel group priorities and QoS requirements. Another feature is that uplink allocation grants can be for the UT and may not be flow specific. The user terminal (uplink) scheduler can pick the flows based on a set of rules and RRC configuration that comes from the SRAN. The configuration can include the flow priority and prioritized data rate. Another feature is that the user terminal follows a strict priority in selecting a flow; once a certain allocation rate is reached for a flow, its priority can drop with respect to the other flows, and other flows can be selected.
Return link resource allocation can be primarily on-demand and can use the user terminals' backlog reporting for the different types of flows. However, to provide better application layer performance, embodiments can include an unsolicited uplink grant (UUG) feature. According to this feature, a user terminal can be provided with uplink grants from an available pool of resources without an explicit request for resources from the user terminal. Such unsolicited uplink grants can be used by user terminals to transmit acknowledgements (e.g., TCP ACKs) without waiting for more than an RTT to get grants before transmitting an acknowledgement. This can significantly improve the application layer throughput.
Some other features and aspects of the scheduler can relate to beam hopping.
In addition to the cell-slot demand, the beam hopping scheduler must consider the satellite beam hopping capability, how much in advance the hopping cycle can be changed, and/or whether the cell-slot can be chosen on demand. For example,
Some other features and aspects of the scheduler can relate to half-duplex operation. Embodiments of the scheduler can support half-duplex implementations for user terminals that use both GEO and LEO satellite communications. In addition to half-duplex operation and accounting for blocking, system and parameter configuration can be optimized for maximizing transmission opportunities and throughput. For example, transmit-receive (Tx-Rx) offset configuration can be optimized for each beam and for each contact and can account for the roll/pitch profile and the beam delay spread of the satellites.
Another Layer 2 function is CU-DU flow control. Embodiments rely on an “F1” interface to transfer PDCP packets between the CU and the DU. Some relevant aspects of the F1 interface are defined in 3GPP TS 38.425. The F1 interface includes two components: F1-C, which manages the control plane; and F1-U, which is dedicated to the user plane (handling user data transmissions). The CU-DU flow control may be primarily concerned with the F1-U component and can include the following, per radio bearer: provision of F1-U interface sequence numbers; a retransmission mechanism between the CU and DU; information on PDCP packet delivery and whether or not those packets are delivered to lower layers of the user equipment; information on PDCP packets to be discarded; information on the desired buffer size in the DU and data rate in bytes (accounting for longer potential CU-DU delays in certain embodiments described herein can involve special treatment for this estimate); use of assistance information and CU-DU RTT delay to allow the CU to make better decisions and to assist the DU in its buffer management; and flow control and operation information that is specific to the radio bearer and to whether it is using RLC AM or UM mode.
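The per-radio-bearer items above might be grouped in a structure along the following lines (field names are illustrative assumptions, not taken from TS 38.425):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class F1UFlowControlInfo:
    """Per-radio-bearer F1-U flow-control information, mirroring the items
    listed above; field names are illustrative assumptions."""
    sequence_number: int                         # F1-U interface sequence number
    highest_delivered_pdcp_sn: Optional[int]     # PDCP delivery status toward the UE
    retransmit_sn_ranges: list[tuple[int, int]]  # CU-DU retransmission candidates
    discard_pdcp_sns: list[int]                  # PDCP packets to be discarded
    desired_buffer_size_bytes: int               # DU buffer target (may need special
                                                 # treatment for long CU-DU delays)
    desired_data_rate_bytes_per_s: int
    cu_du_rtt_ms: Optional[float] = None         # assistance info for CU decisions
    rlc_mode: str = "AM"                         # bearer-specific AM/UM handling
```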
Another Layer 2 function is facilitating lossless satellite handover. During a satellite handover, all the user terminals in a cell are assigned to a different satellite.
Another Layer 2 function is facilitating link adaptation and power control. In the forward direction, the most efficient modulation and coding scheme (MCS) can be selected to meet the desired target FER of 10−3 for a first transmission. The MCS selection can be based on user terminal-reported forward-link channel quality. Also, link adaptation can be used when selecting aggregation level of downlink control signaling (e.g., as needed for uplink and downlink grants). In the return direction, a combination of uplink power control and MCS selection can be used to maximize user channel efficiency and meet desired target FER, while ensuring compliance with regulatory requirements associated with transmit power per Hz.
Embodiments of the user-link air interface power control can include an open-loop and a closed-loop component. The open-loop power control ensures a desired initial power level at initial access and during handover time. Based on the forward signal level, the UT adjusts its initial transmit power to be received at the satellite with a desired nominal level. System information provides satellite transmit power and the desired target nominal level for the UT's power level determination. The closed-loop power control adjusts UT transmit power level based on filtered received SINR and the desired operating point.
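A minimal sketch of the two components follows (formulas and names are simplifying assumptions; for example, the open-loop step assumes forward and return path losses are comparable):

```python
def open_loop_tx_power_dbm(sat_tx_power_dbm: float,
                           measured_rx_level_dbm: float,
                           target_level_at_sat_dbm: float) -> float:
    """Open loop: infer path loss from the forward signal level (satellite
    transmit power and target level come from system information) and set
    the initial UT power so the satellite receives the nominal level."""
    path_loss_db = sat_tx_power_dbm - measured_rx_level_dbm
    return target_level_at_sat_dbm + path_loss_db

def closed_loop_step_db(filtered_sinr_db: float, target_sinr_db: float,
                        step_db: float = 1.0) -> float:
    """Closed loop: nudge UT transmit power toward the operating point
    based on the filtered received SINR."""
    if filtered_sinr_db < target_sinr_db:
        return +step_db
    if filtered_sinr_db > target_sinr_db:
        return -step_db
    return 0.0
```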
Embodiments described herein can use RRC as the layer-3 control plane protocol between the SRAN and the user terminal (UT). The RRC layer can be implemented based on the 3GPP 5G NR RRC protocol but customized for the unique aspects of the architecture described herein. The RRC can be part of the control plane protocol stack illustrated in
Embodiments of the RRC provide several features. One such feature of the RRC is customized system information broadcast. For example, embodiments of the RRC include a specially designed broadcast mechanism to efficiently convey constellation ephemeris to UTs to facilitate quick initial acquisition of the constellation and accurate delay and Doppler compensation. Another feature of the RRC is latency-optimized connection establishment, maintenance, re-establishment, recovery, and release. For example, RRC signaling procedures can support piggybacked NAS layer signaling to minimize signaling latencies. Another feature of the RRC is configuration of various user plane protocol layers (PHY, MAC, RLC, PDCP, SDAP).
Another feature of the RRC is enhanced security and integrity protection for signaling and user data. For example, the RRC protocol is ciphered and integrity-protected end-to-end (between UT and SRAN). This can be implemented over and above independent end-to-end encryption of 5GC signaling carried over RRC connections. Integrity protection and ciphering can be provided by the PDCP layer for signaling radio bearers (SRBs) just as for data radio bearers (DRBs). Embodiments of the RRC can manage ciphering and integrity keys for signaling and data bearers in coordination with the 5GC core network, thus avoiding reliance on additional dedicated key provisioning in the SRAN. Security associations can be established automatically at connection setup and maintained across handovers and connection reestablishment.
Another feature of the RRC is establishment, modification, and release of signaling and data bearers. For example, embodiments of the RRC support the configuration of data radio bearers (DRBs) corresponding to PDU sessions using QoS parameters signaled by the 5GC. The RRC can configure the PDCP, RLC, and MAC layers accordingly to support the corresponding flows. As described herein, embodiments of network architectures use a novel approach to dynamic label-switched routing infrastructure between the SRAN and the satellites (described below), which are connected through a constantly changing set of feeder links and ISLs (e.g., optical inter-satellite links, OISLs). When setting up data bearers, the RRC can also configure the appropriate label stack in the endpoints to ensure that the correct QoS-specific label-switched paths get used for each bearer. For example,
Another feature of the RRC is establishment of UT-UT sessions. For example, embodiments of the RRC provide special support for data bearers to support UT-UT communication. After registration and authentication with the 5GC, UT-UT sessions are set up like regular PDU sessions, but the SRAN then configures the UTs and satellite payloads to map these bearers to label-switched paths that do not transit any GNs or the 5GC. The SRAN can provide the UT with the necessary routing information identifying the current serving satellite and cell of the peer UT. The receiving satellite can then map this to the corresponding satellite-to-satellite label-switched path (LSP), which has previously been set up by the GRM's routing table updates. When the serving satellite of either peer UT changes, the SRAN updates the UT-UT routing information through RRC-level handover procedures. For UT-UT lawful interception, the PDU data is also routed to the SRAN.
Another feature of the RRC is maintaining connection continuity during constellation movement and UT movement, using customized mobility procedures to achieve fast, efficient, and seamless handovers.
In a first mobility scenario, “cell-wise satellite handoff” occurs in which a cell transitions between two satellites. In one implementation, such a scenario occurs for approximately 2.2 cells per second per satellite, or approximately once every 7 minutes per cell. Such a scenario can impact all UTs in a cell, as well as peer UTs (in UT-UT sessions). To address such a scenario, all UTs in a cell can be moved to the MAC scheduler in the new satellite. Scheduling can be suspended and resumed at activation time to minimize retransmissions. Peer UTs in UT-UT sessions can also be reconfigured.
In a second mobility scenario, a “feeder-link route change” occurs in which a serving satellite transitions between SNNs or there is a scheduled contact change. Such a scenario can impact all UTs served by a satellite. To address such a scenario, RLC/MAC contexts can remain in a same satellite, and only labels/routes can get updated. UTs need not be aware of the change.
In a third mobility scenario, an “ISL route change” occurs in which satellites move out of each other's fields of view, or there is a scheduled ISL contact change. Such a scenario can impact all UTs of at least some cells served by a satellite. To address such a scenario, RLC/MAC contexts can remain in a same satellite, and only labels/routes can get updated. UTs need not be aware of the change.
In a fourth mobility scenario, a “cell-wise frequency handoff” occurs in which frequency slot assignments of one or more cells change. Such a scenario can impact all UTs of a cell. To address such a scenario, there may be no context movement, but the UT, PHY and MAC may be reconfigured to use the new frequency-slot.
In a fifth mobility scenario, a “cell-slot handoff” occurs in which a UT moves between cells in the same TAC serviced by a same satellite. Such a scenario can impact specific UTs and any peer UTs (in UT-UT sessions). To address such a scenario, MAC context may be recreated and/or reconfigured in the same satellite. Peer UTs in UT-UT sessions may also be reconfigured.
In a sixth mobility scenario, a “UT-wise satellite handoff” occurs in which a UT moves between cells in the same TAC but serviced by a different satellite. This is similar to the cell-wise satellite handoff described above, but may only affect one UT. Such a scenario can impact specific UTs and any peer UTs (in UT-UT sessions). To address such a scenario, the RLC/MAC context may be moved to the new satellite, and peer UTs in UT-UT sessions may also be reconfigured.
In a seventh mobility scenario, an “anchor node handoff” occurs in which a UT moves between cells to a different TAC serviced by a different anchor node in a same POP. Such a scenario can impact specific UTs and any peer UTs (in UT-UT sessions). To address such a scenario, a standard 5G handover can be performed via Xn or N2 to a different AN in the same POP. The AMF and UPF can remain the same.
In an eighth mobility scenario, a “POP handoff” occurs in which a UT moves between cells to a different TAC serviced by a different anchor node in a different POP. Such a scenario can impact specific UTs and any peer UTs (in UT-UT sessions). To address such a scenario, a standard 5G handover can be performed via Xn or N2 to a different AN in a different POP. Potentially, the AMF and/or UPF may also be moved.
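For reference, the eight mobility scenarios can be condensed into a table-style summary (a restatement of the text above, with labels abbreviated):

```python
# Condensed restatement of the eight mobility scenarios described above.
MOBILITY_SCENARIOS = {
    1: ("cell-wise satellite handoff", "all UTs in the cell",
        "move UTs to the new satellite's MAC scheduler; reconfigure peer UTs"),
    2: ("feeder-link route change", "all UTs served by the satellite",
        "RLC/MAC contexts stay; update labels/routes only"),
    3: ("ISL route change", "UTs of affected cells",
        "RLC/MAC contexts stay; update labels/routes only"),
    4: ("cell-wise frequency handoff", "all UTs in the cell",
        "no context movement; reconfigure UT/PHY/MAC frequency-slot"),
    5: ("cell-slot handoff", "specific UTs (and peer UTs)",
        "recreate/reconfigure MAC context in the same satellite"),
    6: ("UT-wise satellite handoff", "specific UTs (and peer UTs)",
        "move RLC/MAC context to the new satellite"),
    7: ("anchor node handoff", "specific UTs (and peer UTs)",
        "standard 5G Xn/N2 handover to another AN in the same POP"),
    8: ("POP handoff", "specific UTs (and peer UTs)",
        "standard 5G Xn/N2 handover to another POP; AMF/UPF may move"),
}
```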
The first four mobility scenarios described above are conditions occurring due to the natural dynamics of the LEO satellite constellation. In all these cases, reconfiguration can be handled in a manner completely transparent to the core network (e.g., the 5GC), without invoking any core network mobility procedures. For example, in the first mobility scenario above, the cells at the edge of the satellite footprint transition from the region of responsibility of one satellite to that of a neighbor satellite due to the movement of the footprint (e.g., at approximately 6 km/sec). On average, 2.2 cells per second transition between satellites in this way, requiring all the UTs in each such cell to be handed over to the next satellite. The exact time of the satellite handover is known well in advance from the GRM, so the AN pre-configures the target satellite to serve the cell ahead of time using standard inter-gNB-DU mobility procedures and by updating the necessary routing information in the satellites. The only interruption in service may occur during the UT's repointing, retuning, and acquisition of the new satellite (which may typically take much less than 20 ms). For UT-UT sessions, the peer UTs are informed of the change of the destination satellite.
The second and third mobility scenarios above may only reconfigure transit routing links between the UTs' serving satellite and the SRAN. UTs may not be affected by the reconfiguration, which takes place by updating routing tables in the satellites. The fourth mobility scenario above occurs due to a planned reconfiguration of cell-slots. As this is pre-planned, the new configuration can be conveyed to the UTs in the cell in advance, similar to the procedures used in the first mobility scenario. There may be no movement of UT contexts between nodes, and the only interruption in service may occur during the UT physical layer reconfiguration and acquisition of the new frequency-slot (which, again, may typically take much less than 20 ms).
While the first four mobility scenarios described above are due to constellation dynamics, the remaining four (i.e., the fifth through eighth mobility scenarios) occur due to movement of mobile UTs between cells. When a UT moves into a different cell, it signals the SRAN, which then triggers a specific handover procedure depending on the applicable mobility case. Among these UT-related mobility events, the most frequent is likely to be the fifth scenario, in which a UT moves into its neighboring cell, and the new cell is served by the same satellite and is part of the same tracking area (i.e., the same group of cells served by the same logical gNB). In this case, the anchor node (AN) (e.g., the logical gNB) performs the reconfiguration using RAN-level RRC procedures and without involving the core network.
The sixth mobility scenario is similar to the fifth, with the only difference being that the serving satellite changes. The RRC reconfiguration procedures are also similar. In both scenarios, the anchor node remains the same, such that the data traffic forwarding point in the core network and the SRAN is not changed. The only interruption in service is during the UT repointing to the new satellite (in the sixth scenario), physical layer reconfiguration, and acquisition of the new frequency-slot. User mobility between cells may invoke standard handover procedures (e.g., as defined in 5G standards) if the user moves between areas served by different anchor nodes and/or POPs. The seventh and eighth mobility scenarios are examples of such cases, where handover procedures, such as those described in 3GPP TS 23.502 Sec. 4.9.1, can be used to relocate the UT context to a different anchor node. In these cases, the UT does need to use a RACH procedure to access the new cell, but the interruption is minimized by the use of non-contention RACH opportunities.
Another feature of the RRC is UT location reporting. Whether a UT is camped on a CFS (frequency-slot in a cell) in idle state or is connected to the network, it is aware of the current cell in which it is located due to system information it receives in the CFS. A moving UT reports its current location (minimally, its cell location) to the SRAN through RRC procedures periodically and/or based on a movement threshold criterion. UTs may not be required to report their exact coordinates to the network for mobility management procedures. The air interface may allow for optional reporting of terminal GPS coordinates. When the UT is in idle mode, the SRAN can track its current cell location to contact it later using RAN-based paging (e.g., the “efficient paging” below). When the UT is in connected mode, the reported cell location can be used by the SRAN to trigger the appropriate handover procedure.
Another feature of the RRC is efficient paging at the cell level and support for data transfer between UT and the network via an external GEO system. Paging in the network can be implemented efficiently due to several features that provide for targeted paging of UTs in the cell or cells in which the UT is likely to be found. The SRAN can track the UT's cell location by means of location reports. Based on configurable inactivity criteria, the SRAN can suspend the UT's RRC connection and move the UT to an RRC inactive state in the SRAN while it remains CM-Connected in the AMF. In this state, the SRAN still maintains the UT's RRC context and can resume it through RAN-based paging when a trigger from the core network is received, as long as it remains within a group of cells served by the same anchor node (e.g., in the same TAC). This can be more efficient than core network-based paging (e.g., as defined in 5G specifications) at least because the UT is paged at the cell level, which tends to use fewer resources.
In RAN-based paging, the SRAN can implement paging dilation. This can involve paging only in the target cell or TAC initially; if no response is received, the paging can be expanded to additional surrounding cells. When a UT's RRC context is released, it enters an RRC idle state. In this state, the UT reports its current TAC to the core network, and paging can be handled by the core network, typically at the TAC level (i.e., in all the cells of the tracking area). However, even in core network-based paging, the SRAN supports optimized cell-level paging based on optional 5G-defined paging assistance information, such as paging attempt information and recommended cells and RAN nodes for paging, as described in 3GPP TS 38.300. If the AMF supports paging assistance information, the SRAN can use that feature to implement targeted cell-level paging.
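Paging dilation lends itself to a very short sketch (the neighbor-expansion policy shown is an assumption; the text only requires expansion to surrounding cells after a failed attempt):

```python
def ran_paging_targets(last_known_cell: str,
                       neighbors: dict[str, list[str]],
                       attempt: int) -> list[str]:
    """Page only the last known cell first; if no response was received,
    later attempts expand to the surrounding cells."""
    if attempt == 0:
        return [last_known_cell]
    return [last_known_cell] + neighbors.get(last_known_cell, [])
```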
Embodiments of unique network architectures are described herein. Assuming that a user terminal (UT) is a fixed UT, the UT may go through the following steps to access these network architectures. As a first step, the UT may acquire time and frequency reference and location information. This can involve obtaining GNSS synchronization and getting a 3D fix on location. The time required may vary depending on GNSS receiver status and capabilities. As a second step, the UT may acquire antenna tilt and true north offset. For example, antenna tilt may be determined either from an internal sensor or from measured tilt data entered at installation. Antenna true north offset may be determined from internal sensors, from heading data entered at installation, or from an antenna calibration procedure executed after acquiring the constellation. In the case of a post-acquisition calibration procedure, the process of acquiring the constellation could take considerably longer due to the heading uncertainty.
As a third step, the UT may acquire the forward link on a LEO satellite of the satellite constellation. For example, the UT executes a search sequence in a multi-dimensional search space (space, time, frequency, polarization) until a satellite forward link signal is identified. This can be performed with no stored ephemerides (“cold start”) or with valid, loaded, or previously stored ephemerides (“warm start”). As a fourth step, the UT may acquire updated system broadcast information. If ephemeris for the initially acquired satellite is not available, the UT can track the satellite using signal strength or SNR during system information acquisition. The UT can acquire required system information, including updated constellation ephemerides, frequency plan, local and neighbor cell and satellite information, uplink and downlink parameters, synchronization parameters, etc. As a fifth step, the UT can select a cell for system access and complete the connection, authentication, registration, and bearer establishment process.
As noted with reference to the third step, the UT can be started up with no stored ephemerides (“cold start”) or with valid, loaded, or previously stored ephemerides (“warm start”). In a cold start initial startup condition, it is estimated that GNSS acquisition and Time to First Fix (TTFF) involves approximately two minutes until the local oscillator is disciplined and approximately ten minutes for GNSS lock to continue in parallel with forward link acquisition. The forward link acquisition can take approximately six minutes per full scan, which can involve: a spatial scan of 72 spatial hypotheses at 5° azimuth spacing and fixed 30° elevation, 8 frequency × 2 polarization hypotheses, a 300 ms beam hopping acquisition cycle time, and one acquisition beam hopping cycle per hypothesis. Additionally, it can take less than one second to acquire system information and to complete network connection, authentication, and registration with the CN (e.g., including bearer setup, etc.). Because the target satellite is moving while the scanning process is ongoing, a full spatial scan might not result in a hit. Multiple cycles of full scans may be needed until a satellite is acquired.
In comparison, in a warm start initial startup condition, it is estimated that GNSS acquisition and Time to First Fix (TTFF) takes only approximately two seconds. The forward link acquisition (with no heading uncertainty) can take approximately five seconds with a 300 ms beam hopping acquisition cycle time and one acquisition beam hopping cycle per hypothesis. As in the cold start condition, it can take less than one second for acquiring system information and for network connection, authentication, and registration with the CN (e.g., including bearer setup, etc.). If needed (e.g., if heading uncertainty exists), antenna true north offset calibration can take minutes (up to tens of minutes), but this can be performed in the background while service is available. Based on the above, warm start should complete in approximately 3-5 seconds with no azimuth uncertainty, and in less than 30 seconds if azimuth searching is needed.
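The cold-start scan-time figure above follows from the stated hypothesis counts and cycle time; a quick check:

```python
# Cold-start scan arithmetic from the text.
spatial = 72        # spatial hypotheses (5-degree azimuth spacing, 30-degree elevation)
frequency = 8       # frequency hypotheses
polarization = 2    # polarization hypotheses
cycle_s = 0.3       # one 300 ms acquisition beam hopping cycle per hypothesis

hypotheses = spatial * frequency * polarization
scan_minutes = hypotheses * cycle_s / 60
print(f"{hypotheses} hypotheses -> {scan_minutes:.1f} minutes per full scan")
# -> 1152 hypotheses -> 5.8 minutes (the "approximately six minutes" above)
```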
Embodiments described herein use a novel dynamic label-switched routing infrastructure between the SRAN and the satellites (e.g., the LEO constellation), which are connected through a constantly changing set of feeder links and ISLs (e.g., OISLs). The label-switched routing layer can provide efficient and seamless connectivity between protocol entities in the satellite payload and those in the SRAN. The routing layer is used to carry both user plane and control plane protocols, as illustrated in
Features of embodiments of the RRC layer, interface layer, L2 layer, and PHY layer are described above. A satellite-side RRC (SAT-RRC) 2510 and an anchor-router-layer-side RRC (AN-RRC) 2516, both in the RRC layer, can be considered as end points of communications between the satellite constellation 2502 and the SRAN 2504. SAT-RRCs 2510 and AN-RRCs 2516 are connected to corresponding sides of label-switched paths (LSPs) via respective instances of interfaces (i.e., the interface layer), including respective F1 access point, SCTP, IP/IPsec, and/or other interfaces. The LSPs are effectively interconnections of a label-switched routing “cloud” that interconnect the satellites of the satellite constellation 2502 with the SRAN 2504 nodes.
LSPs can act as virtual circuits that connect one node in the label-switched routing cloud with another. The endpoints of the LSPs are label edge routers (LERs) and the intermediate nodes through which an LSP passes are transit label-switched routers (LSRs). As illustrated, there can be a satellite-side LER (SAT-LER) 2512, a satellite-side LSR (SAT-LSR) 2514, a SRAN-side LER (SRAN-LER) 2518, and a SRAN-side LSR (SRAN-LSR) 2520. In some embodiments, the SRAN-LER 2518 is in the anchor node 2508, and the SRAN-LSR is in the RFT 2506.
Referring back to
Returning to
As illustrated (and further in
In embodiments described herein, the label-based routing is implemented with particular features. For example, embodiments perform label-based routing of data packets received on an input interface based on a label stack in the header of each packet. Each label in the label stack identifies a next hop node to which to route the packet. When an LSR is reached (e.g., a SAT-LSR 2514 or a SRAN-LSR 2520), the LSR can pop the topmost label in the stack and route the packet to the neighbor node identified by the popped label. The label may also contain additional information that helps the LSR direct the packet to the correct link or queue. The LSR maintains a neighbor link table that identifies the link to use for each neighbor.
Occasionally, the LSR may need to take additional actions, such as rerouting around a failed link, or load balancing across a set of aggregated interfaces. The ingress LER performs the label attachment function. Label attachment consists of adding a stack of labels associated with the LSP to be used based on at least the destination of the packet. In some implementations, the labels are based further on the type of traffic (e.g., different traffic types are routed through the label-switched routing cloud via different LSPs). The LER can maintain an LSP routing table that identifies the traffic type, destination, and label stack to be used for each LSP. The egress LER can pop the last label in the stack and pass the packet up to the IP layer, which delivers the packet to the client application based on transport layer headers (e.g., UDP, SCTP).
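The LER/LSR behavior described above can be sketched minimally as follows (structures and names are illustrative; the identifiers match the routing walkthrough later in this section):

```python
from dataclasses import dataclass

@dataclass
class Label:
    next_hop: str      # next-hop node identifier
    lb_index: int = 0  # load-balancing hash/index

def ingress_attach(lsp_table: dict[tuple[str, str], list[Label]],
                   traffic_type: str, destination: str) -> list[Label]:
    """Ingress LER: attach the label stack for the LSP keyed by traffic type
    and destination."""
    return list(lsp_table[(traffic_type, destination)])

def lsr_forward(stack: list[Label],
                neighbor_links: dict[str, str]) -> tuple[str, list[Label]]:
    """Transit LSR: pop the topmost label and route toward the identified
    neighbor using the (small) local neighbor link table."""
    top, rest = stack[0], stack[1:]
    return neighbor_links[top.next_hop], rest

# Example: a packet at SAT1 destined for AN1.1 carrying {SAT2, SNN1, AN1.1}.
stack = [Label("SAT2"), Label("SNN1"), Label("AN1.1")]
link, stack = lsr_forward(stack, {"SAT2": "ISL-to-SAT2"})
```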
The control plane component of the routing layer is responsible for maintaining the LSP routing tables used by the LERs and the neighbor link tables used by the LSRs. Unlike in conventional label-based approaches, the calculation of label-switched routes in embodiments herein is performed by the global resource manager (GRM) (e.g., by a route determination function (RDF)), which is a centrally located entity. Each node involved in implementing the routing layer has a resource management entity that interfaces with the GRM to maintain the necessary control information.
The AN-RM function 2610 updates the LSP routes in a satellite-side resource manager (SAT-RM) function 2616 located in the satellite 2614 endpoints. This communication occurs through the routing layer infrastructure using the F1-C interface between the CU and DU (e.g., gNB-CU Configuration Update and UE Context Modification for cell-level and UT-specific routes, respectively). SNN sites 2604 do not need to maintain LSP route tables because they act solely as transit nodes.
Embodiments of the GRM 2612 configure the SAT-RM 2616 in each satellite payload processor with the schedules of feeder link and ISL contacts, using the TT&C infrastructure via the SOC. In turn, the SAT-RM 2616 updates the GRM 2612 with ISL status changes that may impact the backhaul topology. The GRM 2612 configures the anchor nodes 2608 with neighbor SNN site 2604 relationships, and the AN-RMs 2610 update the GRM 2612 with status changes of the anchor nodes 2608 or anchor node-to-SNN links that might impact the backhaul topology. In response to the status updates, the GRM can recompute affected routes and update the associated endpoint anchor nodes 2608 with the new routes.
Some conventional label-based routing approaches (e.g., conventional MPLS) create routes based on IGP routing information and status updates from routers, and distribute them via a label distribution protocol. In embodiments described herein, the routing is both mostly deterministic and highly dynamic, which calls for a very different route generation and distribution approach. In embodiments described herein, the GRM 2612 (see
In some embodiments, the GRM 2612 can also create UT- and VC-specific LSP routes for virtual connections (VCs) with non-default routing policies. The GRM 2612 can configure these ahead of time in the anchor node responsible for the corresponding subscribers so that they are available to be used when those PDU sessions are activated. This can involve identifying subscribers and PDU sessions that are accessible to both the SRAN and the GRM. Normally, subscriber permanent identities may not be known to the 5G RAN by design due to privacy considerations. Embodiments of the GRM 2612 may need to create on-demand LSP routes when UTs register for UT-UT communication. If the identities of the UTs are known beforehand, these routes can be created ahead of time. For example, multiple LSP routes may be needed between the same pair of endpoints to support different classes of traffic (e.g., delay-sensitive vs. bulk); the GRM 2612 can use different metrics to compute optimal routes for the LSPs of the different traffic classes (e.g., latency vs. capacity).
Embodiments of the GRM 2612 determine label-switched routes for the LSPs primarily based on the schedules of feeder link contacts and ISL contacts generated by the GRM 2612 (see
For added clarity, several examples of the novel packet-based routing approach described herein follow.
The illustrated example uses UT-POP traffic for a UT located in a cell (target cell 2902) that is anchored at AN1.1 in POP1 2602 and is currently being served by satellite SAT1 2614-1. In the forward direction, SAT1 2614-1 receives a message from the UT and constructs the corresponding F1-U (or F1-AP) message, adding a routing header containing the label stack. This can be a PDU session-specific, UT-specific, or cell-default route. The illustrated route is {SAT2, SNN1, AN1.1}. The DU in a satellite receives the default DU-CU route label stack as part of DU-CU association configuration or a cell-level configuration update from the CU. This is the default route label stack used for UT-AN communication.
UT-specific or PDU session-specific routes can be configured in the DU via an F1-AP UE context update procedure. For example, SAT1 2614-1 looks up the ISL link for SAT2 2614-2 and forwards the packet to SAT2 2614-2. SAT2 2614-2 pops the top label, such that the remaining stack is {SNN1, AN1.1}. Accordingly, SAT2 2614-2 finds that the next hop is SNN1. It can look up the present feeder link to SNN1 and can forward the packet to SNN1. As illustrated, the present feeder link between SNN1 2604-1 and SAT2 2614-2 at the time of the transaction is RFT1.1, so RFT1.1 effectively becomes the next hop. A load-balancing index in the label can be used to select a feeder link channel. RFT1.1 in SNN1 2604-1 pops the top label, such that the remaining stack is {AN1.1}. Accordingly, RFT1.1 finds that the next hop is AN1.1 in POP1 2602. It looks up the transport address of AN1.1 and forwards the packet to it. If this is a user data packet, the load-balancing index in the label can identify the specific CU-UP instance. AN1.1 (a CU-CP or CU-UP instance) can pop the top label, such that the remaining stack is { } (i.e., empty, or null). Accordingly, AN1.1 can determine that it is the end node. It can remove the routing header and pass the packet to upper layers.
For traffic in the reverse direction, AN1.1 (a CU-CP or CU-UP instance) has the label stack for this cell/UT/PDU session in the corresponding UT context, which has been obtained previously from the GRM 2612. The label stack to be used in this case is {SNN1, SAT2, SAT1}. Based on knowledge of the current RFT-satellite assignments shared by the SNN with the POPs, the anchor node can update this to {RFT1.1, SAT2, SAT1}. AN1.1 can construct a corresponding F1-U or F1-AP message with the routing header and send it to RFT1.1. RFT1.1 can pop the top label to confirm that the next hop is SAT2 2614-2 and can forward the packet to SAT2 2614-2 accordingly. A load-balancing index in the label can be used to select a feeder link channel. SAT2 2614-2 can pop the top label to find that the next hop is SAT1 2614-1, can look up the ISL link for SAT1 2614-1, and can forward the packet to SAT1 2614-1 over the appropriate ISL. SAT1 2614-1 can pop the top label to find that it is the end node. It can remove the routing header and pass the packet to upper layers, where the F1-U or F1-AP terminates. The upper layer handles the packet by delivering it to the UT or processing the control message locally.
Another example routing case is for UT-to-UT sessions. This routing case can be similar to the previous one, except that the route may not involve any SNNs or feeder links (i.e., only satellites and ISLs). UT-UT session routes are inherently UT-specific routes that are configured into the UE contexts of each participating UT at the corresponding DUs through F1-AP configuration update procedures once both endpoints have registered with the central UT-UT registration server. The control plane path for this signaling uses the UT-AN routing infrastructure described previously. In addition to the DU-to-DU label stack used to route the traffic for a UT-UT session, the satellite DU can also use a DU-CU label stack to route session control signaling and lawful intercept traffic.
Another example routing case is for multicast sessions. Traffic on a multicast session is unidirectional (i.e., only in the forward direction). It flows from the multicast gateway (MCG) in the core towards the POPs, and via the satellites to the UTs participating in the multicast session. The traffic is carried over multiple unicast PDU sessions up to the POP, where it is combined at the CU-UP level into a single multicast bearer per multicast session per cell. Additional features are described and illustrated with reference to
In some conventional label-based routing approaches, the routing label identifies its LSP and/or a table entry. In the label-based routing approaches described herein, the routing label identifies the next-hop neighbor. Such an approach can tend to avoid large tables and to eliminate the need to update tables in transit nodes as the topology of transit links continually changes (which, as described herein, is a concern not addressed in many conventional approaches). Accordingly, the LSR can then look up a very small neighbor table. For example, the routing label is a 32-bit label with fields that contain a next-hop node type and identifier, a load-balancing hash/index, a priority and congestion indicator, and a flag to indicate the last label in the stack.
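A possible packing of such a 32-bit label is sketched below; the field widths are assumptions for illustration only, as the text does not specify them:

```python
# Assumed layout (bit widths hypothetical):
#   [ type:3 | node_id:16 | lb_index:6 | priority:3 | congestion:1 | rsvd:2 | last:1 ]

def pack_label(node_type: int, node_id: int, lb_index: int,
               priority: int, congestion: int, last: int) -> int:
    assert node_type < 8 and node_id < 2**16 and lb_index < 64
    assert priority < 8 and congestion < 2 and last < 2
    return ((node_type << 29) | (node_id << 13) | (lb_index << 7)
            | (priority << 4) | (congestion << 3) | last)

def is_last_label(label: int) -> bool:
    """Flag indicating the last label in the stack."""
    return bool(label & 1)
```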
In embodiments described herein, the central GRM is tasked with tracking scheduled backhaul topology changes and updating system-wide LSP routes proactively. Additionally, the GRM can handle on-demand route creation triggered by the establishment of UT-UT sessions and PDU sessions, as well as route updates due to cell transitions between satellites, UTs moving between cells, and/or UT blockage mitigation. As described herein, feeder link and ISL setups and teardowns and the transitioning of cells between satellites can occur continually, and a large number of LSP routes can be affected by these constant topology changes. As such, embodiments of the GRM continually update all satellites and SNNs with link schedules, and all anchor nodes with updated routes, through direct interfaces to those nodes. These interfaces are described above and further as follows.
For example, embodiments of the GRM interface with the anchor nodes via a GRM-AN interface. Such an interface can be via the WAN infrastructure to each AN-CU instance and can be used to configure and update LSP routes at the cell, UT, and PDU session levels. The GRM-AN interface can also be used to obtain status and load updates on anchor nodes and AN-SNN links. The GRM-AN interface can also provide UT mobility events to the GRM that require the GRM to generate updated routes for a UT, such as blockage reports (triggering a satellite handover) and UT location updates.
Embodiments of the GRM can also interface with the SNN sites via a GRM-SNN interface. Such an interface can be via the WAN infrastructure to each SNN-RM and can be used to configure feeder link contacts. The GRM-SNN interface can also be used to obtain status and load updates on RFTs and feeder links. Embodiments of the GRM can also interface with the satellites via a GRM-SAT interface. Such an interface can be via the TT&C channel to each SAT-RM and can be used to configure ISL and feeder link contacts, cell-AN associations, and default routes. The GRM-SAT interface can also be used to obtain status and load updates on ISLs.
As described herein, initial LSP routes (e.g., all such routes) can be determined by the GRM from feeder link and ISL schedules. However, there can also be unexpected link outage and/or restoration events, and the GRM is configured to react to such events. The SAT-RM in the satellite and the SNN-RM in the SNN sites can convey link outage and restoration events to the GRM so that the GRM can recalculate affected routes. To minimize packet loss due to failed links until the GRM can calculate and distribute new routes, embodiments can also support fast local rerouting for ISL failures. This can be done by providing each transit satellite node with a local fallback sub-route to be used in case of failure of a direct ISL to a neighbor. An LSR that finds a failed ISL output interface can temporarily use the fallback route until the GRM provides an updated route that no longer uses the failed link.
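A minimal sketch of this fast local rerouting behavior follows, assuming each transit satellite holds a GRM-provisioned fallback label sub-route per neighbor; the function and parameter names are hypothetical.

```python
def select_output(next_hop, isl_up, fallback_subroutes, label_stack):
    """On a transit satellite (LSR): if the direct ISL to the next hop is
    up, forward on it; otherwise splice in the local fallback sub-route so
    traffic keeps flowing until the GRM distributes recalculated routes."""
    if isl_up.get(next_hop, False):
        return next_hop, label_stack
    fallback = fallback_subroutes.get(next_hop)
    if fallback is None:
        return None, label_stack          # no local repair available
    # Prepend the fallback labels; the sub-route rejoins the original path.
    return fallback[0], fallback[1:] + label_stack
```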
As described above, embodiments of SRANs described herein include anchor nodes and SNN sites. Protocol layers implemented in such embodiments are described in turn as follows.
Turning first to the service data adaptation protocol (SDAP), QoS flows in the types of networks described herein (e.g., 5G networks) do not inherently have a one-to-one mapping with radio bearers. 3GPP specifications define an additional SDAP layer above the PDCP layer to map one or more QoS flows to a data radio bearer (DRB).
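In other words, SDAP maintains a many-to-one mapping from QoS flow identifiers (QFIs) to DRBs, as in the following illustrative sketch (the QFI values and DRB identifiers are arbitrary examples):

```python
# Sketch of the SDAP mapping described above: several QoS flows (QFIs)
# can share one DRB (many-to-one).
qfi_to_drb = {1: "DRB1", 2: "DRB1", 5: "DRB2"}

def sdap_downlink(qfi: int, sdu: bytes):
    """Map a downlink SDU to its radio bearer before handing it to PDCP."""
    drb = qfi_to_drb[qfi]
    return drb, sdu

assert sdap_downlink(2, b"payload")[0] == "DRB1"
```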
Turning to packet data convergence protocol (PDCP), the PDCP layer provides service to the SDAP and RRC layers, including transfer of user and control plane data, ciphering and integrity protection, and header compression. Regarding ciphering and integrity protection, embodiments can use AES-256 encryption. The PDCP sequence number (SN) can be either 12 bits or 18 bits. The 18-bit sequence number is especially useful for GEO compatibility of the air interface and/or when WAN infrastructure delays are on the order of hundreds of milliseconds. According to 5G standards, the maximum size of the PDCP service data unit (SDU) is 9,000 bytes for both data and control, which permits carriage of 9,000-byte jumbo frames. To prevent packet loss during handover, PDCP status reporting can be enabled.
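The benefit of the longer sequence number can be illustrated with rough numbers: the PDCP reordering window is half the SN space, and packets in flight over one round trip must fit within it. The link rate and packet size below are assumptions for illustration only:

```python
# Rough illustration (assumed numbers): why an 18-bit PDCP SN suits
# GEO-class delays. The window is half the SN space; packets in flight
# within one round trip must not exceed it, or SNs become ambiguous.
def window_duration_s(sn_bits, rate_bps, pkt_bytes):
    window = 2 ** (sn_bits - 1)          # half the SN space
    pkts_per_s = rate_bps / (pkt_bytes * 8)
    return window / pkts_per_s

rate, pkt = 1e9, 1500                     # assumed 1 Gbps bearer, 1500-B packets
print(window_duration_s(12, rate, pkt))   # ~0.025 s: too short for ~500 ms GEO RTT
print(window_duration_s(18, rate, pkt))   # ~1.57 s: covers hundreds of ms of delay
```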
Regarding header compression, for Ethernet-type PDU sessions, PDCP provides Ethernet Header Compression (EHC) to compress the Ethernet header.
At the SDAP/PDCP (i.e., CU-related) level, embodiments additionally support network slicing and virtual connections (VCs). Network slicing is a mandatory 5G feature that allows network resources to be dedicated to each of several network slices. Within each slice, a 5G QoS Indicator (5QI) can be applied to different flows. Each slice can carry one or more virtual connections (i.e., PDU sessions) from among the 15 virtual connections a UT can have. Each virtual connection can support up to 10 QoS flows. An example of such a virtual connection is illustrated in the appended figures.
In some embodiments of the network architectures described herein, each UT can support a maximum of eight slices, which can be standardized slices, non-standardized slices, or a mix of standardized and non-standardized slices. Standardized slices are defined by 3GPP standards. For example, 3GPP currently defines five standardized slice types (SST) as shown in the following table.

SST Value | Slice/Service Type
---|---
1 | eMBB (enhanced Mobile Broadband)
2 | URLLC (Ultra-Reliable Low-Latency Communication)
3 | MIoT (Massive Internet of Things)
4 | V2X (Vehicle-to-Everything)
5 | HMTC (High-performance Machine-Type Communication)
“SST” is an 8-bit value, where values 0-127 are reserved for standardized slice types, and values 128-255 are reserved for non-standardized slice types. In some embodiments, the slices in a UT (e.g., 8 slices) can be purposed based on the needs of different business cases, such as business-to-customer, military, government, etc. To accommodate these cases, network slicing using non-standardized SST values can be used. A service provider share concept can be used to schedule and provide radio resource slicing from an SRAN perspective.
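The per-UT limits described above (eight slices, fifteen virtual connections, ten QoS flows per virtual connection, and the 8-bit SST split) can be collected into a small validation sketch; the limits are taken from this description, while the function names and example values are illustrative:

```python
MAX_SLICES_PER_UT, MAX_VCS_PER_UT, MAX_FLOWS_PER_VC = 8, 15, 10

def sst_kind(sst):
    """Classify an SST per the 8-bit split described above."""
    if not 0 <= sst <= 255:
        raise ValueError("SST is an 8-bit value")
    return "standardized" if sst <= 127 else "non-standardized"

def validate_ut_config(slices):
    """slices: list of (sst, [per-VC QoS flow counts]) tuples for one UT."""
    if len(slices) > MAX_SLICES_PER_UT:
        raise ValueError("a UT supports at most 8 slices")
    total_vcs = sum(len(vcs) for _, vcs in slices)
    if total_vcs > MAX_VCS_PER_UT:
        raise ValueError("a UT supports at most 15 virtual connections")
    for sst, vcs in slices:
        sst_kind(sst)                       # range check
        if any(n > MAX_FLOWS_PER_VC for n in vcs):
            raise ValueError("each virtual connection supports up to 10 QoS flows")

# Example: one standardized eMBB slice (SST 1) and one business-specific
# non-standardized slice (SST 200), each carrying two virtual connections.
validate_ut_config([(1, [3, 1]), (200, [10, 2])])
```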
Descriptions herein refer to user terminals. In some embodiments, “reference” user terminals (RUTs) are designed to verify and validate end-to-end system goals, constraints, etc. The RUT supports the same architecture and functional interfaces as a production user terminal for a fixed environment. In some implementations, variants of RUTs are designed to represent variants of production user terminals, such as to be representative of a mobile environment (aero, maritime, and land).
An example RUT implementation can be described with reference to a functional block diagram, as follows.
A controller FPGA on the CM can provide antenna and RF control to the antenna sub-system and can also control the RCM module. The controller FPGA can also control the power of the BFA and the RCM and can provide fault management services for the CM system. A modem FPGA can provide modem functionality in accordance with air interface specifications (e.g., as described herein). The modem FPGA can connect to an IF transceiver that has two transmit path interfaces and two receive path IF interfaces. In some implementations, the CM includes an inertial navigation system (INS) module and/or other modules and/or interfaces to support aero mode. Embodiments of the CM can incorporate DC-DC converters to power some or all terminal elements.
Embodiments of the RUT are responsible for communicating traffic to and/or from the user interfaces, initiating network calls through system architectures described herein, handling fault management and recovery, handling logging and statistics, etc. Some embodiments of UTs are implemented using a two-chip processing platform: one chip for UT management and network services, and one for modem activities. Nonetheless, embodiments of the RUT can be implemented using a single processing unit (e.g., with multiple cores), combining all the UT processing under one system.
The air interface modem can be based on 5G NR standards. A forward-link modem FPGA can support two adjacent 250 MHz bandwidth channels for purposes of carrier aggregation. There can be two receive signals of 500 MHz bandwidth each, of which only one may be active at any time to support a third-party antenna subsystem. On the return link, the SOC may support two transmit carriers of 125 MHz bandwidth for uplink carrier aggregation. The two 125 MHz carriers may be contiguous and sent as a 250 MHz bandwidth signal at a 4 GHz IF. There may be two transmit signals of 250 MHz bandwidth, of which only one may be active at any time to support a third-party antenna subsystem. The IF transceiver can interface to the modem FPGA using a JESD interface.
There can be dedicated ARM processors in the modem FPGA to assist the hardware accelerators in functions such as cell search, link adaptation, beam hopping, handovers, and cold/warm acquisition. The modem can provide TX_ON_1, TX_ON_2, RX_ON_1, and RX_ON_2 signals to be used by the antenna subsystem as enable signals for transmit and receive, respectively. The modem can use a GPS-provided reference clock as a reference for the baseband processor and IF transceiver and can supply a 25 MHz reference signal for the RCM.
Various systems are described herein. Embodiments of those systems and/or components of those systems can be implemented using a computational system, such as the example computational system 4200 described as follows.
The processor(s) 4270 can include one or more cores, such as a multi-core processor for parallel processing. The processor(s) 4270 can be special-purpose processors and/or general-purpose processors that are configured for special purposes described herein. For example, the main memory 4230 (and/or read-only memory 4240 and/or external storage device 4210) can include non-transitory, processor-readable memory having instructions stored thereon. When the instructions are executed, they can effectively reconfigure the processor(s) 4270 by causing the processor(s) 4270 to perform operations implementing specific features of embodiments described herein. For example, methods and processes described herein can be implemented by programming the processor(s) 4270 to perform the steps of those methods and processes.
The communication port(s) 4260 may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) 4260 may be chosen depending on a network, such as a local area network (LAN), wide area network (WAN), or any network to which the computational system 4200 connects. The main memory 4230 may be random access memory (RAM) or any other dynamic storage device commonly known in the art. The read-only memory 4240 may be any static storage device(s), including, but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor 4270. The mass storage device 4250 may be any current or future mass storage solution, which may be used to store information and/or instructions.
The bus 4220 communicatively couples the processor 4270 with the other memory, storage, and communication blocks. The bus 4220 may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), universal serial bus (USB), etc., for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front-side bus (FSB) that connects the processor 4270 to the computer system 4200. Optionally, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device, may also be coupled to the bus 4220 to support direct operator interaction with the computer system 4200. Other operator and administrative interfaces may be provided through network connections connected through the communication port(s) 4260. In no way should the exemplary computational system 4200 limit the scope of the present disclosure.
The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
Also, configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks.
Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered.
This application claims priority from U.S. provisional patent application No. 63/541,148, filed on Sep. 28, 2023, titled “SYSTEM AND METHODS FOR 5G BASED NGSO OPERATION WITH NON-TRANSPARENT SATELLITES”; and from U.S. provisional patent application No. 63/579,459, filed on Aug. 29, 2023, titled “NETWORK AND PROTOCOL ARCHITECTURES FOR 5G COMMUNICATION USING NON-GEOSTATIONARY SATELLITES”; the entire disclosures of which are incorporated herein by reference.
Number | Date | Country
---|---|---
63/541,148 | Sep. 28, 2023 | US
63/579,459 | Aug. 29, 2023 | US