The 3rd Generation Partnership Project (3GPP) is a collaboration between groups of telecommunications associations. The 3GPP standard encompasses radio access networks (RAN), services and systems aspects, and core network and terminals. The 3GPP standard caters to a large majority of telecommunications networks, and 3GPP is the standards body behind Universal Mobile Telecommunications System (UMTS)/3G, Long-Term Evolution (LTE)/4G, and New Radio (NR)/5G. In 3GPP networks, the reduction of network latency has been of increased interest as bandwidths have risen.
To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:
The FIGURES of the drawings are not necessarily drawn to scale, as their dimensions can be varied considerably without departing from the scope of the present disclosure.
The following detailed description sets forth example embodiments of apparatuses, methods, and systems relating to a direct radio interface. Features such as structure(s), function(s), and/or characteristic(s), for example, are described with reference to one embodiment as a matter of convenience; various embodiments may be implemented with any suitable one or more of the described features.
In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the embodiments disclosed herein may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the embodiments disclosed herein may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof where like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).
UE 102 can include a UE interface engine 118. Base station 104 can include a base station interface engine 120 and a MAC layer 198. Data center 106 can include a data center interface engine 122 and a data center gateway 124. Service provider control node 108 can include a provider control engine 126. Provider control engine 126 can include a data center control register 128.
Using UE interface engine 118, base station interface engine 120, data center interface engine 122, data center gateway 124, provider control engine 126, and data center control register 128, system 100 can be configured to establish a direct radio interface (DRI) between base station 104 and data center 106. For example, UE 102 can connect to base station 104 and base station 104 can be in communication with data center 106 over standard 3GPP using S-GW 110 and P-GW 112. A DRI connection can be requested either by data center interface engine 122 or UE interface engine 118. The DRI connection request is sent to service provider control node 108 and the DRI connection can be allowed by provider control engine 126. In an example, service provider control node 108 can be a serving general packet radio service support node. Data center control register 128 can be configured to include user data that will be used by provider control engine 126 to help determine if a DRI connection should be allowed, and if so, under what parameters. Once the DRI connection is established, base station 104 can communicate with data center 106 through the DRI connection and bypass the service provider's core infrastructure (e.g., S-GW 110 and P-GW 112). The term “DRI” includes an interface that allows an electronic device (e.g., a server) to communicate with a specific base station's air interface on a radio bearer level. The DRI connection targets specific base stations and specific radio bearers.
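The request-and-approval flow described above can be sketched as follows. This is a minimal illustration only; the names (DriPolicy, DataCenterControlRegister, allow_dri) and the policy fields are assumptions made for the sketch and are not part of any 3GPP specification or of system 100 as claimed:

```python
from dataclasses import dataclass, field

@dataclass
class DriPolicy:
    """Illustrative per-user parameters a provider control engine might attach to a DRI."""
    max_bandwidth_mbps: int
    encryption_required: bool

@dataclass
class DataCenterControlRegister:
    """User data consulted when deciding whether a DRI connection should be allowed."""
    allowed_users: dict = field(default_factory=dict)  # user_id -> DriPolicy

    def lookup(self, user_id):
        return self.allowed_users.get(user_id)

def allow_dri(register, user_id, base_station_id, data_center_id):
    """Return the DRI connection parameters if allowed, else None."""
    policy = register.lookup(user_id)
    if policy is None:
        # No agreement on file: traffic stays on the standard 3GPP path.
        return None
    return {
        "base_station": base_station_id,
        "data_center": data_center_id,
        "policy": policy,
    }
```

In this sketch, a rejected request simply leaves the existing S-GW/P-GW path in place, consistent with the standard 3GPP connection remaining available.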
In an example, system 100 can be configured to establish a communication path using a service provider's core infrastructure and the service provider's radio access network (RAN) between a base station (e.g., base station 104) and a data center (e.g., data center 106), request a new DRI communication path be established as a new communication path between the base station and the data center, and establish the DRI communication path between the base station and the data center, where the DRI communication path bypasses the service provider's core infrastructure and/or the service provider's RAN. The service provider's core infrastructure can be part of a 3GPP network and include an S-GW (e.g., S-GW 110) and a P-GW (e.g., P-GW 112). In general, the term “core infrastructure” includes the functional communication facilities that interconnect primary nodes and delivery routes used to exchange information among various sub-networks. The term “core infrastructure” includes a network core, core network, and backbone network.
UE 102 can include mobile devices, personal digital assistants, smartphones, tablets, wearable technology, laptop computers, Internet of Things (IoT) devices, desktop computers, or other similar devices. Base station 104 can be a base transceiver station, cell site, base station, etc. that is configured to facilitate communications (e.g., wireless communication) between UE 102 and a network (e.g., network 116). Data center 106 can include one or more servers and/or one or more cloud networks. Data center 106 can be used to house computer systems and associated components, such as telecommunications and storage systems. S-GW 110 can forward user data packets, while also acting as a mobility anchor for the user plane during handovers (e.g., inter-eNodeB handovers) and as the anchor for mobility between LTE and other 3GPP technologies.
P-GW 112 can provide connectivity from UE 102 to external packet data networks by being a point of exit and entry of traffic for UE 102. UE 102 may have simultaneous connectivity with more than one P-GW for accessing multiple packet data networks (PDNs), and P-GW 112 may be referred to as a PDN gateway. P-GW 112 can be configured to perform policy enforcement, packet filtering, charging support, lawful interception, packet screening, etc. Another role of P-GW 112 can be to act as the anchor for mobility between 3GPP and non-3GPP technologies such as WiMAX, 3GPP2, Code Division Multiple Access (CDMA), Single-Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EvDO), etc. Service provider control node 108 can function as a serving general packet radio service support node (SGSN).
It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present disclosure. Substantial flexibility is provided by system 100 in that any suitable arrangements and configuration may be provided without departing from the teachings of the present disclosure.
For purposes of illustrating certain example techniques of system 100, it is important to understand the communications that may be traversing the network environment. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained.
3GPP is a collaboration between groups of telecommunications associations. The initial scope of 3GPP was to establish a globally applicable third-generation (3G) mobile phone system specification based on an evolved global system for mobile communications (GSM). The scope was later broadened to include the development and maintenance of GSM and related 2G and 2.5G standards, including General Packet Radio Service (GPRS), GSM Evolution (EDGE), and Universal Mobile Telecommunications Service (UMTS). The scope of 3GPP was further broadened to include related 3G standards, including High Speed Packet Access (HSPA), LTE related 4G standards (including LTE Advanced and LTE Advanced Pro), next generation and related 5G standards, and an evolved IP Multimedia Subsystem (IMS) developed in an access independent manner. The 3GPP standard encompasses RAN, services and systems aspects, and core network and terminals. The 3GPP standard caters to a large majority of telecommunications networks, and 3GPP is the standards body behind UMTS, which is the 3G upgrade of GSM.
In 3GPP networks, the reduction of network latency has been of increased interest as bandwidths have risen from Wide Band Code Division Multiple Access (WCDMA)/3G to Long-Term Evolution (LTE)/4G to NR/5G. Current attempts to address network latency have mainly been through RAN/core network (CN) specifications, such as 3G direct tunneling, which allows traffic to bypass an SGSN. A similar approach was carried into LTE with the split of the mobility management entity (MME) (control) and the S-GW/P-GW (data), and the introduction of hybrid automatic repeat request (HARQ) in high speed downlink packet access (HSDPA) for 3G allowed for a much faster reaction than using radio link control-acknowledged mode (RLC-AM). Network equipment optimizations as well as operator network deployments (e.g., tougher through-node latency requirements, backhaul improvements, improving or shortening the distance from gateways to the Internet, etc.) have also been attempted to effect reductions in network latency. For 5G networks, initiatives such as mobile edge computing (MEC) and moving servers into the RAN itself (e.g., central offices) have also been attempted to reduce network latency. However, running services virtualized in the RAN is a concern for larger content providers, which desire control of their execution environments, and such implementations raise physical access concerns. An approach is needed that allows for a reduction of network latency, especially in a 3GPP-related network, while still allowing the content providers to remain in control of their execution environments.
A system to enable a DRI, as outlined in
Radio bearers are channels offered by Layer-2 (in an OSI model) to higher layers for the transfer of either user or control data. Layer-2 provides the upper layers (e.g., Layers 3-7) transmission services by means of radio bearers and signaling radio bearers. Typically, the radio bearer is between the base station and the UE, and a server or data center is not aware of the radio bearer. System 100 can extend the radio bearer up to the data center (e.g., data center gateway 124). The radio bearer service is part of the DRI connection and allows for a link between the base station and the data center that is defined by a certain set of parameters (e.g., transport channel parameters, downlink physical channel parameters, uplink physical channel parameters, etc.) or characteristics (e.g., acknowledgment on packets received enabled or disabled, packet ordering handling enabled or disabled, buffering size, priority, error correction parameters, etc.). Whenever the UE is being provided with any service, the service is associated with a radio bearer specifying the configuration for Layer-2 and the physical layer in order to have its quality of service (QoS) clearly defined. The RAN (e.g., WCDMA RAN, LTE RAN, NR RAN, etc.) can provide radio access bearer connections between the base station and the data center with different parameters or characteristics in order to match the requirements of different radio bearers. The signaling radio bearer can be used during connection establishment to establish the radio access bearer and to deliver signaling while on the connection (e.g., to perform a handover, reconfiguration, or release, etc.).
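The per-bearer parameters and characteristics listed above can be represented as a simple profile structure. The field names and the selection helper below are illustrative assumptions for the sketch only, not a defined data model of system 100:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass(frozen=True)
class RadioBearerProfile:
    """Hypothetical per-bearer characteristics mirroring those described above."""
    bearer_id: int
    ack_enabled: bool        # acknowledgment on packets received
    ordering_enabled: bool   # packet ordering handling
    buffer_size_kb: int      # buffering size
    priority: int            # lower number = higher priority
    error_correction: str    # error correction parameters

def select_profile(profiles: Iterable[RadioBearerProfile],
                   max_buffer_kb: int) -> Optional[RadioBearerProfile]:
    """Pick the highest-priority profile whose buffering fits the given budget."""
    candidates = [p for p in profiles if p.buffer_size_kb <= max_buffer_kb]
    return min(candidates, key=lambda p: p.priority) if candidates else None
```

A latency-sensitive service might use a profile with acknowledgments and ordering disabled and a small buffer, while a bulk-transfer service might enable both and accept a larger buffer.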
Base station packet flow to the MAC layer (e.g., MAC layer 198) is similar to current implementations of base station packet flow. The DRI connection can communicate packets over a light-touch protocol (e.g., Ethernet plus virtual local area network (VLAN) tagging, with the MAC address indicating the radio bearer and the VLAN tag used for target data center domain selection). The packets are then routed to target servers for termination based on a radio bearer ID. Optional support for mobility is provided by a data center interface engine (e.g., data center interface engine 122) and a provider control node (e.g., service provider control node 108) together with currently existing nodes. Running services are not a concern for larger content providers because the data center domain is separated from the RAN/CN.
Compared to other current RAN/CN low latency implementations, the DRI connection to the data center can be at a low network level (e.g., Layer-2). Operating at this low network level reflects the increased emphasis on data (over voice) and the rapid growth of cloud networks, as well as the significant channel quality boost that 5G can bring. The DRI connection can also help enable low-latency services with sub-millisecond roundtrip requirements while leveraging the high bandwidth that 5G provides.
The DRI connection can provide a direct fast-path between the base station and the data center at the MAC layer (the lower sublayer of Layer-2, below the PDCP, BMC, and RLC sub-layers of Layer-2). In an example, the DRI connection can leverage a standard 3GPP connection for initialization and authentication. Once the connection to the data center is established, the DRI connection is opened for a dedicated radio bearer, which is communicated from a baseband processing block just above the Layer-2 MAC layer. The radio bearer data can then be communicated across the DRI connection, which uses a thin frame protocol (e.g., a low overhead encapsulation of data), into a data center gateway.
The DRI connection can be used in dense areas where high-bandwidth beamforming (or spatial filtering) 5G cells are primarily deployed and where the bulk of processing occurs in relatively few locations per area. In addition, the DRI connection can allow for a reduction in latency as well as decreasing the distance data travels between the data center domains and the base station by avoiding the service provider's RAN/CN without mixing or relocating the RAN/CN. Reducing latency can improve service quality and allow new types of services such as assistance to self-driving cars, industrial applications, IoT devices, etc. Decreasing the distance data travels between the data center and base station is a critical aspect not just to reduce latency but also to enable leveraging of high-throughput beamforming 5G cells due to notable asymmetry in air interface/backhaul behavior.
In an example, the DRI connection is based on a trusted collaboration or agreement between a content or cloud supplier (e.g., Google™, Microsoft Azure™, Amazon Web Services™, etc.) and a service provider (e.g., AT&T™, Vodafone™, etc.). The collaboration or agreement can be an extension of current collaborations or agreements between the content or cloud supplier and the service provider and can help define when the service provider will allow the DRI connection (e.g., the service provider can request specific parameters or characteristics for the DRI connection). The DRI connection can help remove many of the latency issues in current RAN/CNs. Also, the DRI connection can allow for a fast connection between UEs and selected cloud/content providers. Furthermore, the DRI connection can allow for significantly lower end-to-end latency since much larger portions of the 3GPP nodes and stack are bypassed. In addition, the DRI connection can allow the endpoints to decide themselves what characteristics/overhead the connection should have and therefore improve the DRI connection on a per-use-case basis. The DRI connection can allow for a continued separation between the service provider controlled RAN/CN and the cloud/content provider's data center, which generally is a requirement for some service providers.
By removing latency driving services (complex networking functions that control packet ordering and manage the flows, radio link control (RLC) acknowledgement/no-acknowledgement (ack/nack) handling/buffering, IPsec and air crypto, etc.) as well as the service provider's core infrastructure (e.g., most of the RAN/CN architecture including the S-GW and P-GW with associated stacks, aggregation node for split RAN, etc.) for packets on the DRI connection path, it is possible to substantially reduce the latency primarily by removing time spent in queues and buffers, handover mechanisms, raw compute cycles of nonessential functions, etc. Additionally, optional services such as encryption or ciphering, packet ordering, reliability, large packets, etc. can also be provided to the endpoints (e.g., UE 102 and data center 106) through an enablement library. These functions can be mixed and matched to allow for the endpoints to communicate with relatively minimal overhead provided by the lower layers. Mobility usage, such as handover, between baseband units belonging to the same central office could be efficiently handled as the DRI connection to the data center will be the same (same data center gateway 124 ID and same base station 104 ID). In the case of handover between nodes belonging to different central offices, a negotiation would be needed and, depending on deployment and how the data center handles its different services, a data center service migration could also be triggered.
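The mix-and-match enablement library described above can be sketched as composable wrappers applied by the endpoints themselves. The sketch assumes a simple sequence-numbered packet interface; the function names and the packet model are illustrative, not part of the disclosure:

```python
def with_ordering(handler):
    """Optional packet-ordering service: buffer out-of-order packets and
    deliver them to the wrapped handler in sequence-number order."""
    pending = {}
    state = {"next": 0}

    def ordered(seq, payload):
        pending[seq] = payload
        delivered = []
        # Flush every packet that is now in order.
        while state["next"] in pending:
            delivered.extend(handler(state["next"], pending.pop(state["next"])))
            state["next"] += 1
        return delivered

    return ordered

def deliver(seq, payload):
    """Terminal handler: hand the payload to the application."""
    return [payload]

# Endpoints that need ordering opt in; others call `deliver` directly
# and keep the minimal-overhead path.
ordered_deliver = with_ordering(deliver)
```

Other optional services (delivery notification, endpoint encryption, etc.) could be layered the same way, so each use case pays only for the functions it enables.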
Elements of
Turning to the infrastructure of
In system 100, network traffic, which is inclusive of packets, frames, signals, data, etc., can be sent and received according to any suitable communication messaging protocols. Suitable communication messaging protocols can include a multi-layered scheme such as the Open Systems Interconnection (OSI) model, or any derivations or variants thereof (e.g., TCP/IP and UDP/IP, by way of nonlimiting example). Additionally, radio signal communications over a cellular network may also be provided in system 100. Suitable interfaces and infrastructure may be provided to enable communication with the cellular network.
The term “packet,” as used herein, refers to a unit of data that can be routed between a source node and a destination node on a packet switched network. A packet includes a source network address and a destination network address. These network addresses can be IP addresses in a TCP/IP messaging protocol. The term “data,” as used herein, refers to any type of binary, numeric, voice, video, textual, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another in electronic devices and/or networks.
In an example implementation, base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 are meant to encompass network appliances, servers, routers, switches, gateways, bridges, load balancers, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. Base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 may include any suitable hardware, software, components, modules, or objects that facilitate the operations thereof, as well as suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. Each of base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 may be virtual or include virtual elements.
In regard to the internal structure associated with system 100, each of base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 can include memory elements for storing information to be used in the operations outlined herein. Each of base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 may keep information in any suitable memory element (e.g., random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), application specific integrated circuit (ASIC), etc.), software, hardware, firmware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Moreover, the information being used, tracked, sent, or received in system 100 could be provided in any database, register, queue, table, cache, control list, or other storage structure, all of which can be referenced within any suitable timeframe. Any such storage options may also be included within the broad term ‘memory element’ as used herein.
In certain example implementations, the functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an ASIC, digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.), which may be inclusive of non-transitory computer-readable media. In some of these instances, memory elements can store data used for the operations described herein. This includes the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein.
In an example implementation, elements of system 100, such as base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 may include software modules (e.g., UE interface engine 118, base station interface engine 120, data center interface engine 122, data center gateway 124, provider control engine 126, etc.) to achieve, or to foster, operations as outlined herein. These modules may be suitably combined in any appropriate manner, which may be based on particular configuration and/or provisioning needs. In example embodiments, such operations may be carried out by hardware, implemented externally to these elements, or included in some other network device to achieve the intended functionality. Furthermore, the modules can be implemented as software, hardware, firmware, or any suitable combination thereof. These elements may also include software (or reciprocating software) that can coordinate with other network elements in order to achieve the operations as outlined herein.
Additionally, each of base station 104, data center 106, S-GW 110, P-GW 112, and service provider control node 108 may include a processor (or core of a processor) that can execute software or an algorithm to perform activities as discussed herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or an ASIC that includes digital logic, software, code, or electronic instructions), or any suitable combination thereof. Any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term ‘processor.’
Turning to
After communication between UE 102 and data center 106 using the service provider's core infrastructure 200 is established, a DRI communication path 132 can be requested, and if approved, established. DRI communication path 132 does not need to be merged with the service provider's core infrastructure 200 and could be established as a separate proprietary communication path. DRI communication path 132 can provide a network path directly between base station 104 and data center 106 and once DRI communication path 132 has been established, the service provider's core infrastructure 200 and the service provider's RAN 202 can be bypassed. In a specific example, if the service provider's core infrastructure 200 is created using 3GPP, then the 3GPP architecture above the MAC layer can be bypassed. In an example, service provider's communication path 130 can remain active and DRI communication path 132 can be established on top of the service provider's core infrastructure 200 (e.g., current 3GPP-based RAN/CN architecture targeting NR/5G deployment).
Once DRI communication path 132 is established, a radio bearer is established between UE interface engine 118 and data center gateway 124. The MAC layer packets (PDUs) for the radio bearer are directed from the MAC layer directly over DRI communication path 132 to data center 106. Because a DRI connection interfaces at a very low level, it is generally unaffected by, or can easily be adapted to, changes such as split RAN.
Using a Radio Resource Control (RRC) protocol, data center interface engine 122 can be configured to identify a new radio bearer as a DRI bearer. The RRC protocol is used in UMTS and LTE on an air interface and can exist between UE 102 and base station 104 at the IP level. The RRC protocol is specified by 3GPP, and the major functions of the RRC protocol include connection establishment and release functions, broadcast of system information, radio bearer establishment, reconfiguration, and release, RRC connection mobility procedures, paging notification and release, and outer loop power control. By means of signaling functions, the RRC protocol can be used to configure the user and control planes according to the network status and allow for radio resource management strategies to be implemented by base station interface engine 120.
Base station interface engine 120 can leverage an RRC session from UE 102 and allocate a logical channel identifier (LCID) for packets being communicated on DRI communication path 132. The DRI radio bearer would then not be known outside of base station 104. The RRC session could avoid setting up protocol layers for the radio bearer, such as Packet Data Convergence Protocol (PDCP), or alternatively keep an "empty" placeholder for the radio bearer in the protocol layers. Other approaches, such as keeping the setup for the service provider's core infrastructure 200 and establishing and marking the evolved packet system (EPS) bearer ID or evolved universal terrestrial radio access network (E-UTRAN) radio access bearer (E-RAB), could also be implemented. However, those would then also be allocated in legacy 3GPP nodes.
It should be noted that DRI communication path 132 can be used as a fast path connection in an MEC deployment. Also, although DRI communication path 132 bypasses large portions of the service provider's core infrastructure 200 (e.g., the 3GPP nodes/stack), DRI communication path 132 can allow for essential functions such as authentication, data encryption, handover, charging, etc. (although not necessarily handled in the same manner as current implementations). In an example, the essential functions can be communicated over service provider's communication path 130 if service provider's communication path 130 remains active. Encryption or ciphering can be entirely pushed to the endpoints (e.g., UE 102 and data center 106) where it can be adapted to the applications as needed but can also ensure that no service provider tampers with the data. Non-essential functions such as packet ordering handling and delivery notification can also be handled by the endpoints.
Turning to
MAC data 188 can include mapping to logical channels, sequence number, packet size, information to be transferred, etc. Baseband data 190 can include coded and modulated information. Radio data 192 can include information transferred wirelessly over a wireless connection. DRI data 194 can be associated with radio bearer data as well as other information to be transferred. Ethernet data 196 can include source and destination MAC address, higher layer protocol type, and packet checksum as well as other information to be transferred.
The raw stack for the DRI connection includes only a minimal set of functionalities to transmit PDUs from the MAC layer. Base station interface engine 120 can be configured to place MAC PDUs along with metadata in an Ethernet frame or jumbo Ethernet frame, with the MAC address set to a radio bearer ID and the VLAN tag identifying data center gateway 124, and then send the frames into data center 106. It should be noted that alternative framing implementations can also be used. Data center gateway 124 can be configured for protocol conversion and an ID check between the DRI and data center transport. Data center gateway 124 can also be configured to handle optional functions such as statistics and charging, local breakout for debug, and lawful interception.
The MAC layer remains unchanged with the DRI connection. In an example, data can flow to/from UE 102, through the RLC, MAC layer, and physical layer of base station 104, and to/from data center 106. By leveraging the HARQ flow in MAC, it is possible to handle most packet retransmissions efficiently. However, higher level (e.g., RLC-acknowledged mode (AM)) functions such as retransmission and reordering are not supported in favor of lower latency and lower complexity. The endpoints can be configured to apply appropriate functions to give as robust a network service as required for specific usage.
Turning to
Destination portion 142 can include a destination MAC address for DRI data frame 140. Source portion 144 can include a source MAC address for DRI data frame 140. VLAN portion 146 can include data used either within a data center (e.g., to separate different customers), to differentiate between data centers of the same owner, or to differentiate between data centers with different owners (e.g., a data center owned by Google™, a data center owned by Amazon™, etc.). The data in VLAN portion 146 can be deployment specific and can help ensure that packets from base station 104 are routed to the correct data center. Type portion 148a can include the type of packet associated with DRI data frame 140. For example, type portion 148a illustrated in
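One possible encoding of such a frame can be sketched with standard Ethernet conventions. The exact field widths, the use of the source MAC field to carry the radio bearer ID, and the EtherType value below are assumptions for illustration only; the disclosure does not fix a published frame format:

```python
import struct

# Assumed layout: 6-byte destination MAC, 6-byte source MAC,
# 4-byte 802.1Q VLAN tag (TPID + TCI), 2-byte type, then payload.
ETH_HEADER = struct.Struct("!6s6sHHH")  # dst, src, TPID, TCI, EtherType
TPID_8021Q = 0x8100
DRI_ETHERTYPE = 0x88B5  # IEEE 802 local experimental EtherType, a placeholder

def pack_dri_frame(radio_bearer_id: int, gateway_mac: bytes,
                   vlan_id: int, payload: bytes) -> bytes:
    """Encode the radio bearer ID into a locally administered source MAC
    and tag the frame with the data-center-selecting VLAN ID."""
    src = b"\x02\x00" + radio_bearer_id.to_bytes(4, "big")
    header = ETH_HEADER.pack(gateway_mac, src, TPID_8021Q,
                             vlan_id & 0x0FFF, DRI_ETHERTYPE)
    return header + payload

def unpack_dri_frame(frame: bytes):
    """Return (radio_bearer_id, vlan_id, payload) from a packed frame."""
    dst, src, tpid, tci, etype = ETH_HEADER.unpack_from(frame)
    assert tpid == TPID_8021Q and etype == DRI_ETHERTYPE
    return int.from_bytes(src[2:], "big"), tci & 0x0FFF, frame[ETH_HEADER.size:]
```

In this sketch the gateway can route on the VLAN tag alone and recover the radio bearer ID without parsing the payload, consistent with the low-overhead encapsulation described above.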
Turning to
Type portion 148b can include the type of packet associated with DRI control frame 156. For example, type portion 148b illustrated in
Turning to
Destination portion 164 can include a destination identifier for Ethernet DRI packet 158. Source portion 166 can include a source identifier for Ethernet DRI packet 158. Type portion 168 can include a type identifier for Ethernet DRI packet 158. DRI ID portion 170 can include an identifier (e.g., a server's Ethernet address) of the source associated with Ethernet DRI packet 158. Application data portion 172 can include the payload of Ethernet DRI packet 158. A checksum portion 174 can include checksum data.
In response to receiving Ethernet DRI packet 158, data center gateway 124 can communicate a data center gateway/base station packet (e.g., DRI data frame 140) to base station 104. The data in DRI ID portion 170 can be used to propagate the data in radio bearer ID portion 150. For example, a DRI translation table 204a can be used to translate a DRI ID from DRI ID portion 170 into a radio bearer ID in radio bearer ID portion 150 in one direction and to translate a radio bearer ID from radio bearer ID portion 150 into a DRI ID in DRI ID portion 170 in the other direction.
Turning to
Ethernet header portion 176 can include an Ethernet header for IP DRI packet 160. IP header portion 178 can include an IP header for IP DRI packet 160. DRI IP 180 can include an IP address for the source of IP DRI packet 160. Server IP 182 can include an IP address for the server that will receive IP DRI packet 160. Application data portion 184 can include the payload of IP DRI packet 160. Checksum portion 186 can include checksum data.
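The address extraction described above can be sketched as follows. The offsets assume a standard 14-byte Ethernet header and a 20-byte IPv4 header without options, and a 4-byte trailing checksum; these widths are illustrative assumptions, not taken from the disclosure.

```python
import struct

def parse_ip_dri_packet(raw: bytes):
    """Extract the source (DRI) and destination (server) IPv4 addresses,
    application data, and checksum from an Ethernet-framed IP DRI packet."""
    ip_header = raw[14:34]                                   # IP header portion 178
    dri_ip = ".".join(str(b) for b in ip_header[12:16])      # source address (DRI IP 180)
    server_ip = ".".join(str(b) for b in ip_header[16:20])   # destination address (server IP 182)
    app_data = raw[34:-4]                                    # application data portion 184
    (checksum,) = struct.unpack(">I", raw[-4:])              # checksum portion 186
    return dri_ip, server_ip, app_data, checksum
```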
In response to receiving IP DRI packet 160, data center gateway 124 can communicate a data center gateway/base station packet (e.g., DRI data frame 140) to base station 104. The data in DRI IP 180 can be used to propagate the data in radio bearer ID portion 150. For example, a DRI translation table 204b can be used to translate a DRI IP from DRI IP 180 into a radio bearer ID in radio bearer ID portion 150 in one direction and to translate a radio bearer ID from radio bearer ID portion 150 into a DRI IP in DRI IP 180 in the other direction.
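The bidirectional translation performed by DRI translation tables 204a and 204b can be sketched as a pair of dictionaries. Because the keys are opaque, the same structure serves both variants: DRI IDs (table 204a) and DRI IPs (table 204b). This is an illustrative sketch, not the disclosed implementation.

```python
class DriTranslationTable:
    """Bidirectional mapping between DRI-side identifiers and radio bearer IDs.

    The data center gateway uses this to rewrite a DRI ID (or DRI IP) into a
    radio bearer ID in one direction, and a radio bearer ID back into a
    DRI ID (or DRI IP) in the other direction.
    """

    def __init__(self):
        self._to_bearer = {}  # DRI ID or DRI IP -> radio bearer ID
        self._to_dri = {}     # radio bearer ID -> DRI ID or DRI IP

    def add(self, dri_id, bearer_id):
        # Register both directions of the mapping at once.
        self._to_bearer[dri_id] = bearer_id
        self._to_dri[bearer_id] = dri_id

    def to_bearer_id(self, dri_id):
        return self._to_bearer[dri_id]

    def to_dri_id(self, bearer_id):
        return self._to_dri[bearer_id]
```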
Turning to
Turning to
Note that with the examples provided herein, interaction may be described in terms of two, three, or more network elements. However, these embodiments are for purposes of clarity and example only, and are not intended to be limiting. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that system 100 and its teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of system 100 as potentially applied to a myriad of other architectures.
It is also important to note that the operations in the preceding flow diagrams (i.e.,
Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. Moreover, certain components may be combined, separated, eliminated, or added based on particular needs and implementations. Additionally, although system 100 has been illustrated with reference to particular elements and operations that facilitate the communication process, these elements and operations may be replaced by any suitable architecture, protocols, and/or processes that achieve the intended functionality of system 100.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.
Example C1 is at least one machine readable storage medium having one or more instructions that when executed by at least one processor, cause the at least one processor to establish a communication path using a service provider's core infrastructure between a base station and a data center, where the service provider's core infrastructure includes one or more servicing gateways and one or more packet data network gateways, request a direct radio interface path be established as a new communication path between the base station and the data center, and establish the direct radio interface path between the base station and the data center, where the direct radio interface path bypasses the service provider's core infrastructure.
In Example C2, the subject matter of Example C1 can optionally include where the service provider's core infrastructure is part of a 3rd Generation Partnership Project (3GPP) network.
In Example C3, the subject matter of any one of Examples C1-C2 can optionally include where the bypassed service provider's core infrastructure includes 3GPP architecture above a Layer 2 media access control sub-layer.
In Example C4, the subject matter of any one of Examples C1-C3 can optionally include where the direct radio interface path to the data center is at a Layer 2 network level.
In Example C5, the subject matter of any one of Examples C1-C4 can optionally include where a media access control layer remains unchanged when the direct radio interface path is established.
In Example C6, the subject matter of any one of Examples C1-C5 can optionally include where the communication path using the service provider's core infrastructure is maintained when the direct radio interface path is established.
In Example C7, the subject matter of any one of Examples C1-C6 can optionally include where the request that the direct radio interface path be established is communicated to a service provider control node associated with the service provider.
In Example C8, the subject matter of any one of Examples C1-C7 can optionally include where a user equipment associated with the base station requests the direct radio interface path be established and the request is communicated to a service provider control node from the base station.
In Example A1, a server in a data center can include memory, a data center interface engine, and at least one processor. The data center interface engine is configured to cause the at least one processor to communicate on a communication path using a service provider's core infrastructure between a base station and the data center, where the service provider's core infrastructure includes one or more servicing gateways and one or more packet data network gateways, request a direct radio interface path be established as a new communication path between the base station and the data center, and establish the direct radio interface path between the base station and the data center, where the direct radio interface path bypasses the service provider's core infrastructure.
In Example A2, the subject matter of Example A1 can optionally include where the service provider's core infrastructure is part of a 3rd Generation Partnership Project (3GPP) network.
In Example A3, the subject matter of any one of Examples A1-A2 can optionally include where the bypassed service provider's core infrastructure includes 3GPP architecture above a Layer 2 media access control sub-layer.
In Example A4, the subject matter of any one of Examples A1-A3 can optionally include where the direct radio interface path to the data center is at a Layer 2 network level.
In Example A5, the subject matter of any one of Examples A1-A4 can optionally include where a media access control layer remains unchanged when the direct radio interface path is established.
Example M1 is a method including establishing a communication path using a service provider's core infrastructure between a base station and at least one server, where the service provider's core infrastructure includes one or more servicing gateways and one or more packet data network gateways, requesting a direct radio interface path be established as a new communication path between the base station and the at least one server, and establishing the direct radio interface path between the base station and the at least one server, where the direct radio interface path bypasses the service provider's core infrastructure.
In Example M2, the subject matter of Example M1 can optionally include where the service provider's core infrastructure is part of a 3rd Generation Partnership Project (3GPP) network.
In Example M3, the subject matter of any one of the Examples M1-M2 can optionally include where the bypassed service provider's core infrastructure includes 3GPP architecture above a Layer 2 media access control sub-layer.
In Example M4, the subject matter of any one of the Examples M1-M3 can optionally include where the direct radio interface path to the at least one server is at a Layer 2 network level.
In Example M5, the subject matter of any one of the Examples M1-M4 can optionally include where a media access control layer remains unchanged when the direct radio interface path is established.
In Example M6, the subject matter of any one of Examples M1-M5 can optionally include where the communication path using the service provider's core infrastructure is maintained when the direct radio interface path is established.
Example S1 is a system for establishing a direct radio interface connection. The system can include memory, one or more processors, means for establishing a communication path using a service provider's core infrastructure between a base station and at least one server, where the service provider's core infrastructure includes one or more servicing gateways and one or more packet data network gateways, means for requesting a direct radio interface path be established as a new communication path between the base station and the at least one server, and means for establishing the direct radio interface path between the base station and the at least one server, where the direct radio interface path bypasses the service provider's core infrastructure.
In Example S2, the subject matter of Example S1 can optionally include where the service provider's core infrastructure is part of a 3rd Generation Partnership Project (3GPP) network.
In Example S3, the subject matter of any one of the Examples S1-S2 can optionally include where the direct radio interface path to the at least one server is at a Layer 2 network level.
In Example S4, the subject matter of any one of the Examples S1-S3 can optionally include where a media access control layer remains unchanged when the direct radio interface path is established.
In Example S5, the subject matter of any one of the Examples S1-S4 can optionally include where the communication path using the service provider's core infrastructure is maintained when the direct radio interface path is established.
In Example S6, the subject matter of any one of the Examples S1-S5 can optionally include where the request that the direct radio interface path be established is communicated to a service provider control node associated with the service provider.
Example AA1 is an apparatus including means for establishing a communication path using a service provider's core infrastructure between a base station and a data center, where the service provider's core infrastructure includes one or more servicing gateways and one or more packet data network gateways, means for requesting a direct radio interface path be established as a new communication path between the base station and the data center, and means for establishing the direct radio interface path between the base station and the data center, where the direct radio interface path bypasses the service provider's core infrastructure.
In Example AA2, the subject matter of Example AA1 can optionally include where the service provider's core infrastructure is part of a 3rd Generation Partnership Project (3GPP) network.
In Example AA3, the subject matter of any one of Examples AA1-AA2 can optionally include where the bypassed service provider's core infrastructure includes 3GPP architecture above a Layer 2 media access control sub-layer.
In Example AA4, the subject matter of any one of Examples AA1-AA3 can optionally include where the direct radio interface path to the data center is at a Layer 2 network level.
In Example AA5, the subject matter of any one of Examples AA1-AA4 can optionally include where a media access control layer remains unchanged when the direct radio interface path is established.
In Example AA6, the subject matter of any one of Examples AA1-AA5 can optionally include where the communication path using the service provider's core infrastructure is maintained when the direct radio interface path is established.
In Example AA7, the subject matter of any one of Examples AA1-AA6 can optionally include where the request that the direct radio interface path be established is communicated to a service provider control node associated with the service provider.
In Example AA8, the subject matter of any one of Examples AA1-AA7 can optionally include where a user equipment associated with the base station requests the direct radio interface path be established and the request is communicated to a service provider control node from the base station.
In Example AA9, the subject matter of any one of Examples AA1-AA8 can optionally include where the data center includes at least one server.
Example X1 is a machine-readable storage medium including machine-readable instructions to implement a method or realize an apparatus as in any one of the Examples A1-A5, AA1-AA9, or M1-M6. Example Y1 is an apparatus comprising means for performing any of the Example methods M1-M6. In Example Y2, the subject matter of Example Y1 can optionally include the means for performing the method comprising a processor and a memory. In Example Y3, the subject matter of Example Y2 can optionally include the memory comprising machine-readable instructions.