This disclosure generally relates to mobile communication networks, such as fifth-generation (5G) communication networks, and more particularly to seamless split rendering session relocation between split rendering servers supporting extended reality applications in mobile communication networks.
A mobile communication network can include hardware that enables communications between user terminals and servers, such as edge servers. A communication network can include access networks, a core network, and one or more communication devices. A communication network may establish sessions for communicating data, for example, data carrying voice, electronic mail (email), short message service (SMS) messages, multimedia and/or content data, and so on. A communication system may provide services to user terminals. Non-limiting examples of such services include two-way or multi-way calls, data communication or multimedia services, and access to a data network, such as the Internet. In a wireless communication network, communications between user terminals and base stations of an access network occur over a wireless link.
A communication device is often referred to as a user equipment (UE) or a user device. A communication device is provided with an appropriate signal receiving and transmitting apparatus for enabling communications, for example with a communication network or directly with other communication devices. The communication device may access a carrier provided by a base station or access point of an access network of a communication network and transmit and/or receive communications on the carrier.
A communication network and user terminals typically operate in accordance with given standards or standards specifications, such as those provided by 3GPP (Third Generation Partnership Project) or ETSI (European Telecommunications Standards Institute).
Methods, apparatuses, and computer program products are provided for facilitating seamless split rendering session relocation when a quality of a rendered viewport for extended reality (XR) media content from a current split rendering server becomes degraded. A new split rendering server can be provisioned in parallel with the current split rendering server, and a rendered viewport negotiation or comparison can be carried out. A split rendering client can provide pose information and sensor data for viewport and/or field of view prediction to both the current split rendering server and the new split rendering server during a transition period. The current split rendering server can continue during this transition period to provide rendered viewport-specific XR media content to the split rendering client while an application function or session management function determines when the split rendering session should be relocated to the newly provisioned split rendering server. In this context, methods, systems, and computer program products supporting seamless relocation of XR split rendering services are presented.
According to some aspects, a method for seamless relocation of extended reality (XR) and/or similar media content rendering services from a source server to a target server can be carried out. A communication network can comprise a source split rendering server, a target split rendering server, and one or more computing devices configured to carry out methods for relocating XR and/or similar media content rendering services from the source split rendering server to the target split rendering server. A computer program product can comprise a non-transitory computer-readable storage medium that stores instructions for carrying out methods for relocating XR and/or similar media content rendering services from the source split rendering server to the target split rendering server.
According to a first aspect, a method can comprise, during a time period, using a real-time communication (RTC) application function (AF) hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), a network slice selection function (NSSF), an authentication server function (AUSF), a unified data management (UDM) function, a network repository function (NRF), a user plane function (UPF), and/or the like, monitoring a split rendering session at a current split rendering server to detect recurring degradations in one or more qualities, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client (SRC), wherein the one or more qualities comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a quality of the XR media content stream, and wherein the split rendering session is managed by the RTC-AF; determining, based on a duration of the recurring degradations in the one or more qualities exceeding a pre-determined threshold during said time period, that the split rendering session needs to be relocated from the current split rendering server; and initiating relocation of at least a portion of the split rendering session for the split rendering client from the current split rendering server to a new split rendering server.
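By way of a non-limiting illustration of the duration-based relocation decision described above, the following Python sketch shows one way the RTC-AF logic could accumulate recurring degradation episodes within a monitoring time period and compare the accumulated degraded time against a pre-determined threshold. All names, the window length, and the threshold value are hypothetical and are not drawn from any 3GPP-defined interface:

    import time

    class RelocationMonitor:
        """Accumulates time spent degraded within a sliding monitoring window."""

        def __init__(self, window_s: float = 30.0, threshold_s: float = 5.0):
            self.window_s = window_s          # the monitored time period
            self.threshold_s = threshold_s    # pre-determined duration threshold
            self.episodes = []                # (start, end) of closed degradation episodes
            self.current_start = None         # start of an ongoing degradation episode

        def observe(self, degraded: bool) -> bool:
            """Feed one sample; return True when relocation should be initiated."""
            now = time.monotonic()
            if degraded and self.current_start is None:
                self.current_start = now
            elif not degraded and self.current_start is not None:
                self.episodes.append((self.current_start, now))
                self.current_start = None
            # Keep only episodes that overlap the monitoring window.
            cutoff = now - self.window_s
            self.episodes = [(s, e) for (s, e) in self.episodes if e >= cutoff]
            total = sum(e - max(s, cutoff) for (s, e) in self.episodes)
            if self.current_start is not None:
                total += now - max(self.current_start, cutoff)
            return total > self.threshold_s

A caller would feed observe() with the outcome of comparing each sampled QoS metric, QoE metric, server KPI, or media-stream quality value against its own metric threshold, and initiate relocation to the new split rendering server when observe() returns True.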
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the method further comprises: establishing a new WebRTC session, in an application layer of the communication network, between the new split rendering server and the RTC-AF. In some embodiments, the method further comprises: indicating, to the split rendering client, that a relocation procedure has been initiated. In some embodiments, the method further comprises: in the indication to the split rendering client that a relocation procedure has been initiated, including a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the method further comprises: receiving, at the RTC-AF, first media content for a first viewport from the current split rendering server and second media content for the first viewport from the new split rendering server. In some embodiments, the method further comprises: indicating, to an application provider (AP) in the communication network, that the split rendering session needs to be relocated. In some embodiments, the method further comprises: requesting that the new split rendering server or another server in the communication network run at least a portion of provisioning of the split rendering services for the split rendering session. In some embodiments, the method further comprises: determining whether the AP has caused the new split rendering server to run at least the portion of provisioning of the split rendering session; and, in an instance in which said determining is in the affirmative, providing, from the RTC-AF, to the AP, an indication that the current split rendering server is to be un-provisioned. In some embodiments, the method further comprises: determining one or more characteristics of the current and new split rendering servers; determining, based at least on the one or more characteristics of the current and new split rendering servers, a timeline for completing relocation of the split rendering session from the current split rendering server; and signaling the relocation timeline to the current split rendering server, the new split rendering server, and the split rendering client. In some embodiments, signaling the relocation timeline comprises indicating a relocation time relative to an extended reality runtime or a relocation frame identifier indicating a particular frame from the XR media content stream at or by which the relocation procedure is carried out.
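The relocation timeline signaled to the current server, the new server, and the client can thus name either a time relative to the XR runtime clock or a specific frame identifier. A minimal sketch of such a payload is given below; the field names are illustrative assumptions rather than standardized parameters:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RelocationTimeline:
        """Hypothetical relocation-timeline payload (illustrative field names)."""
        relocation_time_ms: Optional[int] = None   # time relative to the XR runtime clock
        relocation_frame_id: Optional[int] = None  # frame at or by which relocation completes

        def __post_init__(self):
            # Exactly one of the two alternatives described above must be present.
            if (self.relocation_time_ms is None) == (self.relocation_frame_id is None):
                raise ValueError("set exactly one of relocation_time_ms or relocation_frame_id")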
According to a second aspect, an apparatus can be provided that comprises at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus to perform at least: during a time period, causing a real-time communication (RTC) application function (AF) in a communication network to monitor a split rendering session at a current split rendering server to detect recurring degradations in one or more qualities, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client (SRC), wherein the one or more qualities comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a quality of the XR media content stream, and wherein the split rendering session is managed by the RTC-AF; determining, based on a duration of the recurring degradations in the one or more qualities exceeding a pre-determined threshold during said time period, that the split rendering session needs to be relocated from the current split rendering server; and initiating relocation of at least a portion of the split rendering session for the split rendering client from the current split rendering server to a new split rendering server. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), an NSSF, an AUSF, a UDM function, an NRF, a UPF, and/or the like.
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: establishing a new WebRTC session, in an application layer of the communication network, between a new split rendering server and the RTC-AF. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: indicating, to the split rendering client and/or UE, that a relocation procedure has been initiated. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: in the indication to the split rendering client and/or UE that a relocation procedure has been initiated, including a request for the split rendering client and/or UE to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: receiving, at the RTC-AF in the DN, first media content for a first viewport from the current split rendering server and second media content for the first viewport from the new split rendering server. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: indicating, to an application provider (AP) in the communication network, that the split rendering session needs to be relocated. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: requesting that the new split rendering server or another server in the communication network run at least a portion of provisioning of the split rendering services for the split rendering session. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: determining whether the AP has caused the new split rendering server to run at least the portion of provisioning of the split rendering session; and, in an instance in which said determining is in the affirmative, providing, from the RTC-AF, to the AP, an indication that the current split rendering server is to be un-provisioned. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: determining one or more characteristics of the current and new split rendering servers; determining, based at least on the one or more characteristics of the current and new split rendering servers, a timeline for completing relocation of the split rendering session from the current split rendering server; and signaling the relocation timeline to the current split rendering server, the new split rendering server, and the split rendering client. 
In some embodiments, signaling the relocation timeline comprises indicating a relocation time relative to an extended reality runtime or a relocation frame identifier indicating a particular frame from the XR media content stream at or by which the relocation procedure is carried out.
According to a third aspect, a computer program product can comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause an apparatus to perform at least: during a time period, using a real-time communication (RTC) application function (AF) hosted in a session management function (SMF) or a data network (DN) in a communication network, monitoring a split rendering session at a current split rendering server to detect recurring degradations in one or more qualities, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client (SRC), wherein the one or more qualities comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a quality of the XR media content stream, and wherein the split rendering session is managed by the RTC-AF; determining, based on a duration of the recurring degradations in the one or more qualities exceeding a pre-determined threshold during said time period, that the split rendering session needs to be relocated from the current split rendering server; and initiating relocation of at least a portion of the split rendering session for the split rendering client from the current split rendering server to a new split rendering server. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), an NSSF, an AUSF, a UDM function, an NRF, a UPF, and/or the like.
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: establishing a new WebRTC session, in an application layer of the communication network, between a new split rendering server and the RTC-AF. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: indicating, to the split rendering client and/or UE, that a relocation procedure has been initiated. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: in the indication to the split rendering client that a relocation procedure has been initiated, including a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: receiving, at the RTC-AF, first media content for a first viewport from the current split rendering server and second media content for the first viewport from the new split rendering server. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: indicating, to an application provider (AP) in the communication network, that the split rendering session needs to be relocated. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: requesting that the new split rendering server or another server in the communication network run at least a portion of provisioning of the split rendering services for the split rendering session. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: determining whether the AP has caused the new split rendering server to run at least the portion of provisioning of the split rendering services for the split rendering session; and, in an instance in which said determining is in the affirmative, providing, from the RTC-AF, to the AP, an indication that the split rendering session is to be un-provisioned. In some embodiments, the instructions, when executed by the at least one processor, further cause the apparatus to perform at least: determining one or more characteristics of the current and new split rendering servers; determining, based at least on the one or more characteristics of the current and new split rendering servers, a timeline for completing relocation of the split rendering session from the current split rendering server; and signaling the relocation timeline to the current split rendering server, the new split rendering server, and the split rendering client. In some embodiments, signaling the relocation timeline comprises indicating a relocation time relative to an extended reality runtime or a relocation frame identifier indicating a particular frame from the XR media content stream at or by which the relocation procedure is carried out.
According to a fourth aspect, a method can be carried out that comprises: monitoring, during a time period, using a network node configured as a real-time communication (RTC) application function (AF) in a core network (CN) or a data network (DN) in a communication network, one or more metrics associated with a split rendering session, the split rendering session being generated by a current split rendering server, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server, wherein the one or more metrics comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a metric associated with the XR media content stream; determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds, to relocate the split rendering session from the current split rendering server to a new split rendering server; and providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), an NSSF, an AUSF, a UDM function, an NRF, a UPF, and/or the like.
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the method can further comprise: managing, using the RTC-AF, a new WebRTC session established between the split rendering client and the new split rendering server. In some embodiments, the method can further comprise: providing, to the split rendering client, an indication that a relocation procedure has been initiated. In some embodiments, the method can further comprise: in the indication to the split rendering client that a relocation procedure has been initiated, including a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the method can further comprise: sending, from the RTC-AF, to one of the current split rendering server or the new split rendering server, a request to send, to the split rendering client, an indication that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the method can further comprise: receiving, at the RTC-AF, from the current split rendering server, first XR media content for a first viewport associated with the XR media content stream; receiving, at the RTC-AF, from the new split rendering server, second XR media content for the first viewport associated with the XR media content stream; and determining, based on the first XR media content and the second XR media content, to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the method can further comprise: indicating, to an application provider (AP) in the communication network, that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the method can further comprise: requesting that the new split rendering server or another server in the communication network run at least a portion of provisioning of the split rendering session. In some embodiments, the method can further comprise: determining that the AP has caused the new split rendering server to run at least the portion of provisioning of the split rendering services for the split rendering session; and providing, from the RTC-AF, to the AP, an indication that the split rendering session is to be un-provisioned. In some embodiments, the method can further comprise: determining one or more characteristics of the current and new split rendering servers; determining, based at least on the one or more characteristics of the current and new split rendering servers, a time for completion of relocation of the split rendering session from the current split rendering server to the new split rendering server; and sending, to the current split rendering server and the new split rendering server, an indication that the new split rendering server is to relocate the split rendering session from the current split rendering server to the new split rendering server by the time for completion of relocation of the split rendering session. 
In some embodiments, the sending of the indication comprises sending an indication of the time for completion of relocation of the split rendering session relative to an XR runtime or relative to a relocation frame identifier indicating a particular frame from a sequence of frames in the XR media content stream by which time the relocation of the split rendering session is to be completed.
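For the rendered viewport comparison described in the embodiments above, the RTC-AF receives two renderings of the same viewport and must judge which server is producing better output. The sketch below scores each rendering against a reference frame using peak signal-to-noise ratio (PSNR); the availability of a reference frame and the decision margin are assumptions made purely for illustration, not a standardized comparison procedure:

    import numpy as np

    def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
        """PSNR in dB between two 8-bit frames of identical shape."""
        mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    def prefer_new_server(first: np.ndarray, second: np.ndarray,
                          reference: np.ndarray, margin_db: float = 1.0) -> bool:
        """Relocate only if the new server's rendering of the first viewport
        scores at least `margin_db` better than the current server's rendering."""
        return psnr(reference, second) >= psnr(reference, first) + margin_db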
According to a fifth aspect, an apparatus can be provided that is configured to operate in a core network (CN) or a data network (DN) of a communication network. The apparatus can be configured to operate as a real-time communication (RTC) application function (AF). The apparatus can comprise: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus to perform at least: monitoring, during a time period, one or more metrics associated with a split rendering session, the split rendering session being generated by a current split rendering server, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server, wherein the one or more metrics comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a metric associated with the XR media content stream; determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds, to relocate the split rendering session from the current split rendering server to a new split rendering server; and providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), an NSSF, an AUSF, a UDM function, an NRF, a UPF, and/or the like.
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: requesting establishment of a new WebRTC session between the new split rendering server and the RTC-AF. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: providing, to the split rendering client, an indication that a relocation procedure has been initiated. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: in the indication to the split rendering client that a relocation procedure has been initiated, including a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: sending, to one of the current split rendering server or the new split rendering server, a request to send, to the split rendering client, an indication that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: receiving, from the current split rendering server, first XR media content for a first viewport associated with the XR media content stream; receiving, from the new split rendering server, second XR media content for the first viewport associated with the XR media content stream; and determining, based on the first XR media content and the second XR media content, to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: indicating, to an application provider (AP) in the communication network, that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: requesting that an edge server in the communication network run at least a portion of provisioning of the split rendering session. In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: determining that the AP has caused the edge server to run at least the portion of provisioning of the split rendering services for the split rendering session; and providing, to the AP, an indication that the split rendering session is to be un-provisioned. 
In some embodiments, the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: determining one or more characteristics of the current and new split rendering servers; determining, based at least on the one or more characteristics of the current and new split rendering servers, a time for completion of relocation of the split rendering session from the current split rendering server to the new split rendering server; and sending, to the current split rendering server and the new split rendering server, an indication that the new split rendering server is to relocate the split rendering session from the current split rendering server to the new split rendering server by the time for completion of relocation of the split rendering session. In some embodiments, the sending of the indication comprises sending an indication of the time for completion of relocation of the split rendering session relative to an XR runtime or relative to a relocation frame identifier indicating a particular frame from a sequence of frames in the XR media content stream by which time the relocation of the split rendering session is to be completed.
According to a sixth aspect, a computer program product can be provided that comprises a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause an apparatus to perform at least: monitoring, during a time period, using a network node configured as a real-time communication (RTC) application function (AF) in a core network (CN) or a data network (DN) in a communication network, one or more metrics associated with a split rendering session, the split rendering session being generated by a current split rendering server, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server, wherein the one or more metrics comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a metric associated with the XR media content stream; determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds, to relocate the split rendering session from the current split rendering server to a new split rendering server; and providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), an NSSF, an AUSF, a UDM function, an NRF, a UPF, and/or the like.
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the method can further comprise: requesting, using the RTC-AF, establishment of a new WebRTC session between the new split rendering server and the RTC-AF. In some embodiments, the method can further comprise: providing, to the split rendering client, an indication that a relocation procedure has been initiated. In some embodiments, the method can further comprise: in the indication to the split rendering client that a relocation procedure has been initiated, including a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the method can further comprise: sending, from the RTC-AF, to one of the current split rendering server or the new split rendering server, a request to send, to the split rendering client, an indication that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the method can further comprise: receiving, at the RTC-AF, from the current split rendering server, first XR media content for a first viewport associated with the XR media content stream; receiving, at the RTC-AF, from the new split rendering server, second XR media content for the first viewport associated with the XR media content stream; and determining, based on the first XR media content and the second XR media content, to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the method can further comprise: indicating, to an application provider (AP) in the communication network, that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the method can further comprise: requesting that an edge server in the communication network run at least a portion of provisioning of the split rendering session. In some embodiments, the method can further comprise: determining that the AP has caused the edge server to run at least the portion of provisioning of the split rendering services for the split rendering session; and providing, from the RTC-AF, to the AP, an indication that the split rendering session is to be un-provisioned. In some embodiments, the method can further comprise: determining one or more characteristics of the current and new split rendering servers; determining, based at least on the one or more characteristics of the current and new split rendering servers, a time for completion of relocation of the split rendering session from the current split rendering server to the new split rendering server; and sending, to the current split rendering server and the new split rendering server, an indication that the new split rendering server is to relocate the split rendering session from the current split rendering server to the new split rendering server by the time for completion of relocation of the split rendering session.
In some embodiments, the sending of the indication comprises sending an indication of the time for completion of relocation of the split rendering session relative to an XR runtime or relative to a relocation frame identifier indicating a particular frame from a sequence of frames in the XR media content stream by which time the relocation of the split rendering session is to be completed.
According to a seventh aspect, an apparatus can be provided that comprises means, such as at least one processor, and/or means, such as at least one memory storing instructions, such as computer-readable instructions and/or instructions embodied as computer program code(s). In some embodiments, the apparatus can comprise means such as a real-time communication (RTC) application function (AF) in a core network (CN) or data network (DN) of a communication network, such as a fifth-generation (5G) communication network. In some embodiments, the apparatus can comprise means for monitoring, during a time period, one or more metrics associated with a split rendering session, the split rendering session being generated by a current split rendering server, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server, wherein the one or more metrics comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a metric associated with the XR media content stream. In some embodiments, provisioning of split rendering servers for generating the split rendering session is managed by the RTC-AF. In some embodiments, the apparatus can comprise means for determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds, to relocate the split rendering session from the current split rendering server to a new split rendering server. In some embodiments, the apparatus can comprise means for providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), an NSSF, an AUSF, a UDM function, an NRF, a UPF, and/or the like.
In some embodiments, the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter. In some embodiments, the apparatus can further comprise: means for requesting establishment of a new WebRTC session between the new split rendering server and the apparatus. In some embodiments, the apparatus can further comprise: means for providing, to the split rendering client, an indication that a relocation procedure has been initiated. In some embodiments, the apparatus can further comprise: means for including, in the indication to the split rendering client that a relocation procedure has been initiated, a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server. In some embodiments, the apparatus can further comprise: means for sending, to one of the current split rendering server or the new split rendering server, a request to send, to the split rendering client, an indication that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the apparatus can further comprise: means for receiving, from the current split rendering server, first XR media content for a first viewport associated with the XR media content stream; means for receiving, from the new split rendering server, second XR media content for the first viewport associated with the XR media content stream; and means for determining, based on the first XR media content and the second XR media content, to relocate the split rendering session from the current split rendering server to the new split rendering server. In some embodiments, the apparatus can further comprise: means for indicating, to an application provider (AP) in the communication network, that the split rendering session will be relocated from the current split rendering server to the new split rendering server. In some embodiments, the apparatus can further comprise: means for requesting that an edge server in the communication network run at least a portion of provisioning of the split rendering session. In some embodiments, the apparatus can further comprise: means for determining that the AP has caused the edge server to run at least the portion of provisioning of the split rendering services for the split rendering session; and means for providing, to the AP, an indication that the split rendering session is to be un-provisioned. In some embodiments, the apparatus can further comprise: means for determining one or more characteristics of the current and new split rendering servers; means for determining, based at least on the one or more characteristics of the current and new split rendering servers, a time for completion of relocation of the split rendering session from the current split rendering server to the new split rendering server; and means for sending, to the current split rendering server and the new split rendering server, an indication that the new split rendering server is to relocate the split rendering session from the current split rendering server to the new split rendering server by the time for completion of relocation of the split rendering session.
In some embodiments, the sending of the indication comprises sending an indication of the time for completion of relocation of the split rendering session relative to an XR runtime or relative to a relocation frame identifier indicating a particular frame from a sequence of frames in the XR media content stream by which time the relocation of the split rendering session is to be completed.
Having thus described embodiments of the disclosure in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
As used in the present disclosure and in technical specifications incorporated herein by reference in their entireties, the following symbols and abbreviations are defined as follows:
Mobile telecommunication services rely on a continuous connection to a communication network to work properly and to take advantage of the full service. To ensure the continuous connection, a communication device consuming a service of the communication network monitors a connection quality and regularly transmits, to a base station of the communication network, measurement reports comprising measurements related to the quality of the connection (e.g., the quality of a connection with a base station of a radio access network of the communication network). In the event of a drop in quality of the connection, e.g., due to movement of the communication device out of a coverage area of a base station, the communication network supports a handover of the communication device to another base station to maintain the quality of the connection (e.g., the radio connection) with the communication network.
For new generations of mobile communication networks, such as 5G networks under standardization by the 3rd Generation Partnership Project (3GPP), virtual reality applications executed by mobile stations are to be supported, as specified, e.g., by 3GPP TR 26.928, versions 16.1.0, 17.0.0, and 18.0.0, the entire disclosures of which are hereby incorporated herein by reference in their entireties for all purposes. Among these applications, extended reality (XR) and cloud gaming are some of the most important 5G media applications under consideration in the industry. XR is an umbrella term for different types of realities and refers to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. Application domains of XR applications include entertainment, healthcare, education, etc.
Third Generation Partnership Project (3GPP) Technical Specification (TS) 23.558, the entire disclosure of which is hereby incorporated herein by reference in its entirety for all purposes, defines architectural solutions for enabling Edge applications. The term 'Edge application' refers to user applications which are supported and hosted by server elements of particular networks, which are part of the overall communication network and referred to as an Edge network. Edge networks, also known as Edge Data Networks (EDN), are typically logically located between a core network (CN) of the communication network, a Cloud Data Network, and UEs. In some embodiments, an EDN may contain Edge Application Server(s) (EAS) and/or an Edge Enabler Server (EES). In some embodiments, operation of the EES may be supported by an Edge Configuration Server (ECS). The EDN that includes EAS(s) and/or EES(s) may also include EDGE-X interfaces to communicate with the servers of the network. The UE may contain an application client, which is responsible for the proper execution of an MR application, AR application, VR application, and/or XR application, and an Edge Enabler Client (EEC) as an interface to communicate with EDN network servers. More details about the EDN architecture and operation are disclosed in 3GPP TS 23.558, version 18.3.0, the entire disclosure of which is hereby incorporated herein by reference in its entirety for all purposes.
The EAS is the application server resident in the Edge Data Network, performing the server functions, and is connected to the 3GPP core via a Radio Access Network (RAN). The EES facilitates context transfer between EES(s) and EAS(s) and interacts with the 3GPP core either directly (e.g., via the PCF) or indirectly (e.g., via the SCEF, the NEF, and/or the SCEF+NEF). Additionally, the EES provides configuration information to the Edge Enabler Client, enabling exchange of application data traffic with the EAS. The ECS provides supporting functions needed for the Edge Enabler Client to connect with an EES. Additionally, the ECS supports the EES in identifying other EESs (and EASs) in case of application relocations. Furthermore, EDGE-9 denotes the connection to another EDN; in case of a relocation from one EAS to another, EDGE-9 is the connection from a source EES to a target EES.
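Purely as an illustrative model of the relationships just described (not an implementation of any EDGE reference point), the following sketch represents an EAS context moving from a source EES to a target EES, as would occur over EDGE-9 during an application relocation:

    from dataclasses import dataclass, field

    @dataclass
    class EAS:
        name: str   # the Edge Application Server hosting the user application

    @dataclass
    class EES:
        name: str
        registered_eas: list = field(default_factory=list)

    def relocate_context(source: EES, target: EES, eas_name: str) -> EAS:
        """Toy EDGE-9-style context transfer from a source EES to a target EES."""
        eas = next(e for e in source.registered_eas if e.name == eas_name)
        source.registered_eas.remove(eas)
        target.registered_eas.append(eas)   # context now served from the target EDN
        return eas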
Time-critical communications is a concept in communication networks and protocols, such as in 5G, 6G, and other technologies, for enabling services with reliable low latency requirements such as XR, which encompasses various immersive technologies, such as Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR). In VR, users are totally immersed in a simulated digital environment or a digital replica of reality. MR encompasses all forms of technology that blend virtual elements with real-world surroundings. AR overlays digital information onto real-life images observed through a device.
XR connectivity requirements depend on the level of split architecture and the targeted Quality of Experience (QoE), leading to a wide range of bit rates and strict latency requirements. To ensure XR applications work well over 5G networks, low latency and high quality are important. When the XR device is not powerful enough to achieve performance indicators like the required data rate and latency, some processing tasks are offloaded to the cloud (e.g., a cloud edge server). This process is called split rendering. Split rendering systems may divide rendering of XR content, such as VR or AR content, between a server and a client. XR systems may implement split rendering in which the workload is split between multiple components, such as an XR device and a Split Rendering Server (SRS). Most frequently, computation-intensive tasks like rendering are executed on the SRS. The SRS can be a cloud render server or an edge render server. The XR device can communicate with the SRS through the network, which can use a 3GPP Radio Access Technology (RAT) and/or a non-3GPP RAT.
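The division of labor described above reduces, on the client side, to a simple loop: send the latest pose (and user input) uplink, and receive the viewport rendered by the SRS downlink. The sketch below shows one such iteration over a plain TCP socket with length-prefixed framing; a real deployment would instead use the WebRTC-based session described elsewhere herein, so the transport and field names are assumptions for illustration only:

    import json
    import socket
    import struct

    def split_rendering_step(sock: socket.socket, pose: dict) -> bytes:
        """One iteration of a toy split rendering loop (client side)."""
        payload = json.dumps(pose).encode()
        sock.sendall(struct.pack("!I", len(payload)) + payload)  # uplink: pose + input
        (frame_len,) = struct.unpack("!I", _recv_exact(sock, 4))
        return _recv_exact(sock, frame_len)                      # downlink: rendered frame

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("split rendering server closed the connection")
            buf += chunk
        return buf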
In XR, VR, AR, and MR viewing environments, visual media content is generated for display at the user device, which can include a heads-up display, a head-mounted display (HMD), or the like. The user device and/or display device (also referred to herein as a 'split rendering client') can divide the 360 degrees of viewing perspective into sections or portions referred to as viewports, and the visual media content can be generated for one or more viewports so that the rendered visual media content corresponds to a field of view (FOV) of the eyes of a user viewing the media content, inclusive of a stereoscopic offset to simulate natural parallax. As used herein, the term 'viewport' refers to a portion of the XR content to be rendered for a user based on pose information associated with the user and the user's associated field of view (FOV) within the full virtual scene, of which only the portion designated as the viewport during an associated time is being rendered.
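As a simplified, monoscopic illustration of how a viewport can be designated from pose information and a field of view (stereoscopic offset omitted), the angular bounds of the viewport within the full 360-degree scene could be computed as follows; the default FOV values are arbitrary assumptions:

    def viewport_bounds(yaw_deg: float, pitch_deg: float,
                        h_fov_deg: float = 90.0, v_fov_deg: float = 90.0):
        """Angular bounds (yaw_min, yaw_max, pitch_min, pitch_max) of the viewport.

        Yaw values are wrapped to [-180, 180); pitch is clamped to [-90, 90].
        """
        def wrap(angle_deg: float) -> float:
            return (angle_deg + 180.0) % 360.0 - 180.0

        return (wrap(yaw_deg - h_fov_deg / 2.0),
                wrap(yaw_deg + h_fov_deg / 2.0),
                max(pitch_deg - v_fov_deg / 2.0, -90.0),
                min(pitch_deg + v_fov_deg / 2.0, 90.0))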
The rendered viewport can then be adjusted in real time based on changes in a user's perceived location, viewing direction, head movements, the rate of change of these characteristics, and other user movements or behaviors. To do this, the user device can track its own pose and movement based on a variety of sensors, such as gyroscopes, magnetometers, accelerometers, and/or the like; the user device can then receive a full complement of rendered raw visual media content for the entire virtual environment being viewed and experienced by the user, and clip the motion- and pose-adjusted viewport from the raw visual media content. However, such a solution requires the transmission of an unacceptably large amount of rendered visual media content, requires that the user device have exceedingly high computational capabilities, and incurs significant latency because all rendering activities are carried out at the user device prior to display of the viewport during each associated period of time. The exceedingly high computational requirements, in turn, mean that the user device (e.g., virtual reality goggles, augmented reality glasses, heads-up display, head-mounted display, or the like) will be bulky and heavy due to increased battery capacity requirements, which erodes the immersiveness of the user experience, and will require processing chips and graphics cards having higher processing capacity, which increases the cost of the user device.
In order to reduce the need for computational capacity at the user device, reduce the cost of user devices, and reduce bandwidth requirements, the computational cost and latency cost of rendering the XR media content can be shared between the user device and one or more other devices. Split rendering can be used to alleviate the XR media content rendering required to be carried out by the user device. The primary goal of split rendering is to reduce the latency between the user's movements and the corresponding changes in the XR scene. This latency, often referred to as motion-to-photon latency, can cause discomfort and reduce the sense of immersion for a user. Split rendering addresses this by minimizing the time it takes to generate and display each frame of the XR media content. Additional detail regarding split rendering is provided in 3GPP TS 26.565, version 0.5.0, the entire contents of which are hereby incorporated herein by reference in their entirety for all purposes.
The XR application's rendering pipeline generates a stereo pair of images, called eye buffers, one for each eye, based on the virtual scene and the user's pose. The pose data is sent to the compositor, which uses this information to generate a warp mesh for each eye. The warp mesh corrects any discrepancies between the rendered image and the actual display output due to the user's movements. The compositor takes the eye buffers and applies lens distortion correction and warp adjustments based on the warp mesh. Once the image is corrected based on user movements and pose information, the corrected image is known as a warped frame. The compositor's output, the warped frame, is sent to the user device for display to the user. By using a compositor and split rendering services, the user device can minimize the motion-to-photon latency.
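The compositor stage just described can be illustrated, in a drastically simplified form, by the sketch below: the warp is reduced to a horizontal shift proportional to the yaw change between the pose used for rendering and the pose sampled at display time, and lens distortion correction is omitted. None of this reflects a production warp-mesh implementation:

    import numpy as np

    def build_warp_shift(render_yaw: float, display_yaw: float,
                         px_per_degree: float) -> int:
        """Horizontal pixel shift approximating the yaw delta (a drastic
        simplification of a real per-eye warp mesh)."""
        return int(round((display_yaw - render_yaw) * px_per_degree))

    def apply_warp(eye_buffer: np.ndarray, shift_px: int) -> np.ndarray:
        """Shift the eye buffer horizontally to re-align it with the latest pose."""
        return np.roll(eye_buffer, shift_px, axis=1)

    def composite_frame(eye_buffers, render_yaw, display_yaw, px_per_degree=10.0):
        """Produce a 'warped frame' from a stereo pair of eye buffers."""
        shift = build_warp_shift(render_yaw, display_yaw, px_per_degree)
        # Lens distortion correction is omitted; only the pose-delta warp is shown.
        return tuple(apply_warp(buf, shift) for buf in eye_buffers)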
By offloading the warp and lens correction computations to a separate compositor, the GPU's workload is reduced, leading to improved rendering performance. The split rendering approach means that the final image displayed to the user is correctly aligned with their perspective, even if there are slight discrepancies due to rapid head movements or changes in the user's pose.
In the XR context, split rendering sessions are facilitated by provisioning a compositor, such as a split rendering server, to carry out split rendering services for a user device. However, over time, a compositor, such as a split rendering server, can become overburdened, resource starved, or otherwise experience a reduction in rendering capacity, which can lead to a degradation in quality of the rendered visual media content.
Edge server provisioning for split rendering session services is often used to deliver high-quality extended reality (XR) experiences, particularly in scenarios where the rendering workload is distributed between different devices or subsystems. This provisioning involves setting up and managing servers at the edge of the network, closer to the end-users, to ensure low latency, efficient data transfer, and smooth communication between the XR devices, rendering hardware, and compositor.
The present disclosure addresses the lack of procedures for seamless offloading of split rendering tasks to split rendering servers under certain challenging scenarios. In the current state of technology, client-driven split rendering procedures are available that can operate well in reliable communication networks. However, when technical issues with edge server infrastructures (e.g., power outages, disaster recovery situations) or network congestion (e.g., audience spikes or localized congestion) are encountered, the offloading process may be degraded. In such cases, seamless and automated XR split rendering session relocation is required. Further, relocating the session solely on the server side would result in service interruption and a drop in quality of experience (QoE) for the user.
However, there are not currently sufficient processes or methods in the relevant 3GPP standards or elsewhere for handling edge server location and distribution, XR server network connectivity issues, split rendering load balancing and scaling, split rendering session management, split rendering server provisioning and deprovisioning, data transfer and compression, and other aspects of XR split rendering.
Also, in practical deployments of split rendering services over edge servers, there are cases where a split rendering session may need to be relocated for reasons that are independent of the split rendering client (e.g., user device), that the split rendering client cannot detect, and/or about which the split rendering client is not made aware. As mentioned above, handling of split rendering session relocation entirely on the server side creates service interruptions and a degradation of QoE for the user if not carried out in cooperation with the already established split rendering session(s). Currently, at the application level (e.g., end-to-end), split rendering session mobility is agnostic to user device handover between different access nodes (e.g., radio access network [RAN], gNodeB [gNB], etc.), and no mechanism is provided for monitoring for XR media content quality degradation, network traffic degradation, QoS metric degradation, QoE metric degradation, or server KPI degradation during XR split rendering, nor are any mechanisms in place for relocating split rendering sessions when such degradation occurs.
The term QoS is used herein to refer to performance characteristics of a network service, and is usually used to assure that the network performs at or above certain levels of quality. QoS metrics can be selected from among: a packet loss rate (which refers to the percentage of packets that are sent from a source device to a destination device but do not arrive), a bit rate, a bandwidth, a latency (which is the amount of time it takes for a packet to travel from a source to a destination), a throughput, a transmission delay, an availability, a jitter (a variance in packet latency over time), a bandwidth allocation (which may be used to ensure that adequate bandwidth is allocated to data flows), and/or the like.
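Several of the QoS metrics listed above can be derived from raw packet observations. The following sketch computes a few of them; treating jitter as the standard deviation of per-packet latency is one common convention and is used here only for illustration:

    import statistics

    def qos_metrics(latencies_ms: list, packets_sent: int, packets_received: int) -> dict:
        """Derive a few QoS metrics from per-packet latency samples and packet counts."""
        return {
            # Fraction of packets sent that never arrived at the destination.
            "packet_loss_rate": 1.0 - packets_received / packets_sent,
            # Average one-way delay across the received packets.
            "latency_ms": statistics.mean(latencies_ms),
            # Jitter as the variation in packet latency over time.
            "jitter_ms": statistics.pstdev(latencies_ms),
        }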
The term QoE is used herein to refer to measures of a customer's experience with a network service and is typically more user-centric. QoE metrics can be selected from among: user satisfaction (which measures how well delivered content or a network service meets the user's expectations), perceptual quality (which is based on subjective opinions of users about the delivered content or network service), accessibility (which refers to the ease with which users can access the delivered content or network service), reliability (which refers to how often a delivered content stream or network service is available to the user device and/or user without interruption), and/or the like.
Some embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms “data,” “content,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.
Additionally, as used herein, the term ‘circuitry’ refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term ‘circuitry’ also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term ‘circuitry’ as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device.
As defined herein, a “computer-readable storage medium,” which refers to a non-transitory physical storage medium (e.g., volatile or non-volatile memory device), can be differentiated from a “computer-readable transmission medium,” which refers to an electromagnetic signal. Such a medium may take many forms, including, but not limited to a non-transitory computer-readable storage medium (e.g., non-volatile media, volatile media), and transmission media. Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves. Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media. Examples of non-transitory computer-readable media include a magnetic computer readable medium (e.g., a floppy disk, hard disk, magnetic tape, any other magnetic medium), an optical computer readable medium (e.g., a compact disc read only memory (CD-ROM), a digital versatile disc (DVD), a Blu-Ray disc, or the like), a random access memory (RAM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), a FLASH-EPROM, or any other non-transitory medium from which a computer can read. The term computer-readable storage medium is used herein to refer to any computer-readable medium except transmission media. However, it will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable mediums may be substituted for or used in addition to the computer-readable storage medium in alternative embodiments.
While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be examples and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
In the following, certain embodiments are explained with reference to mobile communication devices capable of communication via a wireless cellular system and mobile communication systems serving such mobile communication devices. Before explaining in detail the exemplifying embodiments, certain general principles of a wireless communication system, access systems thereof, and mobile communication devices are briefly explained with reference to
According to some embodiments, a communication device or terminal can be provided for wireless access via cells, base stations or similar wireless transmitter and/or receiver nodes, providing access points for a radio access system.
Access points and hence communications there through are typically controlled by at least one appropriate controller apparatus so as to enable operation thereof and management of mobile communication devices in communication therewith. In some embodiments, a control apparatus for a node may be integrated with, coupled to and/or otherwise provided for controlling the access points. In some embodiments, the control apparatus can be arranged to allow communications between a user equipment and a core network or a network entity of the core network. For this purpose, the control apparatus may comprise at least one memory, at least one data processing unit such as a processor or the like, and an input and/or output interface. Via the interface, the control apparatus can be coupled to relevant other components of the access point. The control apparatus can be configured to execute an appropriate software code to provide the control functions. It shall be appreciated that similar components can be provided in a control apparatus provided elsewhere in the network system, for example in a core network entity. The control apparatus can be interconnected with other control entities. The control apparatus and functions may be distributed between several control units. In some embodiments, each base station can comprise a control apparatus. In alternative embodiments, two or more base stations may share a control apparatus.
Access points and associated controllers may communicate with each other via a fixed line connection and/or via a radio interface. The logical connection between the base station nodes can be provided for example by an X2 or the like interface. This interface can be used for example for coordination of operation of the stations and performing reselection or handover operations.
The communication device or user equipment may comprise any suitable device capable of at least receiving wireless communication of data. For example, the device can be a handheld data processing device equipped with a radio receiver, data processing and user interface apparatus. Non-limiting examples include a mobile station (MS) such as a mobile phone or what is known as a ‘smart phone’, a portable computer such as a laptop or a tablet computer provided with a wireless interface card or other wireless interface facility, a personal data assistant (PDA) provided with wireless communication capabilities, or any combinations of these or the like. Further examples include wearable wireless devices such as those integrated with watches or smart watches, eyewear, helmets, hats, clothing, ear pieces with wireless connectivity, jewelry and so on, universal serial bus (USB) sticks with wireless capabilities, modem data cards, machine type devices or any combinations of these or the like.
In some embodiments, a communication device, e.g., configured for communication with the wireless network or a core network entity, may be exemplified by a handheld or otherwise mobile communication device (or user equipment UE). A mobile communication device may be provided with wireless communication capabilities and appropriate electronic control apparatus for enabling operation thereof. Thus, the communication device may be provided with at least one data processing entity, for example a central processing unit and/or a core processor, at least one memory and other possible components such as additional processors and memories for use in software and hardware aided execution of tasks it is designed to perform. The data processing, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. Data processing and memory functions provided by the control apparatus of the communication device are configured to cause control and signaling operations in accordance with certain embodiments as described later in this description. A user may control the operation of the communication device by means of a suitable user interface such as a touch sensitive display screen or pad and/or a key pad, one or more actuator buttons, voice commands, combinations of these, or the like. A speaker and a microphone are also typically provided. Furthermore, a mobile communication device may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories, for example hands-free equipment, thereto.
In some embodiments, a communication device may communicate wirelessly via appropriate apparatus for receiving and transmitting signals. In some embodiments, a radio unit may be connected to the control apparatus of the device. The radio unit can comprise a radio part and associated antenna arrangement. The antenna arrangement may be arranged internally or externally to the communication device.
In the context of a fifth-generation (5G) network, such as illustrated in
In some embodiments, such as illustrated in
In some embodiments, the UE 102 can comprise a single-mode or a dual-mode device such that the UE 102 can be connected to one or more RANs 104. In some embodiments, the RAN 104 may be configured to implement one or more radio access technologies (RATs), such as Bluetooth, Wi-Fi, GSM, UMTS, LTE, or 5G NR, among others, that can be used to connect the UE 102 to the CN 101. In some embodiments, the RAN 104 can comprise or be implemented using a chip, such as a silicon chip, in the UE 102 that can be paired with or otherwise recognized by a similar chip in the CN 101, such that the RAN 104 can establish a connection or line of communication between the UE 102 and the CN 101 by identifying and pairing the chip within the UE 102 with the chip within the CN 101. In some embodiments, the RAN 104 can implement one or more base stations, towers or the like to communicate between the UE 102 and the AMF 108 of the CN 101.
In some embodiments, the communications network 100 or components thereof (e.g., base stations, towers, etc.) can be configured to communicate with a communication device (e.g., the UE 102) such as a cell phone or the like over multiple different frequency bands, e.g., FR1 (below 6 GHz), FR2 (mmWave), other suitable frequency bands, sub-bands thereof, and/or the like. In some embodiments, the communications network 100 can comprise or employ massive multiple input and multiple output (massive MIMO) antennas. In some embodiments, the communications network 100 can comprise multi-user MIMO (MU-MIMO) antennas. In some embodiments, the communications network 100 can employ edge computing whereby the computing servers are communicatively, physically, computationally, and/or temporally closer to the communication device (e.g., UE 102) in order to reduce latency and data traffic congestion. In some embodiments, the communications network 100 can employ other technologies, devices, or techniques, such as small cell, low-powered RAN, beamforming of radio waves, WiFi-cellular convergence, non-orthogonal multiple access (NOMA), channel coding, and the like.
As illustrated in
It will be appreciated that example embodiments of the invention disclosed and/or otherwise described herein arise in the context of a telecommunications network, including but not limited to a telecommunications network that conforms to and/or otherwise incorporates aspects of a fifth-generation (5G) architecture. While
While the methods, devices, and computer program products described herein are described within the context of a fifth-generation (5G) core network and system, such as illustrated in
Aspects of the present disclosure introduce procedures for effectively managing split rendering session transitions during relocation. The disclosed approach includes, in some embodiments, monitoring existing split rendering sessions and triggering relocation based on detected media session degradation. This can enable uninterrupted offloading in dynamic scenarios. Based on the relocation trigger resulting from monitoring, split rendering can be relocated or provisioned to a new server, and split rendering sessions can be efficiently transferred to the new server. Further negotiation between the two split rendering sessions, based on timing information, can facilitate seamless switching during relocation of a split rendering session. Further, extensions to the SplitRenderingConfiguration resource are described, along with a new application-specific message for the split rendering transfer format that provides essential relocation information.
3GPP TR 26.928 defines the terms Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and eXtended Reality (XR) as follows:
Virtual reality (VR) is a rendered version of a delivered visual and audio scene. The rendering is designed to mimic the visual and audio sensory stimuli of the real world as naturally as possible to an observer or user as they move within the limits defined by the application. Virtual reality usually, but not necessarily, requires a user to wear a head mounted display (HMD), to completely replace the user's field of view with a simulated visual component, and to wear headphones, to provide the user with the accompanying audio. Some form of head and motion tracking of the user in VR is usually employed to allow the simulated visual and audio components to be updated in order to ensure that, from the user's perspective, items and sound sources remain consistent with the user's movements. Additional means to interact with the virtual reality simulation may be provided.
Augmented reality (AR) is when a user is provided with additional information or artificially generated items or content overlaid upon their current environment. Such additional information or content will usually be visual and/or audible and their observation of their current environment may be direct, with no intermediate sensing, processing and rendering, or indirect, where their perception of their environment is relayed via sensors and may be enhanced or processed.
Mixed reality (MR) is an advanced form of AR where some virtual elements are inserted into the physical scene with the intent to provide the illusion that these elements are part of the real scene.
Extended reality (XR) refers to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. XR includes representative forms such as AR, MR and VR and the areas interpolated among them. The levels of virtuality range from partially sensory inputs to fully immersive VR. A key aspect of XR is the extension of human experiences especially relating to the senses of existence (represented by VR) and the acquisition of cognition (represented by AR).
Likewise, the term XR is used herein as a superordinate category covering AR, MR and VR including purely virtual applications.
According to the descriptions of use cases of virtual reality applications such as XR applications given by 3GPP TR 22.842, version 17.2.0, the entire disclosure of which is hereby incorporated herein by reference in its entirety for all purposes, many of the XR applications are expected to be stateful, meaning that the applications have a state describing the UE application status at a certain point of time. Additionally, the fact that the UEs are expected to interact (e.g., in gaming applications) implies that the various UEs' states should be shared among the UEs. This introduces several challenges during a handover (HO) procedure:
The present disclosure addresses these challenges and generally relates to a handover in a communication network such as a mobile communication network, e.g. a 5G network. Note that the present disclosure relates to various types and generations of communication networks configured to communicate over the transmission medium using any of various radio access technologies (RATs), also referred to as wireless communication technologies, or telecommunication standards, such as GSM, UMTS (associated with, for example, WCDMA or TD-SCDMA air interfaces), LTE, LTE-Advanced (LTE-A), 5G new radio (5G NR), HSPA, 3GPP2 CDMA2000 (e.g., 1×RTT, 1×EV-DO, HRPD, cHRPD), etc.
The communication network is equipped with a plurality of base stations including a source base station and a target base station. The handover concerns a UE executing an XR application over a connection with the source base station. As used herein, the term “user equipment” may refer to any of various types of computer systems devices which are mobile or portable and which perform wireless communications. Examples of UEs include mobile telephones or smart phones, portable gaming devices, laptops, wearable devices (e.g., smart watch, smart glasses), Personal Digital Assistants (PDAs), portable Internet devices, music players, data storage devices, or other handheld devices, etc. In general, the term “UE” can be broadly defined to encompass any electronic, computing, and/or telecommunications device (or combination of devices) which is easily transported by a user and capable of wireless communication.
The present disclosure is particularly directed to enabling the target base station (target gNodeB, or gNB, in 5G terminology) to support an XR application of the UE in terms of radio capabilities and regarding the resources at the Edge Data Network (EDN), as already introduced above. In embodiments, the target base station and the target EDN(s) are properly prepared to support the UE's XR capabilities, and the proper User Plane Function (UPF) is selected considering the XR capabilities of the Application Function (AF) regarding XR application aspects. Also, during handover, a proper UPF may be re-selected considering the XR capabilities of the target AF, including if the application were to be relocated during handover.
The split rendering mechanisms specified in 3GPP TS 26.565, version 0.5.0, are client driven. The split rendering client (e.g., UE 102) contacts an edge server and requests a list of available computing resources, possibly associated with a desired compute capacity (RAM, CPU, GPU, etc.). The server returns a list of server uniform resource identifiers (URIs) that match the resource request from the split rendering client. Then, the split rendering client can select a server from the list and offload some or all of the split rendering processing to the selected server.
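As a rough illustration of this client-driven flow, the following Python sketch shows a split rendering client requesting candidate servers and selecting one. The endpoint path and JSON field names are assumptions made for illustration only and are not the normative TS 26.565 API:

```python
# Hypothetical sketch of the client-driven offloading flow described above.
import requests  # third-party HTTP client, used here for brevity

def select_split_rendering_server(edge_url: str, min_ram_gb: int, min_gpus: int) -> str:
    # Ask the edge server for computing resources matching a desired capacity.
    resp = requests.post(
        f"{edge_url}/rendering-resources",                   # illustrative endpoint
        json={"minRamGB": min_ram_gb, "minGPUs": min_gpus},  # desired compute capacity
        timeout=5.0,
    )
    resp.raise_for_status()
    server_uris = resp.json()["serverURIs"]                  # URIs matching the request
    # Trivial selection policy: take the first match; a real client might probe
    # latency or load before choosing where to offload rendering.
    return server_uris[0]
```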
However, the approach described in 3GPP TS 26.565, version 0.5.0 only works well when network conditions are static and edge server resources are reliable. Network conditions, though, are rarely static, and edge server resources are often unreliable, meaning that these and other factors can degrade the quality of XR media content output from the split rendering session and split rendering session offloading, such as:
Split rendering clients (e.g., UE 102) in mobility (triggering a handover from one access network and/or RAN to another) facing changing location and network conditions (e.g., travelling in a train, or switching from a home network, i.e., Wi-Fi, to a 3GPP RAT, etc.); split rendering server congestion; under-resourced server provisioning; edge server infrastructures experiencing technical problems that degrade reliability (e.g., electricity outages, disaster recovery, air conditioning issues, etc.); and audience spikes in the network, which typically reduce QoS and create geographically localized congestion.
In those cases, 3GPP TS 26.565, version 0.5.0 does not provide mechanisms for shifting the offloading tasks to another edge server, or for instructing the split rendering client to do so. Furthermore, when the offloading task is shifted, it is important to preserve session continuity in a seamless manner for the client, which is likewise not addressed.
In order to address these and other problems with current systems and practices for split rendering session support, new call flows are added for split rendering relocation. A split rendering session for which rendering is being supported or fully handled by a current split rendering server is monitored, such as by an element of a mobile communication network (e.g., network 100) or a network function of a core network. The monitoring can be carried out by a data network (e.g., DN 116), a RAN node, a session management function (e.g., SMF 110), or the like. The element of the network (e.g., DN 116) or network function (e.g., SMF 110) can monitor for QoS, such as by monitoring or measuring a packet loss rate, a bit rate, a bit error rate, a throughput, a transmission delay, an availability rate, a jitter, or the like. If the element of the network (e.g., DN 116) or network function (e.g., SMF 110) detects a deterioration of one or more QoE metrics and/or one or more QoS metrics, or otherwise determines that the split rendering session should be relocated from the current split rendering server, the element or function (e.g., DN 116, SMF 110, etc.) can indicate the same to the current split rendering server, the split rendering client (e.g., UE 102), an application provider, a target split rendering server, a SWAP server, and/or the like (e.g., provide an indication that the split rendering session should be relocated from the current split rendering server to the split rendering client, an application provider, a target split rendering server, and/or a SWAP server).
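The following sketch, with assumed metric names and thresholds, illustrates the kind of monitoring and trigger logic such an element or function might apply; it is one possible realization under stated assumptions, not a prescribed implementation:

```python
# Illustrative monitoring logic for an active split rendering session.
# Thresholds and metric names are assumptions chosen for this example.
QOS_THRESHOLDS = {"packetLossRate": 0.02, "latencyMs": 50.0, "jitterMs": 10.0}

def qos_deteriorated(samples: dict) -> bool:
    """True if any monitored QoS metric breaches its threshold."""
    return any(samples.get(name, 0.0) > limit
               for name, limit in QOS_THRESHOLDS.items())

def monitor_session(session_id: str, samples: dict, notify) -> None:
    # On deterioration, indicate to the current server, the client, an
    # application provider, and/or a target server that the session should
    # be relocated, as described above.
    if qos_deteriorated(samples):
        notify(session_id, reason="qos_degradation", metrics=samples)
```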
In an instance in which the element of the network (e.g., DN 116) or network function (e.g., SMF 110) determines that the split rendering session should be relocated away from the current split rendering server to another server, the element of the network or network function can then request the provisioning of the target split rendering server. The element (e.g., DN 116) or network function (e.g., SMF 110) can also indicate to the split rendering client (e.g., UE 102) that it is to provide pose and movement information to both (1) the current split rendering server, as it has already been doing during the time that the current split rendering server has been providing split rendering services to the split rendering client, and (2) the target split rendering server. The element of the network (e.g., DN 116) or network function (e.g., SMF 110) can then indicate to the target split rendering server that it is to receive pose and movement information from the split rendering client (e.g., UE 102) and render a second split rendering session in parallel with the current split rendering session being rendered by the current split rendering server. After the split rendering client sends the pose and movement information to both the current and target split rendering servers, the current split rendering server renders viewport-specific XR visual media content and sends the rendered viewport-specific XR visual media content to the split rendering client (e.g., UE 102), while at the same time the target split rendering server renders the same viewport-specific XR visual media content and sends the rendered viewport-specific XR visual media content and/or information about the rendered viewport-specific XR visual media content to the element of the network (e.g., DN 116) or network function (e.g., SMF 110) in the mobile communication network (e.g., network 100).
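During this transition period, the client's role reduces to duplicating each pose and movement update toward both servers. A minimal sketch, assuming each server is reachable through a simple send interface (a placeholder for the actual uplink data channel):

```python
# Minimal sketch of the transition period: each pose/movement update is
# fanned out to both servers so both can render in parallel.
def forward_pose_update(pose_update: dict, current_srs, target_srs) -> None:
    current_srs.send(pose_update)  # keeps the live session rendering uninterrupted
    target_srs.send(pose_update)   # lets the target server render the same viewport
```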
Examples of pose information that can be transmitted from the split rendering client (e.g., UE 102) to one or more split rendering servers are illustrated below in Table 1.
Further detail about the pose information and format can be found in 3GPP TS 26.565, version 0.5.0, the entire contents of which are hereby incorporated herein by reference for all purposes.
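For orientation, a pose sample of the general kind carried in such messages might look as follows; this layout (a position plus an orientation quaternion with a timestamp) is an illustrative assumption only, and the normative format is defined in TS 26.565:

```python
# Illustrative pose sample; the normative format is given in 3GPP TS 26.565.
from dataclasses import dataclass

@dataclass
class PoseSample:
    timestamp_us: int                               # capture time of the sample
    position: tuple[float, float, float]            # x, y, z in metres
    orientation: tuple[float, float, float, float]  # unit quaternion (x, y, z, w)

pose = PoseSample(1_700_000_000_000_000, (0.0, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0))
```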
The element or network function (e.g., DN 116, SMF 110, etc.) in the mobile communication network (e.g., network 100) receives the rendered viewport-specific XR visual media content, or the information about the rendered viewport-specific XR visual media content, from the target split rendering server, and determines from that content or information one or more QoS metrics and/or QoE metrics for the rendered viewport-specific XR visual media content from the target split rendering server. The one or more QoS metrics and/or QoE metrics for the rendered viewport-specific XR visual media content from the target split rendering server can be compared by the element or function against a predetermined threshold or baseline, and/or against comparable QoS metrics and/or QoE metrics for the rendered viewport-specific XR visual media content from the source split rendering server. Accordingly, the element or function (e.g., DN 116, SMF 110, etc.) in the mobile communication network (e.g., network 100) can determine whether the QoS metrics and/or QoE metrics for the rendered viewport-specific XR visual media content from the target split rendering server are sufficiently good and/or better than those from the source split rendering server, and if so, determine that the split rendering session should be relocated from the source split rendering server to the target split rendering server.
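A minimal sketch of this comparison step, assuming for simplicity that every monitored metric is lower-is-better (a real implementation would weight individual metrics and treat QoS and QoE separately):

```python
# Sketch of the relocation decision: the target's metrics must meet the
# baseline and be at least as good as the source's. All names illustrative.
def relocate_to_target(target: dict, source: dict, baseline: dict) -> bool:
    meets_baseline = all(target.get(m, float("inf")) <= limit
                         for m, limit in baseline.items())
    beats_source = all(target[m] <= source.get(m, float("inf")) for m in target)
    return meets_baseline and beats_source
```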
Upon determining that the split rendering session should be relocated from the source split rendering server to the target split rendering server, the element or function (e.g., DN 116, SMF 110, etc.) in the mobile communication network (e.g., network 100) can identify a particular point in time, frame, viewport, or the like, at which point the split rendering session is relocated from the source split rendering server to the target split rendering server.
In some embodiments, specific relocation information can be added to the SplitRenderingConfiguration structure about the QoS metrics and/or QoE metric(s), configurational information about the source or target split rendering server, an identifier for the target split rendering server, and/or the like. Relocation parameters can be added to the existing data model (the SplitRenderingConfiguration resource), such as provided in Table 7.2.2-2 in 3GPP TS 26.565, version 0.5.0. Table 2, below, illustrates properties for the data model for the SplitRenderingConfiguration resource, including the additional properties relocationStatus and relocationParameters, according to an example embodiment.
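By way of example only, an extended configuration carrying these two properties might be serialized as below; every field inside relocationParameters is a hypothetical illustration rather than the normative Table 2 definition:

```python
# Illustrative (hypothetical) extension of a SplitRenderingConfiguration
# resource with relocation properties; not the normative data model.
split_rendering_configuration = {
    # ... existing TS 26.565 properties omitted for brevity ...
    "relocationStatus": "IN_PROGRESS",             # e.g., NONE / IN_PROGRESS / COMPLETE
    "relocationParameters": {
        "targetServerId": "srs-example-042",       # identifier of the target server
        "switchAtTimestampUs": 1_700_000_000_000_000,  # planned switch-over instant
        "qosSnapshot": {"latencyMs": 18.0, "packetLossRate": 0.001},
    },
}
```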
In some embodiments, timeline information, such as that provided in relocationStatus or relocationParameters, may be used, e.g., throughout the relocation procedure or relocation lifecycle, to manage new resource requests to split rendering servers, e.g., the current split rendering server. For example, if the system knows that the current split rendering session will be relocated in a particular number of seconds, such as 2 seconds, 3 seconds, 4 seconds, 5 seconds, or the like, it can anticipate that the allocated resources at the current split rendering server will be freed upon completion of the split rendering session relocation to the new split rendering server, and that those freed resources at the current split rendering server can be used for other purposes.
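A minimal sketch of that anticipation, with hypothetical names: a new resource request that exceeds the currently free capacity can still be admitted if the relocation timeline indicates the relocating session's allocation will be released before the request must start:

```python
# Illustrative admission check using relocation timeline information.
def can_admit_request(requested_gpus: int, free_gpus: int,
                      relocating_session_gpus: int,
                      relocation_eta_s: float, request_start_s: float) -> bool:
    if requested_gpus <= free_gpus:
        return True
    # Count the relocating session's allocation as available if the relocation
    # completes before the new request needs to start.
    if relocation_eta_s <= request_start_s:
        return requested_gpus <= free_gpus + relocating_session_gpus
    return False
```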
Turning now to
In the CNA 200, the processor 202 (and/or co-processors or any other circuitry assisting or otherwise associated with the processor 202) may be in communication with the memory device 204 via a bus for passing information among components of the CNA 200. The memory device 204 may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory device 204 may be an electronic storage device (e.g., a computer readable storage medium) comprising gates configured to store data (e.g., bits) that may be retrievable by a machine (e.g., a computing device like the processor 202). The memory device 204 may be configured to store information, data, content, applications, instructions, or the like for enabling the CNA 200 to carry out various operations of one or more core network functions in accordance with an example embodiment of the present disclosure. For example, the memory device 204 could be configured to buffer input data for processing by the processor 202. Additionally or alternatively, the memory device 204 could be configured to store instructions of core network functions for execution by the processor 202.
The processor 202 may be embodied in a number of different ways. For example, the processor 202 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor 202 may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally or alternatively, the processor 202 may include one or more other processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.
In an example embodiment, the processor 202 may be configured to execute instructions stored in the memory device 204 or otherwise accessible to the processor 202. Alternatively or additionally, the processor 202 may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 202 may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Thus, for example, when the processor 202 is embodied as an ASIC, FPGA or the like, the processor 202 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor 202 is embodied as an executor of instructions, the instructions may specifically configure the processor 202 to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor 202 may be a processor of a specific device (e.g., an encoder and/or a decoder) configured to employ an embodiment of the present invention by further configuration of the processor 202 by instructions for performing the algorithms and/or operations described herein. The processor 202 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 202.
In embodiments that include a communication interface 206, the communication interface 206 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from and/or to a network and/or any other device or module in communication with the CNA 200, such as an NF, an NRF (e.g., 124), a UE (e.g., 102), a RAN (e.g., 104), core network services, an application server and/or function (e.g., 112), a database or other storage device, etc. In this regard, the communication interface 206 may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface 206 may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface 206 may alternatively or also support wired communication. As such, for example, the communication interface 206 may include a communication modem and/or other hardware and/or software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms. In some embodiments, a session management function (e.g., 110) can comprise a 5GC session management function for any suitable CUPS architecture, such as for the gateway GPRS support node (GGSN-C), TWAG-C, BNG-CUPS, N4, Sxa, Sxb, Sxc, evolved packet core (EPC) SGW-C, EPC PGW-C, EPC TDF-C, and/or the like.
In some embodiments, the CNA 200 may represent a user equipment (e.g., 102) that is configured to be connected to other core network entities or network equipment. In some embodiments, user equipment can comprise a mobile telephone (cell phone) or the like.
As illustrated, the CNA 200 can include the processor 202 in communication with the memory 204 and configured to provide signals to and receive signals from the communication interface 206. In some embodiments, the communication interface 206 can include a transmitter and a receiver. In some embodiments, the processor 202 can be configured to control the functioning of the CNA 200, at least in part. In some embodiments, the processor 202 may be configured to control the functioning of the transmitter and receiver by effecting control signaling via electrical leads to the transmitter and receiver. Likewise, the processor 202 may be configured to control other elements of CNA 200 by effecting control signaling via electrical leads connecting the processor 202 to the other elements, such as a display or the memory 204. The processor 202 may, for example, be embodied in a variety of ways including circuitry, at least one processing core, one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits (for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or the like), or some combination thereof. Accordingly, although illustrated in
The CNA 200 may be capable of operating with one or more air interface standards, communication protocols, modulation types, access types, and/or the like. Signals sent and received by the processor 202 may include signaling information in accordance with an air interface standard of an applicable cellular system, and/or any number of different wireline or wireless networking techniques, comprising but not limited to Wi-Fi, wireless local area network (WLAN) techniques, such as Institute of Electrical and Electronics Engineers (IEEE) 802.11, 802.16, 802.3, ADSL, DOCSIS, and/or the like. In addition, these signals may include speech data, user generated data, user requested data, and/or the like.
For example, the CNA 200 and/or a cellular modem therein may be capable of operating in accordance with various first generation (1G) communication protocols, second generation (2G or 2.5G) communication protocols, third-generation (3G) communication protocols, fourth-generation (4G) communication protocols, fifth-generation (5G) communication protocols, and/or Internet Protocol Multimedia Subsystem (IMS) communication protocols (for example, session initiation protocol (SIP)), and/or the like. For example, the CNA 200 may be capable of operating in accordance with 2G wireless communication protocols IS-136 (Time Division Multiple Access (TDMA)), Global System for Mobile communications (GSM), IS-95 (Code Division Multiple Access (CDMA)), and/or the like. In addition, for example, the CNA 200 may be capable of operating in accordance with 2.5G wireless communication protocols General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), and/or the like. Further, for example, the CNA 200 may be capable of operating in accordance with 3G wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and/or the like. The CNA 200 may be additionally capable of operating in accordance with 3.9G wireless communication protocols, such as Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), and/or the like. Additionally, for example, the CNA 200 may be capable of operating in accordance with 4G wireless communication protocols, such as LTE Advanced, 5G, and/or the like, as well as similar wireless communication protocols that may be subsequently developed. In some embodiments, the CNA 200 may be capable of operating according to or within the framework of any suitable control and user plane separation (CUPS) architecture, such as for the gateway GPRS support node (GGSN-C), trusted wireless access gateway (TWAG-C), broadband network gateways (BNGs), N4, Sxa, Sxb, Sxc, evolved packet core (EPC) SGW-C, EPC PGW-C, EPC TDF-C, and/or the like.
It is understood that the processor 202 may include circuitry for implementing audio and/or video and logic functions of the CNA 200. For example, the processor 202 may comprise a digital signal processor device, a microprocessor device, an analog-to-digital converter, a digital-to-analog converter, and/or the like. Control and signal processing functions of the CNA 200 may be allocated between these devices according to their respective capabilities. The processor 202 may additionally comprise an internal voice coder (VC), an internal data modem (DM), and/or the like. Further, the processor 202 may include functionality to operate one or more software programs, which may be stored in the memory 204. In general, the processor 202 and software instructions stored in the memory 204 may be configured to cause the CNA 200 to perform actions. For example, the processor 202 may be capable of operating a connectivity program, such as a web browser. The connectivity program may allow the CNA 200 to transmit and receive web content, such as location-based content, according to a protocol, such as wireless application protocol (WAP), hypertext transfer protocol (HTTP), and/or the like.
In some embodiments, the CNA 200 may also comprise a user interface including, for example, an earphone or speaker, a ringer, a microphone, a display, a user input interface, and/or the like, which may be operationally coupled to the processor 202. The display may, as noted above, include a touch sensitive display, where a user may touch and/or gesture to make selections, enter values, and/or the like. The processor 202 may also include user interface circuitry configured to control at least some functions of one or more elements of the user interface, such as the speaker, the ringer, the microphone, the display, and/or the like. The processor 202 and/or user interface circuitry comprising the processor 202 may be configured to control one or more functions of one or more elements of the user interface through computer program instructions, for example, software and/or firmware, stored on the memory 204 accessible to the processor 202, for example, a volatile memory, a non-volatile memory, devices comprising the same, and/or the like. The CNA 200 may include a battery for powering various circuits related to the mobile terminal, for example, a circuit to provide mechanical vibration as a detectable output. The user input interface may comprise devices allowing the CNA 200 to receive data, such as a keypad (e.g., a virtual keyboard presented on a display or an externally coupled keyboard) and/or the like.
As shown in
The CNA 200 may include volatile memory and/or non-volatile memory, which can comprise some or all of the memory 204 or can alternatively be a separate memory within or connected to the CNA 200. For example, volatile memory may include Random Access Memory (RAM) including dynamic and/or static RAM, on-chip or off-chip cache memory, and/or the like. Non-volatile memory, which may be embedded and/or removable, may include, for example, read-only memory, flash memory, magnetic storage devices, for example, hard disks, floppy disk drives, magnetic tape, optical disc drives and/or media, non-volatile random access memory (NVRAM), and/or the like. Like volatile memory, non-volatile memory may include a cache area for temporary storage of data. At least part of the volatile and/or non-volatile memory may be embedded in processor 202. The memories may store one or more software programs, instructions, pieces of information, data, and/or the like. For example, the memory 204 may store software or instructions of one or more network functions of the core network which may be used by the apparatus for performing operations disclosed herein.
The memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying the CNA 200. In an example embodiment, the processor 202 may be configured, using computer code stored at the memory 204, to provide the operations disclosed herein with respect to the base stations, WLAN access points, network nodes including the UEs, and the like. Likewise, the CNA 200 can be configured to be any other component or network equipment from the core network.
Some of the embodiments disclosed herein may be implemented in software, hardware, application logic, or a combination of software, hardware, and application logic. The software, application logic, and/or hardware may reside on the memory 204, the processor 202, or electronic components, for example. In some example embodiments, the application logic, software, or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a “computer-readable medium” may be any non-transitory media that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer or data processor circuitry, with examples depicted at
In some embodiments, the apparatus 300 is or comprises exemplary specialized hardware particularly dimensioned and configured to carry out any of the methods, processes, and approaches described herein. In some embodiments, the apparatus 300 can be a part of the system 100 or in communication with a component thereof. It will be appreciated that the apparatus 300 is provided as an example of one embodiment and should not be construed to narrow the scope or spirit of the invention in any way. In this regard, the scope of the disclosure encompasses many potential embodiments in addition to those illustrated and described herein. As such, while
The apparatus 300 may be embodied as a desktop computer, laptop computer, mobile terminal, mobile computer, mobile phone, mobile communication device, game device, digital camera and/or camcorder, audio and/or video player, television device, radio receiver, digital video recorder, positioning device, a chipset, a computing device comprising a chipset, any combination thereof, and/or the like. In some example embodiments, the apparatus 300 is embodied as a mobile computing device, such as mobile telephones, mobile computers, personal digital assistants (PDAs), pagers, laptop computers, desktop computers, gaming devices, televisions, e-papers, and other types of electronic systems, which may employ various embodiments of the invention.
The apparatus 300 can include a computing device 302 including a processor 304, and storage, such as a non-volatile memory 306 and/or volatile memory 308. In some embodiments, the processor 304 may, for example, be embodied as various means including circuitry, one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), or some combination thereof. Accordingly, although illustrated in
In addition to broad-band systems, some Narrow-band Advanced Mobile Phone System (NAMPS), as well as Total Access Communication System (TACS), mobile terminals may also benefit from embodiments of this invention, as should dual or higher mode phones (e.g., digital/analog or TDMA, CDMA, and/or analog phones). Additionally, the apparatus 300 or a component thereof may be capable of operating according to Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX) protocols.
It is understood that the processor 304 may comprise circuitry for implementing audio and/or video and logic functions of the apparatus 300. For example, the processor 304 may comprise a digital signal processor device, a microprocessor device, an analog-to-digital converter, a digital-to-analog converter, and/or the like. Control and signal processing functions of the mobile terminal may be allocated between these devices according to their respective capabilities. The processor may additionally comprise an internal voice coder (VC), an internal data modem (DM), and/or the like. Further, the processor 304 may comprise functionality to operate one or more software programs, which may be stored in memory. For example, the processor 304 may be capable of operating a connectivity program, such as a web browser. The connectivity program may allow the apparatus 300 to transmit and receive web content, such as location-based content, according to a protocol, such as Wireless Application Protocol (WAP), hypertext transfer protocol (HTTP), and/or the like. The apparatus 300 may be capable of using a Transmission Control Protocol and/or Internet Protocol (TCP/IP) to transmit and receive web content across the internet or other networks.
The apparatus 300 may also comprise a user interface 312 including, for example, an earphone or speaker, a ringer, a microphone, a user display, a user input interface, and/or the like, which may be operationally coupled to the processor 304. In this regard, the processor 304 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface, such as, for example, the speaker, the ringer, the microphone, the display, and/or the like. The processor 304 and/or user interface circuitry comprising the processor 304 may be configured to control one or more functions of one or more elements of the user interface through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor 304 (e.g., non-volatile memory 306, volatile memory 308, and/or the like). Although not shown, the apparatus 300 may comprise a battery for powering various circuits related to the apparatus 300, for example, a circuit to provide mechanical vibration as a detectable output. The apparatus 300 can further comprise a display 314. In some embodiments, the display 314 may be of any type appropriate for the electronic device in question with some examples including a plasma display panel (PDP), a liquid crystal display (LCD), a light-emitting diode (LED), an organic light-emitting diode display (OLED), a projector, a holographic display, or the like. The user interface 312 may comprise devices allowing the apparatus 300 to receive data, such as a keypad, a touch display (e.g., some example embodiments wherein the display 314 is configured as a touch display), a joystick (not shown), and/or other input device. In embodiments including a keypad, the keypad may comprise numeric (0-9) and related keys (#, *), and/or other keys for operating the apparatus 300.
The apparatus 300 may comprise memory, such as the non-volatile memory 306 and/or the volatile memory 308, such as RAM, read only memory (ROM), non-volatile RAM (NVRAM), a subscriber identity module (SIM), a removable user identity module (R-UIM), and/or the like. In addition to the memory, the apparatus 300 may comprise other removable and/or fixed memory. In some embodiments, the volatile memory 308 may include Random Access Memory (RAM) including dynamic and/or static RAM, on-chip or off-chip cache memory, and/or the like. In some embodiments, the non-volatile memory 306, which may be embedded and/or removable, may include, for example, read-only memory, flash memory, magnetic storage devices (e.g., hard disks, floppy disk drives, magnetic tape, etc.), optical disc drives and/or media, non-volatile random access memory (NVRAM), and/or the like. Like the volatile memory 308, the non-volatile memory 306 may include a cache area for temporary storage of data. The memories may store one or more software programs, instructions, pieces of information, data, and/or the like which may be used by the mobile terminal for performing functions of the mobile terminal. For example, the memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying the apparatus 300.
In some example embodiments, the apparatus 300 includes various means for performing the various functions herein described. These means may comprise one or more of the processor 304, the non-volatile memory 306, the volatile memory 308, the user interface 312, or the display 314. The means of the apparatus 300 as described herein may be embodied as, for example, circuitry, hardware elements (e.g., a suitably programmed processor, combinational logic circuit, and/or the like), a computer program product comprising computer-readable program instructions (e.g., software or firmware) stored on a computer-readable medium (e.g., storage 306 or 308) that is executable by a suitably configured processing device (e.g., the processor 304), or some combination thereof.
In some example embodiments, one or more of the means illustrated in
The processor 304 may, for example, be embodied as various means including one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), one or more other types of hardware processors, or some combination thereof. Accordingly, although illustrated in
The memory 306 and/or 308 may comprise, for example, volatile memory, non-volatile memory, or some combination thereof. In this regard, the memory 306 and/or 308 may comprise a non-transitory computer-readable storage medium. Although illustrated in
In some embodiments, the apparatus 300 can further comprise a communication interface (not shown) that may be embodied as any device or means embodied in circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., the memory 306 and/or 308) and executed by a processing device (e.g., the processor 304), or a combination thereof that is configured to receive and/or transmit data from and/or to another computing device. In some example embodiments, the communication interface is at least partially embodied as or otherwise controlled by the processor 304. In this regard, the communication interface may be in communication with the processor 304, such as via a bus. The communication interface may include, for example, an antenna, a transmitter, a receiver, a transceiver and/or supporting hardware or software for enabling communications with one or more remote computing devices. In some embodiments, e.g., wherein the apparatus is embodied as an apparatus 300, the communication interface may be embodied as or comprise the transmitter and the receiver. The communication interface may be configured to receive and/or transmit data using any protocol that may be used for communications between computing devices. In this regard, the communication interface may be configured to receive and/or transmit data using any protocol that may be used for transmission of data over a wireless network, wireline network, some combination thereof, or the like by which the apparatus 300 and one or more computing devices may be in communication. As an example, the communication interface may be configured to receive and/or otherwise access content (e.g., web page content, streaming media content, and/or the like) over a network from a server or other content source. The communication interface may additionally be in communication with the memory 306 and/or 308, user interface 312 and/or the processor 304, such as via a bus.
The user interface 312 may be in communication with the processor 304 and configured to receive an indication of a user input and/or to provide an audible, visual, mechanical, or other output to a user. As such, the user interface 312 may include, for example, a keyboard, a mouse, a joystick, a display, a touch screen display, a microphone, a speaker, and/or other input and/or output mechanisms. In some embodiments of the apparatus 300, the user interface 312 may be embodied as or comprise the user input interface, such as the display 314 (shown in
The processor 304 may be embodied as various means, such as circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., the memory 306 and/or 308) and executed by a processing device, or some combination thereof. The processor 304 may further be in communication with one or more of the memory 306 and/or 308 or the user interface 312, such as via a bus.
The processor 304 may be configured to receive a user input from a user interface 312, such as a touch display. The user input or signal may carry positional information indicative of the user input. In this regard, the position may comprise a position of the user input in a two-dimensional space, which may be relative to the surface of the touch display user interface. For example, the position may comprise a coordinate position relative to a two-dimensional coordinate system (e.g., an X and Y axis), such that the position may be determined. Accordingly, the processor 304 may determine an element, instruction, and/or command that corresponds with a key or image displayed on the touch display user interface at the determined position or within a predefined proximity (e.g., within a predefined tolerance range) of the determined position. The processor 304 may be further configured to perform a function or action related to the element, instruction, and/or command so determined, based on the position of the touch or other user input.
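By way of a purely illustrative, non-limiting example, the following Python sketch shows one way the hit-testing described above could be carried out; the element structure, field names, and tolerance value are assumptions introduced for illustration and are not part of this disclosure.

    # Hypothetical sketch: map a touch position to a displayed key or image
    # within a predefined tolerance, as described above.
    from dataclasses import dataclass

    @dataclass
    class UiElement:
        name: str
        x: float       # top-left X of the element's bounding box
        y: float       # top-left Y of the element's bounding box
        width: float
        height: float

    def element_at(elements, touch_x, touch_y, tolerance=8.0):
        """Return the first element whose bounding box, grown by `tolerance`
        units on each side, contains the touch position, else None."""
        for e in elements:
            if (e.x - tolerance <= touch_x <= e.x + e.width + tolerance
                    and e.y - tolerance <= touch_y <= e.y + e.height + tolerance):
                return e
        return None

    # A touch at (105, 52) selects the "play" key even if it lands a few
    # units outside the key's exact bounds.
    keys = [UiElement("play", 100, 40, 60, 30), UiElement("stop", 180, 40, 60, 30)]
    selected = element_at(keys, 105, 52)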
Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein may be improved user equipment or network equipment configuration. As such, any embodiment of a method, system, approach, device, apparatus, or computer program described or illustrated herein is understood to comprise any or all of the components, functionalities, elements, or steps of any other embodiment, such that any method can be carried out by the CNA 200, the apparatus 300, or any other suitable system or device, and likewise can be carried out according to computer program code envisioned within the scope of this disclosure.
In some embodiments, edge server provisioning and split rendering session relocation can be facilitated, controlled, managed, or otherwise coordinated by a Real-Time Communication Application Function (RTC-AF), such as an RTC-AF in a fifth-generation core network (5GCN) or the like. An RTC-AF can be a hardware-based network function, a logical network function, a software-defined network function, a fully virtualized network function, or provided in any other suitable manner. An RTC-AF can be configured to communicate with a PCF (e.g., 114), an SMF (e.g., 110), a Network Exposure Function (NEF), a UDM (e.g., 118), a UPF (e.g., 106a, 106b), a gNodeB (e.g., 104), an AMF (e.g., 108), an AUSF (e.g., 120), a Security Edge Protection Proxy (SEPP), a UE (e.g., 102), an Application Provider, a Real-Time Communication Application Server (RTC-AS), one or more split rendering servers (SRS), a SWAP server, any other function, element, node, server, or component of a core network (e.g., 101) or a data network (e.g., 116), and/or the like.
The apparatus 400 can include or be in communication with a computer-readable storage medium 410, which can be similar to or the same as that described elsewhere in this disclosure. The computer-readable storage medium 410 can be embodied as and/or stored on any suitable computer program product, such as a transitory or non-transitory storage medium. The computer-readable storage medium 410 can be configured to communicate some or all of the computer program code or instructions 408 to the memory 406 of the RTC-AF 402, whether initially, iteratively, continuously, sporadically, on demand, as needed, or otherwise.
The apparatus 400 or RTC-AF 402 can be configured to communicate with a network 412, such as the Internet, a communication network, an internet-of-things (IoT) network, and/or the like. The network 412 can be a core network (e.g., CN 101) or a data network (e.g., DN 116). The network 412 can comprise or be in operable communication with a split rendering services network 414 that includes various components, subsystems, hardware, functions, nodes, or network elements configured to facilitate rendering for a split rendering session with a UE (e.g., UE 102). The split rendering services network 414 can include, for example, a current split rendering server 416 (current SRS 416), a Real-Time Communication Application Server 418 (RTC-AS 418), a new SRS 420, and/or an Application Provider 422. The split rendering services network 414 can also be in communication with the network 412. Alternatively, the split rendering services network 414 can comprise or be comprised within the network 412. The specific network architecture or sub-system configuration used may not materially affect the operations of the RTC-AF 402. Instead, the RTC-AF 402 can be configured to be operably coupled to, in communication with, and/or part of the split rendering services network 414, according to some embodiments.
Alternatively, one or more of the current SRS 416, new SRS 420, RTC-AS 418, and/or Application Provider 422 can be located in one or more other sub-systems or sub-networks of the communication network (e.g., 5GS, 5GCN, DN, etc.). Nevertheless, the RTC-AF 402 can be configured to receive, measure, calculate, or otherwise determine metrics, such as QoS metrics, QoE metrics, KPIs of the current and/or new SRS 416, 420, and/or the like, associated with a split rendering session, and determine whether and when to relocate the split rendering session from the current SRS 416 to the new SRS 420. The RTC-AF 402 could, for example, determine, based on one or more QoE metrics, that there has been a degradation in the render quality of the viewport observed by the user when consuming the XR media content as rendered by the current SRS 416. Additionally or alternatively, the RTC-AF 402 could, for example, determine, based on one or more QoS metrics, that there has been a degradation in data quality, network usability, packet transmissibility, packet reception, packet interpretation, network traffic management, server resource availability, packet traffic congestion, or other QoS metrics, and/or the like. Additionally or alternatively, the RTC-AF 402 could, for example, determine, based on one or more key performance indicators (KPIs) associated with the current SRS 416, that there has been a degradation in operability, operating efficiency, operating efficacy, resource availability, or other KPIs, and/or the like. Based on such a degradation being observed by the RTC-AF 402, the RTC-AF 402 can determine that the split rendering session needs to be relocated to another server or edge resource. The RTC-AF 402 can select the new SRS 420, e.g., from among a plurality of SRSs. The RTC-AF 402 can also communicate the need for split rendering session relocation to the new SRS 420, the RTC-AS 418, and/or the Application Provider 422.
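By way of a purely illustrative, non-limiting example, the following Python sketch shows one way the degradation-based relocation decision described above could be expressed; the metric names, thresholds, and selection heuristic are assumptions introduced for illustration, and the RTC-AF of this disclosure is in no way limited to them.

    # Hypothetical thresholds; a real deployment would derive these from
    # operator policy and/or application requirements.
    QOE_FLOOR = 3.5          # minimum acceptable viewport QoE score
    MAX_PACKET_LOSS = 0.02   # maximum tolerable packet loss ratio

    def relocation_needed(metrics):
        """metrics: observed QoE/QoS/KPI values for the current SRS."""
        return (metrics.get("viewport_qoe", 5.0) < QOE_FLOOR
                or metrics.get("packet_loss", 0.0) > MAX_PACKET_LOSS
                or metrics.get("srs_cpu_headroom", 1.0) < 0.1)

    def select_new_srs(candidates):
        """Pick the candidate SRS with the lowest reported latency."""
        return min(candidates, key=lambda srs: srs["latency_ms"])

    current = {"viewport_qoe": 2.9, "packet_loss": 0.01, "srs_cpu_headroom": 0.4}
    if relocation_needed(current):
        new_srs = select_new_srs([{"id": "srs-a", "latency_ms": 18},
                                  {"id": "srs-b", "latency_ms": 9}])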
While the real-world movements of the user and corresponding real-world movement, pose, position, angle, attitude, pitch, roll, and yaw of the client device in
In the context of mobile communication networks, e.g., the network 100, and understanding the real-world ‘reference system’ of a split rendering client device, e.g., the UE 102, that corresponds to the virtualized ‘coordinate system’ of the XR visual media content being rendered by split rendering server(s), several of the various options for XR end-to-end rendering architecture are briefly described below with reference to
The XR Device 502 and/or the XR Server 504 illustrated in
The XR Server 504 and the XR Device 602 illustrated in
The viewport-independent XR media delivery approach 800 illustrated in
The viewport-independent XR media delivery approach 900 illustrated in
The viewport-independent XR media delivery approach 1000 illustrated in
Referring now to
In some embodiments, in operation 1, the RTC-AF 1106, which may comprise hardware or be virtualized on a hardware node (e.g., DN 116, SMF 110, etc.), constantly (or regularly) monitors for split-rendering media session degradations. When degradation (e.g., recurrent degradation) of the quality of the rendered viewport, a QoS metric, a QoE metric, or the like is observed, e.g., within a predetermined time period, and/or the degradation of the quality of the rendered viewport is sufficiently severe, the RTC-AF 1106 may indicate to the Application Provider 1108 that the current split rendering session is threatened and that the split rendering session needs to be relocated.
Then, the approach 1100 can include a step of provisioning and configuring a new split-rendering session. In some embodiments, the same proportion of viewport rendering performed by the UE 1102 versus the new split rendering server, or a higher one, can be used. In some embodiments, the final decision regarding the rendering split configuration is left to the Application Provider 1108.
In some embodiments, in step 2, the Application Provider 1108 requests and sets up the edge server(s) used for the split-rendering, e.g., as described in 3GPP TS 26.506, the entire disclosure of which is hereby incorporated herein by reference in its entirety for all purposes. However, the Application Provider 1108 may use any other suitable method to provision edge server configurations or may leave it to a mobile network operator (e.g., the operator currently serving the UE 1102 or the home network operator of the UE 1102) to set up appropriate edge servers to run the split-rendering process.
In some embodiments, in step 3, the Application Provider 1108 provisions the split-rendering session using the RTC-AF 1106 and the RTC-AS 1104, and sets up the split-rendering session. If the edge servers were provisioned in step 2, the edge server identifiers may be provided in this split rendering session so the UE 1102 and the Application Provider 1108 can consistently use the same edge server identifiers during provisioning of the new split rendering server by the RTC-AS 1104 and/or the RTC-AF 1106 with regard to the split rendering session. In step 4, relocation of the split rendering session to the new split rendering server is achieved. In step 5, once the relocation of the split rendering session to the new split rendering server is complete, the RTC-AF 1106 informs the Application Provider 1108 that the current SRS can be un-provisioned.
Referring now to
At step 2 of the process 1200, the provisioning of the new split rendering server 1210 is announced to the Application 1202 as part of Service Access Information.
At step 3, the Application 1202 requests client media functions from the split rendering client 1206 to set up a new split-rendering session.
At step 4, the split rendering client 1206 requests that the current split rendering server 1208 transfer relevant configuration information and rendering parameters to the now-provisioned new split rendering server 1210.
In some embodiments, additional steps (e.g., 5-8) can be carried out by the new split rendering server 1210 without sending rendered frames back to the split rendering client 1206. In some embodiments, the terms of relocation of the split rendering service and the transition from the current split rendering server 1208 to the new split rendering server 1210 can be negotiated during the relocation.
For example, at step 5, the new split rendering server 1210 starts the split rendering process. At step 6, the new split rendering server 1210 establishes a WebRTC session for split rendering of the XR viewport for the split rendering client 1206.
At step 7, the new split rendering server 1210 informs the Application 1202 that split rendering is successfully running on the edge resource (i.e., new split rendering server 1210).
At step 8, the split rendering client 1206 sends uplink metadata, such as pose information, action information, positional information, and/or the like, to the new split rendering server 1210. In parallel, the split rendering client 1206 sends the same uplink metadata to the current split rendering server 1208.
In some embodiments, the pose information or the like is simply duplicated at the split rendering client 1206 and one of the duplicates is sent to each of the current split rendering server 1208 and the new split rendering server 1210. Alternatively, the pose information can be sent from the split rendering client 1206 to only one of the current split rendering server 1208 or the new split rendering server 1210, and the receiving server (one of 1208 or 1210) forwards the same pose information and the like on to the other server (the other of 1208 or 1210). Additionally or alternatively, the split rendering client 1206 can provide the pose information or the like to another network-side entity, such as a network entity hosting the RTC-AF 1212 or the Application Provider 1214, which may be a data network (e.g., DN 116) or a session management function (e.g., SMF 110). In some embodiments, the split rendering client 1206 is not able to detect or determine that the degradation of rendered viewport quality is occurring. In other embodiments, the split rendering client 1206 may be able to determine that degradation of the rendered viewport quality is occurring, in which case the split rendering client 1206 may be configured to initiate or request relocation of the split rendering session or split rendering services from the current split rendering server 1208, such as to the new split rendering server 1210. In some embodiments, the RTC-AF 1212 or a media session handler 1204 may receive an indication from another entity that the degradation of rendered viewport quality has occurred (e.g., has occurred repeatedly during a particular time period, or has occurred with such severity, as to warrant relocation of the split rendering services to a new split rendering server).
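By way of a purely illustrative, non-limiting example, the following Python sketch shows the duplicate-and-send variant described above, in which the split rendering client fans the same uplink metadata out to both servers during the transition period; the send() transport is a hypothetical stub standing in for, e.g., a WebRTC data channel.

    import copy
    import time

    def send(server, message):
        # Hypothetical transport stub; a real client would send over an
        # established uplink such as a WebRTC data channel.
        pass

    def fan_out_uplink_metadata(current_srs, new_srs, pose, actions):
        message = {"timestamp": time.time(), "pose": pose, "actions": actions}
        send(current_srs, message)             # keeps the active session fed
        send(new_srs, copy.deepcopy(message))  # warms up the new server in parallel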
Referring now to
In operation 2 of the approach 1300, in response to being provisioned for split rendering, the new split rendering server 1304b creates a description of expected split rendering output and what input it expects to receive from the split rendering client 1302. This input and output information can be provided from the new split rendering server 1304b to an XR Source Management element of the split rendering client 1302.
In operation 3 of the approach 1300, the split rendering client 1302 may establish transport connections, such as a WebRTC session, and request the buffer streams from a Media Access Function (MAF) of the split rendering client 1302, which in turn establishes a connection to the new split rendering server 1304b to stream pose and retrieve split rendering buffers.
Then, in operations 4-12, the split rendering session relocation process begins.
In operation 4 of the approach 1300, the RTC-AF 1306 negotiates the split rendering session relocation with the current and new split rendering servers 1304a,b. In some embodiments, the result of this negotiation can be, among other things, a timestamp in the future or a yet-to-be-received frame number, from which the MAF of the split rendering client 1302 will start receiving buffer frame(s) from the new split rendering server 1304b and at which point the current split rendering server 1304a will stop sending buffer frame(s) to the MAF of the split rendering client 1302.
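By way of a purely illustrative, non-limiting example, the following Python sketch shows one way such a switchover point could be computed as part of the negotiation of operation 4; expressing the trigger as a frame number a fixed lead time ahead is an assumption introduced for illustration.

    def negotiate_trigger(current_frame, frame_rate, lead_time_s=0.5):
        """Return a transition trigger a short lead time in the future,
        expressed as a yet-to-be-received frame number."""
        lead_frames = int(lead_time_s * frame_rate)
        return {
            "trigger_type": "frame",   # a "timestamp" trigger is also possible
            "frame_number": current_frame + lead_frames,
            "lead_time_s": lead_time_s,
        }

    trigger = negotiate_trigger(current_frame=14400, frame_rate=90)
    # -> the switchover occurs at frame 14445, i.e., 0.5 s ahead at 90 fps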
In operation 5 of the approach 1300, the RTC-AF 1306 instructs the MAF of the split rendering client 1302 and the current and new split rendering servers 1304a,b to start the split rendering session and/or server relocation process. The RTC-AF 1306 may then provide, to the MAF of the split rendering client 1302 and the current and new split rendering servers 1304a,b, an indication of the point in time or the particular frame at which the relocation of the split rendering session and/or service will be triggered.
In operation 6 of the approach 1300, the rendering will then temporarily switch from a Rendering Loop to a Rendering Relocation Loop, in which an XR Source Manager of the split rendering client 1302 retrieves pose and user input from an XR runtime 1301.
In operation 7 of the approach 1300, the XR Source Manager of the split rendering client 1302 shares the pose predictions and user input actions with the current and new split rendering servers 1304a,b.
In operation 8 of the approach 1300, both the current and new split rendering servers 1304a,b use that pose prediction and user input action information to render XR viewport frames.
In operation 9 of the approach 1300, the rendered XR viewport frames can be encoded and streamed downstream to the MAF of the split rendering client 1302. In some embodiments, the source of the frames changes from the current split rendering server 1304a to the new split rendering server 1304b when the transition trigger is reached. As mentioned elsewhere, the transition trigger may be a transition timestamp (a point in time at which the transition of active viewport rendering from the current split rendering server 1304a to the new split rendering server 1304b occurs). Additionally or alternatively, the transition trigger may be based on a frame counter (a frame number at which the transition of active viewport rendering from the current split rendering server 1304a to the new split rendering server 1304b occurs).
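By way of a purely illustrative, non-limiting example, the following Python sketch shows how the MAF of the split rendering client could apply a frame-counter transition trigger when deciding which server's frames to accept; the frame and identifier fields are assumptions introduced for illustration.

    def active_source(frame_number, trigger_frame, current_srs_id, new_srs_id):
        """Before the trigger frame, the current SRS is the active source;
        from the trigger frame onward, the new SRS is."""
        return new_srs_id if frame_number >= trigger_frame else current_srs_id

    def accept_frame(frame, trigger_frame):
        """Accept a frame only if it came from the server that is the
        active source for that frame number."""
        expected = active_source(frame["number"], trigger_frame,
                                 "current-srs", "new-srs")
        return frame["source"] == expected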
In operation 10 of the approach 1300, the raw buffer frames are passed from the MAF of the split rendering client 1302 to the XR Runtime 1301 to be displayed to the user.
In operation 11 of the approach 1300, the XR Runtime 1301 achieves the final composition and rendering of the XR viewport based on the raw buffer frames passed from the MAF of the split rendering client 1302 to the XR Runtime 1301.
In operation 12 of the approach 1300, once the transition has been completed in the Rendering Relocation Loop, the rendering returns to the Rendering Loop with the new split rendering server 1304b being the sole or principal rendering server. Thereafter, the relocation procedure ends by closing the established connections.
In operation 13 of the approach 1300, a Scene Manager of the split rendering client 1302 can then terminate the connection with the current split rendering server 1304a.
In some embodiments, when provisioning a split rendering session or a new split rendering server, an AF or split rendering client can indicate guidelines for provisioning the session. For example, before or when split rendering session relocation is triggered, the ProvisioningSessionType can be set to “BIDIRECTIONAL.” In some embodiments, the aspId can be configured and can be a unique identifier for an Application Service Provider that offers split rendering. In some embodiments, the externalApplicationId can uniquely identify the application and can be terminated by the sub-string “+3gpp-sr”, such as: “urn:com:example:game+3gpp-sr”.
In some embodiments, edge resource configurations can be set, determined, and communicated between split rendering servers, application functions, and split rendering clients. For example, a split rendering application may define edge resource configurations in a similar manner to 3GPP TS 26.512, version 16.10.0 and version 17.5.0, the entire disclosures of each of which are hereby incorporated herein by reference in their entireties for all purposes. In some embodiments, edge resource configurations can be used for a split-rendering session. In some embodiments, eligibilityCriteria can be present and appRequest can be set to true. In some embodiments, easRequirements can indicate “SR” as the easType and can include “3gpp-sr” among the easFeatures. In some embodiments, serviceKpi can be present and indicate split rendering server processing and networking capabilities and requirements.
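By way of a purely illustrative, non-limiting example, the provisioning and edge-resource parameters discussed above could take a form such as the following, rendered here as Python dictionaries mirroring a JSON structure; only the field names expressly discussed above are drawn from this disclosure, while the remaining values and surrounding structure are assumptions introduced for illustration.

    provisioning_request = {
        "provisioningSessionType": "BIDIRECTIONAL",
        "aspId": "asp-example-001",  # hypothetical Application Service Provider id
        "externalApplicationId": "urn:com:example:game+3gpp-sr",
    }

    edge_resource_configuration = {
        "eligibilityCriteria": {"appRequest": True},
        "easRequirements": {
            "easType": "SR",
            "easFeatures": ["3gpp-sr"],
        },
        "serviceKpi": {  # hypothetical capability/requirement values
            "maxResponseTimeMs": 20,
            "minGpuTeraflops": 10,
        },
    }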
Referring now to
Further details about the SWAP protocol are described in 3GPP TS 26.113, version 0.6.0, the entire disclosure of which is hereby incorporated herein by reference in its entirety for all purposes. In some embodiments, the SWAP protocol may allow for definition of application-specific messages, such as for the establishment of a split rendering session. For Split Rendering, application-specific messages that are supported according to the approach 1400 can include a configuration message that carries the split rendering configuration information from the split rendering client 1402 to the split rendering server 1404. In some embodiments, such messages can be identified by a type “urn:3gpp:sr-msc:sr-configuration”. In some embodiments, a rendering description message may be supported that carries the description of the split rendered media from the split rendering server 1404 to the split rendering client 1402. In some embodiments, the rendering description message can be identified by the type “urn:3gpp:sr-msc:sr-description”. In some embodiments, the rendering description message provides the semantics of the media that is delivered over WebRTC from the split rendering server 1404 to the split rendering client 1402.
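By way of a purely illustrative, non-limiting example, the two application-specific SWAP message types named above could be constructed as follows in Python; only the URN type strings come from the text, while the envelope structure and payload fields are assumptions introduced for illustration.

    def sr_configuration_message(config):
        """Carries split rendering configuration from client to server."""
        return {"type": "urn:3gpp:sr-msc:sr-configuration", "payload": config}

    def sr_description_message(description):
        """Carries the split rendered media description from server to client."""
        return {"type": "urn:3gpp:sr-msc:sr-description", "payload": description}

    msg = sr_configuration_message({"targetServer": "srs-42",   # hypothetical
                                    "outputFormat": "eye-buffers"})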
In some embodiments, a SWAP message exchange for the establishment of a split rendering session, such as is depicted in the call flow diagram of
In some embodiments, the SWAP server 1406 can be or function as an RTC-AF (e.g., 1106, 1212, 1306). In some embodiments, an RTC-AF (e.g., 400) may be housed at, hosted on, co-provisioned with, mapped to, and/or provided in conjunction with the SWAP server 1406. For example, an RTC-AF (e.g., 400) can be a logical function or software-defined function and/or functionality of the SWAP server 1406 in a core network (e.g., CN 101), such as a fifth-generation core network (5GCN), as a data server in a data network (e.g., DN 116), in a radio access network (e.g., (R)AN 104), as an application server or application function (e.g., AS/AF 112), and/or as a dedicated server of the core network, and/or the like.
In some embodiments, possible prerequisites may include that the split rendering client 1402 has discovered the identifier of the split rendering server 1404 that it will use for its split rendering session, that the split rendering client 1402 has retrieved the address of the SWAP server 1406 as part of the configuration, and/or the like.
In some embodiments, the operations of the approach 1400 can include operation 1 in which the split rendering client 1402 sends the configuration message as an application-specific SWAP message to the SWAP server 1406. In some embodiments, the configuration message may provide an identifier indicating the target split rendering server 1404 as a matching criterion.
In operation 2, the SWAP server 1406 may use the provided matching criteria to locate the split rendering server 1404.
In operation 3, the SWAP server 1406 can forward the configuration message to the target split rendering server 1404.
In operation 4, the SWAP server 1406 confirms the successful forwarding of the message to the split rendering client 1402.
In operation 5, the split rendering server 1404 processes the SR configuration message. In some embodiments, the split rendering server 1404 may, for example, verify application and resource availability, launch the application, configure its rendering, and create a rendering description.
In operation 6, the split rendering server 1404 sends the rendering description message as an application-specific SWAP message to the SWAP server 1406.
In operation 7, the SWAP server 1406 forwards the application-specific SWAP message to the split rendering client 1402.
In operation 8, the SWAP server 1406 acknowledges, to the split rendering server 1404, the successful forwarding of the application-specific SWAP message to the split rendering client 1402.
In operation 9, the split rendering client 1402 processes the rendering description and identifies the required data channel and media sessions.
In operation 10, the split rendering client 1402 sends a connect message with the SDP offer to the split rendering server 1404. The SDP offer may reflect the negotiated media and data channel streams.
In operation 11, the SWAP server 1406 acknowledges the forwarding of the connect message with the SDP offer to the split rendering server 1404.
In operation 12, the split rendering server 1404 replies to the SWAP server 1406 with an accept message that includes the SDP answer. The SDP answer reflects the information that was provided in the split rendering description.
In operation 13, the SWAP server 1406 acknowledges, to the split rendering server 1404, the forwarding of the accept message to the split rendering client 1402.
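By way of a purely illustrative, non-limiting example, operations 10-13 above could be condensed into the following Python sketch of the connect/accept exchange relayed by the SWAP server; the message and field names are assumptions introduced for illustration, and the SDP bodies are elided.

    def connect_message(sdp_offer):
        """Client-to-server connect message carrying the SDP offer."""
        return {"type": "connect", "sdp": sdp_offer}

    def accept_message(sdp_answer):
        """Server-to-client accept message carrying the SDP answer."""
        return {"type": "accept", "sdp": sdp_answer}

    def swap_forward(queues, message, destination):
        """Hypothetical SWAP-server relay: enqueue the message for the
        destination and return an acknowledgement for the sender."""
        queues.setdefault(destination, []).append(message)
        return {"type": "ack", "forwardedTo": destination}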
In some embodiments, a variety of such application-specific messages can be supported, such as messages that facilitate relocation negotiation between two split rendering servers. For example, during a split rendering session transfer and/or relocation, such application-specific messages can include a rendering transfer description message that carries a description of the session transfer to be done from the current split rendering server 1404 to a new split rendering server (not shown). The rendering transfer description message can be identified by the type “urn:3gpp:sr-msc:sr-transfer”. The rendering transfer description message can provide the semantics of the media that is delivered over WebRTC from the split rendering server 1404 to the split rendering client 1402.
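By way of a purely illustrative, non-limiting example, such a rendering transfer description message could be constructed as follows; only the URN type string comes from the text, while the payload fields are assumptions introduced for illustration.

    def sr_transfer_message(current_srs_id, new_srs_id, trigger):
        """Describes the session transfer from the current SRS to a new SRS."""
        return {
            "type": "urn:3gpp:sr-msc:sr-transfer",
            "payload": {
                "sourceServer": current_srs_id,
                "targetServer": new_srs_id,
                "transitionTrigger": trigger,  # e.g., a frame number or timestamp
            },
        }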
Referring now to
As illustrated in
In operation 2 of the approach 1500, the SWAP server 1506 uses the provided matching criteria to locate the current split rendering server 1504a and the new split rendering server 1504b.
In operation 3 of the approach 1500, the SWAP server 1506 forwards the configuration transfer message to the current split rendering server 1504a.
In operation 4 of the approach 1500, the SWAP server 1506 confirms the successful forwarding of the message to the split rendering client 1502.
In operation 5 of the approach 1500, the current split rendering server 1504a processes the split rendering session transfer configuration message. It may prepare the configuration message and runtime information to be sent to the new split rendering server 1504b.
In operation 6 of the approach 1500, the current split rendering server 1504a sends the configuration message to the SWAP server 1506. The current split rendering server 1504a provides the identifier of the new split rendering server 1504b targeted for relocation of the split rendering session.
In operation 7 of the approach 1500, the SWAP server 1506 forwards the configuration message to the new split rendering server 1504b.
In operation 8 of the approach 1500, the SWAP server 1506 confirms the successful forwarding of the message to the new split rendering server 1504b.
In operation 9 of the approach 1500, the new split rendering server 1504b processes the split rendering session transfer configuration message. In some embodiments, the new split rendering server 1504b may verify application and resource availability, launch the application, configure its rendering, and/or create a rendering description.
In operation 10 of the approach 1500, the new split rendering server 1504b sends the rendering description message as an application-specific SWAP message to the SWAP server 1506.
In operation 11 of the approach 1500, the SWAP server 1506 forwards the message to the split rendering client 1502.
In operation 12 of the approach 1500, the SWAP server 1506 acknowledges the successful forwarding of the message to the new split rendering server 1504b.
In operation 13 of the approach 1500, the split rendering client 1502 processes the rendering description and identifies the required data channel and media sessions.
In operation 14 of the approach 1500, the split rendering client 1502 sends a connect message with the SDP offer to the new split rendering server 1504b. The offer reflects the negotiated media and data channel streams.
In operation 15 of the approach 1500, the SWAP server 1506 acknowledges the forwarding of the message to the new split rendering server 1504b.
In operation 16 of the approach 1500, the new split rendering server 1504b replies with an accept message that includes the SDP answer. The SDP answer reflects the information that was provided in the split rendering description.
In operation 17 of the approach 1500, the SWAP server 1506 acknowledges the forwarding of the message to the split rendering client 1502.
In some embodiments, a particular format, such as a message format or message contents, can be used to indicate the triggering of a split rendering session relocation. In that regard, session transfer configuration information such as that shown below in Table 3 in, e.g., JSON format, can be used to identify and signal relevant split rendering session transfer and configuration information.
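Since Table 3 itself is not reproduced in this section, the following is a purely hypothetical illustration, and not a reproduction of Table 3, of the general shape such session transfer configuration information could take in JSON, expressed here as a Python dictionary; every field name and value is an assumption introduced for illustration.

    session_transfer_configuration = {
        "sessionId": "sr-session-0001",             # hypothetical identifiers
        "currentServer": "srs-current.example.net",
        "newServer": "srs-new.example.net",
        "transferTrigger": {"type": "frame", "value": 14445},
        "renderingParameters": {"resolution": "2064x2208", "frameRate": 90},
    }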
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
In some embodiments, the apparatus may be a data server in a data network (DN). In some embodiments, the apparatus may comprise CNA 200 and the computer program code may include computer program code for the RTC-AF and computer program code for one or more of a policy control function (PCF), an access and mobility management function (AMF), NSSF, AUSF, UDM, NRF, and/or UPF of a core network of the communication network. In some embodiments, the RTC-AF can be hosted in a session management function (SMF), a data server in a data network (DN), a policy control function (PCF), an access and mobility management function (AMF), NSSF, AUSF, UDM, NRF, UPF, and/or the like.
Referring now to
As described above,
A computer program product is therefore defined in those instances in which the computer program instructions, such as computer-readable program code portions, are stored by at least one non-transitory computer-readable storage medium with the computer program instructions, such as the computer-readable program code portions, being configured, upon execution, to perform the functions described above, such as in conjunction with the flowcharts of at least
Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.
In some embodiments, certain ones of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, additions, or amplifications to the operations above may be performed in any order and in any combination.
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. Any application, publication, technical document, or the like that is cited in this disclosure is hereby incorporated herein by reference in its entirety for all purposes.
In general, the routines executed to implement the embodiments, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, may be referred to herein as “computer program code” or simply “program code”. Program code typically comprises computer-readable instructions that are resident at various times in various memory devices (e.g., 204, 306, 308, 404) and storage devices in a computer and that, when read and executed by one or more processors (e.g., 202, 302, 402) in the computer, cause the computer to perform the operations necessary to execute operations and/or elements embodying the various aspects of the embodiments of the present disclosure. Computer readable program instructions for carrying out operations of the embodiments of the invention may be, for example, assembly language or either source code or object code written in any combination of one or more programming languages.
In certain alternative embodiments, the functions and/or acts specified in the flowcharts, sequence diagrams, and/or block diagrams may be re-ordered, processed serially, and/or processed concurrently without departing from the scope of the invention. Moreover, any of the flowcharts, sequence diagrams, and/or block diagrams may include more or fewer blocks than those illustrated consistent with embodiments of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the disclosure. It will be further understood that the terms “comprise” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, to the extent that the terms “includes”, “having”, “has”, “with”, “comprised of”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”.
While a description of various embodiments has illustrated all of the inventions and while these embodiments have been described in considerable detail, it is not the intention of the applicants to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departing from the spirit or scope of the applicant's general inventive concept.
This patent application claims the benefit of priority of U.S. Provisional Patent Application No. 63/519,807 filed Aug. 15, 2023, which is hereby incorporated by reference.