Not applicable.
Not applicable.
Virtual conferencing may refer to a service that allows conference events and/or data to be shared and/or exchanged simultaneously with multiple participants located in geographically distributed networking sites. The service may allow participants to interact in real-time and may support point-to-point (P2P) communication (e.g. one sender to one receiver), point-to-multipoint (P2MP) communication (e.g. one sender to multiple receivers), and/or multipoint-to-multipoint (MP2MP) communication (e.g. multiple senders to multiple receivers). Some examples of virtual conferencing may include chat rooms, E-conferences, and virtual white board (VWB) services, where participants may exchange audio, video, and/or data over the Internet. Some technical challenges in virtual conferencing may include real-time performance (e.g. real-time data exchanges among multiple parties), scalability (e.g. with one thousand to ten thousand (10K) participants), and interactive communication (e.g. MP2MP among participants, participants with a simultaneous dual role as subscriber and publisher) support.
In one embodiment, the disclosure includes a network element (NE) comprising a memory configured to store a digest log for a conference, a receiver configured to receive a first message from a first of a plurality of participants associated with the NE, wherein the first message comprises a signature profile of the first participant, a processor coupled to the receiver and the memory and configured to track a state of the conference by performing a first update of the digest log according to the first message, and a transmitter coupled to the processor and configured to send a second message to a first of a plurality of service proxies that serve the conference, wherein the second message indicates the updated digest log.
In one embodiment, the disclosure includes a method for synchronizing service controls for a conference at a local service proxy in an Information Centric Networking (ICN) network, the method comprising receiving a first message from a first of a plurality of participants associated with the service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received by employing an ICN content name based routing scheme, tracking a state of the conference by performing a first update for a digest log according to the first message, and sending a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
In yet another embodiment, the disclosure includes a computer program product for use by a local service proxy serving a conference in an ICN network, wherein the computer program product comprises computer executable instructions stored on a non-transitory computer readable medium such that when executed by a processor cause the local service proxy to receive a first message from a first of a plurality of participants associated with the local service proxy, wherein the first message comprises a signature profile of the first participant, and wherein the first message is received via an ICN content name based routing scheme, track a state of the conference by performing a first update for a digest log according to the first message, wherein performing the first update comprises recording the first participant's signature profile in the digest log, and send a second message to indicate the first update to a first of a plurality of remote service proxies serving the conference.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that, although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
Conference applications and/or systems may support real-time information and/or media data exchange among multiple parties located in a distributed networking environment. Some conference applications and/or systems may be implemented over a host-to-host Internet Protocol (IP) communication model and may duplicate meeting traffic over Wide Area Network (WAN) links. For example, in a server-centric model, a central server may control and manage a conference, process data from participants (e.g. meeting subscribers), and then redistribute the data back to the participants. The server-centric model may be simple for control management, but the duplication and/or redistribution of conference data may lead to high data traffic concentration at the server. For example, the data traffic in a server-centric model may be on the order of N² (O(N²)), where N may represent the number of participants. Thus, the server-centric model may not be suitable for large scale conferences (e.g. with 1000 to 10K participants).
Some conference service technologies may employ a Content Delivery Networking (CDN) model and/or a P2P content distribution networking model (e.g. multi-server architecture) to reduce the high data traffic in a server-centric model. However, CDN models and/or P2P content distribution networking models may be over-the-top (OTT) solutions, where content (e.g. audio, video, and/or data) may be delivered over the Internet from a source to end users without network operators being involved in the control and/or distribution of the content (e.g. content providers operate independently from network operators). As such, networking optimizations and/or cross-layer optimizations may not be easily performed in CDN models and/or P2P content distribution networking models, and thus may lead to inefficient bandwidth utilization. In addition, CDN models and/or P2P content distribution networking models may not support MP2MP communication and may not be suitable for real-time interactive communication.
Some other conference service technologies, such as Chronos, may employ a Named Data Networking (NDN) model to improve bandwidth utilization and reduce traffic load by leveraging NDN features, such as sharable and distributed in-net storage and name-based routing mechanisms. NDN may be a receiver driven, data centric communication protocol, in which data flows through a network when requested by a consumer. The data access model in NDN may be referred to as a pull-based model. However, conference events and/or updates may be nondeterministic as conference participants may publish meeting updates at any time during a conference. In order to access nondeterministic conference events in a pull-based model, participants may actively query conference events. For example, every participant in Chronos may periodically broadcast a query to request meeting updates from other participants. Thus, control signaling overheads in Chronos may be significant. In addition, the coupling of the data plane and the control plane in Chronos may lead to complex support for simultaneous data updates and/or recovery. Thus, Chronos may not be suitable for supporting real-time interactive large scale conferences.
ICN architecture is a type of network architecture that focuses on information delivery. ICN architecture may also be known as content-aware, content-centric, or data oriented networking architecture. ICN models may shift the IP communication model from a host-to-host model to an information-object-to-object model. The IP host-to-host model may address and identify data by storage location (e.g. host IP address), whereas the information-object-to-object model may employ a non-location based addressing scheme that is content-based. Information objects may be the first class abstraction for entities in an ICN communication model. Some examples of information objects may include content, data streams, services, user entities, and/or devices. In an ICN architecture, information objects may be assigned with non-location based names, which may be used to address the information objects, decoupling the information objects from locations. Routing to and from the information objects may be based on the assigned names. ICN architecture may provision for in-network caching, where any network device or element may serve as a temporary content server, which may improve performance of content transfer. The decoupling of information objects from location and the name-based routing in ICN may allow mobility to be handled efficiently. ICN architecture may also provision for security by appending security credentials to data content instead of securing the communication channel that transports the data content. As such, conference applications may leverage ICN features, such as name-based routing, security, multicasting, and/or multi-path routing, to support real-time interactive large scale conferences.
Disclosed herein is a hybrid conference service control architecture which may employ a combination of push and pull mechanisms for synchronizing conference updates. The hybrid conference service control architecture may comprise a plurality of distributed service proxies serving a plurality of conference participants. Each service proxy may serve a group of participants that is associated with the service proxy. The service proxy may synchronize and consolidate conference updates with remote service proxies serving the same conference and distribute the consolidated conference updates to the associated participants. Conference updates may include participants' fingerprints (FPs), which may include signatures and/or credentials of the participants, and/or update sequence numbers associated with the FPs. The synchronization of control flows between a participant and a service proxy may employ a push mechanism. However, the synchronization of control flows among the service proxies may employ a push and/or a pull mechanism. In an embodiment, each service proxy and each participant may maintain a digest log to track conference updates. Each digest log may comprise a snapshot of a current localized view (e.g. in the form of a digest tree) of the conference and a history of participants' FPs. For example, each service proxy may comprise a proxy digest tree with a snapshot view of the associated participants (e.g. local digest tree) and other remote service proxies serving the conference and a history of FP updates corresponding to the proxy digest tree. Each participant may comprise a digest tree with the root of the proxy digest tree (e.g. global root digest) and a history of FP updates corresponding to the global root digest. Thus, synchronization may operate according to a child-parent relationship between a service proxy and an associated participant. The disclosed hybrid conference service control architecture may leverage native ICN in-network storage and name-based routing. The disclosed hybrid conference service control architecture may provide control plane and data plane separation. The disclosed hybrid conference service control architecture may provide efficient synchronization of conference updates and fast recovery from network interrupts. The disclosed hybrid conference service control architecture may be suitable for supporting real-time interactive large scale conferences.
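For illustration purposes only, the following Python sketch shows one possible in-memory representation of the proxy digest log and the participant digest log described above; the class, field, and property names (e.g. ProxyDigestLog, local_clients) are assumptions introduced for this sketch and are not defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ProxyDigestLog:
    """Snapshot view at a service proxy plus a history of FP updates (illustrative)."""
    proxy_id: str
    local_clients: Dict[str, int] = field(default_factory=dict)   # attached client ID -> FP update count
    remote_proxies: Dict[str, int] = field(default_factory=dict)  # remote proxy ID -> reported local state
    history: List[str] = field(default_factory=list)              # "<global root digest>:<global digest tree>:<local digest tree>"

    @property
    def local_state(self) -> int:
        # local state: total FP updates pushed by the clients attached to this proxy
        return sum(self.local_clients.values())

    @property
    def global_root_digest(self) -> int:
        # global root digest: root of the proxy digest tree, covering local clients and remote proxies
        return self.local_state + sum(self.remote_proxies.values())


@dataclass
class ParticipantDigestLog:
    """A participant keeps only the global root digest and a history of FP updates."""
    global_root_digest: int = 0
    history: List[str] = field(default_factory=list)              # "<global root digest>:<user FP>"
```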
The SRs 120 may be routers, virtual machines (VMs), and/or any network devices that may be configured to synchronize controls and/or signaling for a large scale conference (e.g. chat rooms, E-conference services, Virtual White Board (VWB) services, etc. with about 1000 to about 10K participants) among each other and with a plurality of conference participants, where such participants may participate in the conference via UEs 130. For example, each SR 120 may act as a conference proxy and may be referred to as a service proxy. In an embodiment, an SR 120 may host one or more VMs, where each VM may act as a service proxy for a different conference. It should be noted that each SR 120 may serve a different group of conference participants.
Each UE 130 may be an end user device, such as a mobile device, a desktop, a cellphone, and/or any network device configured to participate in one or more large scale conferences. For example, a conference participant may participate in a conference by executing a conference application on a UE 130. The conference participant may request to participate in a particular conference by providing a FP and a conference name. The conference participant may also subscribe and/or publish data for the conference. In network 100, each UE 130 may be referred to as a service client and may synchronize conference controls and/or signaling with a service proxy.
In an embodiment, network 100 may be an ICN-enabled network, which may employ ICN name-based routing, security, multicasting, and/or multi-path routing to support large scale conference applications. Network 100 may comprise a control plane separate from a data plane. For example, network 100 may employ a two-tier (e.g. proxy-client) architecture for conference controls and signaling in the control plane. The two-tier architecture may comprise a proxy layer and a client layer. The proxy layer may include a plurality of service proxies situated in a plurality of SRs 120 and the client layer may include a plurality of service clients situated in a plurality of UEs 130.
In an embodiment, control paths 191 and 192 may be logical paths specifically for exchanging conference controls and signaling. A service proxy situated in an SR 120 may exchange conference controls and signaling with remote service proxies situated in other SRs 120 serving the conference via control path 191. In addition, each service proxy may exchange conference controls and signaling with associated service clients situated in UEs 130 via control path 192. It should be noted that the service proxies may participate in control plane functions, but may not participate in data plane functions. As such, data communications (e.g. audio, video, rich text exchanges) among the service clients may be independent from the conference controls.
In an embodiment, a push mechanism may be employed for synchronizing controls (e.g. FPs, other signed information, events, etc.) in a conference between a service proxy and a service client, as well as among service proxies. A push mechanism may refer to a sender initiated transmission of information without a request from an interested recipient as opposed to an ICN protocol pull mechanism where an interested recipient may pull information from an information source. For example, a service client may initiate and push a FP update to a service proxy serving the service client. Upon receiving the FP update, the service proxy may consolidate all received FP updates into a first proxy update and may push the first proxy update to other remote service proxies serving the same conference. The service proxy may also receive a second proxy update from one of the other remote service proxies and may push the second proxy update to participants served by the service proxy. The push mechanism may enable real-time or nearly real-time communications, which may be a performance factor for conference services.
In another embodiment, a service client and a service proxy may synchronize conference controls by employing substantially similar push mechanisms as described herein above. However, synchronizations between service proxies may employ a pull mechanism. For example, a first service proxy may send an outstanding synchronizing (sync) interest to a second service proxy, where the sync interest may comprise a most recent global conference view at the first service proxy. When the second proxy receives a FP update from a client served by the second proxy, the second proxy may consolidate the FP update and update a global conference view at the second proxy. The second proxy may detect that the first service proxy's global conference view indicated in the sync interest is out of date, and thus may send a sync response to the first service proxy indicating the latest updated global conference view. It should be noted that after a service client receives updated FPs, the service client may fetch data over native ICN in-network storage with name-based routing. In some embodiments, conference data (e.g. audio, video, and/or data) may be exchanged among the participants by employing the pulling model in the ICN protocol.
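For illustration purposes only, the following Python sketch outlines how a service proxy might consolidate a client's FP update and propagate it either by pushing notifications to remote service proxies or by answering their outstanding sync interests (the pull case); the function and parameter names are assumptions introduced for this sketch.

```python
def on_client_fp_update(local_clients, remote_proxies, pending_sync_interests,
                        client_id, user_fp, push=True):
    """Consolidate one client's FP update and decide what to send out (illustrative)."""
    local_clients[client_id] = local_clients.get(client_id, 0) + 1
    d_p = sum(local_clients.values())                    # local state at this proxy
    d_g = d_p + sum(remote_proxies.values())             # global root digest at this proxy
    if push:
        # push: sender-initiated notification to every remote proxy serving the conference
        outgoing = [("notify", proxy_id, d_p, user_fp) for proxy_id in remote_proxies]
    else:
        # pull: answer only the outstanding sync interests that carry a stale global view
        outgoing = [("sync-response", proxy_id, d_g, user_fp)
                    for proxy_id, stale_d_g in pending_sync_interests.items()
                    if stale_d_g < d_g]
    outgoing.append(("digest-update", client_id, d_g, user_fp))   # proxy-to-client leg is always push
    return d_g, outgoing

# Example: one attached client, one remote proxy that has reported 4 updates so far.
on_client_fp_update({"U1": 0}, {"P2": 4}, {}, "U1", "U1-FP0", push=True)
# -> (5, [("notify", "P2", 1, "U1-FP0"), ("digest-update", "U1", 5, "U1-FP0")])
```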
It is understood that by programming and/or loading executable instructions onto the NE 200, at least one of the processor 230 and/or memory device 232 are changed, transforming the NE 200 in part into a particular machine or apparatus, e.g., a multi-core forwarding architecture, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and numbers of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software. In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
The application layer 310 may comprise an application pool 311, which may comprise a plurality of applications, such as chat, VWB, and/or other applications. The service API to application layer 320 may comprise a set of APIs for interfacing between the application layer 310 and the service layer 330. The APIs may be well-defined function calls and/or primitives comprising input parameters, output parameters, and/or return parameters.
The service layer 330 may comprise a sync proxy 336 and other service modules 335. For example, the sync proxy 336 may serve a conference (e.g. chat, VWB, etc.) with a plurality of other sync proxies and may act as a control proxy for a group of conference participants and/or service clients (e.g. situated in UEs 130). The other service modules 335 may manage and/or control other services. The ICN layer 340 may comprise ICN protocol layer modules, which may include a content store (CS) (e.g. for caching interest and/or data), a forwarding information base (FIB) (e.g. name-based routing look up), and/or a pending interest table (PIT) (e.g. records of forwarded interests). The L2/L3 layer 350 may comprise networking protocol stack modules, which may include data and/or address encoding and/or decoding for network transmissions. The L2 layer and the L3 layer may be referred to as the data link layer and the network layer in the OSI model. The S-UNI layer 361 may interface (e.g. signaling functions between networks and users) with one or more conference participants (e.g. situated in UEs 130) situated in the network. The S-NNI layer 362 may interface (e.g. signaling functions between networks) with one or more SRs (e.g. SRs 120) situated in the network.
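For illustration purposes only, the following Python sketch shows a longest-prefix name lookup of the kind the FIB described above may perform for name-based routing; the prefixes and outgoing face identifiers are placeholders and are not values defined by the disclosure.

```python
def fib_lookup(fib, name):
    """Return the outgoing face registered for the longest matching name prefix."""
    components = name.strip("/").split("/")
    for end in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:end])
        if prefix in fib:
            return fib[prefix]
    return None

fib = {"/ISP1/SR1": "face-2", "/ISP1": "face-1"}        # placeholder prefixes and faces
fib_lookup(fib, "/ISP1/SR1/chat/join")                  # -> "face-2" (longest prefix wins)
```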
The sync proxy 336 may communicate with the other remote service proxies to synchronize FPs of conference participants. The sync proxy 336 may comprise a FP processor 331, a heartbeat signal processor 332, a digest log 333, and an application cache 334. The FP processor 331 may receive FP updates (e.g. FPs of participants) from service clients and/or remote service proxies. The FP processor 331 may also send FP updates to service clients and/or remote service proxies. The FP processor 331 may track and maintain FP updates received from the service clients and/or the remote service proxies as discussed more fully below. The FP updates may be sent and/or received in the form of notification messages and/or sync messages via the ICN layer 340, the L2/L3 layer 350, and/or the S-NNI layer 360. It should be noted that the notification messages and/or sync messages may include ICN interest packets and/or ICN data packets, which may be handled according to ICN protocol (e.g. forwarded according to a FIB and/or cached in a CS).
The heartbeat signal processor 332 may monitor and exchange liveliness (e.g. functional and connectivity statuses) of the remote service proxies and/or the service clients attached to the service proxy 300. For example, the heartbeat signal processor 332 may generate and send heartbeat indication signals (e.g. periodically and/or event-driven) to the remote service proxies and/or the attached service clients. The heartbeat signal processor 332 may also listen to heartbeat indication signals from the remote service proxies and the attached service clients. In some embodiments, the heartbeat signal processor 332 may send a heartbeat response signal to confirm the reception of a heartbeat indication signal. When the heartbeat signal processor 332 detects missing heartbeat indication signals and/or heartbeat response signals from a remote service proxy and/or an attached service client over a duration that exceeds a predetermined timeout interval, the heartbeat signal processor 332 may send a network failure signal to the application layer 310 to notify the application serving the faulty service client and/or the faulty remote service proxy of the network failure. The heartbeat signals may be sent and/or received in the form of heartbeat messages via the ICN layer 340, the L2/L3 layer 350, and the S-NNI layer 360. It should be noted that the heartbeat messages may include ICN interest packets and/or ICN data packets, which may be handled according to ICN protocol (e.g. forwarded according to a FIB and cached in a CS).
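For illustration purposes only, the following Python sketch shows one way a heartbeat signal processor might detect missing heartbeats against a predetermined timeout interval; the timeout value and function names are assumptions introduced for this sketch.

```python
import time

HEARTBEAT_TIMEOUT_S = 30.0    # assumed value; the disclosure only calls for a predetermined interval

def check_liveliness(last_heard, now=None, timeout=HEARTBEAT_TIMEOUT_S):
    """Return the peers (remote proxies or attached clients) whose heartbeats have gone silent."""
    now = time.time() if now is None else now
    return [peer for peer, seen in last_heard.items() if now - seen > timeout]

# A peer silent for longer than the timeout would be reported to the application layer as a failure.
check_liveliness({"P2": time.time() - 60.0, "U1": time.time()})   # -> ["P2"]
```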
The digest log 333 may be a cache or any temporary data storage that records recent FP updates. The digest log 333 may store a snapshot of a local view of the conference service including all the attached participants (e.g. including FPs) and all the remote proxies (e.g. some digest information) at a specified time, where the local view may be represented in the form of a digest tree as discussed more fully below. The digest log 333 may also store a history of FP updates corresponding to the digest tree, where each entry may be in the form of <global root digest>:<global digest tree>:<local digest tree> as discussed more fully herein below. The application cache 334 may be a temporary data storage that stores FPs that are in transmission (e.g. transmission status may not be confirmed). The FP processor 331 may manage the digest log 333 and/or the application cache 334 for storing and tracking FP updates. In an embodiment, the FP processor 331 and the heartbeat signal processor 332 may serve one or more conferences (e.g. a chat and a VWB). In such an embodiment, the FP processor 331 may employ different digest logs 333 and/or different application caches 334 for different conferences.
The sync client 436 may be a service client configured to participate in a conference. The sync client 436 may communicate with a service proxy (e.g. service proxy 300) serving the conference, or more specifically a sync proxy (e.g. sync proxy 336), in the network. The sync client 436 may communicate with the service proxy via the ICN layer 440, the L2/L3 layer 450, and the S-UNI control layer 461. The sync client 436 may comprise a FP processor 431, a heartbeat signal processor 432, a digest log 433, and an application cache 434.
The FP processor 431 may be substantially similar to FP processor 331. However, the FP processor 431 may send FP updates (e.g. join, leave, re-join a conference) to a service proxy and may receive other participant's FP updates from the service proxy.
The heartbeat signal processor 432 may be substantially similar to heartbeat processor 332. However, the heartbeat signal processor 432 may monitor and exchange liveliness (e.g. functional statuses) indications with the service proxy and may employ substantially similar mechanisms for detecting network failure at the service proxy and notifying application layer 410.
The digest log 433 may be substantially similar to digest log 333, but may store a digest tree with a most recent global root digest received from the associated service proxy and a history of FP updates corresponding to the global root digest (e.g. <global root digest>:<user FP>) as discussed more fully below. The application cache 434 may be substantially similar to application cache 334. In an embodiment, the FP processor 431 and the heartbeat signal processor 432 may serve one or more conferences (e.g. a chat and a VWB). In such an embodiment, the FP processor 431 may employ different digest logs 433 and/or different application caches 434 for different conferences.
The service proxy P1 may track and update the states at each node 610, 620, and 630 by tracking updates received from the remote service proxies in the conference and/or the attached service clients. When service proxy P1 receives a FP update (e.g. Um−FP) from an attached service client or a state update from a remote service proxy, service proxy P1 may update the state at the corresponding node and recompute the root digest accordingly.
The global state dg1(t) and the local state dp1(t) at a service proxy P1 at a time instant t may be computed as shown below:

dg1(t) = dp1(t) + Σn≠1 dpn(t)

where dp1(t) may represent the total number of FP updates sent by the attached service clients at time t and Σn≠1 dpn(t) may represent the total number of FP updates sent by the remote service proxies serving the conference.
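For illustration purposes only, the following short Python example applies the computation above with assumed values: dp1(t) sums the FP updates pushed by the attached service clients, and dg1(t) adds the local states reported by the remote service proxies.

```python
# Illustrative values: two attached clients and two remote proxies.
client_updates = {"U1": 2, "U2": 1}        # FP updates pushed by the attached service clients
remote_local_states = {"P2": 4, "P3": 3}   # local states dp2(t), dp3(t) reported by remote proxies

d_p1 = sum(client_updates.values())                    # local state dp1(t) = 3
d_g1 = d_p1 + sum(remote_local_states.values())        # global state dg1(t) = 10
```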
In an embodiment, a service client connecting to service proxy P1 may maintain a digest log (e.g. digest log 433) to track updates received from service proxy P1. For example, the service client may generate an entry in the history of the digest log to record the received FP update (e.g. <G1,dg1>:<Um−FP>).
Each digest tree may represent a snapshot of a localized view of the conference at a specified time. Each digest tree may comprise a different tree structure at a particular time instant.
At step 1030, participant U1 may send (e.g. via a push) a join update message to service proxy P1, where the join update message may comprise the participant U1's signature profile (e.g. U1−FP0). At step 1031, after receiving the join update message, service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at service proxy P1 according to the received join update message. For example, the FP history may comprise the following entries:
Last entry: <G1,0>:<P1,0, P2,0, P3,0>
Current entry: <G1,1>:<P1,1, P2,0, P3,0>:<U1−FP0>
where the global state dg1 and the local state dp1 at service proxy P1 may each be incremented by one.
At step 1040, service proxy P1 may send a first digest update message (e.g. G1,1/U1−FP0) to participant U1. In response to the join update message, service proxy P1 may send (e.g. via a push) a first join update message (e.g. with updated state P1,1/P2/U1−FP0) to service proxy P2 at step 1050 and a second join update message (e.g. with updated state P1,1/P3/U1−FP0) to service proxy P3 at step 1060. At this time, service proxies P1, P2, and P3 may be synchronized with participant U1's joining update.
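For illustration purposes only, the following Python sketch reproduces the digest log bookkeeping at service proxy P1 for the join in steps 1030 through 1060; the helper name record_local_join is an assumption introduced for this sketch.

```python
dg1 = 0
dp = {"P1": 0, "P2": 0, "P3": 0}
history = [f"<G1,{dg1}>:<P1,{dp['P1']}, P2,{dp['P2']}, P3,{dp['P3']}>"]

def record_local_join(user_fp):
    """Apply a join update pushed by an attached participant (e.g. U1-FP0)."""
    global dg1
    dp["P1"] += 1     # local state at P1
    dg1 += 1          # global state at P1
    history.append(f"<G1,{dg1}>:<P1,{dp['P1']}, P2,{dp['P2']}, P3,{dp['P3']}>:<{user_fp}>")
    # P1 would then push the digest update to U1 and the join update to P2 and P3.

record_local_join("U1-FP0")
# history[-1] == "<G1,1>:<P1,1, P2,0, P3,0>:<U1-FP0>"
```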
At step 1110, participant U3 may send (e.g. via a push) a notification message to service proxy P2, where the notification message may comprise participant U3's signature profile (e.g. U3−FPj). At step 1111, in response to the notification message, service proxy P2 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at service proxy P2 according to the notification message. For example, the FP history may comprise the following entries:
Last entry: <G2,n>:<P1,m, P2,k>
Current entry: <G2,n+1>:<P1,m, P2,k+1>:<U3−FPj>
where the global state dg2 and the local state dp2 at service proxy P2 may each be incremented by one.
At step 1120, after updating the proxy digest log at service proxy P2, service proxy P2 may send (e.g. via a push) a first digest update message (e.g. G2,n+1/U3−FPj) to participant U3. At step 1130, in response to the received notification message, service proxy P2 may send (e.g. via a push) the notification message (e.g. with the updated state P2,k+1/U3−FPj) to service proxy P1.
At step 1131, in response to the notification message, service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at the service proxy P1 according to the received notification message. For example, the FP history may comprise the following entries as shown below:
Last entry: <G1,n>:<P1,m, P2,k>
Current entry: <G1,n+1>:<P1,m, P2,k+1>:<U3−FPj>
where the global state dg1 at service proxy P1 may be incremented by one and the service proxy P2's local state dp2 may be updated according to the received FP update. At step 1140, after updating the proxy digest log at service proxy P1, service proxy P1 may send (e.g. via a push) a second digest update message (e.g. G1,n+1/U3−FPj) to participant U1. It should be noted that the global state at service proxies P1 and P2 may be synchronized at this time.
At step 1210, service proxy P1 may send a sync interest message (e.g. an interest packet to initiate a pull process) to service proxy P2. The sync interest message may indicate the current global state (e.g. G1,n) at service proxy P1. The sync interest message may serve as an outstanding interest for a next conference update.
At step 1220, participant U3 may send (e.g. via a push) a notification message to service proxy P2, where the notification message may comprise participant U3's signature profile (e.g. U3−FPj). At step 1221, in response to the notification message, service proxy P2 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at service proxy P2 according to the notification message. For example, the FP history may comprise the following entries:
Last entry: <G2,n>:<P1,m, P2,k>
Current entry: <G2,n+1>:<P1,m, P2,k+1>:<U3−FPj>
where the global state dg2 and the local state dp2 at service proxy P2 may each be incremented by one.
At step 1230, after updating the proxy digest log at service proxy P2, service proxy P2 may send (e.g. via a push) a first digest update message (e.g. G2,n+1/U3−FPj) to participant U3. At step 1240, service proxy P2 may detect that the outstanding sync interest message from service proxy P1 comprises an outdated global state of n, and thus may respond to the sync interest message by sending a sync response message (e.g. G2,n+1/U3−FPj) to service proxy P1.
At step 1241, in response to the notification message, service proxy P1 may update a digest tree and a FP history in a proxy digest log (e.g. digest log 333) at the service proxy P1 according to the received notification message. For example, the FP history may comprise the following entries as shown below:
Last entry: <G1,n>:<P1,m, P2,k>
Current entry: <G1,n+1>:<P1,m, P2,k+1>:<U3−FPj>
where the global state dg1 at service proxy P1 may be incremented by one. At step 1250, after updating the proxy digest log at service proxy P1, service proxy P1 may send (e.g. via a push) a second digest update message (e.g. G1,n+1/U3−FPj) to participant U1. It should be noted that the global state at service proxies P1 and P2 may be synchronized at this time. In addition, each service proxy P1 and/or P2 may send another pending sync interest message to pull a next conference update after receiving a sync response message (e.g. FP updates from remote service proxies). In some embodiments, sync interest messages may be aggregated into a single message per access link (e.g. links 141).
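For illustration purposes only, the following Python sketch outlines the pull exchange of steps 1210 through 1250, in which an outstanding sync interest is answered once the responding proxy's global state advances past the state named in the interest; the function and variable names are assumptions introduced for this sketch.

```python
pending_sync_interests = {}    # requesting proxy ID -> global state named in its sync interest

def on_sync_interest(requester, dg_curr):
    """Hold the sync interest as an outstanding request for the next conference update."""
    pending_sync_interests[requester] = dg_curr

def on_local_update(dg_new, user_fp):
    """After consolidating a client FP update, answer any sync interests that became stale."""
    responses = []
    for requester, stale_dg in list(pending_sync_interests.items()):
        if stale_dg < dg_new:                            # the requester's global view is out of date
            responses.append((requester, dg_new, user_fp))
            del pending_sync_interests[requester]        # the interest is consumed by the response
    return responses

on_sync_interest("P1", dg_curr=5)
on_local_update(dg_new=6, user_fp="U3-FPj")              # -> [("P1", 6, "U3-FPj")]
```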
Upon receiving the notification message, method 1300 may proceed to step 1320. At step 1320, method 1300 may determine whether there are missing updates (e.g. occurred during the temporary interruption) from the connected component. For example, method 1300 may compare a last state of the connected component indicated in the notification message to a most recent recorded state in a digest log (e.g. digest log 333 and/or 433). If there is no missing update (e.g. the received last state and the most recent recorded state are identical), then method 1300 may proceed to step 1330. At step 1330, method 1300 may update the digest log and return to step 1310.
If there are one or more missing updates (e.g. the received last state and the most recent recorded state are different), method 1300 may proceed to step 1340. At step 1340, method 1300 may send a recovery request message to the connected component, for example, indicating the most recent recorded state and the received current state (e.g. gap of missing updates). At step 1350, method 1300 may wait for a recovery data message. Upon receiving the recovery data message, method 1300 may continue to step 1330 to update the digest log. Method 1300 may be repeated for the duration of a conference.
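For illustration purposes only, the following Python sketch shows the gap check of steps 1320 through 1340, comparing the last state carried in a notification against the most recent state recorded locally; the parameter names are assumptions introduced for this sketch.

```python
def handle_notification(recorded_state, last_state, current_state):
    """Return the next action: apply the update, or request the missing digest log history."""
    if last_state == recorded_state:
        return ("update", current_state)       # no gap: update the digest log and keep listening
    # gap detected: ask the connected component for the updates between the two states
    return ("recover", {"digest_last": recorded_state, "digest_new": current_state})

handle_notification(recorded_state=7, last_state=9, current_state=10)
# -> ("recover", {"digest_last": 7, "digest_new": 10})
```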
If the timer has not expired, method 1400 may check if a digest update message is received at step 1440. If method 1400 does not receive a digest update message, method 1400 may return to step 1420 and continue to wait for the expiration of the timer. If method 1400 receives a digest update message, method 1400 may proceed to step 1450. At step 1450, method 1400 may update a digest log (e.g. digest log 333) according to the received digest update message. For example, the digest update message may comprise the conference updates that occurred during the temporary network interruption.
If the timer expires at step 1420, method 1400 may proceed to step 1430. At step 1430, method 1400 may send a recovery sync message to request conference update recovery. At step 1431, method 1400 may wait for a recovery update message. If method 1400 receives a recovery update message, method 1400 may proceed to step 1450 to update the digest log. In an embodiment, the recovery update message may comprise some or all of the conference updates that occurred during the temporary network interruption. It should be noted that the conference may return to a steady state (e.g. all service proxies comprise the same global state) at the end of the method 1400. In addition, method 1400 may be repeated when another network interruption occurs during the recovery process and may employ a different wait period (e.g. with exponential back-off) for the timer.
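For illustration purposes only, the following Python sketch outlines the wait-then-recover behavior of method 1400, including a backed-off wait period; the callback and timeout names are assumptions introduced for this sketch.

```python
def recover_after_interruption(receive, send_recovery_sync, wait_s=2.0):
    """Wait for a pushed digest update; on timeout, fall back to an explicit recovery sync."""
    update = receive(timeout=wait_s)            # steps 1420/1440: wait for a digest update message
    if update is None:                          # timer expired (step 1430)
        send_recovery_sync()                    # request conference update recovery
        update = receive(timeout=wait_s * 2)    # step 1431, with a longer (backed-off) wait period
    return update                               # step 1450: apply the received updates to the digest log
```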
It should be noted that in some embodiments a local service proxy may detect missing heartbeat messages from a first remote service proxy (e.g. due to long-term network connectivity failure between the local service proxy and the first remote service proxy) while maintaining network connection with a second remote service proxy. In such embodiments, a network partition may occur. However, the local service proxy may employ method 1300 and/or 1400 to request to recover conference updates at the first remote service proxy from the second remote service proxy.
In an embodiment, service clients (e.g. service client 400 and/or UE 130) and/or service proxies (e.g. service proxy 300 and/or SR 120) may synchronize conference updates (e.g. FP updates and/or states) after receiving a notification message from a corresponding component, for example, via method 1100 and/or 1200. Some examples of notifications may include join notifications, log off notifications, re-join notifications, and/or any other types of notifications. A join notification process may be initiated by a service client joining a conference for the first time and may include steps, such as login authorization at a login server, publishing of a login message to the network, and/or sending of a login notification. When the service client sends a join message (e.g. including the service client's FP) to a service proxy, the service proxy may cache the service client's FP, update the service proxy's local digest tree (e.g. digest tree 600), recompute the service proxy's root digest, and push the join notification to other remote service proxies.
A log off notification process may be initiated by a service client intentionally leaving (e.g. sending a log off message) a conference and may include steps, such as log off authorization at a login server, publishing of a log off message to the network, and/or sending of a log off notification. When the service client sends a log off message to a service proxy, the service proxy may delete a corresponding leaf node in the service proxy's digest tree, recompute the service proxy's root digest, and push the log off notification to other remote service proxies. The conference session may be closed after the log off process, for example, the service client may send a close request message to the service proxy and the service proxy may respond with a close reply message.
A re-join notification process may be initiated by a service client intentionally leaving a conference (e.g. a log off notification) and then subsequently re-joining the conference. After the service proxy receives a log off notification from the service client, the service proxy may preserve some information (e.g. FP updates and/or states) of the leaving service client for a predetermined period of time (e.g. a re-join timeout interval). When the service client re-joins the conference within the re-join timeout interval, the service proxy may resume the last state of the service client just prior to the log off process. However, when the service client joins the conference after the re-join timeout interval, the joining process may be substantially similar to a first-time join process.
A recovery process may occur after a network (e.g. network 100) experiences a temporary interruption and/or disconnection. For example, a conference component (e.g. a sync proxy 336 and/or a sync client 436) may continue to send heartbeat signals during the interruption, but may not receive heartbeat signals from a connected component. When the duration of the interruption is within a predetermined timeout interval (e.g. disconnect timeout interval), each conference component may maintain digest log states (e.g. at digest log 333 and/or 433) and may continue with the last state (e.g. prior to the interruption) after recovering from the interruption. After recovery, each component may detect missing updates (e.g. occurred during the interruption) from a connected component and may request digest log history from the connected component (e.g. via method 1300 and/or 1400).
However, when the network experiences a long-term network failure (e.g. longer than the disconnect timeout interval), a disconnection process may be performed at the service proxy and/or the service client. For example, when the network disruption occurs between a service proxy and a service client, the service proxy may detect the failure and may disconnect the service client by employing substantially similar mechanisms as in a log off notification process. When the network disruption occurs at a service proxy, other remote service proxies may detect the network failure and each service proxy may update the service proxy's digest log, for example, by deleting the node that corresponds to the faulty service proxy and re-computing the global state.
In an embodiment of a multi-tier hybrid conference service network (e.g. network 100), conference controls and/or signaling exchanged between service proxies and/or service clients in a control plane (e.g. control paths 191 and 192) may include session setup and/or close messages, service-related synchronization messages, heartbeat messages, and/or recovery messages. For example, the session setup and/or close messages, service-related messages, and/or recovery messages may be initiated and/or generated by a FP processor (e.g. FP processor 331 and/or 431) at a sync proxy (e.g. sync proxy 336) and/or a sync client (e.g. sync client 436). The messages may be in the form of an interest packet and/or a data packet structured according to the ICN protocol. For example, an interest packet may be employed for sending a notification message and may employ a push mechanism. Some interest packets may be followed by data packets (e.g. response messages).
In an embodiment, conference service control messages, such as session setup and/or close messages, service-related notification messages, heartbeat messages, and/or recovery messages, may be sent as ICN protocol interest packets 1500. The name field 1510 in each packet 1500 may begin with a routing prefix (e.g. <Routing-Prefix>), which may be name-based and may identify a recipient of the interest packet 1500. The following lists examples of routing prefixes:
The ProxyIDR may be a remote sync proxy ID that identifies the sync proxy (e.g. sync proxy 336) situated in the SR (e.g. SR 120 may host one or more sync proxies). The ISP may be the name of an ISP that provides Internet service to a conference participant (e.g. UE 130). The DeviceID may be a UE ID or a sync client ID that identifies the conference participant (e.g. sync client 436 situated in the UE 130).
In a session setup and/or close process, a sync client may send session setup and/or close messages to a sync proxy requesting to connect to and/or disconnect from a conference, respectively. For example, an interest packet for a session setup and/or close message may comprise a name field 1510 as shown below:
<Routing-Prefix>:<ServiceID>:<ClientID>:<Msg-Type>
where Routing-Prefix may be the routing prefix for a sync proxy as shown in Table 1 herein above. The ServiceID, ClientID, and Msg-Type may indicate information as shown below:
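For illustration purposes only, the following Python sketch assembles a session setup name field from the components listed above; the literal prefix and identifier values are placeholders and are not values defined by the disclosure.

```python
def build_session_name(routing_prefix, service_id, client_id, msg_type):
    """Assemble the name field of a session setup/close interest from its components."""
    return ":".join([routing_prefix, service_id, client_id, msg_type])

# Placeholder values; actual routing prefixes and IDs would follow Table 1 and the ISP's naming.
build_session_name("/ISP1/SR1", "chat-room-42", "client-7", "session-setup")
# -> "/ISP1/SR1:chat-room-42:client-7:session-setup"
```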
In a notification process, a sync client may send notification messages to a sync proxy requesting to join, leave, and/or re-join a conference, and/or providing other notification information. For example, an interest packet for a notification message from a sync client to a sync proxy may comprise a name field 1510 as shown below:
<Routing-Prefix>:<ServiceID>:<Msg-Type>:<dg>:<User-FP>
where Routing-Prefix may be the routing prefix for a sync proxy as shown in Table 1 herein above. The ServiceID, Msg-Type, dg, and User-FP may indicate information as shown below:
It should be noted that User-FP may be generated at a UE (e.g. UE 130) by an application (e.g. a chat application) situated in an application pool (e.g. application pool 411) at a service client (e.g. service client 400). The following shows an example of a User-FP:
<ISP>:<SR-ID>:<ServiceID>:<Service-AccountID>:<msg-Seq>
where ISP, SR-ID, and ServiceID may be as described herein above, Service-AccountID may correspond to the UE account ID in the ISP network, and msg-Seq may include the participant's signature information, credential information, security parameters, and/or an associated update sequence number. For example, the update sequence number may be employed for identifying the User-FP content.
In response to notifications received from a sync client, a local sync proxy may send notification messages to a remote sync proxy to update the remote sync proxy of the joining, leaving, and/or re-joining of sync clients, and/or other sync clients' published information. For example, an interest packet for a notification message from a local sync proxy to a remote sync proxy may comprise a name field 1510 as shown below:
<Routing-Prefix>:<ServiceID>:<Msg-Type>:<ProxyID>:<dp_pre>:<dp_curr>:<User-FP>
where Routing-Prefix may be the routing prefix for a remote sync proxy as shown in Table 1 herein above. The ServiceID, Msg-Type, ProxyID, dp_pre, dp_curr, and User-FP may indicate information as shown below:
In response to notifications received from a remote sync proxy, a local sync proxy may send notification messages to a sync client (e.g. unicast) to update the sync client of the joining, leaving, and/or re-joining of sync clients attached to the remote sync proxy. For example, an interest packet for a notification message from a local sync proxy to an attached sync client may comprise a name field 1510 as shown below:
<Routing-Prefix>:<ServiceID>:<Flag>:<dg_pre>:<dg_curr>:<User-FP>
where Routing-Prefix may be the routing prefix for a sync client as shown in Table 1 herein above. The ServiceID, Flag, dg_pre, dg_curr, and User-FP may indicate information as shown below:
In a sync process (e.g. a pull mechanism), a local service proxy may send a sync interest packet as an outstanding request such that the local service proxy may receive a next conference update from a remote sync proxy. For example, an interest packet for a sync interest message may comprise a name field 1510 as shown below:
<ServiceID>:<dg_curr>
where ServiceID and dg_curr may indicate information as shown below:
Heartbeat messages may be sent by a sync proxy and/or a sync client to indicate liveliness (e.g. functional indicator and/or connectivity). For example, a local sync proxy may send heartbeat messages to a remote sync proxy as well as connected sync clients and a sync client may send heartbeat messages to a connected sync proxy. For example, an interest packet for a heartbeat message may comprise a name field 1510 as shown below:
<Routing-Prefix>:<OriginatorID>:<Flag>:<Sequence_no>
where Routing-Prefix may vary depending on the intended recipient as shown in Table 1 herein above. The OriginatorID, Flag, and Sequence_no may indicate information as shown below:
It should be noted that heartbeat messages may be sent periodically and/or driven by predetermined events. In some embodiments, a recipient of a heartbeat message may send a confirmation message (e.g. as a data packet as discussed more fully below).
A recovery process may refer to a network recovery subsequent to a temporary network interruption at a sync client and/or a sync proxy. During the interruption, the sync client and/or the sync proxy may miss updates (e.g. notification messages) from a corresponding connected component. After recovery, the sync client and/or the sync proxy may detect missed updates in notification messages received from the corresponding connected component. For example, when a local service proxy employs a push mechanism for conference update synchronization with a remote service proxy, the local service proxy may detect missed notifications from the remote sync proxy by determining whether the dp_pre in a notification message received from the remote sync proxy is identical (e.g. no gap) to the local state logged in a last entry associated with the remote service proxy. A sync client may detect missed notifications from a sync proxy by determining whether the dg_pre in a notification message received from the sync proxy is identical (e.g. no gap) to the last global state logged at the sync client. For example, an interest packet for a recovery message may comprise a name field 1510 as shown below:
<Routing-Prefix>:<ServiceID>:<Msg-Type>:<digest_last>:<digest_new>
where Routing-Prefix may vary depending on the intended recipient as shown in Table 1 herein above. The ServiceID, Msg-Type, digest_last, and digest_new may indicate information as shown below:
It should be noted that the digest_last and digest_new may vary depending on the sender and/or the recipient. For example, when a sync client requests digest log history from a sync proxy, the digest_last and digest_new may refer to the global state dg. When a sync proxy requests digest log history from a remote sync proxy, the digest_last and digest_new may refer to the remote sync proxy's local state dpn. The digest_last and digest_new may indicate the missing digest log history.
In an embodiment, conference service control messages, such as session setup and/or close response messages and/or recovery data messages, may be sent as ICN protocol data packets 1600. In a session setup and/or close process, a sync proxy may respond to a sync client by sending a session setup response and/or a session close response. For example, a data packet for a session setup and/or close response message may comprise a name field 1610 substantially similar to the name field in a session setup and/or close interest described herein above. The data field 1620 in a data packet for the session setup and/or close response may include a global state (e.g. dg) at the sync proxy and/or an acknowledgement to the requested session setup and/or close.
In a recovery process, a sync proxy may respond to a connected component's recovery request message by sending a digest log history. The depth (e.g. number of log entries) of the digest log may be determined by the digest_last and digest_new indicated in a recovery interest packet as described herein above and/or the depth in the cache as maintained by a responding component. For example, a data packet for a recovery response message may comprise a name field 1610 substantially similar to the name field in a recovery interest described herein above. The data field 1620 in a data packet for the recovery response may include history of FPs that is updated between the digest_last and digest_new indicated in the name field 1610.
In a sync process, a sync proxy may respond to an outstanding sync interest message from a remote sync proxy by sending a sync data response message (e.g. global state dg_curr in the sync interest message is out of date). The sync proxy may send a sync data response message comprising the updated global state (e.g. dg_new) and a FP corresponding to the global state transition from dg_curr to dg_new.
At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g. from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, Rl, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R=Rl+k*(Ru−Rl), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. Unless otherwise stated, the term “about” means ±10% of the subsequent number. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application. The disclosure of all patents, patent applications, and publications cited in the disclosure is hereby incorporated by reference, to the extent that they provide exemplary, procedural, or other details supplementary to the disclosure.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
The present application claims priority to U.S. Provisional Patent Application 61/824,656, filed May 17, 2013 by Guoqiang Wang, et al., and entitled “Multitier “Push” Service Control for Virtual Whiteboard Conference Over Large Scale ICN Architecture”, and U.S. Provisional Patent Application 61/984,505, filed Apr. 25, 2014 by Asit Chakraborti, et al., and entitled “Multitier “Push” Service Control for Virtual Whiteboard Conference Over Large Scale ICN Architecture”, both of which are incorporated herein by reference as if reproduced in their entirety.
Number | Date | Country
---|---|---
61/824,656 | May 17, 2013 | US
61/984,505 | Apr. 25, 2014 | US