System and method for virtual parallel resource management

Information

  • Patent Grant
  • Patent Number
    9,825,876
  • Date Filed
    Friday, May 27, 2016
  • Date Issued
    Tuesday, November 21, 2017
Abstract
An improved system and method are disclosed for providing virtual parallel access to a shared resource. In one example, the method includes receiving a request from a device to take control of the shared resource. After determining that another device is currently in control of the shared resource, a timer is started. Control of the shared resource will automatically pass from the device currently in control to the requesting device when the timer expires. Input received from the device currently in control is executed. Input received from the device that has requested control is buffered and executed once control is transferred.
Description
BACKGROUND

The manner in which collaborative sharing typically occurs is serial in nature. Accordingly, what is needed is a system and method that address such issues.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding, reference is now made to the following description taken in conjunction with the accompanying Drawings in which:



FIGS. 1A and 1B illustrate embodiments of collaborative environments;



FIG. 2 illustrates one embodiment of a state machine that may be maintained by a device operating within an environment such as one of the collaborative environments of FIGS. 1A and 1B;



FIG. 3 illustrates one embodiment of a timeline showing virtual parallel access to a shared resource based on which device is currently in control of the shared resource;



FIG. 4 illustrates a flow chart of one embodiment of a process by which a device running a state machine within an environment such as one of the collaborative environments of FIGS. 1A and 1B may handle received input;



FIG. 5 illustrates a flow chart of one embodiment of a process by which a device running a state machine within an environment such as one of the collaborative environments of FIGS. 1A and 1B may handle a request for ownership;



FIG. 6 illustrates a flow chart of one embodiment of a process by which a device within an environment such as one of the collaborative environments of FIGS. 1A and 1B may handle a notification that it is going to lose control of a shared resource;



FIG. 7 illustrates a sequence diagram of one embodiment of a process that may be executed to manage access to a shared resource within an environment such as one of the collaborative environments of FIGS. 1A and 1B; and



FIG. 8 illustrates one embodiment of a system that may be used as a device within an environment such as one of the collaborative environments of FIGS. 1A and 1B.





DETAILED DESCRIPTION

It is understood that the following disclosure provides many different embodiments or examples. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.


Referring to FIG. 1A, one embodiment of a collaborative environment 100 is illustrated with multiple devices 102, 104, 106, and 108. The devices 102, 104, 106, and 108 are involved in a collaborative resource sharing session in which one or more resources are shared in a manner that makes the sharing appear to be at least somewhat parallel from the perspective of users of the devices, but is not actually parallel in terms of access to the resource. As will be described below in greater detail, resource access is synchronized with some actions being executed in real time or near real time and other actions being buffered to provide virtual parallel access to multiple users even though only a single user actually has access to the resource in a given time frame. The user that has actual access to the resource(s) at a particular time may be referred to herein as being in control of the shared resource or as being the owner of the shared resource.


In the present example, the devices 102, 104, 106, and 108 include collaboration software 110, 112, 114, and 116, respectively, that includes software instructions needed to participate in the collaborative environment 100. The collaboration software 110, 112, 114, and 116 may be part of other software on the devices, or may be a stand-alone program. For example, the collaboration software 110, 112, 114, and 116 may be provided by functionality integrated with software that provides functionality other than collaboration, or may be part of a software program dedicated to collaboration. Accordingly, it is understood that the functionality in the present disclosure may be provided in many different ways and is not limited to the specific examples described herein.


In the present example, the device 102 includes a state machine 118 that controls resource sharing interactions among the devices 102, 104, 106, and 108 to enable the collaborative environment 100. Although shown as a single state machine, the state machine 118 may represent multiple state machines (e.g., a separate state machine for each device 102, 104, 106, and 108 that tracks the state of the corresponding device). For each device 102, 104, 106, and 108, the state machine 118 may track such states as whether that device currently has control of the shared resource (e.g., is the owner or is not the owner), whether it is gaining control, or whether it is losing control.


One or more of the other devices 104, 106, and/or 108 may also have a state machine, as demonstrated by the state machine 120 of the device 108. For example, the collaboration software may have a viewer version and a sharer version, with only the sharer version capable of providing state machine functionality. In such embodiments, a state machine on a single device may control the collaboration session. In other embodiments, all collaboration software may have a state machine, but only one state machine may be active for a particular collaboration session. In still other embodiments, all collaboration software may have a state machine and the state machines of two or more devices may be synchronized to provide the collaboration session. A single state machine may control multiple resources or a different state machine may be assigned to each resource.


Some variations may occur due to the particular collaborative environment 100, such as whether the devices 102, 104, 106, and 108 are communicating in a peer-to-peer manner or via a server, but the basic tracking mechanism of the state machine 118 may remain the same regardless of the environment. In the present example, the device 102 is operating as a server, although it is understood that the device 102 need not be a standalone server (e.g., the device 102 may be a peer-to-peer endpoint or a mobile device providing server capabilities in a client/server environment). Furthermore, the device 102 may still participate in the collaboration session and may be treated by the state machine 118 as any other device for purposes of resource access.


The device 102 includes or is coupled to one or more resources, which may include internal resources 122 (e.g., software) and external resources 124 (e.g., keyboard, mouse, display, printer, manufacturing equipment, medical equipment, diagnostics equipment, and/or any other type of resource to which the device 102 may be coupled either directly or via a network). It is understood that some overlap may occur between internal and external resources, as many external resources are controlled via software. For example, the physical casing of a mouse may be viewed as an external resource for purposes of description, but it is controlled and interacts with the user via a mouse pointer using software, which may be viewed as an internal resource. Accordingly, the terms “internal” and “external” are descriptive when referring to a particular resource and are not intended to limit a particular resource or its manner of operation. Access to the resource 122 and/or the resource 124 may be shared with the devices 104, 106, and 108 within the collaborative environment 100.


The collaborative environment 100 may be viewed as having a sharing device and one or more using (e.g., viewing) devices. The sharing device, which is the device 102 in the present example, is the device that actually has control of the resource(s) being shared and makes the resource available to the other devices. For example, if the resource being shared is an application, the sharing device 102 is the device on which the application is running. The using devices are the devices that use the shared resource provided by the sharing device, such as the devices 104, 106, and 108.


Referring to FIG. 1B, another embodiment of a collaborative environment 130 is illustrated with multiple devices 132, 134, and 136 that include collaboration software 138, 140, and 142, respectively. The device 132 includes a state machine 144. One or more of the devices 134 and 136 may also include a state machine, as illustrated by state machine 146 of the device 136. The device 132 includes one or more resources, such as resources 148 and/or 150. As these components are similar or identical to similar components described with respect to FIG. 1A, they are not described in detail in the present figure.


In the present example, the devices 132, 134, and 136 may all communicate with one another. For example, the device 134, rather than sending a message only to the device 132 as would occur in the client/server model of FIG. 1A, may also send a message to the device 136. Upon receipt of the message from the device 134, the device 136 may be configured to use the state machine 146, may be configured to update only certain information (e.g., a cursor position), or may wait for a message from the state machine 144 before taking any action. Accordingly, variations in messaging may occur depending on the communication structure used within a collaborative environment.


Referring to FIG. 2, one embodiment of a state diagram 200 illustrates states 202, 204, 206, and 208. The state diagram 200 may be implemented by the state machine 118 of FIG. 1A in order to provide the collaborative environment 100 in which other devices (e.g., the devices 104, 106, and 108) can share access to the resources 122 and/or 124. In the present example, the device 102 and resources 122 and/or 124 are represented as a sharer/resource 210. The other devices 104, 106, and 108 are represented as a user 212.


The state 202 is an “owner” state and represents a device that is currently in control of the resources 122 and/or 124. The “owner” state is unique in that there is only one owner at any given time. Generally, actions generated by a device in state 202 will be executed. It is noted that the sharer is generally treated as any other device by the state diagram 200, and hardware events such as mouse and keyboard events may be intercepted and handled in the same way as input from other devices. In this respect, the only difference between the sharer and users is that the sharer may have the ability to take control of the session away from everyone. The state 204 is a “not owner” state and represents a device that is not currently in control. Generally, actions generated by a device in state 204 will not be executed. Some exceptions may apply to certain actions or information generated in state 202 and/or state 204, as will be described below. Such exceptions may depend on the particular resource or resources being shared and configuration parameters governing the sharing.


The state 206 is a “losing ownership” state and represents a device that is transitioning from the “owner” state 202 to the “not owner” state 204. Generally, actions generated by a device in state 206 will be executed, although some exceptions may apply to certain actions. The state 208 is a “gaining ownership” state and represents a device that is transitioning from the “not owner” state 204 to the “owner” state 202. Generally, actions generated by a device in state 208 will be buffered and executed when the device becomes the owner.
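
As an illustration of how these four states and their legal transitions might be represented in software, consider the following minimal Python sketch. The names OwnershipState and ALLOWED_TRANSITIONS are illustrative assumptions, not taken from the patent:

    from enum import Enum, auto

    class OwnershipState(Enum):
        OWNER = auto()              # state 202: input is executed
        NOT_OWNER = auto()          # state 204: input is generally ignored
        LOSING_OWNERSHIP = auto()   # state 206: input is still executed during the transition
        GAINING_OWNERSHIP = auto()  # state 208: input is buffered until ownership is gained

    # Legal transitions implied by the state diagram 200. The direct
    # OWNER -> NOT_OWNER and NOT_OWNER -> OWNER edges cover voluntary
    # handoffs and the case, discussed below, where the current owner
    # abruptly leaves the session.
    ALLOWED_TRANSITIONS = {
        OwnershipState.OWNER: {OwnershipState.LOSING_OWNERSHIP, OwnershipState.NOT_OWNER},
        OwnershipState.LOSING_OWNERSHIP: {OwnershipState.NOT_OWNER},
        OwnershipState.NOT_OWNER: {OwnershipState.GAINING_OWNERSHIP, OwnershipState.OWNER},
        OwnershipState.GAINING_OWNERSHIP: {OwnershipState.OWNER},
    }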


The transition from the “not owner” state 204 to the “gaining ownership” state 208 may be triggered by one or more types of events. For example, input in the form of mouse clicks, keystrokes, audio input, video input, and/or other types of input, including the execution of defined automated events, may trigger the state transition from state 204 to state 208. Accordingly, the transition may be triggered in many different ways and may be configurable.


The transition periods that occur during state 206 and state 208 enable a parallel sharing process to occur. More specifically, assuming the existence of proper input, the direct execution of actions for one device and the buffering of actions for another device that are later executed enable at least two users to perform virtual parallel actions without causing any conflicts. This is illustrated below with respect to FIG. 3.


With additional reference to FIG. 3, one embodiment of a timeline 300 illustrates the virtual parallel access of one or more resources by Device 1, Device 2, and Device 3. For purposes of example, Device 1 may be the device 102 of FIG. 1A, Device 2 may be the device 104, and Device 3 may be the device 106. The timeline 300 moves from left to right and includes six specific times t1-t6. Actions by Device 1 are represented by line 302, actions by Device 2 are represented by lines 304 and 308, and actions by Device 3 are represented by line 306.


From time t1 to time t2, Device 1 is in control and input from Device 1 is being executed. Input from Device 2 and Device 3 is being ignored and/or no input is being received from those devices. Device 1 is in the “owner” state 202. Device 2 and Device 3 are in the “not owner” state 204.


At time t2, Device 2 requests control and Device 1 voluntarily hands control to Device 2 without going through a full transition period. From time t2 to time t3, Device 2 is in control and input from Device 2 is being executed. Input from Device 1 and Device 3 is being ignored and/or no input is being received from those devices. Device 2 is in the “owner” state 202. Device 1 and Device 3 are in the “not owner” state 204.


At time t3, Device 3 requests control. Device 2 does not voluntarily hand control to Device 3 and a timer is started for the transition period. From time t3 to time t4, Device 2 is in control and input from Device 2 is being executed. Input from Device 1 is being ignored and/or no input is being received, and input from Device 3 is being buffered. Device 2 is in the “losing ownership” state 206. Device 1 is in the “not owner” state 204. Device 3 is in the “gaining ownership” state 208.


It is understood that input may be buffered on the device sending the input as well as on the device receiving the input. For example, Device 3 is sending input to Device 1 (as the device 102 with the state machine) from time t3 to time t4 and that input is being buffered since Device 3 is not currently the owner. However, it may be desirable for Device 3 to buffer the input before it is sent. Such sending side buffering may be based on network characteristics (e.g., interpacket delay due to network latency) and/or a predefined delay local to Device 3. This sending side buffering enables Device 3 to send the input in a regulated manner (e.g., every twenty or forty milliseconds) rather than simply sending the input whenever it is detected (e.g., with a zero millisecond delay).
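
By way of illustration, the sending-side buffering described above could be realized with a simple flush-on-interval queue. The following Python sketch is one possible implementation under stated assumptions; the class name, the send_fn callback, and the default interval are all hypothetical:

    import time
    from collections import deque

    class RegulatedSender:
        """Buffers input events locally and flushes them at a fixed interval
        (e.g., every 20 ms) instead of sending each event as it is detected."""

        def __init__(self, send_fn, interval_ms=20):
            self._send_fn = send_fn            # callable that transmits a batch of events
            self._interval = interval_ms / 1000.0
            self._buffer = deque()
            self._last_flush = time.monotonic()

        def on_input(self, event):
            self._buffer.append(event)
            now = time.monotonic()
            if now - self._last_flush >= self._interval:
                self.flush(now)

        def flush(self, now=None):
            if self._buffer:
                self._send_fn(list(self._buffer))  # send all buffered events as one batch
                self._buffer.clear()
            self._last_flush = now or time.monotonic()

A device might construct one such sender per connection, e.g., RegulatedSender(send_fn=transport.send_batch, interval_ms=40), where transport.send_batch is a hypothetical batch-transmit function.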


At time t4, the transition period expires and control is automatically transferred from Device 2 to Device 3. From time t4 to time t5, Device 3 is in control and input from Device 3 is being executed. Buffered input received from Device 3 during the period between time t3 and time t4 may be executed during this time. Input from Device 1 and Device 2 is being ignored and/or no input is being received. Device 3 is in the “owner” state 202. Device 1 and Device 2 are in the “not owner” state 204.


At time t5, Device 2 requests control. Device 3 does not voluntarily hand control to Device 2 and a timer is started for the transition period. From time t5 to time t6, Device 3 is in control and input from Device 3 is being executed. Input from Device 1 is being ignored and/or no input is being received, and input from Device 2 is being buffered. Device 3 is in the “losing ownership” state 206. Device 1 is in the “not owner” state 204. Device 2 is in the “gaining ownership” state 208.


In some embodiments, there may be a waiting period during which the device that was recently in control (e.g., Device 2) is not allowed to again take control. In other embodiments, the waiting period may be applied to any device, not just a device that was recently in control. In still other embodiments, there may be no such limits. It is understood that requests for control may be queued. For example, if Device 1 asks for control just after time t5, its request may be queued and executed after Device 2 takes control. In such cases, there may be a waiting period between the time Device 2 takes control and the time the request by Device 1 is executed. This waiting period ensures that Device 2 has time to accomplish more than could be accomplished if the request by Device 1 was executed as soon as Device 2 takes control. In other embodiments, there may be no such waiting period and the request by Device 1 may be executed as soon as Device 2 takes control. In this case, Device 2 would have the transition time before it takes control from Device 3 and the transition time before it loses control to Device 1. It is understood that such options may be configurable within the collaboration software 110.
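
The queuing and waiting-period behavior described above might be sketched as follows. The names are hypothetical, and the waiting-period policy and its length are among the configurable options just mentioned:

    import time
    from collections import deque

    class ControlRequestQueue:
        def __init__(self, waiting_period_s=5.0):
            self._queue = deque()
            self._waiting_period = waiting_period_s
            self._last_transfer_at = None

        def request_control(self, device_id):
            self._queue.append(device_id)

        def next_transfer(self):
            """Return the next device to receive control, or None if the
            waiting period since the last transfer has not yet elapsed."""
            if not self._queue:
                return None
            now = time.monotonic()
            if (self._last_transfer_at is not None
                    and now - self._last_transfer_at < self._waiting_period):
                return None  # give the new owner time to work before the next handoff
            self._last_transfer_at = now
            return self._queue.popleft()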


At time t6, the transition period expires and control is transferred from Device 3 to Device 2. From time t6, Device 2 is in control and input from Device 2 is being executed. Buffered input received from Device 2 during the period between time t5 and time t6 may be executed during this time. Input from Devices 1 and 3 is being ignored and/or no input is being received. Device 2 is in the “owner” state 202. Device 1 and Device 3 are in the “not owner” state 204.


Accordingly, during the transition times that occur between time t3 and time t4 and between time t5 and time t6, two devices may access the shared resource in a virtual parallel manner. Buffering input from a device that is going to gain control and later executing the buffered input after the device gains control enables such virtual parallel access without causing a conflict with the device that has access before the transition occurs.


Referring again to FIG. 2, in some embodiments, there may be a transition directly from the “not owner” state 204 to the “owner” state 202 when a device (other than the sharer) that is in control abruptly ends its participation in the session (e.g., leaves or is disconnected). This transition enables the sharer to regain control of the session since the state diagram 200 will not enter the transition states 206 and 208.


In the present example, the state machine 118 maintains a copy of the state machine for each of the devices 102, 104, 106, and 108, even if a particular device has not requested control. However, devices that cannot take control (e.g., do not have permission or are not properly configured to take control) may not have a corresponding state machine. In other embodiments, a state machine may exist only for devices that have requested control. For example, when a device requests control for the first time, the state machine 118 may create a state machine for that device.


In a more detailed example of the state diagram 200, assume that the resources 122 and 124 include an application that allows editing (e.g., a word processor or spreadsheet application), a keyboard, and a mouse. In this example, it is understood that the keyboard and mouse are not physically shared, but the control that they provide (e.g., via actions received as input by the device 102) may be virtually shared, with the device in control being able to edit or otherwise manipulate the application. Other devices' mouse cursors may be represented by virtual cursors (e.g., faded, dotted, or otherwise denoted as not representing the cursor in control). The actual resource being shared may be viewed as the application or display (which may be shared by sending display information to the devices 104, 106, and 108), the mouse and keyboard (which may be shared by sending their actions, such as cursor movements and keystrokes, to the devices 104, 106, and 108), or a combination of both the application/display and the keyboard/mouse events.


In the present embodiment, the device 102 is the sharer and the devices 104, 106, and 108 are viewers, although any of the devices can be in control. Input takes the form of keyboard and mouse events, which are injected into the collaboration session by whichever device is the current owner. Mouse movements by non-owners may be shown as virtual cursors and keyboard events are generally rejected unless they represent a control request or are being buffered following a control request.


A non-owner may become an owner by clicking the left mouse button. In other words, when in the “not owner” state 204, a left mouse button (LMB) down event may trigger the ownership transition process previously described. During the transition process, keyboard events from the incoming owner will be buffered and not reflected on the display. When that device becomes the owner, the real mouse cursor will be moved to the position of the virtual mouse cursor of the new owner and the buffered keyboard events will be executed. For example, the real mouse cursor may be moved to a particular cell of a spreadsheet and typed input may appear. This information will be sent to the non-owning devices. The previous owner will be assigned a virtual mouse cursor positioned where their mouse was located when they lost ownership.


In some embodiments, the current owner may be notified that they are going to lose ownership. The current owner may then yield control voluntarily (e.g., prior to the end of the transition period) or may wait until they automatically lose control at the end of the transition period. In still other embodiments, the current owner may have temporarily locked ownership and may prevent an ownership change by, for example, holding down the left mouse button. This lock enables the current owner to finish before being interrupted, and may continue for as long as the left mouse button is held down (e.g., a left mouse button up event may unlock the owner state), may continue for a defined time period (e.g., there may be a maximum lock time allowed by the state machine 118 even if the left mouse button is held down), and/or until another defined event occurs. In some embodiments, the owner may be unable to lock ownership after being notified of an impending transition (e.g., left mouse button events generated by the owner may be ignored or the lock option may be otherwise disabled).
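
The lock behavior might be expressed along the following lines. This is a sketch under stated assumptions: the class name, the 30-second maximum lock time, and the method names are illustrative, not from the patent:

    import time

    class OwnershipLock:
        MAX_LOCK_S = 30.0  # assumed maximum lock time enforced even while the button is held

        def __init__(self):
            self._locked_at = None

        def on_lmb_down(self, transition_pending):
            # Locking may be disabled once the owner has been notified of an
            # impending transition.
            if not transition_pending:
                self._locked_at = time.monotonic()

        def on_lmb_up(self):
            self._locked_at = None  # a left-mouse-button-up event unlocks the owner state

        def is_locked(self):
            if self._locked_at is None:
                return False
            if time.monotonic() - self._locked_at > self.MAX_LOCK_S:
                self._locked_at = None  # the lock expires even if the button stays down
                return False
            return True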


For example, assume that the application is Excel (the spreadsheet program produced by Microsoft Corp. of Redmond, Wash.). The current owner may be typing something into a cell when they are notified that they are going to lose control. The transition period gives them some time to finish what they are typing, while also allowing the incoming owner to click on a cell and begin typing. This transition period enables the two users to perform parallel editing, although the parallelism is virtual in nature since the incoming owner's input will not appear until they gain ownership.


It is understood that the transition period may be any length of time and may be configurable. Generally, the transition period will be defined to provide enough time for a particular action or set of actions to be completed (e.g., typing into a cell), but not so long as to disrupt an active sharing of the resource. For example, a transition period between five and ten seconds may be used in the Excel example, although any other times may be set. Different resources and/or devices may be given unique transition periods in some embodiments. For example, a user of the device 102 may be leading the collaboration session and may be given a longer transition period to ensure that the user is able to finish typing. By default, however, all transition periods may be assigned the same amount of time. One such configuration is sketched below.
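
A minimal sketch of such per-device configuration, assuming a default in the five-to-ten-second range mentioned above (the names and values are illustrative):

    DEFAULT_TRANSITION_S = 7.0  # assumed default within the 5-10 second range above

    # Hypothetical per-device overrides, e.g., a longer period for the session leader.
    TRANSITION_OVERRIDES = {102: 10.0}

    def transition_period(device_id):
        # Devices without an override fall back to the shared default.
        return TRANSITION_OVERRIDES.get(device_id, DEFAULT_TRANSITION_S)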


In the current example where the state machine 118 handles information for all of the devices in the collaboration session, the state machine 118 may use a table or another data structure to track each of the devices in the collaboration session. One example of such a table is shown below as Table 1.













TABLE 1

DEVICE   STATE       CURRENT CURSOR   MOUSE AND         OWNER
                     POSITION         KEYBOARD EVENTS   LOCKED?

102      Not owner   x1, y1           N/A               N/A
104      Not owner   x2, y2           N/A               N/A
106      Owner       x3, y3           List of events    No
108      Not owner   x4, y4           N/A               N/A









For each device 102, 104, 106, and 108, Table 1 tracks the current state, the current cursor position, a list of mouse and keyboard events (including buffered events for a device gaining ownership), and whether ownership is locked (e.g., whether the left mouse button is down) to prevent an ownership change. It is understood that the information in Table 1 may change depending on the nature of the resource or resources being shared. The sharer may also keep track of other information, such as the identity of the current owner and the next owner, and may maintain a timer for transitions.
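
In code, each row of such a table might be represented by a small record like the following Python sketch (the field names and types are illustrative assumptions):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class DeviceRecord:
        device_id: int
        state: str                    # "Owner", "Not owner", "Gaining ownership", or "Losing ownership"
        cursor_pos: Tuple[int, int]   # current (x, y) cursor position
        events: List[dict] = field(default_factory=list)  # mouse/keyboard events, incl. buffered ones
        owner_locked: bool = False    # True while the owner has locked ownership

    # One record per device in the session, mirroring Table 1.
    devices = {dev_id: DeviceRecord(dev_id, "Not owner", (0, 0)) for dev_id in (102, 104, 108)}
    devices[106] = DeviceRecord(106, "Owner", (0, 0))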


From the perspective of the other devices 104, 106, and 108, the collaboration software may not be aware of parallel access operations or even their own mouse states. For example, the devices 104, 106, and 108 may simply send their mouse and keyboard events to the device 102 (e.g., the server hosting the collaboration session via the state machine 118) as though they are controlling the device 102.


Generally speaking, actual sharing of the resources (e.g., the display) of the device 102 uses a protocol that is able to provide the needed functionality. Such functionality includes enabling the sharer to send its screen display to the viewers and enabling the sharer to allow specific viewers to control the sharer's display. The functionality may also include enabling viewers to send mouse and keyboard events to the sharer and enabling the sharer to identify the source device for a particular received event. The functionality may also include enabling mouse position synchronization between the sharer and the viewers. For example, if the viewer sends its mouse position as (x,y), the sharer needs to be able to reflect this by moving the virtual or real cursor (depending on the state of ownership of the viewer) to (x,y).


In the present embodiment where the implementation depends on the sharer, no specific special extensions may be needed. However, certain messages may be useful to provide more transparency in the collaboration session. For example, the sharer may send a message to a viewer to notify the viewer of their state. Such a message may be sent to the viewer each time the viewer's state changes. This enables the viewer (e.g., the device) to notify the user of the device that it is losing ownership. Another message may be sent from a viewer to the sharer if the viewer explicitly yields ownership. Yet another message may be sent from the sharer to all viewers to identify a new owner. This information may then be presented to users of the viewing devices.
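
Such transparency messages could be as simple as three tagged payloads. The shapes below are assumptions for illustration; the patent does not define a wire format:

    # Hypothetical message shapes for the notifications described above.
    def state_change_msg(device_id, new_state):
        # Sharer -> viewer: sent each time that viewer's state changes.
        return {"type": "state_change", "device": device_id, "state": new_state}

    def yield_ownership_msg(device_id):
        # Viewer -> sharer: the viewer explicitly gives up ownership.
        return {"type": "yield_ownership", "device": device_id}

    def new_owner_msg(device_id):
        # Sharer -> all viewers: announces which device is now in control.
        return {"type": "new_owner", "device": device_id}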


Referring to FIG. 4, a flow chart 400 illustrates one embodiment of a process that may be executed by a device with a state machine, such as the device 102 of FIG. 1A with the state machine 118. In the present example, the state machine 118 handles all coordination of ownership and transitions. The process 400 may be used to handle input received from one or more of the devices 102, 104, 106, and/or 108.


In step 402, input is received from a device (e.g., the device 104 of FIG. 1A) in a collaboration session. As described previously, input may be handled by the state machine 118 in a similar manner for all devices, so the state machine 118 may not differentiate between input from the device 102 or input from the other devices 104, 106, and 108. In some embodiments, there may be defined input from the device 102 that overrides all other input regardless of whether the device 102 is the current owner. This enables the user of the device 102 to regain control of the collaboration session at any time. Generally, however, input from the device 102 will not be prioritized unless a preference is defined for the state machine 118 giving priority to the device 102. It is understood that various priority configurations may be available, with particular devices being given priority over other devices.


In step 404, a determination is made as to whether the device from which the input was received is currently in control. For example, the method 400 may determine if the device is in the “owner” state 202. If the device is in control, the method 400 moves to step 406, where the input is executed (e.g., an action is performed) with the input device viewed as a controlling device with any parameters defined for the state 202. In some embodiments, the state 202 may have the least number of limitations on input, but may still be restricted to input that is compatible with the application being shared. Accordingly, the input will likely not be able to launch an application, but may access and use some or all of the functionality of the application being shared. In other embodiments, some functionality may be limited. For example, if the application provides scripting functions, the physical sharer (e.g., the device 102) may choose to allow or limit access to the scripting functions. If the device is not in control as determined in step 404, the method 400 moves to step 408.


In step 408, a determination is made as to whether the device from which the input was received is currently losing control. For example, the method 400 may determine if the device is in the “losing ownership” state 206. If the device is losing control, the method 400 moves to step 410, where the input is executed (e.g., an action is performed) with the input device viewed as a device that is losing control with any parameters defined for the state 206. In some embodiments, the state 206 may be limited somewhat, such as being unable to close the application, launch or otherwise bring another program into the foreground, and/or perform certain other functions. If the device is not losing control as determined in step 408, the method 400 moves to step 412.


In step 412, a determination is made as to whether the input is executable without needing control. For example, the method 400 may determine if the input reflects mouse movement of a virtual mouse cursor. If the input is executable without needing control, the method 400 moves to step 414, where the input is executed (e.g., an action is performed) with the input device viewed as a device that is not in control. It is understood that events such as mouse movements may be treated differently from keyboard input, in which case steps 412 and 414 may not be needed. If the input is not executable without control as determined in step 412, the method 400 moves to step 416.


In some embodiments, steps 412 and 414 may not occur. Some events may be handled outside of the state machine 118 (e.g., out-of-band) and would not be handled in the method 400. For example, if cursor movements were handled by another component and not by the state machine 118, steps 412 and 414 would not be needed to handle those movements. Accordingly, steps 412 and 414 may be used only if the state machine 118 is configured to handle events that are executable without control.


In step 416, a determination is made as to whether the device from which the input was received is currently gaining control. For example, the method 400 may determine if the device is in the “gaining ownership” state 208. If the device is gaining control, the method 400 moves to step 418, where the input is buffered. If the device is not gaining control as determined in step 416, the method 400 moves to step 420.


In step 420, a determination is made as to whether the device from which the input was received is currently requesting control. If the device is requesting control, the method 400 moves to step 422, where the control transition is initiated. If the ownership cannot be changed (e.g., if ownership is locked or otherwise unavailable), the method 400 may execute whatever process is defined for such an event. Such processes may include notifying the device from which the input was received that the ownership transfer request failed, buffering the request until ownership becomes available or for a defined period of time, and/or taking other actions. If the device is not requesting control as determined in step 420, the method 400 moves to step 424. In step 424, the input is ignored.
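
Pulling steps 402 through 424 together, the decision cascade of the method 400 might look like the following sketch, where the sm helpers are hypothetical stand-ins for lookups in the state machine's tracking table:

    def handle_input(device, event, sm):
        """One possible rendering of the flow chart 400; `sm` stands in for
        the state machine's tracking table and helper predicates."""
        if sm.is_owner(device):                      # step 404
            sm.execute(device, event)                # step 406
        elif sm.is_losing_ownership(device):         # step 408
            sm.execute(device, event)                # step 410 (possibly restricted)
        elif sm.executable_without_control(event):   # step 412, e.g., virtual cursor moves
            sm.execute(device, event)                # step 414
        elif sm.is_gaining_ownership(device):        # step 416
            sm.buffer(device, event)                 # step 418
        elif sm.is_control_request(event):           # step 420
            sm.initiate_transition(device)           # step 422 (may fail if ownership is locked)
        else:
            pass                                     # step 424: ignore the input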


Referring to FIG. 5, a flow chart 500 illustrates one embodiment of a process that may be executed by a device with a state machine, such as the device 102 of FIG. 1A with the state machine 118. In the present example, the state machine 118 handles all coordination of ownership and transitions. The process 500 may be used to handle a request for ownership received from one or more of the devices 102, 104, 106, and/or 108. For purposes of illustration, an example of a table listing each device's state is provided and the previously described example using cursor locations and mouse/keyboard events is used. The initial table contains the following information as shown in Table 2 below, with the device 106 currently being in control.













TABLE 2

DEVICE   STATE       CURRENT CURSOR   MOUSE AND         OWNER
                     POSITION         KEYBOARD EVENTS   LOCKED?

102      Not owner   x1, y1           N/A               N/A
104      Not owner   x2, y2           N/A               N/A
106      Owner       x3, y3           List of events    No
108      Not owner   x4, y4           N/A               N/A









In step 502, a request is received from a device (e.g., the device 104 of FIG. 1A) that wants to take control of a shared resource in the collaboration session. In step 504, a second device (e.g., the device 106) is identified as currently being in control. In step 506, a determination may be made as to whether control (e.g., ownership) is locked. If control is locked, the method 500 moves to step 508, where a determination may be made as to whether the request for control is to be denied. For example, a timer may be set and the lock may remain in place until the timer expires, or the method 500 may simply wait for the current lock to be removed (e.g., via a left mouse button up event). Alternatively, the collaboration software 110 may be configured to deny the request and require that the request be resubmitted. Regardless of the parameters on which the determination of step 508 is based, if the request is not denied, the method 500 returns to step 506. If the request is denied, the method 500 moves to step 510 and denies the request. It is understood that the ability to lock ownership may not be available in some embodiments, in which case steps 506, 508, and 510 may be omitted. If control is not locked as determined in step 506, the method 500 moves to step 512.
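
A sketch of steps 502 through 514 follows; the sm helpers are hypothetical stand-ins for the state machine's bookkeeping, and the deny-versus-wait choice reflects the configurable policy described above:

    def handle_control_request(requester, sm, deny_when_locked=False):
        """Sketch of steps 502-514 of the flow chart 500."""
        current_owner = sm.current_owner()                   # step 504
        while sm.is_locked(current_owner):                   # step 506
            if deny_when_locked or sm.lock_wait_expired():   # step 508
                sm.deny_request(requester)                   # step 510
                return
            sm.wait_for_unlock()   # e.g., wait for a left-mouse-button-up event
        sm.notify_losing_control(current_owner)              # step 512
        sm.start_transition_timer()                          # step 514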


In step 512, the second device is notified that it is going to lose control. The notification may include an amount of time remaining or another indicator of the impending loss of control. In step 514, a timer is started for the transition period. The table may now contain the following information as shown in Table 3 below.













TABLE 3

DEVICE   STATE               CURRENT CURSOR   MOUSE AND                 OWNER
                             POSITION         KEYBOARD EVENTS           LOCKED?

102      Not owner           x1, y1           N/A                       N/A
104      Gaining ownership   x2, y2           List of buffered events   N/A
106      Losing ownership    x3, y3           List of events            No
108      Not owner           x4, y4           N/A                       N/A









In step 516, which is during the transition period, input received from the second device is executed and input received from the requesting device is buffered. In step 518, a determination is made as to whether the timer has expired. If the timer has expired as determined in step 518, the method 500 moves to step 522, where control is transitioned from the second device to the requesting device. If the timer has not expired as determined in step 518, the method 500 continues to step 520.


In step 520, a determination is made as to whether the second device has relinquished control. For example, in response to step 512, the second device may voluntarily relinquish control prior to the expiration of the timer. If the second device has not relinquished control, the method 500 returns to step 516. If the second device has relinquished control, the method 500 continues to step 522, where control is transitioned from the second device to the requesting device. Following step 522, the method 500 moves to step 524, where any actions that were buffered for the requesting device are executed. The table may now contain the following information as shown in Table 4 below.













TABLE 4

DEVICE   STATE       CURRENT CURSOR   MOUSE AND         OWNER
                     POSITION         KEYBOARD EVENTS   LOCKED?

102      Not owner   x1, y1           N/A               N/A
104      Owner       x2, y2           List of events    No
106      Not owner   x3, y3           N/A               N/A
108      Not owner   x4, y4           N/A               N/A
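
Returning to steps 516 through 524, the transition period itself could be driven by a loop such as this sketch (the helper names are hypothetical):

    def run_transition(owner, requester, sm, timer):
        """Steps 516-524: execute the owner's input, buffer the requester's,
        and hand over control when the timer expires or the owner yields."""
        while not timer.expired():                       # step 518
            for device, event in sm.pending_input():
                if device == owner:
                    sm.execute(device, event)            # step 516: owner input runs
                elif device == requester:
                    sm.buffer(device, event)             # step 516: requester input is buffered
            if sm.has_relinquished(owner):               # step 520
                break
        sm.transfer_control(owner, requester)            # step 522
        for event in sm.drain_buffer(requester):
            sm.execute(requester, event)                 # step 524: replay buffered actions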









Referring to FIG. 6, a flow chart 600 illustrates one embodiment of a process that may be executed by a device that is currently in control of a shared resource in a collaboration session. The device may include a state machine, such as the device 102 of FIG. 1A with the state machine 118, or may interact with a state machine located on another device. The process 600 may be used to handle a notification that control is going to be lost.


In step 602, a notification is received that control is going to be lost. The notification may include information such as an amount of control time remaining. In step 604, a determination may be made as to whether the user of the device wants to relinquish control. For example, a pop-up box may appear on a display with a question such as “Relinquish control?” and the user may be able to select “Yes” or “No” as an answer. In other embodiments, the user may simply ignore the pop-up box and the device will interpret that as a “no” response. It is understood that some embodiments may not provide an option for relinquishing control. If the determination of step 604 indicates that control is to be relinquished, the method 600 moves to step 606. In step 606, a message is sent indicating that control has been relinquished. If the determination of step 604 indicates that control is not to be relinquished, the method 600 moves to step 608.


In step 608, input may be sent for execution. In step 610, a determination may be made as to whether control has been lost. For example, the determination may identify whether the transition period has timed out. This may be determined based on a message received from the state machine 118, based on an internal timer on the device losing control, and/or based on one or more other processes. If the determination indicates that control has not been lost, the method 600 may return to step 608 as shown or, in some embodiments, to step 604. If the determination indicates that control has been lost, the method 600 may end.
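
From the losing device's side, the flow chart 600 might be rendered as the following sketch, where ui and session are hypothetical interfaces for the user prompt and the connection to the sharer:

    def on_losing_control_notice(notice, ui, session):
        """Sketch of the flow chart 600 as seen by the current owner."""
        # Step 604: ask the user; ignoring the prompt is treated as "No".
        if ui.prompt_yes_no("Relinquish control?", default=False):
            session.send_relinquish()                 # step 606
            return
        while not session.control_lost():             # step 610
            for event in ui.pending_input():
                session.send_input(event)             # step 608: keep sending input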


Referring to FIG. 7, a sequence diagram 700 illustrates one embodiment of a process that may be executed between multiple devices during a collaboration session. The present example includes the device 102 with the state machine 118, the device 104, and the device 106 of FIG. 1A. For purposes of illustration, a collaboration session is underway and the device 104 is in control. The state machine 118 is tracking the state and other information for each of the devices 102, 104, and 106.


In step 702, the device 104 sends input to the device 102. As noted previously, this input may be buffered by the device 104 prior to sending based on network characteristics and/or a locally defined delay. As the device 104 is currently in control, the input is executed in step 704. The executed input is sent to the device 104 for display in step 706 and sent to the device 106 for display in step 708. As with the device 104, this executed input may be buffered by the device 102 prior to sending based on network characteristics and/or a locally defined delay. It is noted that sending the executed input to both of the devices 104 and 106 occurs in the client/server model of FIG. 1A and may be handled differently in other environments. For example, in some embodiments, the executed input may not be sent to the device 104 (e.g., if the device 104 has its own state machine that is synchronized with the state machine of the device 102).


In step 710, the device 106 sends a request for ownership to the device 102. In step 712, a transition timer is started and, in step 714, a transition notification message is sent to the device 104. In steps 716, 718, 720, and 722, which occur during the transition period, the device 104 may continue sending input to the device 102, that input may be executed, and the executed input may be sent to the devices 104 and 106. In step 724, which occurs during the transition period, the device 106 sends input to the device 102. This input is buffered in step 726.


In step 728, the transition is performed, with the device 106 replacing the device 104 as the current owner. In step 730, the buffered input is executed. In steps 732 and 734, the executed input is sent to the devices 104 and 106 for display.


Referring to FIG. 8, one embodiment of a system 800 is illustrated. The system 800 is one possible example of a device such as the device 102 of FIG. 1A. Embodiments of the device 102 include cellular telephones (including smart phones), personal digital assistants (PDAs), netbooks, tablets, laptops, desktops, workstations, telepresence consoles, and any other computing device that can communicate with another computing device using a wireless and/or wireline communication link. Such communications may be direct (e.g., via a peer-to-peer network, an ad hoc network, or using a direct connection), indirect, such as through a server or other proxy (e.g., in a client-server model), or may use a combination of direct and indirect communications. It is understood that the device 102 may be implemented in many different ways and by many different types of systems, and may be customized as needed to operate within a particular environment.


The system 800 may include a controller (e.g., a central processing unit (“CPU”)) 802, a memory unit 804, an input/output (“I/O”) device 806, and a network interface 808. The components 802, 804, 806, and 808 are interconnected by a transport system (e.g., a bus) 810. A power supply (PS) 812 may provide power to components of the system 800, such as the CPU 802 and memory unit 804, via a power system 814 (which is illustrated with the transport system 810 but may be separate). It is understood that the system 800 may be differently configured and that each of the listed components may actually represent several different components. For example, the CPU 802 may actually represent a multi-processor or a distributed processing system; the memory unit 804 may include different levels of cache memory, main memory, hard disks, and remote storage locations; the I/O device 806 may include monitors, keyboards, and the like; and the network interface 808 may include one or more network cards providing one or more wired and/or wireless connections to a network 816. Therefore, a wide range of flexibility is anticipated in the configuration of the system 800.


The system 800 may use any operating system (or multiple operating systems), including various versions of operating systems provided by Microsoft (such as WINDOWS), Apple (such as Mac OS X), UNIX, and LINUX, and may include operating systems specifically developed for handheld devices, personal computers, servers, and embedded devices depending on the use of the system 800. The operating system, as well as other instructions, may be stored in the memory unit 804 and executed by the processor 802. For example, if the system 800 is the device 102, the memory unit 804 may include instructions for the state machine 118 and for performing some or all of the message sequences and methods described herein.


While the preceding description shows and describes one or more embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure. For example, various steps illustrated within a particular flow chart or sequence diagram may be combined or further divided. In addition, steps described in one flow chart or diagram may be incorporated into another flow chart or diagram. Furthermore, the described functionality may be provided by hardware and/or software, and may be distributed or combined into a single platform. Additionally, functionality described in a particular example may be achieved in a manner different than that illustrated, but is still encompassed within the present disclosure. Therefore, the claims should be interpreted in a broad manner, consistent with the present disclosure.

Claims
  • 1. A method comprising: receiving, by a first device of a plurality of devices involved in a collaboration session that uses a shared resource, first input from a second device of the plurality of devices; determining, by the first device, whether the first input is an executable input, wherein an executable input is an input that requires control of the shared resource used in the collaborative session; determining, by the first device, that a third device of the plurality of devices is currently in control of the shared resource; starting a timer, by the first device, wherein control of the shared resource will automatically pass from the third device to the second device when a time period defined by the timer expires, wherein the timer is started in response to the first input; executing, by the first device, second input received from the third device after the timer has been started and before the time period expires, wherein the second input represents at least one action to be taken using the shared resource; sending, by the first device, a result obtained by executing the at least one action represented by the second input to each of the plurality of devices; buffering, by the first device, third input received from the second device after the timer has been started and before the time period expires, wherein the third input represents at least one action to be taken using the shared resource; executing, by the first device, the at least one action represented by the third input after control is passed from the third device to the second device; and sending, by the first device, a result obtained by executing the at least one action represented by the third input to each of the plurality of devices.
  • 2. The method of claim 1 further comprising: receiving, by the first device, fourth input from the second device, wherein the fourth input is received before the first input and while the third device is in control of the shared resource, and wherein the fourth input represents at least one action to be taken using the shared resource; and ignoring, by the first device, the fourth input.
  • 3. The method of claim 1 further comprising determining, by the first device, whether the second input includes a restricted action, wherein the restricted action is ignored and not executed.
  • 4. The method of claim 1 wherein the first device is sharing the shared resource, the method further comprising taking control of the shared resource from any other device of the plurality of devices that currently has control.
  • 5. The method of claim 1 further comprising sending, by the first device, a message to the plurality of devices to inform the plurality of devices that the second device is now in control of the shared resource.
  • 6. The method of claim 1 further comprising sending, by the first device, a message to the third device indicating that the third device will lose control of the shared resource when the time period expires.
  • 7. The method of claim 1 further comprising receiving, by the first device, a message from the third device indicating that the third device relinquishes control to the second device prior to the expiration of the time period.
  • 8. The method of claim 1 wherein the shared resource is user-controllable and wherein the first, second, and third input correspond to user input.
  • 9. The method of claim 1 wherein the shared resource includes a user interface device.
  • 10. The method of claim 1 further comprising: receiving, by the first device, a locked indicator prior to starting the timer, wherein the locked indicator signifies that control cannot be transferred from the third device; and receiving, by the first device, an unlocked indicator following the receipt of the locked indicator, wherein the unlocked indicator signifies that control can be transferred from the third device and wherein the timer is started after receiving the unlocked indicator.
  • 11. The method of claim 1 further comprising taking control of the shared resource, by the first device, after control is passed from the third device to the second device and the second device leaves the collaboration session without passing control to another device of the plurality of devices.
  • 12. The method of claim 1 wherein access to the shared resource by the first device is handled by the first device in an identical manner as access by any other device of the plurality of devices.
  • 13. The method of claim 1 further comprising: receiving, by the first device, fourth input from the third device, wherein the fourth input represents a request to take control of the shared resource after control of the resource has passed to the second device; and determining whether a cooling off period has expired, wherein the second device cannot gain control of the shared resource until the cooling off period has expired.
  • 14. The method of claim 13 wherein the cooling off period is applied to each of the plurality of devices.
  • 15. The method of claim 13 wherein the cooling off period is applied only to the second device.
  • 16. A method comprising: monitoring, by a first device of a plurality of devices involved in a collaboration session that uses a shared resource, a state of each of the plurality of devices as one of an owner state, a not owner state, a gaining ownership state, and a losing ownership state; receiving, by the first device, first input from a second device of the plurality of devices; determining, by the first device, whether the first input is an executable input, wherein an executable input is an input that requires control of the shared resource used in the collaborative session; identifying, by the first device, that the second device is in the not owner state and that a third device of the plurality of devices is currently in the owner state, wherein the not owner state indicates that the second device is currently not in control of the shared resource and wherein the owner state indicates that the third device is currently in control of the shared resource; starting a timer, by the first device, wherein control of the shared resource will automatically pass from the third device to the second device when a transition period defined by the timer expires, wherein the timer is started in response to the first input; modifying, by the first device, the state of the second device from the not owner state to the gaining ownership state, wherein second input received from the second device while in the gaining ownership state is buffered by the first device as the second input is received and not sent to each of the plurality of devices, wherein the second input represents at least one action to be taken using the shared resource; modifying, by the first device, the state of the third device from the owner state to the losing ownership state, wherein third input received from the third device while in the losing ownership state is executed by the first device as the third input is received and a result obtained by executing the third input is sent to each of the plurality of devices, wherein the third input represents at least one action to be taken using the shared resource; modifying, by the first device, the state of the third device from the losing ownership state to the not owner state upon expiration of the timer; modifying, by the first device, the state of the second device from the gaining ownership state to the owner state upon expiration of the timer; executing, by the first device, the second input once the second device is in the owner state; and sending, by the first device, a result obtained by executing the second input to each of the plurality of devices.
  • 17. The method of claim 16 further comprising: determining, by the first device, that the second device is no longer part of the collaboration session but is currently listed as being in the owner state; and modifying, by the first device, the state of the first device from the not owner state to the owner state without passing through the gaining ownership state.
  • 18. The method of claim 16 further comprising: receiving, by the first device, fourth input from a fourth device of the plurality of devices, wherein the fourth input represents at least one action to be taken using the shared resource; identifying, by the first device, that the fourth device is not in either the owner state or the gaining ownership state; identifying, by the first device, that the fourth input does not represent a request to take control of the shared resource; and ignoring the fourth input.
  • 19. The method of claim 18 further comprising: receiving, by the first device, a cursor position for each of the plurality of devices including the second and third devices; and sending, by the first device, cursor information for each of the plurality of devices to each of the other devices, wherein the cursor information includes the cursor position and a cursor type identifier, wherein the cursor type identifier distinguishes the cursor belonging to whichever device of the plurality of devices is currently in the owner state and the losing ownership state, and wherein the cursor information is sent regardless of the state of the device from which the cursor position was received.
  • 20. A device comprising: a network interface; a processor coupled to the network interface; and a memory coupled to the processor and configured to store a plurality of instructions executable by the processor, the instructions including instructions for: monitoring a state of each of a plurality of devices as one of an owner state, a not owner state, a gaining ownership state, and a losing ownership state, wherein the plurality of devices are involved in a collaboration session that uses a shared resource being shared by the device; receiving first input from a second device of the plurality of devices via the network interface; determining whether the first input is an executable input, wherein an executable input is an input that requires control of the shared resource used in the collaborative session; identifying that the second device is in the not owner state and that a third device of the plurality of devices is currently in the owner state, wherein the not owner state indicates that the second device is currently not in control of the shared resource and wherein the owner state indicates that the third device is currently in control of the shared resource; starting a timer, wherein control of the shared resource will automatically pass from the third device to the second device when a transition period defined by the timer expires, wherein the timer is started in response to the first input; modifying the state of the second device from the not owner state to the gaining ownership state, wherein second input received via the network interface from the second device while in the gaining ownership state is buffered by the first device as the second input is received and not sent to each of the plurality of devices, wherein the second input represents at least one action to be taken using the shared resource; modifying the state of the third device from the owner state to the losing ownership state, wherein third input received via the network interface from the third device while in the losing ownership state is executed by the first device as the third input is received and a result obtained by executing the third input is sent to each of the plurality of devices, wherein the third input represents at least one action to be taken using the shared resource; modifying the state of the third device from the losing ownership state to the not owner state upon expiration of the timer; modifying the state of the second device from the gaining ownership state to the owner state upon expiration of the timer; executing the second input once the second device is in the owner state; and sending a result obtained by executing the second input to each of the plurality of devices.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/057,872, filed on Oct. 18, 2013, entitled SYSTEM AND METHOD FOR VIRTUAL PARALLEL RESOURCE MANAGEMENT, which published as U.S. Application Publication No. 2015-0113119 on Apr. 23, 2015. U.S. patent application Ser. No. 14/057,872 is incorporated by reference herein in its entirety.

Related Publications (1)
Number Date Country
20160277307 A1 Sep 2016 US
Continuations (1)
Number Date Country
Parent 14057872 Oct 2013 US
Child 15166375 US