Peer name resolution protocol simple application program interface

Abstract
An application program interface (API) for sending and receiving endpoint registration data and peer-to-peer network cloud data has a registration call for adding endpoint data to a peer-to-peer network. The API may receive explicit data regarding address information or may be instructed to select and maintain suitable address information as the topology of the peer-to-peer network changes. Blocking and non-blocking calls are exposed for retrieving peer-to-peer network endpoint data.
Description
BACKGROUND

Peer-to-peer networking offers users many opportunities for collaboration and sharing by connecting computers associated by geography or network characteristics. The chance to offer and discover services, files, data and programmatic offerings increases flexibility and utilization of computing resources.


A peer-to-peer network may include similar groupings of computers in clusters known as clouds. Clouds may be identified by their scope or by other attributes, such as the department they are associated with. Multiple clouds may exist within a given company or organizational entity. Offerings available on the peer-to-peer network are known as endpoints. Endpoints may be computers, files or programs. However, endpoints may also include services that are available at more than one physical location on the peer-to-peer network, that is, a service may have multiple Internet Protocol addresses and/or IP ports.


Endpoints that wish to join or maintain contact with a peer-to-peer network may use a service called the “Peer Name Resolution Protocol” (PNRP). PNRP is currently accessible via two mechanisms, a GetAddrInfo application program interface (API) or a Winsock Namespace Provider API. The GetAddrInfo API is relatively simple but does not make available all the functionality of PNRP. The Namespace Provider API supports all the features of PNRP but has been found cumbersome and difficult to use by some developers. Beyond the coding challenges, the prior art APIs increased the difficulty of debugging systems incorporating PNRP. The adoption of PNRP, and consequently of the robust peer-to-peer networking features available through PNRP, has been hampered by the difficulty of using the currently available APIs.


SUMMARY

The Simple PNRP API exposes all the functions available in PNRP while simplifying the programming and management overhead associated with building and maintaining peer-to-peer networks. Broadly, there is one class of calls for registering and updating endpoint information in a peer-to-peer network and another class of calls for discovering endpoints in the peer-to-peer network. The discovery calls may be blocking or non-blocking. The use of these simplified calls is expected to increase the utilization of PNRP by streamlining both the development and use of PNRP for peer-to-peer networks.


The Simple PNRP APIs greatly ease the developer's burden for coding PNRP interfaces and simplify the debugging process during development and testing. The Simple PNRP APIs are expected to significantly advance the adoption of PNRP and, through that, the adoption of peer-to-peer networking in an increasing number of consumer, commercial, and business applications.




BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a simplified and representative block diagram of a computer network;



FIG. 2 is a block diagram of a computer that may be connected to the network of FIG. 1;



FIG. 3 is a simplified and representative block diagram of a peer-to-peer network; and


FIG. 4 is a simplified and representative block diagram of another peer-to-peer network.




DETAILED DESCRIPTION

Although the following text sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the claims set forth at the end of this disclosure. The detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical, if not impossible. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims.


It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based on any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this patent is referred to in this patent in a manner consistent with a single meaning, that is done for the sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based on the application of 35 U.S.C. § 112, sixth paragraph.


Much of the inventive functionality and many of the inventive principles are best implemented with or in software programs or instructions and integrated circuits (ICs) such as application specific ICs. It is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. Therefore, in the interest of brevity and minimization of any risk of obscuring the principles and concepts in accordance with the present invention, further discussion of such software and ICs, if any, will be limited to the essentials with respect to the principles and concepts of the preferred embodiments.



FIGS. 1 and 2 provide a structural basis for the network and computational platforms related to the instant disclosure.



FIG. 1 illustrates a network 10. The network 10 may be the Internet, a virtual private network (VPN), or any other network that allows one or more computers, communication devices, databases, etc., to be communicatively connected to each other. The network 10 may be connected to a personal computer 12, and a computer terminal 14 via an Ethernet 16 and a router 18, and a landline 20. The Ethernet 16 may be a subnet of a larger Internet Protocol network. Other networked resources, such as projectors or printers (not depicted), may also be supported via the Ethernet 16 or another data network. On the other hand, the network 10 may be wirelessly connected to a laptop computer 22 and a personal data assistant 24 via a wireless communication station 26 and a wireless link 28. Similarly, a server 30 may be connected to the network 10 using a communication link 32 and a mainframe 34 may be connected to the network 10 using another communication link 36. The network 10 may be useful for supporting peer-to-peer network traffic.



FIG. 2 illustrates a computing device in the form of a computer 110. Components of the computer 110 may include, but are not limited to a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.


Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, FLASH memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.


The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 2 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.


The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 2 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.


The drives and their associated computer storage media discussed above and illustrated in FIG. 2 provide storage of computer readable instructions, data structures, program modules and other data for the computer 110. In FIG. 2, for example, hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and cursor control device 161, commonly referred to as a mouse, trackball or touch pad. A camera 163, such as a web camera (webcam), may capture and input pictures of an environment associated with the computer 110, such as providing pictures of users. The webcam 163 may capture pictures on demand, for example, when instructed by a user, or may take pictures periodically under the control of the computer 110. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through an input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a graphics controller 190. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.


The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 2. The logical connections depicted in FIG. 2 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.


When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 2 illustrates remote application programs 185 as residing on memory device 181.


The communications connections 170, 172 allow the device to communicate with other devices. The communications connections 170, 172 are an example of communication media. The communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. A “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Computer readable media may include both storage media and communication media.



FIG. 3 depicts an exemplary peer-to-peer network that may be similar to or coupled to the network 10 of FIG. 1. A peer-to-peer network 200 may have individual clouds 202, 204, 206 coupled by networks 208, 210, 212. The networks may be wired or wireless and may support Internet protocol version 6 (IPv6). Cloud 202 may have computers 214, 216. Each computer may support multiple endpoints, for example, computer 216 is shown supporting endpoint 1 218 and endpoint 2 220. Cloud 204 is shown having computers 222 and 224. Computer 224 is shown supporting three endpoints 226, 228 and 230. Cloud 206 is shown having computers 232 and 234. For the sake of this example, the computers of cloud 206 are not shown supporting any endpoints. In this exemplary embodiment, each computer and its associated endpoints are shown within their respective clouds 202, 204, 206, indicating perhaps organization by network topology.


Each of the endpoints 218, 220, 226, 228, 230 may be processes, files, IP addresses, and the like. Each endpoint must be explicitly registered in order to be discovered on the peer-to-peer network 200, within its respective cloud. For example, when one endpoint 218 wishes to register in cloud 202, it may use the PeerPNRPRegister call documented below. The PeerPNRPRegister call may be restricted to its own link-local cloud 202. Similarly, an endpoint in cloud 204, for example, endpoint 230, may register locally in cloud 204. The PeerPNRPUpdateRegistration call may be used when data about the endpoint has changed, for example, an IP address. When an endpoint wishes to remove itself from the peer-to-peer network a PeerPNRPUnregister call may be issued. The methods by which peer-to-peer network registration information is propagated through a peer-to-peer network are known and well documented and are not discussed further here.



FIG. 4 represents a more complex embodiment of peer-to-peer networking. The peer-to-peer network 300 has clouds 302, 304 and 306. The clouds may be coupled by respective network connections 308, 310, and 312. Computer 314 in cloud 302 may have one or more endpoints (not depicted). Computer 316 is outside any cloud but has endpoint 1 322 in cloud 302 and endpoint 2 324 in cloud 304. The computer 316 configured in this manner may be attached to different points in a topologically organized network and may be able to publish different endpoints in different networks.


Computer 318 is shown in cloud 306 while computer 320 is shown configured in both clouds 304 and 306. In this case, the computer 320 may be a logical member of both clouds 304 and 306. It may publish the endpoint 326 in both clouds simultaneously.


Input and output information and structural documentation for registration-oriented calls follow. After registration, a handle may be returned for use in future calls regarding a particular endpoint having that handle.


Any call that returns endpoint data may return a null set, in other words, no data. This may be the case when the name registration data for a cloud or endpoint cannot be matched to an existing entity. In this case, the return with no data is significant and useful.


PeerPnrpRegister—This method is used to register a PNRP endpoint. The infrastructure will pick addresses in all clouds to register, and watch for address change events, re-registering as necessary.


Syntax

HRESULT WINAPI PeerPnrpRegister(
  IN  PCWSTR                        pcwzPeerName,
  IN  PPEER_PNRP_REGISTRATION_INFO  pRegistrationInfo,
  OUT PHANDLE                       phRegistration);


Arguments

pcwzPeerName: Name to register.
pRegInfo: Optional information about the registration. NULL will register all addresses in all clouds. See Notes.
phRegistration: Handle for the registration allowing for updates and unregistration.


typedef struct peer_pnrp_register_info_tag {
    PWSTR      pwzCloudName;
    ULONG      cAddresses;
    SOCKADDR **ppAddresses;
    WORD       wPort;
    PWSTR      pwzComment;
    ULONG      cbPayload;
    PBYTE      pbPayload;
} PEER_PNRP_REGISTER_INFO, *PPEER_PNRP_REGISTER_INFO;










The PEER_PNRP_REGISTER_INFO provides additional information on how registrations should be performed. Conceptually, the simple mode for the API is passing NULL for this argument. Complex settings are accessed by actually using the structure.

pwzCloudName: Name of the cloud in which to register the peername, typically retrieved via PeerPnrpGetCloudInfo. Also understands the following special values:
  * NULL: the peername will be registered in all clouds.
  * PEER_PNRP_ALL_LINK_CLOUDS: the peername will be registered in all link-local clouds.
  * PEER_PNRP_GLOBAL_CLOUD: the peername will only be registered in the global cloud.
cAddresses: Count of the number of addresses to be published. Max is 4. The value may be 0 only if a payload is specified. cAddresses may also be the special value PEER_PNRP_AUTO_ADDRESSES, which will cause the infrastructure to automatically choose good addresses for each specified cloud.
ppAddresses: Array of address/socket information that will be published with the name.
wPort: Only used when cAddresses equals PEER_PNRP_AUTO_ADDRESSES. Specifies which port to publish with the address chosen by the API. See further notes below.
pwzComment: Comment field string for inclusion in the CPA.
cbPayload: The byte count of the payload.
pbPayload: The payload information for the registration. Required if cAddresses equals 0.


The pRegInfo in the call to PeerPnrpRegister may be NULL. This is equivalent to passing a PEER_PNRP_REGISTER_INFO with the following values:

pwzCloudName: NULL (all clouds)
cAddresses: PEER_PNRP_AUTO_ADDRESSES
ppAddresses: NULL
wPort: 0
pwzComment: NULL
cbPayload: 0
pbPayload: NULL


When PEER_PNRP_AUTO_ADDRESSES is used (or NULL is passed for pRegInfo), not only does the API pick good values for addresses to register, but it keeps the registrations up to date. As new clouds come up, the infrastructure will automatically register in them. If the addresses on the local machine change, current registrations will be updated with the new addresses.
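By way of example, and not limitation, the following C-language sketch illustrates a simple-mode registration in which NULL is passed for the registration information, followed by eventual removal of the endpoint using the PeerPnrpUnregister call documented below. The header name p2p.h, the peername used, and the error-handling style are illustrative assumptions rather than requirements of the API.

#include <windows.h>
#include <p2p.h>   /* assumed header exposing the PNRP simple API declarations */

/* Minimal sketch: register an endpoint in simple mode and later remove it.
   The peername "0.example" is hypothetical. */
HRESULT RegisterExampleEndpoint(void)
{
    HANDLE  hRegistration = NULL;
    HRESULT hr;

    /* NULL registration info: the infrastructure picks addresses in all
       clouds and re-registers automatically as addresses change. */
    hr = PeerPnrpRegister(L"0.example", NULL, &hRegistration);
    if (FAILED(hr))
        return hr;

    /* ... the endpoint is now discoverable; participate in the network ... */

    /* Remove the endpoint registration from the peer-to-peer network. */
    PeerPnrpUnregister(hRegistration);
    return S_OK;
}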


PeerPnrpUpdateRegistration—This method is used to update a registration of a PNRP endpoint.


Syntax

HRESULT WINAPI PeerPnrpUpdateRegistration(
  IN  HANDLE                        hRegistration,
  IN  PPEER_PNRP_REGISTRATION_INFO  pRegistrationInfo);


Arguments

hRegistration: Handle to the registration from PeerPnrpRegister.
pRegInfo: The updated registration info.


Not all aspects of a registration may be changed. Specifically, the cloud name may not be changed, and cAddresses cannot change between auto-selected addresses and specified addresses. The updated registration data may include data specifying one or more clouds and data about one or more address/socket information pairs, although for a given pair, the address or socket information may be null.
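By way of example, and not limitation, the following sketch updates only the comment and payload of an existing registration while leaving the cloud selection and automatic address selection unchanged. The sketch assumes that PEER_PNRP_REGISTER_INFO, defined above, is the structure named PEER_PNRP_REGISTRATION_INFO in the function prototypes; the comment text and payload bytes are hypothetical.

#include <windows.h>
#include <p2p.h>   /* assumed header, as in the earlier sketch */

/* Sketch: update the comment and payload of a registration that was made
   with automatic address selection. */
HRESULT UpdateExampleRegistration(HANDLE hRegistration)
{
    static BYTE newPayload[] = { 0x01, 0x02, 0x03 };    /* hypothetical payload */
    PEER_PNRP_REGISTER_INFO info = { 0 };

    info.pwzCloudName = NULL;                           /* cloud selection may not change */
    info.cAddresses   = PEER_PNRP_AUTO_ADDRESSES;       /* keep automatic address selection */
    info.wPort        = 0;
    info.pwzComment   = L"updated comment";
    info.cbPayload    = sizeof(newPayload);
    info.pbPayload    = newPayload;

    return PeerPnrpUpdateRegistration(hRegistration, &info);
}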


PeerPnrpUnregister This method removes a PNRP endpoint registration.


Syntax

VOID WINAPI PeerPnrpUnregister(  IN   HANDLE hRegistration);


Arguments:

hRegistration: Handle to the registration from PeerPnrpRegister.


A participant in the peer-to-peer network 200 may wish to gather information about resources available on the peer-to-peer network, for example, other endpoints. The Simple PNRP API supports several calls for determining information about clouds and other endpoints. Cloud information may be returned by issuing a PeerPnrpGetCloudInfo call, as documented below.


PeerPnrpGetCloudInfo This method will retrieve all cloud names.


Syntax

HRESULT WINAPI PeerPnrpGetCloudInfo(
  ULONG                 *pcNumClouds,
  PPEER_PNRP_CLOUD_INFO *ppCloudInfo);


Arguments

pcNumClouds: The number of cloud information structures pointed to by ppCloudInfo.
ppCloudInfo: The cloud information. This param must be freed via PeerFreeData.


typedef struct peer_pnrp_cloud_info_tag {
    WCHAR wzCloudName[MAX_CLOUD_NAME];
    DWORD dwScope;
    DWORD dwScopeId;
} PEER_PNRP_CLOUD_INFO, *PPEER_PNRP_CLOUD_INFO;
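By way of example, and not limitation, the following sketch enumerates the clouds visible to the local node and prints each cloud name, freeing the returned array via PeerFreeData as noted in the Arguments table. The header names and the output formatting are illustrative assumptions.

#include <windows.h>
#include <stdio.h>
#include <p2p.h>   /* assumed header for the PNRP simple API declarations */

/* Sketch: list all clouds known to the local node. */
HRESULT PrintCloudNames(void)
{
    ULONG                 cClouds = 0;
    PPEER_PNRP_CLOUD_INFO pClouds = NULL;
    HRESULT               hr;

    hr = PeerPnrpGetCloudInfo(&cClouds, &pClouds);
    if (FAILED(hr))
        return hr;

    for (ULONG i = 0; i < cClouds; i++)
    {
        /* wzCloudName, dwScope and dwScopeId are the fields defined above. */
        wprintf(L"cloud %lu: %s (scope %lu, scope id %lu)\n",
                i, pClouds[i].wzCloudName,
                (ULONG)pClouds[i].dwScope, (ULONG)pClouds[i].dwScopeId);
    }

    PeerFreeData(pClouds);   /* the cloud information must be freed via PeerFreeData */
    return S_OK;
}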









There are two options for retrieving information about endpoints in the peer-to-peer network. The first initiates processes associated with retrieving endpoints from peernames and associated data in a synchronous (blocking) manner and returns results when the resolve process is completed. A second option resolves endpoints in a two-step process. The first step initiates the data gathering in an asynchronous (non-blocking) manner. In most cases, this asynchronous method may offer more utility by allowing other activities, including other peer-to-peer network activities, to take place in parallel. The second step involves retrieving the specific endpoint data accumulated during the non-blocking resolve step.


The synchronous (blocking) call is documented below.


PeerPnrpResolve This method performs a synchronous (blocking) resolve.


Syntax

HRESULT WINAPI PeerPnrpResolve(
  IN     PCWSTR                    pcwzPeerName,
  IN     PCWSTR                    pcwzCloudName OPTIONAL,
  IN OUT ULONG                     *pcEndpoints,
  OUT    PPEER_PNRP_ENDPOINT_INFO  *ppEndPointInfo);


Arguments

pcwzPeerName: The name to be resolved.
pcwzCloudName: The cloud in which to perform the resolve. If NULL, resolves will be performed in all clouds. If PEER_PNRP_ALL_LINK_CLOUDS, resolves in all link-local clouds. If PEER_PNRP_GLOBAL_CLOUD, the resolve will only take place in the global cloud.
pcEndpoints: On input, specifies the desired number of results. On output, specifies the actual number pointed to by ppEndPointInfo. Max is 500.
ppEndPointInfo: Returns an array of the resultant PEER_PNRP_ENDPOINT_INFO information. Must be freed by a call to PeerFreeData.


For non-blocking functionality, use PeerPnrpStartResolve. To use a specific timeout, use the asynchronous version.


If pcwzCloudName is NULL and the resolve is conducted in all clouds, then resolves will be issued in each cloud simultaneously. The method will return as soon as it has received enough results from any combination of clouds.
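By way of example, and not limitation, the following sketch performs a blocking resolve for up to four endpoints in all clouds and treats an empty result set as a meaningful answer, as discussed above. The header name, peername parameter, and output formatting are illustrative assumptions.

#include <windows.h>
#include <stdio.h>
#include <p2p.h>   /* assumed header for the PNRP simple API declarations */

/* Sketch: synchronous (blocking) resolve of a peername in all clouds. */
HRESULT ResolveBlocking(PCWSTR pcwzPeerName)
{
    ULONG                    cEndpoints = 4;     /* desired number of results; max is 500 */
    PPEER_PNRP_ENDPOINT_INFO pEndpoints = NULL;
    HRESULT                  hr;

    hr = PeerPnrpResolve(pcwzPeerName, NULL /* all clouds */, &cEndpoints, &pEndpoints);
    if (FAILED(hr))
        return hr;

    /* cEndpoints now holds the actual number of results; zero results is
       itself significant, indicating the name could not be matched. */
    for (ULONG i = 0; i < cEndpoints; i++)
        wprintf(L"resolved peer id: %s (%lu address(es))\n",
                pEndpoints[i].pwzPeerId, pEndpoints[i].cAddresses);

    PeerFreeData(pEndpoints);   /* results must be freed by a call to PeerFreeData */
    return S_OK;
}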


The asynchronous (non-blocking) call and the related calls for stopping the resolve process and retrieving results when available are documented below.


PeerPnrpStartResolve This method performs an asynchronous (non-blocking) resolve. This is the recommended way of performing resolves, especially if multiple endpoints are desired.


Syntax

HRESULT WINAPI PeerPnrpStartResolve(
  IN  PCWSTR   pcwzPeerName,
  IN  PCWSTR   pcwzCloudName,
  IN  ULONG    cMaxEndpoints,
  IN  HANDLE   hEvent,
  OUT PHANDLE  phResolve);


Arguments

pcwzPeerName: The name to be resolved.
pcwzCloudName: The cloud name in which to perform the resolve. If NULL, resolves will be performed in all clouds.
cMaxEndpoints: The maximum number of endpoints to return to the caller.
hEvent: An event handle to signal when new resolves are ready for consumption by calling PeerPnrpGetEndpoint.
phResolve: Handle which can be used to cancel the resolve via PeerPnrpStopResolve.


As results are found, the hEvent gets signaled. The application may then call PeerPnrpGetEndpoint to retrieve the resolved endpoint(s). In another embodiment, results may be returned via a callback to the requesting process.


PeerPnrpStopResolve This method will cancel an in-progress resolve from a call to PeerPnrpStartResolve.


Syntax

VOID WINAPI PeerPnrpStopResolve(  IN HANDLE hResolve);


Arguments

hResolve: The resolve handle from PeerPnrpStartResolve.


When the resolve process has gathered enough data from the peer-to-peer network, data about the endpoints is made available. The data may then be retrieved in the second step using the PeerPnrpGetEndpoint call.


PeerPnrpGetEndpoint This method is used to retrieve endpoints from a previous call to PeerPnrpStartResolve.


Syntax

HRESULT WINAPI PeerPnrpGetEndpoint(
  IN  HANDLE                    hResolve,
  OUT PPEER_PNRP_ENDPOINT_INFO  *pEndpoint);


Arguments

hResolve: The resolve handle from PeerPnrpStartResolve.
pEndpoint: A single endpoint which was resolved, as a PeerPnrpEndpointInfo struct.


Errors

PEER_E_PENDING: The API was called before a resolve was ready.
PEER_E_NO_MORE: No more endpoints are available; this resolve is complete.


This method is part of the asynchronous resolve PNRP APIs that also include PeerPnrpStartResolve and PeerPnrpStopResolve.
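By way of example, and not limitation, the following sketch combines PeerPnrpStartResolve, PeerPnrpGetEndpoint and PeerPnrpStopResolve into a non-blocking resolve loop driven by the event handle. The thirty-second wait, the maximum of ten endpoints, and the assumption that each returned endpoint is released with PeerFreeData are illustrative choices rather than requirements of the API.

#include <windows.h>
#include <stdio.h>
#include <p2p.h>   /* assumed header for the PNRP simple API declarations */

/* Sketch: asynchronous (non-blocking) resolve. The event is signaled as
   results become available; endpoints are drained with PeerPnrpGetEndpoint
   until PEER_E_NO_MORE indicates the resolve is complete. */
HRESULT ResolveAsync(PCWSTR pcwzPeerName)
{
    HANDLE  hEvent   = CreateEventW(NULL, FALSE, FALSE, NULL);  /* auto-reset event */
    HANDLE  hResolve = NULL;
    HRESULT hr;

    if (hEvent == NULL)
        return HRESULT_FROM_WIN32(GetLastError());

    /* Resolve in all clouds (NULL cloud name), asking for at most 10 endpoints. */
    hr = PeerPnrpStartResolve(pcwzPeerName, NULL, 10, hEvent, &hResolve);
    if (SUCCEEDED(hr))
    {
        BOOL fDone = FALSE;
        while (!fDone && WaitForSingleObject(hEvent, 30000) == WAIT_OBJECT_0)
        {
            for (;;)   /* drain everything currently available */
            {
                PPEER_PNRP_ENDPOINT_INFO pEndpoint = NULL;
                hr = PeerPnrpGetEndpoint(hResolve, &pEndpoint);
                if (hr == PEER_E_NO_MORE) { fDone = TRUE; break; }  /* resolve complete */
                if (hr == PEER_E_PENDING) { break; }                /* wait for more results */
                if (FAILED(hr))           { fDone = TRUE; break; }

                wprintf(L"endpoint: %s\n", pEndpoint->pwzPeerId);
                PeerFreeData(pEndpoint);   /* assumed release of a single endpoint */
            }
        }
        if (hr == PEER_E_NO_MORE)
            hr = S_OK;                   /* completed normally */
        PeerPnrpStopResolve(hResolve);   /* cancel or release the resolve handle */
    }

    CloseHandle(hEvent);
    return hr;
}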


The details of a structure used to contain data about a PNRP endpoint are documented below.


PeerPnrpEndpointInfo The main data structure used to contain a PNRP Endpoint.


Syntax

typedef struct peer_pnrp_endpoint_info_tag {
    PWSTR      pwzPeerId;
    ULONG      cAddresses;
    PSOCKADDR *ppAddresses;
    ULONG      cbPayload;
    PVOID      pPayload;
} PEER_PNRP_ENDPOINT_INFO, *PPEER_PNRP_ENDPOINT_INFO;


Elements

pwzPeerId: The complete Peer ID of the endpoint.
cAddresses: Number of address/port pairs associated with the endpoint.
ppAddresses: Array of address/port information that will be published with the name.
cbPayload: The byte count of the payload.
pPayload: The payload information for the registration.
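By way of example, and not limitation, the following sketch walks the address array of a resolved endpoint using the fields defined above, converting each SOCKADDR to text with the standard getnameinfo routine from ws2tcpip.h. The choice of getnameinfo is illustrative, and the sketch assumes Winsock has already been initialized.

#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <p2p.h>   /* assumed header for the PNRP simple API declarations */

/* Sketch: print the address/port pairs carried by a resolved endpoint.
   Assumes WSAStartup has been called before this routine runs. */
void PrintEndpointAddresses(const PEER_PNRP_ENDPOINT_INFO *pEndpoint)
{
    for (ULONG i = 0; i < pEndpoint->cAddresses; i++)
    {
        char host[NI_MAXHOST]    = "";
        char service[NI_MAXSERV] = "";
        int  cbAddr = (pEndpoint->ppAddresses[i]->sa_family == AF_INET6)
                          ? (int)sizeof(SOCKADDR_IN6)
                          : (int)sizeof(SOCKADDR_IN);

        if (getnameinfo(pEndpoint->ppAddresses[i], cbAddr,
                        host, sizeof(host), service, sizeof(service),
                        NI_NUMERICHOST | NI_NUMERICSERV) == 0)
        {
            printf("  address %lu: %s port %s\n", (ULONG)i, host, service);
        }
    }
}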


Although the foregoing text sets forth a detailed description of numerous different embodiments of the invention, it should be understood that the scope of the invention is defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment of the invention because describing every possible embodiment would be impractical, if not impossible. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims defining the invention.


Thus, many modifications and variations may be made in the techniques and structures described and illustrated herein without departing from the spirit and scope of the present invention. Accordingly, it should be understood that the methods and apparatus described herein are illustrative only and are not limiting upon the scope of the invention.

Claims
  • 1. A computer-readable medium having computer-executable instructions for implementing a method of communicating data for the lifecycle support of participation in a peer-to-peer network, the method comprising: processing a call for notifying other nodes in a cloud regarding a change of a status of an endpoint in the peer-to-peer network; processing a call for exchanging name registration data about at least one other endpoint in the peer-to-peer network; and processing a call for reporting one of the name registration data about the at least one other endpoint in the peer-to-peer network and a null value, wherein the name registration data comprises a data structure having at least one of a peer identifier, a count of address/port pairs, an array of address/port pairs, and a payload corresponding to the at least one other endpoint and the null value indicates no endpoint was found.
  • 2. The computer-readable medium having computer-executable instructions of claim 1, wherein processing the call for notifying other nodes in a cloud regarding a change in the status of the endpoint comprises: receiving a call for distributing endpoint data having a plurality of call parameters including a peername; and responding with a parameter comprising a registration handle associated with the endpoint.
  • 3. The computer-readable medium having computer-executable instructions of claim 1, wherein processing a call for notifying other nodes in a cloud regarding a change in the status of the endpoint comprises: receiving a PeerPnrpRegister call having a plurality of call parameters comprising at least one of a peername, cloud data specifying one or more clouds, and one or more address/socket information pairs.
  • 4. The computer-readable medium having computer-executable instructions of claim 1, wherein processing a call for notifying other nodes in a cloud regarding a change in the status of the endpoint comprises receiving a call for changing registration data about an endpoint having a plurality of call parameters comprising at least one of a registration handle associated with the endpoint and an updated registration data, the updated registration data including at least one of cloud data specifying one or more clouds, and one or more address/socket information pairs.
  • 5. The computer-readable medium having computer-executable instructions of claim 1, wherein processing a call for notifying other nodes in a cloud regarding a change in the status of the endpoint comprises receiving a call for removing registration data associated with an endpoint having a call parameter comprising a registration handle associated with the peer name.
  • 6. The computer-readable medium having computer-executable instructions of claim 1, wherein processing the call for exchanging name registration data about at least one other endpoint in the peer-to-peer network comprises: receiving a request for gathering endpoint information in a blocking manner having a plurality of call parameters comprising at least one of a peername, a cloud name, and a maximum number of results to return; and responding with endpoint information about zero or more instances of endpoints.
  • 7. The computer-readable medium having computer-executable instructions of claim 1, wherein processing a call for exchanging name registration data about at least one other endpoint in the peer-to-peer network comprises: receiving a request for gathering endpoint information in a non-blocking manner having a plurality of call parameters comprising at least one of a peername, a cloud name, and a maximum number of results to return.
  • 8. The computer-readable medium having computer-executable instructions of claim 7, further comprising returning a result by one of a callback and a signal having an event handle.
  • 9. The computer-readable medium having computer-executable instructions of claim 1, wherein processing a call for exchanging name registration data about at least one other endpoint in the peer-to-peer network comprises: receiving a request for halting an in-process resolve request having a call parameters comprising at least an event handle associated with the in-process resolve request.
  • 10. The computer-readable medium having computer-executable instructions of claim 1, wherein processing a call for reporting the data about the at least one other endpoint in the peer-to-peer network comprises: receiving a request for receiving data corresponding to results of a prior resolve request having at least a call parameter comprising an event handle associated with the prior resolve request, wherein the event handle points to a data structure having information about zero or more other endpoints, the data structure having entries corresponding to zero or more peer identifiers of other endpoints, a number of address/port pairs associated with the other endpoint, a byte count of the payload, and a payload of information for the other endpoint(s).
  • 11. A method of communicating between a requesting process and a serving process supporting communication in a peer-to-peer network comprising: processing a call for exchanging data between a first endpoint and at least one other entity in the peer-to-peer network, with at least one other entity being one of a member of a cloud and another endpoint; and processing a call for reporting the data about the at least one other endpoint in the peer-to-peer network.
  • 12. The method of claim 11, wherein the processing a call for exchanging data comprises: issuing, by the requesting process, a PeerPnrpGetCloudInfo call to retrieve all cloud names; receiving, by the serving process, the PeerPnrpGetCloudInfo call; and issuing, by the serving process to the requesting process, a response including a number of cloud names returned and information corresponding to each of the returned cloud names.
  • 13. The method of claim 11, wherein the processing of a call for exchanging data comprises: issuing, by the requesting process, a PeerPnrpResolve request including at least one of a peername, a cloud name, and a maximum number of results to return; receiving, by the serving process, the PeerPnrpResolve request, processing the request in a blocking manner; and issuing, by the serving process to the requesting process, a response including an event handle for signaling availability of results and for use in retrieving the results.
  • 14. The method of claim 11, wherein the processing a call for exchanging data comprises: issuing, by the requesting process, a PeerPnrpStartResolve request including at least one of a peername, a cloud name, and a maximum number of results to return; receiving, by the serving process, the PeerPnrpStartResolve request, processing the request in a non-blocking manner; and issuing, by the serving process to the requesting process, a response including an event handle for signaling availability of results and for use in retrieving the results.
  • 15. The method of claim 11, wherein the processing a call for exchanging data comprises: issuing, by the requesting process, a PeerPnrpRegister call to add endpoint data to at least one associated cloud, the call including a peername; receiving, by the serving process, the PeerPnrpRegister call and sending data about the endpoint and the peername to zero or more other entities; and issuing, by the serving process, a response to the requesting process including a registration handle associated with the endpoint data.
  • 16. The method of claim 15, wherein the issuing, by the requesting process, a PeerPnrpRegister call, further comprises: issuing, by the requesting process, a PeerPnrpRegister call to add endpoint data to at least one associated cloud, the call including a peername and at least one registration parameter, the registration parameter comprising at least one of a cloud name, a count of addresses to publish, an array of address/socket pairs, a port for an automatically selected address, a comment field, and a payload of information corresponding to the registration.
  • 17. The method of claim 11, wherein the processing a call for exchanging data comprises: issuing, by the requesting process, a PeerPnrpUpdateRegistration call to update endpoint data in at least a portion of the peer-to-peer network, the call including a peername and updated endpoint data; and receiving, by the serving process, the PeerPnrpUpdateRegistration call and sending data including the peername and updated endpoint data to at least one surrounding entity.
  • 18. A computer adapted for participation in a peer-to-peer network comprising: a network communication device for exchanging data via a network; and a memory storing machine-readable instructions; a processor for executing the machine-readable instructions performing a method comprising: processing a call for registering a peer-to-peer network endpoint, the call having a peer name and a single attribute predefined value; and selecting a valid address to register with the peer-to-peer network.
  • 19. The computer of claim 18 wherein the memory stores instructions for performing the method further comprising: automatically updating the valid address when a host machine address changes.
  • 20. The computer of claim 18 wherein the memory stores instructions for performing the method further comprising: automatically updating the valid address in a new cloud when the new cloud is discovered.