Data storage and retrieval

Information

  • Patent Grant
  • Patent Number
    9,203,928
  • Date Filed
    Monday, January 7, 2013
  • Date Issued
    Tuesday, December 1, 2015
Abstract
A method for data forwarding storage and retrieval in a network of interconnected computer system nodes may include directing data to a computer memory, continuously forwarding the data from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, and retrieving the data in response to an activity.
Description
BACKGROUND

The volume of data that must be stored by individuals, organizations, businesses and government is growing every year. In addition to just keeping up with demand, organizations face other storage challenges. With the move to on-line, real-time business and government, critical data must be protected from loss or inaccessibility due to software or hardware failure. Today, many storage products do not provide complete failure protection and expose users to the risk of data loss or unavailability. For example, many storage solutions on the market today offer protection against some failure modes, such as processor failure, but not against others, such as disk drive failure. Many organizations are exposed to the risk of data loss or data unavailability due to component failure in their data storage system.


The data storage market is typically divided into two major segments, i.e., Direct Attached Storage (DAS) and Network Storage. DAS includes disks connected directly to a server.


Network Storage includes disks that are attached to a network rather than a specific server and can then be accessed and shared by other devices and applications on that network. Network Storage is typically divided into two segments, i.e., Storage Area Networks (SANs) and Network Attached Storage (NAS).


A SAN is a high-speed special-purpose network (or subnetwork) that interconnects different kinds of data storage devices with associated data servers on behalf of a larger network of users. Typically, a SAN is part of the overall network of computing resources for an enterprise. A storage area network is usually clustered in close proximity to other computing resources but may also extend to remote locations for backup and archival storage, using wide area network (WAN) carrier technologies.


NAS is hard disk storage that is set up with its own network address rather than being attached to the local computer that is serving applications to a network's workstation users. By removing storage access and its management from the local server, both application programming and files can be served faster because they are not competing for the same processor resources. The NAS is attached to a local area network (typically, an Ethernet network) and assigned an IP address. File requests are mapped by the main server to the NAS file server.


All of the above share one common feature that can be an Achilles' heel in more ways than one, i.e., data is stored on a physical medium, such as a disk drive, CD drive, and so forth.





BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.



FIG. 1A is a block diagram of an exemplary network.



FIG. 1B is a block diagram of an exemplary continuous data forwarding framework including two user systems.



FIG. 1C is a block diagram of an exemplary framework that includes a user system and a number of network systems.



FIG. 2 is a block diagram of an exemplary user system.



FIG. 3A is a block diagram of an exemplary network system.



FIG. 3B is a block diagram of another exemplary network system.



FIG. 4A is a flow diagram of an exemplary storage process.



FIG. 4B is a flow diagram of another exemplary storage process.



FIG. 5A is a flow diagram of an exemplary data forwarding process.



FIG. 5B is a flow diagram of another exemplary data file forwarding process.



FIG. 5C is a flow diagram of another exemplary data file forwarding process.



FIG. 6A is a flow diagram of an exemplary process for forwarding advertisements.



FIG. 6B is a flow diagram of an exemplary data forwarding process.



FIG. 6C is a flow diagram of an exemplary search process.



FIG. 7 is a flow diagram of an exemplary file retrieval process.



FIG. 8 is a flow diagram of an exemplary file deletion process.



FIG. 9 is an example of a user interface for an instant messaging client program.





DETAILED DESCRIPTION

Unlike peer-to-peer networks, which use data forwarding in a transient fashion so that data is eventually stored on a physical medium such as a disk drive, the present invention is a continuous data forwarding system, i.e., data is stored by continually forwarding it from one node memory to another node memory.


Example Continuous Data Forwarding Frameworks

As shown in FIG. 1A, an exemplary network 10 includes a user system 12 and a number of network systems 14, 16, 18, 20, 22. Each of the network systems 14, 16, 18, 20, 22 can be considered to be a node in the network 10 and one such network system may be designated as a central server, such as network system 14, which may assume a control position in network 10. Each of the nodes 14, 16, 18, 20, 22 may be established as a privately controlled network of peers under direct control of the central server 14. Peered nodes may also be a mix of private and public nodes, and thus not under the direct physical control of the central server 14. The network 10 may also be wholly public where the central server 14 (or servers) has no direct ownership or direct physical control of any of the peered nodes.


As shown in FIG. 1B, an exemplary continuous data forwarding framework 10 includes two user systems 12, 14 (also referred to as client systems) coupled to a number of network systems 16, 18, 20, 22 (also referred to as servers). Each of the network systems 16, 18, 20, 22 is considered to be a node in a network 24 and one such network system may be designated as a host or central server, such as network system 16. As such, network system 16 may assume a control position in network 24. Each of the nodes 16, 18, 20, 22 can be established as a privately controlled network of peers under direct control of the central server 16. Peered nodes can also be a mix of private and public nodes (e.g., the Internet), and thus not under the direct physical control of the central server 16. The network 24 can also be wholly public where the central server 16 (or servers) has no direct ownership or direct physical control of any of the peered nodes.


The continuous data forwarding framework 10 supports communications between computer users, such as users on user systems 12, 14. Computer users on user systems 12, 14 are distributed geographically and communicate using one or more of the network systems 16, 18, 20, 22 in network 24. User systems 12, 14 are connected to network 24 through various communication mediums, such as a modem connected to a telephone line (using, for example, serial line internet protocol (SLIP) or point-to-point protocol (PPP)) or a direct internetwork connection (using, for example, transmission control protocol/internet protocol (TCP/IP)).


Each of the user systems 12, 14 may be implemented using, for example, a general-purpose computer capable of responding to and executing instructions in a defined manner, a personal computer, a special-purpose computer, a workstation, a server, a device, a component, or other equipment or some combination thereof capable of responding to and executing instructions. User systems 12, 14 may receive instructions from, for example, a software application, a program, a piece of code, a device, a computer, a computer system, or a combination thereof, which independently or collectively direct operations, as described herein. These instructions may take the form of one or more communications programs that facilitate communications between the users of client systems 12, 14. For instance, such communications programs may include E-mail programs, Instant Messaging (IM) programs, File Transfer Protocol (FTP) programs, Voice-over-Internet Protocol (VoIP) programs, and so forth. The instructions may be embodied permanently or temporarily in any type of machine, component, equipment, or storage medium.


Client systems 12, 14 include a communications interface (not shown) used by the communications programs to send communications through network 24. The communications may include E-mail, audio data, video data, general binary data, or text data (e.g., encoded in American Standard Code for Information Interchange (ASCII) format).


The network 24 can include a series of portals interconnected through a coherent system. Examples of the network 24 include the Internet, Wide Area Networks (WANs), Local Area Networks (LANs), analog or digital wired and wireless telephone networks (e.g., a Public Switched Telephone Network (PSTN), an Integrated Services Digital Network (ISDN), or a Digital Subscriber Line (xDSL)), or any other wired or wireless network. The network 24 may include multiple networks or sub-networks, each of which may include, for example, a wired or wireless data pathway.


A central server 16 (also referred to as host server) may be connected to network 24 and may be used to facilitate some direct or indirect communications between the client systems 12, 14. As with the client systems 12, 14, central server 16 may be implemented using, for example, a general-purpose computer capable of responding to and executing instructions in a defined manner, a personal computer, a special-purpose computer, a workstation, a server, a device, a component, or other equipment or some combination thereof capable of responding to and executing instructions. Central server 16 may receive instructions from, for example, a software application, a program, a piece of code, a device, a computer, a computer system, or a combination thereof, which independently or collectively direct operations, as described herein. These instructions may take the form of one or more communications programs. For instance, such communications programs may include E-mail programs, IM programs, FTP programs, VoIP programs, and so forth. The instructions may be embodied permanently or temporarily in any type of machine, component, equipment, or storage medium.


Further, central server 16 includes a communications interface (not shown) used by the communications programs to send communications through network 24. The communications may include E-mail, audio data, video data, general binary data, or text data (e.g., encoded in American Standard Code for Information Interchange (ASCII) format).


The user systems 12, 14 can execute an instant messaging (IM) client program. IM programs typically enable users to communicate in real-time with each other in a variety of ways. Most IM programs provide, for example:


(1) Instant messages—send notes back and forth with a friend who is online


(2) Chat—create a chat room with friends or co-workers


(3) Web links—share links to your favorite Web sites


(4) Video—send and view videos, and chat face to face with friends


(5) Images—look at an image stored on your friend's computer


(6) Sounds—play sounds for your friends


(7) Files—share files by sending them directly to your friends


(8) Talk—use the Internet instead of a phone to actually talk with friends


(9) Streaming content—real-time or near-real-time stock quotes and news


(10) Mobile capabilities—send instant messages from your cell phone


Examples of IM communications include those provided by AIM (America Online™ Instant Messenger), Yahoo™ Messenger, MSN™ Messenger, ICQ™, and so forth.


The framework 10 supports these IM communications and enables users to store video, images, sounds, files and other content, which can be included in IM communications. Unlike other systems, such as data storage networks, the framework 10 does not use fixed physical data storage to store data, such as image files and video files, for example. When a request to store data is received by the central server 16 from one of the user systems 12, 14, data is directed to a node in the network 24 where it is then continuously forwarded from node memory to node memory in the network 24 without storing on any physical storage medium such as a disk drive. The forwarded data resides only for a very brief period of time in the memory of any one node in the network 24. Data is not stored on any physical storage medium in any network node.


In a like manner, when a request to retrieve data is received by the central server 16 from a user system 12, 14, the requested data, which is being forwarded from node memory to node memory in the network 24, is retrieved.


Data forwarded in this manner can be segmented and segments forwarded as described above. Still, the segmented data is not stored on any physical storage medium in any network node, but merely forwarded from the memory of one node to the memory of another node.


As shown in FIG. 1C, an exemplary framework 10A includes a user system 12 (also referred to as client systems) and a number of network systems 14, 16, 18, 20, 22. Each of the network systems 14, 16, 18, 20, 22 can be considered to be a node in the framework 10A and one such network system may be designated as a central server, such as network system 14, which may assume a control position in framework 10A. Each of the nodes 14, 16, 18, 20, 22 may be established as a privately controlled network of peers under direct control of the central server 14. Peered nodes may also be a mix of private and public nodes, and thus not under the direct physical control of the central server 14. The framework 10A may also be wholly public where the central server 14 (or servers) has no direct ownership or direct physical control of any of the peered nodes.


In one example, nodes 14, 16, 18, 20 and 22 are considered to be a private network. In a private network, an administrator controls the nodes and may designate which node is the central server. The framework 10A can also include one or more additional nodes, for example, nodes 24, 26 and 28. These nodes 24, 26 and 28 are considered to be part of one or more public networks in which the administrator has little or no control.


Example Selective Data Forwarding

In addition, when a request to store data is received, its ownership and/or data type can be checked against entries in a store maintained by the central server 14. For example, data owned by a paid subscriber may be forwarded from node memory to node memory, and/or data of a particular type, such as an office productivity data file type, an audio data file type, a visual data file type, a video data file type, an object-oriented file type and/or a database data file type. Thus, data forwarding is selective based on ownership and/or type. Certain users can have data forwarded from node memory to node memory, and/or certain types of data can be forwarded from node memory to node memory. Entries in the store can be added, modified or deleted, enabling even more flexibility in selectively forwarding data from node memory to node memory. Such a store can reside locally at the central server 14 or be forwarded from node memory to node memory without storing on any physical storage medium such as a disk drive. In addition to ownership and file type, other data parameters that can be evaluated include file size, file use frequency, user history, user preferences, and premiums paid by users.
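

As a rough illustration, a selective-forwarding check of this kind might be implemented as sketched below; the policy store layout, the example owners and file types, and the is_forwardable helper are hypothetical and not part of the patent (Python is used here purely for illustration):

import os

# Hypothetical policy store; in the described system such a store may reside at the
# central server 14 or itself be forwarded from node memory to node memory.
FORWARDING_POLICY = {
    "owners": {"alice": True, "bob": False},            # e.g., paid subscribers
    "types": {".odt", ".mp3", ".mp4", ".avi", ".db"},   # forwardable data file types
}

def is_forwardable(owner, filename, policy=FORWARDING_POLICY):
    """Return True if the data may be forwarded from node memory to node memory."""
    by_owner = policy["owners"].get(owner, False)
    by_type = os.path.splitext(filename)[1].lower() in policy["types"]
    # Forwarding is selective based on ownership and/or type.
    return by_owner or by_type

# A paid subscriber's file is admitted; an unknown owner's unlisted file type is not.
assert is_forwardable("alice", "notes.txt")
assert not is_forwardable("carol", "setup.exe")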


Example User System

As shown in FIG. 2, the user system 12 can include a processor 30, memory 32 and input/output (I/O) device 34. Memory 32 can include an operating system (OS) 36, such as Linux, Apple™ OS or Windows™, one or more application processes 38, and a storage process 100, explained in detail below. Application processes 38 can include user productivity software, such as OpenOffice or Microsoft Office™. The I/O device 34 can include a graphical user interface (GUI) 40 for display to a user 42.


Example Network Systems

As shown in FIG. 3A, each of the network systems, such as network system 14, can include a processor 50 and memory 52. Memory 52 can include an OS 54, such as Linux, Apple™ OS or Windows™, and a data forwarding process 200, explained in detail below.


As shown in FIG. 3B, each of the network systems, such as network system 14A (any reference herein to system 14 may refer to system 14 of FIG. 3A and/or system 14A of FIG. 3B), can include a processor 50 and memory 52. Memory 52 can include an OS 54, such as Linux, Apple™ OS or Windows™, a data file forwarding process 200, a search process 300 and a retrieval process 400, fully described below.


One network system, such as network system 22, is designated as a deletion node, more fully described below. Memory of the deletion node 22 does not include a data file forwarding process 200, search process 300 and retrieval process 400. Any data file received by the deletion node is not forwarded or saved. New data received in the memory of the deletion node overwrites old data received by the memory of the deletion node. In effect, the deletion node 22 acts as a black hole for data files forwarded to it.
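

A minimal sketch of the deletion node's behavior follows; the single-slot buffer and the receive entry point are assumptions used only to illustrate that newly received data overwrites old data and is neither forwarded nor saved:

class DeletionNode:
    """Acts as a black hole for data files forwarded to it."""

    def __init__(self):
        self._last_received = None   # held briefly in memory, never written to disk

    def receive(self, data: bytes) -> None:
        # New data simply overwrites whatever was previously received; no data file
        # forwarding process, search process or retrieval process runs on this node.
        self._last_received = data

node = DeletionNode()
node.receive(b"data file marked for deletion")
node.receive(b"another data file")   # the previous contents are now gone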


In traditional systems, application processes 38 need to store, delete, search and retrieve data files. In these traditional systems, a data file is stored on local or remote physical devices. And in some systems, this data file can be segmented into different pieces or packets and stored locally or remotely on physical mediums of storage. Use of fixed physical data storage devices adds cost, maintenance and management overhead, and generates a fixed physical record of the data, whether or not that is the desire of the user 42.


The present invention as described in FIGS. 3A and 3B does not use fixed physical data storage to store data files. Herein, the term “data file” is used to represent all file and media types handled by the system, such as, for example, data files, program files, audio files, video files, picture files, and so forth. When a request to store a data file is received by the central server 14 from storage process 100, the data file is directed to a node memory in the framework 10 where it is then continuously forwarded from node memory to node memory in the framework 10 by the data file forwarding process 200 in each of the network nodes without storing on any physical storage medium, such as a disk drive. The forwarded data file resides only for a very brief period of time in the memory of any one node in the framework 10. Data files are not stored on any physical storage medium in any network node.


When a request to retrieve a data file is received by the central server 14 from storage process 100, the requested data file, which is being forwarded from node memory to node memory in the framework 10, is retrieved.


Data files forwarded in this manner can be segmented and segments forwarded as described above. Still, the segmented data is not stored on any physical storage medium in any network node, but forwarded from the memory of one node to the memory of another node.


Data files being forwarded in this manner can be deleted and thus no longer forwarded from node memory to node memory.


Example Storage Processes

As shown in FIG. 4A, storage process 100 includes sending (102) a request to a central server 14 to store or retrieve data. If the request is a retrieve data request, storage process 100 receives the requested data from the central server 14 or node in the network.


If the request to the central server 14 is a store data request, storage process 100 receives (104) an address of a node from the central server 14 and forwards (106) the data to the node memory represented by the received address. Determining an address of a node available to receive the data can be based on one or more factors, such as network traffic analysis, available memory, combinations of factors, and so forth. A time stamp can be applied to the data in the computer memory of the specific node.
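

The client side of storage process 100 might look like the sketch below; the HTTP transport, URL paths and JSON field names are illustrative assumptions, since the patent does not prescribe a particular protocol:

import requests  # assumed transport library for this sketch

CENTRAL_SERVER = "http://central-server.example"   # hypothetical address of central server 14

def store_data(data: bytes) -> str:
    """Send a store request (102), receive a node address (104), forward the data (106)."""
    reply = requests.post(CENTRAL_SERVER + "/store-request").json()
    node_address = reply["node_address"]            # node available to receive the data
    requests.post("http://" + node_address + "/forward", data=data)
    return node_address

def retrieve_data(data_id: str) -> bytes:
    """Send a retrieve request and receive the data from the central server or a node."""
    return requests.get(CENTRAL_SERVER + "/retrieve", params={"id": data_id}).content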


As shown in FIG. 4B, storage process 100A includes sending (102) a request to a central server 14 to store, retrieve or delete a data file. If the request is a retrieve data file request, storage process 100 receives (104) the requested data file from the central server 14 or node in the network. Any reference herein to storage process 100 may refer to storage process 100 of FIG. 4A and/or storage process 100A of FIG. 4B.


If the request to the central server 14 is a store data file request, storage process 100 receives (106) an address of a node from the central server 14 and forwards (108) the data file to the node memory represented by the received address.


Example Data Forwarding Processes


FIGS. 5A, 5B, and 5C are flowcharts illustrating example processes of data forwarding. Any reference herein to process 200 may refer to process 200A of FIG. 5A, process 200B of FIG. 5B, and/or process 200C of FIG. 5C, depending on context of the reference.


As shown in FIG. 5A, data forwarding process 200 includes receiving (202) a request to store or retrieve data. If the received request is a request to store data, data forwarding process 200 determines (204) an address of a node available to receive the data in memory. This determination (204) can include pinging the network and determining which of the nodes in a network is available, or determining which node in the network has the least traffic, or determining which node in the network has the largest available memory, or any combination of these or other factors.
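

One way the determination (204) might combine those factors, assuming each candidate node reports its current traffic level and available memory and answers a ping; the node record fields and the tie-breaking rule are illustrative:

import subprocess

def ping_ok(host: str) -> bool:
    """Availability check: does the node answer a single ping?"""
    return subprocess.call(["ping", "-c", "1", host],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def determine_node(nodes):
    """nodes: list of dicts such as {"addr": "10.0.0.5", "traffic": 0.3, "free_mem": 512}."""
    available = [n for n in nodes if ping_ok(n["addr"])]
    # Prefer the least-trafficked node, breaking ties by largest available memory.
    best = min(available, key=lambda n: (n["traffic"], -n["free_mem"]))
    return best["addr"]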


Process 200 sends (206) a message to the user system with the address of a specific node for the requester to forward the data.


Process 200 detects (208) the presence of data in node memory. Process 200 forwards (210) the data in memory to another node in the network of nodes and continues to repeat detecting (208) and forwarding (210) of the data from node memory to node memory. When data arrives in any node memory, process 200 affixes (212) a time stamp to the data. Additionally, as data enters and exits any node memory, the data may be encrypted and decrypted.
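

The detect-and-forward cycle (208, 210, 212) on a single node might be sketched as follows; the in-memory queue and the send_to and choose_next_node callables are placeholders for whatever transport a deployment actually uses:

import time
from collections import deque

node_memory = deque()   # data resides here only briefly; nothing touches a disk

def receive(data):
    """Called when data arrives in this node's memory; a time stamp is affixed (212)."""
    node_memory.append({"payload": data, "arrived": time.time()})

def forwarding_loop(send_to, choose_next_node, peers):
    while True:
        if node_memory:                                   # detect (208) data in node memory
            item = node_memory.popleft()
            send_to(choose_next_node(peers), item)        # forward (210) to another node memory
        else:
            time.sleep(0.01)                              # nothing resident; poll again shortly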


Forwarding (210) can include pinging the node in the network to determine which of the nodes in the network is available, or determining which node in the network has the least traffic, or determining which node in the network has the largest available memory, or any combination of these or other factors.


In one specific example, at the point of entry to a node, data undergoes an encrypted “handshake” with the node or central server 14 or user. This can be a public or private encryption system, such as the Cashmere system, which can use public-private keys. Cashmere decouples the encrypted forwarding path and message payload, which improves the performance as the source only needs to perform a single public key encryption on each message that uses the destination's unique public key. This has the benefit that only the true destination node will be able to decrypt the message payload and not every node in the corresponding relay group. Cashmere provides the capability that the destination can send anonymous reply messages without knowing the source's identity. This is done in a similar way, where the source creates a reply path and encrypts it in a similar manner as the forwarding path.


In another example, other routing schemes are utilized.


If the received request is a request to retrieve data being continuously forwarded from node memory to node memory, data forwarding process 200 matches (214) the data at the central server 14 using a hash mark or other unique code that can be “sniffed” by the node upon the data entering the node via the encryption handshake. This can occur by pinging the nodes in the network. Process 200 sends (216) the message to return the data to the user directly to the node or node state where the central server 14 believes the data will likely appear. The more the central server 14 can narrow the node state that it pings, the more efficient the retrieval becomes and the less it is burdened by unnecessary messaging traffic to nodes that are not necessary for a transaction between the central server 14 and the node capable of forwarding the data.


Once the correct node receives the message to forward the data in node memory to the requester, process 200 forwards (218) in node memory the data to the requester and forwards (220) a confirmation message that the data has been sent to the user. This routing message may be sent directly to the central server 14 or may be passed to the central server 14 or servers via other node(s) or supernode(s) in the network 10. Upon receiving the requested data, the user's application automatically pings the central server 14 to confirm that the requested data has been received. Thus the network 10 creates data storage without caching, downloading and/or storing the data on any physical storage medium. Data storage and management is accomplished via a continuous routing of the data from node memory to node memory, the forwarded data only downloaded when the user requests the data to be returned to the user from the network 10.
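

A reading of the matching (214) and return (216-220) steps is sketched below, under the assumption that the "hash mark or other unique code" behaves like a SHA-256 digest computed over data entering a node; the function names and return strings are invented for illustration:

import hashlib

def hash_mark(data: bytes) -> str:
    """Unique code the central server 14 matches (214) and a node can 'sniff' on entry."""
    return hashlib.sha256(data).hexdigest()

def sniff_on_entry(entering_data: bytes, wanted_mark: str, requester, forward_to):
    """Run by a node when data enters its memory while a retrieval request is outstanding."""
    if hash_mark(entering_data) == wanted_mark:
        forward_to(requester, entering_data)   # forward (218) the data to the requester
        return "confirmation"                  # confirmation (220) routed back to the central server
    return "not-matched"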


New nodes and node states may be added and/or deleted from the network 10 based upon performance. Users may have access to all nodes or may be segmented to certain nodes or “node states” by the central server(s) or via the specific architecture of the private, public or private-public network.


Individual nodes, node states and supernodes may also be extranet peers, wireless network peers, satellite peered nodes, Wi-Fi peered nodes, broadband networks, and so forth, in public or private networks. Peered nodes or users may be used as routing participants in the network 10 from any valid peer point with the same security systems employed, as well as custom solutions suitable for the rigors of specific deployments, such as wireless encryption schemes for wireless peers, and so forth.


In process 200, rather than have data cached or held in remote servers, hard drives or other fixed storage medium, the data are passed, routed, forwarded from node memory to node memory. The data are never downloaded until the authorized user calls for the data. A user on the system may authorize more than one user to have access to the data.


A primary goal in process 200 is to generate a data storage and management system where the data is never fixed in physical storage, but in fact, is continually being routed/forwarded from node memory to node memory in the network. The path of the nodes to which data is forwarded may also be altered by the central server 14 to adjust for system capacities and to eliminate redundant data paths that may weaken the security of the network by making the data path more predictable.


As shown in FIG. 5B, data file forwarding process 200B includes receiving (202) a request from a source system in a network to store a data file.


Process 200 directs (204) the data file to a computer memory in a network. Process 200 saves (206) a file name of the data file, and in some implementations, a file type, a username and a date stamp, in an index file associated with the central server 14; the actual data contained in the data file is not stored on any physical medium. The index file is used to search for data files during the search process 300, described fully below. Process 200 scrambles (208) a copy of the contents of the data file and saves (210) the copied scrambled data in memory or on a physical storage device associated with the central server 14.


For example, assume a data file named “myfile.txt” includes the following text: This is an example of data contained in an exemplary data file. The text herein is maintained as written in the data file and the data file continuously forwarded from node memory to node memory without storing on a physical medium.


Scrambling (208) a copy of the above data file may, in one example, result in the following scrambled data: to without storing on a physical medium example node this contained exemplary herein file from maintained text data, and the in continuously is an of forwarded memory.


Only this scrambled data, indexed by file name, is saved to physical storage—no unscrambled data file is stored in any physical medium, such as a disk drive. Saving the copied scrambled data aids in maintaining security and in searching for data files being continuously forwarded.
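

A compact sketch of steps 206-210 follows, under the assumption that "scrambling" is a word-level shuffle like the example above; the index list and scrambled_store dictionary stand in for whatever structures the central server 14 actually keeps:

import random

index = []            # file names (and optionally file type, username, date stamp)
scrambled_store = {}  # scrambled copies of file contents, keyed by file name

def register_data_file(filename: str, contents: str) -> str:
    index.append(filename)                         # save (206) the file name in the index
    words = contents.split()
    random.shuffle(words)                          # scramble (208) a copy of the contents
    scrambled_store[filename] = " ".join(words)    # save (210) only the scrambled copy
    return contents   # the unscrambled data file is only ever forwarded in node memory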


Process 200 continuously forwards (212) the data file from the first computer memory to other computer memories in the network without storing on any physical storage device in the network. Continuously forwarding (212) includes detecting a presence of the data file in memory of the specific node of the network and forwarding the data file to another computer memory of a node in the network of interconnected computer system nodes without storing it on any physical storage device.


In one specific example, at the point of entry to a node, forwarded data in the data file undergoes an encrypted “handshake” with the node or central server 14 or user. This can be a public or private encryption system, such as the Cashmere system, which can use public-private keys. Cashmere decouples the encrypted forwarding path and message payload, which improves the performance as the source only needs to perform a single public key encryption on each message that uses the destination's unique public key. This has the benefit that only the true destination node will be able to decrypt the message payload and not every node in the corresponding relay group. Cashmere provides the capability that the destination can send anonymous reply messages without knowing the source's identity. This is done in a similar way, where the source creates a reply path and encrypts it in a similar manner as the forwarding path. In another example, other routing schemes are utilized.


New nodes and node states may be added and/or deleted from the framework 10 based upon performance. Users may have access to all nodes or may be segmented to certain nodes or “node states” by the central server(s) or via the specific architecture of the private, public or private-public network.


Example Measurement in Data Forwarding and Storage

Data forwarded in the network can be measured. For example, the central server 14 maintains a store of data statistics each time new data is requested to be forwarded in the network. The store can be a flat file, a table, or a database file, and is continuously forwarded from node memory to node memory in the network without storing on any physical medium, such as a disk drive. Contents of the store record each time a file is requested to be forwarded, its owner, its type, and/or additional information. In this manner, the central server 14 can generate a list of useful information when requested. More particularly, the central server 14 can account for the number of files, the number of file types, and the gross amount of data (e.g., megabytes) in the system as a whole, or owned per user or group of users. The central server 14 tracks data added and removed from the network using the store. The central server 14 can generate a list of all data being forwarded in the network, and/or data being forwarded by a specific user or group of users.
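

The statistics store might be sketched as below; the record fields and the summary shape are illustrative only:

from collections import defaultdict

forwarding_stats = []   # one record per file requested to be forwarded in the network

def record_forwarding_request(owner: str, file_type: str, size_bytes: int) -> None:
    forwarding_stats.append({"owner": owner, "type": file_type, "size": size_bytes})

def summarize(owner=None):
    """Number of files, counts per file type and gross bytes, system-wide or per owner."""
    records = [r for r in forwarding_stats if owner is None or r["owner"] == owner]
    per_type = defaultdict(int)
    for r in records:
        per_type[r["type"]] += 1
    return {"files": len(records),
            "types": dict(per_type),
            "gross_bytes": sum(r["size"] for r in records)}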


Example Data Forwarding Assisted by Geolocation

Geolocation generally refers to identifying a real-world geographic location of an Internet connected computer, mobile device, website visitor or other. Geolocation can be used to refer to the practice of assessing the location, or it can be used to refer to the actual assessed location or location data. Geolocation can be performed by associating a geographic location with, for example, the Internet Protocol address, Media Access Control (MAC) address, Radio Frequency Identification (RFID), hardware embedded article/production number, embedded software number (such as UUID, Exif/IPTC/XMP or modern steganography), Wi-Fi connection location, or device GPS coordinates, or other, perhaps self-disclosed, information.


Networks in general, and more particularly the Internet, have become a collection of resources meant to appeal to a large general audience. Although this multitude of information has been a great boon, it also has diluted the importance of geographically localized information. Offering the ability for network users to garner information based on geographic location can decrease search times and increase visibility of local establishments. Similarly, user communities and chat-rooms can be enhanced through knowing the locations (and therefore, local times, weather conditions and news events) of their members as they roam the globe. It is possible to provide user services in applications and Web sites without the need for users to carry GPS receivers or even to know where they themselves are.


Geolocation by IP address is the technique of determining a user's geographic latitude, longitude and, by inference, city, region and nation by comparing the user's public Internet IP address with known locations of other electronically neighboring servers and routers.


Possible applications for geolocation by IP address exist for Weblogs, chat programs, user communities, forums, distributed computing environments, security, urban mapping and network robustness.


Although several methods of geographically locating an individual currently exist, each system has cost and other detriments that make it prohibitive in many computing environments. Global Positioning System (GPS) is limited by line-of-sight to the constellation of satellites in Earth's orbit, which severely limits locating systems in cities, due to high buildings, and indoors, due to complete overhead blockage. Several projects have been started to install sensors or to use broadcast television signals to provide for urban and indoor geolocation.


By contrast, these environments already are witnessing a growing trend of installing wireless access points (AP). Airports, cafes, offices and city neighborhoods all have begun installing wireless APs to provide Internet access to wireless devices. Using this available and symbiotic infrastructure, geolocation by IP address can be implemented immediately.


Several RFC proposals have been made by the Internet Engineering Task Force (IETF) that aim to provide geolocation resources and infrastructure. Several companies now offer pay-per-use services for determining location by IP. Several years ago, CAIDA, the Cooperative Association for Internet Data Analysis, began a geolocation by IP address effort called NetGeo. This system was a publicly accessible database of geographically located IP addresses. Through the use of many complex rules, the NetGeo database slowly filled and was corrected for the location of IP addresses.


To query the NetGeo database, an HTTP request is made with the query IP address, like this:


$ http://netgeo.caida.org/perl/netgeo.cgi?target=192.168.0.1

VERSION=1.0
TARGET: 192.168.0.1
NAME: IANA-CBLK1
CITY: MARINA DEL REY
STATE: CALIFORNIA
COUNTRY: US
LAT: 33.98
LONG: -118.45
LAT_LONG_GRAN: City
LAST_UPDATED: 16-May-2001
NIC: ARIN
LOOKUP_TYPE: Block Allocation
RATING:
DOMAIN_GUESS: iana.org
STATUS: OK


The NetGeo response includes the city, state, country, latitude and longitude of the IP address in question. Furthermore, the granularity (LAT_LONG_GRAN) also is estimated to give some idea about the accuracy of the location. This accuracy also can be deduced from the LAST_UPDATED field. Obviously, the older the update, the more likely it is that the location has changed. This is true especially for IP addresses assigned to residential customers, as companies holding these addresses are in constant flux.


Several existing packages assist in retrieving information from the NetGeo database. The PEAR system has a PHP package, and a Perl module, CAIDA::NetGeo::Client, is available. It is a relatively straightforward task to make a request in whatever language you are using for your application or service. For example, a function in PHP for getting and parsing the NetGeo response looks like this:


function getLocationCaidaNetGeo($ip)
{
    // Build the NetGeo query URL for the given IP address.
    $NetGeoURL = "http://netgeo.caida.org/perl/netgeo.cgi?target=" . $ip;

    // Read the NetGeo response into a string via output buffering.
    if ($NetGeoFP = fopen($NetGeoURL, "r"))
    {
        ob_start();
        fpassthru($NetGeoFP);
        $NetGeoHTML = ob_get_contents();
        ob_end_clean();
        fclose($NetGeoFP);
    }

    // Extract the latitude and longitude fields from the response.
    preg_match("/LAT:(.*)/i", $NetGeoHTML, $temp) or die("Could not find element LAT");
    $location[0] = $temp[1];
    preg_match("/LONG:(.*)/i", $NetGeoHTML, $temp) or die("Could not find element LONG");
    $location[1] = $temp[1];

    return $location;
}


The NetGeo database slowly is becoming more inaccurate as IP address blocks change hands in company close-outs and absorptions. Several other tools are available for determining location, however. A description of the NetGeo infrastructure itself presents some of the methods it employed for mapping IP addresses and can be a source of guidance for future projects.


One of the most useful geolocation resources is DNS LOC information, but it is difficult to enforce across the Internet infrastructure. RFC 1876 is the standard that outlines “A Means for Expressing Location Information in the Domain Name System.” Specifically, this is done by placing the location information of a server on the DNS registration page. Several popular servers have employed this standard but not enough to be directly useful as of yet.


To check the LOC DNS information of a server, you need to get the LOC type of the host:


$ host -t LOC yahoo.com
yahoo.com LOC 37 23 30.900 N 121 59 19.000 W 7.00m 100m 100m 2m

This parses out to 37 degrees 23′ 30.900″ North Latitude by 121 degrees 59′ 19.000″ West Longitude at 7 meters in altitude, with an approximate size of 100 meters at 100 meters horizontal precision and 2 meters vertical precision. There are several benefits to servers that offer their geographic location in this way. First, if you are connecting from a server that shows its DNS LOC information, determining your geolocation is simple, and applications may use this information without further work, although some verification may be useful. Second, if you are connecting on your second or third bounce through a server that has DNS LOC information, it may be possible to make an estimate of your location based on traffic and ping times. However, these estimates greatly degrade accuracy.


It also is possible to put the DNS LOC information for your Web site in its registration. If more servers come to use LOC information, geolocation accuracy will be much easier to attain.


The “host” command is a DNS lookup utility that allows users to find out various pieces of information about a host. The simplest use is doing hostname to IP address lookups and the reverse. For the reverse, dotted-decimal IPv4 notation is used, and the actual server that hosts the canonical name is returned. The type flag, -t, can be used to obtain specific information from the host record from the name server.


Service providers typically provide an internal naming scheme for assigning IP addresses and associating names with these addresses. Typically, the canonical name of an IP address contains the country-code top-level domain (ccTLD) in a suffix. CN is China, FR is France, RO is Romania and so on. Furthermore, the name may even contain the city or region in which the IP address is located. Often, however, this information is shortened to some name that requires a heuristic to determine. For example, in your service or application, a user may appear to be coming from d14-69-1-64.try.wideopenwest.com. A whois at this address reveals it is a WideOpenWest account from Michigan. Using some logic, it is possible to deduce that this user is connecting through a server located in Troy, Mich., hence the .try. in the canonical name.
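

A toy version of that heuristic is sketched below; the abbreviation table entries are examples only and would in practice come from sources such as the GNS data described next:

import re

CITY_ABBREVIATIONS = {"try": "Troy, MI", "chi": "Chicago, IL", "nyc": "New York, NY"}

def guess_city(canonical_name: str):
    """Guess a city from tokens such as '.try.' in d14-69-1-64.try.wideopenwest.com."""
    for token in re.split(r"[.\-]", canonical_name.lower()):
        if token in CITY_ABBREVIATIONS:
            return CITY_ABBREVIATIONS[token]
    return None

print(guess_city("d14-69-1-64.try.wideopenwest.com"))   # -> Troy, MI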


Some projects have been started to decipher these addresses, and you also can get all of the country codes and associated cities and regions of a country from the IANA Root-Zone Whois Information or the US National Geospatial-Intelligence Agency, which hosts the GEOnet Names Server (GNS). The GNS has freely available data files on almost all world countries, regions, states and cities, including their sizes, geographic locations and abbreviations, as well as other information.


Information such as that presented on the GNS also can be used to provide users with utilities and services specific to their geographical locations. For example, it is possible to determine a user's local currency, time zone and language. Time zone is especially useful for members of a community or chat group to determine when another friend may be available and on-line.


When a request to retrieve data is received by the central server 14 from storage process 100, the requested data, which is being forwarded from node memory to node memory in the network 10 assisted by geolocations of the nodes, is retrieved.


Data forwarded in this manner can be segmented and segments forwarded as described above. Still, the segmented data is not stored on any physical storage medium in any network node, but merely forwarded from the memory of one node to the memory of another node. As shown in FIG. 4A, storage process 100 includes sending (102) a request to a central server 14 to store or retrieve data. If the request is a retrieve data request, storage process 100 receives the requested data from the central server 14 or node in the network.


If the request to the central server 14 is a store data request, storage process 100 receives (104) an address of a node from the central server 14 and forwards (106) the data to the node memory represented by the received address. The central server 14 is assisted in finding an appropriate address by a geolocation, which can help locate a node that is underutilized or light in terms of network traffic.


As shown in FIG. 5C, data forwarding process 200C includes receiving (202) a request to store or retrieve data. If the received request is a request to store data, data forwarding process 200 determines (204) an address of a node available to receive the data in memory assisted by its geolocation. This determination (204) can include using a geolocation to help locate a node, pinging the network and determining which of the nodes in a network is available, or determining which node in the network has the least traffic, or determining which node in the network has the largest available memory, or the geographic coordinates of the node, or any combination of these or other factors. Using geolocation enables greater speed and efficiency by selecting nodes that are proximate to each other and/or the user.
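

Geolocation can enter the determination (204) as one more term in the node-selection score; the haversine distance and the per-node latitude/longitude fields below are assumptions for illustration:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def determine_node_by_location(nodes, user_lat, user_lon):
    """nodes: dicts with 'addr', 'lat', 'lon' and 'traffic'; prefer nearby, lightly loaded nodes."""
    return min(nodes, key=lambda n: (haversine_km(user_lat, user_lon, n["lat"], n["lon"]),
                                     n["traffic"]))["addr"]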


Process 200 sends (206) a message to the user system with the address of a specific node for the requester to forward the data.


Process 200 detects (208) the presence of data in node memory. Process 200 forwards (210) the data in memory to another node in the network of nodes, assisted by geolocation (e.g., a location of the node in the network) and continues to repeat detecting (208) and forwarding (210) of the data from node memory to node memory assisted by geolocation. When data arrives in any node memory, process 200 affixes (212) a time stamp to the data.


Forwarding (210) can include pinging the node in the network to determine which of the nodes in the network is available, or determining which node in the network has the least traffic, or determining which node in the network has the largest available memory, or any combination of these or other factors.


Example Multi-Homed Data Forwarding and Storage

A node typically has one network interface with one associated network address. However, a node may include multiple network interfaces, each with its own associated non-loopback network address, such as a non-loopback Internet protocol (IP) address. Furthermore, a node may include a network interface with multiple associated non-loopback network addresses, such as multiple non-loopback IP addresses. Such a node is referred to as a “multi-homed node.”


For example, the Internet Engineering Task Force (IETF) has developed IP version 6 (IPv6). The hierarchical layers provided by IPv6 may change the way multi-homing devices within a network are perceived. In IPv4, multi-homing is generally perceived as a host or system that uses multiple network interfaces. In contrast, hosts in IPv6 may only have one network interface, but respond to multiple global IPv6 addresses, link-local addresses, and site-local addresses. As a result, almost every host in the IPv6 network can be a multi-homed host.


Process 200 can be modified and enabled within a single computer system that includes multiple Internet protocol (IP) addresses (e.g., 2001:db8::1, 2001:db8::2 and 2001:db8::3 in IPv6), but only one physical upstream link. This is sometimes referred to as single link, multiple IP address (spaces) multi-homing.


As described above, a device can be multi-homed (e.g., host-centric multi-homing) when it has more than one interface, and each of the interfaces is attached to a different network (which may itself be a multi-homed network). In addition, in IPv6, each interface can have multiple addresses, which means that even with a single interface, a host can be multi-homed.


Multi-homing can provide a certain degree of resilience/redundancy against failures (link, hardware, protocols, others) and also enables features such as load balancing. Moreover, multi-homing can be used in order to differentiate traffic based on policy, for non-technical reasons, such as cost associated with different flows, time of the day, and so forth. For highly distributed enterprises, it can also serve as an aid to addressing that enterprise's geographical distribution, and as a traffic engineering mechanism to improve local performance, such as latency and hop count reductions for real-time protocols.


With single link, multiple IP address (spaces) multi-homing, a modified process 200 forwards data in memory within a single computer having multiple assigned IP addresses. When the computer is powered-off or experiences a failure, such as loss of power, all data being forwarded in memory is automatically forwarded to a node memory in the network 10, where it is continually routed/forwarded from node memory to node memory in the network 10 according to process 200. When power is restored to the computer, data is recovered/reloaded from the network 10 and then continuously forwarded within the memory of the computer without ever being fixed in physical storage.


Example Advertisement Forwarding

In one example, this data forwarding storage and management system, where the data is continually being routed/forwarded from node memory to node memory in the network, is used as an advertisement forwarding and retrieval system. Advertisement is deployed into the data forwarding storage and management system from a master server or control station and recalled on demand or in response to a stimulus or activity. Here, we consider advertisement as a broad term that can include any content, including, but not limited to, text, audio, visual or any combination thereof. Advertisement can be deployed into the data forwarding storage network and recalled/retrieved when needed, e.g., directed to an IP address of a specific user system, directed to paid and/or unpaid subscribers of applications within the data forwarding storage network, and/or directed to users outside of the data forwarding storage network. Advertisement being continuously forwarded in the data forwarding storage network can be sent to all users or specifically targeted according to one or more user characteristics, user profiles, usage patterns, history and/or past or present viewed page content. The advertisement being continuously forwarded in the data forwarding storage network can be displayed to a current user within an application or web browser or delivered to a wired or wireless radio, television and/or television network. Advertisements can be retrieved in response to a stimulus or activity, such as the user's profile, traffic patterns of one or more users, application profiles, and so forth. Advertisements can be stored and delivered in any media form and either pre-configured by specific file type and size for a specific end user or site delivery requirements/formats, or delivered and formatted by virtue of the end user or middleware software compatibility systems.


In one example, selected advertisement can be delivered to a user through a web browser. More particularly, a plug-in and/or helper application can be associated with a user's web browser. In general, a plug-in is a computer program that interacts with a host application (a web browser or an email client, for example) to provide a certain, usually very specific, function “on demand.” As a user navigates to a particular web page, the plug-in can parse displayed text. The plug-in can then request specific advertisement, being continuously forwarded in the data forwarding storage network, that matches the parsed text, and deliver it to the web browser of the user for display in a section of the display screen or as a pop-up.


In another example, a user requesting retrieval of a data file being continuously forwarded in the data forwarding storage network may be presented with specific advertisement being continuously forwarded in the data forwarding storage network that matches the user's profile. The user's profile may include various personal and/or demographic data that aids in directing appropriate advertisement to the user. The advertisement may then be displayed as a banner or in a shared window or in a separate window.


As shown in FIG. 6A, a process 300A includes directing (302) advertisement to a computer memory. The advertisement can include any content, including, but not limited to, text, audio, visual or any combination thereof. The advertisement can include multiple configurations in order to satisfy different systems delivery specifications. Advertisements can be stored and delivered in any media form and either pre-configured by specific file type and size for a specific end user or site delivery requirements/formats, or delivered and formatted by virtue of the end user or middleware software compatibility systems.


Process 300 directs (304) data to a computer memory.


Process 300 continuously forwards (306) each of the unique data, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network.


Process 300 continuously forwards (308) each of the unique advertisements, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network.


Process 300 retrieves (310) one of the advertisements in response to an activity.
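

The independence of steps 302-310 can be sketched as two separately forwarded pools plus a retrieval hook; the keyword-matching rule is only one example of retrieving an advertisement "in response to an activity":

advertisements = []   # each advertisement is forwarded independently of the data (308)
data_items = []       # each unique datum is forwarded independently (306)

def direct_advertisement(ad):
    """Step 302: place an advertisement into computer memory for continuous forwarding."""
    advertisements.append(ad)      # e.g., {"keywords": {"storage"}, "content": "..."}

def direct_data(item):
    """Step 304: place data into computer memory for continuous forwarding."""
    data_items.append(item)

def retrieve_advertisement(activity_keywords):
    """Step 310: retrieve one advertisement in response to an activity (e.g., parsed page text)."""
    for ad in advertisements:
        if ad["keywords"] & set(activity_keywords):
            return ad["content"]
    return None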


Example Data Retrieval


FIGS. 6A, 6B, and 6C are flowcharts illustrating example processes. Any reference herein to process 300 may refer to process 300A of FIG. 6A, process 300B of FIG. 6B, and/or process 300C of FIG. 6C, depending on context of the reference.


As shown in FIG. 6B, data forwarding process 300B includes receiving (302) a request to store or retrieve data. If the received request is a request to store data, data forwarding process 300 determines (304) a memory location associated with an IP address available to receive the data.


Process 300 sends (306) a message with the memory location associated with the IP address for the requester to forward the data.


Process 300 detects (308) the presence of data in a memory location. Process 300 forwards (310) the data in the memory location to another memory location associated with another IP address within the computer and continues to repeat detecting (308) and forwarding (310) of the data from memory location associated with one IP address to a memory location associated with another IP address.


If the received request is a request to retrieve data being continuously forwarded from memory location to memory location, data forwarding process 300 locates (312) the requested data being forwarded within the memory and returns (314) the located data to the requester.
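

On a single multi-homed computer, the "node memories" of process 300 become memory locations keyed by the machine's own IP addresses; the address list below reuses the example addresses given earlier, while the buffer structure and forwarding ring are invented for illustration:

import itertools
import time

LOCAL_ADDRESSES = ["2001:db8::1", "2001:db8::2", "2001:db8::3"]
memory_by_address = {addr: [] for addr in LOCAL_ADDRESSES}   # one memory location per IP address

def forward_between_addresses(cycles: int = 3) -> None:
    """Repeat detecting (308) and forwarding (310) data from one memory location to another."""
    ring = itertools.cycle(LOCAL_ADDRESSES)
    current = next(ring)
    for _ in range(cycles * len(LOCAL_ADDRESSES)):
        nxt = next(ring)
        while memory_by_address[current]:                    # detect (308) data at this address
            memory_by_address[nxt].append(memory_by_address[current].pop())   # forward (310)
        current = nxt
        time.sleep(0.01)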


Example Search Query Processing

As shown in FIG. 6C, the search process 300C includes receiving (302) a query. Example queries include filenames, file types, usernames, dates and so forth. In one example, the query is a keyword or keywords. Search process 300 searches (304) the database of scrambled files represented by the index of file names for a match of the keyword or keywords. If a match of the keyword or keywords is found among the scrambled files, process 300 generates (306) a list of filenames containing the keyword or keywords. In one example, the list of file names is displayed to a user on an input/output device, enabling the user to select one of the file names. In another example, the list of filenames displayed includes supplemental information with respect to the file, such as, file type, file size, date saved and/or last modified, and so forth. Process 300 receives (308) a user selection of one of the filenames contained in the generated list of file names. The user selection can include a mouse click, a keyboard input, an audio input, and so forth, indicating a selected filename.


Process 300 launches (310) a file retrieval process 400.
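

Because the scrambled copy preserves the words of the original data file (only their order changes), the keyword search (304-306) can be sketched as below; scrambled_store refers to the illustrative structure sketched earlier for the scrambling step:

def search_scrambled(keywords, scrambled_store):
    """Return the list of file names (306) whose scrambled copies contain every keyword (304)."""
    matches = []
    for filename, scrambled_text in scrambled_store.items():
        words = set(scrambled_text.lower().split())
        if all(k.lower() in words for k in keywords):
            matches.append(filename)
    return matches

# Example: a store entry {"myfile.txt": "medium example node ... data"} matches ["data", "node"].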


Example File Retrieval Process

As shown in FIG. 7, the file retrieval process 400 matches (402) the requested filename at the central server using a hash mark or other unique code that can be “sniffed” by the node upon the data entering the node via the encryption handshake. This can occur by pinging the nodes in the network. Process 400 sends (404) the message to return the data to the user directly to the node or node state where the central server believes the data will likely appear. The more the central server can narrow the node state that it pings, the more efficient the retrieval becomes and the less it is burdened by unnecessary messaging traffic to nodes that are not necessary for a transaction between the central server and the node capable of forwarding the data.


Once the correct node receives the message to forward the data in node memory to the requester, process 400 forwards (406) in node memory the data to the requester and forwards (408) a confirmation message that the data has been sent to the user. This routing message may be sent directly to the central server or may be passed to the central server or servers via other node(s) or supernode(s) in the framework 10. Upon receiving the requested data, the user's application automatically pings the central server to confirm that the requested data has been received. Thus the framework 10 creates data storage without caching, downloading and/or storing the data on any physical storage medium. Data storage and management is accomplished via a continuous routing of the data from node memory to node memory.


In another embodiment, storage process 100 only stores the scrambled data along with filename, and in some instances, file type, username, and/or date stamp, while automatically deleting the non-scrambled data file.


If the request to the central server 14 is a delete data file request, the central server 14 launches a file deletion process 500.


Example File Deletion

As shown in FIG. 8, process 500 matches (502) the filename to delete at the central server 14 using a hash mark or other unique code that can be “sniffed” by the node upon the data entering the node via the encryption handshake. This can occur by pinging the nodes in the network. Process 500 sends (504) the message to forward the data to the deletion node 28 directly to the node or node state where the central server believes the data will likely appear. The more the central server can narrow the node state that it pings, the more efficient the retrieval becomes and the less it is burdened by unnecessary messaging traffic to nodes that are not necessary for a transaction between the central server and the node capable of forwarding the data.


Process 500 forwards (506) the data in node memory to the deletion node. Process 500 removes (508) the filename from the index and forwards (510) a confirmation message to the user that the data has been deleted. This routing message may be sent directly to the central server or may be passed to the central server or servers via other node(s) or supernode(s) in the framework 10.
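
A minimal sketch of deletion process 500 is shown below, assuming dictionary-based node memories and an index keyed by filename; these structures are illustrative, not the patent's data model.

```python
# Minimal sketch of deletion process 500 (steps 502-510).
import hashlib


def deletion_process_500(index, node_memories, deletion_node, filename):
    hash_mark = hashlib.md5(filename.encode()).hexdigest()   # (502) match at the central server
    for memory in node_memories:                             # (504) ping the likely node state
        if hash_mark in memory:
            deletion_node[hash_mark] = memory.pop(hash_mark) # (506) forward to the deletion node
            break
    index.pop(filename, None)                                # (508) remove the filename from the index
    return f"deleted: {filename}"                            # (510) confirmation to the user


# Usage with two node memories and a deletion node.
mark = hashlib.md5(b"notes.txt").hexdigest()
node_memories = [{}, {mark: b"...scrambled bytes..."}]
index = {"notes.txt": mark}
deletion_node = {}
print(deletion_process_500(index, node_memories, deletion_node, "notes.txt"))
```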


The framework 10 creates data storage without caching, downloading and/or storing the data on any physical storage medium. Data storage and management are accomplished via a continuous routing of the data from node memory to node memory, with the forwarded data downloaded only when the user requests that it be returned from the framework 10.


Example User Interface for Real-Time Communications


FIG. 9 illustrates an example interface presented to a user of one of the client systems 12, 14 when running an instant messaging client program. As described above, instant messaging programs typically enable users to communicate in real-time with each other in a variety of ways. For example, many instant messaging programs enable users to send text as an instant message, to transfer files, and to communicate by voice.


Shown is a desktop 900 with a user interface 905 of the instant messaging client program. User interface 905 has a text box 910 that displays representations 915 of the program user's contacts or buddies (both terms are used interchangeably herein), which are other users of an instant messaging program with whom the program user desires to communicate and interact. The representations 915 may provide contextual information to the program user about the buddy, such as whether the contact is online, how long the contact has been online, whether the contact is away, or whether the contact is using a mobile device.


The list of contacts displayed in text box 910 of user interface 905 typically is referred to as the contact list or buddy list. The IM program user can typically add or remove contacts from the contact list. In the example shown, the representations 915 are text icons showing the screen names of the contacts.


Instant messaging programs may use an instant messaging server to assist in communications between users of the instant messaging program. The instant messaging server may be implemented, for example, using central server 16. When a user is connected to the network and executes the instant messaging program, the instant messaging program contacts the central server 16 and logs the user onto the central server 16. The central server 16 informs the instant messaging program when the program user's contacts are online and facilitates communications between the program user and an online contact.


The central server 16 may support IM services irrespective of a program user's network or Internet access. Thus, central server 16 may enable users to send and receive IMs, regardless of whether they have access to any particular Internet service provider (ISP). The central server 16 also may support associated services, such as administrative matters, advertising, directory services, chat, and interest groups related to IM. To transfer data, the central server 16 employs one or more IM protocols. The data may be encrypted and decrypted.


To begin an IM session, the IM client program running on a client system 12, 14 establishes a connection with the central server 16 and logs onto the central server 16. Once a session is established, a user can use the IM client program to view whether particular buddies are online, exchange IMs with particular buddies, participate in group chat rooms, and trade files such as pictures, invitations or documents. The IM program user also may be able to find other buddies with similar interests, get customized information such as news and stock quotes, and search the World Wide Web.


Central server 16 may assist IM communications between users of IM client programs by facilitating the establishment of a peer-to-peer communication session between the IM client programs. Alternatively, the central server 16 may assist IM communications by directly routing communications between the IM client programs.
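
The two assist modes can be sketched briefly. The following Python is illustrative only; the class names, methods, and addresses are assumptions for the example and do not describe an actual IM protocol.

```python
# Small sketch of the two assist modes: the server either brokers a
# peer-to-peer session or relays the messages itself.

class IMServer:
    def __init__(self):
        self.online = {}                       # screen name -> client object

    def logon(self, client):
        self.online[client.screen_name] = client

    def broker_p2p(self, buddy_name):
        """Hand back the buddy's address so the two clients talk directly."""
        buddy = self.online.get(buddy_name)
        return buddy.address if buddy else None

    def relay(self, sender, buddy_name, text):
        """Route the instant message through the server instead."""
        buddy = self.online.get(buddy_name)
        if buddy:
            buddy.inbox.append((sender.screen_name, text))


class IMClient:
    def __init__(self, screen_name, address):
        self.screen_name, self.address = screen_name, address
        self.inbox = []


server = IMServer()
alice, bob = IMClient("alice", "10.0.0.5"), IMClient("bob", "10.0.0.9")
server.logon(alice)
server.logon(bob)
print(server.broker_p2p("bob"))       # peer-to-peer: sender gets bob's address
server.relay(alice, "bob", "hi")      # server-routed instant message
print(bob.inbox)
```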


When a contact is online, the IM program user can communicate or interact with the contact in a number of ways. For instance, the IM program user can send an instant message to the contact (typically in the form of text).


Sending a message opens up a window in which messages can be typed back-and-forth between the IM program user and the contact. Similarly, the IM program user also can send a file or other content to the contact.


To initiate these actions for a contact, the IM program user performs operations on the representation of the contact displayed in user interface 905. The program then executes the corresponding action in response to the operation performed on the representation. For example, an instant message might be initiated by double-clicking on a contact's representation. Or, a file transfer might be initiated by the IM program user selecting the contact's representation to bring up a context menu and choosing “send a file” from the menu.


Other actions can be executed in response to operations performed on the representation of the contact displayed in interface 905. For instance, a “buddy icon” can be set for the contact such that communications with the contact display the buddy icon. In addition, for example, profile information about the contact can be retrieved, an alert can be set to inform the program user when the contact is online, a VoIP communication session can be established, or an e-mail can be sent.


User interface 905 may have icons 930 to help a user set various options or perform operations in the instant messaging program.


Example Rotating Encryption in Data Forwarding Storage

In one specific example, at the point of entry to a node, data undergoes an encrypted “handshake” with the node, the central server 14, or the user. The encryption scheme employed is under the control of the central server 14, which can change or rotate the scheme periodically or in response to external factors. Any two or more encryption schemes can be used. For example, encryption schemes involving simple conversions can include ASCII to Binary, Binary to ASCII, ASCII to Hex, Hex to ASCII, Binary to Hex, Hex to Binary, Dec to Hex, Hex to Dec, Dec to Roman, Roman to Dec, and so forth.


Encryption schemes involving network tools can include IP to Dec, Dec to IP, IP to Hex, Hex to IP, IP/Net Calculator, IPv6 Validator, IPv6 Compress, IPv6 Uncompress, and so forth.


Non-Key En/DeCryption schemes can include PasswordGen, Backwards, Base 64 Encode, Base 64 Decode, Caesar Bruteforce, 133t 5p34k 3nc0d3, 133t 5p34k d3c0d3, Igpay Atinlay, Un-Pig Latin, ROT-13, and so forth.


HTML Encoding schemes can include HTML Entities Encode, HTML Entities Decode, URL Encode, URL Decode and so forth.


Hash Algorithm schemes can include DES, MD4, MD5, SHA1, SHA-224, SHA-256, SHA-384, SHA-512, HAVAL 128, HAVAL 160, HAVAL 192, HAVAL 224, HAVAL 256, RIPEMD 128, RIPEMD 160, RIPEMD 256, RIPEMD 320, Tiger, Tiger 128, Tiger 160, Adler 32, Whirlpool, GOST, CRC32, CRC32B, and so forth.


Key En/DeCryption schemes can include Triple DES, Blowfish, CAST 128, CAST 256, GOST, Rijndael 128, Rijndael 192, Rijndael 256, SERPENT, Safer+, RC2, XTEA, LOKI97, DES, TwoFish, Wake (ECB mode, BASE64 armored), and so forth.


Time Conversion schemes can include Unix Timestamp to Date/Time, Date/Time to Unix Timestamp, Unix Timestamp to RFC 2822, Unix Timestamp to Internet Time, Unix Timestamp to ISO 8601, and so forth.


The central server 14 can direct a different encryption scheme to each of the network systems 16, 18, 20, 22, or a single encryption scheme to all of the network systems 16, 18, 20, 22.


The central server 14 can periodically direct one or more of the network systems 16, 18, 20, 22, to change their current encryption scheme to another encryption scheme. The central server 14 can direct the network systems 16, 18, 20, 22, to employ a particular encryption scheme based on the type of data being forwarded from node memory to node memory. The central server 14 can direct the network systems 16, 18, 20, 22, to employ a particular encryption scheme based on an owner of the data being forwarded from node memory to node memory.


The central server 14 can store the various encryption schemes locally and send a particular encryption scheme to a node memory for use, or the network systems 16, 18, 20, 22, can store the various encryption schemes locally and wait for instructions received from the central server 14 to select a particular encryption scheme for use.
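
A minimal sketch of this server-directed rotation follows, using two trivially reversible stand-in schemes (Base64 and ROT-13) in place of the schemes listed above; the class name, the rotation rule, and the way node assignments are tracked are assumptions for illustration only.

```python
# Minimal sketch: a central server assigns an encryption scheme to each
# network system and rotates the assignments periodically.
import base64, codecs, itertools

SCHEMES = {
    "base64": (lambda b: base64.b64encode(b), lambda b: base64.b64decode(b)),
    "rot13":  (lambda b: codecs.encode(b.decode(), "rot_13").encode(),
               lambda b: codecs.decode(b.decode(), "rot_13").encode()),
}


class CentralServer:
    """Stands in for central server 14: assigns and rotates schemes per node."""

    def __init__(self, node_names):
        self._cycle = itertools.cycle(SCHEMES)          # cycles over scheme names
        self.assignments = {name: next(self._cycle) for name in node_names}

    def rotate(self):
        """Direct each node to change its current scheme to another scheme."""
        for node in self.assignments:
            self.assignments[node] = next(self._cycle)

    def encode_for(self, node, data):
        """Apply the node's currently assigned scheme to outgoing data."""
        scheme = self.assignments[node]
        encode, _decode = SCHEMES[scheme]
        return scheme, encode(data)


server = CentralServer(["system16", "system18", "system20", "system22"])
print(server.encode_for("system16", b"forwarded data"))
server.rotate()                                          # periodic rotation
print(server.assignments)
```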


If the received request is a request to retrieve data being continuously forwarded from node memory to node memory, data forwarding process 200 matches (214) the retrieval request at the central server 14 using a hash mark or other unique code that can be “sniffed” by the node upon the data entering the node via the encryption handshake. This can occur by pinging the nodes in the network. Process 200 sends (216) the message to return the data to the user directly to the node or node state where the central server 14 believes the data will likely appear. The more narrowly the central server 14 can target the node state that it pings, the more efficient the retrieval becomes and the less the network is burdened by unnecessary messaging traffic to nodes that are not involved in the transaction between the central server 14 and the node capable of forwarding the data.


Redundant Data Forwarding


A redundant data storage and management system may be implemented in which the redundant data is never fixed in physical storage but is, in fact, continually being routed/forwarded from node memory to node memory. The path of the nodes to which redundant data is forwarded may also be altered by the central server 14 to adjust for system capacities and to eliminate redundant data paths that might otherwise weaken the security of the network by making the data path more predictable.
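
A minimal sketch, under stated assumptions, of forwarding redundant copies of the same data along separate node paths while allowing the paths to be altered is shown below; the path representation and the reshuffling rule are illustrative only.

```python
# Minimal sketch: each redundant copy hops along its own path of nodes, and
# the paths can be reshuffled so the routes stay unpredictable.
import random


def forward_copy(data, path, log):
    """Hop one copy of the data from node memory to node memory along a path."""
    for node in path:
        log.append((node, len(data)))    # data is held only transiently in memory


def redundant_forward(data, paths, alter_paths=True):
    log = []
    for path in paths:
        if alter_paths:                  # central server adjusts the route
            random.shuffle(path)
        forward_copy(data, path, log)
    return log


# Two redundant copies, each traversing its own three-node path.
paths = [["n1", "n2", "n3"], ["n4", "n5", "n6"]]
print(redundant_forward(b"redundant data", paths))
```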


The invention can be implemented to realize one or more of the following advantages. One or more networks create redundant data storage without caching or downloads. Redundant data storage and management are accomplished via a constant routing of the redundant data.


Other Descriptions


This data storage and management system, in which the data is never fixed in physical storage but is continually being routed/forwarded from node memory to node memory in the network, can be used as a backend system in many applications that currently use fixed-medium storage. In one example, this data storage and management system, where the data is continually being routed/forwarded from node memory to node memory in the network, is used in a media delivery system. Here, media is considered broadly to include any predictable content, archival content, audio content, visual content, text-based content, and so forth. Predictable content can be deployed into the data forwarding storage network and recalled/retrieved when needed, e.g., directed to an IP address of a specific user system.


The content can include text, audio, visual images, audiovisual images, or any combination thereof. For example, the network can continuously forward certain audiovisual highlights that are used each day, such as program introductions, graphic packages, introduction and theme music, historical footage of significance, commonly used reference footage, and so forth.


This content being continuously forwarded in the network may or may not be needed in the future. More specifically, content that is most likely to be needed is seeded into the network according to its probability of use, not based upon the individual needs of a user to store a file. In addition to using probability of need as a storage priority, the network can use a more diverse distribution list for the stored content than the forwarding storage system utilized by a user for “normal file storage,” because users are delivered material not by calling/requesting a file from the network itself, but by virtue of a content provider using the network as a distribution tool for its audience.
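
The seeding idea can be sketched briefly. The following Python is a minimal illustration only; the catalog, the probabilities, and the copy-count rule are assumptions for the example.

```python
# Minimal sketch of seeding predictable content by probability of use: items
# with a higher estimated probability of being needed circulate as more copies.

def seed_content(catalog, max_copies=5):
    """Return how many copies of each item to keep circulating in the network."""
    plan = {}
    for item, probability in catalog.items():
        plan[item] = max(1, round(probability * max_copies))
    return plan


catalog = {
    "program_intro.mov": 0.9,       # used almost every day
    "theme_music.aif": 0.8,
    "historical_footage.mov": 0.3,
}
print(seed_content(catalog))
```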


One such example is a stock quote system. In traditional stock quote systems used on the World Wide Web (“Web”), a user accesses a stock quote website through a graphical user interface (GUI) used for web browsing, such as Firefox™, Opera™, or Flock™. One example stock quote website is Yahoo!™ financial. The user enters the trading symbol of a stock that he/she wants to query. The stock quote website receives the stock symbol, sends the stock symbol to a stock quote backend for a current price, receives the current price from the stock quote backend, and sends the current price to the user's GUI for viewing by the user. The current price is a numerical value, such as 17½, in this example.


Numeric values can be deployed into the data storage and management system and continually routed/forwarded from node memory to node memory in the network. A range of numeric values in appropriate increments can be deployed in the data storage and management system, similar to how data files are deployed when a message to store is received. Each of the numeric values is sent from a user system to the central server 14 using the data forwarding process 200, fully described above. This results in a large number of distinct and unique numeric values continually being routed/forwarded from node memory to node memory in the network.


When a user requests a current stock price from a web application such as Yahoo! financial, the web application requests a current price from the backend stock quote server, and the central server 14 is informed of this price directly by the backend stock quote server. The central server 14 requests the numeric value representing the received price from the network and, once it is found, directs the numeric value to the Internet Protocol (IP) address of the user requesting the quote.


In another stock quote example, a range of numeric values embedded in text can be deployed into the data storage and management system, where they are continually being routed/forwarded from node memory to node memory in the network. For example, “IBM is selling at 25,” “IBM is selling at 25⅛,” and so forth, can be deployed. When a request for the current price of IBM is received, the financial website requests a current price from the backend stock quote server, and the central server 14 is informed of this price directly by the backend stock quote server. The central server 14 requests from the network the numeric value representing the received price, along with its associated text, and once it is found, directs the numeric value with the associated text to the Internet Protocol (IP) address of the user requesting the price. For example, if the current price of IBM stock is 25, the central server 14 requests that “IBM is selling at 25” be delivered to the user requesting the quote.
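
A minimal sketch of this quote flow follows, in which pre-built quote strings stand in for the circulating values and the matching string is directed to the requester's IP address. The increments, the symbol, and the helper names are assumptions (and prices are shown as decimals rather than fractions).

```python
# Minimal sketch: deploy a range of "SYMBOL is selling at N" strings, then
# match a reported price and direct the string to the requesting user's IP.

def deploy_quote_range(symbol, low, high, step=0.125):
    """Pre-deploy quote strings in eighth-point increments."""
    quotes, price = {}, float(low)
    while price <= high:
        quotes[price] = f"{symbol} is selling at {price}"
        price += step                  # eighths are exact in binary floating point
    return quotes


def deliver_quote(circulating, reported_price, user_ip):
    """Central server matches the reported price and directs it to the user."""
    return user_ip, circulating.get(reported_price)


circulating = deploy_quote_range("IBM", 20, 30)
print(deliver_quote(circulating, 25.125, "203.0.113.7"))
```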


The above specific example used a range of unique numeric values in appropriate increments deployed in our data storage and management system. However, any predictable content, archival data, and/or media data can be deployed in our data storage and management system. For example, election results can be deployed into our data storage and management system. More specifically, news items reporting that “Senator Obama won the general election” and that “Senator McCain won the general election” can be deployed to the network, where they are never fixed in physical storage but are, in fact, continually being routed/forwarded from node memory to node memory in the network.


When the election results are known in November 2008, a user can request the election results from a web application having a backend supported by our data storage and management system. The web application makes a request to a news service for the election results, and the central server 14 is informed of the election results by a news server. The central server 14 locates the news item in the network and directs the news story to the Internet Protocol (IP) address of the user requesting the news information.


In each of the examples above, the network includes a group of interconnected computer system nodes, each adapted to receive data items and continuously forward the data items from computer memory to computer memory, independent of each other, without storing them on any physical storage device, in response to a request to store the data items from a requesting system, and to retrieve a particular data item being continuously forwarded from computer memory to computer memory in response to a request to retrieve the data item from the requesting system. Each node in the network is adapted to detect the presence of a data item in its memory and forward the data item to a computer memory of another node in the interconnected computer system nodes according to a node's availability. The node's availability can be determined according to its volume of network traffic. Each node can encrypt the data item.
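
A brief sketch of this per-node behavior is shown below, under simplifying assumptions: node memories are Python lists and availability is judged purely by a per-node traffic counter, neither of which is specified by the text.

```python
# Minimal sketch of the forwarding rule: each node detects data items in its
# memory and forwards them to the least-busy available peer.

class ForwardingNode:
    def __init__(self, name):
        self.name = name
        self.memory = []        # data items currently in node memory
        self.traffic = 0        # volume of network traffic seen by this node

    def tick(self, peers):
        """Detect any data item in memory and forward it to an available peer."""
        while self.memory:
            item = self.memory.pop(0)
            target = min(peers, key=lambda n: n.traffic)   # availability by traffic volume
            target.memory.append(item)
            target.traffic += 1


nodes = [ForwardingNode(f"n{i}") for i in range(4)]
nodes[1].traffic = 5                         # a busy node is avoided
nodes[0].memory.append(b"data item")
nodes[0].tick(peers=nodes[1:])
print([(n.name, len(n.memory)) for n in nodes])
```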


A central node can be adapted to match the data retrieval request at a central server using a hash mark representing the data item entering a node, send a message to a node that is predicted to have the data item in memory, the message instructing the node to forward the data item in memory to the requester, and send a confirmation message to the central server that the data item in memory has been forwarded to the requester.


The techniques described above are not limited to any particular hardware or software configuration. Rather, they may be implemented using hardware, software, or a combination of both. The programs may be implemented in a high-level programming language and may also be implemented in assembly or other lower level languages, if desired. Any such program will typically be stored on a computer-usable storage medium or device (e.g., CD-ROM, RAM, or magnetic disk). When read into the processor of the computer and executed, the instructions of the program cause the programmable computer to carry out the various operations described.


Furthermore, while the techniques have been described primarily with IM applications, they may be applied to other communications programs such as FTP programs, e-mail programs, voice-over-IP (VoIP) or other telephony programs, or players for streaming media.


It is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the disclosure, which is defined by the scope of the appended claims. Other embodiments are within the scope of the following claims.

Claims
  • 1. A method comprising: accessing a user profile comprising demographic data of a user; determining, by a computing system, advertising content for transmission to a user computer of the user based on at least the user profile; transmitting, from the computing system, a request for the determined advertising content to one or more computer system nodes of a network of computer system nodes, wherein continuous forwarding of a first advertisement included in a plurality of advertisements is performed independent of continuous forwarding of a second advertisement included in the plurality of advertisements, among the computer system nodes in the network without storing the advertisements on any fixed storage medium in the network during the continuous forwarding among the computer system nodes in the network; and providing the determined advertising content to the user computer.
  • 2. The method of claim 1, wherein the user profile further comprises computer usage patterns of the user, web browsing history of the user, or content of web pages viewed by the user.
  • 3. The method of claim 1, wherein the advertising content comprises text, audio, and/or visual content.
  • 4. The method of claim 1, wherein each of the one or more computer system nodes utilizes a same encryption scheme.
  • 5. The method of claim 1, wherein providing the determined advertising content to the user computer comprises instructing one or more computer system nodes of the network to forward the determined advertising content to the user computer.
  • 6. The method of claim 1, wherein providing the determined advertising content to the user computer comprises providing the user computer with a network location of the determined advertising content.
  • 7. The method of claim 1, wherein continuously forwarding comprises: detecting a particular advertisement at a specific node in the network of computer system nodes; and forwarding the particular advertisement to another node in the network of computer system nodes without storing the particular advertisement on any fixed storage medium within the network of computer system nodes.
  • 8. The method of claim 1, further comprising: determining an address of a node within the network of computer system nodes available to receive the determined advertising content based on one or more factors of the network of computer system nodes.
  • 9. The method of claim 8, wherein the one or more factors comprise network traffic analysis of the network of computer system nodes and available memory for the node within the computer system nodes.
  • 10. The method of claim 1, further comprising: transmitting to a first one or more of the computer system nodes an indication of a first encryption scheme to be utilized by the first one or more computer system nodes during said continuous forwarding via the first one or more computer system nodes.
  • 11. The method of claim 10 further comprising: transmitting to a second one or more of the computer system nodes an indication of a second encryption scheme to be utilized by the second one or more computer system nodes during said continuous forwarding via the second one or more computer system nodes.
  • 12. The method of claim 1, further comprising: continuing forwarding each of the plurality of advertisements, independent of one another, among the computer system nodes in the network without storing the advertisements on any fixed storage medium in the network during said forwarding among the computer system nodes in the network; receiving a request from a requesting computer system to retrieve a particular advertisement being continuously forwarded in the network of computer system nodes; and retrieving the particular advertisement from a computer system node in response to the request to retrieve the particular advertisement.
  • 13. The method of claim 12, wherein retrieving comprises: identifying an entry of a data structure associated with the particular advertisement based on a hash mark associated with the particular advertisement; determining, based on the identified entry of the data structure, a computer system node that is predicted to have the particular advertisement; and transmitting a message instructing the determined computer system node to forward the particular advertisement to the requesting computing system.
  • 14. The method of claim 13, further comprising receiving an acknowledgement from the requesting computing system that the particular advertisement has been received.
  • 15. A computing system comprising: one or more hardware processors configured to execute instructions; a computer readable medium storing instructions configured for execution by the one or more processors in order to cause the computing system to: access a user profile comprising demographic data of a user; determine advertising content for transmission to a user computer of the user based on at least the user profile; transmit a request for the determined advertising content to one or more computer system nodes of a network of computer system nodes, wherein continuous forwarding of a first advertisement included in a plurality of advertisements is performed independent of continuous forwarding of a second advertisement included in the plurality of advertisements, among the computer system nodes in the network without storing the advertisements on any fixed storage medium in the network during the continuous forwarding among the computer system nodes in the network; and provide the determined advertising content to the user computer.
  • 16. The system of claim 15, wherein the computer readable medium further includes instructions for continuous forwarding which cause the computer system to: detect a particular advertisement at a specific node in the network of computer system nodes; and forward the particular advertisement to another node in the network of computer system nodes without storing the particular advertisement on any fixed storage medium within the network of computer system nodes.
  • 17. The system of claim 15, wherein the computer readable medium further includes instructions which cause the computer system to determine an address of a node within the network of computer system nodes available to receive the determined advertising content based on one or more factors of the network of computer system nodes, wherein the one or more factors comprise network traffic analysis of the network of computer system nodes and available memory for the node within the computer system nodes.
  • 18. The system of claim 15, wherein the computer readable medium further includes instructions which cause the computer system to transmit to a first one or more of the computer system nodes an indication of a first encryption scheme to be utilized by the first one or more computer system nodes during said continuous forwarding via the first one or more computer system nodes.
  • 19. A non-transitory computer-readable medium having instructions encoded thereon, wherein the instructions are readable by a computing system in order to cause the computing system to perform operations comprising: accessing a user profile comprising demographic data of a user; determining advertising content for transmission to a user computer of the user based on at least the user profile; transmitting a request for the determined advertising content to one or more computer system nodes of a network of computer system nodes, wherein continuous forwarding of a first advertisement included in a plurality of advertisements is performed independent of continuous forwarding of a second advertisement included in the plurality of advertisements, among the computer system nodes in the network without storing the advertisements on any fixed storage medium in the network during the continuous forwarding among the computer system nodes in the network; and providing the determined advertising content to the user computer.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of each of the following: U.S. application Ser. No. 13/003,502, filed Feb. 1, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/049763, filed Jul. 7, 2009, which is a continuation of U.S. patent application Ser. No. 12/170,925, filed Jul. 10, 2008, now U.S. Pat. No. 7,636,758, each entitled “ADVERTISEMENT FORWARDING STORAGE”; U.S. patent application Ser. No. 13/003,505, filed Apr. 11, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/049755, filed Jul. 7, 2009, which is a continuation of U.S. patent application Ser. No. 12/170,901, filed Jul. 10, 2008, now U.S. Pat. No. 7,673,009, each entitled “MEDIA DELIVERY IN DATA FORWARDING STORAGE NETWORK”; U.S. patent application Ser. No. 13/057,072, filed Feb. 1, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/050008, filed Jul. 9, 2009, which is a continuation of U.S. patent application Ser. No. 12/184,866, filed Aug. 1, 2008, now U.S. Pat. No. 7,599,997, each entitled “MULTI-HOMED DATA FORWARDING STORAGE”; U.S. patent application Ser. No. 12/989,638, filed Feb. 25, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/041817, filed Apr. 27, 2009, which is a continuation of U.S. patent application Ser. No. 12/109,458, filed Apr. 25, 2008, now abandoned, each entitled “REAL-TIME COMMUNICATIONS OVER DATA FORWARDING FRAMEWORK”; U.S. application Ser. No. 12/991,383, filed Jan. 24, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/042971, filed May 6, 2009, which is a continuation of U.S. patent application Ser. No. 12/116,610, filed May 7, 2008, now abandoned, each entitled “DELETION IN DATA FILE FORWARDING FRAMEWORK”; U.S. patent application Ser. No. 13/119,133, filed Mar. 15, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/058368, filed Sep. 25, 2009, which is a continuation of U.S. patent application Ser. No. 12/240,925, filed Sep. 29, 2008, now U.S. Pat. No. 7,631,051, each entitled “GEOLOCATION ASSISTED DATA FORWARDING STORAGE”; U.S. patent application Ser. No. 13/119,124, filed Mar. 15, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/058362, filed Sep. 25, 2009, which is a continuation of U.S. patent application Ser. No. 12/240,951, filed Sep. 29, 2008, now U.S. Pat. No. 7,636,759, each entitled “ROTATING ENCRYPTION IN DATA FORWARDING STORAGE”; U.S. patent application Ser. No. 13/119,122, filed Mar. 15, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/058052, filed Sep. 23, 2009, which is a continuation of U.S. patent application Ser. No. 12/240,967, filed Sep. 29, 2008, now U.S. Pat. No. 7,636,760, each entitled “SELECTIVE DATA FORWARDING STORAGE”; U.S. patent application Ser. No. 13/119,147, filed Mar. 15, 2011, which is the U.S. National Phase of International Application No. PCT/US2009/058376, filed Sep. 25, 2009, which is a continuation of U.S. patent application Ser. No. 12/240,991, filed Sep. 29, 2008, now U.S. Pat. No. 7,636,761, each entitled “MEASUREMENT IN DATA FORWARDING STORAGE”; and U.S. application Ser. No. 12/052,345, filed Mar. 20, 2008, entitled “REDUNDANT DATA FORWARDING STORAGE.” Each of the above-listed references is hereby incorporated by reference in its entirety.

US Referenced Citations (184)
Number Name Date Kind
3786424 McVoy et al. Jan 1974 A
5751928 Bakalash May 1998 A
5774553 Rosen Jun 1998 A
5787258 Costa et al. Jul 1998 A
5978485 Rosen Nov 1999 A
5978839 Okuhara et al. Nov 1999 A
6085030 Whitehead et al. Jul 2000 A
6151395 Harkins Nov 2000 A
6205478 Sugano et al. Mar 2001 B1
6219691 Youn Apr 2001 B1
6260159 Garnett et al. Jul 2001 B1
6298380 Coile et al. Oct 2001 B1
6505213 Kamada et al. Jan 2003 B1
6549957 Hanson et al. Apr 2003 B1
6587866 Modi et al. Jul 2003 B1
6650640 Muller et al. Nov 2003 B1
6677976 Parker et al. Jan 2004 B2
6684258 Gavin et al. Jan 2004 B1
6721275 Rodeheffer et al. Apr 2004 B1
6724767 Chong et al. Apr 2004 B1
6745289 Gruner et al. Jun 2004 B2
6907447 Cooperman et al. Jun 2005 B1
6941338 Madsen Sep 2005 B1
6983303 Pellegrino et al. Jan 2006 B2
6985476 Elliott et al. Jan 2006 B1
7007142 Smith Feb 2006 B2
7035933 O'Neal et al. Apr 2006 B2
7043530 Isaacs et al. May 2006 B2
7061923 Dugan et al. Jun 2006 B2
7061925 Joshi Jun 2006 B2
7103824 Halford Sep 2006 B2
7120631 Vahalia et al. Oct 2006 B1
7136638 Wacker et al. Nov 2006 B2
7139820 O'Toole, Jr. et al. Nov 2006 B1
7143170 Swildens et al. Nov 2006 B2
7191298 Kaminsky et al. Mar 2007 B2
7197576 Lo et al. Mar 2007 B1
7209973 Tormasov et al. Apr 2007 B2
7218952 Alperovich et al. May 2007 B1
7240214 Gazzetta et al. Jul 2007 B2
7257817 Cabrera et al. Aug 2007 B2
7266776 Quillen et al. Sep 2007 B2
7274706 Nguyen et al. Sep 2007 B1
7277941 Ignatius et al. Oct 2007 B2
7295556 Roese et al. Nov 2007 B2
7320059 Armangau et al. Jan 2008 B1
7327731 Kumar et al. Feb 2008 B1
7346063 Herbst Mar 2008 B1
7346909 Eldar et al. Mar 2008 B1
7356540 Smith et al. Apr 2008 B2
7356567 Odell et al. Apr 2008 B2
7363316 Anderson et al. Apr 2008 B2
7366868 Reuter et al. Apr 2008 B2
7404002 Pereira Jul 2008 B1
7415506 Gajjar et al. Aug 2008 B2
7424514 Noble et al. Sep 2008 B2
7426471 Briscoe et al. Sep 2008 B1
7426574 Liao Sep 2008 B2
7426637 Risan et al. Sep 2008 B2
7426745 McCarty Sep 2008 B2
7428219 Khosravi Sep 2008 B2
7428540 Coates et al. Sep 2008 B1
7430584 Fein et al. Sep 2008 B1
7457835 Toebes et al. Nov 2008 B2
7529784 Kavuri et al. May 2009 B2
7599997 Fein et al. Oct 2009 B1
7631051 Fein et al. Dec 2009 B1
7631052 Fein et al. Dec 2009 B2
7636758 Fein et al. Dec 2009 B1
7636759 Fein et al. Dec 2009 B1
7636760 Fein et al. Dec 2009 B1
7636761 Fein et al. Dec 2009 B1
7636762 Fein et al. Dec 2009 B1
7636763 Fein et al. Dec 2009 B1
7636764 Fein et al. Dec 2009 B1
7668926 Fein et al. Feb 2010 B2
7668927 Fein et al. Feb 2010 B2
7673009 Fein et al. Mar 2010 B2
7685248 Fein et al. Mar 2010 B1
7707287 Shafir et al. Apr 2010 B2
7783608 Shitomi Aug 2010 B2
7844695 Fein et al. Nov 2010 B2
7865586 Cohn Jan 2011 B2
7877456 Fein et al. Jan 2011 B2
7924888 Schultze et al. Apr 2011 B2
7974645 Choi-Grogan Jul 2011 B2
8266237 Moore et al. Sep 2012 B2
8352635 Fein et al. Jan 2013 B2
8386585 Fein et al. Feb 2013 B2
8452844 Fein et al. May 2013 B2
8458285 Fein et al. Jun 2013 B2
8554866 Fein et al. Oct 2013 B2
8599678 Fein et al. Dec 2013 B2
20020059444 Shinno May 2002 A1
20020158899 Raymond Oct 2002 A1
20020194371 Kadoi Dec 2002 A1
20030093463 Graf May 2003 A1
20030101200 Koyama et al. May 2003 A1
20030137880 Barrenscheen et al. Jul 2003 A1
20030158958 Chiu Aug 2003 A1
20040064633 Oota Apr 2004 A1
20040093390 Oberdorfer May 2004 A1
20040165525 Burak Aug 2004 A1
20040223503 Lynch et al. Nov 2004 A1
20040250029 Ji et al. Dec 2004 A1
20050010647 Durham Jan 2005 A1
20050010685 Ramnath Jan 2005 A1
20050015466 Tripp Jan 2005 A1
20050044162 Liang et al. Feb 2005 A1
20050071568 Yamamoto et al. Mar 2005 A1
20050131984 Hofmann et al. Jun 2005 A1
20050201409 Griswold et al. Sep 2005 A1
20050216473 Aoyagi Sep 2005 A1
20050216727 Chattopadhyay et al. Sep 2005 A1
20050240749 Clemo et al. Oct 2005 A1
20050243823 Griswold et al. Nov 2005 A1
20060031593 Sinclair Feb 2006 A1
20060091007 Inoue et al. May 2006 A1
20060095512 Noma May 2006 A1
20060143505 Olarig Jun 2006 A1
20060159456 Gumaste et al. Jul 2006 A1
20060209822 Hamamoto et al. Sep 2006 A1
20060212551 Kao et al. Sep 2006 A1
20060242212 Brinkmann et al. Oct 2006 A1
20070005694 Popkin et al. Jan 2007 A1
20070011097 Eckleder Jan 2007 A1
20070050446 Moore Mar 2007 A1
20070050761 Hester et al. Mar 2007 A1
20070055765 Lisiecki et al. Mar 2007 A1
20070058606 Koskelainen Mar 2007 A1
20070073775 Walker et al. Mar 2007 A1
20070073965 Rajakarunanayake Mar 2007 A1
20070079087 Wang et al. Apr 2007 A1
20070171832 Agmon et al. Jul 2007 A1
20070186068 Agrawal Aug 2007 A1
20070195772 Shadish Aug 2007 A1
20070214105 Sfarti et al. Sep 2007 A1
20070266220 Nelson Nov 2007 A1
20070271349 Clemo et al. Nov 2007 A1
20080013448 Horie et al. Jan 2008 A1
20080016564 Claudatos et al. Jan 2008 A1
20080037777 Ignatius et al. Feb 2008 A1
20080059495 Kiessig et al. Mar 2008 A1
20080071855 Farber et al. Mar 2008 A1
20080084330 Picard Apr 2008 A1
20080091744 Shitomi et al. Apr 2008 A1
20080095075 Monier Apr 2008 A1
20080101277 Taylor et al. May 2008 A1
20080104085 Papoutsakis et al. May 2008 A1
20080114891 Pereira May 2008 A1
20080144655 Beam et al. Jun 2008 A1
20080151893 Nordmark et al. Jun 2008 A1
20080155094 Roese et al. Jun 2008 A1
20080155252 Nambiar Jun 2008 A1
20080155275 Natarajan et al. Jun 2008 A1
20080155537 Dinta et al. Jun 2008 A1
20080181224 Van Hensbergen et al. Jul 2008 A1
20080222415 Munger et al. Sep 2008 A1
20080222492 Earhart et al. Sep 2008 A1
20080225842 Goldfein et al. Sep 2008 A1
20080225888 Valluri et al. Sep 2008 A1
20080228943 Balus et al. Sep 2008 A1
20080313318 Vermeulen et al. Dec 2008 A1
20090006808 Blumrich et al. Jan 2009 A1
20090036171 Palenius Feb 2009 A1
20090063187 Johnson et al. Mar 2009 A1
20090063419 Nurminen et al. Mar 2009 A1
20090067322 Shand et al. Mar 2009 A1
20090092124 Singhal et al. Apr 2009 A1
20090104978 Ben-Ami Apr 2009 A1
20090141621 Fan et al. Jun 2009 A1
20090204320 Shaffer et al. Aug 2009 A1
20090228593 Takeda Sep 2009 A1
20090234933 Fein et al. Sep 2009 A1
20090238167 Fein et al. Sep 2009 A1
20090254567 Fein et al. Oct 2009 A1
20090300635 Ferris Dec 2009 A1
20100185581 Bakalash Jul 2010 A1
20110125721 Fein May 2011 A1
20110138075 Fein Jun 2011 A1
20110167131 Fein Jul 2011 A1
20110179131 Fein Jul 2011 A1
20110258049 Ramer et al. Oct 2011 A1
20130124336 Fein May 2013 A1
Foreign Referenced Citations (101)
Number Date Country
2009296500 Jan 2013 AU
2009296744 Jan 2013 AU
2009240392 Feb 2013 AU
2009244352 Feb 2013 AU
2009276965 Feb 2013 AU
2009296495 Feb 2013 AU
2009268716 Mar 2013 AU
2009296496 Mar 2013 AU
2009225859 Sep 2013 AU
2009268792 Oct 2013 AU
2009296490 Nov 2013 AU
1529964 Sep 2004 CN
1823506 Aug 2006 CN
1855857 Nov 2006 CN
101188569 May 2008 CN
101330451 (A) Dec 2008 CN
102016820 Apr 2011 CN
102016828 Apr 2011 CN
0 458 567 May 1991 EP
0 722 591 Jun 1999 EP
1 048 998 Nov 2000 EP
1 266 308 Dec 2002 EP
1 968 257 Sep 2003 EP
1 357 476 Oct 2003 EP
1406446 Apr 2004 EP
1 802 051 Nov 2006 EP
1322068 Nov 2006 EP
1 746 790 Jan 2007 EP
1 372 316 May 2007 EP
1798934 Jun 2007 EP
1798937 Jun 2007 EP
1826968 Aug 2007 EP
1485787 Oct 2007 EP
1776639 Dec 2007 EP
1479236 Jan 2008 EP
1 931 090 Jun 2008 EP
56-013857 Feb 1981 JP
56-116144 Sep 1981 JP
56-116145 Sep 1981 JP
60-241346 Nov 1985 JP
61-163757 Jul 1986 JP
61-165889 Jul 1986 JP
06-021953 Jan 1994 JP
06-068047 Mar 1994 JP
10-254761 Sep 1998 JP
11-065911 Mar 1999 JP
2002-510409 Apr 2002 JP
2002-268952 Sep 2002 JP
2003-296176 Oct 2003 JP
2003-296179 Oct 2003 JP
2005-070987 Mar 2005 JP
2005-216241 Aug 2005 JP
2005-275937 Oct 2005 JP
2005-310126 Nov 2005 JP
2005-339325 Dec 2005 JP
2007-304665 Nov 2007 JP
2007-310673 Nov 2007 JP
2008-033406 Feb 2008 JP
2008-052340 Mar 2008 JP
2008-516304 May 2008 JP
2008-192129 Aug 2008 JP
2008-537258 Sep 2008 JP
2011-519097 Jun 2011 JP
2011-523120 Aug 2011 JP
2011-528141 Nov 2011 JP
WO 9404991 Mar 1994 WO
WO 9719427 May 1997 WO
WO 9719427 May 1997 WO
WO 0167303 Sep 2001 WO
WO 02052417 Jul 2002 WO
WO 03009544 Jan 2003 WO
WO 2004059529 Jul 2004 WO
WO 2005034441 Apr 2005 WO
WO 2005062578 Jul 2005 WO
WO 2006026567 Mar 2006 WO
WO 2006115594 Nov 2006 WO
WO 2006124084 Nov 2006 WO
WO 2006124217 Nov 2006 WO
WO 2007001631 Jan 2007 WO
WO 2007019689 Feb 2007 WO
WO 2009021289 Feb 2007 WO
WO 2007031593 Mar 2007 WO
WO 2008006079 Jan 2008 WO
WO 2008013036 Jan 2008 WO
WO 2008070958 Jun 2008 WO
WO 2008094930 Aug 2008 WO
WO 2008108699 Sep 2008 WO
WO 2008109390 Sep 2008 WO
WO 2009117259 Sep 2009 WO
WO 2009126418 Oct 2009 WO
WO 2009132345 Oct 2009 WO
WO 2009137571 Nov 2009 WO
WO 2010005928 Jan 2010 WO
WO 2010005935 Jan 2010 WO
WO 2010014368 Feb 2010 WO
WO 2010036712 Apr 2010 WO
WO 2010036881 Apr 2010 WO
WO 2010036883 Apr 2010 WO
WO 2010036886 Apr 2010 WO
WO 2010036887 Apr 2010 WO
WO 2010036891 Apr 2010 WO
Non-Patent Literature Citations (143)
Entry
Akasaka, Tsutomu, “Remote Copy Technology of ETERNUS6000 and ETERNUS3000 Disk Arrays”, Fujitsu Sci. Tech. J., Jan. 2006, vol. 42, Issue 1, pp. 9-16.
Amazon Elastic Compute Cloud (Amazon EC2)—Limited Beta, Amazon.com: Amazon EC2, Amazon Elastic Compute Cloud, Virtual Grid Computing: . . . p. 1 of 9, http://web.archive.org/web/20070705164650rn—2/www.amazon.com/b?ie=UTF8&node=2 . . . Jun. 18, 2010.
Aref, et al.: “Nile: A Query Processing Engine for Data Streams”, Department of Computer Sciences Purdue University, West Lafayette, IN., USA, Data Engineering, 2004. Proceedings. 20th International Conference on Mar. 30-Apr. 2, 2004.
Ari, Ismail: “Design and Management of Globally-Distributed Network Caches”, University of California Santa Cruz, Sep. 2004, 220 pages.
Australian Office Action (Exam Report No. 2) re AU Application No. 2009240392, dated Sep. 25, 2012.
Australian Office Action re AU Application No. 2009240392, dated May 1, 2012.
Australian Office Action re AU Application No. 2009268792, dated Jun. 8, 2012.
Australian Office Action re AU Application No. 2009268792, dated Nov. 13, 2012.
Australian Office Action, re AU Application No. 2009-268716, dated Feb. 29, 2012.
Australian Office Action, re AU Application No. 2009276965, dated Mar. 22, 2012.
Australian Office Action, re AU Application No. 2009296495, dated Apr. 5, 2012.
Australian Office Action, re AU Application No. 2009296496, dated May 23, 2012.
Australian Office Action, re AU Application No. 2009296500, dated Apr. 13, 2012.
Australian Office Action, re AU Application No. 2009296744, dated Apr. 19, 2012.
Cheeha Kim, “An Efficient Multicast Data Forwarding Scheme for Mobile Ad Hoc Networks”, Information Networking, Convergence in Broadband and Mobile Networking, International Conf., Lecture Notes in Computer Science, Jan. 31-Feb. 2, 2005, vol. 3391/2005, 510-519, DOI: 10.1007/978-3-540-30582-8_53.
EPO Exam Report re App. No. 09723251.6, dated Nov. 11, 2010.
EPO, Extended Search Report re EPO App. No. 09734559.9, dated Jul. 25, 2011.
Ertaul et al.: “ECC Based Threshold Cryptography for Secure Data Forwarding and Secure Key Exchange in MANET (I)”, LNCS 3462, pp. 102-113, Networking 2005, May 2-6, 2005 University of Waterloo, Waterloo Ontario Canada.
Ertaul et al: “Implementation of Homomorphic Encryption Schemes for Secure Packet Forwarding in Mobile Ad Hoc Networks (MANETs)”, IJCSNS, vol. 7, Issue No. 11, pp. 132-141, Nov. 2007.
European Extended Search Report, re EP Application No. 09743557.2 dated Nov. 9, 2012.
European Extended Search Report, re EP Application No. 09795041, dated Oct. 11, 2011.
European Extended Search Report, re EP Application No. 09816904.8, dated Feb. 6, 2012.
European Office Action, re EP Application No. 09795041, dated Aug. 9, 2012.
European Supplemental Search Report, re EP Application No. 09795034.9, dated Nov. 7, 2011.
Girao et al.: “TinyPEDS: Tiny Persistent Encrypted Data Storage in Asynchronous Wireless Sensor Networks”, Ad Hoc Networks, vol. 5, Issue 7, p. 1073-1089 (Sep. 2007).
Hoke, et al.: “InteMon: Continuous Mining of Sensor Data in Large-scale Self-* Infrastructures”, Carnegie Mellon University. Appears in ACM SIGOPS Operating Systems Review, 40(3): pp. 38-44. ACM Press, Jul. 2006.
Huang et al., “Secure Data Forwarding in Wireless Ad Hoc Networks”, IEEE International Conference, May 16-20, 2005, pp. 3535-3531, vol. 5.
Ito, Takeshi et al. “Technologies of ETERNUSVS900 Storage Virtualization Switch”, Fujitsu Sci. Tech. J., Jan. 2006, vol. 42, Issue 1, pp. 17-23.
Japanese Decision of Rejection re JP Application No. 2011-529255, dated Nov. 14, 2012.
Japanese Office Action (Notice of Rejection), re JP Application No. JP 2011-500858, dated May 29, 2012.
Japanese Office Action dated Apr. 13, 2012, re JP App. No. 2011-517515.
Japanese Office Action re JP Application No. 2011-529185, dated Jan. 23, 2013.
Japanese Office Action re JP Application No. 2011-529255, dated Apr. 26, 2012.
Japanese Office Action re JP Application No. JP 2011-521163, dated May 23, 2012.
Japanese Office Action re JP Application No. 2011-508626, dated Sep. 27, 2012.
Japanese Office Action, re JP App. No. 2011-529258, dated Apr. 26, 2012.
Japanese Office Action, re JP Application No. 2011-504044, dated Sep. 4, 2012.
Japanese Office Action, re JP Application No. 2011-506496, dated Mar. 21, 2012.
Ji, Lusheng et al., “On Providing Secure and Portable Wireless Data Networking Services: Architecture and Data Forwarding Mechanisms”, IPSJ Journal, Oct. 2004, vol. 45, Issue 10, pp. 2261-2269.
Kim, Youngmin, “An Efficient Multicast Data Forwarding Scheme for Mobile Ad Hoc Networks”, Information Networking, Convergence in Broadband and Mobile Networking, International Conf., 2005, vol. ICOIN 2005, Jeju Island, Korea, Jan. 31-Feb. 2, 2005.
Komandur et al., “SPAM: A Data Forwarding Model for Multipoint-to-Multipoint Connection Support in ATM Networks”, IC3N, IEEE Computer Society, Sep. 1997, pp. 1-7.
Korean Office Action, re KR Application No. 10-2011-7000703, dated Jul. 16, 2012.
Mellor, Chris: “Storage in the cloud—Doppler shift or marketing gloss?”, Storage in the cloud—Feature—Techworld.com, Published: 12:00, Dec. 18, 2007, http://features.techworld.com/storage/3893/storage-in-the-cloud/.
Pakkala, et al.: “Towards a Peer-to-Peer Extended Content Delivery Network”, pp. 1-5, XP002615892, retrieved from the Internet: Url:http://www.eurasip.org/Proceedings/Ext/IST05/papers/99.pdf on Jan. 7, 2011.
PCT International Preliminary Report and Report on Patentability re App. No. PCT/US2009/49763, dated Jan. 20, 2011.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/041817, dated Oct. 26, 2010.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/042971, dated Nov. 9, 2010.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/49755, dated Jan. 20, 2011.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/58052, dated Mar. 29, 2011.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/58364, dated Mar. 29, 2011.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/58367, dated Mar. 29, 2011.
PCT International Preliminary Report on Patentability and Written Opinion re App. No. PCT/US2009/58368, dated Mar. 29, 2011.
PCT International Preliminary Report on Patentability and Written Report re App. No. PCT/US2009/58376, dated Mar. 29, 2011, mailed on Apr. 7, 2011.
PCT International Report on Patentability and Written Opinion Re PCT/US2009/036171 dated Sep. 21, 2010.
PCT International Report on Patentability and Written Opinion re PCT/US2009/037579, dated Oct. 12, 2010.
PCT International Report on Patentability re App. No. PCT/US2009/50008, dated Feb. 10, 2011.
PCT International Search Report and Written Opinion, PCT/US2009/036171, dated Sep. 4, 2009 in 13 pgs.
PCT International Search Report and Written Opinion, PCT/US2009/037579, dated Jun. 24, 2009, 12 pages.
PCT International Search Report re App. No. PCT/US2009/41817, dated Jun. 30, 2009.
PCT International Search Report re App. No. PCT/US2009/42971, dated Jun. 24, 2009.
PCT International Search Report re App. No. PCT/US2009/49755, dated Aug. 7, 2009.
PCT International Search Report re App. No. PCT/US2009/49763, dated Oct. 16, 2009.
PCT International Search Report re App. No. PCT/US2009/50008, dated Aug. 11, 2009.
PCT International Search Report re App. No. PCT/US2009/58052, dated Nov. 4, 2009.
PCT International Search Report re App. No. PCT/US2009/58362, dated Nov. 2, 2009.
PCT International Search Report re App. No. PCT/US2009/58368, dated Oct. 26, 2009.
PCT International Search Report re App. No. PCT/US2009/58376, dated Oct. 28, 2009.
PCT International Search Report, re App. No. PCT/US2009/58364, dated Nov. 4, 2009.
PCT International Search Report, re App. No. PCT/US2009/58367, dated Oct. 26, 2009.
Pierre, et al.: “Globule: A Collaborative Content Delivery Network”, IEEE Communications Magazine, vol. 44, No. 8, Aug. 1, 2006, pp. 127-133.
Purczynski, et al.: “Juggling with packets: floating data storage,” Internet Archive, Oct. 2003. [retrieved on Apr. 25, 2012]. Retrieved from the Internet: <URL: http://web.archive.org/web/20031017082346/http://lcamtuf.coredump.cx/juggling_with_packets.txt>.
Sharma, et al.: “Data Warehouse and Olap Technology Part-1”, power point presentation, first cited on Sep. 29, 2008 in U.S. Appl. No. 12/241,003.
Stefansson, et al.: “MyriadStore: Technical Report”, Swedish Institute of Computer Science (SICS), Technical Report T2006:09, ISSN 1100-3154, ISRN: SCIS-T02006/09-SE. May 3, 2006 in 13 pages.
Tanenbaum, Andrew S.: “Network Protocols”, Computing Surveys, vol. 13, No. 4, Dec. 1, 1981, pp. 453-489.
U.S. Interview Summary mailed Aug. 8, 2012, re U.S. Appl. No. 12/989,638.
U.S. Interview Summary re U.S. Appl. No. 13/003,505, dated Sep. 20, 2012.
U.S. Non-Final Office Action mailed May 14, 2009, re U.S. Appl. No. 12/132,804.
U.S. Notice of Allowance mailed Aug. 7, 2008, re U.S. Appl. No. 12/046,757.
U.S. Notice of Allowance mailed Jul. 15, 2009, re U.S. Appl. No. 12/132,804.
U.S. Notice of Allowance mailed Jul. 26, 2010, re U.S. Appl. No. 12/240,802.
U.S. Notice of Allowance mailed Sep. 15, 2010, re U.S. Appl. No. 12/099,498.
U.S. Office Action re U.S. Appl. No. 12/240,757, dated Jun. 22, 2009.
U.S. Office Action re U.S. Appl. No. 13/003,505, dated Aug. 22, 2012.
U.S. Interview Summary mailed Aug. 6, 2012, re U.S. Appl. No. 12/991,383.
U.S. Interview Summary mailed Aug. 6, 2012, re U.S. Appl. No. 13/057,072.
U.S. Interview Summary mailed Aug. 6, 2012, re U.S. Appl. No. 13/119,124.
U.S. Interview Summary mailed Aug. 7, 2012, re U.S. Appl. No. 13/003,502.
U.S. Interview Summary mailed Aug. 7, 2012, re U.S. Appl. No. 13/119,122.
U.S. Interview Summary mailed Aug. 7, 2012, re U.S. Appl. No. 13/119,147.
U.S. Interview Summary re U.S. Appl. No. 13/119,133, dated Aug. 7, 2012.
U.S. Non-Final Office Action mailed Apr. 16, 2010, re U.S. Appl. No. 12/099,498.
U.S. Notice of Allowance mailed Aug. 31, 2012, re U.S. Appl. No. 13/119,133.
U.S. Notice of Allowance mailed Dec. 14, 2012, re U.S. Appl. No. 12/991,383.
U.S. Notice of Allowance mailed Dec. 6, 2012, re U.S. Appl. No. 12/052,345.
U.S. Notice of Allowance mailed Sep. 17, 2012, re U.S. Appl. No. 13/057,072.
U.S. Notice of Allowance re U.S. Appl. No. 12/052,345, dated Apr. 2, 2012.
U.S. Notice of Allowance re U.S. Appl. No. 12/052,345, dated Dec. 29, 2011.
U.S. Notice of Allowance re U.S. Appl. No. 12/989,638, dated Nov. 5, 2012.
U.S. Office Action re U.S. Appl. No. 12/052,345, dated Oct. 4, 2011.
U.S. Response to U.S. Office Action re U.S. Appl. No. 12/170,901, dated Jun. 12, 2009.
U.S. Response to U.S. Office Action re U.S. Appl. No. 12/170,925, dated Jul. 12, 2009.
U.S. Response to U.S. Office Action re U.S. Appl. No. 12/241,003, dated Jun. 11, 2009.
Wagner, et al.: “A Network Application Programming Interface for Data Processing in Sensor Networks”, Rice University Technical Report TREE0705. 10 pages, Submitted to IPSN (Jan. 2007)—http://www.citeulike.org/user/leosteinfeld/article/5146294.
European Examination Report, re EP Application No. 09743557.2 dated Jul. 18, 2014.
European Supplemental Search Report re EPO Application No. 09 81 6907, mailed Jul. 10, 2014.
Japanese Decision of Rejection re JP Application No. JP 2011-517508, dated Oct. 2, 2014.
Japanese Decision of Dismissal of Amendment re JP Application No. JP 2011-517508, dated Oct. 2, 2014.
Australian Exam Report, re AU Application No. 2009296500, dated Oct. 24, 2012.
Australian Office Action, re AU Application No. 2009296490, issued Feb. 4, 2013.
Canadian Office Action, re CA Application No. CA 2,727,457, dated May 9, 2013.
Canadian Office Action, re CA Application No. CA 2,738,634, dated Dec. 20, 2013.
Canadian Office Action, re CA Application No. CA 2,722,415, dated Feb. 18, 2014.
Canadian Office Action, re CA Application No. CA 2,722,415, dated Mar. 25, 2013.
Chinese Notice of Rejection, re CN Application No. 200980109783.4, dated Nov. 20, 2013.
Chinese Office Action, re CN Application No. 200980126972.2, dated Sep. 2, 2013. w/machine translation.
Chinese Office Action, Second, re CN Application No. 200980138485.8, dated May 30, 2013.
Chou, et al.: “Dynamic route maintenance for geographic forwarding in mobile ad hoc networks,” Computer Networks 52 (2008) 418-431.
Hirose, et al., “Computer Software Dictionary,” Japan, Maruzen Company, Limited, Apr. 5, 1996, third printing, pp. 92-93 and 106-107.
Japanese Office Action dated Mar. 19, 2013, re JP App. No. 2011-517515. w/Machine Trans.
Japanese Office Action re JP Application No. 2011-529185, dated Aug. 22, 2013.
Japanese Office Action re JP Application No. 2011-529256, dated Aug. 22, 2013.
Japanese Office Action re JP Application No. 2011-529256, dated Mar. 12, 2013.
Japanese Office Action re JP Application No. JP 2011-517508, dated Aug. 2, 2013.
Japanese Office Action re JP Application No. JP 2011-517508, dated Feb. 27, 2013.
Japanese Office Action re JP Application No. JP 2011-517508, dated Feb. 4, 2014.
Japanese Office Action, re JP Application No. 2011-506496, dated Mar. 8, 2013.
Japanese Office Action, re JP Application No. 2011-529253, dated Feb. 13, 2013.
Korean Notice of Allowance re KR Application No. KR 10-2011-7009321, dated Apr. 30, 2013.
Korean Notice of Allowance, re KR Patent Application No. KR 10-2011-7009318, dated Mar. 19, 2013.
Na, et al.: “GLR: A novel geographic routing scheme for large wireless ad hoc networks,” Computer Networks 50 (2006) 3434-3448.
U.S. Interview Summary re U.S. Appl. No. 13/003,505, dated May 8, 2013.
U.S. Office Action re U.S. Appl. No. 13/003,505, dated Feb. 11, 2013.
U.S. Notice of Allowance mailed Jul. 25, 2013, re U.S. Appl. No. 13/003,505.
U.S. Notice of Allowance mailed Jun. 10, 2013, re U.S. Appl. No. 13/119,147.
U.S. Notice of Allowance re U.S. Appl. No. 13/119,122, dated Mar. 5, 2013.
U.S. Notice of Allowance re U.S. Appl. No. 13/119,124, dated Mar. 15, 2013.
Chinese Office Action, re CN Application No. 200980126972.2, dated Mar. 17, 2014. w/machine translation.
Balon, S. et al.: “A scalable and decentralized fast-rerouting scheme with efficient bandwidth sharing,” Computer Networks 50 (2006) 3043-3063.
Bryant, S., et al. “Internet Draft IP Reroute Using Not-via Addresses,” Jul. 2007, (vol. rtgwg, No. 1, XP015051639, ISSN: 0000-0004).
European Extended Search Report, re EP Application No. 09816903.0 (PCT/US2009/058367), dated Jan. 5, 2015.
Previdi, Stefano: “IP Fast ReRoute Technologies,” Asia Pacific Regional Internet Conference on Operational Technologies, APRICOT 2006, Mar. 2, 2006, pp. 1-60, XP055051142, Perth Australia.
Office Action for Japanese Application No. 2011-529253 dated Mar. 19, 2015.
Related Publications (1)
Number Date Country
20130124336 A1 May 2013 US
Continuations (9)
Number Date Country
Parent 12170925 Jul 2008 US
Child 13033502 US
Parent 12170901 Jul 2008 US
Child 13003505 US
Parent 12184866 Aug 2008 US
Child 13057072 US
Parent 12109458 Apr 2008 US
Child 12989638 US
Parent 12116610 May 2008 US
Child 12991383 US
Parent 12240925 Sep 2008 US
Child 13119133 US
Parent 12240951 Sep 2008 US
Child 13119124 US
Parent 12240967 Sep 2008 US
Child 13119122 US
Parent 12240991 Sep 2008 US
Child 13119147 US
Continuation in Parts (10)
Number Date Country
Parent 13033502 US
Child 13735759 US
Parent 13003505 US
Child 12170925 US
Parent 13057072 US
Child 12170901 US
Parent 12989638 US
Child 12184866 US
Parent 12991383 US
Child 12109458 US
Parent 13119133 US
Child 12116610 US
Parent 13119124 US
Child 12240925 US
Parent 13119122 US
Child 12240951 US
Parent 13119147 US
Child 12240967 US
Parent 12052345 Mar 2008 US
Child 12240991 US