The present application claims priority from Japanese patent application JP2007-332003 filed on Dec. 25, 2007, the content of which is hereby incorporated by reference into this application.
1. Field of the Invention
The present invention is directed to a service providing system in a distributed environment.
2. Description of the Related Art
The World Wide Web (hereinafter referred to as the Web), which has become popular since the 1990s, now plays a role as a foundation for many network services as of 2007.
In conventional network services, a download-centric data flow has been predominant, in which a server replies to a user's request for content. However, the recent introduction of communication modes such as Peer-to-Peer (P2P) and consumer generated media (CGM), including blogs and image posting sites, has caused a considerable increase in upload traffic, which sends information from the client side to the server side. In addition, upload-dedicated services are also used for monitoring using images.
Current physical networks have been established under the assumption that downloads overwhelmingly exceed uploads. For example, Asymmetric Digital Subscriber Line (ADSL), which is used for accessing the Internet in many households, allocates a broader bandwidth to the downlink than to the uplink. Moreover, download traffic may be reduced by caching downloaded content in an intermediate node, or by delivering content near a user in advance by using content delivery network (CDN) technology.
However, these technologies cannot deal with the increase in upload traffic. First, servers, which are the upload destinations, are located at a small number of sites. Accordingly, when uploads are performed concurrently and in parallel by a group of clients distributed over a broad area, a large load may be exerted on the server. Secondly, traffic gradually concentrates from the client group toward the server, and therefore, congestion may occur if line capacity is exceeded at any one point, which may remarkably decrease the efficiency of the network. Finally, service providers must prepare a large amount of storage beforehand to accommodate the amount of information that users contribute.
A cache system used for high-speed Web access generally caches the content provided as a reply to a request from the client, but not the request itself. This is because general Web services are provided by systems that cannot execute their processes without sending the uploaded content to the server.
The cache server disclosed in JP-A-2002-196969 temporarily buffers the content of a file in a cache located over a communication path when the file is uploaded onto the server, and then sends the content when the line becomes idle. Moreover, when access is attempted to content that has not yet been sent to the server, the server acquires the content from the cache and then replies to the access request.
It is assumed in the abovementioned related art that the content to be provided as a reply to the client is eventually placed on the server; thus, the peak of the traffic concentration may be reduced. However, neither the total amount of traffic directed at the server nor the amount of storage needed in the server is reduced.
An object of the present invention is to provide a system that may prevent upload traffic from concentrating loads on the line and the server, and more specifically, a system that may eliminate the necessity for service providers to prepare expensive storage upon initiation of service provision.
Another object of the present invention is to provide a system that may dynamically generate a reply to a request of content from a device placed over a communication path so as to be still able to provide the service even when the uploaded content is stored in the device placed over the communication path, and may permanently store the content necessary for the service in the device placed over the communication path.
To achieve the above objects, there is provided a service providing system according to the present invention, in which a client, a service gateway, and a server are connected to each other through a network. The client sends a first message to the server through the service gateway; the service gateway inquires of the server about a processing method for the first message by using a second message including the content of the first message; the server replies to the inquiry from the service gateway with the processing method; and the service gateway processes the first message from the client based on the received processing method.
To achieve the above objects of the invention, there is provided a service gateway according to the present invention. The service gateway is connected to a client and a server through a network. The service gateway includes a processing unit; a storage unit; and a network interface, wherein the network interface receives a first message sent from the client to the server, and the processing unit inquires of the server about a processing method for the first message by using a second message including the content of the first message, receives the processing method provided as a reply from the server, processes the first message based on the received processing method, and sends a generated reply message to the client.
Furthermore, there is provided a server according to the present invention. The server is connected to a client via a service gateway through a network. The server includes a processing unit; a storage unit; and a network interface, wherein the network interface receives, from the service gateway that has received a first message sent from the client to the server, a second message that includes the content of the first message and inquires about a processing method for the first message, and the processing unit generates, based on the second message, the processing method for the first message, and more preferably a group of templates for a reply message and generation logic for filling in the blanks of the templates.
According to the present invention, the bandwidth required over the network between the service gateway and the server may be reduced by accumulating the data uploaded toward the server on the service gateway.
In addition, the present invention may shorten the turnaround time of a reply to a request from the client by generating the reply on the service gateway, which is located physically near the client.
Moreover, since the content itself is stored in the storage included in the service gateway, the service provider may reduce the financial outlay for preparing the necessary storage upon initiation of services, provided that the storage is funded by those who upload the content.
Before various embodiments of the present invention are described, an example of a schematic construction of a service providing system according to the present invention will be described with reference to
The service gateway 103 may store the processing method received from the server 106 in a storage unit included in the service gateway 103. This eliminates the need to access the server 106 when responding to a second or later request from the client 101.
The service gateway 103 may store a reply message to the client 101, which is generated according to the processing method, in the storage unit of the service gateway 103. This allows the processing load of content generation to be reduced when replying to a second or later request for static content.
Furthermore, the service gateway 103 may store a part or the whole of the message from the client 101, which is analyzed according to the processing method, in the storage unit of the service gateway 103.
The service gateway 103 allocates a unique identifier to the data stored in the storage unit and sends the identifier to the server 106. By doing this, when receiving a request for a large size content, such as registration of an image, the service gateway 103 stores the large size content in the storage unit, so that the server 106 may manage only the metadata on the content (storage location of the image, its descriptions, its title, and the like), thus enabling the reduction in storage capacity on the side of the server 106.
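The division of roles above, in which the gateway keeps the large content and the server keeps only metadata, can be sketched as follows. This is an illustrative sketch only: a hash-based identifier scheme and a plain dictionary are assumed for the storage unit, and neither is specified by this description.

```python
import hashlib

def store_content(storage: dict, body: bytes, title: str) -> dict:
    """Store large content locally and return only the metadata to send to the server."""
    # Assumed unique-identifier scheme (a content hash); the actual format is unspecified.
    content_id = hashlib.sha256(body).hexdigest()[:16]
    storage[content_id] = body  # large content stays in the gateway's storage unit
    # Only metadata (identifier, title, size) would be forwarded to the server 106.
    return {"id": content_id, "title": title, "size": len(body)}
```

Under this scheme the server's storage grows only with the metadata, which is the capacity reduction described above.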
The processing method is preferably expressed in the server 106 as a group of templates for the reply message and a program for filling in the blanks of the templates. The service gateway generates dynamic content by embedding the results of program execution into the blanks of a template.
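The template-and-logic scheme can be sketched as follows. The "&lt;?sgw ...?&gt;" blank syntax follows the example appearing later in this description; the mapping of blank names to small programs is a hypothetical simplification for illustration.

```python
import re

def fill_template(template: str, generation_logic: dict) -> str:
    """Replace each <?sgw name?> blank with the result of running the matching logic."""
    def replace(match):
        key = match.group(1).strip()
        # Run the embedded program associated with this blank (hypothetical mapping).
        return str(generation_logic[key]())
    return re.sub(r"<\?sgw\s+(.*?)\?>", replace, template)

# Example: one blank filled by one program.
page = fill_template(
    "<html><body><?sgw ad_url?></body></html>",
    {"ad_url": lambda: "http://ads.example/banner1"},
)
```

Because the template and the programs travel together, the gateway can regenerate the page for later requests without contacting the server again.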
The service gateway 103 identifies the client 101 by an authorization token included in the message. The term "authorization token" refers to an Authorization header or the like in HTTP. This authorization token may also be used to determine whether a request is to be processed or not.
In addition, a process of receiving an advertisement from an advertisement provider and embedding the advertisement into the content may also be performed, by describing, as the program of the processing method, a means for acquiring the information necessary for generating the reply message from an external server such as an advertisement server.
A Web-based image registration system will be described as a distributed service providing system according to the first embodiment.
The distributed service providing system according to the first embodiment includes clients 101-1 and 101-2, access networks 102-1 to 102-3, service gateways 103-1 to 103-3, and a core network 105.
As shown in
The server 106 and the client 101 are also computers that have the same construction as the service gateway 103. The only differences from the service gateway 103 are the information recorded in the memory 305 and the fact that the connection destination of the network interface 304 is the access network 102.
A personal storage 401 shown in
A logic/template/contents cache 411 shown in
A generated link management table 421 shown in
The table groups are stored in the memory 305 included in the service gateway 103.
The content storage location management table 501 shown in
A generation logic/content storage table 511 shown in
Firstly, when the service gateway 103 receives a request from the client 101, the CPU 301 extracts the URL from the header of the request (step 601). Next, the CPU 301 searches the generated link management table 421 by using the URL as a key (step 602). When an entry exists in the generated link management table 421, the CPU 301 examines whether the user ID of the transmission source of the request is included in the readable user's ID 426 of the entry (step 603). When the user ID of the transmission source is not included in the readable user's ID 426, the access is not permitted and the CPU 301 returns an error to the client 101 (step 604). When the user ID of the transmission source is included in the readable user's ID 426, the CPU 301 searches the logic/template/contents cache 411 by using the URL as a key (step 605). When no entry exists in the cache, the CPU 301 extracts the SGW-ID 423 included in the entry of the generated link management table 421 and acquires the content for the user ID and the file ID included in the entry from the service gateway 103 indicated by that ID. Then, after registering the reply in the logic/template/contents cache 411, the CPU 301 provides the content to the client as a reply (step 606). When an entry exists in the logic/template/contents cache 411, the CPU 301 provides the content shown in the generated content 415 of the entry to the client as a reply (step 607).
Next, when the URL does not exist in the generated link management table in step 602, the CPU 301 searches the logic/template/contents cache 411 by using the URL as a key (step 608). When no entry exists, the service gateway forwards the request from the client to the server 106 as it is (step 609). When an entry exists, the CPU 301 examines whether the content shown in the generated content 415 is "Null" or not (step 610). When the content is not "Null", the CPU 301 provides the content shown in the generated content 415 to the client as a reply (step 611). When the content is "Null", the CPU 301 generates content according to the template 413 and the generation logic 414 and provides the generated content to the client as a reply (step 612).
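The flow of steps 601 through 612 can be sketched as follows. This is a simplified sketch: the tables 421 and 411 are modeled as dictionaries, and the remote-fetch and error callables are hypothetical stand-ins for the processing described above.

```python
def handle_request(url, user_id, generated_links, cache, fetch_remote, client_error):
    """Sketch of the gateway's request flow (steps 601-612)."""
    entry = generated_links.get(url)                        # step 602: table lookup
    if entry is not None:
        if user_id not in entry["readable_users"]:          # step 603: access check
            return client_error()                           # step 604: error reply
        cached = cache.get(url)                             # step 605: cache lookup
        if cached is None:
            # step 606: fetch from the gateway holding the content, then cache it
            content = fetch_remote(entry["sgw_id"], entry["user_id"], entry["file_id"])
            cache[url] = {"generated_content": content}
            return content
        return cached["generated_content"]                  # step 607: cache hit
    cached = cache.get(url)                                 # step 608: second lookup
    if cached is None:
        return "FORWARD_TO_SERVER"                          # step 609: pass through
    if cached["generated_content"] is not None:             # steps 610-611
        return cached["generated_content"]
    # step 612: regenerate from the cached template and generation logic
    return cached["generation_logic"](cached["template"])
```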
The process is divided into three branches depending on whether the content received from the server 106 is a template/generation logic, a template/generation logic/analysis logic/reply template, or any other reply (step 701). When the template/generation logic is provided as a reply, the CPU 301 extracts cacheability information from the reply (step 702). The CPU 301 examines whether the cacheability information indicates that the template/logic may be cached (step 703). If the template/logic may be cached, the CPU 301 registers the template/generation logic in the logic/template/contents cache 411 by using the URL included in the reply as a key (step 704). If the template/logic may not be cached, or once registration to the cache has been completed, the CPU 301 generates a reply page according to the template/generation logic (step 705). The CPU 301 then examines the cacheability information again to determine whether the generated page may be cached (step 706). When the generated page may be cached, the CPU 301 stores the generated page in the cache similarly to step 704 (step 707). When the generated page may not be cached, or once it has been stored in the cache, the CPU 301 sends the generated page back to the client (step 708).
When the received content is determined to be the template/generation logic/analysis logic/reply template in step 701, the CPU 301 extracts the various types of information from the reply (step 709). Further, the CPU 301 registers the analysis logic and the reply template in the logic/template/contents cache 411 as the generation logic and the template, respectively, by using the URL included in the reply as a key (step 710). Thereafter, the CPU 301 executes the processes from step 702 to step 708 to reply to the client 101. When the other determination is made in step 701, the CPU 301 performs processes similar to those of a general Web cache. That is, the CPU 301 extracts the cacheability of the content from the pragma header information included in the reply (step 711). The CPU 301 stores the content in the generated content 415 of the logic/template/contents cache 411 according to its cacheability, by using the requested URL as a key (step 713). Then, the CPU 301 provides the received content as a reply to the client 101 as it is (step 714).
The server 106 extracts the URL information from the request sent from the service gateway 103 (step 801). The server 106 searches the generation logic/content storage table 511 by using the URL as a key (step 802). If no entry exists in the generation logic/content storage table 511, the server 106 replies to the service gateway with an error (step 803). If an entry exists, the server 106 extracts the authorization header included in the header of the request as the user ID (step 804). Then, the server 106 compares the extracted user ID with the permitted access information 516 of the entry (step 805). If access is not granted, the process proceeds to step 803 to reply with an error. If access is granted, the server 106 refers to the server logic 513 of the entry to examine whether the value is "Null" or not (step 806). If the value is not Null, the server 106 executes the program stored in the server logic 513 of the entry (step 807) to generate a reply and replies to the service gateway 103 with the generated reply (step 808). If the value is Null in step 806, the server 106 replies with the template 514 and the SGW generation logic 515 included in the entry (step 809).
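The server-side flow of steps 801 through 809 can be sketched as follows. The generation logic/content storage table 511 is simplified to a dictionary, and the field names are illustrative stand-ins for the table columns described above.

```python
def handle_gateway_request(url, user_id, storage_table):
    """Sketch of the server's flow (steps 801-809); returns a (kind, payload) pair."""
    entry = storage_table.get(url)                   # step 802: table lookup
    if entry is None:
        return ("error", None)                       # step 803: error reply
    if user_id not in entry["permitted_access"]:     # steps 804-805: access check
        return ("error", None)                       # back to step 803
    if entry["server_logic"] is not None:            # step 806: Null check
        # steps 807-808: execute the server logic and reply with its result
        return ("reply", entry["server_logic"]())
    # step 809: hand the template and SGW generation logic to the gateway
    return ("template", (entry["template"], entry["sgw_logic"]))
```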
Hereinafter, examples of a typical sequence and a message exchanged at the time of the sequence will be described according to an embodiment.
In process 901, client A 101-1 sends the server 106 a page fetch request as a first message through the service gateway 1 103-1. Details of the first message sent during process 901 are shown in the message 1101 in
The server 106 that has received the sent message performs a process according to the flow of
Details on the message sent in process 903 are shown in the message 1102 in
The service gateway 1 103-1, which has received the reply from the server 106, performs a process according to the flow shown in
The generated content is sent from the service gateway 1 103-1 to client A 101-1 in process 904. The reply message sent herein is shown in the message 1201 in
Next, a sequence will be described when a list page fetch request is sent from a client 101 that belongs to a different service gateway 103.
Client B 101-2 makes a list page fetch request to the server 106 through the service gateway 2 103-2 (905). The first message sent herein is a variation of the message 1101 in which the URL, the user ID, and the password have been changed; thus, its detailed description is omitted. The service gateway 2 103-2, which has received the message, forwards the message to the server, since it is assumed herein that no entry exists in the generated link management table 421 or the logic/template/contents cache 411 (906). The server 106 checks the permitted access and selects the appropriate entry from the generation logic/content storage table 511 to reply to the service gateway 2 103-2 (907).
After storing the template/generation logic in the cache appropriately, the service gateway 2 103-2, which has received the reply, generates the content. It is assumed herein that the generation logic performs a data fetch from the neighboring service gateway 1 103-1. According to this logic, the service gateway 2 103-2 makes a content fetch request for the content belonging to client A to the service gateway 1 103-1 (908). The message sent herein is shown in the message 1202 in
The service gateway 2 103-2 generates a page by using the content included in the message 1203, and replies to client B 101-2 with the generated page (910). As a result, it has been shown that content is sent and cached appropriately between clients that belong to different service gateways 103.
Finally, a sequence will be described in which client A 101-1 posts content.
Client A 101-1 sends a posting page request to the server 106 through the service gateway 1 103-1 (1001). The first message sent in process 1001 is shown in the message 1301 in
Client A 101-1 performs the posting process by using the form included in the page generated herein. Specifically, client A 101-1 sends a POST request including the content to the URL included in the form (1005). The message sent in process 1005 is shown in the message 1501 in
The service gateway 1 103-1, which has received the message 1501, performs a process according to the flow of
As a result, the metadata is first sent to the server 106 (1006). The message sent in process 1006 is shown in the message 1601 in
It has been described above that, according to the first embodiment, the content is stored in the personal storage 401 of the service gateway 103 so that the server 106 may manage only the location information of the content.
A second embodiment, which achieves the insertion of an advertisement into content, will be described hereafter. The difference from the first embodiment is that an advertisement server 1701 for managing advertisement copy and a user attribute server 1702 for managing user attributes are added to the configuration, as shown in
Even though the advertisement server 1701 and the user attribute server 1702 are located in different access networks as shown in
When receiving the reply, the service gateway 1 103-1 starts generating the page that includes the advertisement, according to the generation logic. First, the service gateway 1 103-1 sends a user attribute fetch request to the user attribute server 1702 according to the instruction included in the template (1805). A user ID is included in this request. An example of this request message is shown as reference numeral 2001. In this example, the user ID "Client A" is described in the first row of the request message. The user attribute server 1702 retrieves the user's attributes (age, sex, etc.) from the user ID by using the user attribute table 2101 and replies with the user's attributes (1806). An example of the reply message is shown as reference numeral 2002. Here, the user's attributes are described in the body of the reply. The service gateway 1 103-1 sends the received attributes and the URL of the page to be generated to the advertisement server 1701 (1807). An example of the message sent during process 1807 is shown as reference numeral 2003.
The advertisement server 1701 selects an advertisement suitable for the page from the URL of the page and the user attributes, and replies with the URL of an image to be displayed as the advertisement and the URL of the site to which the advertisement leads (1808). An example of the message sent during process 1808 is shown as reference numeral 2004. The service gateway 1 103-1 embeds the received information into the template according to the generation logic to generate a page, and replies to client A 101-1 with the generated page. An example of the reply message is shown as reference numeral 2005. In this example, the part of the template represented as "&lt;?sgw . . . ?&gt;" included in the message 1901 is replaced by the URL of the advertisement.
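The advertisement insertion sequence (processes 1805 through 1808) can be sketched as follows. The attribute and advertisement servers are modeled as hypothetical callables, and "&lt;?sgw ad?&gt;" stands in for the template blank; note that only the attributes, not the user ID, reach the advertisement server.

```python
def insert_advertisement(user_id, page_url, template, attr_server, ad_server):
    """Sketch of processes 1805-1808: fetch attributes, select an ad, fill the blank."""
    attrs = attr_server(user_id)        # 1805-1806: fetch the user's attributes
    ad_url = ad_server(attrs, page_url)  # 1807-1808: ad chosen from attributes + page URL
    # Embed the advertisement URL into the template blank, per the generation logic.
    return template.replace("<?sgw ad?>", ad_url)
```

Because `ad_server` receives only `attrs` and `page_url`, the user ID never leaves the gateway's side, which is the privacy property noted below.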
It has been described that an advertisement is inserted based on the attributes of the client and the attributes of the page according to the abovementioned procedure. Since this embodiment adopts a method in which the user ID is not forwarded directly to the advertisement server 1701, leaks of personal information may be suppressed. A service provider who provides services through the server 106 also has the advantage of being able to control in detail, by using the template, the positions at which advertisements are inserted.
An image monitoring system with a Web camera will be described according to the third embodiment.
If this image monitoring were performed by a conventional system without the service gateway 103, the client 101 would fetch images without considering the importance of each image or the precedence among the camera groups, which may cause the traffic flowing through the core network to increase.
In this embodiment, the service gateway 103 is placed at a position near the camera group 2201 in the network to judge the precedence and thereby control the replies to the client 101. As a result, the reply precedence may be adjusted among two or more camera groups 2201, thus making it possible to perform high-quality image monitoring with less traffic.
Hereafter, the processing content of this embodiment will be described according to the representative sequence shown in
In response to the request, the Web camera 2201-1 checks the permitted access in its embedded processing unit, selects the template/logic for the URL when the access is permitted, and replies to the service gateway 103 (2303). An example of the reply message sent during process 2303 is shown as reference numeral 2402. The message includes the URL for fetching an image from the Web camera 2201, together with the template and the logic to be applied upon access to that URL, which serve as information for the image fetch request.
The service gateway 103 caches the template/logic included in the message 2402, and replies to the client 101 with the URL for the image fetch request, which is information for the image fetch request (2304).
The client 101 sends an image fetch request to the Web camera 2201-1 by using the URL included in the reply message (2305). An example of the message sent during process 2305 is shown as reference numeral 2501.
The service gateway 103 which relays the image fetch request initiates the analysis of the request.
In this embodiment, the image precedence information is stored in the reply header "X-Precedence". The service gateway 103 then compares this value with the recent image precedences of the other camera groups 2201 accommodated in the service gateway 103 (step 2604), and when the camera is determined to belong to the group with high precedence (here, the top three cameras) (step 2605), replies with the image (step 2606). Otherwise, the service gateway 103 returns the process to step 2601.
The client 101 sends the next image fetch request immediately after receiving the reply of the image. Repeating this procedure enables the client 101 to continuously receive images from the Web camera 2201 and use them as a moving picture.
It has been described above that, according to the abovementioned procedure, images are fetched at a constant frequency between the service gateway 103 and the Web camera 2201, while the precedence of information transmission is adjusted among a number of cameras between the service gateway 103 and the client 101, such that data with low precedence is provided as a reply to the client 101 at low frequency and data with high precedence is provided as a reply at high frequency.
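The precedence adjustment in steps 2604 through 2606 can be sketched as follows, assuming each camera's latest "X-Precedence" value is kept in a dictionary on the gateway; the top-three threshold follows the example above.

```python
def should_reply(camera_id, precedence, recent_precedences, top_n=3):
    """Sketch of steps 2604-2606: reply only for cameras in the high-precedence group."""
    recent_precedences[camera_id] = precedence  # record this camera's latest precedence
    # step 2604: compare against the other cameras accommodated in the gateway
    ranked = sorted(recent_precedences, key=recent_precedences.get, reverse=True)
    # step 2605: reply (step 2606) only if the camera ranks in the top_n
    return camera_id in ranked[:top_n]
```

Cameras outside the top group are simply skipped for this cycle, which is how low-precedence data ends up being relayed to the client at a lower frequency.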
The inventive system described above according to the various embodiments may be used for services provided over networks in general.
Number | Date | Country | Kind |
---|---|---|---|
2007-332003 | Dec 2007 | JP | national |