The present invention generally relates to a system for self-correcting updating errors associated with a table. More specifically, the invention relates to a system for updating a data table used in a distributed networking environment in a manner that periodically corrects errors generated during the update process.
With the growing number of technological advancements, computer systems are becoming increasingly complex. They may both store and process information in a host of locations. Some systems even use various components to independently process different kinds of information. When the workload of a system is distributed among its collaborative elements, the associated data may be distributed as well. Examples include master/slave, client/server, peer-to-peer, and other arrangements.
Distributing data may create several challenges. A distributed work environment may be challenging because of errors associated with using a distributed table. For example, a processor may attempt to add an entry to the distributed table during a table update process. The processor may make this attempt believing that there is enough room in the distributed table to add an additional entry because the table, as a whole, is not full. However, the add attempt may fail because the particular location within the distributed table where the processor is adding the entry is actually full. Because the processor does not realize this, an internal constraint error occurs.
The structure of a distributed table contributes to the creation of internal constraints.
As shown in
Using a distributed data table may also create sequencing challenges that complicate the synchronization process. Typically, the synchronization process only modifies or deletes an entry after it has been added. Because internal constraints may prevent a successful add from occurring, the synchronization process may be hindered. An additional complication arises once the distributed table has gotten out of synchronization for a particular entry. That is, typical add, modify, and delete table actions performed for that entry must be amended by the synchronization process to ensure the distributed table is properly maintained. In other words, the synchronization process must make sure that it does not attempt to modify an entry unless it is certain that it was successfully added, nor attempt to delete an entry that does not exist in the distributed table. Moreover, additional problems may result from attempting to modify or delete non-existent entries, such as causing the device to malfunction. Similarly, failing to automatically retry entry add failures may prevent a device from performing as expected.
Thus, there is a general need in the art for a more effective approach to updating distributed data tables that does not sacrifice the efficiency in utilizing a distributed work environment. There is a further need for a table update approach that may correct errors resulting from the add, modify, and delete actions occurring out of sequence. Moreover, there is a need for an update approach that does not unduly burden computer resources in solving the above-identified problems.
The present invention meets the needs described above in a system for updating a data table used in a distributed networking environment in a manner that periodically corrects errors generated during the update process. This unique system may operate at peak operating efficiency by self-correcting errors that may occur while updating a distributed data table. This error correction substantially reduces the number of interrupts to the update process, which increases the operating efficiency.
Generally described, the system self-corrects updating errors to a distributed data table. To do this, the system adds an entry to the distributed data table after receiving an update request. The system sets a first indicator to reflect whether adding the entry was successful. The system also periodically compares a current table capacity level with a maximum table capacity level. Finally, the system periodically attempts to add the entry so long as the first indicator reflects a previously unsuccessful add and the current table capacity level is less than the maximum table capacity level.
More specifically, the system self-corrects updating errors to a distributed table by processing a first update request. The system also attempts to change at least one entry in the distributed data table in response to processing the update request. A first indicator is set to reflect whether the entry was successfully changed. The system periodically compares a maximum table capacity level with a current table capacity level. Periodically, a second indicator is set to reflect the current table capacity level. Finally, the system periodically attempts to change the entry so long as the first indicator reflects a previously unsuccessful change and the second indicator reflects less than the maximum table capacity level.
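The update-and-retry cycle summarized above may be illustrated with a minimal Python sketch. All names here (`DistributedTable`, `add_failed`, `periodic_retry`, and so on) are illustrative assumptions, not identifiers from the invention itself; the sketch shows only the logic of setting an indicator on a failed add and retrying while capacity permits.

```python
# Illustrative sketch (names are assumptions, not from the specification).
class DistributedTable:
    def __init__(self, max_capacity):
        self.max_capacity = max_capacity   # maximum table capacity level
        self.entries = {}                  # key -> stored value
        self.add_failed = {}               # key -> True if the last add failed

    def current_capacity(self):
        return len(self.entries)

    def try_add(self, key, value):
        """Attempt an add; set the first indicator to reflect the outcome."""
        if self.current_capacity() >= self.max_capacity:
            self.add_failed[key] = True    # add unsuccessful
            return False
        self.entries[key] = value
        self.add_failed[key] = False       # add successful
        return True

    def periodic_retry(self, pending):
        """Periodically retry adds whose indicator reflects a prior failure,
        so long as the current capacity level is below the maximum."""
        for key, value in pending.items():
            if self.add_failed.get(key) and self.current_capacity() < self.max_capacity:
                self.try_add(key, value)
```

In this sketch, a failed add leaves the indicator set so a later periodic pass can correct it once the temporary full condition clears.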
The inventive system may be implemented in a computing device for self-correcting updating errors. This computing device has a main data table with numerous entries and a distributed data table with numerous entries. The entries in the distributed data table are representatives of entries in the main data table. A processor connects to both the distributed data table and the main data table. This processor periodically produces update requests so the entries in the distributed data table reflect changes in the main data table. The computing device also includes an apparatus for storing algorithms. This apparatus connects to the processor so that these algorithms may self-correct updating errors for the distributed data table.
The invention may be understood by reference to the following descriptions taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements.
While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and subsequently are described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed. In contrast, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
To ensure that the entries in the distributed data table 240 reflect the most recent entry in the master table 230, the device 200 periodically updates the data entries in the distributed data table 240 using processor 210. Algorithms 220 and gauge 250 facilitate that update process by self-correcting updating errors. The algorithms 220 may include a table synchronization algorithm 223 and a recurring task algorithm 225. These will be described in greater detail with reference to subsequent figures.
Entries in the distributed data table 240 may contain various kinds of information. Some examples include a value, which may be routing information or address information. In addition, each entry may contain an indicator that identifies whether the last operation was successful (e.g., add indicator) and a failed counter. The failed counter may indicate the number of times the entry was not successfully added.
The historical information 305 includes an indicator that depicts whether the current update was successful using a TRUE or FALSE value. This information may also include a failed counter, which tallies the number of times that the current entry was not successfully updated. This historical information 305 is stored for each entry within a given table. Though the failed counter and indicator may be stored within a given entry as described above, they may also be stored in a separate location, such as a separate control array used for table maintenance. In an alternative embodiment, the failed counter may not be used at all.
The gauge 250 indicates whether the distributed table 240 includes empty entries. That is, when the distributed table 240 is completely full and has no more empty entries, the gauge 250 registers a maximum capacity level 310. As the device 200 performs various operations, the number of entries within the table varies. The current capacity level 320 indicates the number of entries that the distributed table 240 includes at any given moment. Once the current capacity level 320 is equal to the maximum capacity level 310, the table is considered full.
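The per-entry historical information and the capacity gauge described above may be sketched as simple data structures. The field names below (`add_succeeded`, `failed_count`, `current`, `maximum`) are assumptions chosen for illustration, not terms from the specification.

```python
from dataclasses import dataclass

# Illustrative sketch of an entry's historical information 305 and the
# gauge 250; field names are assumptions, not from the specification.

@dataclass
class TableEntry:
    value: object                 # e.g., routing or address information
    add_succeeded: bool = True    # indicator: TRUE/FALSE for the last update
    failed_count: int = 0         # tallies unsuccessful add attempts

class Gauge:
    """Tracks the current capacity level 320 against the maximum 310."""
    def __init__(self, maximum):
        self.maximum = maximum    # maximum capacity level
        self.current = 0          # current capacity level

    def is_full(self):
        # The table is considered full once current equals maximum.
        return self.current >= self.maximum
```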
Step 467 is followed by step 470. In step 470, the update process 450 runs the table synchronization subroutine, which embodies the Table Synchronization Algorithm 223. This subroutine is described in greater detail with respect to
If an add request was not received in step 465, the “no” branch is followed from step 465 to step 474. In step 474, the update process 450 determines if it received a modify request. To accomplish this step, the update process 450 may use a separately running protocol. That is, this process determines if the information previously stored in the entry should be changed. If a modify request was received, the “yes” branch is followed from step 474 to step 476. In step 476, the update process 450 determines if the last attempt to add data to that entry failed. The manner that the update process 450 determines this step is described with reference to
If the update process 450 determines that a modify request was not received in step 474, the “no” branch is followed from step 474 to step 480, implying this is a delete request. In step 480, this process determines if the last add request failed. This step is also described in greater detail with reference to
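The dispatch of add, modify, and delete requests in steps 465 through 480 can be sketched as follows. This is a hedged illustration, not the actual implementation: the function name, the dictionary-based table, and the `add_failed` map are all assumptions. The key behavior shown is that modify and delete requests are skipped when the last add of the target entry failed, since the entry does not actually exist in the distributed table.

```python
# Illustrative sketch of the request dispatch (steps 465-480); all names
# are assumptions for illustration.
def handle_request(table, add_failed, request):
    """table: dict of key -> value; add_failed: dict of key -> bool."""
    kind, key = request["kind"], request["key"]
    if kind == "add":
        table[key] = request["value"]
        add_failed[key] = False
    elif kind == "modify":
        # Step 476: only modify if the last add for this entry succeeded.
        if not add_failed.get(key, True):
            table[key] = request["value"]
    elif kind == "delete":
        # Step 480: only delete if the entry was actually added.
        if not add_failed.get(key, True):
            table.pop(key, None)
```

An entry never seen before defaults to "last add failed," so modify and delete requests against non-existent entries are safely ignored rather than allowed to cause a malfunction.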
Turning now to
Step 510 is followed by step 520 where the subroutine 470 determines if the entry was successfully added. If the entry was successfully added, the subroutine 470 follows the “yes” branch from step 520 to step 530. In that step, the add indicator described in reference to
If the entry was not successfully added, the subroutine 470 follows the “no” branch from step 520 to step 540. In step 540, the subroutine 470 sets the add indicator to FALSE. Step 540 is followed by step 550. In step 550, the subroutine 470 increments the failed add counter. In an alternative embodiment that does not use a failed counter, one skilled in the art will appreciate that step 550 may be omitted. Step 550 is then followed by the end step 535.
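The synchronization subroutine's two branches (steps 530 and 540-550) can be sketched in a few lines. The names and the capacity check below are assumptions for illustration; the point is that a successful add sets the indicator TRUE, while a failed add sets it FALSE and increments the failed counter.

```python
# Illustrative sketch of the table synchronization subroutine 470;
# names and the capacity test are assumptions.
def synchronize_entry(table, entry, capacity):
    if len(table) < capacity:                  # room available: add succeeds
        table[entry["key"]] = entry["value"]
        entry["add_ok"] = True                 # step 530: indicator TRUE
    else:
        entry["add_ok"] = False                # step 540: indicator FALSE
        # Step 550 (omitted in embodiments without a failed counter):
        entry["failed_count"] = entry.get("failed_count", 0) + 1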
Otherwise, the “no” branch is followed from step 620 to step 630. In step 630, the subroutine 472 attempts to find table entries whose add indicator is set to FALSE. That is, the subroutine 472 searches the entries of all individual tables, or hash groups, for entries that were not previously stored successfully.
The decision step 635 follows step 630. In step 635, the subroutine 472 determines if the device 200 includes a failed add counter previously described in reference to
If the failed add value is less than this limit, the subroutine 472 follows the “yes” branch from step 640 to step 645. In step 645, the subroutine 472 completes the table synchronization subroutine 470 described with reference to
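The recurring task's retry pass (steps 620 through 645) can be sketched as below. The retry limit, function name, and entry layout are assumptions for illustration; the essential behavior is that retries only happen while the table has room, and only for entries still under the failed-add limit, which bounds the retry work per entry.

```python
RETRY_LIMIT = 3  # assumed bound on per-entry retry attempts (not from the source)

# Illustrative sketch of the recurring task subroutine 472; names are assumed.
def recurring_task(entries, table, capacity):
    if len(table) >= capacity:
        return                                    # step 620: table full, nothing to do
    for entry in entries:                         # step 630: find FALSE indicators
        if not entry.get("add_ok", True) and entry.get("failed_count", 0) < RETRY_LIMIT:
            # Step 645: re-run the synchronization step for this entry.
            if len(table) < capacity:
                table[entry["key"]] = entry["value"]
                entry["add_ok"] = True
            else:
                entry["failed_count"] = entry.get("failed_count", 0) + 1
```

Bounding retries by a limit is what keeps the recurring task from consuming unbounded processor time on an entry that can never be added.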
Turning now to
If they are not equal, the subroutine 700 follows the “no” branch from step 720 to step 730. In step 730, the subroutine 700 retrieves the first entry whose add indicator is set to FALSE. Step 735 follows step 730 in which the routine 700 determines if the current failed add value is less than the predefined limit. If the value is less, the subroutine follows the “yes” branch from step 735 to step 740. In step 740, the subroutine 700 marks the entry. Step 740 is followed by step 745. If the failed add value is not less than the predefined limit, the “no” branch is followed from step 735 to step 745.
In step 745, the subroutine 700 determines if there are any more previously unsuccessful entries. If there are additional entries, the “yes” branch is followed from step 745 to step 750. In step 750, the subroutine 700 retrieves the next entry with an add indicator set to FALSE. Step 750 is followed by step 735.
If there are not any more entries, the “no” branch is followed from step 745 to step 755. In step 755, the subroutine runs the table synchronization subroutine 470 for all marked entries. The end step 725 follows step 755.
One skilled in the art will appreciate that the subroutine 700 is functionally identical to the subroutine 472 described with reference to
A system for self-correcting updates in a distributed data table according to the present invention creates a host of advantages. For example, failures due to temporary conditions in the distributed table are recoverable. Moreover, the recurring task algorithm avoids overburdening the processor 210 with unbounded entry-add retry attempts. In the implementation described with reference to
The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different, but equivalent, manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
This application claims priority to U.S. Application No. 60/567,769, filed May 3, 2004. The aforementioned application(s) are hereby incorporated herein by reference in their entirety.