This specification relates generally to computer hardware and software architecture, and more particularly relates to a method and system for storage.
Relational databases were originally developed at a time when the speed of central processing units (“CPUs”) was relatively slow, the amount of random access memory was relatively small, the size of hard disks was relatively small, but the speed at which hard disks were accessed was relatively fast. Interestingly, hardware advancements have now led to a different paradigm, where CPUs are relatively fast, the amount of random access memory is relatively high, the size of hard disks is relatively large, but the speed at which hard disks are accessed is relatively slow. This new paradigm means that where large amounts of data are written in relatively small blocks across a large hard disk, the speed at which that data can be accessed is somewhat limited.
A method and system for storage is provided that in one embodiment includes a store process that continually appends data to the end of a data file without deleting the data file. Changes to data structures are managed by adding new data to the file and changing appropriate pointers in the data file to reflect the new data. Various application programming interfaces are also provided so that the store process can operate transparently to higher level applications. Various plug-ins are also provided so that the store process can utilize different types, configurations and numbers of storage devices.
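By way of illustration only, the append-only behaviour just summarized can be sketched as follows. This is a minimal sketch under assumed semantics; the class and method names are hypothetical and do not appear in the specification.

```python
# Illustrative sketch of an append-only store: data is only ever appended,
# never deleted; an "update" appends new data plus a new root marker.
# All identifiers here are hypothetical, not taken from the specification.

class AppendOnlyStore:
    def __init__(self):
        self.file = []          # models the data file as a list of records
        self.root = None        # location of the current root object

    def store(self, obj):
        """Append an object; return its location in the file."""
        self.file.append(obj)
        return len(self.file) - 1

    def commit(self, root_location):
        """Append a root marker pointing at the current root object."""
        self.file.append(("ROOT", root_location))
        self.root = root_location

# Usage: a replacement never overwrites; it appends and re-points the root.
s = AppendOnlyStore()
loc1 = s.store("O1")
s.commit(loc1)
loc2 = s.store("O2")        # replacement for O1
s.commit(loc2)
# The old object and root marker remain in the file on an archive basis.
```

Note that the file only ever grows: after two store/commit cycles it holds four records, with the earlier object and root marker still present.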
An aspect of the specification provides a method for storing comprising:
Another aspect of the specification provides a skip-list data structure readable by a processing unit. The processing unit is configured to perform operations on contents of the data structure. The skip-list data structure comprises a root object, a first child object and a plurality of additional child objects. The root object includes a pointer from said root object to said first child object. The root object also includes a pointer from the root object to every other one of the additional child objects. Each of said every other one of the additional child objects includes a pointer to one of the additional child objects to which said root object does not point.
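The pointer arrangement claimed above can be sketched as follows. This is a hedged illustration built only from the claim language; the child names C1 through C5 and all function names are hypothetical.

```python
# Sketch of the claimed skip-list variant: the root points to the first
# child and to every other additional child; each root-pointed additional
# child points to one child the root does not point to.
# All names (Node, build_skip_list, C1..C5) are hypothetical.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None        # pointer to a child the root skips

def build_skip_list(values):
    """Build root pointers over the child objects per the claim language."""
    children = [Node(v) for v in values]
    first, additional = children[0], children[1:]
    root = {"first": first, "pointers": []}
    # Root points to every other additional child (C2, C4, ...).
    for i in range(0, len(additional), 2):
        root["pointers"].append(additional[i])
        # That child points to the following (root-skipped) child, if any.
        if i + 1 < len(additional):
            additional[i].next = additional[i + 1]
    return root, children

root, children = build_skip_list(["C1", "C2", "C3", "C4", "C5"])
```

With five children, the root thus points to C1, C2 and C4, while C2 points to C3 and C4 points to C5, the children the root does not reach directly.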
Referring now to
System 50 also optionally includes a network interface 74 that connects processor 54 to a network (not shown), which in turn can connect to one or more additional persistent storage devices (not shown) that are similar in function to persistent storage device 70.
System 50 also includes volatile storage 78, which can be implemented as random access memory (“RAM”), which can be used to temporarily store applications and data as they are being used by processor 54. System 50 also includes read only memory (“ROM”) 82 which contains a basic operating system containing rudimentary programming instructions, commonly known as a Basic Input/Output System (“BIOS”), that are executable by processor 54 when system 50 is initially powered so that a higher level operating system and applications can be loaded and executed on processor 54. Collectively, one can view processor 54, volatile storage 78 and ROM 82 as a microcomputer. It should now be apparent that system 50 can be based on the structure and functionality of a commercial server such as a Sun Fire X4450 Server from Sun Microsystems Inc., of Palo Alto, USA, but it is to be stressed that this is a purely exemplary server, as system 50 (and other elements of system 50a and its variants) could also be based on any type of computing device, including those from other manufacturers.
The microcomputer implemented on system 50 is thus configured to store and execute the requisite BIOS, operating system and applications to provide the desired functionality of system 50. In particular, system 50 is configured so that a plurality of processes is executable on processor 54. In
Referring now to
It should now be understood that repeated performances of store 100 will continue to append objects to file 300. For example, assume that store 100 is executed again for a second object O2 immediately after the preceding exemplary performance of store 100 for object O1. As a result of such performance, file 300 would appear as represented in
Thus, Table I corresponds to the file 300 as shown in
The teachings herein are applicable to the storage of many types of file structures. One exemplary type of file structure is a tree structure. Building on the example of file 300 in
Referring now to
It should now be understood that store 100 and commit 200 can be performed any number of times and file 300 will grow in size, particularly in light of the fact that no delete command is utilized. For example, assume that tree structure T-1 is to be replaced by tree structure T-2 shown in
Of note is that file 300 continues to grow, but object O1, object O2 and the root marker RM in location L2 are no longer active. However, it is to be reemphasized that the “Active” status column is not expressly maintained by file 300, but is the effective result of utilizing store and commit as previously described. Indeed, the “active” status need not even be relevant depending on the intended use of file 300, since tree structure T-1 in the form of object O1, object O2 and the root marker RM in location L2 is still available on an archive basis.
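The replacement of tree structure T-1 by T-2, and the resulting implicit "active" status, can be sketched as follows. The objects O3 and O4 standing in for T-2, and all function names, are hypothetical illustration choices, not taken from the specification.

```python
# Sketch (hypothetical names): replacing a tree appends new objects and a
# new root marker; prior locations remain in the file but become inactive.
# "Active" is not stored anywhere; it is simply whatever the most recent
# root marker can reach.

file = []   # the ever-growing data file, modeled as a list of records

def store(record):
    file.append(record)
    return len(file) - 1

# Tree T-1: O2 is the root, pointing at O1.
o1 = store({"name": "O1", "children": []})
o2 = store({"name": "O2", "children": [o1]})
rm1 = store({"root_marker": o2})

# Replace T-1 with T-2: append new objects, never delete the old ones.
o3 = store({"name": "O3", "children": []})
o4 = store({"name": "O4", "children": [o3]})
rm2 = store({"root_marker": o4})

def active_locations():
    """Locations reachable from the most recent root marker."""
    root = max(i for i, r in enumerate(file) if "root_marker" in r)
    seen, stack = {root}, [file[root]["root_marker"]]
    while stack:
        loc = stack.pop()
        seen.add(loc)
        stack.extend(file[loc].get("children", []))
    return seen
```

After the replacement, only the T-2 locations are active, yet the T-1 locations remain in the file and could still be read on an archive basis.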
Referring now to
Referring now to
Referring now to
In
File system interface 302 can be based on any suitable file manager, such as the well-known file manager interface found within a Windows™ operating system from Microsoft Corporation, Redmond, Wash., USA.
B-tree interface 304, discussed further below, can be an interface that automatically manages the storage (including deletion and appending) of objects maintained in a B-tree structure on behalf of a higher level application. (While the present example is a B-tree, in other embodiments similar tree structures are contemplated such as B+-Tree, B*-Tree, binary tree, trie, and the like.)
Skip list interface 308 can be an interface that automatically manages the storage (including deletion and appending) of objects maintained in a linked list structure on behalf of a higher level application.
Likewise, store-and-forward interface 316 is an interface that automatically manages the storage of objects maintained in a store-and-forward structure. Store-and-forward interface 316 can be used to implement clustering, since with clustering changes to the local store are communicated to remote stores so that the remote stores can make the corresponding changes to their local copies as well. In this example, remoting plug-in 460 would be used in conjunction with store-and-forward interface 316. In the event the remote server being accessed by remoting plug-in 460 is temporarily inaccessible, data would be stored locally until such time as the missing resource becomes accessible again, all of which would be managed by store-and-forward interface 316. All updates for the remote node would be stored in a store-and-forward structure until they could ultimately be delivered.
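The store-and-forward behaviour just described can be sketched as follows. This is a minimal sketch under assumed semantics; the class, method and variable names are hypothetical and not taken from the specification.

```python
# Minimal sketch of store-and-forward delivery: updates for a remote store
# queue locally while the remote is unreachable and drain in order once it
# becomes reachable again. All identifiers here are hypothetical.
from collections import deque

class StoreAndForward:
    def __init__(self, send):
        self.send = send          # callable that delivers to the remote store
        self.pending = deque()    # updates held while the remote is unreachable

    def update(self, record):
        self.pending.append(record)
        self.flush()

    def flush(self):
        """Deliver queued updates in order; stop if the remote is down."""
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return            # remote unreachable; keep the update queued
            self.pending.popleft()

# Usage: updates queue while the remote is down, then drain on reconnect.
delivered, up = [], False
def send(rec):
    if not up:
        raise ConnectionError
    delivered.append(rec)

saf = StoreAndForward(send)
saf.update("u1"); saf.update("u2")   # remote down: both queued locally
up = True
saf.flush()                          # remote back: queue drains in order
```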
Geographic interface 320 manages the storage of objects that represent geographic locations, as might be implemented using a quad-tree. It should now be apparent that interfaces 300 in general manage the storage structure as that structure is utilized by higher level applications, and such interfaces 300 access store 100 accordingly. Interfaces 300 can interact with store 100 via block 105 and block 110 as per the method shown in
DAO 320, SQL 324 and application 328 represent higher level applications or interfaces which directly or indirectly utilize store 100.
Caching plug-in 402 can work transparently with and on behalf of store 100 such that certain objects handled by store 100 are temporarily stored in volatile storage 78 (or a separate volatile storage device not shown) according to given criteria. When the criteria are satisfied, those objects are actually flushed to persistent storage device 70 in the manner described above.
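The criteria-driven flush just described can be sketched as follows. The class name, the buffer-size criterion and the threshold value are all hypothetical choices for illustration only.

```python
# Hedged sketch of a write-back caching plug-in: objects accumulate in a
# volatile buffer and are flushed to the persistent file once a criterion
# (here, an assumed buffer-size threshold) is satisfied.

class CachingPlugin:
    def __init__(self, persistent_file, flush_threshold=3):
        self.persistent = persistent_file   # models persistent storage 70
        self.buffer = []                    # models volatile storage 78
        self.flush_threshold = flush_threshold

    def store(self, obj):
        self.buffer.append(obj)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        """Append buffered objects to the persistent file, then clear."""
        self.persistent.extend(self.buffer)
        self.buffer.clear()

disk = []
cache = CachingPlugin(disk, flush_threshold=3)
cache.store("O1"); cache.store("O2")   # still only in volatile storage
cache.store("O3")                      # threshold reached: flushed to disk
```

Other criteria, such as elapsed time, could be substituted without changing the overall pattern.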
Clustering plug-in 404 can work transparently with and on behalf of store 100 such that file 300 is spread across a plurality of persistent storage devices (not shown in
Partitioning plug-in 408 is similar in concept to clustering plug-in 404 and can also work transparently with and on behalf of store 100 such that portions of file 300 are stored across a multiple number of persistent storage devices 70. By way of further explanation, clustering plug-in 404 stores all data to all persistent storage devices, whereas partitioning plug-in 408 typically only stores a subset of data on each persistent storage device. Also, with clustering, updates can originate from multiple locations, whereas with partitioning, updates typically occur from one location unless there is further partitioning.
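The contrast just drawn between clustering and partitioning can be sketched as follows. The helper names and the key-based routing rule are hypothetical; the specification does not prescribe a particular routing scheme.

```python
# Sketch contrasting the two plug-ins: clustering writes every record to
# every device, while partitioning routes each record to one device, so
# each device holds only a subset of the data. Hypothetical identifiers.

devices = [[], [], []]          # three modeled persistent storage devices

def cluster_store(record):
    """Clustering: all data to all devices."""
    for device in devices:
        device.append(record)

def partition_store(key, record):
    """Partitioning: each record lands on exactly one device (assumed
    key-modulo routing, chosen here for illustration only)."""
    devices[key % len(devices)].append(record)

cluster_store("a")              # "a" appears on every device
partition_store(4, "b")         # "b" appears only on device 4 % 3 == 1
```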
Sequencing plug-in 412, which can be implemented as a variant of clustering plug-in 404, utilizes a plurality of persistent storage devices (not shown) in sequence, such that when one persistent storage device is full the file is continued on the next persistent storage device. Sequencing plug-in 412 can also be implemented so that data inserted during the same time period (e.g., during one day or week) are all stored in the same file. This implementation can make time-based retention policies simple (e.g., deleting data after 90 days), as one can simply drop whole files when the data in them expires.
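The time-based variant just described can be sketched as follows. The class and method names, and the use of date strings as period keys, are hypothetical illustration choices.

```python
# Sketch of time-based sequencing: records inserted in the same period
# share a file, so a retention policy can drop whole files once their
# period falls outside the retention window. Hypothetical identifiers.

class SequencingPlugin:
    def __init__(self):
        self.files = {}                 # period -> list of records (one file each)

    def store(self, period, record):
        self.files.setdefault(period, []).append(record)

    def expire(self, oldest_kept):
        """Drop entire files whose period is older than the retention window."""
        for period in [p for p in self.files if p < oldest_kept]:
            del self.files[period]

seq = SequencingPlugin()
seq.store("2007-12-01", "a")
seq.store("2007-12-01", "b")
seq.store("2008-03-05", "c")
seq.expire("2008-01-01")    # e.g. a 90-day policy: the whole old file is dropped
```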
Monitoring plug-in 416 can be used to assess the utilization of persistent storage device 70 (or multiples thereof) so that operations from store 100 can be moderated to accommodate any restrictions associated with a given persistent storage device.
Remoting plug-in 420 can be used where a remotely connected persistent storage device (not shown) is connected to network interface 74, in order to permit store 100 to utilize such remotely connected persistent storage devices.
It is to be reemphasized that the components relative to interfaces 300 and plug-ins 400 as introduced in
File 300 from this point will simply grow in size. For example,
To further assist in the foregoing, Table V shows the contents of file 300 after file 300 is updated from the state shown in Table IV (and as also shown in the bottom half of
Using the foregoing, it will now be understood that deletion of nodes can be effected in a similar manner, whereby file 300 simply continues to grow: various locations cease to be active and new locations in file 300 are written to, even duplicating data already present within file 300 in the event that new pointers are required.
As another example that the components relative to interfaces 300 and plug-ins 400 as introduced in
(Those skilled in the art will now recognize that skip-list SL-1 is a novel skip list: in order to be a traditional skip list, object O13 would need to point to object O14; object O15 would need to point to object O16; and object O14 would need to point to object O16. Skip list SL-1 is therefore a novel embodiment in and of itself, and can have many uses, such as journal-based storage.)
As another example that the components relative to interfaces 300 and plug-ins 400 as introduced in
As a further enhancement, it is to be understood that partition plug-in 408 can be implemented, if desired, in accordance with the teachings of co-pending and commonly-assigned U.S. patent application Ser. No. 11/693,305 filed Mar. 29, 2007, now U.S. Pat. No. 7,680,766, the contents of which are incorporated herein by reference.
As another example that the components relative to interfaces 300 and plug-ins 400 as introduced in
It should now be apparent that many other combinations, subsets and variations of interfaces 300 and plug-ins 400 across one or more physical servers are within the scope of the teachings herein. In general, it should now be understood also that combinations, subsets and variations of all of the embodiments herein are contemplated.
Indeed, the present novel system and method for storage can present some advantages in certain implementations. For example, research conducted by the inventor suggests that a properly configured implementation can provide disk access speeds of up to 1.5 million transactions per second. A still further potential advantage arises where the persistent storage is based on Flash drives: a properly configured implementation of certain embodiments can lead to substantially equal use of all memory locations in the Flash drive, or at least more equal use of those memory locations than the prior art. Since memory locations in Flash drives “burn out” after a certain number of uses, the useful life of such Flash drives can conceivably be extended. As a still further potential advantage, properly configured implementations of certain embodiments can provide databases where recording of changes to those databases is automatically effected.
In addition, the teachings herein can, in properly configured implementations, support relational, object-oriented, temporal, network (hierarchical), inverted-index (search engine), object-relational, geographic and other persistence paradigms, which can all be combined into the same database at the same time.
In addition, the teachings herein can support, in properly configured implementations, a single top-level first-in-first-out read/write queue that would suffice for an entire system. No internal synchronization would be required for read operations. This is possible because objects are never updated, only replaced, which can provide good in-memory performance.
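The reason reads need no synchronization can be sketched as follows: because records are immutable, a reader only has to capture the current root reference, and a writer publishes a change by a single reference swap. The function names and record layout are hypothetical.

```python
# Sketch (assumed design, not taken from the specification): objects are
# never updated in place, so a reader only needs the current root
# reference; writers append a replacement record and then publish it by
# swapping the root. No read locks are required.

file = [{"value": 0, "prev": None}]   # immutable, append-only records
root = 0                              # index of the current root record

def write(value):
    """Append a replacement record, then publish it by moving the root."""
    global root
    file.append({"value": value, "prev": root})
    root = len(file) - 1              # a single reference swap

def read():
    """Lock-free read: capture the root once; the record never changes."""
    return file[root]["value"]

write(1)
write(2)
```

Because superseded records keep a pointer to their predecessor, the full history of replacements also remains available.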
In addition, a compact disk representation of objects can be provided using certain properly configured embodiments of the teachings herein. As known to those skilled in the art, relational databases using fixed-size rows can be very wasteful where many rows are unused or usually contain a default value, or where large string fields are used to store only small values on average. The teachings herein can, in properly configured implementations, support storage of only non-default values, so that only the amount of string data which is actually used is stored. This can lead to significant performance and disk-space efficiency improvements.
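The compact representation just described can be sketched as follows. The field names and defaults are hypothetical, and a real implementation would emit a binary encoding rather than a dictionary.

```python
# Sketch of a compact row representation: a stored row keeps only the
# columns whose values differ from the defaults, so unused or default
# columns and unused string space cost nothing. Hypothetical field names.

DEFAULTS = {"name": "", "age": 0, "notes": ""}

def encode(row):
    """Keep only non-default values."""
    return {k: v for k, v in row.items() if v != DEFAULTS[k]}

def decode(compact):
    """Restore the full row by filling defaults back in."""
    return {**DEFAULTS, **compact}

row = {"name": "Ada", "age": 0, "notes": ""}
compact = encode(row)       # only the one non-default column survives
```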
In addition, bursty traffic can be more readily accommodated using properly configured embodiments of the teachings herein. Under short periods of heavy load, a journal garbage collector would execute at a lower priority (or not at all), thus allowing for higher peak loads than could normally be sustained.
In addition, properly configured embodiments of the teachings herein can provide large collection support and obviate the need for a separate transaction journal. This means that transactions can be easily supported without incurring the overhead normally associated with their use.
In addition, properly configured embodiments of the teachings herein can obviate the need for separate clustering channels. This means that larger clusters can be supported more efficiently. A tenfold increase over certain configurations of current clustering performance could be realized.
In addition, properly configured embodiments of the teachings herein can provide scalability: because data is never updated, only replaced, there is a reduced (and possibly zero) possibility of corruption. Properly configured embodiments can be equally suitable for databases of all sizes. For small databases, performance can approach that of completely in-memory systems, while large databases can be scaled in the same way as completely disk-based systems. This can allow all databases to be coded in the same way, without forcing the use of a different system depending on performance or size requirements.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/CA2007/002292 | 12/13/2007 | WO | 00 | 6/8/2010 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2009/073949 | 6/18/2009 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5664189 | Wilcox et al. | Sep 1997 | A |
5752243 | Reiter et al. | May 1998 | A |
5860124 | Matthews et al. | Jan 1999 | A |
7844632 | Zhou et al. | Nov 2010 | B2 |
20020194244 | Raventos | Dec 2002 | A1 |
20040001408 | Propps et al. | Jan 2004 | A1 |
20040044840 | Wong | Mar 2004 | A1 |
20060136500 | Hua et al. | Jun 2006 | A1 |
20060173956 | Ulrich et al. | Aug 2006 | A1 |
20070033375 | Sinclair et al. | Feb 2007 | A1 |
20070156998 | Gorobets | Jul 2007 | A1 |
20070186032 | Sinclair et al. | Aug 2007 | A1 |
20100110935 | Tamassia et al. | May 2010 | A1 |
20100223423 | Sinclair et al. | Sep 2010 | A1 |
Number | Date | Country |
---|---|---|
2007019175 | Feb 2007 | WO |
Entry |
---|
European Patent Application No. 07 85 5574 Search Report dated Jan. 10, 2011. |
Number | Date | Country | |
---|---|---|---|
20100274829 A1 | Oct 2010 | US |