The subject matter described herein relates to a versioned, insert-only hash table that supports concurrent reader and writer access for in-memory columnar stores.
With some columnar in-memory data stores, column values can be dictionary compressed. With such compression, each distinct value in a column is mapped to a unique integer value; this mapping is one-to-one. These integer values are sometimes referred to as value IDs, or vids as shorthand for value identifiers. Associated with each column is a vector of these vids, which can be referred to as a column data array or an index vector. For storage efficiency, the vids in the vector can be packed so that each position in the vector is logically only n bits wide, where n is the number of bits needed to represent the highest vid. For example, if n is equal to 2, the vids for the first 32 rows of the column can be stored in the first 64 bits of the index vector.
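As a minimal sketch of such bit packing (not part of the description above; the class and member names are hypothetical, and the sketch assumes n divides 64 evenly so that no vid straddles a word boundary), an n-bit index vector might look roughly as follows:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical n-bit packed index vector; assumes n divides 64 evenly.
class IndexVector {
public:
    explicit IndexVector(unsigned bitsPerVid) : n_(bitsPerVid) {}

    void push_back(std::uint64_t vid) {
        const unsigned perWord = 64 / n_;
        if (size_ % perWord == 0) words_.push_back(0);           // start a new 64-bit word
        const unsigned slot = static_cast<unsigned>(size_ % perWord);
        words_.back() |= (vid & mask()) << (slot * n_);          // pack the vid into its n-bit slot
        ++size_;
    }

    std::uint64_t get(std::size_t row) const {
        const unsigned perWord = 64 / n_;
        const std::uint64_t word = words_[row / perWord];
        return (word >> ((row % perWord) * n_)) & mask();
    }

private:
    std::uint64_t mask() const { return (1ULL << n_) - 1; }
    unsigned n_;
    std::size_t size_ = 0;
    std::vector<std::uint64_t> words_;
};

int main() {
    IndexVector iv(2);                          // n = 2: the first word holds 32 vids
    for (std::uint64_t vid : {1, 2, 1, 3}) iv.push_back(vid);
    std::cout << iv.get(3) << "\n";             // prints 3
}
```

With n equal to 2, the first 64-bit word holds the vids of the first 32 rows, matching the example above.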
A hash table maps values of one domain (e.g., strings, etc.) to values in another, possibly different domain (e.g., integers, etc.). Consider a column of type string and a hash table mapping string values to vids. Assume the first value inserted into this column is “hello”. This value can be identified within the column with vid 1. Assume the next value inserted is “hello world”. This new value will have a vid of 2. To keep track of these mappings, a hash table is used in which the keys are of type string and the values are of type integer. This hash table can be used to determine, when a string is being inserted into the column, whether a vid has already been assigned to it.
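For illustration only, the following sketch shows this vid assignment using std::unordered_map as a stand-in for the concurrent hash table described later; the function name getOrAssignVid is hypothetical:

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Return the existing vid for a value, or assign the next free vid to it.
std::uint32_t getOrAssignVid(std::unordered_map<std::string, std::uint32_t>& dictionary,
                             const std::string& value) {
    auto it = dictionary.find(value);
    if (it != dictionary.end()) return it->second;                               // already mapped
    const std::uint32_t vid = static_cast<std::uint32_t>(dictionary.size()) + 1; // next free vid
    dictionary.emplace(value, vid);
    return vid;
}

int main() {
    std::unordered_map<std::string, std::uint32_t> dictionary;
    std::cout << getOrAssignVid(dictionary, "hello") << "\n";        // 1
    std::cout << getOrAssignVid(dictionary, "hello world") << "\n";  // 2
    std::cout << getOrAssignVid(dictionary, "hello") << "\n";        // 1 again
}
```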
Hash tables are often used for certain operations such as recovery and for specialized columns that do not require sorting as is provided by column dictionaries. For example, hash tables can be used for each delta column to keep track of the top-N most common values in the column, where N is typically a small value (e.g., the top 10). Regardless, when hash tables are used, both readers and writers need to concurrently access the hash table.
In one aspect, at least one read operation is performed concurrently with at least one write operation, each write operation inserting a key/value pair into a backing array of a backing hash table of a hash table forming part of a columnar in-memory database. The backing array maps a plurality of pointers each to a respective bucket. Each bucket includes at least one state bit and a hashed value of a corresponding key. Thereafter, for each write operation, a first available position in the backing array at which a pointer to a new bucket containing the key/value pair can be inserted is iteratively determined (such that each first available position has no corresponding pre-existing pointer). Subsequently, for each write operation, the pointer to the new bucket containing the key/value pair is inserted at the corresponding first determined position in the backing array.
In some implementations, a hash function can be applied to the key. A first position in the backing array can be identified by applying a modulo operation, using the size of the backing array, to the result of the hash function. It can be checked whether there is already a pointer at a specified position. A new bucket can be created if there is not already a pointer at the specified position. The new bucket can encapsulate at least one state bit indicating that the bucket is not overflown, and the key/value pair. Further, an iterative process can be implemented that includes (i) marking the at least one state bit of the bucket corresponding to the specified pointer at the most recent position as overflown if there is already a pointer at the specified position and (ii) identifying a different position in the backing array as an alternative to the last specified position, until such time that a position is identified that does not have a pointer. Identifying a different position in the backing array can use a compare-and-swap (CAS) technique to attempt to establish the pointer in the different position. It can be checked whether there is already a pointer at the different position. If not, a new bucket can be created that encapsulates (i) at least one state bit indicating that the bucket is not overflown, and (ii) the key/value pair. The iterative identification and marking can continue for a pre-determined number of times (i.e., cycles). A size of the backing array can be increased at such time that the pre-determined number of times is exceeded.
It can be determined, for at least one write operation, that the key is already in the hash table. In such cases, a caller of the write operation can be notified that the key is already in the hash table.
The write operations can include executing a write functor. A semaphore can be associated with each hash table. The semaphore can be assigned to a single write operation. The semaphore can be released after execution of the write functor. Subsequent write operations can wait for the semaphore assigned to an earlier write operation to be released. A write functor of a subsequent write operation can be executed on the backing array once the write operation receives the semaphore.
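A minimal sketch of this pattern is shown below, assuming std::binary_semaphore as the exclusion primitive and using hypothetical names; the fuller scheme described later only requires the semaphore for writers that produce structural changes, so this is a simplification:

```cpp
#include <functional>
#include <iostream>
#include <semaphore>

// Hypothetical hash table stub: one binary semaphore is associated with each table.
struct HashTableStub {
    std::binary_semaphore writeSemaphore{1};   // assigned to a single write operation at a time
};

using WriteFunctor = std::function<void()>;    // the callable that performs the actual insert

void serializedWrite(HashTableStub& ht, const WriteFunctor& writeFunctor) {
    ht.writeSemaphore.acquire();   // subsequent write operations wait here
    writeFunctor();                // execute the write functor
    ht.writeSemaphore.release();   // released after execution, waking waiting writers
}

int main() {
    HashTableStub ht;
    serializedWrite(ht, [] { std::cout << "insert key/value pair\n"; });
}
```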
A reader operation can access a backing hash table that is not deallocated while the reader operation accesses the backing hash table. Such backing hash table can be maintained while the reader operation continues to access the backing hash table even if a new backing hash table is established by a writer operation.
Non-transitory computer program products (i.e., physically embodied computer program products) are also described that store instructions, which when executed by one or more data processors of one or more computing systems, cause at least one data processor to perform operations herein. Similarly, computer systems are also described that may include one or more data processors and memory coupled to the one or more data processors. The memory may temporarily or permanently store instructions that cause at least one processor to perform one or more of the operations described herein. In addition, methods can be implemented by one or more data processors either within a single computing system or distributed among two or more computing systems. Such computing systems can be connected and can exchange data and/or commands or other instructions or the like via one or more connections, including but not limited to a connection over a network (e.g. the Internet, a wireless wide area network, a local area network, a wide area network, a wired network, or the like), via a direct connection between one or more of the multiple computing systems, etc.
The subject matter described herein provides many technical advantages. For example, the current subject matter provides an efficiently accessed hash table that can be concurrently accessed by both writer and reader operations.
The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
The current subject matter includes a number of aspects that can be applied individually or in combinations of one or more such aspects to support a unified database table approach that integrates the performance advantages of in-memory database approaches with the reduced storage costs of on-disk database approaches. The current subject matter can be implemented in database systems using in-memory OLAP, for example including databases sized at several terabytes (or more), tables with billions (or more) of rows, and the like; systems using in-memory OLTP (e.g. an enterprise resource planning or ERP system or the like), for example in databases sized at several terabytes (or more) with high transactional volumes; and systems using on-disk OLAP (e.g. “big data,” analytics servers for advanced analytics, data warehousing, business intelligence environments, or the like), for example databases sized at several petabytes or even more, tables with up to trillions of rows, and the like.
The current subject matter can be implemented as a core software platform of an enterprise resource planning (ERP) system, other business software architecture, or other data-intensive computing application or software architecture that runs on one or more processors that are under the control of a specific organization. This arrangement can be very effective for a large-scale organization that has very sophisticated in-house information technology (IT) staff and for whom a sizable capital investment in computing hardware and consulting services required to customize a commercially available business software solution to work with organization-specific business processes and functions is feasible.
A database management agent 160 or other comparable functionality can access a database management system 170 that stores and provides access to data (e.g. definitions of business scenarios, business processes, and one or more business configurations as well as data, metadata, master data, etc. relating to definitions of the business scenarios, business processes, and one or more business configurations, and/or concrete instances of data objects and/or business objects that are relevant to a specific instance of a business scenario or a business process, and the like). The database management system 170 can include at least one table 180 and additionally include parallelization features consistent with those described herein.
To achieve a best possible compression and also to support very large data tables, a main part of the table can be divided into one or more fragments.
Fragments 330 can advantageously be sufficiently large to gain maximum performance due to optimized compression of the fragment and high in-memory performance of aggregations and scans. Conversely, such fragments can be sufficiently small to load a largest column of any given fragment into memory and to sort the fragment in-memory. Fragments can also be sufficiently small to be able to coalesce two or more partially empty fragments into a smaller number of fragments. As an illustrative and non-limiting example of this aspect, a fragment can contain one billion rows with a maximum of 100 GB of data per column. Other fragment sizes are also within the scope of the current subject matter. A fragment can optionally include a chain of pages. In some implementations, a column can also include a chain of pages. Column data can be compressed, for example using a dictionary and/or any other compression method. Table fragments can be materialized in-memory in contiguous address spaces for maximum performance. All fragments of the database can be stored on-disk, and access to these fragments can be made based on an analysis of the data access requirement of a query.
Referring again to
Also as shown in
A single RowID space can be used across pages in a page chain. A RowID, which generally refers to a logical row in the database, can be used to refer to a logical row in an in-memory portion of the database and also to a physical row in an on-disk portion of the database. A row index typically refers to a physical 0-based index of rows in the table. A 0-based index can be used to physically address rows in a contiguous array, where logical RowIDs represent logical order, not physical location of the rows. In some in-memory database systems, a physical identifier for a data record position can be referred to as a UDIV or DocID. Distinct from a logical RowID, the UDIV or DocID (or a comparable parameter) can indicate a physical position of a row (e.g. a data record), whereas the RowID indicates a logical position. To allow a partition of a table to have a single RowID and row index space consistent with implementations of the current subject matter, a RowID can be assigned a monotonically increasing ID for newly-inserted records and for new versions of updated records across fragments. In other words, updating a record will change its RowID, for example, because an update is effectively a deletion of an old record (having a RowID) and insertion of a new record (having a new RowID). Using this approach, a delta store of a table can be sorted by RowID, which can be used for optimizations of access paths. Separate physical table entities can be stored per partition, and these separate physical table entities can be joined on a query level into a logical table.
When an optimized compression is performed during a columnar merge operation to add changes recorded in the delta store to the main store, the rows in the table are generally re-sorted. In other words, the rows after a merge operation are typically no longer ordered by their physical row ID. Therefore, a stable row identifier can be used consistent with one or more implementations of the current subject matter. The stable row identifier can optionally be a logical RowID. Use of a stable, logical (as opposed to physical) RowID can allow rows to be addressed in REDO/UNDO entries in a write-ahead log and transaction undo log. Additionally, cursors that are stable across merges without holding references to the old main version of the database can be facilitated in this manner. To enable these features, a mapping of an in-memory logical RowID to a physical row index and vice versa can be stored. In some implementations of the current subject matter, a RowID column can be added to each table. The RowID column can also be amenable to being compressed in some implementations of the current subject matter.
A RowID index 506 can serve as a search structure to allow a page 504 to be found based on a given interval of RowID values. The search time can be on the order of log n, where n is very small. The RowID index can provide fast access to data via RowID values. For optimization, “new” pages can have a 1:1 association between RowID and row index, so that simple math (no lookup) operations are possible. Only pages that are reorganized by a merge process need a RowID index in at least some implementations of the current subject matter.
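A minimal sketch of such a search structure, assuming each index entry records the first RowID stored on a page (the type and function names are hypothetical), could be:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical RowID index entry: the first RowID stored on a page.
struct PageRef {
    std::uint64_t firstRowId;
    int pageNumber;
};

// Binary search (O(log n)) for the page whose RowID interval covers rowId.
int findPage(const std::vector<PageRef>& index, std::uint64_t rowId) {
    auto it = std::upper_bound(index.begin(), index.end(), rowId,
        [](std::uint64_t id, const PageRef& page) { return id < page.firstRowId; });
    if (it == index.begin()) return -1;        // rowId precedes the first page
    return (it - 1)->pageNumber;               // last page whose first RowID <= rowId
}

int main() {
    std::vector<PageRef> index = {{1, 0}, {1001, 1}, {2001, 2}};
    std::cout << findPage(index, 1500) << "\n";   // prints 1
}
```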
Functional block diagram 700 also illustrates a read operation 720. Generally, read operations can have access to all fragments (i.e., active fragment 712 and closed fragments 716). Read operations can be optimized by loading only those fragments that contain data relevant to a particular query. Fragments that do not contain such data can be excluded. In order to make this decision, container-level metadata (e.g., a minimum value and/or a maximum value) can be stored for each fragment. This metadata can be compared to the query to determine whether a fragment contains the requested data.
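As an illustrative sketch (hypothetical names; a simple range predicate is assumed), fragment pruning based on per-fragment minimum/maximum metadata might look like:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical container-level metadata kept per fragment.
struct FragmentMeta {
    std::int64_t minValue;
    std::int64_t maxValue;
    int id;
};

// Return only fragments whose [min, max] interval overlaps the queried range.
std::vector<int> fragmentsToLoad(const std::vector<FragmentMeta>& fragments,
                                 std::int64_t lo, std::int64_t hi) {
    std::vector<int> relevant;
    for (const auto& fragment : fragments)
        if (fragment.maxValue >= lo && fragment.minValue <= hi)
            relevant.push_back(fragment.id);
    return relevant;
}

int main() {
    std::vector<FragmentMeta> fragments = {{0, 99, 0}, {100, 199, 1}, {200, 299, 2}};
    for (int id : fragmentsToLoad(fragments, 150, 250)) std::cout << id << " ";  // 1 2
    std::cout << "\n";
}
```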
With reference to diagram 800 of
The heuristic used to decide the size of the new backing array 830 of a new backing hash table 820 can be changed on a per-hash-table-instance basis. The new backing hash table can be configured to always have a backing array with more capacity than the one it is replacing (as the delta store is insert-only and is only being appended to), and the decision of how big the new backing array will be can be based on the current size of the backing array 830 as well as the max overflow count that will be used in the new backing hash table.
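The concrete growth policy is implementation-specific; the following sketch uses an assumed doubling factor plus headroom for the max overflow count, with hypothetical names, purely for illustration:

```cpp
#include <cstddef>
#include <iostream>

// Assumed policy, for illustration only: double the current capacity and add
// headroom proportional to the max overflow count of the new backing hash table.
std::size_t newBackingArraySize(std::size_t currentSize, unsigned newMaxOverflowCount) {
    return currentSize * 2 + newMaxOverflowCount;
}

int main() {
    std::cout << newBackingArraySize(1024, 5) << "\n";   // 2053
}
```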
As there can be concurrent readers and writers accessing the hash table 810 while the new backing hash table 820 is being established, such readers and writers need to be prevented from accessing an old backing hash table 820 that has been de-allocated (as this would cause data corruption or a crash).
In order to allow concurrent writers and readers, the database 170 can utilize versioned data structures. The database 170 can use a garbage collector (GC) mechanism that controls access and changes to the versioned data structures such as the hash table 810. The GC can keep a counter of the number of modifications done to any of the versioned data structures it is in charge of (there is only one counter irrespective of the number of versioned data structures; there can be a GC per table). The GC only needs to know of structural changes. In the case of the hash table, the only structural change is the establishment of a new backing hash table. Insertion of a new key/value pair when there is no need to establish a new backing hash table is not considered a structural change. Hence, it is necessary for writers that make structural changes to notify the GC. Each structural change in any of the versioned data structures the GC controls causes the modification counter to be incremented by one. The actual mechanism for notification of structural changes consists of physically giving the GC the old backing hash table 820 that is being replaced. The GC then increments the modification counter and will only destroy the old backing hash table 820 when it is sure there are no readers that may potentially access it.
Before readers access the hash table 810, they can obtain a handle from the GC. The GC keeps track of the value of the modification counter at the time each reader obtained its handle. When a reader is done accessing the hash table 810, it destroys the handle that was provided by the GC. This destruction of the reader handle triggers a check by the GC, which determines whether any readers remain whose handles were obtained at a modification counter less than or equal to that of the handle being destroyed. If there are none, the GC can destroy all backing hash tables that were handed to it for destruction while its modification counter was less than or equal to the counter at the time of creation of the reader handle being destroyed.
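A simplified, mutex-serialized sketch of this garbage-collection scheme (hypothetical names; a real implementation would be finer grained) is:

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <memory>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical backing hash table stand-in; destruction is what the GC defers.
struct BackingTable {
    explicit BackingTable(int i) : id(i) {}
    ~BackingTable() { std::cout << "destroying old backing table " << id << "\n"; }
    int id;
};

class GarbageCollector {
public:
    // Readers obtain a handle; the GC remembers the counter value at that time.
    std::uint64_t acquireReaderHandle() {
        std::lock_guard<std::mutex> guard(mutex_);
        ++readersAt_[counter_];
        return counter_;
    }

    // Destroying a reader handle triggers a collection check.
    void releaseReaderHandle(std::uint64_t handleCounter) {
        std::lock_guard<std::mutex> guard(mutex_);
        if (--readersAt_[handleCounter] == 0) readersAt_.erase(handleCounter);
        collect();
    }

    // Writers notify structural changes by handing over the replaced backing table.
    void retire(std::unique_ptr<BackingTable> oldTable) {
        std::lock_guard<std::mutex> guard(mutex_);
        ++counter_;                                          // one increment per structural change
        retired_.emplace_back(counter_, std::move(oldTable));
        collect();
    }

private:
    void collect() {
        // Oldest handle still outstanding; if none, everything retired so far is safe.
        std::uint64_t oldest = readersAt_.empty() ? counter_ + 1 : readersAt_.begin()->first;
        // Destroy tables that were replaced before the oldest remaining handle was taken.
        while (!retired_.empty() && retired_.front().first < oldest)
            retired_.erase(retired_.begin());
    }

    std::mutex mutex_;
    std::uint64_t counter_ = 0;                                  // modification counter
    std::map<std::uint64_t, int> readersAt_;                     // counter value -> reader count
    std::vector<std::pair<std::uint64_t, std::unique_ptr<BackingTable>>> retired_;
};

int main() {
    GarbageCollector gc;
    std::uint64_t handle = gc.acquireReaderHandle();   // reader starts at counter 0
    gc.retire(std::make_unique<BackingTable>(1));      // structural change: counter becomes 1
    gc.releaseReaderHandle(handle);                    // last old reader gone: table 1 destroyed
}
```

Note that this sketch runs the collection check both when a table is retired and when a reader handle is destroyed; the description above only requires the latter.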
There is no guarantee, for either readers or writers, that they will be accessing the most recent backing hash table 820. For readers this is typically not an issue. The reason is that in databases these read accesses are related to a transaction being executed (for example, a select on a table). Rows that have been inserted after the read started should not be visible to the reader transaction anyway. On the other hand, readers may see data that has been inserted into the hash table after they started their reads. In this case, there are MVCC (multi-version concurrency control) mechanisms in upper layers that filter out from the results of the query any vids for rows that the transaction should not see.
When a write produces a structural change, the following steps can be taken, with reference to diagram 900 of. First, at 910, the writer can obtain the semaphore associated with the hash table 810, waiting on it if another writer producing structural changes currently holds it.
Second, at 920, the writer can allocate a backing array 830 of a new backing hash table and populate it with content from the backing array of the old backing hash table 820. This copying of the contents from the old backing hash table 820 may imply storing key/value pairs in buckets at positions in the new backing array 830 different from the positions in the backing array of the old backing hash table. Notice that other writers that did not need to establish a new backing hash table may still be operating on the old backing hash table 820, and they may be inserting at positions that this structural writer has already processed when copying into the backing array 830 of the new backing hash table 820. It must be guaranteed that these writes are not lost.
Next, at 930, the writer can replace the old backing hash table 820 with a new backing hash table and give the old backing hash table 820 to garbage collection. The writer can then, at 940, release its previously obtained semaphore. Writers waiting on the semaphore can then be awoken. The writer can then, at 950, execute (i.e., invoke, etc.) its write functor.
A writer that does not need to establish a new backing hash table can just execute its write functor. The execution of the write functor itself is as follows (and as illustrated in diagram 1000 of
First, at 1010, the write operation can be executed on the old backing hash table 820. Thereafter, at 1020, the writer waits on the semaphore of any writer that may be producing structural changes (i.e., establishing a new backing hash table 820).
Next, at 1030, any waiting writers may be awakened. Awakened in this regard refers to notifying writers that the semaphore has been released, whether such writers are seeking only to execute a functor or are seeking to produce structural changes (and thus require the semaphore). Thereafter, at 1040, the writer checks the current backing array; if the backing array matches the old backing array the writer acted upon, the write is considered to be completed. If there were structural changes, the process starts again with execution of the write operation on the new (i.e., current, etc.) backing array 830 (to guarantee that the writer's insertions in the hash table 810 are not lost).
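A simplified sketch of this loop, assuming std::binary_semaphore and hypothetical names (and, unlike the fuller scheme, letting the non-structural writer briefly acquire and immediately release the semaphore rather than merely waiting on it), is:

```cpp
#include <atomic>
#include <functional>
#include <semaphore>

struct BackingArrayStub {};                            // hypothetical backing array placeholder

struct HashTableStub {
    std::atomic<BackingArrayStub*> current{nullptr};   // currently established backing array
    std::binary_semaphore structural{1};               // held by a writer making structural changes
};

// Execute a non-structural write, replaying it if the backing array was swapped.
void nonStructuralWrite(HashTableStub& ht,
                        const std::function<void(BackingArrayStub*)>& writeFunctor) {
    for (;;) {
        BackingArrayStub* seen = ht.current.load();
        writeFunctor(seen);               // 1010: execute the write on the array we saw
        ht.structural.acquire();          // 1020: wait for any structural writer to finish
        ht.structural.release();          // 1030: immediately let other waiting writers proceed
        if (ht.current.load() == seen)    // 1040: no structural change, so the write is complete
            return;
        // The backing array was replaced: replay the functor so the insertion is not lost.
    }
}

int main() {
    HashTableStub ht;
    BackingArrayStub array;
    ht.current = &array;
    nonStructuralWrite(ht, [](BackingArrayStub*) { /* insert the key/value pair here */ });
}
```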
If a first writer producing structural changes cannot immediately obtain the semaphore and has to wait for it, it can be checked whether another writer established, during that waiting period, a new backing array 830 that has enough capacity and a sufficient max overflow count, in which case no further backing array is needed for the first writer. In such a case, the first writer would then execute its functor using the established new backing array 830.
It will be appreciated that the semaphore is only one example and that other types of exclusion mechanisms can be used such as mutexes (locks) and spinlocks. For example, an exclusion mechanism can ensure that only one writer is performing structural changes to a backing array at a given time. Furthermore, the exclusion mechanism can be implemented to allow other writers interested in making structural changes to the backing array to wait for the structural writer owning the exclusion mechanism. Once the structural writer owning the exclusion mechanism releases the mechanism, only one waiting structural writer can assume ownership of the mechanism. Other structural writers can keep waiting until they obtain ownership of the exclusion mechanism. Still further, a writer can query the mechanism to either obtain ownership of the exclusion mechanism (if it is available) or to be put to sleep until the exclusion mechanism becomes available to such writer (e.g., after the exclusion mechanism is released by a writer making structural changes, etc.). In addition, the exclusion mechanism can allow non-structural writers to wait on the exclusion mechanism for the structural writer that owns the mechanism to release it. It should be noted that non-structural writers do not assume ownership of the mechanism, they simply do not wait anymore once the structural writer releases the exclusion mechanism.
One type of read access is provided. Given a key (e.g., “hello”), the associated value identifier (e.g., “1”) can be returned if the given key is in the hash table. Otherwise, an invalid value provided at hash table creation is returned to indicate that the key was not found.
Referring again to
Besides the bucket state, the key (e.g., “hello”) and the associated value (e.g., 1) can also be stored in the bucket 850. Pointers 840 to buckets 850 can be provided in the backing array 830 of the backing hash table 820 (rather than the buckets themselves) so that, when writing into a position of the backing array 830 of the backing hash table 820, readers can either see an empty position or a used position. This arrangement requires an atomic read and an atomic write into the position. These atomic reads and writes are typically only guaranteed when the amount of data written or read matches the processor's word size (generally 64 bits), and the buckets themselves are larger than 64 bits.
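A sketch of such a layout, with hypothetical names and the overflown flag assumed to occupy the top bit of a 64-bit state word alongside 63 bits of the key hash, could be:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical bucket layout: one 64-bit state word packs the overflown flag
// (top bit) and 63 bits of the key's hash, next to the key and its value (vid).
struct Bucket {
    std::uint64_t state;
    std::string   key;      // e.g. "hello"
    std::uint32_t value;    // e.g. vid 1

    static std::uint64_t makeState(std::uint64_t keyHash, bool overflown) {
        return (keyHash & ~(1ULL << 63)) | (overflown ? (1ULL << 63) : 0);
    }
    bool overflown() const { return (state >> 63) != 0; }
    std::uint64_t hashBits() const { return state & ~(1ULL << 63); }
};

// The backing array holds 64-bit bucket pointers rather than buckets, so a
// single atomic load or store lets a reader see either an empty or a used slot.
struct BackingHashTable {
    std::vector<std::atomic<Bucket*>> slots;
    explicit BackingHashTable(std::size_t capacity) : slots(capacity) {
        for (auto& slot : slots) slot.store(nullptr);
    }
};

int main() {
    BackingHashTable table(1024);
    Bucket bucket{Bucket::makeState(0x12345, false), "hello", 1};
    table.slots[42].store(&bucket);
    return table.slots[42].load()->value == 1 ? 0 : 1;
}
```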
To insert into the bucket 850, a hash function can first be applied to the key being inserted. This results in, for example, a 64-bit number that can be used to identify the position in the backing array 830 where the bucket 850 that will contain the key/value pair will be. The result of the hash function modulo the size of the backing array 830 can be used to identify this position. Once the position has been identified, it can be checked whether there is already a bucket pointer 840. If not, a bucket 850 can be created and populated with the key/value pair being inserted. In addition, the bucket state can be set accordingly (i.e., not overflown, and the key hash set to the first 63 bits of the hashed key value, etc.).
Using compare and swap (CAS) (i.e., an atomic instruction used in multithreading to achieve synchronization by comparing the contents of a memory location to a given value and, only if they are the same, modifying the contents of that memory location to a given new value), an attempt can be made to establish the pointer at that position. If the pointer can be established at that position, no further action is required (except, in some cases, to ensure that there are no in-flight resizes of the backing array). If the pointer cannot be established at that position (because of an earlier write at such position), the bucket 850 of the writer that earlier established such position can be marked as overflowed. Next, an operation can be applied (e.g., adding a fixed number, currently 31) to look for the next bucket to use. If this position is empty, a compare and swap operation can be performed for the bucket. If this position is already taken, again the bucket 850 in that position can be marked as overflowed and again, an operation (e.g., adding the fixed number 31) can be performed to get the next bucket. If the capacity of the backing array 830 is exceeded, the backing array's size can be subtracted from the position and a new position in the backing array 830 can be sought.
The attempt to find an empty position in the backing array 830 for the new bucket 850 can be limited to a maximum number of times (the max overflow count) until such time that it is determined that a new backing hash table 820 with a bigger backing array 830 is required (the initial max overflow count can be an optional parameter that the creator of the hash table provides when the hash table is first created; the default is 4). The technique of adding 31 (or another number) when a position is already used and resizing after so many tries can be referred to as an open addressing hash table.
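Putting the insertion steps together, a simplified single-file sketch (hypothetical names, with simplified duplicate detection and memory management) might look like the following:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct Bucket {
    std::atomic<std::uint64_t> state;   // top bit: overflown; lower 63 bits: key hash
    std::string key;
    std::uint32_t value;
};

struct BackingHashTable {
    std::vector<std::atomic<Bucket*>> slots;
    unsigned maxOverflowCount;
    BackingHashTable(std::size_t capacity, unsigned overflow)
        : slots(capacity), maxOverflowCount(overflow) {
        for (auto& slot : slots) slot.store(nullptr);
    }
};

enum class InsertResult { Inserted, AlreadyPresent, NeedsResize };

InsertResult insert(BackingHashTable& table, const std::string& key, std::uint32_t value) {
    const std::uint64_t hash = std::hash<std::string>{}(key);
    std::size_t pos = hash % table.slots.size();
    Bucket* fresh = new Bucket{{hash & ~(1ULL << 63)}, key, value};

    for (unsigned tries = 0; tries <= table.maxOverflowCount; ++tries) {
        Bucket* expected = nullptr;
        if (table.slots[pos].compare_exchange_strong(expected, fresh))
            return InsertResult::Inserted;              // claimed an empty slot atomically
        if (expected->key == key) {                     // key already has a value assigned
            delete fresh;
            return InsertResult::AlreadyPresent;
        }
        expected->state.fetch_or(1ULL << 63);           // mark the occupant as overflown
        pos = (pos + 31) % table.slots.size();          // fixed probe stride, wrapping around
    }
    delete fresh;
    return InsertResult::NeedsResize;                   // caller establishes a bigger backing table
}

int main() {
    BackingHashTable table(64, 4);                      // capacity 64, max overflow count 4
    insert(table, "hello", 1);
    insert(table, "hello world", 2);
    return insert(table, "hello", 1) == InsertResult::AlreadyPresent ? 0 : 1;
}
```

The fixed stride of 31 and the default max overflow count of 4 are taken from the description above; everything else in the sketch is illustrative.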
If it is determined that a particular key being inserted is already in the hash table (i.e., the backing array 830), the caller of the insert operation can simply be informed that the key is already in the hash table.
When looking for the value associated with a given key, the hash function can be applied to the key and a modulo operation with the backing array's size can be applied to identify the position where the pointer 840 to the bucket 850 containing the value should be. If there is no pointer 840 to a bucket 850 at that position, the process can be terminated because the key is not in the hash table. If there is a valid bucket pointer 840, the hashed value of the key being sought can be checked against the hashed key value in the bucket state (e.g., the first 63 bits can be checked). If there is a match, the given key can be compared with the key in the bucket 850. If there is a match, the search has successfully completed since the key has been found, and the value associated with the key in the bucket 850 can be returned. If the hashed key values do not match or the keys do not match, the bucket state can be checked to see if it is overflown. If not, then the key is not in the hash table. If it is overflown, then an integer can be added (e.g., 31) and the lookup attempt can be repeated for the bucket 850 at the resulting position. The search for a new bucket 850 can be terminated after a pre-defined number of positions is checked (this can be determined by the max overflow count associated with the backing hash table 820). The number of tries can be the same number that is used when inserting and deciding whether to establish a new backing hash table 820.
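A corresponding sketch of the lookup path (hypothetical names, consistent with the insertion sketch above) is:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <string>
#include <vector>

struct Bucket {
    std::uint64_t state;    // top bit: overflown; lower 63 bits: key hash
    std::string key;
    std::uint32_t value;
};

std::optional<std::uint32_t> lookup(const std::vector<std::atomic<Bucket*>>& slots,
                                    unsigned maxOverflowCount, const std::string& key) {
    const std::uint64_t hash = std::hash<std::string>{}(key);
    const std::uint64_t hashBits = hash & ~(1ULL << 63);
    std::size_t pos = hash % slots.size();
    for (unsigned tries = 0; tries <= maxOverflowCount; ++tries) {
        const Bucket* bucket = slots[pos].load();
        if (bucket == nullptr) return std::nullopt;               // empty slot: key not present
        if ((bucket->state & ~(1ULL << 63)) == hashBits && bucket->key == key)
            return bucket->value;                                 // hash bits and key both match
        if ((bucket->state >> 63) == 0) return std::nullopt;      // not overflown: chain ends here
        pos = (pos + 31) % slots.size();                          // same stride as on insertion
    }
    return std::nullopt;                                          // probe limit reached
}

int main() {
    std::vector<std::atomic<Bucket*>> slots(16);
    for (auto& slot : slots) slot.store(nullptr);
    const std::uint64_t h = std::hash<std::string>{}("hello");
    Bucket bucket{h & ~(1ULL << 63), "hello", 1};
    slots[h % slots.size()].store(&bucket);
    return lookup(slots, 4, "hello").value_or(0) == 1 ? 0 : 1;
}
```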
As noted above, it is desirable to have concurrent readers and writers. For readers if a key is not found, it is not guaranteed that the key was not added by a concurrent writer while the read was executing. For writers, it must be guaranteed that insertions into the hash table are not lost.
Writer functionality can be encapsulated into “write functors”. Write functors can be encoded in functional units that can be repeatedly invoked by writers. Write functors can ensure that no write operations are lost.
A write functor can be provided to insert a key value pair into the hash table.
With reference to diagram 1000 of. First, at 1010, the writer can execute its write functor against the current backing hash table in an attempt to insert the key/value pair.
If, at 1020, the writer's functor succeeded and this was a new key, the writer can wait on the semaphore for any writer that may be producing structural changes (i.e., establishing a new backing array).
If, at 1030, there were structural changes, the writer can execute its functor again (this guarantees the insertions by the writer into the hash table are not lost).
If, at 1040, the insert into the hash table was not successful, the writer can attempt to obtain the semaphore (in order to attempt to establish a new backing array). If the writer had to wait for the semaphore, the writer can execute its functor once again. If the writer immediately obtained the semaphore (i.e. there were no other writers trying to establish a new backing array), a new, larger backing hash table can be allocated (e.g., a backing array X% larger than the previous one, etc.)
The writer can then, at 1050, re-insert each key/value pair found in the old backing array into the new backing array because bucket positions will change.
Thereafter, at 1060, a new backing hash table can be established (i.e., at this point this new backing hash table becomes the hash table's backing hash table), the old backing hash table can be given to the garbage collector, a signal can be given to writers waiting for the semaphore, and the process can be repeated from 1010. If the key/value pair still cannot be inserted after a number of resizes, then a new backing hash table 820 can have a max overflow count that corresponds to 1 plus the previous backing hash table's max overflow count.
One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.
To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive trackpads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like.
In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible.
The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.