1. Technical Field
The present invention relates to an improved data processing system. In particular, the present invention relates to a cached object returned from a database query. Still more particularly, the present invention relates to a context based cache infrastructure that enables a subset query over a cached object returned from a database query in a data processing system.
2. Description of Related Art
In the current Enterprise JavaBeans™ (EJB) specification, lifecycle methods are provided for managing an entity bean's lifecycle. Examples of lifecycle methods include ejbCreate, which manages the creation of entity beans; ejbStore, which manages updates of entity beans; and ejbRemove, which manages removal of entity beans. An entity bean is an enterprise JavaBean™ that has a physical data representation in a data store, for example, a row in a relational database table. Enterprise JavaBeans™ is part of the Java™ 2 Platform, Enterprise Edition (J2EE), available from Sun Microsystems, Inc.
In addition to lifecycle methods, the Enterprise JavaBeans™ specification provides ejbFind and ejbSelect methods to query entity beans that satisfy a search condition. For applications that seldom update their data, it is more efficient to cache the data locally rather than to query the database for each request, since database queries affect application performance.
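As an illustration only, the sketch below shows how such finder methods might appear on a hypothetical home interface for a "catalog" entity bean; the interface name and finder signatures are assumptions for this example and are not taken from the EJB specification or from any particular application.

```java
import java.rmi.RemoteException;
import java.util.Collection;
import javax.ejb.EJBHome;
import javax.ejb.FinderException;

// Hypothetical home interface for a "catalog" entity bean.  Each finder
// is backed by an ejbFind method (bean-managed persistence) or an
// EJB QL query (container-managed persistence) and returns the entity
// beans that satisfy the given search condition.
public interface CatalogHome extends EJBHome {
    Collection findAll() throws FinderException, RemoteException;
    Collection findByProduct(String product)
            throws FinderException, RemoteException;
    Collection findByProductAndType(String product, String type)
            throws FinderException, RemoteException;
}
```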
Currently, query results may be cached and a user may search the query results by a certain criterion. For example, a catalog may have a “product” field and a “type” field, and a user may search by product, such as product=“electronics” or product=“books”. Since the catalog is seldom updated, the query results may be cached by the search criteria, such that when the user performs the same search, the result is returned from the cached object instead of the database, thus improving the search response time. If query results are cached without context, however, cached data may be returned for a query only if the query is an exact match.
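A minimal sketch of this kind of criteria-keyed caching is shown below; the class and its use of a plain map keyed by the literal criteria string are assumptions made for illustration, and highlight why only an exact repeat of a previously cached query can be served from the cache.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Naive query-result cache keyed by the literal search criteria.
// A lookup succeeds only when the new query is an exact match for a
// previously cached one (e.g. product=books); a narrower query such as
// product=books, type=bestsellers misses even though its results are a
// subset of what is already cached.
public class ExactMatchQueryCache {
    private final Map cache = new HashMap();   // criteria -> Collection

    public Collection get(String criteria) {
        return (Collection) cache.get(criteria);
    }

    public void put(String criteria, Collection results) {
        cache.put(criteria, results);
    }
}
```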
Currently, no existing mechanism allows a search to be performed on a subset of existing cached query results, for example, searching the query results returned by product=“books” for type=“bestsellers”. If all the “books” are already cached, it is more efficient to iterate over the cached “books” results and filter them to retrieve the “bestsellers” than to perform a separate database search based on both the product and the type.
In addition, no existing mechanism sets up query results in a way that makes it easy for a user to iterate and filter them. Therefore, it would be advantageous to have an improved method for a context based cache infrastructure that enables a subset query over a cached object, such that database queries may be minimized to improve search performance.
The present invention provides a method, an apparatus, and computer instructions for a context based cache infrastructure to enable a subset query over a cached object. The mechanism of the present invention detects a query from a user to the root context of a context tree, wherein the query includes a name and value pair. Responsive to detecting the query, the mechanism traverses the context tree for a parent context of the subcontext corresponding to the name and value pair, and determines whether the parent context caches all query results.
If the parent context does not cache all query results, the mechanism repeats the traversing step for the next parent context of the subcontext until the root context is encountered. When the root context is encountered, the mechanism issues a query to the database for the name and value pair and caches the result of the database query in a new context.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
With reference now to the figures, FIG. 1 depicts a pictorial representation of a network of data processing systems in which the present invention may be implemented. Network data processing system 100 contains a network 102, which is the medium used to provide communications links between the various devices and computers connected together within network data processing system 100.
In the depicted example, server 104 is connected to network 102 along with storage unit 106. In addition, clients 108, 110, and 112 are connected to network 102. These clients 108, 110, and 112 may be, for example, personal computers or network computers. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to clients 108-112. Clients 108, 110, and 112 are clients to server 104. Network data processing system 100 may include additional servers, clients, and other devices not shown. In the depicted example, network data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, government, educational and other computer systems that route data and messages. Of course, network data processing system 100 also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN).
Referring to FIG. 2, a block diagram of a data processing system 200 that may be implemented as a server, such as server 104 in FIG. 1, is depicted in accordance with a preferred embodiment of the present invention.
Peripheral component interconnect (PCI) bus bridge 214, connected to I/O bus 212, provides an interface to PCI local bus 216. A number of modems may be connected to PCI local bus 216. Typical PCI bus implementations will support four PCI expansion slots or add-in connectors. Communications links to clients 108-112 in FIG. 1 may be provided through modems or network adapters connected to PCI local bus 216 through add-in connectors.
Additional PCI bus bridges 222 and 224 provide interfaces for additional PCI local buses 226 and 228, from which additional modems or network adapters may be supported. In this manner, data processing system 200 allows connections to multiple network computers. A memory-mapped graphics adapter 230 and hard disk 232 may also be connected to I/O bus 212 as depicted, either directly or indirectly.
Those of ordinary skill in the art will appreciate that the hardware depicted in FIG. 2 may vary. For example, other peripheral devices, such as optical disk drives and the like, also may be used in addition to or in place of the hardware depicted. The depicted example is not meant to imply architectural limitations with respect to the present invention.
The data processing system depicted in FIG. 2 may be, for example, an IBM eServer pSeries system, a product of International Business Machines Corporation in Armonk, N.Y., running the Advanced Interactive Executive (AIX) operating system or the LINUX operating system.
With reference now to FIG. 3, a block diagram illustrating a data processing system in which the present invention may be implemented is depicted. Data processing system 300 is an example of a client computer.
An operating system runs on processor 302 and is used to coordinate and provide control of various components within data processing system 300 in FIG. 3.
Those of ordinary skill in the art will appreciate that the hardware in FIG. 3 may vary depending on the implementation. Other internal hardware or peripheral devices may be used in addition to or in place of the hardware depicted in FIG. 3.
As another example, data processing system 300 may be a stand-alone system configured to be bootable without relying on some type of network communication interfaces. As a further example, data processing system 300 may be a personal digital assistant (PDA) device, which is configured with ROM and/or flash ROM in order to provide non-volatile memory for storing operating system files and/or user-generated data.
The depicted example in FIG. 3 and the above-described examples are not meant to imply architectural limitations.
The processes and mechanisms of the present invention may be implemented as computer instructions executed by processor 302 in data processing system 300 in FIG. 3.
The present invention provides a method, an apparatus, and computer instructions for a context based cache infrastructure to enable a subset query over a cached object. The present invention provides a mechanism that enables in-memory or cached object queries by constructing the cache as a context tree. The context tree includes a root cache context, ‘/’, for each EJB type. The root cache context can hold objects that belong to the EJB type without any filtering. For example, a root cache context may hold the entire catalog data returned from a catalog.findAll() query.
Each root cache context may include subcontexts, which indicate detailed filtering of the cached results of the current root cache context by a group of field name/field value pairs. For example, an EJB type “catalog” may include a “product” field and a “type” field, and a root cache context ‘/’ may include subcontext ‘/product/books’, which holds objects returned from a catalog.findByProduct(“books”) query. Subcontext ‘/product/books’ may in turn include its own subcontext ‘/product/books/type/bestsellers’, which holds objects returned from a catalog.findByProductAndType(“books”, “bestsellers”) query.
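One possible representation of such a context tree is sketched below purely for illustration; the CacheContext class, its field names, and its methods are assumptions rather than classes defined by this description.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// One node of the context tree.  The root cache context ('/') has a null
// field name/value; each subcontext narrows its parent by one field
// name/field value pair (e.g. product=books, then type=bestsellers).
public class CacheContext {
    private final String fieldName;    // null for the root context
    private final String fieldValue;   // null for the root context
    private final CacheContext parent; // null for the root context
    private final Map subcontexts = new HashMap(); // "name=value" -> CacheContext

    private Collection cachedResults;  // null until results are cached here

    public CacheContext(String fieldName, String fieldValue, CacheContext parent) {
        this.fieldName = fieldName;
        this.fieldValue = fieldValue;
        this.parent = parent;
    }

    public CacheContext getParent()      { return parent; }
    public boolean isRoot()              { return parent == null; }
    public boolean hasCachedResults()    { return cachedResults != null; }
    public Collection getCachedResults() { return cachedResults; }
    public void setCachedResults(Collection results) { this.cachedResults = results; }
    public String getFieldName()         { return fieldName; }
    public String getFieldValue()        { return fieldValue; }

    // Returns the subcontext for the given pair, creating it if necessary.
    public CacheContext getSubcontext(String name, String value) {
        String key = name + "=" + value;
        CacheContext child = (CacheContext) subcontexts.get(key);
        if (child == null) {
            child = new CacheContext(name, value, this);
            subcontexts.put(key, child);
        }
        return child;
    }
}
```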
When a query is detected by the mechanism of the present invention, a findContext() method is called on the root cache context with field name and field value pairs, for example, {“product”, “books”} and {“type”, “bestsellers”}. In turn, a context at the level of ‘/product/books/type/bestsellers’ is returned. The mechanism of the present invention then traverses the parents of the ‘/product/books/type/bestsellers’ context until it reaches the root cache context, to identify the nearest context that caches the query results.
In the above example, the mechanism of the present invention traverses first to subcontext ‘/product/books’, and then to root cache context ‘/’. If a parent context that caches query results is found, the mechanism of the present invention iterates over the cached results of that upper level context and filters them by the remaining field name and field value pairs, that is, the original field name and value pairs excluding those already represented by the upper level context. However, if no such parent context is found, the mechanism of the present invention issues a query to the database and caches the result at the new context level.
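Building on the hypothetical CacheContext sketch above, a findContext() implementation might resolve an ordered set of field name/value pairs to the corresponding context as follows; the class and method signatures are again assumptions for illustration.

```java
import java.util.Iterator;
import java.util.Map;

// Hypothetical resolver for a root cache context.  Given an ordered map of
// field name -> field value pairs (e.g. product=books, type=bestsellers),
// it walks or creates one subcontext level per pair and returns the
// deepest context, e.g. '/product/books/type/bestsellers'.
public class ContextResolver {
    public static CacheContext findContext(CacheContext root, Map pairs) {
        CacheContext context = root;
        for (Iterator it = pairs.entrySet().iterator(); it.hasNext();) {
            Map.Entry pair = (Map.Entry) it.next();
            context = context.getSubcontext(
                    (String) pair.getKey(), (String) pair.getValue());
        }
        return context;
    }
}
```

For example, passing a LinkedHashMap containing {“product”, “books”} followed by {“type”, “bestsellers”} would return the ‘/product/books/type/bestsellers’ context, whose parent contexts can then be traversed toward the root.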
Turning now to FIG. 4, a diagram illustrating an exemplary context tree with a root cache context and its subcontexts is depicted in accordance with a preferred embodiment of the present invention. In this example, root cache context ‘/’ 400 holds all objects of the “catalog” EJB type without any filtering, for example, the objects returned from a Catalog.findAll() query.
Root cache context 400 has subcontexts that indicate detailed filtering of cached results by a group of field name/field value pairs. In this example, root cache context 400 has subcontext ‘/product/books’ 402, which holds objects filtered from a Catalog.findByProduct(“books”) subset query. In turn, subcontext ‘/product/books’ 402 has a subcontext ‘/product/books/type/bestsellers’ 404, which holds objects filtered from a Catalog.findByProductAndType(“books”, “bestsellers”) subset query.
Turning now to FIG. 5, a diagram illustrating an exemplary set of fields and field values for the “catalog” EJB type is depicted in accordance with a preferred embodiment of the present invention.
Product field 502 has a set of field values, including books 506, CDs 508, and magazines 510. Books 506 represents subcontext ‘/product/books’ 402 in FIG. 4.
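Using the hypothetical CacheContext sketch introduced above, the context tree of this example could be assembled as follows; the data values are taken from the catalog example, while the class itself is illustrative only.

```java
import java.util.ArrayList;

public class CatalogCacheExample {
    public static void main(String[] args) {
        // Root cache context '/' for the "catalog" EJB type.
        CacheContext root = new CacheContext(null, null, null);
        root.setCachedResults(new ArrayList());  // e.g. results of findAll()

        // Subcontexts for product=books, product=CDs, product=magazines.
        CacheContext books = root.getSubcontext("product", "books");
        root.getSubcontext("product", "CDs");
        root.getSubcontext("product", "magazines");

        // '/product/books/type/bestsellers' under the books subcontext.
        CacheContext bestsellers = books.getSubcontext("type", "bestsellers");

        System.out.println("books is root? " + books.isRoot());
        System.out.println("bestsellers parent value: "
                + bestsellers.getParent().getFieldValue());
    }
}
```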
Turning now to FIG. 6, a flowchart of an exemplary process for enabling a subset query over a cached object is depicted in accordance with a preferred embodiment of the present invention. The process begins when the mechanism of the present invention detects a query from a user to the root context of the context tree, the query including a field name and value pair.
Next, the mechanism of the present invention traverses to the parent of the subcontext corresponding to the field name and value pair (step 602). A determination is then made by the mechanism as to whether the parent context caches all query results (step 604). If the parent context has all query results, the mechanism of the present invention iterates over the cached results of the parent context and filters them by the remaining field name and value pairs (step 608). The process then terminates.
However, if the parent context does not have all query results, the mechanism of the present invention makes a determination as to whether the parent context is the root context (step 606). If the parent context is not the root context, the mechanism traverses to the next parent context up the context tree (step 610) and repeats the determination of step 604. However, if the parent context is the root context, the root of the context tree has been reached without finding cached results, and the mechanism of the present invention issues a query to the database and caches the query result in a new context (step 612), with the process terminating thereafter.
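A sketch of this process, expressed in terms of the hypothetical CacheContext and ContextResolver classes above together with assumed database-access and field-access callbacks, might look like the following; the step numbers in the comments correspond to the flowchart steps described above.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.Map;

public class SubsetQueryCache {

    // Hypothetical callback that runs the actual database query when no
    // cached superset of the requested results is available (step 612).
    public interface DatabaseQuery {
        Collection query(Map pairs);
    }

    // Hypothetical accessor used while filtering: returns the value of the
    // named field on a cached catalog object.
    public interface FieldAccessor {
        String getField(Object entity, String fieldName);
    }

    public static Collection query(CacheContext root, Map pairs,
                                   DatabaseQuery db, FieldAccessor fields) {
        // Resolve the context corresponding to the full set of pairs,
        // e.g. '/product/books/type/bestsellers'.
        CacheContext target = ContextResolver.findContext(root, pairs);

        // A repeat of an identical query is served from its own context.
        if (target.hasCachedResults()) {
            return target.getCachedResults();
        }

        // Step 602: traverse the parent contexts of the target subcontext.
        CacheContext context = target.getParent();
        while (context != null) {
            // Step 604: does this parent context cache all query results?
            if (context.hasCachedResults()) {
                // Step 608: iterate the cached results and filter them by the
                // field name/value pairs (pairs already represented by this
                // context are trivially satisfied).
                return filter(context.getCachedResults(), pairs, fields);
            }
            // Step 606: is this parent context the root context?
            if (context.isRoot()) {
                break;  // nothing cached at any level of the tree
            }
            // Step 610: traverse to the next parent context up the tree.
            context = context.getParent();
        }

        // Step 612: issue the query to the database and cache the result
        // in the new context.
        Collection results = db.query(pairs);
        target.setCachedResults(results);
        return results;
    }

    private static Collection filter(Collection cached, Map pairs,
                                     FieldAccessor fields) {
        Collection filtered = new ArrayList();
        for (Iterator it = cached.iterator(); it.hasNext();) {
            Object entity = it.next();
            boolean matches = true;
            for (Iterator p = pairs.entrySet().iterator(); p.hasNext();) {
                Map.Entry pair = (Map.Entry) p.next();
                if (!pair.getValue().equals(
                        fields.getField(entity, (String) pair.getKey()))) {
                    matches = false;
                    break;
                }
            }
            if (matches) {
                filtered.add(entity);
            }
        }
        return filtered;
    }
}
```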
In summary, the present invention provides a context based cache infrastructure to enable a subset query over a cached object. By using the mechanism of the present invention, a user may now iterate and filter query results. In addition, database queries may now be minimized to improve search performance.
It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media, such as a floppy disk, a hard disk drive, a RAM, CD-ROMs, DVD-ROMs, and transmission-type media, such as digital and analog communications links, wired or wireless communications links using transmission forms, such as, for example, radio frequency and light wave transmissions. The computer readable media may take the form of coded formats that are decoded for actual use in a particular data processing system.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.