1. Field of the Invention
This invention relates generally to data processing. More particularly, an embodiment relates to a system and method for performing data processing using shared memory.
2. Description of the Related Art
Traditional client-server systems employ a two-tiered architecture such as that illustrated in
As is known in the art, the “business logic” component of the application represents the core of the application, i.e., the rules governing the underlying business process (or other functionality) provided by the application. The “presentation logic” describes the specific manner in which the results of the business logic are formatted for display on the user interface. The “database” 104 includes data access logic used by the business logic to store and retrieve data.
The limitations of the two-tiered architecture illustrated in
In response to limitations associated with the two-tiered client-server architecture, a multi-tiered architecture has been developed, as illustrated in
This separation of logic components and the user interface provides a more flexible and scalable architecture compared to that provided by the two-tier model. For example, the separation ensures that all clients 125 share a single implementation of business logic 122. If business rules change, changing the current implementation of business logic 122 to a new version may not require updating any client-side program code. In addition, presentation logic 121 may be provided which generates code for a variety of different user interfaces 120, which may be standard browsers such as Internet Explorer® or Netscape Navigator®.
The multi-tiered architecture illustrated in
For example, in a J2EE environment, such as the one illustrated in
In recent years, as business application development projects have grown larger and more diversified, integration of business applications in terms of people, information, and processes is becoming increasingly important. SAP® NetWeaver™ was developed and presented by SAP AG with core capabilities to provide a solution for the integration of people, information, and processes.
However, the integration of people, information, and processes is resulting in an ever increasing demand for high-level planning, maintenance, and administration, which, in turn, requires the underlying architecture and environment to support, for example, platform independence, inter-process communication, increased security, development versioning, multi-user capability, shared memory, and efficient classloading. For example, it would be useful to have an architectural environment that provides increased robustness, improved integration, better monitoring, reduced memory footprint, decreased internal threads, faster session failover, and shared memory.
A system and method are described for performing data processing using shared memory. In one embodiment, a first application programming engine is employed to process first application programming-based requests. Additionally, a second application programming engine is employed to process second application programming-based requests. The first and second application programming engines are integrated using a memory that provides common access to both the first and second application programming engines.
The appended claims set forth the features of the invention with particularity. The embodiments of the invention, together with their advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings in which:
Described below is a system and method for performing data processing using shared memory. Throughout the description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the present invention.
In the following description, numerous specific details such as logic implementations, opcodes, resource partitioning, resource sharing, and resource duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices may be set forth in order to provide a more thorough understanding of various embodiments of the present invention. It will be appreciated, however, by one skilled in the art that the embodiments of the present invention may be practiced without such specific details, based on the disclosure provided. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
Various embodiments of the present invention will be described below. The various embodiments may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or a machine or logic circuits programmed with the instructions to perform the various embodiments. Alternatively, the various embodiments may be performed by a combination of hardware and software.
Various embodiments of the present invention may be provided as a computer program product, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process according to various embodiments of the present invention. A machine-readable storage medium includes, but is not limited to, floppy diskette, optical disk, compact disk-read-only memory (CD-ROM), magneto-optical disk, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or another type of media/machine-readable storage medium suitable for storing electronic instructions. Moreover, various embodiments of the present invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Information integration 204 refers to converting information into knowledge quickly and efficiently. Information integration 204 provides efficient business intelligence 216 and knowledge management 220 using SAP products like Business Information Warehouse (BW) and Knowledge Management (KM). Further, consolidation of master data management 218 beyond system boundaries is performed using SAP's Master Data Management (MDM). Process integration 206 refers to optimized process management using integration broker or SAP exchange infrastructure 222 and business process management 224 techniques. Examples of products to perform process integration 206 include Exchange Infrastructure (XI) and Business Process Management (BPM).
Application platform 208 refers to SAP's Web Application Server (Web AS), which is the basis for SAP applications. Web AS, which may be independent of the database and operating system 230, includes a J2EE engine 226 in combination with an already present ABAP engine 228 to further enhance the application platform 208. The architecture 200 further includes a composite application framework 232 to provide various open interfaces (APIs) and a lifecycle management 234, which is an extension of the previous Transport Management System (TMS). As illustrated, the architecture 200 further provides communication with Microsoft .NET 236, International Business Machines® (IBM) WebSphere™ 238, and the like 240.
The Web AS 320 having the ABAP engine 302 is further enhanced by including a J2EE engine 304. The J2EE engine 304 is in communication with the ABAP engine 302 via a fast Remote Function Call (RFC) connection 306. The two engines 302-304 are further in communication with an Internet Communication Manager (ICM) 308. The ICM 308 is provided for handling and distributing queries (e.g., Internet queries) to various individual components of the architecture 300. The architecture 300 further supports a browser 310, such as Microsoft Internet Explorer or Netscape Navigator, as well as modified variations for mobile end devices, such as personal digital assistants (PDAs), pocket computers, smart cell phones, other hybrid devices, and the like. The Web AS 320 also supports various protocols and standards 312, such as HyperText Markup Language (HTML), eXtensible Markup Language (XML), Wireless Markup Language (WML), HyperText Transfer Protocol (HTTP(S)), Simple Mail Transfer Protocol (SMTP), Web Distributed Authoring and Versioning (WebDAV), Simple Object Access Protocol (SOAP), Single Sign-On (SSO), Secure Sockets Layer (SSL), X.509, Unicode, and the like.
At the presentation layer 410, the clients are illustrated as workstations or terminals 412-416 that are used to collect and gather user input and send it to the application layer 420 via a network connection. The network connection may be a wired or wireless connection to a LAN, a Wide Area Network (WAN), a Metropolitan Area Network (MAN), an intranet, and/or the Internet. The terminals 412-416 include personal computers, notebook computers, personal digital assistants, telephones, and the like. In one embodiment in which the network connection connects to the Internet, one or more of the user terminals 412-416 may include a Web browser (e.g., Internet Explorer or Netscape Navigator) to interface with the Internet.
The presentation layer 410 allows the end user to interact with the relevant application using a GUI, such as the SAP GUI, which is a universal client widely used for accessing SAP R/3 or mySAP functions. The GUI works as a browser and offers easy access to various SAP functions, such as application transactions, reports, and system administration functions. The SAP GUI, for example, is available in three different formats, each of which has its own unique selling point and is suited to a particular type of user. The three formats include SAP GUI for Windows®, SAP GUI for HTML, and SAP GUI for Java™.
The presentation layer 410 may also include various management applications, such as a Java Management Extension (JMX)-compliant management application, a JMX manager, and/or a proprietary management application. The management applications include one or more graphical management applications, such as a visual administrator, operating to, for example, retrieve and display information received from the application layer 420 and/or the database layer 430. The visual administrator includes a monitor viewer to display this and other information. The monitor viewer includes a GUI-based or Web-based monitor viewer. Management applications include third party tools, such as file systems, to store information.
The application layer 420 includes various application servers and computing devices to perform data processing. The application layer 420 includes a dispatcher 422, which refers to the central process on the application layer 420 for processing transactions. For example, the dispatcher 422 is used to distribute the request load to individual work processes 424-428, organize communication between the work processes 424-428, and handle connection to the presentation layer 410. For example, when a user makes processing entries from his computer using the menu on the presentation layer 410, the entries are converted into a special format (e.g., GUI protocol) and forwarded to the dispatcher 422. The dispatcher 422 then places this request in a dispatcher queue. The queue is then used to find free work processes 424-428 that carry out the processing. The application layer 420 may be implemented in accordance with J2EE v1.3, final release Sep. 24, 2001, published on Jul. 18, 2002 (the J2EE Standard). An update of J2EE v1.3 was recently released, on Nov. 24, 2003, as J2EE v1.4. The management techniques described herein are used to manage resources within a “cluster” of server nodes. An exemplary cluster architecture is described below with respect to
The database layer 430 is used to optimize the data access without being dependent on the underlying database and the operating system. The database independence is achieved using open standards, such as Open SQL and Java Database Connectivity (JDBC). The presentation layer 410 is where the user interacts with the relevant application, which is then executed at the application layer 420, while the data persistence 432-436 is managed at the database layer 430. The database layer 430 may include one or more database management systems (DBMS) and data sources. Furthermore, the database layer 430 is compatible with both the ABAP and J2EE engines.
The database layer 430 may include one or more database servers, EJB servers, legacy systems, and mySAP components. The clients at the presentation layer 410 may access one or more of the applications via standalone Java programs and programs that help access an application via, for example, the Internet Inter-Object Request Broker Protocol (IIOP)/Common Object Request Broker Architecture (CORBA), written using any number of programming languages (e.g., C and C++).
The J2EE environment may also include various J2EE containers that are associated with various J2EE services and APIs, which include the Java Naming and Directory Interface (JNDI), Java Database Connectivity (JDBC), J2EE Connector Architecture (JCA), Remote Method Invocation (RMI), Java Transaction API (JTA), Java Transaction Service (JTS), Java Message Service (JMS), Java Mail, Java Cryptography Architecture (JCA), Java Cryptography Extension (JCE), and Java Authentication and Authorization Service (JAAS). The J2EE services further include EJB_service, servlet_JSP, application_client_service, and connector_service, which provide the corresponding J2EE containers, namely EJB containers, Web containers, application client containers, and connector containers, respectively.
A process refers to a task being run by a computer, which is often executed simultaneously with several other tasks. Many processes exist simultaneously, with each of them taking turns on the central processing unit (CPU). Typically, the processes include operating system (OS) processes that are embedded in the operating system. The processes consume CPU time, as opposed to memory, which takes up space. This is typically the case both for processes that are managed by the operating system and for processes that are defined by process calculi. The processes further include specialized processes, such as ABAP work processes 508-512 and J2EE worker nodes 514-518.
The operating system works to keep the processes separated and allocates the resources to help eliminate the potential interference of the processes with each other when they are executed simultaneously. Such potential interference can cause system failures. Further, the operating system may also provide mechanisms for inter-process communication to enable processes to interact in a safe and predictable manner. Typically, an OS process consists of memory (e.g., a region of virtual memory), which contains executable code or task-specific data; operating system resources that are allocated to each of the processes, which include file descriptors (for UNIX) and handles (for Windows); security attributes (e.g., the process owner and the set of permissions); and the processor state (e.g., the content of registers, physical memory addresses), which is stored in the actual registers when the process is executing.
The ABAP work processes and the J2EE worker nodes, which are OS processes 508-518, are considered specialized processes that have the attributes and behavior of a typical OS process and are created, scheduled, and maintained by the operating system. For example, the ABAP work processes 508-512 are specialized in that they are used to execute ABAP-based transactions, and the J2EE worker nodes 514-518 are specialized in that they are used to execute Java-based transactions.
Assigning individualized memory results in relatively inefficient computing that lacks robustness, as the work processes 508-512 and worker nodes 514-518 do not communicate with each other and have to access their local memory for information or data. For example, direct communication between the ABAP instance 504, with its ABAP work processes 508-512, and the J2EE instance 506, with its J2EE worker nodes 514-518, is lacking. Furthermore, such network-based communication using various network connections also causes the data processing transactions to be time-consuming, unreliable due to network errors, and less secure. For example, a typical data processing transaction may include retrieving the data from one local memory and passing it through various protocols (e.g., Transmission Control Protocol (TCP), User Datagram Protocol (UDP)), addresses (e.g., Internet Protocol (IP) addresses), and operating systems before the data reaches its destination at another local memory.
In one embodiment, the FCA 602 includes shared memory 624 to facilitate bi-directional communication between various independent processes, which include OS processes and further include specialized processes, such as the ABAP work processes 608-612 and the J2EE worker nodes 614-618. The shared memory 624 at the FCA 602 provides relatively fast, efficient, scalable, reliable, and secure communication between the various work processes and worker nodes 608-618 on the same physical host. The shared memory-based bi-directional communication utilizes the centralized shared memory 624 for the work processes and worker nodes 608-618 and other components of the architecture 600 to share and access, thus eliminating the need for individualized local memory and for communicating via a network. Furthermore, the use of the shared memory 624 provides copy-free communication, high bandwidth, low latency, and fixed-size communication buffers.
Typical OS processes refer to tasks embedded in the operating system. For example, each time a client initiates a program or a document (e.g., opening Microsoft Word®), a request is placed with the operating system to commence the task of opening the document for the client. Several such processes can be performed simultaneously in the CPU by taking turns. Typically, an operating system provides isolation of such processes so that they are less likely to interfere with each other; for example, when one process crashes, none of the other processes are affected by it, and a potential system failure is avoided. For example, the operating system can increase isolation and robustness by allocating one process for each user session, and running a VM for that user session within the allocated process. However, in some situations (e.g., when there are a large number of user sessions), such operating system scheduling and allocation can add to the system overhead and consume valuable resources, such as time and space.
The processes may contain some memory (e.g., a region of virtual memory for suspended processes which contains executable code or task-specific data), operating system resources that are allocated to such processes (such as file descriptors, when referring to UNIX, and handles, when referring to Windows), security attributes, such as process owner and the process' set of permissions, and the processor state, such as the content of registers, physical memory addresses, etc.
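To make the enumeration above concrete, the following C sketch groups the listed process attributes (a virtual memory region, allocated OS resources such as file descriptors or handles, security attributes, and processor state) into a single structure. The structure and its field names are illustrative only; actual operating systems maintain this state in internal kernel structures with different layouts.

```c
#include <stddef.h>
#include <sys/types.h>

/* Illustrative only: a simplified grouping of the per-process attributes
 * described above. Real operating systems keep this state in internal
 * kernel structures with very different layouts and names. */
struct process_image {
    void   *code_and_data;      /* region of virtual memory: executable code, task data */
    size_t  image_size;

    int    *file_descriptors;   /* allocated OS resources (handles on Windows) */
    size_t  fd_count;

    uid_t   owner;              /* security attributes: process owner ... */
    mode_t  permissions;        /* ... and its set of permissions */

    struct {                    /* processor state saved while the process is not executing */
        unsigned long registers[32];
        unsigned long program_counter;
        unsigned long stack_pointer;
    } cpu_state;
};
```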
Various enterprise servers and other large servers are considered request processing engines for processing large numbers of small user requests associated with user sessions. The user requests lead to the creation of processes that handle the processing of such requests. The processing of the requests usually involves running business code (e.g., Java servlets or EJBs) in a runtime system (e.g., a Java virtual machine (JVM)) executing on a server. In such a server, scalability can be achieved by using multiple threads, such as a multi-threaded VM, to process requests corresponding to a number of user sessions.
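As an illustration of the multi-threaded request processing mentioned above, the following C sketch shows the general worker-thread pattern with POSIX threads: several threads of one server process take turns pulling user requests from a shared queue. The queue, the request representation, and the shutdown sentinel are assumptions made for the sketch and are not part of any particular server implementation.

```c
#include <pthread.h>
#include <stdio.h>

#define WORKERS  4
#define REQUESTS 16

/* Illustrative worker-thread pattern: several threads of one server process
 * take turns pulling user requests from a shared queue. */
static int queue[REQUESTS];
static int head = 0, tail = 0;
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&ready, &lock);   /* sleep until a request arrives */
        int request = queue[head++ % REQUESTS];
        pthread_mutex_unlock(&lock);
        if (request < 0)                        /* sentinel: shut the worker down */
            return NULL;
        printf("processing request %d\n", request);  /* business code would run here */
    }
}

int main(void)
{
    pthread_t pool[WORKERS];
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&pool[i], NULL, worker, NULL);

    for (int i = 0; i < REQUESTS; i++) {        /* enqueue incoming requests */
        pthread_mutex_lock(&lock);
        queue[tail++ % REQUESTS] = (i < REQUESTS - WORKERS) ? i : -1;
        pthread_cond_signal(&ready);
        pthread_mutex_unlock(&lock);
    }
    for (int i = 0; i < WORKERS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```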
In one embodiment, the shared memory 624 can provide common access and a buffer for the process-attachable VMs, the OS processes including ABAP work processes 608-612 and J2EE worker nodes 614-618, as well as the dispatcher processes. It is to be noted that the ABAP work processes 608-612 at the ABAP engine 604 are considered specialized processes, i.e., OS processes with specialized functionality. The work processes 608-612 share the attributes and behavior common to OS processes and may be created, scheduled, and maintained by the operating system. For example, the ABAP work processes 608-612 are to execute ABAP transactions, while the J2EE worker nodes 614-618, also regarded as specialized processes having similar attributes as the OS processes, are to execute the Java code.
Introducing the FCA 602 into the architecture 600 enables an executable program (e.g., a program running on an OS process executing the code) to use the FCA functionalities by binding the FCA library at the time of development and by calling the FCA-API in the programming language (e.g., C or Java). For example, at runtime, the executable program operates as a process in the operating system, such as when a program (e.g., MS Word or Excel) is started several times, which creates several OS processes associated with one program that are performed using the FCA functionalities. In one embodiment, the FCA 602 may remain independent of a particular programming language (e.g., ABAP or Java) or a particular operating system (e.g., UNIX or Windows). The FCA functionalities (e.g., ABAP statements, transactions, input/output processing, etc.) may be achieved by coding such functionalities in the program. Stated differently, the program, when running, is executed as an OS process and, as such, performs various tasks, such as reading/writing data, processing data, and accessing the FCA functionalities.
Although not illustrated here, a dispatcher (e.g., the ABAP dispatcher 622) could serve as a central process on the application layer for processing transactions. For example, the ABAP dispatcher 622 facilitates starting the ABAP work processes 608-612, monitoring the status of the work processes 608-612, restarting a work process 608-612 in case of a crash, communicating with the GUI, dispatching requests to the ABAP work processes 608-612 based on the availability of such work processes 608-612, and communicating with the message server 632. In one embodiment, the dispatcher may use the FCA-based shared memory 624 to communicate with the work processes 608-612, but the FCA 602 alone may not necessarily replace the dispatcher 622. However, the functionalities of the dispatcher 622 may be moved to other components and processes, such as to the Internet Communication Manager (ICM) 620, to perform one or more of the dispatcher-related tasks. In one embodiment, this can be performed by providing code in the program, which, when running on an OS process, executes the code. Also, on the ABAP instance 604, the dispatcher may still remain to provide communication with a GUI, such as the SAP GUI.
On the J2EE instance 606, in one embodiment, the functionality of the J2EE dispatcher (not shown) may be moved to the ICM 620. Moving the J2EE dispatcher functionalities to the ICM 620 provides increased robustness, scalability, and a simple architecture with a single access point. In another embodiment, the J2EE dispatcher need not be removed when using the FCA-based architecture 600; the FCA 602 can also work with the J2EE dispatcher to perform various tasks. In an alternative embodiment, with regard to dispatching various requests, neither the ABAP dispatcher 622 nor the J2EE dispatcher may be needed, because the user requests can be serially assigned to the available ABAP work processes 608-612 and J2EE worker nodes 614-618. For example, each ABAP work process 608-612 could maintain a request queue for various requests at the shared memory 624 and attach the VM of the user session corresponding to the request at the front of the request queue to process the next request.
In one embodiment, having the shared memory 624 helps eliminate the necessity for local communication memory or individually dispersed memory for performing requests and for communicating data. Stated differently, the shared memory 624, as opposed to a local memory using a network connection, is used to create a buffer (e.g., for receiving and transmitting data) for the work processes 608-612 and the worker nodes 614-618. For example, once a request to perform a particular task is received at the server from a client/user session, a process to be performed is initiated as the request is created. A request queue is created at the shared memory 624 and the recently-created request is then placed in the request queue. In one embodiment, the dispatcher 622 then determines the availability of various work processes 608-612 and, based on such availability, assigns the request to the available work process 608-612 to handle. The work process 608-612 performs the corresponding OS process to satisfy the client request. The satisfying of the request may include performing the requested task and providing the requested information or response data back to the client via the shared memory 624. In another embodiment, if the dispatcher 622 is not used, the ICM 620 may possess the functionalities of the dispatcher 622 and assign the request to, for example, the available ABAP work process 608-612 or J2EE worker node 614-618. The ABAP-related requests are sent to the ABAP work processes 608-612 and the Java-related requests are sent to the J2EE worker nodes 614-618. Having the shared memory 624 provided by the FCA 602 not only allows a copy-free transmission of the data, but also eliminates the potential of the data being lost due to connection or network failures. Furthermore, using a single shared memory 624 allows the various tasks to run on a single local host, which in turn, provides a secure transmission of data. In one embodiment, the shared memory 624 includes memory pipes that are used bi-directionally and are created at startup along with initialization of the FCA 602.
In one embodiment, a block of the shared memory 624 may be set aside to generate request queues with each request queue having one or more requests to be performed. In one embodiment, the work processes 608-612 and worker nodes 614-618 may have direct access to this block of the shared memory 624 or a portion of the block may be mapped to the address space of the selected work processes 608-612 and worker nodes 614-618.
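The arrangement described above can be pictured as a fixed block of shared memory subdivided into one request queue per work process or worker node. The following C sketch illustrates such a layout; the structure names, slot counts, and field sizes are assumptions made for the sketch and do not reflect the actual FCA layout.

```c
#include <stdint.h>

/* Illustrative layout only: a block of shared memory divided into one
 * fixed-size request queue per work process / worker node. Slot counts,
 * field names, and sizes are assumptions, not the actual FCA layout. */
#define QUEUE_SLOTS   64
#define PAYLOAD_BYTES 256

struct request_slot {
    uint32_t request_id;
    uint32_t length;                  /* bytes used in payload */
    uint8_t  payload[PAYLOAD_BYTES];  /* request data placed by the ICM/dispatcher */
};

struct request_queue {
    volatile uint32_t head;           /* next slot to be consumed by the process */
    volatile uint32_t tail;           /* next free slot to be filled */
    struct request_slot slots[QUEUE_SLOTS];
};

/* One queue per work process or worker node; the whole array lives in the
 * shared-memory block, or a portion of it is mapped into the address space
 * of the selected processes. */
struct shared_queue_block {
    struct request_queue queues[6];   /* e.g., three work processes and three worker nodes */
};
```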
In one embodiment, the architecture 600 employs FCA handles (not shown) as communication end-points. The handles are regarded as an entity at the FCA 602 level for providing communication. Although the handles are not sockets, they act socket-like. For example, the handles are presented as sockets to programmers and developers for their convenience and familiarity, while the architecture 600 retains the benefits of employing the handles. Having the shared memory 624 reduces administrative costs, while increasing consistency and easing communication between various components 602-606. Various entities at the shared memory 624 may include data, datagrams, application update information, strings, constants, variables, objects that are instances of a class, runtime representations of a class, and classloaders that are used to load class runtime representations.
In the illustrated embodiment, the FCA 602 provides an FCA-based shared memory 624 in communication with an ICM 620, an ABAP instance 604, and a J2EE instance 606. The ABAP instance 604 includes various specialized work processes 608-612 that are, based on their availability, assigned various ABAP-based OS processes/client requests to perform. The architecture 600 further includes the J2EE instance 606, which includes server nodes or worker nodes 614-618 to complement the ABAP work processes 608-612 by performing various Java-based tasks (e.g., performing client requests/OS processes) that are assigned to them. In one embodiment, the J2EE instance 606 may include Java Virtual Machines (JVMs), while the ABAP instance 604 may include ABAP language VMs (ABAP VMs). ABAP is a programming language for developing applications for the SAP R/3 system, which is a widely installed business application system developed by SAP AG. Common Language Runtime (CLR) VMs may also communicate with the ABAP instance using the FCA. The CLR is a managed code execution environment developed by Microsoft Corp. of Redmond, Wash.
The shared memory 624 includes memory pipes, which are used bi-directionally, to provide bi-directional communication between various components of the architecture 600 that include the ABAP instance 604 and the J2EE instance 606 and their work processes 608-612 and worker nodes 614-618, respectively, the ICM 620, and other third-party applications. In one embodiment, having the shared memory 624 eliminates the necessity for the J2EE instance 606 to communicate with the ICM 620, and ultimately the client, via the TCP/IP connection. Instead, the J2EE instance 606 and the ABAP instance 604 are integrated such that both instances 604-606 are in communication with the ICM 620 via the shared memory 624. Further, the J2EE instance 606 is no longer required to have a dispatcher (e.g., dispatcher 524 of
In one embodiment, the FCA 602 is used to provide an integration point for the ABAP and J2EE instances 604-606, which allows the J2EE worker nodes 614-618 and the ABAP work processes 608-612 to have access to the same centralized shared memory 624. Stated differently, not only do the ABAP instance 604 and its work processes 608-612 have access to the FCA-based shared memory 624, but the J2EE instance 606 and its worker nodes 614-618 also have access to the same shared memory 624, which allows for direct bi-directional communication between various components of the architecture 600, including the work processes 608-612 and the worker nodes 614-618. Having access to the common shared memory 624 eliminates the need for associating individualized local communication memory with each of the work processes 608-612 and worker nodes 614-618 and the need for distributing the memory to various components of the architecture 600. Furthermore, the FCA-based shared memory 624 provides a common or centralized memory for each of the components to access, which eliminates the need for individualized/localized caches for communicating entities (e.g., placing requests, updating data, retrieving responses) between components.
In one embodiment, the FCA 602 is used to provide a common API to facilitate the common access to the shared memory 624 and to provide direct bi-directional communication between various components of the architecture 600. In one embodiment, the shared memory 624 includes memory pipes that are used in a bi-directional fashion to facilitate the bi-directional communication between, for example, the ICM 620 and the ABAP and J2EE instances 604-606. The use of the shared memory 624 results in a cost-effective, efficient, fast, robust, and copy-free communication of entities between various components of the architecture 600. Using the shared memory 624 also allows for the integration of the J2EE instance 606 and the ICM 620 by providing direct and bi-directional communication between the two. For instance, the communication data is transported via the shared memory 624 and only local load-balancing is necessitated; further, protocols such as RMI, P4, and Telnet are ported through the ICM 620. Other protocols, such as SMTP, HTTP, HTTPS, NNTP, and FastCGI, remain ported through the ICM 620.
In one embodiment, the ICM 620 is used to facilitate communication between the architecture 600 and the clients by providing a browser or browser-like access to the user. The Internet protocols supported by the ICM 620 are provided as plug-ins for other standard protocols (e.g., HTTP, SMTP). For example, in a server role, the ICM 620 processes requests from the Internet that are received from the client via a Uniform Resource Locator (URL) with the server/port combination that the ICM 620 listens on. The ICM 620 then invokes the local handler responsible for processing these requests, based on the URL. Applications (e.g., Business Server Pages (BSP)) needing an ABAP context are transferred to the ABAP work processes 608-612, while Java requests are transferred to the J2EE instance 606 to be processed by the J2EE worker nodes 614-618. In one embodiment, the transfer of the requests between the ICM 620 and the ABAP instance 604 is conducted via the ABAP dispatcher 622, which also serves as a load balancer and a point for providing connection to a GUI. On the J2EE side 606, the dispatcher may not be present or needed.
The ICM 620 may include a central request queue for requests that are processed on worker threads. Various time-consuming operations (e.g., accept, SSL handshake, read, write, handshake, and close) are triggered through a request in the ICM request queue, which may not be protocol-specific. The queues in the shared memory 624 include request queues for each of the work processes 608-612 and the worker nodes 614-618. The number of entries in the request queues at the shared memory 624 provides an indication of the load situation of the server. The queues in the shared memory 624 may also include other relevant information, such as information to help with FCA Queue Monitoring (FQM). The values may include the name of the queue (set at startup), the current number of requests in the queue (set by a dedicated process), the peak number of requests (maintained by the FCA), the maximum number of requests (a fixed value that can be set at startup), the last insert (maintained by the FCA), and the last remove (maintained by the FCA).
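The monitoring values listed above map naturally onto a small per-queue record. The following C sketch is illustrative only; the structure, field names, and types are assumptions and do not reflect the actual FCA Queue Monitoring layout.

```c
#include <time.h>

/* Illustrative record of the per-queue monitoring values listed above.
 * Field names and types are assumptions for the sketch, not the actual
 * FCA Queue Monitoring layout. */
struct fqm_queue_stats {
    char          name[32];         /* name of the queue, set at startup */
    unsigned int  current_requests; /* set by a dedicated process */
    unsigned int  peak_requests;    /* maintained by the FCA */
    unsigned int  max_requests;     /* fixed value, set at startup */
    time_t        last_insert;      /* maintained by the FCA */
    time_t        last_remove;      /* maintained by the FCA */
};
```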
In one embodiment, these improvements are achieved by providing common access to a commonly shared memory using memory pipes 706 and other necessary layers 702-704 and 708 of the architecture 700. Such use of the shared memory via the memory pipes 706 also provides secure and copy-free transfer of data, as well as decreased network overhead, latency, copy operations, and process switches. Further, to integrate the J2EE engine and the ICM, as illustrated in
In the illustrated embodiment, the architecture 700 includes a layer of operating system 702. The operating system 702 refers to the master control program that runs the computer. It is the first program loaded when the computer is turned on, and its main part, the kernel, resides in memory at all times. The operating system 702 sets the standards for all application programs that run on the computer. Further, the applications communicate with the operating system 702 for user interface and file management operations. Some examples of the operating system 702 include Windows (e.g., 95, 98, 2000, NT, ME, and XP), Unix (e.g., Solaris and Linux), Macintosh OS, IBM mainframe OS/390, and the AS/400's OS/400. Disk Operating System (DOS) is still used for some applications, and there are other special-purpose operating systems as well.
In one embodiment, the semaphores 704 occupy another layer of the architecture 700. The semaphores 704 refer to the shared space for interprocess communication (IPC) controlled by “wake up” and “sleep” commands. For example, the source process fills a queue and goes to sleep until the destination process uses the data and tells the source process to wake up. The semaphores 704 are provided to work together with the memory pipes 706, which occupy another layer of the architecture 700, to facilitate the shared memory. The memory pipes 706 refer to fast, memory-based, unidirectional communication using pipes that transport communication data between various components of the architecture 700.
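The “sleep”/“wake up” coordination described above can be illustrated with POSIX semaphores guarding a single shared slot, as in the following C sketch. The slot and semaphores would reside in shared memory visible to both processes; initialization (e.g., sem_init with the process-shared flag) is omitted, and the names are illustrative rather than the FCA's actual primitives.

```c
#include <semaphore.h>
#include <string.h>

/* Minimal sketch of the "sleep"/"wake up" coordination described above,
 * using POSIX semaphores over one shared slot. The slot and semaphores
 * would live in shared memory visible to both processes. */
struct shared_slot {
    sem_t empty;        /* source may fill the slot (initialized to 1) */
    sem_t full;         /* destination may read the slot (initialized to 0) */
    char  data[128];
};

/* Source process: fill the slot, wake the destination, sleep until the slot is reusable. */
void source_send(struct shared_slot *slot, const char *msg)
{
    sem_wait(&slot->empty);                 /* sleep until the slot is free */
    strncpy(slot->data, msg, sizeof(slot->data) - 1);
    slot->data[sizeof(slot->data) - 1] = '\0';
    sem_post(&slot->full);                  /* wake up the destination */
}

/* Destination process: sleep until data is available, consume it, wake the source. */
void destination_receive(struct shared_slot *slot, char *out, size_t outlen)
{
    sem_wait(&slot->full);                  /* sleep until data has arrived */
    strncpy(out, slot->data, outlen - 1);
    out[outlen - 1] = '\0';
    sem_post(&slot->empty);                 /* wake up the source */
}
```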
Using the architecture 700, these memory pipes 706 are utilized bi-directionally at the shared memory to relatively efficiently and quickly transport data between various components. The communication between processes and components is facilitated and further enhanced by the FCA communication layer 708, which includes a communication interface or API. The communication layer 708 works with the semaphores 704 and the memory pipes 706 to facilitate direct and bi-directional communication between processes and components and to keep the communication efficient, secure, and fast. Further, the communication layer 708 works as an API for external inputs, third-party applications, and clients.
In addition to the layers 702-708 described, the FCA 700 may also include another interface layer (not shown) to provide a socket-like interface for ease of use. For example, a Java layer (e.g., jFCA) may be used to provide Java-based communication for external applications. This also allows programmers and developers who use Java to make use of the architecture 700. Also, for example, the FCA 700 employs handles as communication endpoints, but, by providing the Java interface layer, they are presented to programmers as sockets, which are well known but not as efficient as handles. Similarly, the FCA 700 may provide other interface layers, such as a C layer, to provide another interface to external applications and to facilitate an easier way to use the shared memory when programming in C.
Each memory pipe 802-804 has two openings: a read opening 806-808 and a write opening 810-812. The read opening 806-808 is where the data 814-824 enters the memory pipe 802-804 to be sent. The write opening 810-812 is where the data 814-824 exits the memory pipe 802-804 to be received. For example, data 814 enters the memory pipe 802 with a name 826. The name 826 represents metadata that is associated with and corresponds to the data 814 for, for example, identification, categorization, and the like. The data 814 is placed in the queue along with data 816 and data 818. The memory pipes 802-804 work in accordance with the well-known First-In-First-Out (FIFO) technique, so data 814 is placed behind data 816 and 818. On the other side, data 818 is the first to exit, or be written, at the write opening 810, followed by data 816 and finally data 814. Memory pipe 804 works in the same way as the memory pipe 802. It is contemplated that the shared memory 800 may contain any number of memory pipes in addition to the two illustrated.
Although the memory pipes 802-804 appear unidirectional in nature, using the FCA (e.g., the FCA communication layer), the memory pipes 802-804 are combined and placed in such an order, as illustrated, that they facilitate bi-directional communication. Using the memory pipes 802-804 for bi-directional communication allows the shared memory 800 to be accessible by various components and processes, thus providing direct and bi-directional communication between such components and processes. The shared memory 800 helps avoid network-like insecure communication and instead provides common shared memory-based secure communication.
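A minimal C sketch of the pairing described above follows: two unidirectional, FIFO memory pipes laid out side by side form one bi-directional channel, with requests traveling through one pipe and responses returning through the other. The layout, slot counts, and function names are assumptions made for the sketch.

```c
#include <stdint.h>
#include <string.h>

#define PIPE_SLOTS 8
#define SLOT_BYTES 64

/* One unidirectional, FIFO memory pipe: data written at one end is read
 * at the other in first-in-first-out order. Layout is illustrative. */
struct memory_pipe {
    volatile uint32_t head;                 /* next slot to read */
    volatile uint32_t tail;                 /* next slot to write */
    char slots[PIPE_SLOTS][SLOT_BYTES];
};

/* Two pipes laid out side by side form one bi-directional channel:
 * requests travel through one pipe, responses return through the other. */
struct bidirectional_channel {
    struct memory_pipe request_pipe;        /* e.g., ICM -> work process */
    struct memory_pipe response_pipe;       /* e.g., work process -> ICM */
};

int pipe_write(struct memory_pipe *p, const char *data)
{
    if (p->tail - p->head == PIPE_SLOTS)
        return -1;                          /* pipe full */
    strncpy(p->slots[p->tail % PIPE_SLOTS], data, SLOT_BYTES - 1);
    p->slots[p->tail % PIPE_SLOTS][SLOT_BYTES - 1] = '\0';
    p->tail++;
    return 0;
}

int pipe_read(struct memory_pipe *p, char *out)
{
    if (p->head == p->tail)
        return -1;                          /* pipe empty */
    strncpy(out, p->slots[p->head % PIPE_SLOTS], SLOT_BYTES);
    p->head++;
    return 0;
}
```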
The memory pipes 802-804 may need several buffers to perform copy-free bi-directional communication of receiving requests and providing responses to such requests. For example, the write side 810 may pre-allocate a sufficient memory buffer before the read side 806 inputs the data 814 into the memory pipe 802. Several FCA buffer API functions are called to accomplish the buffer-related tasks at the memory pipes 802-804. In one embodiment, on the read side 806-808 of the memory pipes 802-804, reading data from the FCA connection object, which references a buffer of type FCA_BUF_HDL in the shared memory 800, is performed by calling <FcaGetInbuf>. With the function call <FcaGetInbuf>, access to the shared memory or memory pipe buffer is obtained from the server. The buffer is removed from the FCA input queue.
By calling <FcaPeekInbuf>, access to the buffer is received from the server, but the buffer is not removed from the input queue. With the function call <FcaGetOutbuf>, a new buffer is received to send to a communication partner. In one embodiment, the maximum usable size of the buffer may be fixed and thus, no size may need to be specified as a parameter. Also, the buffer can now be released again with <FcaFreeBuf> or sent to the communication partner with <FcaFlushOutbuf>. In one embodiment, the attributes (e.g., size_used and status) of the buffer may be set, and no further operations may be allowed on this buffer once this function is called. Finally, the buffer can be freed with the function call <FcaFreeBuf>, regardless of whether the buffer was allocated with <FcaGetOutbuf> or received with <FcaGetInbuf>. No further operations with this buffer are then allowed.
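The following C sketch strings the buffer calls named above into one request/response round trip on the server side. Only the function and type names (<FcaGetInbuf>, <FcaGetOutbuf>, <FcaFreeBuf>, <FcaFlushOutbuf>, FCA_BUF_HDL) come from the description; the header name, the connection handle type, and the exact signatures are assumptions made for the sketch.

```c
#include <stddef.h>
#include "fca.h"   /* assumed header exposing the FCA buffer API */

/* Illustrative request/response round trip over the buffer calls named
 * above. The connection type FCA_CONN_HDL and the signatures shown here
 * are assumptions; only the function and type names come from the text. */
int handle_one_request(FCA_CONN_HDL conn)
{
    /* Read side: obtain the request buffer; it is removed from the input queue. */
    FCA_BUF_HDL in = FcaGetInbuf(conn);
    if (in == NULL)
        return -1;

    /* ... process the request data referenced by 'in' ... */

    FcaFreeBuf(in);                       /* release the request buffer */

    /* Write side: obtain a fresh buffer (maximum usable size is fixed,
     * so no size parameter is passed), fill it, then send it. */
    FCA_BUF_HDL out = FcaGetOutbuf(conn);
    if (out == NULL)
        return -1;

    /* ... copy the response data into 'out' and set size_used/status ... */

    FcaFlushOutbuf(out);                  /* send to the communication partner;
                                             no further operations on 'out' allowed */
    return 0;
}
```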
At processing block 1016, the remaining unprocessed requests are retracted from the crashed server. For example, unprocessed requests 7-9 are retracted. At processing block 1018, the retracted requests are then load balanced and dispatched to another server of the cluster of servers. The retracted and re-dispatched requests are processed at the new server at processing block 1020.
In one embodiment, once the connection is established 1114, the FCA client 1104 sends 1116 a request to the server 1102. The server 1102 receives the request having request data 1118 from the client 1104. The request is then processed at the server 1102 using various entities, and the server 1102 then sends the response data 1120 to the client 1104 in response to the request from the client 1104. The client 1104 receives the response data 1122. The FCA connection is then closed 1124 when it is no longer needed.
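The client-side sequence just described (establish the connection, send the request, receive the response, close the connection when it is no longer needed) may be summarized by the following C sketch. The helper names are hypothetical placeholders for the corresponding steps and are not actual FCA calls.

```c
#include <stddef.h>

/* Hypothetical helpers (placeholders for the steps described above,
 * not actual FCA calls). */
void *client_open_connection(void);
void  client_send_request(void *conn, const char *request);
int   client_receive_response(void *conn, char *buf, int len);
void  client_close_connection(void *conn);

int run_client_request(const char *request, char *response, int response_len)
{
    void *conn = client_open_connection();        /* establish the connection */
    if (conn == NULL)
        return -1;

    client_send_request(conn, request);           /* request data travels via shared memory */
    int n = client_receive_response(conn, response, response_len);

    client_close_connection(conn);                /* close when no longer needed */
    return n;
}
```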
At decision block 1216, a determination is made as to whether more requests are to be received. If yes, the process continues with processing block 1210. If not, the connection is closed at termination block 1218.
In one embodiment, the request may be received at the shared memory via the ICM, which may include additional request queues to properly hold, maintain, and distribute the incoming client requests to the shared memory. The request is then assigned to an entity or component, such as an available work process, to process the request at processing block 1308. The assigning of the request for processing includes determining whether the request is ABAP-based or Java-based. For example, an ABAP-based process request is assigned to an available ABAP work process at the ABAP instance, while the Java-based process request is assigned to an available J2EE worker node at the J2EE instance. Having the FCA-based shared memory allows the ABAP and J2EE instances to have direct bi-directional communication via the shared memory.
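The assignment step described above, in which an ABAP-based request goes to an available ABAP work process and a Java-based request goes to an available J2EE worker node, may be sketched in C as follows. Availability is modeled here as a simple busy flag per process; the names and counts are assumptions made for the sketch.

```c
/* Illustrative routing of a request to the matching engine, as described
 * above: ABAP-based requests go to an available ABAP work process,
 * Java-based requests to an available J2EE worker node. */
enum request_kind { REQ_ABAP, REQ_JAVA };

#define ABAP_PROCESSES 3   /* e.g., three ABAP work processes */
#define J2EE_NODES     3   /* e.g., three J2EE worker nodes */

static int abap_busy[ABAP_PROCESSES];
static int j2ee_busy[J2EE_NODES];

/* Returns the index of the process/node the request was assigned to, or -1
 * if none is currently available (the request would stay queued). */
int assign_request(enum request_kind kind)
{
    int *busy  = (kind == REQ_ABAP) ? abap_busy : j2ee_busy;
    int  count = (kind == REQ_ABAP) ? ABAP_PROCESSES : J2EE_NODES;

    for (int i = 0; i < count; i++) {
        if (!busy[i]) {
            busy[i] = 1;       /* mark as busy until the request completes */
            return i;
        }
    }
    return -1;
}
```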
The assigned request is then retrieved from the shared memory by the available work process or the available worker node so that the request can be satisfied at processing block 1310. The request is then processed at processing block 1312. While the request is being processed by the available work process or worker node, subsequent requests corresponding to various processes are continuously received at the shared memory and are placed in various request queues at the shared memory.
A system architecture according to one embodiment of the invention is illustrated in
The server nodes 1414, 1416, 1418 within instance 1410 provide the business and/or presentation logic for the network applications supported by the system. Each of the server nodes 1414, 1416, 1418 within a particular instance 1410 may be configured with a redundant set of application logic and associated data. In one embodiment, the dispatcher 1410 distributes service requests from clients to one or more of the server nodes 1414, 1416, 1418 based on the load on each of the servers. For example, in one embodiment, the dispatcher 1410 implements a round-robin policy of distributing service requests.
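A round-robin distribution policy of the kind mentioned above may be sketched in C as follows; the node count and names are illustrative only.

```c
/* Minimal sketch of a round-robin distribution policy: the dispatcher hands
 * each new service request to the next server node in turn. */
#define SERVER_NODES 3      /* e.g., server nodes 1414, 1416, 1418 */

static unsigned next_node;

unsigned round_robin_pick(void)
{
    unsigned node = next_node % SERVER_NODES;
    next_node++;
    return node;            /* index of the server node that receives the request */
}
```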
The server nodes 1414, 1416, 1418 may be Java 2 Enterprise Edition (“J2EE”) server nodes which support Enterprise Java Bean (“EJB”) components and EJB containers (at the business layer) and Servlets and Java Server Pages (“JSP”) (at the presentation layer). Of course, the embodiments of the invention described herein may be implemented in the context of various different software platforms including, by way of example, Microsoft .NET platforms and/or the Advanced Business Application Programming (“ABAP”) platforms developed by SAP AG, the assignee of the present application.
In one embodiment, communication and synchronization between each of the instances 1410, 1420 is enabled via the central services instance 1400. As illustrated in
In one embodiment, the locking service 1402 disables access to (i.e., locks) certain specified portions of configuration data and/or program code stored within a central database 1430 or resources shared in the cluster by different services. The locking manager locks data on behalf of various system components which need to synchronize access to specific types of data and program code (e.g., such as the configuration managers 1444, 1454). As described in detail below, the locking service enables a distributed caching architecture for caching copies of server/dispatcher configuration data.
In one embodiment, the messaging service 1404 and the locking service 1402 are each implemented on dedicated servers. However, the messaging service 1404 and the locking service 1402 may be implemented on a single server or across multiple servers while still complying with the underlying principles of the invention.
As illustrated in
Referring now to
In one embodiment of the invention, to improve the speed at which the various servers and dispatchers access the configuration data, the configuration managers 1444, 1454 cache configuration data locally within configuration caches 1500, 1501. As such, to ensure that the configuration data within the configuration caches 1500, 1501 remains up-to-date, the configuration managers 1444, 1454 implement cache synchronization policies, as described herein.
A hard drive or other storage device 1630 may be used by the system 1600 for storing information and instructions. The storage device 1630 may include a magnetic disk or optical disc and its corresponding drive, flash memory or other nonvolatile memory, or other memory device. Such elements may be combined together or may be separate components. The system 1600 may include a read only memory (ROM) 1635 or other static storage device for storing static information and instructions for the processors 1615 through 1620.
A keyboard or other input device 1640 may be coupled to the bus 1610 for communicating information or command selections to the processors 1615 through 1620. The input device 1640 may include a keyboard, a keypad, a touch-screen and stylus, a voice-activated system, or other input device, or combinations of such devices. The computer may further include a mouse or other cursor control device 1645, which may be a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to the processors and to control cursor movement on a display device. The system 1600 may include a computer display device 1650, such as a cathode ray tube (CRT), liquid crystal display (LCD), or other display technology, to display information to a user. In some environments, the display device may be a touch-screen that is also utilized as at least a part of an input device. In some environments, the computer display device 1650 may be or may include an auditory device, such as a speaker for providing auditory information.
A communication device 1650 may also be coupled to the bus 1610. The communication device 1650 may include a modem, a transceiver, a wireless modem, or other interface device. The system 1600 may be linked to a network or to other devices via an interface 1655, which may include links to the Internet, a local area network, or another environment. The system 1600 may comprise a server that connects to multiple devices. In one embodiment, the system 1600 comprises a Java® compatible server that is connected to user devices and to external resources.
While the machine-readable medium 1630 is illustrated in an exemplary embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine of the system 1600 and that causes the machine to perform any one or more of the methodologies of the present invention. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
Furthermore, it is appreciated that a lesser or more equipped computer system than the example described above may be desirable for certain implementations. Therefore, the configuration of the system 1600 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, and/or other circumstances.
It should be noted that, while the embodiments described herein may be performed under the control of a programmed processor, such as the processors 1615 through 1620, in alternative embodiments, the embodiments may be fully or partially implemented by any programmable or hardcoded logic, such as field programmable gate arrays (FPGAs), TTL logic, or application specific integrated circuits (ASICs). Additionally, the embodiments of the present invention may be performed by any combination of programmed general-purpose computer components and/or custom hardware components. Therefore, nothing disclosed herein should be construed as limiting the various embodiments of the present invention to a particular embodiment wherein the recited embodiments may be performed by a specific combination of hardware components.
It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the invention.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive, and that the embodiments of the present invention are not to be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure.