Server-processor hybrid system for processing data

Information

  • Patent Grant
  • Patent Number
    9,900,375
  • Date Filed
    Wednesday, June 3, 2015
  • Date Issued
    Tuesday, February 20, 2018
Abstract
The present invention relates to a server-processor hybrid system that comprises (among other things) a set (one or more) of front-end servers (e.g., mainframes) and a set of back-end application optimized processors. Moreover, implementations of the invention provide a server and processor hybrid system and method for distributing and managing the execution of applications at a fine-grained level via an I/O-connected hybrid system. This method allows one system to be used to manage and control the system functions, and one or more other systems to serve as a co-processor or accelerator.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related in some aspects to commonly owned and co-pending patent application Ser. No. 11/940,470, entitled “PROCESSOR-SERVER HYBRID SYSTEM FOR PROCESSING DATA”, filed Nov. 15, 2007, the entire contents of which are herein incorporated by reference. This application is related in some aspects to commonly owned and co-pending patent application Ser. No. 11/877,926, entitled “HIGH BANDWIDTH IMAGE PROCESSING SYSTEM”, filed Oct. 24, 2007, the entire contents of which are herein incorporated by reference. This application is related in some aspects to commonly owned and co-pending patent application Ser. No. 11/767,728, entitled “HYBRID IMAGE PROCESSING SYSTEM”, filed Jun. 25, 2007, the entire contents of which are herein incorporated by reference. This application is also related in some aspects to commonly owned and co-pending patent application Ser. No. 11/738,723, entitled “HETEROGENEOUS IMAGE PROCESSING SYSTEM”, filed Apr. 23, 2007, the entire contents of which are herein incorporated by reference. This application is also related in some aspects to commonly owned and co-pending patent application Ser. No. 11/738,711, entitled “HETEROGENEOUS IMAGE PROCESSING SYSTEM”, filed Apr. 23, 2007, the entire contents of which are herein incorporated by reference.


FIELD OF THE INVENTION

The present invention generally relates to data processing. Specifically, the present invention relates to a server-processor hybrid system for more efficient data processing.


BACKGROUND OF THE INVENTION

Web 1.0 historically refers to the World Wide Web, which was originally about connecting computers and making technology more efficient for computers. Web 2.0/3.0 is considered to encompass the communities and social networks that build contextual relationships and facilitate knowledge sharing and virtual web servicing. A traditional web service can be thought of as a very thin client. That is, a browser displays images relayed by a server, and every significant user action is communicated to the front-end server for processing. Web 2.0 adds social interaction, which consists of a software layer on the client, so the user gets a quick system response. The back-end storage and retrieval of data are conducted asynchronously in the background, so the user does not have to wait for the network. Web 3.0 is geared towards a three-dimensional vision, such as in virtual universes. This could open up new ways to connect and collaborate using 3D shared spaces. Along these lines, Web 3.0 describes the evolution of Web usage and interaction along several separate paths. These include transforming the Web into a database and a move towards making content accessible by multiple non-browser applications.


Unfortunately, the traditional server cannot efficiently handle the characteristics of Web 3.0. No existing approach addresses this issue. In view of the foregoing, there exists a need for an approach that solves this deficiency.


SUMMARY OF THE INVENTION

The present invention relates to a server-processor hybrid system that comprises (among other things) a set (one or more) of front-end servers (e.g., mainframes) and a set of back-end application optimized processors. Moreover, implementations of the invention provide a server and processor hybrid system and method for distributing and managing the execution of applications at a fine-grained level via an I/O-connected hybrid system. This method allows one system to be used to manage and control the system functions, and one or more other systems to serve as a co-processor or accelerator for server functions.


The present invention allows the front-end server management and control system components to be reused, and the applications such as virtual web or game processing components to be used as an accelerator or co-processor. The system components can be run using different operating systems. The front-end server(s) act as a normal transaction-based computing resource, but these transactions may trigger large, computationally intense graphics, rendering, or other functions. The processor is placed at the back-end to handle such functions. In addition to traditional transaction processing, the front-end server(s) would also perform specific processor selection functions, and set-up, control and management functions of the cell co-processors.


A first aspect of the present invention provides a server-processor hybrid system for processing data, comprising: a set of front-end servers for receiving the data from an external source; a set of back-end application optimized processors for receiving the data from the set of front-end servers, for processing the data, and for returning processed data to the set of front-end servers; and an interface within at least one of the set of front-end servers having a set of network interconnects, the interface connecting the set of front-end servers with the set of back-end application optimized processors.


A second aspect of the present invention provides a method for processing data, comprising: receiving the data from an external source on a front-end server; sending the data from the front-end server to a back-end application optimized processor via an interface having a set of network interconnects, the interface being embodied in the front-end server; processing the data on the back-end application optimized processor to yield processed data; and receiving the processed data from the back-end application optimized processor on the front-end server.


A third aspect of the present invention provides a method for deploying a server-processor hybrid system for processing data, comprising: providing a computer infrastructure being operable to: receive the data from an external source on a front-end server; send the data from the front-end server to a back-end application optimized processor via an interface having a set of network interconnects, the interface being embodied in the front-end server; process the data on the back-end application optimized processor to yield processed data; and receive the processed data from the back-end application optimized processor on the front-end server.
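
Purely as an illustration of the claimed data flow (not the implementation), the method of the second and third aspects reduces to the following sketch; every name in it is hypothetical:

```python
# Illustrative sketch only: the claimed method reduced to its data flow.
# A front-end server receives data from an external source, sends it via
# its embodied interface to a back-end application optimized processor,
# and receives the processed data back.

from typing import Callable

def handle_request(data: bytes, interface: Callable[[bytes], bytes]) -> bytes:
    """Front-end server: receive -> send via interface -> receive result."""
    processed = interface(data)  # interface carries data to/from the back-end
    return processed             # returned toward the external source

# Stand-in for the back-end application optimized processor:
backend_processor = lambda d: d.upper()

print(handle_request(b"payload", interface=backend_processor))  # b'PAYLOAD'
```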





BRIEF DESCRIPTION OF THE DRAWINGS

These and other features of this invention will be more readily understood from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying drawings in which:



FIG. 1 shows a box diagram depicting the components of the server-processor hybrid system according to the present invention.



FIG. 2A shows a more detailed diagram of the system of FIG. 1 according to the present invention.



FIG. 2B shows a more specific diagram of the back-end application optimized processor(s) of the hybrid system according to the present invention.



FIG. 3 shows communication flow within the server-processor hybrid system according to the present invention.



FIGS. 4A-D show a method flow diagram according to the present invention.





The drawings are not necessarily to scale. The drawings are merely schematic representations, not intended to portray specific parameters of the invention. The drawings are intended to depict only typical embodiments of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numbering represents like elements.


DETAILED DESCRIPTION OF THE INVENTION

As indicated above, the present invention relates to a server-processor hybrid system that comprises (among other things) a set (one or more) of front-end servers (e.g., mainframes) and a set of back-end application optimized processors. Moreover, implementations of the invention provide a server and processor hybrid system and method for distributing and managing the execution of applications at a fine-grained level via an I/O-connected hybrid system. This method allows one system to be used to manage and control the system functions, and one or more other systems to serve as a co-processor or accelerator.


The present invention allows the front-end server management and control system components to be reused, and the applications such as virtual web or game processing components to be used as an accelerator or co-processor. The system components can be run using different operating systems. The front-end server(s) act as a normal transaction-based computing resource. These transactions may be the trigger for large, computationally intense graphics, renderings, numerically intensive computations, or other functions. The processor is placed at the back-end to handle such functions. In addition to traditional transaction processing, the front-end server(s) would also perform specific processor selection functions, and set-up, control and management functions of the co-processors. It should be understood that, as used herein, the term “data” can mean any type of data, such as “multi-modal” data, e.g., video and/or data streams, predefined block lengths, text, graphics, picture formats, XML, HTML, etc. Having the server at the front-end of the hybrid system provides (among other things) high transaction throughput, cell processor workload management, and cell processor resource management.


Referring now to FIG. 1, a logical diagram according to the present invention is shown. In general, the present invention provides a server-processor hybrid system 11 that comprises a set (one or more) of front-end servers 12 and a set of back-end application optimized processors (referred to herein as processors for brevity) 20. As shown, each server 12 typically includes infrastructure 14 (e.g., email, spam, firewall, security, etc.), a web content server 16, and a portal/front-end 17 (e.g., an interface as will be further described below). Applications 19 and databases 18 are also hosted on these servers. Along these lines, server(s) 12 are typically System z servers that are commercially available from IBM Corp. of Armonk, N.Y. (System z and related terms are trademarks of IBM Corp. in the United States and/or other countries). Each processor 20 typically includes one or more application function accelerators 22 and one or more database function accelerators 24. Along those lines, processor(s) 20 are typically cell blades that are commercially available from IBM Corp. (cell, cell blade and related terms are trademarks of IBM Corp. in the United States and/or other countries). Moreover, processor(s) 20 are optimized based on the applications 19 that are running so that performance is optimal and efficient. As shown, server 12 receives data from an external source 10 via typical communication methods (e.g., LAN, WLAN, etc.). Such data is communicated to processor 20 for processing via an interface of server 12. Processed data can then be stored and/or returned to server 12 and on to external source 10. As depicted, server 12 represents the front-end of hybrid system 11 while processor 20 represents the back-end.


This system is further shown in FIGS. 2A-B. FIG. 2A shows external source(s) 10 communicating with server(s) 12, which communicate with processor(s) 20 via interface 23. Typically, interface 23 is an input/output (I/O) cage embodied/contained within each server 12. Interface 23 also includes a set of network interconnects such as Peripheral Component Interconnect Express (PCIe) interconnects 25. Interface 23 may also include other components as indicated in the above-incorporated patent applications.


In any event, data will be received from external source(s) 10 on server(s) 12 and communicated to processor(s) 20 via interface(s) 23. Once received, processor(s) 20 will process the data and return the processed data to server(s) 12, which can return the same to external source(s) 10. Processor(s) 20 can also leverage staging storage and processed data storage devices to store the original data and/or the processed data. As shown in FIG. 2B, each processor 20 typically includes a power processing element (PPE) 30, an element interconnect bus (EIB) 32 coupled to the PPE 30, and a set (e.g., one or more, but typically a plurality) of special purpose engines (SPEs) 34. The SPEs share the load for processing the data.
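
The load sharing among the SPEs might be sketched as follows (illustrative only; the chunking scheme and the thread pool standing in for the SPEs are assumptions, not the actual Cell scheduling):

```python
# Sketch of SPEs sharing the processing load: PPE-side code splits the
# data into chunks and farms them out to a worker pool standing in for
# the set of SPEs 34.

from concurrent.futures import ThreadPoolExecutor

NUM_SPES = 8  # a typical Cell configuration; purely illustrative

def spe_kernel(chunk: bytes) -> bytes:
    """Placeholder for the application-optimized work one SPE performs."""
    return chunk[::-1]

def process(data: bytes, chunk_size: int = 4) -> list[bytes]:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=NUM_SPES) as pool:  # the SPE "pool"
        return list(pool.map(spe_kernel, chunks))

print(process(b"abcdefgh"))  # [b'dcba', b'hgfe']
```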


Referring briefly to FIG. 3, a more specific diagram showing the components' placement within hybrid system 11 is shown. As depicted, server(s) 12 receive/send data from/to external sources A and B, and route that data to processor(s) 20 for processing. After such processing, processed data is returned to server(s) 12, and then to external sources A and B. Also present in hybrid system 11 are staging storage device 36 and processed data storage device 38. Staging storage device 36 can be used to store data prior to, during and/or after being processed, while processed data storage device 38 can be used to store processed data.


Referring now to FIGS. 4A-D, a flow diagram of an illustrative process according to the present invention will be described. For brevity, for the remainder of the Detailed Description of the Invention, server 12 is referred to as “S”, while processor 20 is referred to as “C”. In step S1, external source (A) makes a connection request. In step S2, the connection request is passed on to C after validation by server S. In step S3, C accepts the connection, and S informs A that connection setup is complete. In step S4, stream P arrives from A at server S, and S performs P′=T(P), where T is a transformation function on stream P. In step S5, S can save the data in storage and/or pass it to another device. In step S6, output bytes are continuously passed on to C. In step S7, C performs P″=U(P′), where U is a transformation function performed by C. In step S8, P″ is routed back to S. In step S9, S performs P′′′=V(P″), where V is a transformation function performed by server S. In step S10, P′′′ is routed continuously to B or A. Additionally, in step S10, A presents a connection termination packet (E). In step S11, S receives E, and in step S12, S inspects E. In step S13, it is determined that E is a connection termination packet. In step S14, input sampling and computation stop. In step S15, S informs C of stream completion. In step S16, C stops computation. In step S17, C informs S of computation termination. In step S18, S informs B of connection termination. In step S19, S acknowledges to A that computation is complete.
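
The following is a minimal sketch of the transformation pipeline in steps S4 through S10, in which S applies T, C applies U, and S applies V before routing the result onward (illustrative only; the chunked-stream representation and all function bodies are assumptions):

```python
# Sketch of the S -> C -> S transformation pipeline (steps S4-S10).
# T, U, V are placeholder transformation functions; in practice they are
# application-specific (e.g., decode, render, encode).

from typing import Callable, Iterable, Iterator

Transform = Callable[[bytes], bytes]

def server_side(stream: Iterable[bytes], T: Transform, V: Transform,
                coprocessor: Transform) -> Iterator[bytes]:
    """S applies T to each chunk of P, forwards it to C, applies V to C's
    output, and yields the result continuously to B or A."""
    for chunk in stream:
        p1 = T(chunk)            # step S4: P' = T(P) on server S
        p2 = coprocessor(p1)     # steps S6-S8: C computes P'' = U(P')
        yield V(p2)              # step S9: P''' = V(P'') on server S

# Example wiring with trivial stand-in transformations:
T = lambda b: b.upper()          # S's ingest transform
U = lambda b: b[::-1]            # C's accelerated transform
V = lambda b: b + b"\n"          # S's egress transform
print(list(server_side([b"abc", b"def"], T, V, coprocessor=U)))
# [b'CBA\n', b'FED\n']
```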


Although not separately shown in a diagram, the following is an example of another control flow made possible under the present invention. This control flow is useful in scenarios where requests are made directly by S to C without data being sourced from A or redirected to B; a sketch of this flow in code follows the list.

    • 1. S makes connection request
    • 2. Is the connection request valid? (performed by C)
    • 3. If yes, accepted by C
    • 4. Stream P arrives from S at Processor C (P can also just be “block” input with a predefined length)
    • 5. C performs T(P) where T is a transformation function on stream P
    • 6. T(P) Output bytes are continuously passed back to S
    • 7. S encounters End-of-File or End of Stream
    • 8. S presents connection termination packet (E)
    • 9. C inspects E
    • 10. Is E a connection termination packet?
    • 11. If Yes, stop sampling inputs, stop computation on C
    • 12. C acknowledges S on computation termination
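
The following is a minimal sketch of this direct S-to-C flow (illustrative only; the framing of the termination packet E and the transformation T are placeholders, not part of the patent):

```python
# Sketch of the direct S -> C control flow (steps 1-12 above): S streams P
# to C, C applies T and streams T(P) back, and a termination packet E ends
# the exchange. The message framing here is hypothetical.

from typing import Callable, Iterable, Iterator

END = b"<E>"  # stand-in for the connection termination packet E

def coprocessor_c(incoming: Iterable[bytes],
                  T: Callable[[bytes], bytes]) -> Iterator[bytes]:
    """C transforms each chunk of stream P and stops sampling inputs when
    it inspects a termination packet (steps 9-11)."""
    for chunk in incoming:
        if chunk == END:           # step 10: is E a termination packet?
            yield b"<ACK>"         # step 12: C acknowledges termination to S
            return
        yield T(chunk)             # step 6: output bytes passed back to S

def server_s(stream: Iterable[bytes]) -> list[bytes]:
    """S makes the request, sends stream P followed by E, collects T(P)."""
    chunks = list(stream) + [END]  # step 8: S presents E at end of stream
    return list(coprocessor_c(chunks, T=lambda b: b[::-1]))

print(server_s([b"abc", b"def"]))  # [b'cba', b'fed', b'<ACK>']
```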


Under the present invention, both a push model and a pull model can be used. Control messages can be sent across a separate control path, with data messages being sent over the regular data path. Here, two separate connection IDs are needed. Control messages can also be sent along with data messages across the same path. In this case, only one connection ID is needed. Both push and pull models can be realized for a separate or unified data path and control path. The push model is useful for short data where latency is a concern. Control messages usually have latency bounds for data transfer. This requires engagement of the data source computer processor until all the data is pushed out. The pull model is usually useful for bulk data, where the destination computer can read data directly from the source's memory without involving the source's processor. Here, the latency of communicating the location and size of the data from the source to the destination can easily be amortized over the whole data transfer. In a preferred embodiment of this invention, the push and pull models can be invoked selectively depending on the length of data to be exchanged.
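
As a minimal sketch of this selective invocation (illustrative only; the threshold constant, its value, and the function names are assumptions rather than anything prescribed by the invention):

```python
# Sketch of selective push/pull invocation: short, fixed-length data is
# pushed by the source; bulk or unbounded data is pulled by the
# destination directly from the source's memory.

PUSH_THRESHOLD = 64 * 1024  # the "PT" of the next section; assumed value

def choose_model(data_len: int | None) -> str:
    """Return which model to invoke; None means streaming/unknown length."""
    if data_len is not None and data_len < PUSH_THRESHOLD:
        return "push"   # low latency: source pushes until all data is out
    return "pull"       # bulk: destination reads from the source's memory

print(choose_model(4096))   # push
print(choose_model(None))   # pull (streaming data of unknown size)
```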


The following steps show how the push and pull models work:


Dynamic Model Selection






    • (1) S and C want to communicate. The sender (S or C) makes the following decisions:


      Step 1—Is data of predefined length and less than Push Threshold (PT)?


      Step 2—If yes, then employ “push”


      Step 3—If no, then data is of a streaming nature without any known size. S “shoulder taps” C without a location address for the data.


      Push Threshold (PT) is a parameter that can be chosen for a given application or data type (fixed length or stream) by the designer of the system.


      Push Model

    • S shoulder taps C with data block size (if known).

    • S looks up application communication rate requirements (R).

    • S looks up # of links in “link aggregation pool” (N).

    • S matches R and N by expanding or shrinking N [dynamic allocation].

    • S and C agree on the number of links required for data transfer.

    • S pushes data to C.

    • S can close the connection in the following ways: when all data is sent (size known) or when the job is complete.

    • S closes connection by shoulder tap to C.


      Pull Model


      S shoulder taps C with data block size (if known).


      S looks up application communication rate requirements (R).


      S looks up # of links in “link aggregation pool” (N).


      S matches R and N by expanding or shrinking N [dynamic allocation].


      S and C agree on the number of links required for data transfer.


      C pulls data from S memory.


      S can close the connection in the following ways: when all data is sent (size known) or when the job is complete.


      S closes connection by shoulder tap to C.
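
Both step lists share the same link-aggregation negotiation: the sender matches the application's communication rate requirement (R) against the number of links (N) in the link aggregation pool. The following is a minimal sketch of that matching step, assuming a hypothetical per-link bandwidth and pool size (neither is specified by the invention):

```python
# Sketch of the dynamic link-allocation step common to push and pull:
# match the application's rate requirement R against the links in the
# "link aggregation pool", expanding or shrinking the allocated count N.

import math

LINK_RATE_GBPS = 10.0   # assumed per-link bandwidth
POOL_MAX_LINKS = 8      # assumed size of the link aggregation pool

def links_for_rate(required_gbps: float) -> int:
    """Return how many pool links to aggregate for rate requirement R."""
    needed = math.ceil(required_gbps / LINK_RATE_GBPS)
    if needed > POOL_MAX_LINKS:
        raise RuntimeError("rate requirement exceeds link aggregation pool")
    return max(1, needed)  # S and C agree on this count before transfer

print(links_for_rate(25.0))  # 3 links aggregated for a 25 Gb/s requirement
```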





In FIG. 3, server 12 (“S”) and processor 20 (“C”) share access to staging storage device 36. If S needs to transfer dataset D to C, then the following steps must happen: (i) S must read D, and (ii) S must transfer D to C over link L. Instead, S can inform C of the name of the dataset, and C can read this dataset directly from staging storage device 36. This is possible because S and C share staging storage device 36. The steps required for this are listed as follows (again, for brevity, “S” and “C” are used to refer to server 12 and processor 20, respectively; a sketch of the pull form follows the list):

    • Step 1—S provides dataset name & location (dataset descriptor) along the control path to C. This serves as the “shoulder tap”. C receives this information by polling for data “pushed” from S.
    • Step 2—C reads data from D using dataset descriptor.
    • Step 1—Push or pull implementation possible.
    • Step 2—Pull or push implementation possible.
    • Step 1 (push)—“Control Path”
      • S shoulder taps (writes to) C with dataset name & location (if known).
    • Step 1 (pull)—“Control Path”
      • S shoulder taps C with data block size (if known).
      • C pulls data from S memory.
    • Step 2 (pull form)—“Data Path”
      • Staging storage device 36 stores table with dataset name and dataset block locations.
      • C makes read request to staging storage device 36 with dataset name D.
      • Staging storage device 36 provides a list of blocks.
      • C reads blocks from staging storage device 36.
      • C encounters end of dataset.
      • C closes connection.
    • Step 2 (push form)—“Data Path”
      • Staging storage device 36 stores table with dataset name and dataset block locations.
      • C makes read request to staging storage device 36 with dataset name D.
      • storage controller of staging storage device 36 pushes disk blocks of D directly into memory of C.
      • C closes connection.
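
The following is a minimal sketch of this shared-staging-storage transfer in its pull form (all names are hypothetical; a dict stands in for the block table kept by staging storage device 36):

```python
# Sketch of the shared-staging-storage transfer: S shoulder-taps C with a
# dataset descriptor (control path), then C reads the dataset's blocks
# directly from the staging storage device (data path, pull form).

BLOCK_TABLE = {  # table kept by the staging device: name -> block list
    "D": [b"block0", b"block1", b"block2"],
}

def shoulder_tap(dataset_name: str, location: str) -> dict:
    """Step 1: S writes a dataset descriptor along the control path to C."""
    return {"name": dataset_name, "location": location}

def read_dataset(descriptor: dict) -> bytes:
    """Step 2 (pull form): C requests D's block list from the staging
    device and reads blocks until it encounters the end of the dataset."""
    blocks = BLOCK_TABLE[descriptor["name"]]  # device returns block list
    return b"".join(blocks)                   # C reads blocks, then closes

descriptor = shoulder_tap("D", location="/staging/D")  # control path
print(read_dataset(descriptor))                        # b'block0block1block2'
```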


The foregoing description of various aspects of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously, many modifications and variations are possible. Such modifications and variations that may be apparent to a person skilled in the art are intended to be included within the scope of the invention as defined by the accompanying claims.

Claims
  • 1. A server-processor hybrid system for processing data, comprising: a set of front-end servers configured to receive the data from an external source; a set of back-end application optimized processors configured to receive the data from the set of front-end servers, process the data, and return processed data to the set of front-end servers; and an interface within at least one of the set of front-end servers having a set of network interconnects, the interface connecting the set of front-end servers with the set of back-end application optimized processors, the interface configured to: communicate the data received from the external source, from the set of front-end servers to the set of back-end application optimized processors by selectively invoking a push model or a pull model, and communicate the processed data from the back-end application optimized processors to the set of front-end servers by selectively invoking the push model or the pull model, wherein the push model is selectively invoked when the data to be transmitted includes a first data type, and the pull model is selectively invoked when the data to be transmitted includes a second data type that is distinct from the first data type.
  • 2. The server-processor hybrid system of claim 1, the interface being an input/output (I/O) cage, and the interface being embodied in each of the set of front-end servers.
  • 3. The server-processor hybrid system of claim 1, each of the set of back-end application optimized processors comprising: a power processing element (PPE); an element interconnect bus (EIB) coupled to the PPE; and a set of special purpose engines (SPEs) coupled to the EIB, wherein the set of SPEs is configured to process the data.
  • 4. The server-processor hybrid system of claim 1, further comprising a web content server, portal, an application, an application accelerator and a database accelerator.
  • 5. The server-processor hybrid system of claim 1, further comprising: a staging storage device; and a processed data storage device.
  • 6. The server-processor hybrid system of claim 1, wherein the interface is further configured to: send a control message across a control path that is separate from a path across which the data received from the external source and the processed data are sent, wherein a first connection ID is required for the control path and a second connection ID is required for the path across which the data received from the external source and the processed data are sent.
  • 7. The server-processor hybrid system of claim 1, wherein the interface is further configured to: send a control message across a same path across which the data received from the external source and the processed data are sent, wherein a single connection ID is used for both the control message and the data.
US Referenced Citations (151)
Number Name Date Kind
4517593 Keller et al. May 1985 A
4893188 Murakami et al. Jan 1990 A
5136662 Maruyama et al. Aug 1992 A
5506999 Skillman et al. Apr 1996 A
5621811 Roder et al. Apr 1997 A
5659630 Forslund Aug 1997 A
5721883 Katsuo et al. Feb 1998 A
5809078 Tani et al. Sep 1998 A
5832308 Nakamura et al. Nov 1998 A
5956081 Katz et al. Sep 1999 A
6023637 Liu et al. Feb 2000 A
6025854 Hinz et al. Feb 2000 A
6081659 Garza et al. Jun 2000 A
6166373 Mao Dec 2000 A
6215898 Woodfill et al. Apr 2001 B1
6404902 Takano et al. Jun 2002 B1
6456737 Woodfill et al. Sep 2002 B1
6487619 Takagi Nov 2002 B1
6549992 Armangau et al. Apr 2003 B1
6567622 Phillips May 2003 B2
6647415 Olarig et al. Nov 2003 B1
6661931 Kawada Dec 2003 B1
6671397 Mahon et al. Dec 2003 B1
6744931 Komiya et al. Jun 2004 B2
6825943 Barr et al. Nov 2004 B1
6829378 DiFilippo et al. Dec 2004 B2
6898633 Lyndersay May 2005 B1
6898634 Collins et al. May 2005 B2
6898670 Nahum May 2005 B2
6950394 Chou et al. Sep 2005 B1
6978894 Mundt Dec 2005 B2
6987894 Sasaki et al. Jan 2006 B2
7000145 Werner et al. Feb 2006 B2
7016996 Schober Mar 2006 B1
7043745 Nygren et al. May 2006 B2
7065618 Ghemawat et al. Jun 2006 B1
7076569 Bailey et al. Jul 2006 B1
7095882 Akahori Aug 2006 B2
7102777 Haraguchi Sep 2006 B2
7106895 Goldberg et al. Sep 2006 B1
7142725 Komiya et al. Nov 2006 B2
7171036 Liu et al. Jan 2007 B1
7225324 Huppenthal et al. May 2007 B2
7243116 Suzuki et al. Jul 2007 B2
7299322 Hosouchi et al. Nov 2007 B2
7327889 Imai et al. Feb 2008 B1
7430622 Owen Sep 2008 B1
7480441 Klausberger et al. Jan 2009 B2
7523148 Suzuki et al. Apr 2009 B2
7602394 Seki et al. Oct 2009 B2
7605818 Nagao et al. Oct 2009 B2
7743087 Anderson et al. Jun 2010 B1
7801895 Hepper et al. Sep 2010 B2
7971011 Furukawa et al. Jun 2011 B2
8052272 Smith et al. Nov 2011 B2
8078837 Kajihara Dec 2011 B2
8086660 Smith Dec 2011 B2
8094157 Le Grand Jan 2012 B1
20020002636 Vange Jan 2002 A1
20020004816 Vange Jan 2002 A1
20020129216 Collins Sep 2002 A1
20020164059 DiFilippo et al. Nov 2002 A1
20020198371 Wang Dec 2002 A1
20030031355 Nagatsuka Feb 2003 A1
20030033520 Peiffer Feb 2003 A1
20030053118 Muramoto et al. Mar 2003 A1
20030061365 White Mar 2003 A1
20030092980 Nitz May 2003 A1
20030113034 Komiya et al. Jun 2003 A1
20040024810 Choubey et al. Feb 2004 A1
20040062265 Poledna Apr 2004 A1
20040062454 Komiya et al. Apr 2004 A1
20040091243 Theriault et al. May 2004 A1
20040122790 Walker et al. Jun 2004 A1
20040143631 Banerjee et al. Jul 2004 A1
20040153751 Marshal et al. Aug 2004 A1
20040156546 Kloth Aug 2004 A1
20040170313 Nakano et al. Sep 2004 A1
20040186371 Toda Sep 2004 A1
20040217956 Besl et al. Nov 2004 A1
20040225897 Norton Nov 2004 A1
20040228515 Okabe et al. Nov 2004 A1
20040233036 Sefton Nov 2004 A1
20040252467 Dobbs et al. Dec 2004 A1
20040260895 Werner et al. Dec 2004 A1
20050013960 Ozeki et al. Jan 2005 A1
20050022038 Kaushik et al. Jan 2005 A1
20050044132 Campbell et al. Feb 2005 A1
20050063575 Ma et al. Mar 2005 A1
20050080928 Beverly et al. Apr 2005 A1
20050083338 Yun et al. Apr 2005 A1
20050084137 Kim et al. Apr 2005 A1
20050093990 Aoyama May 2005 A1
20050113960 Karau et al. May 2005 A1
20050126505 Gallager et al. Jun 2005 A1
20050219253 Piazza et al. Oct 2005 A1
20050259866 Jacobs et al. Nov 2005 A1
20050263678 Arakawa Dec 2005 A1
20060013473 Woodfill et al. Jan 2006 A1
20060047794 Jezierski Mar 2006 A1
20060117238 DeVries et al. Jun 2006 A1
20060129697 Vange Jun 2006 A1
20060135117 Laumen et al. Jun 2006 A1
20060149798 Yamagami Jul 2006 A1
20060155805 Kim Jul 2006 A1
20060171452 Waehner Aug 2006 A1
20060184296 Voeller et al. Aug 2006 A1
20060190627 Wu et al. Aug 2006 A1
20060235863 Khan Oct 2006 A1
20060239194 Chapell Oct 2006 A1
20060250514 Inoue et al. Nov 2006 A1
20060268357 Vook et al. Nov 2006 A1
20060269119 Goldberg et al. Nov 2006 A1
20060274971 Kumazawa et al. Dec 2006 A1
20060279750 Ha Dec 2006 A1
20070006285 Stafie Jan 2007 A1
20070126744 Tsutsumi Jun 2007 A1
20070146491 Tremblay et al. Jun 2007 A1
20070150568 Ruiz Jun 2007 A1
20070159642 Choi Jul 2007 A1
20070174294 Srivastava Jul 2007 A1
20070198677 Ozhan Aug 2007 A1
20070229888 Matsui Oct 2007 A1
20070245097 Gschwind et al. Oct 2007 A1
20070250519 Fineberg et al. Oct 2007 A1
20080013862 Isaka et al. Jan 2008 A1
20080036780 Liang et al. Feb 2008 A1
20080063387 Yahata et al. Mar 2008 A1
20080092744 Storbo et al. Apr 2008 A1
20080129740 Itagaki et al. Jun 2008 A1
20080140771 Vass Jun 2008 A1
20080144880 DeLuca Jun 2008 A1
20080147781 Hopmann et al. Jun 2008 A1
20080154892 Liu Jun 2008 A1
20080177964 Takahashi et al. Jul 2008 A1
20080189752 Moradi Aug 2008 A1
20080259086 Doi et al. Oct 2008 A1
20080260297 Chung et al. Oct 2008 A1
20080263154 Van Datta Oct 2008 A1
20080270979 McCool et al. Oct 2008 A1
20090003542 Ramanathan et al. Jan 2009 A1
20090052542 Romanovskiy et al. Feb 2009 A1
20090066706 Yasue et al. Mar 2009 A1
20090074052 Fukuhara et al. Mar 2009 A1
20090083263 Felch et al. Mar 2009 A1
20090089462 Strutt Apr 2009 A1
20090150555 Kim et al. Jun 2009 A1
20090150556 Kim et al. Jun 2009 A1
20090187654 Raja et al. Jul 2009 A1
20090265396 Ram et al. Oct 2009 A1
20100060651 Gala et al. Mar 2010 A1
Foreign Referenced Citations (5)
Number Date Country
1345120 Sep 2003 EP
05233570 Sep 1993 JP
2001503171 Mar 2001 JP
2007102794 Apr 2007 JP
0068884 Apr 2000 WO
Non-Patent Literature Citations (70)
Entry
U.S. Appl. No. 11/951,709, Notice of Allowance dated Feb. 4, 2016, (IBME-0471), 54 pages.
Kim et al., U.S. Appl. No. 11/951,712, Office Action Communication, Sep. 9, 2009, 26 pages.
Yang, U.S. Appl. No. 11/877,926, Office Action Communication, dated Nov. 22, 2010, 33 pages.
Chambers, U.S. Appl. No. 11/951,709, Office Action Communication, dated Nov. 29, 2010, 21 pages.
Tsai, U.S. Appl. No. 11/738,723, Office Action Communication, dated Nov. 17, 2010, 13 pages.
TIV, U.S. Appl. No. 11/951,712, Office Action Communication, dated Jan. 5, 2011, 18 pages.
Do, U.S. Appl. No. 11/668,875, Notice of Allowance & Fees Due, dated Aug. 13, 2010, 9 pages.
Do, U.S. Appl. No. 11/668,875, Notice of Allowance & Fees Due, dated Sep. 20, 2010, 8 pages.
Kuhnen, PCT / EP2008 / 050443, Invitation to Pay Additional Fees, dated Apr. 25, 2008, 6 pages.
Eveno, PCT / EP2008 / 050443, International Search Report, dated Jul. 22, 2008, 5 pages.
Cussac, PCT / EP2008 / 050443, PCT International Preliminary Report on Patentability, dated Aug. 4, 2009, 8 pages.
Tsung Yin Tsai, U.S. Appl. No. 11/738,711, Office Action Communication, dated Feb. 18, 2011, 17 pages.
Tsung Yin Tsai, U.S. Appl. No. 11/738,723, Office Action Communication, dated Feb. 18, 2011, 17 pages.
Kim et al., U.S. Appl. No. 11/940,470, Office Action Communication, dated Nov. 18, 2009, 31 pages.
Kim et al., U.S. Appl. No. 11/951,709, Office Action Communication, dated Nov. 17, 2009, 20 pages.
PCT Search Report, International Application No. PCT/EP2008/054331, dated Oct. 4, 2008, 10 pages.
Kim et al., U.S. Appl. No. 11/951,709, Office Action Communication, dated May 14, 2010 24 pages.
Kim et al., U.S. Appl. No. 11/940,470, Office Action Communication, dated Jun. 9, 2010 26 pages.
Chung et al., U.S. Appl. No. 11/738,711, Office Action Communication, dated Jun. 25, 2010, 26 pages.
Chung et al., U.S. Appl. No. 11/738,723, Office Action Communication, dated Jun. 24, 2010, 26 pages.
Kim et al., U.S. Appl. No. 11/951,712, Office Action Communication, dated Jul. 23, 2010, 25 pages.
Yang, U.S. Appl. No. 11/767,728, Office Action Communication, dated Nov. 19, 2010, 25 pages.
Tsai, U.S. Appl. No. 11/738,711, Office Action Communication, dated Nov. 9, 2010, 13 pages.
Cosby, U.S. Appl. No. 11/940,470, Office Action Communication, dated Nov. 26, 2010, 19 pages.
Tiv, U.S. Appl. No. 11/951,712, Office Action Communication, dated Oct. 21, 2011, 27 pages.
Yang, U.S. Appl. No. 11/767,728, Office Action Communication, dated Oct. 28, 2011, 33 pages.
Tsai, U.S. Appl. No. 11/738,723, Office Action Communication, dated Nov. 4, 2011, 15 pages.
Entezari, U.S. Appl. No. 12/028,073, Office Action Communication, dated Dec. 2, 2011, 51 pages.
Tsai, U.S. Appl. No. 11/738,711, Office Action Communication, dated Nov. 4, 2011, 14 pages.
Chambers, U.S. Appl. No. 11/951,709, Office Action Communication, dated Dec. 20, 2011, 40 pages.
Cosby, U.S. Appl. No. 11/940,470, Office Action Communication, dated Dec. 22, 2011, 41 pages.
Yang, U.S. Appl. No. 11/877,926, Office Action Communication, dated Jan. 4, 2012, 40 pages.
Yang, U.S. Appl. No. 11/767,728, Office Action Communication, dated Feb. 16, 2012, 33 pages.
Bitar, U.S. Appl. No. 11/782,170, Notice of Allowance and Fees Due, dated Feb. 21, 2012, 20 pages.
Chambers, U.S. Appl. No. 11/951,709, Office Action Communication, dated Mar. 21, 2012, 27 pages.
Cosby, Lawrence V., U.S. Appl. No. 11/940,470, Office Action Communication, dated Mar. 4, 2011, 22 pages.
Yang, Qian, U.S. Appl. No. 11/877,926, Office Action Communication, dated Mar. 23, 2011, 32 pages.
Bitar, Nancy, U.S. Appl. No. 11/782,170, Office Action Communication, dated Mar. 17, 2011, 19 pages.
Yang, Qian, U.S. Appl. No. 11/767,728, Office Action Communication, dated Mar. 15, 2011, 34 pages.
Tiv, U.S. Appl. No. 11/951,712, Office Action Communication, dated Apr. 26, 2011, 20 pages.
Tsai, U.S. Appl. No. 11/738,711, Office Action Communication, dated May 23, 2011, 16 pages.
Tsai, U.S. Appl. No. 11/738,723, Office Action Communication, dated May 23, 2011, 16 pages.
Yang, U.S. Appl. No. 11/767,728, Office Action Communication, dated Jul. 28, 2011, 32 pages.
Bitar, U.S. Appl. No. 11/782,170, Office Action Communication, dated Sep. 16, 2011, 21 pages.
Tsai, U.S. Appl. No. 11/738,711, Office Action Communication, dated Sep. 23, 2011, 20 pages.
Tsai, U.S. Appl. No. 11/738,723, Office Action Communication, dated Sep. 27, 2011, 20 pages.
Entezari, U.S. Appl. No. 12/028,073, Notice of Allowance & Fees Due, dated Mar. 21, 2012, 18 pages.
Yang, U.S. Appl. No. 11/877,926, Office Action Communication, dated Apr. 27, 2012, 32 pages.
Yang, U.S. Appl. No. 11/767,728, Office Action Communication, dated May 21, 2012, 49 pages.
Tsai, U.S. Appl. No. 11/738,711, Notice of Allowance & Fees Due, dated May 25, 2012, 5 pages.
Tsai, U.S. Appl. No. 11/738,723, Notice of Allowance & Fees Due, dated May 25, 2012, 31 pages.
Kim, U.S. Appl. No. 12/057,942, Office Action Communication, dated Jun. 7, 2012, 58 pages.
Yang, U.S. Appl. No. 11/767,728, Office Action Communication, dated Aug. 10, 2012, 41 pages.
Yang, U.S. Appl. No. 11/877,926, Office Action Communication, dated Aug. 10, 2012, 47 pages.
Patel, U.S. Appl. No. 12/057,942, Notice of Allowance & Fees Due, dated Oct. 10, 2012, 18 pages.
Yang, U.S. Appl. No. 11/767,728, Notice of Allowance & Fees Due, dated Nov. 15, 2012, 15 pages.
Cosby, U.S. Appl. No. 11/940,470, Examiner's Answers , dated Nov. 16, 2012, 36 pages.
Chambers, U.S. Appl. No. 11/951,709, Examiners Answers, dated Nov. 23, 2012, 28 pages.
Yang, U.S. Appl. No. 11/877,926, Final Office Action, dated Nov. 30, 2012, 43 pages.
Yang, U.S. Appl. No. 11/877,926, Office Action Communication, dated Mar. 1, 2013, 37 pages.
Japanese Patent Application No. 2008-0274140, Office Action Partial Translation, dated Mar. 26, 2013, 2 pages.
Masahiro, “Operating System SXO for Continuously Operable Sure System 2000 (1): An Overview, the 42nd National Convention of IPSJ”, (the first-half period of 1991), Japan, Information Processing Society of Japan, Feb. 25, 1991, abstract, 4 pages.
Yang, U.S. Appl. No. 11/877,926, Office Action Communication, dated Jun. 26, 2013, 42 pages.
Yang, U.S. Appl. No. 11/877,926, Notice of Allowance & Fees Due, IBME-0467, dated Oct. 18, 2013, 12 pages.
Ansari, U.S. Appl. No. 11/940,506, Office Action, IBME-0465, dated Nov. 2, 2009, 20 pages.
Ansari, U.S. Appl. No. 11/940,506, Office Action, IBME-0465, dated May 14, 2010, 16 pages.
Ansari, U.S. Appl. No. 11/940,506, Final Office Action, IBME-0465, dated Oct. 29, 2010, 21 pages.
Ansari, U.S. Appl. No. 11/940,506, Office Action, IBME-0465, dated May 8, 2014, 90 pages.
Ansari, U.S. Appl. No. 11/940,506, Final Office Action, IBME-0465, dated Dec. 17, 2014, 17 pages.
Ansari, U.S. Appl. No. 11/940,506, Notice of Allowance and Fee(s) Due, IBME-0465, dated May 11, 2014, 12 pages.
Related Publications (1)
Number Date Country
20150271254 A1 Sep 2015 US
Continuations (1)
Number Date Country
Parent 11940506 Nov 2007 US
Child 14729596 US