In general, the present invention relates to image processing/inspection. Specifically, the present invention relates to a hybrid image processing system that provides accelerated image processing as compared to previous approaches.
Current image processing/inspection systems have limited processing power. Specifically, current systems perform all image processing functions within a single, general-purpose system. The processor used in current image processing/inspection systems is not powerful enough to handle the image processing demands, data rates, and algorithms of much of the current generation of systems (e.g., manufacturing inspection systems), let alone the next generation. Next-generation manufacturing inspection systems require a fast image processing system in order to complete image inspection within the required times. As the size of the inspection area and the amount of gray scale data double, the data per scan area increases dramatically, and the image inspection processing time increases accordingly. Thus, current inspection systems will not adequately handle the requirements of future manufacturing systems.
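As a purely illustrative calculation (the resolutions and bit depths here are assumptions chosen for the sake of example, not figures from any particular inspection system): a 4,096×4,096-pixel scan at one byte of gray scale per pixel occupies roughly 16 MB, whereas an 8,192×8,192-pixel scan at two bytes per pixel occupies roughly 128 MB, an eight-fold increase in raw data per scan area.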
Although image processing functions are sometimes offloaded to another system, that system also uses a general-purpose processor and therefore provides no actual image processing acceleration. In addition, image processing functions in current systems are tied to a specific processor and platform, making it difficult to offload and accelerate specific functions at a fine-grained level.
Because developing an entirely new inspection system would increase cost and development time, it is desirable to reuse existing system components without impacting system performance. In view of the foregoing, there exists a need for an approach that solves at least one of the above-referenced deficiencies of the current art.
This invention relates to machine vision computing environments, and more specifically relates to a system and method for selectively accelerating the execution of image processing applications using a hybrid computing system. Under the present invention, a hybrid system is one that is multi-platform, and potentially distributed via a network or other connection. Provided herein is a machine vision system and method for executing image processing applications on a hybrid image processing system. Implementations of the invention provide a machine vision system and method for distributing and managing the execution of image processing applications at a fine-grained level via a switch-connected hybrid system. This method allows one system to be used to manage and control the system functions, and one or more other systems to execute image processing applications. The teachings herein also allow the management and control system components to be reused, and the image processing components to be used as an image processing accelerator or co-processor.
As such, the hybrid image processing system of the present invention generally includes an image interface unit and an image processing unit. The image interface unit will receive image data corresponding to a set (i.e., at least one) of images, generate commands for processing the image data, and send the images and the commands to an image processing unit (of the hybrid image processing system). Upon receipt, the image processing unit will recognize and interpret the commands, assign and/or schedule tasks for processing the image data to a set of (e.g., special) processing engines based on the commands, and return results and/or processed image data to the image interface unit.
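Purely by way of illustration, the exchange between the two units can be pictured with the following data structures; every name and field here is a hypothetical sketch introduced for clarity, not an interface defined by the present disclosure.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the data exchanged between the image interface unit
// and the image processing unit; names and fields are assumptions.
struct ImageBuffer {
    uint32_t width = 0;
    uint32_t height = 0;
    uint32_t bytesPerPixel = 1;       // e.g., 8-bit gray scale
    std::vector<uint8_t> pixels;      // raw image data from an image grabber
};

enum class CommandCode : uint32_t {   // generated by the image interface unit
    Threshold, EdgeDetect, Histogram, Inspect
};

struct ProcessingCommand {
    CommandCode code;                 // which image processing function to run
    uint32_t imageId;                 // which received image the command targets
    float parameter;                  // e.g., a threshold level
};

struct ProcessingResult {             // returned by the image processing unit
    uint32_t imageId;
    bool passedInspection;            // result data
    ImageBuffer processedImage;       // and/or the processed image itself
};
```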
A first aspect of the present invention provides a hybrid image processing system, comprising: an image interface unit being configured to receive image data from a set of image recordation mechanisms; and an image processing unit being configured to receive the image data from the image interface unit, process the image data, and send processed image data to the image interface unit.
A second aspect of the present invention provides a hybrid image processing system, comprising: an image interface unit for receiving image data, for sending the image data and commands for processing the image data, and for receiving processed image data, the image interface unit comprising a client application, an image grabber library, a first communications library, and an image processing command library; and an image processing unit for receiving the image data and the commands from the image interface unit, for processing the image data, and for returning processed image data to the image interface unit, the image processing unit comprising a command dispatcher, an image processing library, a software developer kit library, and a second communications library.
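As a rough structural sketch of the components recited in this aspect (the class names below are hypothetical and merely mirror the libraries listed; they are not drawn from the disclosure itself):

```cpp
// Hypothetical outline only: the class names simply mirror the components
// recited in the second aspect and are not taken from the disclosure.
class ClientApplication       { /* drives acquisition and command generation       */ };
class ImageGrabberLibrary     { /* acquires frames from the recordation mechanisms */ };
class CommunicationsLibrary   { /* moves image data and commands between the units */ };
class ImageProcessingCommands { /* builds the commands describing requested work   */ };

struct ImageInterfaceUnit {           // the management/control side
    ClientApplication       client;
    ImageGrabberLibrary     grabber;
    CommunicationsLibrary   comms;    // first communications library
    ImageProcessingCommands commandLibrary;
};

class CommandDispatcher       { /* interprets commands, assigns tasks to engines   */ };
class ImageProcessingLibrary  { /* routines executed on the processing engines     */ };
class SoftwareDeveloperKit    { /* built-in functions available to the engines     */ };

struct ImageProcessingUnit {          // the accelerator/co-processor side
    CommandDispatcher       dispatcher;
    ImageProcessingLibrary  processing;
    SoftwareDeveloperKit    sdk;
    CommunicationsLibrary   comms;    // second communications library
};
```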
A third aspect of the present invention provides a method for processing images, comprising: receiving image data on an image interface unit from a set of image recordation mechanisms; sending the image data and commands for processing the image data to an image processing unit; processing the image data on the image processing unit based on the commands; and returning processed image data from the image processing unit to the image interface unit.
A fourth aspect of the present invention provides a method for processing images, comprising: receiving image data; generating commands for processing the image data using an image processing command library; interpreting the commands; assigning tasks to a set of processing engines to process the image data based on the commands; and processing the image data in response to the tasks.
A fifth aspect of the present invention provides a program product stored on at least one computer readable medium for processing images, the at least one computer readable medium comprising program code for causing at least one computer system to: receive image data from a set of image recordation mechanisms; generate commands for processing the image data; interpret the commands; assign tasks to a set of processing engines based on the commands to process the image data; and process the image data in response to the tasks.
A sixth aspect of the present invention provides a method for deploying a system for processing images, comprising: providing a computer infrastructure being operable to: receive image data from a set of image recordation mechanisms; generate commands for processing the image data; interpret the commands; assign tasks to a set of processing engines to process the image data based on the commands; and process the image data in response to the tasks.
A seventh aspect of the present invention provides computer software embodied in a propagated signal for processing images, the computer software comprising instructions for causing at least one computer system to: receive image data from a set of image recordation mechanisms; generate commands for processing the image data; interpret the commands; assign tasks to a set of processing engines based on the commands to process the image data; and process the image data in response to the tasks.
An eighth aspect of the present invention provides a data processing system for a system for processing images, comprising: a memory medium comprising instructions; a bus coupled to the memory medium; and a processor coupled to the bus that when executing the instructions causes the data processing system to: receive image data from a set of image recordation mechanisms, generate commands for processing the image data, interpret the commands, assign tasks to a set of processing engines based on the commands to process the image data, and process the image data in response to the tasks.
A ninth aspect of the present invention provides a computer-implemented business method for processing images, comprising: receiving image data; generating commands for processing the image data using an image processing command library; interpreting the commands; assigning tasks to a set of processing engines to process the image data based on the commands; and processing the image data in response to the tasks.
These and other features of this invention will be more readily understood from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying drawings in which:
The drawings are not necessarily to scale. The drawings are merely schematic representations, not intended to portray specific parameters of the invention. The drawings are intended to depict only typical embodiments of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numbering represents like elements.
Referring now to
This new design approach is a processing/inspection system based on hybrid, reusable components/systems 54A-N that are combined with special purpose engines/accelerators. Image processing applications use algorithms that often have specialized functions that can benefit from special purpose processors. These special purpose processors can be used as accelerators to speed up image processing algorithms in a fine-grained, selective fashion that takes advantage of the strengths of both general purpose and special purpose processors. Thus, the present invention combines image recordation mechanisms/devices 58A-N, such as cameras, with a special purpose processor for image processing as well as a general purpose processor 50 for determining control information.
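A minimal sketch of such fine-grained, selective offload follows; the accelerator hooks and the threshold operation are assumptions chosen for illustration (in the disclosed system the special purpose engines would play the accelerator role).

```cpp
#include <cstddef>
#include <cstdint>

// General purpose implementation of a single image processing function.
static void cpuThreshold(const uint8_t* in, uint8_t* out, size_t n, uint8_t level) {
    for (size_t i = 0; i < n; ++i)
        out[i] = (in[i] >= level) ? 255 : 0;
}

// Stand-ins for a hypothetical accelerator path; in the disclosed system the
// special purpose engines would play this role. The stubs below simply keep
// the sketch self-contained and runnable on an ordinary CPU.
static bool acceleratorAvailable() { return false; }
static void acceleratorThreshold(const uint8_t* in, uint8_t* out, size_t n, uint8_t level) {
    cpuThreshold(in, out, n, level);   // placeholder for an offloaded implementation
}

// Fine-grained selection: only this specific function is offloaded when an
// accelerator is present; control logic stays on the general purpose processor.
void threshold(const uint8_t* in, uint8_t* out, size_t n, uint8_t level) {
    if (acceleratorAvailable())
        acceleratorThreshold(in, out, n, level);
    else
        cpuThreshold(in, out, n, level);
}
```

The same pattern extends to any routine in an image processing library, which is what allows acceleration to be applied selectively rather than wholesale.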
In a typical embodiment, images are received by hybrid systems 54A-N of image co-processor 52, which process the images to determine image data. This image data (and optionally the images themselves) is then communicated to control processor 50 and staging storage unit 60. Control processor 50 then processes the image data to determine control information. The images, image data, and/or control information can then be stored in archive storage unit 62.
Referring now to
Further shown within image co-processor 52 is a power processing element (PPE) 76, an element interconnect bus (EIB) 74 coupled to the PPE, and a set (e.g., one or more), but typically a plurality, of special purpose engines (SPEs) 54A-N. SPEs 54A-N share the load involved with processing image(s) into image data. This division of work among SPEs 54A-N was not previously performed, and hence, previous systems are not suitable for current-day and future image technology. As further shown, SPEs 54A-N feed image data, image processing routines, arithmetic/statistical operations, inspection processes, etc. to main memory 70 (which could be realized as staging storage unit 60 of
As further depicted, IA-based PC system 14A of the related art obtains an image from image recordation mechanism 10A via image grabber 20, and passes the image to a general purpose image processor 24 for processing (e.g., utilizing image buffer 22). This sparsely processed image data is then passed to bridge chip 26, IA CPU 30, and (DDR) main memory 28. As can be seen, system 14A (of the related art) utilizes only a single general-purpose processor to process the image. In contrast, the system of the present invention (and of the above-incorporated patent applications) utilizes an image co-processor having a plurality of SPEs 54A-N as well as general purpose control processor 24 of IA-based PC system 14A. This is accomplished by communicating through legacy application(s) 32 in IA-based PC system 14A. Thus, the present invention not only provides improved and accelerated image processing, but does so by utilizing both existing and new infrastructure. It should be noted that the hybrid image processing system of the present invention is operable with multiple different computing platforms (e.g., Windows, Linux, etc.).
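The following is a minimal sketch of how the division of one image's work among a plurality of processing engines might look; ordinary CPU threads stand in for the SPEs here, and the strip-wise partitioning and function names are assumptions made purely for illustration, not details taken from the disclosure.

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Illustrative only: threads stand in for SPEs, and strip-wise partitioning
// is an assumed work division rather than the disclosed one.
static void invertStrip(std::vector<uint8_t>& image, size_t width,
                        size_t rowBegin, size_t rowEnd) {
    for (size_t r = rowBegin; r < rowEnd; ++r)
        for (size_t c = 0; c < width; ++c)
            image[r * width + c] = 255 - image[r * width + c];  // sample operation
}

// Divide one image into horizontal strips and let each "engine" process one.
static void processOnEngines(std::vector<uint8_t>& image, size_t width,
                             size_t height, size_t engineCount) {
    std::vector<std::thread> engines;
    const size_t rowsPerEngine = (height + engineCount - 1) / engineCount;
    for (size_t e = 0; e < engineCount; ++e) {
        const size_t begin = e * rowsPerEngine;
        const size_t end   = std::min(height, begin + rowsPerEngine);
        if (begin >= end) break;
        engines.emplace_back(invertStrip, std::ref(image), width, begin, end);
    }
    for (auto& engine : engines) engine.join();  // wait for every strip
}
```

In the disclosed system, the corresponding work would instead be handled by the SPEs over the element interconnect bus, with results fed to main memory 70 as described above.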
Along these lines, this disclosure (as further described in
As will be further described below, image grabbers 56A-N receive image data corresponding to a set of images from image recordation mechanisms 58A-N. Upon receipt, image interface unit 102 will generate commands for processing the image data using an image processing command library. The commands and image data will then be sent to image processing unit 104 using sets of communications cards 112A-N and 118A-N through a set of communications switches 114A-N. Upon receiving the image data and commands, image processing unit 104 will interpret the commands, and assign tasks to a set of processing engines (e.g., SPEs of
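The following sketch illustrates the general idea of pairing a command with its image data and pushing both across the switch-connected link; the Channel type, the header layout, and all names are assumptions for illustration, since the actual communications cards and switch protocol are not reproduced here.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical transport abstraction; an in-memory byte queue keeps the
// sketch self-contained where the real system would use communications
// cards and switches.
struct Channel {
    std::deque<uint8_t> wire;
    void send(const void* data, size_t n) {
        const auto* p = static_cast<const uint8_t*>(data);
        wire.insert(wire.end(), p, p + n);
    }
};

struct CommandHeader {        // assumed on-the-wire layout, for illustration only
    uint32_t commandCode;     // which processing function to run
    uint32_t imageId;
    uint32_t width, height;
    uint32_t payloadBytes;    // size of the image data that follows
};

// Image interface unit side: send the command, then the image it applies to.
void sendForProcessing(Channel& toProcessingUnit, uint32_t commandCode,
                       uint32_t imageId, uint32_t width, uint32_t height,
                       const std::vector<uint8_t>& pixels) {
    const CommandHeader header{commandCode, imageId, width, height,
                               static_cast<uint32_t>(pixels.size())};
    toProcessingUnit.send(&header, sizeof(header));
    toProcessingUnit.send(pixels.data(), pixels.size());
}
```

On the receiving side, the image processing unit would read the header first and then the payload before handing the command to its dispatcher.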
Referring now to
Referring collectively to
Regardless, upon receiving the image data and commands in step S4, image processing unit 104 will acknowledge and interpret the commands, schedule/assign tasks for processing the image data to a set of processing engines (e.g., SPEs), and generate corresponding results. In general, these functions are performed via cell application 154, SDK library 156, command dispatcher 158, and processing engine library 160. In one embodiment, SDK library 156 comprises built-in functions prepared as a cell library (e.g., the SPE Runtime library, SIMD mass library, etc.). In addition, the function of command dispatcher 158 is to recognize the commands sent from the Intel server and distribute the corresponding tasks to the SPEs or to the PPE itself. Processing engine (PPE/SPE) library 160 is an image processing library executed on the PPE and/or SPEs (on the PPE or the SPEs separately, or using both). In step S6, the results/processed image data are returned to image interface unit 102 via communications libraries 148A-B and communications cards 112A and 118A. As further shown in
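A minimal sketch of the dispatch step described above follows; the command codes, the task type, and the round-robin engine queues are assumptions made for illustration and are not the disclosed implementation.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <queue>
#include <vector>

// Hypothetical dispatcher: recognize the command, then hand the corresponding
// task to one of the processing engines.
using Task = std::function<void()>;

struct EngineQueue {                      // stands in for one processing engine
    std::queue<Task> tasks;
    void submit(Task t) { tasks.push(std::move(t)); }
};

class Dispatcher {
public:
    explicit Dispatcher(size_t engineCount) : engines_(engineCount) {}

    // Register an image processing routine under a command code
    // (playing the role of the processing engine library).
    void registerRoutine(uint32_t code, std::function<void(uint32_t)> routine) {
        routines_[code] = std::move(routine);
    }

    // Recognize the command and assign the task to the next engine, round-robin.
    void dispatch(uint32_t code, uint32_t imageId) {
        const auto it = routines_.find(code);
        if (it == routines_.end()) return;     // unrecognized command: ignore
        auto routine = it->second;
        engines_[next_].submit([routine, imageId] { routine(imageId); });
        next_ = (next_ + 1) % engines_.size();
    }

private:
    std::vector<EngineQueue> engines_;
    size_t next_ = 0;
    std::map<uint32_t, std::function<void(uint32_t)>> routines_;
};
```

The registered routines here stand in for the processing engine library, and the per-engine queues for the engines to which tasks are distributed.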
It should be understood that the present invention could be deployed on one or more computing devices (e.g., servers, clients, etc.) within a computer infrastructure. This is intended to demonstrate, among other things, that the present invention could be implemented within a network environment (e.g., the Internet, a wide area network (WAN), a local area network (LAN), a virtual private network (VPN), etc.), or on a stand-alone computer system. In the case of the former, communication throughout the network can occur via any combination of various types of communications links. For example, the communication links can comprise addressable connections that may utilize any combination of wired and/or wireless transmission methods. Where communications occur via the Internet, connectivity could be provided by conventional TCP/IP sockets-based protocol, and an Internet service provider could be used to establish connectivity to the Internet. Still yet, the computer infrastructure is intended to demonstrate that some or all of the components of such an implementation could be deployed, managed, serviced, etc. by a service provider who offers to implement, deploy, and/or perform the functions of the present invention for others.
Where computer hardware is provided, it is understood that any computers utilized will include standard elements such as a processing unit, a memory medium, a bus, and input/output (I/O) interfaces. Further, such computer systems can be in communication with external I/O devices/resources. In general, processing units execute computer program code, such as the software (e.g., client application 140 and cell application 154) and functionality described above (e.g., all libraries discussed herein), which is stored within memory medium(s). While executing computer program code, the processing unit can read and/or write data to/from memory, I/O interfaces, etc. The bus provides a communication link between each of the components in a computer. External devices can comprise any device (e.g., keyboard, pointing device, display, etc.) that enables a user to interact with the computer system and/or any devices (e.g., network card, modem, etc.) that enable the computer to communicate with one or more other computing devices.
The hardware used to implement the present invention can comprise any specific purpose computing article of manufacture comprising hardware and/or computer program code for performing specific functions, any computing article of manufacture that comprises a combination of specific purpose and general purpose hardware/software, or the like. In each case, the program code and hardware can be created using standard programming and engineering techniques, respectively. Moreover, the processing unit therein may comprise a single processing unit, or be distributed across one or more processing units in one or more locations, e.g., on a client and server. Similarly, the memory medium can comprise any combination of various types of data storage and/or transmission media that reside at one or more physical locations. Further, the I/O interfaces can comprise any system for exchanging information with one or more external devices. Still further, it is understood that one or more additional components (e.g., system software, math co-processing unit, etc.) can be included in the hardware.
While shown and described herein as a hybrid image processing system and method, it is understood that the invention further provides various alternative embodiments. For example, in one embodiment, the invention provides a computer-readable/useable medium that includes computer program code to enable a computer infrastructure to process images. To this extent, the computer-readable/useable medium includes program code that implements the process(es) of the invention. It is understood that the terms computer-readable medium or computer useable medium comprise one or more of any type of physical embodiment of the program code. In particular, the computer-readable/useable medium can comprise program code embodied on one or more portable storage articles of manufacture (e.g., a compact disc, a magnetic disk, a tape, etc.), on one or more data storage portions of a computing device (e.g., a fixed disk, a read-only memory, a random access memory, a cache memory, etc.), and/or as a data signal (e.g., a propagated signal) traveling over a network (e.g., during a wired/wireless electronic distribution of the program code).
In another embodiment, the invention provides a business method that performs the process of the invention on a subscription, advertising, and/or fee basis. That is, a service provider, such as a Solution Integrator, could offer to process images. In this case, the service provider can create, maintain, support, etc., a computer infrastructure, such as computer infrastructure that performs the process of the invention for one or more customers. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties.
In still another embodiment, the invention provides a computer-implemented method for processing images. In this case, a computer infrastructure can be provided and one or more systems for performing the process of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer infrastructure. To this extent, the deployment of a system can comprise one or more of: (1) installing program code on a computing device from a computer-readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computer infrastructure to perform the process of the invention.
As used herein, it is understood that the terms “program code” and “computer program code” are synonymous and mean any expression, in any language, code or notation, of a set of instructions intended to cause a computing device having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form. To this extent, program code can be embodied as one or more of: an application/software program, component software/a library of functions, an operating system, a basic I/O system/driver for a particular computing and/or I/O device, and the like.
A data processing system suitable for storing and/or executing program code can be provided hereunder and can include at least one processor communicatively coupled, directly or indirectly, to memory element(s) through a system bus. The memory elements can include, but are not limited to, local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
Network adapters also may be coupled to the system to enable the data processing system to become coupled to other data processing systems, remote printers, storage devices, and/or the like, through any combination of intervening private or public networks. Illustrative network adapters include, but are not limited to, modems, cable modems and Ethernet cards.
The foregoing description of various aspects of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously, many modifications and variations are possible. Such modifications and variations that may be apparent to a person skilled in the art are intended to be included within the scope of the invention as defined by the accompanying claims.
This application is a continuation-in-part (CIP) application of commonly owned and co-pending patent application Ser. No. 11/738,723, entitled “HETEROGENEOUS IMAGE PROCESSING SYSTEM”, filed Apr. 23, 2007, the entire contents of which are herein incorporated by reference. This application is also related in some aspects to commonly owned patent application Ser. No. 11/738,711, entitled “HETEROGENEOUS IMAGE PROCESSING SYSTEM”, filed Apr. 23, 2007, the entire contents of which are herein incorporated by reference.