Programmable test instrument

Information

  • Patent Grant
  • Patent Number
    9,759,772
  • Date Filed
    Friday, October 28, 2011
  • Date Issued
    Tuesday, September 12, 2017
Abstract
In general, a test instrument includes a first processing system that is programmable to run one or more test programs to test a device interfaced to the test instrument, and that is programmed to control operation of the test instrument, a second processing system that is dedicated to device testing, the second processing system being programmable to run one or more test programs to test the device, and programmable logic configured to act as an interface between the test instrument and the device, the programmable logic being configurable to perform one or more tests on the device. The first processing system and the second processing system are programmable to access the device via the programmable logic.
Description
TECHNICAL FIELD

This disclosure relates generally to a programmable test instrument.


BACKGROUND

Automatic test equipment (ATE) plays a role in the manufacture of electronics, such as semiconductor devices and circuit board assemblies. Manufacturers generally use automatic test equipment, or “tester instruments”, to verify the operation of devices during the manufacturing process. Such a device is referred to as a “device under test” (DUT) or a “unit under test” (UUT). Early detection of faults eliminates costs that would otherwise be incurred by processing defective devices, and thus reduces the overall cost of manufacturing. Manufacturers also use ATE to grade devices against various specifications. Devices can be tested and binned according to different levels of performance in areas such as speed, and can be labeled and sold according to their actual levels of performance.


SUMMARY

In general, in one aspect, a test instrument includes a first processing system that is programmable to run one or more test programs to test a device interfaced to the test instrument, and that is programmed to control operation of the test instrument, a second processing system that is dedicated to device testing, the second processing system being programmable to run one or more test programs to test the device, and programmable logic configured to act as an interface between the test instrument and the device, the programmable logic being configurable to perform one or more tests on the device. The first processing system and the second processing system are programmable to access the device via the programmable logic.


In general, in another aspect, a test instrument includes a first tier system for interacting with an environment external to the test instrument, the first tier system being programmable to perform testing operations on a device, a second tier system that is programmable to perform testing operations on the device, and a third tier system for interfacing to the device, the third tier system being configurable to perform testing operations on the device. The first tier system and the second tier system are programmed to access the device through the third tier system.


Aspects may include one or more of the following features. The first processing system has a first testing latency, the second processing system has a second testing latency, and the programmable logic has a third testing latency, the first testing latency being greater than the second testing latency, and the second testing latency being greater than the third testing latency. The first testing latency is on the order of milliseconds, the second testing latency is on the order of microseconds, and the third testing latency is on the order of nanoseconds. The first processing system is programmed to run one or more test programs to test the device interfaced to the test instrument, the second processing system is not programmed to run one or more test programs to test the device, and the programmable logic is not configured to perform one or more tests on the device.


The first processing system is not programmed to run one or more test programs to test the device interfaced to the test instrument, the second processing system is programmed to run one or more test programs to test the device, and the programmable logic is not configured to perform one or more tests on the device. The first processing system is not programmed to run one or more test programs to test the device interfaced to the test instrument, the second processing system is not programmed to run one or more test programs to test the device, and the programmable logic is configured to perform one or more tests on the device. The first processing system includes a processing device that executes a windowing operating system, the second processing system includes one or more processing devices, each of the one or more processing devices corresponding to a different device to be tested by the test instrument, and the programmable logic includes one or more field programmable gate arrays (FPGAs), each of the one or more FPGAs corresponding to a different device to be tested by the test instrument.


The programmable logic includes field programmable gate arrays (FPGAs), at least one of the FPGAs being configurable to perform one or more tests on the device, and at least one of the FPGAs being pre-programmed to perform functions that do not involve exchange of data with the device to be tested. At least one of the first processing system, the second processing system, and the programmable logic is reprogrammable via one or more interfaces. Controlling operation of the test instrument includes one or more of the following: exchanging communications between the test instrument and one or more entities over a network, scanning the test instrument for malware, and performing memory management functions.


Two or more of the features described in this disclosure, including this summary section, may be combined to form embodiments not specifically described herein.


The systems and techniques described herein, or portions thereof, may be implemented as a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices. The systems and techniques described herein, or portions thereof, may be implemented as an apparatus, method, or electronic system that may include one or more processing devices and memory to store executable instructions to implement the stated functions.


The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.





DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an example test instrument.



FIG. 2 is a block diagram showing examples of features that may be incorporated into the example test instrument of FIG. 1.



FIG. 3 is a block diagram of an example test system.



FIG. 4 is a block diagram of an example tester included in the test system.





DETAILED DESCRIPTION

Described herein is a test instrument having a multi-tiered architecture. For example, the architecture may include a first tier processing system that interacts with an environment external to the test instrument, and that is programmable to perform testing operations on a unit under test (UUT). The architecture may also include a second tier processing system that is programmable to perform testing operations on the UUT, and a third tier processing system that interfaces to the UUT and that is also configurable to perform testing operations on the UUT. The architecture may also be configured so that the first tier processing system and the second tier processing system access the UUT through the third tier system.



FIG. 1 is a block diagram of an example implementation of the foregoing test instrument 100. In FIG. 1, test instrument 100 includes a three-tiered processing system. However, in other example implementations, there may be more, or fewer, tiers. The different tiers of test instrument 100 reflect the relative relationship of the tiers to the DUT. In this example, first tier 101 includes a computer 102. Computer 102 controls various features of test instrument 100, such as communication with an external network. In addition, computer 102 is programmable to perform various testing operations, as described below. Second tier 104 includes one or more processing devices 106 to 108 that are dedicated to testing. For example, processing devices 106 to 108 typically do not perform non-test functions like test instrument control and network communication; however, the processing devices 106 to 108 may perform tasks such as communication and flow of control, interrupts, and timing. Third tier 110 includes logic 111 to 113 that is programmable both to act as an interface to a DUT 115 and to perform one or more test operations on the DUT.


In first tier 101 of this example, computer 102 includes one or more processing devices, such as one or more microprocessors or a single multi-core microprocessor (not shown). Computer 102 also includes memory (not shown) that stores executable code to control test instrument communication with the external environment, and to perform various “housekeeping” functions to control operation of test instrument 100. For example, computer 102 may be responsible for exchanging communications between the test instrument and one or more external entities over a network interface 120, scanning the test instrument for malware, memory management, power control, and other functions that are not specifically related to testing the DUT.


Computer 102 is also programmable to perform test operations on a DUT (e.g., 115) interfaced to test instrument 100. The test operations may include, but are not limited to, testing bus speed, reaction time, or any other appropriate operational aspects of the DUT. In general, the testing that is performed is dependent upon the type of device being tested, and the information sought during testing.


One or more test programs may be loaded into memory on computer 102, and executed by processing device(s) in computer 102 in order to perform the testing. While performing testing, computer 102 may continue to perform other functions, such as those described above, to keep test instrument 100 operational. Consequently, the test latency (e.g., the amount of time between the start of a test and receipt of test results) can be on the order of milliseconds. This is but an example of the test latency. In different systems, numerous factors may have an effect on test latency, such as the speed of the processing device(s) in computer 102, the amount of memory available in computer 102 to run the test programs, and so forth.
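The notion of test latency used throughout this description (the time between the start of a test and receipt of test results) can be sketched in Python; the function name and signature are illustrative assumptions, not part of the disclosure:

```python
import time

def run_with_latency(test_fn, *args):
    """Run a test function and report (result, latency), where latency is
    the elapsed time between the start of the test and receipt of its
    results, per the definition above."""
    start = time.perf_counter()
    result = test_fn(*args)
    return result, time.perf_counter() - start
```

A test program running at the first tier would typically show latencies on the order of milliseconds when measured this way, for the reasons given above.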


A possible advantage of performing testing via computer 102 relates to development costs of test programs. More specifically, computer 102 may run Windows or another relatively user-friendly operating system. Tools for development of test programs on such an operating system are typically widely available, and generally well known to test program developers. As a result, the cost of developing test programs on computer 102, to run on computer 102, can be less than the cost of developing test programs to run on the other tiers of the multi-tiered architecture. This generalization, however, may not apply in all cases.


In this example, second tier 104 includes multiple embedded processing devices 106 to 108. Here, three embedded processing devices are shown; however, test instrument 100 may include any appropriate number of embedded processing devices, e.g., one, two, four, five, or more. These processing devices are embedded in the sense that they are incorporated into test instrument 100 and are dedicated to performing test functions (e.g., to testing DUTs interfaced to test instrument 100). Embedded processing devices 106 to 108 typically are not responsible for test instrument operations like the “housekeeping” operations described above that are performed by computer 102. However, in some implementations, embedded processing devices 106 to 108 may be programmed to perform one or more such operations, or other operations not specifically directed to testing DUTs.


Each embedded processing device 106 to 108 may include, e.g., a microcontroller or a microprocessor having a single core or multiple cores. Each microprocessor is programmable, either directly or via computer 102. For example, a user of test instrument 100 may interact with the operating system of computer 102 to program an embedded processing device 106. Alternatively, there may be a direct interface, e.g., hardware or software, through which each embedded processing device may be programmed. Programming, in this context, refers to storing one or more test programs onto a respective embedded processing device, which can be executed on that embedded processing device to test a DUT.


As shown in FIG. 1, each embedded processing device is interfaced to computer 102 and to respective programmable logic (in this example, a field programmable gate array (FPGA)). As explained below, each FPGA acts as an interface to a separate DUT (not shown) or to a portion of a single DUT (e.g., a bus 122, 123, 124 on that DUT, as shown) for testing. Accordingly, in this example, each embedded processing device may be programmed with a test program designed specifically for the corresponding DUT, or portion thereof, being tested. As noted, an appropriate test program may be loaded directly into the embedded processing device or it may be loaded via computer 102. Each embedded processing device may execute its own test program separately from, and concurrently with, other embedded processing devices. In some implementations, there may be coordination among the embedded processing devices as to how their respective test programs are to be executed. Such coordination may be implemented by the embedded processing devices themselves or by computer 102. In some implementations, the coordination may involve devices at different tiers of the architecture. In some implementations, the different embedded processing devices 106 to 108 may implement different portions (e.g., modules) of the same test program, with or without appropriate coordination.
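The concurrent, per-device execution described above can be sketched as follows. Threads stand in for embedded processing devices 106 to 108, and the function and device names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def run_embedded_tests(test_programs):
    """Run one test program per embedded device concurrently and collect
    the results keyed by device. Each value in test_programs is a callable
    standing in for a test program loaded onto that device."""
    with ThreadPoolExecutor(max_workers=len(test_programs)) as pool:
        futures = {dev: pool.submit(prog) for dev, prog in test_programs.items()}
        return {dev: fut.result() for dev, fut in futures.items()}
```

Coordination among devices, when present, would sit above this dispatch, e.g., in computer 102.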


A possible advantage of performing testing via an embedded processing device relates to test latency. More specifically, because the embedded processing devices are dedicated to testing, their resources are not typically taxed by other tasks. As a result, testing latency can be less than that achieved by computer 102. For example, test latency for an embedded processing device can be on the order of microseconds. This, however, is but an example of embedded processing device test latency. In different systems, numerous factors may have an effect on test latency, such as processing device speed, the amount of memory available to run the test programs, and so forth. Accordingly, the foregoing generalization may not apply in all cases.


Furthermore, tools are available for development of test programs on the embedded processing devices. As a result, the cost of developing test programs on an embedded processing device, to run on an embedded processing device, can be less than the cost of developing test programs to run on hardware, such as an FPGA.


Third tier 110 includes programmable logic, e.g., FPGAs 111 to 113, although other types of programmable logic may be used in lieu of FPGAs. Each FPGA is configured by loading a program image into the FPGA. This program image is referred to as an “FPGA load”. In this example, each FPGA is configured to act as an interface between a DUT, or portion thereof (e.g., a DUT bus) and test instrument 100. For example, an FPGA may specify a port width, port speed (e.g., 10 MHz to 400 MHz), the number of input ports, the number of output ports, and so forth.
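The interface parameters that an FPGA load may specify (port width, port speed, numbers of input and output ports) can be captured in a small configuration record, sketched below. The 10 MHz to 400 MHz range comes from the text; the field names and validation logic are assumptions:

```python
from dataclasses import dataclass

@dataclass
class FpgaInterfaceConfig:
    """Illustrative descriptor of the interface an FPGA load specifies."""
    port_width_bits: int
    port_speed_mhz: float
    num_input_ports: int
    num_output_ports: int

    def validate(self) -> bool:
        # Port speed range taken from the description (10 MHz to 400 MHz).
        if not 10 <= self.port_speed_mhz <= 400:
            raise ValueError("port speed must be within 10-400 MHz")
        if self.port_width_bits <= 0:
            raise ValueError("port width must be positive")
        if self.num_input_ports < 0 or self.num_output_ports < 0:
            raise ValueError("port counts must be non-negative")
        return True
```

A descriptor like this would be fixed by the FPGA load itself, since the interface is part of the program image loaded into the FPGA.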


First tier 101 computing device(s) (e.g., computer 102) and second tier 104 computing device(s) (e.g., embedded processing devices 106 to 108) access DUT 115 through third tier 110. For example, as shown in FIG. 1, each embedded processing device may communicate with DUT 115 through a corresponding FPGA. Computer 102 may communicate with DUT 115 through one or more FPGAs, depending upon which DUT, or portion of a DUT, is being currently tested. In some implementations, each interface implemented by an FPGA is programmable. In other implementations, the interface implemented by each FPGA is static (e.g., not programmable).


Each FPGA may also be configurable to perform one or more tests on a corresponding DUT, or portion thereof, to which the FPGA is interfaced. For example, the FPGA load for each FPGA may include one or more test routines that are run by the FPGA to test various aspects of the DUT. As above, the routines that are implemented depend upon the device being tested, and the information sought during testing. Each FPGA may execute its own test routine separately from, and concurrently with, the other FPGAs. In some implementations, there may be coordination among the FPGAs as to how their respective test routines are to be executed. Such coordination may be implemented by the FPGAs themselves, by their corresponding embedded processing devices, or by computer 102. In some implementations, the coordination may involve devices at different tiers of the architecture. For example, computer 102, in concert with embedded processing devices 106 to 108, may coordinate operation of respective FPGAs 111 to 113. In some implementations, the different FPGAs may implement different portions (e.g., modules) of the same test routine, with or without appropriate coordination.


A possible advantage of performing testing via an FPGA relates to test latency. More specifically, because the FPGAs are hardware devices, they are able to run at higher speeds than the test routines programmed into either the embedded processing devices 106 to 108 or computer 102. As a result, testing latency can be less than that achieved by embedded processing devices 106 to 108 or computer 102. For example, test latency for an FPGA can be on the order of nanoseconds. This, however, is but an example of FPGA test latency. In different systems, numerous factors may have an effect on test latency. Accordingly, the foregoing generalization may not apply in all cases.
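The latency ordering across the three tiers suggests a simple budget-based choice of where to run a test. The tier names and order-of-magnitude figures below follow the description; the selection function itself is a hypothetical illustration, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    typical_latency_s: float  # order-of-magnitude test latency from the text

# Ordered from most to least programmer-friendly, per the description.
TIERS = [
    Tier("computer", 1e-3),            # first tier: milliseconds
    Tier("embedded processor", 1e-6),  # second tier: microseconds
    Tier("FPGA", 1e-9),                # third tier: nanoseconds
]

def pick_tier(latency_budget_s: float) -> Tier:
    """Choose the most easily programmed tier that still meets the budget."""
    for tier in TIERS:
        if tier.typical_latency_s <= latency_budget_s:
            return tier
    return TIERS[-1]  # nothing meets the budget; use the fastest tier
```

This mirrors the trade-off stated above: lower tiers are faster, but test programs for them are costlier to develop.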


In some implementations, testing may be performed exclusively by one tier or another of the architecture. For example, computer 102 may be programmed to run one or more test programs to test a DUT, while devices on other tiers of the architecture do not perform DUT tests. Embedded processing devices 106 to 108 may be programmed to run one or more test programs to test a DUT, while devices on other tiers of the architecture do not perform DUT tests. FPGAs 111 to 113 may be configured to run one or more tests on the device, while devices on other tiers of the architecture do not perform DUT tests. Devices that are not performing tests are not necessarily dormant during this time. For example, computer 102 may continue to perform the housekeeping operations described above; the FPGAs may continue to route data to/from the DUT (i.e., to act as interfaces to the DUT); and the embedded processing devices may continue to be active in coordination or other communication (e.g., transmitting test results from the FPGAs to computer 102).


In other implementations, testing may be performed by different tiers of the architecture concurrently or in concert. For example, two or more of computer 102, embedded processing devices 106 to 108, and FPGAs 111 to 113 may act in coordination, at the same time or within the same test sequence, to perform one or more test operations on a single DUT or on multiple DUTs. To effect such coordination, appropriate programming is loaded into computer 102 and/or embedded processing devices 106 to 108, and/or an appropriate image is loaded into the FPGAs. By way of example, a first test may be performed on a DUT by computer 102; a second test may be performed on the DUT by embedded processing device 106; and a third test may be performed on the DUT by FPGA 111. The first, second and third tests may be separate tests, or part of the same test sequence. Data from the first, second and third tests may be combined, e.g., in computer 102, and processed to obtain the appropriate test results. These test results may be sent to an external computer (not shown) for analysis and reporting. Any tier of the architecture, or another (e.g., third-party) computer (not shown), may perform the coordination.
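The combining of per-tier test data in computer 102 might look like the following sketch. The dictionary shapes and the pass/fail convention are assumptions made for illustration:

```python
def combine_results(computer_result, embedded_result, fpga_result):
    """Merge results from tests run at the three tiers into one report.
    The overall test passes only if every tier's test passed."""
    report = {
        "computer": computer_result,
        "embedded": embedded_result,
        "fpga": fpga_result,
    }
    report["pass"] = all(
        r.get("pass", False)
        for r in (computer_result, embedded_result, fpga_result)
    )
    return report
```

A report like this could then be forwarded to an external computer for analysis and reporting, as described above.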


In implementations where one or more tiers of the architecture have not been programmed, the unprogrammed tiers may be bypassed (at least as far as their test functionality is concerned). The unprogrammed tiers may be pre-programmed or pre-configured to perform various functions, such as those described above relating to programming and communication among the tiers and with an external network.


Devices at the various tiers may be programmed or configured in real-time. In this context, “real-time” includes programming at test time or shortly before test time. That is, the test instrument need not come pre-programmed with test programs that are to be run on a DUT. Those test programs may be incorporated into the instrument at the appropriate time. Existing test programs on the test instrument may likewise be replaced with new test programs, as appropriate.



FIG. 2 shows another example implementation of a test instrument 200 having a multi-tiered architecture. In the example of FIG. 2, test instrument 200 includes a processing system 201, a control FPGA 202, and a test-defined FPGA 204.


Processing system 201 may be a computer, such as computer 102; an embedded processing device, such as embedded processing devices 106 to 108; or a two-tiered processing system, such as tiers 101 and 104.


Control FPGA 202 may be a dedicated FPGA that is configured to perform various housekeeping functions that are not within the purview, e.g., of a computer, such as computer 102. For example, those functions may include reading memory, determining die temperatures, and regulating power in the test instrument. In this implementation, control FPGA 202 is not configurable; however, it may be configurable in other implementations.


Test-defined FPGA 204 may be a configurable FPGA, such as FPGAs 111 to 113 of FIG. 1. More specifically, test-defined FPGA 204 may be configurable to perform one or more tests on a corresponding DUT, or portion thereof to which the test-defined FPGA is interfaced. For example, the FPGA load for each test-defined FPGA may include one or more test routines that are run by the test-defined FPGA to test various aspects of the DUT. As above, the routines that are implemented depend upon the device being tested, and the information sought during testing. Test routines run by each test-defined FPGA may be run independently of other test routines run by other test-defined FPGAs, or there may be coordination among test-defined FPGA 204 and other test-defined FPGAs (not shown) that are part of the test instrument. The types of coordination among test-defined FPGAs, embedded processing devices, and a computer are similar to those described above with respect to FIG. 1.


In the example of FIG. 2, control FPGA 202 and test-defined FPGA 204 are separate devices. In other implementations, their functionalities can be combined into a single, programmable FPGA.



FIG. 2 also shows a bridge 205. Bridge 205 may include one or more buses and other appropriate electronics for transmitting communications among the various devices included in test instrument 200.


As shown in FIG. 2, processing system 201 is associated with memory 206; control FPGA 202 is associated with memory 208; and test-defined FPGA 204 is associated with memory 210. Each such memory may be used for storing test data and/or test programs, as well as for executing test programs. In this example implementation, each memory is dedicated to its corresponding device. However, control FPGA 202 may provide a path through which test-defined FPGA 204 (or another system processing device) may access, and use, its corresponding memory.


Referring now to FIG. 3, that figure shows an example of a system on which the architecture may be implemented. FIG. 3 shows an example test system 300 for testing a device-under-test (DUT) 301. Test system 300 includes a tester 302, which may have the multi-tiered architecture of FIG. 1 or 2. To interact with tester 302, system 300 includes a computer system 305 that interfaces with tester 302 over a network connection 306. As noted below, computer system 305 may incorporate the functionality of computer 102 (FIG. 1) or it may be an external computer that interacts with computer 102 on the test instrument. Typically, computer system 305 sends commands to tester 302 to initiate execution of routines and programs for testing DUT 301. Such executing test programs may initiate the generation and transmission of test signals to the DUT 301 and collect responses from the DUT. Various types of DUTs may be tested by system 300. For example, DUTs may be avionics, radar, weaponry, semiconductor devices, and so forth.


To provide test signals and collect responses from the DUT, tester 302 is connected, via an appropriate FPGA interface, to one or more connector pins that provide an interface for the internal circuitry of DUT 301. For illustrative purposes, in this example, device tester 302 is connected to a connector pin of DUT 301 via a hardwire connection to deliver test signals (to the internal circuitry of DUT 301). Device tester 302 also senses signals at DUT 301 in response to the test signals provided by device tester 302. For example, a voltage signal or a current signal may be sensed at a DUT pin in response to a test signal. Such single port tests may also be performed on other pins included in DUT 301. For example, tester 302 may provide test signals to other pins and collect associated signals reflected back over conductors (that deliver the provided signals). By collecting the reflected signals, the input impedance of the pins may be characterized along with other single port testing quantities. In other test scenarios, a digital signal may be sent to DUT 301 for storage on DUT 301. Once stored, DUT 301 may be accessed to retrieve and send the stored digital value to tester 302. The retrieved digital value may then be identified to determine if the proper value was stored on DUT 301.
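The store-and-retrieve digital test described above can be sketched as follows. A plain dictionary models DUT storage here; in a real tester the value would be driven and read back through an FPGA interface, and the function name is an illustrative assumption:

```python
def digital_readback_test(dut_memory, address, value):
    """Store a digital value on the DUT, retrieve it, and verify that the
    proper value was stored, per the test scenario described above."""
    dut_memory[address] = value     # tester sends the value for storage
    readback = dut_memory[address]  # tester retrieves the stored value
    return readback == value
```

A failing comparison would indicate a fault in the DUT's storage path for that address.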


Along with performing one-port measurements, a two-port test may also be performed by device tester 302. For example, a test signal may be injected to a pin on DUT 301 and a response signal may be collected from one or more other pins of DUT 301. This response signal is provided to device tester 302 to determine quantities, such as gain response, phase response, and other throughput measurement quantities.
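The gain-response portion of a two-port measurement reduces to a ratio of amplitudes expressed in decibels, as in this sketch. The amplitudes are assumed to have already been measured by the tester:

```python
import math

def gain_db(input_amplitude, output_amplitude):
    """Gain response of a two-port measurement, in decibels:
    20 * log10(output / input) for amplitude quantities."""
    return 20.0 * math.log10(output_amplitude / input_amplitude)
```

Phase response and other throughput quantities would be computed analogously from the injected signal and the collected response.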


Referring also to FIG. 4, to send and collect test signals from multiple connector pins of a DUT (or multiple DUTs), device tester 302 includes an interface card 401 that can communicate with numerous pins. For example, interface card 401 includes the one or more FPGAs described herein, which may be used to transmit test signals to the DUT and to collect corresponding responses. Each communication link to a pin on the DUT may constitute a channel and, by providing test signals to a large number of channels, testing time may be reduced since multiple tests may be performed simultaneously. Along with having many channels on an interface card, by including multiple interface cards in tester 302, the overall number of channels increases, thereby further reducing testing time. In this example, two additional interface cards 402 and 403 are shown to demonstrate that multiple interface cards may populate tester 302.


Each interface card may include dedicated integrated circuitry, including, e.g., an FPGA and an embedded processing device (as described, e.g., with respect to FIG. 1), for performing particular test functions. This circuitry may implement, e.g., a pin electronics (PE) stage for performing PE tests, and a parametric measurement unit (PMU) stage for performing PMU tests. Typically, PMU testing involves providing a (programmable) DC voltage or current signal to the DUT to determine such quantities as input and output impedance, current leakage, and other types of DC performance characterizations. PE testing involves sending DC or AC test signals, or waveforms, to a DUT (e.g., DUT 301) and collecting responses to further characterize the performance of the DUT. For example, the PE stage may transmit (to the DUT) AC test signals that represent a vector of binary values for storage on the DUT. Once these binary values have been stored, the DUT may be accessed by tester 302 to determine if the correct binary values have been stored.
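A PMU-style DC characterization, as described above, can be sketched as a forced voltage plus a current-limit check. The measurement callable and the default leakage limit are illustrative stand-ins, not values from the disclosure:

```python
def pmu_leakage_test(force_voltage_v, measure_current_a, max_leakage_a=1e-6):
    """Force a DC voltage on a DUT pin and check that the measured leakage
    current is within the limit. measure_current_a stands in for the PMU
    hardware's measurement path."""
    current = measure_current_a(force_voltage_v)
    return abs(current) <= max_leakage_a
```

Input/output impedance checks would follow the same force-and-measure pattern with different computed quantities.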


In some arrangements, an interface device may be used to connect one or more conductors from tester 302 to the DUT. For example, the DUT may connect to an Interface Test Adapter (ITA) which interfaces with an Interface Connection Adapter (ICA) that connects with the tester. The DUT (e.g., DUT 301) may be mounted onto a device interface board (DIB) for providing access to each DUT pin. In such an arrangement, a DUT conductor may be connected to the DIB for placing test signals on the appropriate pin(s) of the DUT. Additionally, in some arrangements, tester 302 may connect to two or more DIBs for interfacing the channels provided by interface cards 401 to 403 to one or multiple DUTs.


To initiate and control the testing performed by interface cards 401 to 403, tester 302 includes a PE controller 408 (e.g., in a system processing device, in an embedded processing device, or in programmable logic) to provide test parameters (e.g., test signal voltage level, test signal current level, digital values, etc.) for producing test signals and analyzing DUT responses. Tester 302 also includes a network interface 409 that allows computer system 305 to control the operations executed by tester 302 and also allows data (e.g., test parameters, DUT responses, etc.) to pass between tester 302 and computer system 305.


The computer system, or another processing device used on or associated with test system 300, may be configured to exchange communications with a test program running on tester 302 through active communication channels with the device tester. The computer system may be, or include, computer 102 of FIG. 1. Alternatively, computer 102 may be part of tester 302 and the computer system described with respect to FIG. 4 may communicate with computer 102.


The foregoing describes performing testing using a system processing device, embedded processing devices, or programmable logic. However, testing, as described herein, may be performed using a combination of a system processing device, embedded processing devices, and programmable logic. For example, each of these different elements may run one or more test programs simultaneously to test the same device or portion thereof. Likewise, these different elements may coordinate testing so that, e.g., a system processing device (e.g., 102 of FIG. 1) performs a first part of a test sequence, an embedded processing device (e.g., 106 of FIG. 1) performs a second part of the same testing sequence, and programmable logic (e.g., FPGA 111 of FIG. 1) performs a third part of the same testing sequence. Any appropriate coordination may take place between the different programmable elements of the test instrument described herein.


Furthermore, in some implementations, a tier of processing may be circumvented. For example, testing may occur using a system processing device (e.g., 102) and programmable logic (e.g., FPGA 111), but not an embedded processing device. In such implementations, communications between the system processing device and the programmable logic may pass through an embedded processing device or bypass the embedded processing device tier altogether.
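A minimal sketch of circumventing the embedded tier, using a hypothetical routing function (the tier labels and command argument are illustrative, not part of this disclosure):

```python
def route_command(command, use_embedded_tier=True):
    """Return the ordered list of tiers a command traverses on its way
    from the system processing device to the programmable logic.
    Tier labels are illustrative only."""
    path = ["system processing device"]
    if use_embedded_tier:
        path.append("embedded processing device")
    path.append("programmable logic")
    return path

# With the embedded tier circumvented, the command passes directly
# from the system processing device to the programmable logic.
direct_path = route_command("apply_test_vector", use_embedded_tier=False)
```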


In some implementations, there may be more than three tiers of processing devices. For example, there may be two tiers of embedded processing devices (resulting, e.g., in four tiers total). For example, a single embedded processing device may be used to coordinate testing of a single device, and different embedded processing devices (under the direction of that single embedded processing device) may be used to test different aspects or features of that single device.
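One way to picture this four-tier arrangement: a coordinating embedded device fans out per-feature tests to subordinate embedded devices and collects their results. The feature names, function signatures, and result tuples below are hypothetical, chosen for the sketch.

```python
# Hypothetical sketch: a coordinating embedded device delegates
# per-feature tests of a single DUT to subordinate embedded devices.
# All names and result formats are illustrative only.

def coordinate_device_test(feature_testers, dut_id):
    """The coordinating embedded device gathers one result per feature,
    each produced by a subordinate embedded device."""
    return {feature: tester(dut_id)
            for feature, tester in feature_testers.items()}

# Each lambda stands in for a subordinate embedded device's test routine.
feature_testers = {
    "digital_io": lambda dut: ("digital_io", dut, "pass"),
    "memory":     lambda dut: ("memory", dut, "pass"),
    "power":      lambda dut: ("power", dut, "pass"),
}
results = coordinate_device_test(feature_testers, dut_id=3)
```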


In some implementations, one or more tiers of processing devices may be eliminated from the system of FIG. 1. For example, some implementations may not include a tier of embedded processing devices. In such example systems, there may be only a system processing device (e.g., 102 of FIG. 1) and programmable logic (e.g., FPGAs 111 to 113). In this regard, any appropriate combination of tiers may be employed in the test instrument described herein.


In some implementations, the system processing device (e.g., 102 of FIG. 1) may be external to the test instrument. For example, an external computer may be employed to control operations of the test instrument, and may interact with embedded processing device(s) and programmable logic on the test instrument in the manner described herein. In other implementations, the system processing device may be part of the test instrument or remote from the test instrument (e.g., connected to the test instrument over a network).
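An external controller communicating with the instrument over a network might look like the loopback sketch below. The command string, reply format, and the in-process stub standing in for the instrument's network interface are all assumptions made for this sketch; this disclosure does not define a wire protocol.

```python
import socket
import threading

def instrument_stub(server_sock):
    """Stands in for the test instrument's network interface.
    Accepts one connection, reads one command, sends one reply."""
    conn, _ = server_sock.accept()
    with conn:
        cmd = conn.recv(1024).decode("ascii").strip()
        reply = "OK:" + cmd if cmd == "RUN_TEST" else "ERR"
        conn.sendall(reply.encode("ascii"))

# Bind the stub "instrument" to an ephemeral loopback port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=instrument_stub, args=(server,), daemon=True).start()

# The external system processing device sends a command and reads the reply.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"RUN_TEST")
    response = client.recv(1024).decode("ascii")
server.close()
```

A real implementation would layer a defined command set and framing on top of the transport; the sketch shows only the request/reply shape of remote control.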


In some implementations, the programmable logic may be replaced with non-programmable logic. For example, rather than using an FPGA, one or more application-specific integrated circuits (ASICs) may be incorporated into the test instrument in place of, or in addition to, the programmable logic described herein.


The functionality described herein, or portions thereof, and its various modifications (hereinafter “the functions”), are not limited to the hardware described herein. All or part of the functions can be implemented, at least in part, via a computer program product, e.g., a computer program tangibly embodied in an information carrier, such as one or more non-transitory machine-readable media, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.


A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.


Actions associated with implementing all or part of the functions can be performed by one or more programmable processors executing one or more computer programs to perform the functions. All or part of the functions can be implemented as special-purpose logic circuitry, e.g., an FPGA and/or an ASIC (application-specific integrated circuit).


Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Components of a computer include a processor for executing instructions and one or more memory devices for storing instructions and data.


Components of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Components may be left out of the circuitry shown in FIGS. 1 to 4 without adversely affecting its operation. Furthermore, various separate components may be combined into one or more individual components to perform the functions described herein.


Other embodiments not specifically described herein are also within the scope of the following claims.

Claims
  • 1. A test instrument comprising: programmable logic programmed to act as an interface to a device under test, the programmable logic being configurable to perform one or more tests on the device, the programmable logic specifying a number of input ports and a number of output ports on the interface to the device; a first processing system that is programmable to run one or more test programs to test the device via the interface; and a second processing system that is dedicated to device testing, the second processing system comprising a plurality of embedded processing devices dedicated to device testing, the embedded processing devices being programmable to run one or more test programs to test the device via the interface; wherein the second processing system is configured to transmit test results from the programmable logic to the first processing system; wherein the programmable logic is configurable to execute one or more of the tests separately of the second processing system; and wherein the first processing system has a first testing latency, the second processing system has a second testing latency, and the programmable logic has a third testing latency, the first testing latency being greater than the second testing latency, and the second testing latency being greater than the third testing latency.
  • 2. The test instrument of claim 1, wherein the first testing latency is on the order of milliseconds, the second testing latency is on the order of microseconds, and the third testing latency is on the order of nanoseconds.
  • 3. The test instrument of claim 1, wherein the first processing system is programmed to run one or more test programs to test the device interfaced to the test instrument; wherein the second processing system is not programmed to run one or more test programs to test the device; and wherein the programmable logic is not configured to perform one or more tests on the device.
  • 4. The test instrument of claim 1, wherein the first processing system is not programmed to run one or more test programs to test the device interfaced to the test instrument; wherein the second processing system is programmed to run one or more test programs to test the device; and wherein the programmable logic is not configured to perform one or more tests on the device.
  • 5. The test instrument of claim 1, wherein the first processing system is not programmed to run one or more test programs to test the device interfaced to the test instrument; wherein the second processing system is not programmed to run one or more test programs to test the device; and wherein the programmable logic is configured to perform one or more tests on the device.
  • 6. The test instrument of claim 1, wherein the first processing system comprises a processing device that executes a windowing operating system; wherein each of the embedded processing devices is for testing a different device to be tested by the test instrument; and wherein the programmable logic comprises one or more field programmable gate arrays (FPGAs), each of the one or more FPGAs being for testing a different device to be tested by the test instrument.
  • 7. The test instrument of claim 1, wherein the programmable logic comprises field programmable gate arrays (FPGAs), at least one of the FPGAs being pre-programmed to perform functions that do not involve exchange of data with the device to be tested.
  • 8. The test instrument of claim 1, wherein at least one of the first processing system, the second processing system, and the programmable logic is reprogrammable via one or more interfaces.
  • 9. The test instrument of claim 1, wherein the first processing system is programmable to control operation of the test instrument by performing one or more of the following: exchanging communication between the test instrument and one or more entities over a network, scanning the test instrument for malware, and performing memory management functions.
  • 10. The test instrument of claim 1, wherein at least two of the programmable logic, the first processing system, and the second processing system are configured to perform testing operations on the device concurrently.
  • 11. A test instrument comprising: a first tier system for interacting with an environment external to the test instrument, the first tier system being programmable to perform testing operations on a device; a second tier system comprising a plurality of embedded processing devices dedicated to device testing, the embedded devices being programmable to perform testing operations on the device; and a third tier system that is programmed to act as an interface to the device, the third tier system being configurable to perform testing operations on the device, the first tier system and the second tier system being programmed to access the device through the interface; wherein the third tier system defines at least a number of input ports and a number of output ports of the interface to the device; wherein the second tier system is configured to transmit test results from the third tier system to the first tier system; and wherein the third tier system is configurable to execute one or more of the testing operations separately of the second tier system; wherein the first tier system has a first testing latency, the second tier system has a second testing latency, and the third tier system has a third testing latency, the first testing latency being greater than the second testing latency, and the second testing latency being greater than the third testing latency.
  • 12. The test instrument of claim 11, wherein the first testing latency is on the order of milliseconds, the second testing latency is on the order of microseconds, and the third testing latency is on the order of nanoseconds.
  • 13. The test instrument of claim 11, wherein the first tier system is programmed to run one or more test programs to perform the testing operations on the device; wherein the second tier system is not programmed to run one or more test programs to perform the testing operations on the device; and wherein the third tier system is not configured to perform one or more of the testing operations on the device.
  • 14. The test instrument of claim 11, wherein the first tier system is not programmed to run one or more test programs to perform the testing operations on the device; wherein the second tier system is programmed to run one or more test programs to perform the testing operations on the device; and wherein the third tier system is not configured to perform one or more testing operations on the device.
  • 15. The test instrument of claim 11, wherein the first tier system is not programmed to run one or more test programs to perform the testing operations on the device; wherein the second tier system is not programmed to run one or more test programs to perform the testing operations on the device; and wherein the third tier system is configured to perform one or more testing operations on the device.
  • 16. The test instrument of claim 11, wherein the first tier system comprises a processing device that executes a windowing operating system; wherein each of the embedded processing devices is for testing a different device to be tested by the test instrument; and wherein the third tier system comprises one or more field programmable gate arrays (FPGAs), each of the one or more FPGAs being for testing a different device to be tested by the test instrument.
  • 17. The test instrument of claim 11, wherein the third tier system comprises field programmable gate arrays (FPGAs), at least one of the FPGAs being configurable to perform one or more testing operations on the device, and at least one of the FPGAs being pre-programmed to perform functions that do not involve exchange of data with the device.
  • 18. The test instrument of claim 11, wherein at least one of the first tier system, the second tier system, and the third tier system is reprogrammable via one or more interfaces.
  • 19. The test instrument of claim 11, wherein the first tier system is programmable to control operation of the test instrument by performing one or more of the following: exchanging communications between the test instrument and one or more entities over a network, scanning the test instrument for malware, and performing memory management functions.
US Referenced Citations (252)
Number Name Date Kind
588385 Black Aug 1897 A
604448 Wells May 1898 A
3789360 Clark, Jr. et al. Jan 1974 A
3800090 Matena Mar 1974 A
3810120 Huettner et al. May 1974 A
3833888 Stafford et al. Sep 1974 A
3864670 Inoue et al. Feb 1975 A
3873818 Barnard Mar 1975 A
4038533 Dummermuth et al. Jul 1977 A
4042972 Gruner et al. Aug 1977 A
4195351 Barner et al. Mar 1980 A
4357703 Van Brunt Nov 1982 A
4466055 Kinoshita et al. Aug 1984 A
4481625 Roberts et al. Nov 1984 A
4496985 Jensen et al. Jan 1985 A
4509008 DasGupta et al. Apr 1985 A
4511968 Fencsik et al. Apr 1985 A
4525802 Hackamack Jun 1985 A
RE32326 Nagel et al. Jan 1987 E
4658209 Page Apr 1987 A
4775976 Yokoyama Oct 1988 A
4807147 Halbert et al. Feb 1989 A
4823363 Yoshida Apr 1989 A
4890102 Oliver Dec 1989 A
4901259 Watkins Feb 1990 A
4961053 Krug Oct 1990 A
5049814 Walker, III et al. Sep 1991 A
5061033 Richard Oct 1991 A
5072175 Marek Dec 1991 A
5091692 Ohno et al. Feb 1992 A
5177630 Goutzoulis et al. Jan 1993 A
5218684 Hayes et al. Jun 1993 A
5282166 Ozaki Jan 1994 A
5289116 Kurita et al. Feb 1994 A
5295079 Wong et al. Mar 1994 A
5339279 Toms et al. Aug 1994 A
5345109 Mehta Sep 1994 A
5404480 Suzuki Apr 1995 A
5410547 Drain Apr 1995 A
5444716 Jarwala et al. Aug 1995 A
5471524 Colvin et al. Nov 1995 A
5475624 West Dec 1995 A
5477160 Love Dec 1995 A
5490282 Dreps et al. Feb 1996 A
5493519 Allen, III Feb 1996 A
5524114 Peng Jun 1996 A
5528136 Rogoff et al. Jun 1996 A
5543707 Yoneyama et al. Aug 1996 A
5550480 Nelson et al. Aug 1996 A
5566296 Ohmori et al. Oct 1996 A
5581742 Lin et al. Dec 1996 A
5583874 Smith et al. Dec 1996 A
5583893 Nguyen Dec 1996 A
5598156 Hush et al. Jan 1997 A
5604888 Kiani-Shabestari et al. Feb 1997 A
5606567 Agrawal et al. Feb 1997 A
5614838 Jaber et al. Mar 1997 A
5633899 Fiedler et al. May 1997 A
5636163 Furutani et al. Jun 1997 A
5673276 Jarwala et al. Sep 1997 A
5675813 Holmdahl Oct 1997 A
5740086 Komoto Apr 1998 A
5778004 Jennion et al. Jul 1998 A
5781718 Nguyen Jul 1998 A
5784581 Hannah Jul 1998 A
5807767 Stroupe Sep 1998 A
5844856 Taylor Dec 1998 A
5845151 Story et al. Dec 1998 A
5859993 Snyder Jan 1999 A
5867436 Furutani et al. Feb 1999 A
5875132 Ozaki Feb 1999 A
5875198 Satoh Feb 1999 A
5883852 Ghia et al. Mar 1999 A
5887050 Fenske et al. Mar 1999 A
5889936 Chan Mar 1999 A
5896534 Pearce et al. Apr 1999 A
5920483 Greenwood et al. Jul 1999 A
5929651 Leas et al. Jul 1999 A
5930168 Roohparvar Jul 1999 A
5937154 Tegethoff Aug 1999 A
5942911 Motika et al. Aug 1999 A
5946472 Graves et al. Aug 1999 A
5951704 Sauer et al. Sep 1999 A
5959887 Takashina et al. Sep 1999 A
5959911 Krause et al. Sep 1999 A
5969986 Wong et al. Oct 1999 A
5990815 Linder et al. Nov 1999 A
5996102 Haulin Nov 1999 A
6002868 Jenkins et al. Dec 1999 A
6023428 Tran Feb 2000 A
6028439 Arkin et al. Feb 2000 A
6044481 Kornachuk et al. Mar 2000 A
6049896 Frank et al. Apr 2000 A
6064213 Khandros et al. May 2000 A
6069494 Ishikawa May 2000 A
6073193 Yap Jun 2000 A
6074904 Spikes, Jr. et al. Jun 2000 A
6075373 Iino Jun 2000 A
6084215 Furuya et al. Jul 2000 A
6128242 Banba et al. Oct 2000 A
6146970 Witek et al. Nov 2000 A
6148354 Ban et al. Nov 2000 A
6154803 Pontius et al. Nov 2000 A
6157975 Brief et al. Dec 2000 A
6189109 Sheikh et al. Feb 2001 B1
6202103 Vonbank et al. Mar 2001 B1
6208947 Beffa Mar 2001 B1
6232904 Hartmann et al. May 2001 B1
6272112 Orita Aug 2001 B1
6304982 Mongan et al. Oct 2001 B1
6320811 Snyder et al. Nov 2001 B1
6320866 Wolf et al. Nov 2001 B2
6324663 Chambers Nov 2001 B1
6330241 Fort Dec 2001 B1
6343260 Chew Jan 2002 B1
6345373 Chakradhar et al. Feb 2002 B1
6351134 Leas et al. Feb 2002 B2
6360271 Schuster et al. Mar 2002 B1
6363085 Samuels Mar 2002 B1
6363506 Karri Mar 2002 B1
6370635 Snyder Apr 2002 B2
6380753 Iino et al. Apr 2002 B1
6393588 Hsu et al. May 2002 B1
6400173 Shimizu et al. Jun 2002 B1
6404218 Le et al. Jun 2002 B1
6452411 Miller et al. Sep 2002 B1
6483330 Kline Nov 2002 B1
6509213 Noble Jan 2003 B2
6515484 Bald et al. Feb 2003 B1
6526557 Young et al. Feb 2003 B1
6527563 Clayton Mar 2003 B2
6531335 Grigg Mar 2003 B1
6535831 Hudson et al. Mar 2003 B1
6549155 Heminger et al. Apr 2003 B1
6551844 Eldridge et al. Apr 2003 B1
6559666 Bernier et al. May 2003 B2
6563173 Bolam et al. May 2003 B2
6571357 Martin et al. May 2003 B1
6603323 Miller et al. Aug 2003 B1
6627484 Ang Sep 2003 B1
6627954 Seefeldt Sep 2003 B1
6703852 Feltner Mar 2004 B1
6704888 Caudrelier et al. Mar 2004 B1
6724848 Iyer Apr 2004 B1
6727723 Shimizu et al. Apr 2004 B2
6734693 Nakayama May 2004 B2
6735720 Dunn et al. May 2004 B1
6753238 Kurita Jun 2004 B2
6759865 Gu et al. Jul 2004 B1
6774395 Lin et al. Aug 2004 B1
6798225 Miller Sep 2004 B2
6825052 Eldridge et al. Nov 2004 B2
6825683 Berndt et al. Nov 2004 B1
6847218 Nulty et al. Jan 2005 B1
6849928 Cha et al. Feb 2005 B2
6856150 Sporck et al. Feb 2005 B2
6876214 Crook et al. Apr 2005 B2
6903562 Smith et al. Jun 2005 B1
6912778 Ahn et al. Jul 2005 B2
6917998 Giles Jul 2005 B1
6959257 Larky et al. Oct 2005 B1
6975130 Yevmenenko Dec 2005 B2
6978335 Lee Dec 2005 B2
7036062 Morris et al. Apr 2006 B2
7102367 Yamagishi et al. Sep 2006 B2
7112975 Jin et al. Sep 2006 B1
7113902 Swoboda Sep 2006 B2
7127708 Gomez Oct 2006 B2
7138811 Mahoney et al. Nov 2006 B1
7154259 Miller Dec 2006 B2
7218134 Ho May 2007 B1
7245134 Granicher et al. Jul 2007 B2
7307433 Miller et al. Dec 2007 B2
7327153 Weinraub Feb 2008 B2
7381630 Sawyer Jun 2008 B2
7536679 O'Connell et al. May 2009 B1
7595746 Iorga et al. Sep 2009 B2
7733096 Lin et al. Jun 2010 B2
7906982 Meade et al. Mar 2011 B1
7949916 Ang May 2011 B1
8103992 Chan et al. Jan 2012 B1
8239590 Wennekamp et al. Aug 2012 B1
8798741 Ryan et al. Aug 2014 B2
9470759 Bourassa Oct 2016 B2
20020105352 Mori et al. Aug 2002 A1
20020145437 Sporck et al. Oct 2002 A1
20020171449 Shimizu et al. Nov 2002 A1
20030074611 Nachumovsky Apr 2003 A1
20030084388 Williamson et al. May 2003 A1
20030115517 Rutten Jun 2003 A1
20030206535 Shpak Nov 2003 A1
20030210031 Miller Nov 2003 A1
20030210069 Kikuchi et al. Nov 2003 A1
20030233208 Uesaka et al. Dec 2003 A1
20040008024 Miller Jan 2004 A1
20040036493 Miller Feb 2004 A1
20040075453 Slupsky Apr 2004 A1
20040175850 Shimizu et al. Sep 2004 A1
20040181763 Soltis et al. Sep 2004 A1
20050047339 Dube et al. Mar 2005 A1
20050093571 Suaris et al. May 2005 A1
20050110513 Osada et al. May 2005 A1
20050129033 Gordy et al. Jun 2005 A1
20050171722 Fritzsche Aug 2005 A1
20050182588 Chenoweth et al. Aug 2005 A1
20050193266 Subramanian et al. Sep 2005 A1
20050204223 Ong Sep 2005 A1
20050237073 Miller et al. Oct 2005 A1
20050262412 Mukai et al. Nov 2005 A1
20060028897 Vernenker et al. Feb 2006 A1
20060100812 Sturges et al. May 2006 A1
20060156288 Jones et al. Jul 2006 A1
20060242504 Kadota Oct 2006 A1
20060273809 Miller et al. Dec 2006 A1
20070074191 Geisinger Mar 2007 A1
20070162729 Ryu et al. Jul 2007 A1
20070176807 Mattes et al. Aug 2007 A1
20070245289 Suaris et al. Oct 2007 A1
20070261009 Granicher et al. Nov 2007 A1
20070277154 Badwe Nov 2007 A1
20080040641 Blancha et al. Feb 2008 A1
20080250288 Souef et al. Oct 2008 A1
20080274629 Meyer Nov 2008 A1
20080278190 Ong et al. Nov 2008 A1
20080297167 Possa Dec 2008 A1
20090024381 Sakamoto et al. Jan 2009 A1
20090066365 Solomon Mar 2009 A1
20090091347 Gohel et al. Apr 2009 A1
20090101940 Barrows et al. Apr 2009 A1
20090112548 Conner Apr 2009 A1
20090113245 Conner Apr 2009 A1
20090119084 Nagashima et al. May 2009 A1
20090119542 Nagashima et al. May 2009 A1
20090128162 Singleton et al. May 2009 A1
20090138567 Hoover et al. May 2009 A1
20090167583 Iorga et al. Jul 2009 A1
20090217311 Kocyan et al. Aug 2009 A1
20090299677 Torres Dec 2009 A1
20100058274 Pike et al. Mar 2010 A1
20100146338 Schalick et al. Jun 2010 A1
20100164527 Weyh et al. Jul 2010 A1
20100174955 Carnevale et al. Jul 2010 A1
20100180169 La Fever et al. Jul 2010 A1
20100287424 Kwon Nov 2010 A1
20100313071 Conner Dec 2010 A1
20110067040 Nagahara et al. Mar 2011 A1
20110291693 Ong et al. Dec 2011 A1
20120198279 Schroeder Aug 2012 A1
20130031276 Rathi et al. Jan 2013 A1
20130070640 Chapman Mar 2013 A1
20130110446 Bourassa et al. May 2013 A1
20130111505 Frick et al. May 2013 A1
Foreign Referenced Citations (10)
Number Date Country
H04-122141 Apr 1992 JP
2004-199536 Jul 2004 JP
2009-031933 Feb 2009 JP
2009-116878 May 2009 JP
2011-502265 Jan 2011 JP
10-2009-0107579 Feb 2009 KR
10-2009-0028569 Mar 2009 KR
10-2009-0107579 Oct 2009 KR
WO-9736230 Oct 1997 WO
2010054669 May 2010 WO
Non-Patent Literature Citations (31)
Entry
International search report and written opinion mailed Mar. 18, 2013 in international application No. PCT/US2012/056247, 10 pgs.
International Preliminary Report on Patentability mailed Mar. 18, 2013 in international application No. PCT/US2012/056247, 6 pgs.
Full Machine Translation, KR10-2009-0028569, 45 pgs.
J.L. Anderson Jr., “Test application program interface for instrumentation”, In: Autotestcon, IEEE, Sep. 21, 2006, pp. 272-276, See abstract; p. 273, paragraph 3: and figure 1.
Authorized officer Sung Cheal Byun, International Search Report and Written Opinion mailed Jan. 30, 2013 in international application No. PCT/US2012/056250, 9 pgs.
Search Report dated Feb. 24, 2015 in European application No. 12843264.8, 6 pgs.
Search Report dated Feb. 24, 2015 in European application No. 12843399.2, 7 pgs.
Search Report dated Jan. 29, 2015 in European application No. 12842878.6, 8 pgs.
Majid et al., “Stretching the limits of FPGA SerDes for enhanced ATE performance”, School of Electrical and Computer Engineering Georgia Institute of Technology, Atlanta, GA, USA (2010), 6 pgs.
International Search Report dated Mar. 18, 2013 in International Application No. PCT/US2012/056245, 9 pgs.
International Preliminary Report on Patentability mailed May 8, 2014 in International Application No. PCT/US2012/056245, 6 pgs.
International Preliminary Report on Patentability mailed Apr. 29, 2014 in International Application No. PCT/US2012/056250, 6 pgs.
Full Machine Translation of KR10-2009-0107579, 14 pgs.
Full Machine Translation of JP2009-031933, 24 pgs.
Bengtsson, S. and Engstrom, O., Interface Charge Control of Directly Bonded Silicon Structures, Journal of Applied Physics, 66: 1231-1239 (1989).
Catalyst Enterprises Inc., TA700/800 Series PCI-X, PCI, CPCI, PMC BUS Analyzer-Exerciser User's Manual, 268 pages (2005).
EPO Communication pursuant to Article 94(3) EPC for EP12842878.6, 6 pages (Sep. 16, 2015).
File History of U.S. Appl. No. 10/828,755 (now U.S. Pat. No. 7,307,433), 230 pages (Retrieved Mar. 16, 2017).
File History of U.S. Appl. No. 11/711,382 (now U.S. Pat. No. 7,906,982), 497 pages (Retrieved Mar. 16, 2017).
File History of U.S. Appl. No. 11/964,254 (now U.S. Pat. No. 7,595,746), 148 pages (Retrieved Mar. 16, 2017).
File History of U.S. Appl. No. 13/284,378 (now U.S. Pat. No. 9,470,479), 404 pages (Retrieved Mar. 16, 2017).
File History of U.S. Appl. No. 13/284,491, 543 pages (Retrieved Mar. 16, 2017).
Free Online Dictionary of Computing, “DB-25”, 2 pages (Dec. 8, 1996). URL: http://foldoc.org/DB-25 [Retrieved Mar. 20, 2017].
Free Online Dictionary of Computing, “emulation”, 2 pages (May 22, 2003). URL: http://foldoc.org/emulation [Retrieved Mar. 20, 2017].
Free Online Dictionary of Computing, “host”, 2 pages (Feb. 16, 1995). URL: http://foldoc.org/host [Retrieved Mar. 20, 2017].
International Search Report for PCT/US19970/04032, 3 pages (Jul. 11, 1997).
International Search Report for PCT/US2003/014844, 3 pages (Sep. 18, 2003).
Japanese Office Action for JP2014-538797 and English Translation, 8 pages (Mar. 29, 2016).
Krstic, A. et al., Testing High Speed VLSI Devices Using Slower Testers, VLSI Test Symposium, 17th IEEE Proceedings, 6 pages (Apr. 25-29, 1999).
Teledyne LeCroy, Protocol Analyzer, Products and Overview, 7 pages (2017). URL: http://teledynelecroy.com/protocolanalyzer [Retrieved Mar. 20, 2017].
USB, Frequently Asked Questions, 3 pages. URL: http://www.usb.org/about/faq/ [Retrieved Mar. 20, 2017].
Related Publications (1)
Number Date Country
20130110445 A1 May 2013 US