Various laboratories, industry players, and other institutions have developed sophisticated and specialized processing and experimentation equipment. For instance, power hardware-in-the-loop (PHIL) experimentation provides the capability to simulate interaction between real, physical devices and a large-scale simulated power grid. Not all interested parties may have the financial or technical means to build and/or maintain such equipment, however.
The Internet and other computer networks have made large strides in allowing the sharing of information between two remote parties. Thus, those interested in using specialized processing and experimentation equipment may be able to do so remotely, without relocating people or hardware. However, all communications networks are associated with an inherent latency; information does not travel instantaneously.
In one example, a system includes a first simulation system configured to emulate a first portion of an electrical system and control, based on the first portion of the electrical system, electrical inputs to a device under test. The system also includes a first observation device operatively coupled to the first simulation system. The first observation device is configured to receive a delayed version of at least one remote emulation value, the at least one remote emulation value representing a second portion of the electrical system. The at least one remote emulation value is generated by a second simulation system that is physically separate from the first simulation system. The first observation device is further configured to determine, based on the delayed version of the at least one remote emulation value and a model of the second portion of the electrical system, a respective real-time estimation of the at least one remote emulation value, and output, to the first simulation system, the respective real-time estimation of the at least one remote emulation value. The first simulation system is further configured to emulate the first portion of the electrical system based on the respective real-time estimation of the at least one remote emulation value.
In another example, a device includes at least one processor configured to receive a delayed version of at least one remote emulation value, the at least one remote emulation value representing a first portion of an electrical system. The at least one remote emulation value is generated by a simulation system that is physically separate from the computing device. The at least one processor is further configured to determine, based on the delayed version of the at least one remote emulation value and a model of the first portion of the electrical system, a respective real-time estimation of the at least one remote emulation value, emulate, based on the respective real-time estimation of the at least one remote emulation value, a second portion of the electrical system, and control, based on the second portion of the electrical system, electrical inputs to a device under test.
In another example, a method includes receiving, by a computing device comprising at least one processor, a delayed version of at least one remote emulation value, the at least one remote emulation value representing a first portion of an electrical system. The at least one remote emulation value is generated by a simulation system that is physically separate from the computing device. The method further includes determining, by the computing device and based on the delayed version of the at least one remote emulation value and a model of the first portion of the electrical system, a respective real-time estimation of the at least one remote emulation value, emulating, by the computing device and based on the respective real-time estimation of the at least one remote emulation value, a second portion of the electrical system, and controlling, by the computing device and based on the second portion of the electrical system, electrical inputs to a device under test.
The details of these and/or one or more additional examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
The present disclosure provides systems, devices, and methods that mitigate latency errors in distributed networks, such as those used to perform remote hardware-in-the-loop (HIL) experiments. As one example, a system may include two physically disparate simulation systems, each configured to emulate a different portion of an electrical system. Each simulation system may include an "observer" configured to predict a current value of the other simulation system's emulated state, based on a model of the portion emulated by the other simulation system and a delayed indication of the other system's emulated state. The observer may provide the predicted current value to its corresponding simulation system, which may use the predicted current value in emulation.
By predicting a current value of a physically disparate simulation system's state, the observer may reduce or even eliminate the potential for latency-based error. Such error may arise, for example, when the distance between the two simulation systems is large, when the network connection between the two simulation systems is bandwidth-limited or congested, or when the amount of data being shared between the two simulation systems is large. Reducing latency error may allow for use of specialized facilities without the need to physically transport fragile or heavy components, while ensuring that results are more accurate.
In the example of
Single location, real-time HIL systems, including controller HIL (CHIL) systems and power HIL (PHIL) systems, have been developed extensively in the art for closed-loop simulations. Examples include simulation of physical controllers and power devices and systems for investigating demand side management techniques for providing grid ancillary services. Some such simulations may include multi-physics domains. Although HIL experimentation is not a new concept, research has been focusing on ways to virtually connect multiple HIL experiments—consisting of both physical hardware and simulation at multiple locations—by connecting experiments through a communication link between real-time processors. Such systems, including two or more of the individual HIL systems depicted in
Several national and international laboratories, universities and industrial companies are pursuing these virtually connected testbeds, whereby individual HIL experiments at distant locations share state information to emulate larger-scale connected systems. Motivation for such an arrangement is driven in part by the desire to share resources that are physically separated (often over large geographical distances) and include, in such experiments, devices or systems that are too difficult to relocate or model. For instance, a recent experiment analyzing high penetration photovoltaics (PV) on an electrical distribution feeder was performed between two U.S. national laboratories through the use of distributed HIL, where a large-scale grid simulation at one laboratory in the state of Washington was virtually connected with a set of physical, residential PV inverters operating at the other laboratory, in the state of Colorado.
Communication latencies between HIL instances in a distributed HIL system are an important factor in ensuring the desired bandwidth of the emulated experiment is met. However, related-art research has not adequately addressed mitigation of communication latencies inherent in such remotely connected HIL systems. As the complexity, number, and physical distances between remotely connected HIL systems increase, it is expected that communication delays will begin to adversely impact the resolutions possible in multi-system (distributed) HIL experiments. In particular, sampling times between remote processors place a fundamental limitation on the effective bandwidth of the combined experiment due to the Nyquist criterion. Communication latencies from network traffic or stringent security firewalls at potential host locations can also be a significant source of delay. Advanced methods for mitigating communication latencies are needed to enable larger and more complex virtually connected testbeds.
In contrast to related art systems and methods, the systems, methods, and devices disclosed herein may leverage actual or synthesized observers in order to implement dynamical systems with delayed measurements. Such observers may mitigate at least a portion of the effects of communication delays inherent in, for example, a distributed HIL system. The upshot of the techniques described herein is that the output of the observer will, theoretically, asymptotically converge to the values of the actual (non-delayed) system output. In the HIL setup, this implies that the observer would closely track the output of the system/device under test in the remote location. Thus, the two remote systems/devices would operate closer to how they would operate if they were directly electrically connected (e.g., collocated). As a result, the techniques described herein may improve the overall accuracy, speed, and/or quality of simulations when, for example, multiple, physically-separated instances of the HIL system depicted in
In the example of
In the example of
In the example of
In the example of
In the example of
In this way, observers 6 may reduce or mitigate potential problems arising from the delays inherent in communication across large distances. As one example, without observers 6, simulation systems 4 may attempt to emulate their respective portions of the electrical system based on incorrect values of the other's state, leading to erroneous operation and inaccurate results. By using the delayed state information and a model to estimate current state information, observers 6 may allow simulation systems 4 to more accurately emulate their respective portion of the electrical system and thereby produce more accurate results in HIL experiments. Further details of simulation systems 4 and observers 6 are described below with respect to
The assumptions implicit in the use of the techniques described herein are a) an accurate model of the device(s) under test can be acquired at the remote location and utilized to synthesize the observer(s), and b) the datasets utilized as inputs to the HIL experiments (e.g., solar irradiance, load profiles, etc.) are available and synchronized at both locations.
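As a toy illustration of this arrangement, the following single-process Python sketch (an illustrative assumption, not the disclosed implementation) shows a local emulation loop that consumes a model-based, real-time estimate of a remote value rather than the delayed sample itself. The time step, delay length, correction gain, and first-order models are all placeholder values, and the constant-gain correction is a simplification of the observer structure formalized below.

from collections import deque

DT = 1e-3          # local emulation time step in seconds (assumed)
DELAY_STEPS = 50   # communication latency expressed in steps (assumed)
GAIN = 0.01        # observer correction gain (assumed, kept small for stability)

def remote_model(estimate, local_value):
    # Placeholder model of the remotely emulated portion (first-order coupling assumed).
    return -1.0 * estimate + 0.5 * local_value

def local_model(local_value, remote_estimate):
    # Placeholder model of the locally emulated portion.
    return -2.0 * local_value + remote_estimate

link = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)          # network latency on the remote value
est_history = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)   # past estimates, for delayed comparison
local_value, remote_estimate, remote_true = 1.0, 0.0, 1.0

for _ in range(5000):
    # The "remote" portion evolves on its own; its output reaches the local side DELAY_STEPS late.
    remote_true += DT * (-1.0 * remote_true + 0.5 * local_value)
    link.append(remote_true)
    delayed_remote = link[0]

    # Observer: propagate the remote model, then correct using the delayed error,
    # i.e., the delayed sample minus the equally delayed estimate.
    remote_estimate += DT * remote_model(remote_estimate, local_value)
    remote_estimate += GAIN * (delayed_remote - est_history[0])
    est_history.append(remote_estimate)

    # The local emulation consumes the real-time estimate, never the stale sample.
    local_value += DT * local_model(local_value, remote_estimate)

print(remote_true, remote_estimate)   # the estimate should closely track the true remote value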
The delay compensation methodology described herein (also referred to herein as the Observer Delay Compensation (ODC) approach) makes use of the notion of drift observability of a (non)linear system, and leverages methods for synthesizing observers for dynamical systems with delayed measurements.
The techniques of the present disclosure are described herein using the following notation. Upper-case (lower-case) boldface letters are used for matrices (column vectors), and (·)^T denotes transposition. |·| denotes the absolute value of a number or the cardinality of a set; ∇ stands for the gradient operator; and ⟨x, y⟩ denotes the inner product of vectors x and y. For a given n×1 vector x ∈ ℝ^n, x_i denotes its i-th component, and ∥x∥_2 := √(x^T x). ℕ denotes the set of natural numbers (strictly positive integers).
Consider the following nonlinear dynamical system describing the evolution of a state vector x(t) ∈ ℝ^n:

ẋ(t) = f(x(t)) + g(x(t)) u(t)   (1)

y(t) = h(x(t − δ(t)))   (2)

where u(t) ∈ ℝ^m is a known input vector; functions f: ℝ^n → ℝ^n, g: ℝ^n × ℝ^m → ℝ^n, and h: ℝ^n → ℝ are in C^∞; and δ(t) is a (possibly time-varying) measurement delay, bounded as δ(t) ∈ [0, Δ].
Let ξ: ℝ^n → ℝ be an infinitely differentiable function, and consider a vector field v: D → ℝ^n, D ⊂ ℝ^n. Suppose that v(x) ∈ C^∞. Then, the Lie derivative of the function ξ(x) along the vector field v is defined as the following inner product:

L_v ξ(x) := ⟨∇ξ(x), v(x)⟩   (3)

The k-th Lie derivative of ξ(x), denoted L_v^k ξ(x), is obtained by k-fold iteration of L_v ξ(x), with L_v^0 ξ(x) := ξ(x).
Consider the following mapping associated with functions f(•) and h(•) in (1):
Φ(x) := [h(x)  L_f h(x)  …  L_f^{n−1} h(x)]^T.   (4)
System (1) is said to be globally drift-observable if Φ(x) is a diffeomorphism (defined below) on ℝ^n. Drift-observability of (1) implies that the Jacobian J(x) associated with Φ(x) is nonsingular for all x ∈ ℝ^n, in which case the mapping z = Φ(x) defines a global change of coordinates.
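The drift-observability map Φ(x) in (4) can be assembled mechanically from repeated Lie derivatives. The following sketch (a minimal example on a hypothetical two-state system, not a system from this disclosure) builds Φ(x) symbolically with SymPy and checks that its Jacobian determinant is nonzero.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
states = sp.Matrix([x1, x2])

# Hypothetical drift field f(x) and output map h(x); any smooth choices work here.
f = sp.Matrix([x2, -sp.sin(x1)])
h = x1

def lie_derivative(scalar, field, xs):
    # L_v xi(x) = <grad xi(x), v(x)>, as in (3).
    grad = sp.Matrix([sp.diff(scalar, s) for s in xs]).T
    return (grad * field)[0, 0]

n = len(states)
phi_entries = [h]
for _ in range(n - 1):
    phi_entries.append(sp.simplify(lie_derivative(phi_entries[-1], f, states)))

Phi = sp.Matrix(phi_entries)     # mapping z = Phi(x) in (4)
J = Phi.jacobian(states)         # Jacobian J(x) associated with Phi(x)
print(Phi.T, sp.simplify(J.det()))   # nonzero determinant for all x indicates drift-observability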
Suppose that system (1) is globally drift-observable, and has the following additional properties:
(P1) The triple (f, g, h) has uniform observation degree at least equal to n, defined as:

L_g L_f^k h(x) = 0,  k = 0, …, n−2,  ∀ x ∈ ℝ^n   (5)

L_g L_f^{n−1} h(x) ≠ 0,  for some x ∈ ℝ^n,   (6)

in which case, the following function is well defined:

p(z, u) = [L_f^n h(x) + L_g L_f^{n−1} h(x) u]|_{x=Φ^{−1}(z)}   (7)
(P2) Function p(z, u) in (7) is globally uniformly Lipschitz continuous with respect to z, and the Lipschitz coefficient γ(∥u∥) is a non-decreasing function of ∥u∥; i.e., for any z_1, z_2 ∈ ℝ^n, it holds that

∥p(z_1, u) − p(z_2, u)∥ ≤ γ(∥u∥) ∥z_1 − z_2∥.   (8)
If (1) is globally uniformly Lipschitz drift-observable (“GULDO”) with properties (P1) and (P2), then the following observer associated with system (1) can be constructed:
d x̂(t)/dt = f(x̂(t)) + g(x̂(t)) u(t) + J^{−1}(x̂(t)) k_δ [y(t) − h(x̂(t − δ(t)))]   (9a)

k_δ = e^{−ρδ} k_0   (9b)
where vector k_0 ∈ ℝ^n and ρ ≥ 0 are design parameters. A theorem from the literature asserts that if the input satisfies ∥u(t)∥ ≤ u_m for some constant u_m, then for a decay rate ρ ≥ 0 and bounded delay δ(t) ∈ [0, Δ] there exists a vector k_0 ∈ ℝ^n such that

∥x(t) − x̂(t)∥ ≤ c e^{−ρt},  t ≥ 0   (10)

for some constant c. That is, x̂(t) asymptotically converges to the actual (non-delayed) system state x(t). Furthermore, the theorem states that if vector k_0 satisfies the matrix inequality
(A_n − k_0 C_n)^T P + P(A_n − k_0 C_n)   (11)

+ (2ρ + β + κ) P + γ_m^2 (B_n^T P B_n) I_{n×n} ≤ 0,   (12)
where (A_n, B_n, C_n) are a Brunovsky triple of order n, γ_m is the Lipschitz coefficient associated with u_m, β > 0 and κ > 1 are design parameters, and P is symmetric positive definite, then exponential-decay state tracking is guaranteed for
Δ≦
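As a numerical illustration of the observer in (9a) and (9b), the following Python sketch uses forward-Euler integration with a fixed delay buffer; treating the delay as constant is a simplification of the time-varying δ(t), and the model functions f, g, h, the inverse Jacobian, the gain k_0, and the sampled signals are placeholders that a user would supply from the remote system model.

import numpy as np

def run_observer(f, g, h, J_inv, k0, rho, u, y_delayed, delay_steps, dt, x0, n_steps):
    # Propagates x_hat per (9a): dx_hat/dt = f + g*u + J^{-1}(x_hat) * k_delta * [y(t) - h(x_hat(t - delta))].
    x_hat = x0.astype(float).copy()
    history = [x_hat.copy()] * (delay_steps + 1)          # buffer of past estimates for h(x_hat(t - delta))
    k_delta = np.exp(-rho * delay_steps * dt) * k0        # (9b): gain attenuated by the (assumed fixed) delay
    trajectory = []
    for k in range(n_steps):
        x_hat_delayed = history[0]
        innovation = y_delayed[k] - h(x_hat_delayed)      # delayed output error
        dx = f(x_hat) + g(x_hat) @ u[k] + J_inv(x_hat) @ (k_delta * innovation)
        x_hat = x_hat + dt * dx                           # forward-Euler step of (9a)
        history = history[1:] + [x_hat.copy()]
        trajectory.append(x_hat.copy())
    return np.array(trajectory)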
As shown in the example of
As shown in the specific example of
Processors 34, in one example, are configured to implement functionality and/or process instructions for execution within simulation system 32. For example, processors 34 may be capable of processing instructions stored in storage devices 38. Examples of processors 34 may include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or integrated logic circuitry.
Simulation system 32, in the example of
In some examples, simulation system 32 utilizes communication units 36 to communicate with one or more external devices such as one or more other simulation systems and/or one or more devices under test. For instance, communication units 36 may receive state information from a physically remote simulation system indicating the state of an emulated portion of an electrical system, and provide the state information to one or more other components of simulation system 32 (e.g., observer module 42). As another example, communications units 36 may receive information regarding a state or operation of a device under test and provide such information to emulation module 40 and/or other components.
One or more storage devices 38 may be configured to store information within simulation system 32 during operation. Storage devices 38, in some examples, can be described as a computer-readable storage medium. In some examples, storage devices 38 are a temporary memory, meaning that a primary purpose of storage devices 38 is not long-term storage. Storage devices 38, in some examples, are described as a volatile memory, meaning that storage devices 38 do not maintain stored contents when the computer is turned off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, storage devices 38 are used to store program instructions for execution by processors 34. Storage devices 38, in one example, are used by software or applications running on simulation system 32 (e.g., modules 40 and 42) to temporarily store information during program execution.
Storage devices 38, in some examples, also include one or more computer-readable storage media. In such examples, storage devices 38 may be configured to store larger amounts of information than volatile memory. Storage devices 38 may further be configured for long-term storage of information. In some examples, storage devices 38 include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM).
In some examples, simulation system 32 may contain more or fewer components than those shown in
In the example of
In the example of
Observer module 42 may be executed by processors 34 to determine, based on the delayed version of the at least one remote emulation value and remote system model 44, a respective real-time estimation of the at least one remote emulation value. That is, observer module 42 may communicate with remote system model 44 and/or emulation module 40 to estimate a current state of the remotely emulated portion of the electrical system based on a mathematical model of the remotely emulated portion and/or the current state of the portion emulated by emulation module 40. Observer module 42 may provide the real-time estimation of the at least one remote emulation value to emulation module 40 for use in emulating its portion of the electrical system and/or controlling the device under test.
In this way, observer module 42 may mitigate latency errors inherent in distributed simulations, thereby allowing simulation system 32 to more accurately emulate how the electrical system and the device under test would interact. Additionally, while described herein as being emulated by two simulation systems, the techniques of the present disclosure may be employed to mitigate latency errors among any number of distributed simulation systems. In such examples, simulation system 32 may, for instance, include multiple remote system models, each corresponding to a different portion of the electrical system that is emulated by a different simulation system. In this way, the techniques described herein may allow for larger, more complex simulations than those that may be performed by a single machine while improving accuracy and precision.
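One possible software structure for such an observer module, sketched below with hypothetical class and method names (the disclosure does not prescribe this interface), keeps one remote-system model and one running estimate per physically separate simulation system.

class ObserverModule:
    def __init__(self, remote_models, gain=0.2, dt=1e-3):
        # remote_models: {system_id: callable model(estimate, local_state)}; gain and dt are assumed defaults.
        self.models = remote_models
        self.estimates = {sid: 0.0 for sid in remote_models}
        self.gain = gain
        self.dt = dt

    def update(self, system_id, local_state=0.0, delayed_value=None):
        # Advance the estimate for one remote system; apply a simplified constant-gain
        # correction whenever a (delayed) sample from that system is available.
        estimate = self.estimates[system_id]
        estimate += self.dt * self.models[system_id](estimate, local_state)
        if delayed_value is not None:
            estimate += self.gain * (delayed_value - estimate)
        self.estimates[system_id] = estimate
        return estimate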
As shown in
The equation describing each of the k ∈ {1, 2} sources is
where e_k* are inputs representing externally connected sources, η_k are voltage error gains, and α_k are damping factors.
The system of circuit 52 can be represented in state-space form
where the vector of states is x = [i_1  i_2  v_1  v_2  i]^T,
and the input vector is u = [e_1*  e_2*]^T.
As shown in
System A in
where the vector of states is x_a = [i_1  v_1  i  v_2]^T,
and the input vector is u_a = [e_1*  î_2]^T. With the nominal output of System A at System A as v_2(t), the delayed output
For System B, the state-space representation is
where the state vector is x_b = [i_2],

A_b = [α_2],  B_b = [−η_2  η_2],   (21)

and the input vector is u_b = [e_2*  v̂_2]^T. With the nominal output of System B at System B as i_2(t), define the matrix C_b so that:
Based on these models of System A and System B, observers may be designed to mitigate latency error in accordance with the techniques described herein. In some examples, the observers may be designed quasi-heuristically. For example, it can be assumed that if: (i) both Systems A and B (e.g., as shown in the example of
These assumptions are based on the following: when designing an observer to estimate x ∈ ℝ^N, N ≤ n states of a remote, n-dimensional system, (a) proof of drift-observability of the remote system ensures the Jacobian J(x) in (9) is non-singular for all x ∈ ℝ^n, (b) the guarantee of observation degree ≥ n ensures all remote states are observable, and (c) global uniform Lipschitz continuity ensures adjustment (e.g., feedback control) of states occurs along smooth and well-defined trajectories.
The observer at System A of
f(x(t)) ≜ A_b x_b(t) = α_2 i_2(t)   (23)

g(x(t)) u(t) ≜ −η_2 e_2*(t)   (24)

h(x(t − δ(t))) ≜ C_b x_b(t − δ(t)) = i_2(t − δ(t))   (25)
The mapping z_b = Φ(x_b) for System B, with n = 1, is

Φ(x_b) = x_b = i_2(t).   (26)
Functions of the form Φ(x) = x, x ∈ ℝ, are diffeomorphic (see proof below). Hence, Φ(x_b) is a diffeomorphism for x_b ∈ ℝ. The Jacobian for System B is

J_b(x_b) = 1,   (27)

which is indeed non-zero for all x_b ∈ ℝ.
It is necessary to examine whether remote System B possesses properties (P1) and (P2). For the first property, note that the first condition in (P1) does not apply since n = 1. The second condition is true since L_g L_f^0 h(x_b) = η_2 e_2*(t) ≠ 0 for some x_b ∈ ℝ, in particular for all t such that the input e_2*(t) ≠ 0. Note that the function
is indeed well-defined. Considering property (P2) for System B, note that p(z_b, u_b) is globally uniformly Lipschitz continuous with respect to z_b. Its derivative with respect to z_b is equal to the constant α_2 for all z_b ∈ ℝ. The Lipschitz coefficient can be determined from the inequality
∥p(z_{b,1}, u_b) − p(z_{b,2}, u_b)∥ = |α_2| ∥z_{b,1} − z_{b,2}∥ ≤ γ(∥u_b∥) ∥z_{b,1} − z_{b,2}∥,   (29)

from which the requirement γ ≥ |α_2| can be derived.
The observer can now be constructed according to (9):
where the vector k0 in (30) is determined using the standard method of obtaining a feedback vector for eigenvalue placement in Luenberger observers, which is always possible when the system is observable. This may be done, for example, to avoid reliance on the convergence guarantee in (12) and (13).
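For a linear remote model, selecting k_0 by eigenvalue placement can be done with standard tools by solving the dual (controller) pole-placement problem. The sketch below uses placeholder matrices in observable companion form rather than the actual parameters of circuit 54, which are not reproduced here.

import numpy as np
from scipy.signal import place_poles

# Hypothetical 4-state remote model and output matrix (placeholders, not the circuit-54 values).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0, -4.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

desired_poles = np.array([-5.0, -6.0, -7.0, -8.0])   # chosen for the desired error decay rate

# Observer gain via duality: place the eigenvalues of (A - k0 C) by designing a
# state-feedback gain for the pair (A^T, C^T) and transposing the result.
result = place_poles(A.T, C.T, desired_poles)
k0 = result.gain_matrix.T
print(k0.ravel())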
The observer at System B may be designed in a similar fashion. The observer at System B of
f(x(t)) ≜ A_a x_a(t)   (31)

g(x(t)) u(t) ≜ [−η_1 e_1*(t)  0  0  0]^T   (32)

h(x(t − δ(t))) ≜ C_a x_a(t − δ(t)) = v_2(t − δ(t))   (33)
The mapping z_a = Φ(x_a) for System A, with n = 4 in (4), is
where σ_1 = 1/(LC_1C_2), σ_2 = 1/(LC_1C_2) − R/(L^2C_2), σ_3 = R^2/(L^2C_2) − 1/(LC_2^2), and σ_4 = R/(L^2C_2). The mapping Φ(x) can be shown to be diffeomorphic if A_a is invertible (see proof below). From (34),
which is indeed non-zero for all x_a ∈ ℝ^4. For property (P1), it is easy to verify that L_g L_f^k h(x) = 0 for k = 0, …, n−2, ∀ x ∈ ℝ^n. The second condition in (P1) holds since L_g L_f^3 h(x) = Γ g(x(t)) u(t), where Γ := [ζ_1  ζ_2  ζ_3  ζ_4], ζ_1 = 1/(C_1C_2), ζ_2 = −R/(L^2C_2), ζ_3 = (−1/(LC_1C_2) + R^2/(L^2C_2) − 1/(L^2C_2)), and ζ_4 = R/(L^2C_2). Therefore, L_g L_f^3 h(x_a) = −η_1 ζ_1 e_1*(t) ≠ 0 for some x_a ∈ ℝ^4, in particular for any x_a ∈ ℝ^4 when the signal e_1*(t) ≠ 0. As expected, p(z_a, u_a) is well defined:
Considering property (P2) for System A, note that p(z_a, u_a) is globally uniformly Lipschitz continuous with respect to z_a, since its derivative with respect to this variable is equal to ΓA_a^{−1} for all z_a ∈ ℝ^4. The Lipschitz coefficient can be determined from (8), yielding the requirement γ ≥ ∥ΓA_a^{−1}∥.
The observer can be constructed according to (9):
where x̂_a = [î_1  v̂_1  î  v̂_2]^T and where the vector k_0 in (37) is again found using the Luenberger observer design approach.
The techniques of the present disclosure were applied to a simulated version of circuit 54 of
The passive circuit elements, error gains, and damping factors in Table I were chosen arbitrarily. The update rate τ_u was chosen to be an order of magnitude greater than T_d. The desired convergence rate ρ was an arbitrarily chosen observer design parameter. The communication delay T_d was chosen to be several times greater than the empirically measured network delay (approximately 30 ms per round trip). The vector k_0 was determined to be [1] for θ_{a→b} and [4.06 6.18 4.18 1.06]^T for θ_{b→a}.
As shown in
The techniques of the present disclosure were also experimentally validated in hardware. During this hardware-based experiment, a closed-loop, remote HIL experiment emulating the system shown in
System A and System B portions of circuit 54 were simulated on Opal-RT real-time digital simulators, available from Opal-RT Technologies, Inc. of Montreal, Quebec, Canada. System A was simulated at NREL and System B was simulated at CSU. The observers for both systems were executed on Arduino EtherDUE microcontrollers, one located at each site and available from various manufacturers. The EtherDUE microcontrollers had a built-in Ethernet port in addition to analog and digital I/O channels.
Measurements and observer values were passed between collocated Opal-RT and Arduino microcontrollers using digital and analog I/O channels. State information was passed between remote locations through the EtherDUE boards using Ethernet communication. Synchronization between systems was implemented using algorithms on the Arduino boards in a master/slave hand-shaking configuration.
As previously described, selection of observer parameters and stability analysis for this experiment required an estimate of the network delay between Systems A and B. Since network delays between two systems are generally dependent on many factors (e.g., network load and routing algorithms), the delay was determined experimentally using repeated ping algorithms on the microcontrollers. The sample mean of the round-trip communication delay was found to be approximately 30 ms.
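A comparable delay estimate can be obtained on a general-purpose host with repeated application-level echo probes. The following sketch is an assumption for illustration (it is not the microcontroller ping routine used in the experiment); the peer address is a placeholder, and it requires a cooperating echo service on the remote side.

import socket
import statistics
import time

PEER = ("192.0.2.10", 5005)   # placeholder address and port of the remote HIL site

def measure_mean_rtt(n_probes=100, timeout=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    samples = []
    for i in range(n_probes):
        start = time.perf_counter()
        sock.sendto(str(i).encode(), PEER)
        try:
            sock.recvfrom(64)                              # expect the peer to echo the payload
            samples.append(time.perf_counter() - start)
        except socket.timeout:
            continue                                       # drop lost probes
    sock.close()
    return statistics.mean(samples) if samples else float("nan")

# Example: print(f"mean round-trip delay: {measure_mean_rtt() * 1e3:.1f} ms")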
The following is a proof of diffeomorphisms for various mappings. Given two spaces X and Y, a differentiable mapping f: X → Y is a diffeomorphism if it is (i) bijective and (ii) the inverse mapping f^{−1}: Y → X is differentiable.
As a proof of diffeomorphism for the mapping f(x) = x, consider the mapping f: ℝ → ℝ defined by y = f(x) = x (i.e., the identity function). This map is differentiable with f′(x) = 1. This map is bijective; that is, for each unique x ∈ ℝ there exists a unique y ∈ ℝ, and vice versa. The inverse map is also the identity function which, as stated previously, is differentiable. Thus, the mapping f(x) = x is a diffeomorphism.
As a proof of diffeomorphism for the mapping f(x) = Ax, consider the mapping f: ℝ^n → ℝ^n defined by y = f(x) = Ax, x ∈ ℝ^n, y ∈ ℝ^n, with A an n×n matrix in which all elements [A]_{ij} are constants. This map is differentiable with J_f = A, where J_f is the Jacobian of f. The map f is bijective if and only if A has a well-defined inverse, A^{−1}. Assuming A^{−1} exists, the inverse mapping is described by x = f^{−1}(y) = A^{−1}y, which is differentiable with Jacobian J_{f^{−1}} = A^{−1}. Thus, the mapping f(x) = Ax is a diffeomorphism whenever A is invertible.
In the example of
In the example of
As shown in the example of
In one example, a system configured to perform some or all of the operations described in the example of
In some examples, the at least one remote emulation value represents at least one first remote emulation value, the first observation device is further configured to: receive a delayed version of at least one second remote emulation value, the at least one second remote emulation value representing a third portion of the electrical system, wherein the at least one second remote emulation value is generated by a third simulation system that is physically separate from the first simulation system and the second simulation system; determine, based on the at least one second remote emulation value and a model of the third portion of the electrical system, a respective real-time estimation of the at least one second remote emulation value; and output, to the first simulation system, the respective real-time estimation of the at least one second remote emulation value, and the first simulation system is further configured to emulate the first portion of the electrical system based on the respective real-time estimation of the at least one second remote emulation value.
In some examples, the at least one remote emulation value represents at least one first remote emulation value, and the system further includes: the second simulation system, configured to: emulate the second portion of the electrical system; control, based on the second portion of the electrical system, electrical inputs to a second device under test; and output the at least one first remote emulation value; a second observation device operatively coupled to the second simulation system, the second observation device being configured to: receive, from the first simulation system, a delayed version of at least one second remote emulation value representing the first portion of the electrical system, determine, based on the at least one second remote emulation value and a model of the first portion of the electrical system, a respective real-time estimation of the at least one second remote emulation value, and output, to the second simulation system, the respective real-time estimation of the at least one second remote emulation value. The second simulation system may further be configured to emulate the second portion of the electrical system based on the respective real-time estimation of the at least one second remote emulation value.
In some examples, the first observation device is configured to determine the respective real-time estimation of the at least one remote emulation value further based on at least one local emulation value, the at least one local emulation value representing the first portion of the electrical system. In some examples, the first simulation system is configured to control the electrical inputs to the device under test by modifying at least one of an input current to the device under test or a resistance laid out in series with the device under test. In some examples, the electrical system represents a power system and the at least one remote emulation value is a voltage component and a current component.
In some examples, the first observation device is integrated with the first simulation system. In some examples, the system further includes the device under test.
In one example, a device configured to perform some or all of the operations described in the example of
In some examples, the at least one remote emulation value represents at least one first remote emulation value, the simulation system represents a first simulation system, the at least one processor is further configured to: receive a delayed version of at least one second remote emulation value, the at least one second remote emulation value representing a third portion of the electrical system, wherein the at least one second remote emulation value is generated by a second simulation system that is physically separate from the first simulation system and the computing device; determine, based on the delayed version of the at least one second remote emulation value and a model of the third portion of the electrical system, a respective real-time estimation of the at least one second remote emulation value; and emulate the second portion of the electrical system further based on the respective real-time estimation of the at least one second remote emulation value.
In some examples, the at least one processor is configured to determine the respective real-time estimation of the at least one remote emulation value further based on at least one local emulation value, the at least one local emulation value representing the first portion of the electrical system. In some examples, the at least one processor is configured to control the electrical inputs to the device under test by modifying at least one of an input current to the device under test or a resistance laid out in series with the device under test. In some examples, the electrical system represents a power system and the at least one remote emulation value represents a voltage component and a current component.
In one example, a method for performing some or all of the operations described in the example of
In some examples, the at least one remote emulation value represents at least one first remote emulation value, the simulation system represents a first simulation system, and the method further includes: receiving a delayed version of at least one second remote emulation value, the at least one second remote emulation value representing a third portion of the electrical system, wherein the at least one second remote emulation value is generated by a second simulation system that is physically separate from the first simulation system and the computing device; and determining, based on the delayed version of the at least one second remote emulation value and a model of the third portion of the electrical system, a respective real-time estimation of the at least one second remote emulation value. The second portion of the electrical system may be emulated further based on the respective real-time estimation of the at least one second remote emulation value.
In some examples, the respective real-time estimation of the at least one remote emulation value is determined further based on at least one local emulation value, the at least one local emulation value representing the first portion of the electrical system. In some examples, controlling the electrical inputs to the device under test comprises modifying at least one of an input current to the device under test or a resistance laid out in series with the device under test.
In one or more examples, the techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, which includes any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable storage medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of inter-operative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
The foregoing disclosure includes various examples set forth merely as illustration. The disclosed examples are not intended to be limiting. Modifications incorporating the spirit and substance of the described examples may occur to persons skilled in the art. These and other examples are within the scope of this disclosure and the following claims.
This application claims the benefit of U.S. Provisional Application No. 62/299,801, filed Feb. 25, 2016, the entire content of which is incorporated herein by reference.
This invention was made with government support under grant DE-AC36-08GO28308 awarded by the United States Department of Energy. The government has certain rights in the invention.