Refer now to
An Embedded (computer) System 6 executes an Oscillographic Application that implements the majority of the control settings for the ‘scope, and interprets the Acquisition Record stored within the Acquisition Buffer in light of those control settings. It executes a Rendering Mechanism 74 (a programmatic mechanism stored in firmware) that renders the buffered Acquisition Record into bit mapped data stored in a Frame Buffer 7. The content of the Frame Buffer is displayed on a Display 8 as a trace with suitable annotations and other messages for the operator. The Embedded System 6 may be located within a traditional bench-top laboratory grade ‘scope, and it will be understood and appreciated that, for the sake of brevity, we have in this simplified figure suppressed the details of the user interface and the mechanisms by which the Oscillographic Application controls the operation of the DSO in response to indications from the operator.
The architecture shown in
The rates at which the Digitizer 4 takes the digital samples and at which they are stored as an Acquisition Record within the Acquisition Memory 5 are determined by a Time Base 14. The Digitizer might sample only in response to transitions in a signal from the Time Base, after which the sample is stored in the Acquisition Memory, or the Digitizer might operate at full speed, but with only every nth sample being stored (so-called ‘decimation’). To continue, the usual technique is for the Acquisition Record within the Acquisition Memory to function as a circular structure, where the earliest stored data is overwritten once the Acquisition Record is full. This behavior continues until a trigger event occurs, whereupon some preselected number of further samples is stored, followed then by the interpretation of the completed Acquisition Record and the preparation and display of a trace.
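By way of illustration only, the circular behavior just described might be sketched as follows (the class and method names here are our own inventions for the sketch; an actual DSO implements this in dedicated acquisition hardware, not in Python):

```python
# Sketch of a circular Acquisition Memory: pre-trigger samples overwrite
# the oldest data until a trigger arrives, after which a preselected
# number of further samples completes the Acquisition Record.
class AcquisitionMemory:
    def __init__(self, depth, post_trigger_count):
        self.buf = [None] * depth          # circular storage
        self.write_index = 0
        self.post_trigger_count = post_trigger_count
        self.remaining = None              # None until a trigger occurs

    def store_sample(self, sample):
        """Store one sample; return True when the record is complete."""
        self.buf[self.write_index] = sample
        self.write_index = (self.write_index + 1) % len(self.buf)
        if self.remaining is not None:     # trigger already seen
            self.remaining -= 1
            return self.remaining <= 0
        return False                       # still circulating pre-trigger

    def trigger(self):
        """Called upon the trigger event; start the post-trigger count."""
        self.remaining = self.post_trigger_count
```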
In the block diagram of
The Time Base Trigger 13 may be applied to the Time Base 14, as well as to the Embedded Control System 6. It is not so much (as it was in the old analog ‘scopes) that the Trigger Signal ‘turns on’ or starts the Time Base to produce a ‘sweep’; the Time Base is already doing what it needs to do to facilitate the sampling of the Actual Input Signal 1 and the storing of the Acquisition Record, as mentioned above. Instead, the DSO may recognize that: (1) Subsequent to the trigger event one or more stored samples need to be associated with that trigger and that a certain number of additional samples might still need to be acquired and stored, after which the Acquisition Record is complete; and, (2) The trigger event (as indicated by an edge in the Trigger Signal) is not constrained to occur only at times when a sample is taken. The implication is that the trigger event might not correspond exactly to one sample explicitly contained in the Acquisition Record, but actually corresponds to a location in time between two adjacent entries in the Acquisition Record. Nevertheless, it is desired to correctly indicate where on the displayed trace the trigger event happened. To do this we need to know (and keep track of) a time offset between when the trigger event occurred and the adjacent active edges from the Time Base.
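As a sketch of the bookkeeping involved (our own construction, offered for illustration; the names and the sign convention are assumptions, not a description of any particular DSO's hardware), the fractional offset might be applied like so:

```python
# Sketch: locating a trigger event that fell between two sample clocks.
# trigger_offset is the measured time from the trigger edge to the next
# active Time Base edge, expressed as a fraction of the sample interval.
def trigger_time(next_sample_index, trigger_offset, sample_interval):
    """Time of the trigger event relative to the start of the record."""
    return (next_sample_index - trigger_offset) * sample_interval

# Example: the trigger arrived 0.3 of a sample interval before sample
# 5000 of a record sampled once per nanosecond.
t = trigger_time(5000, 0.3, 1e-9)   # 4.9997e-06 seconds into the record
```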
After the Acquisition Record has been formed and buffered, the Oscillographic Application renders into the Frame Buffer 7 a bit mapped version of a trace according to the control settings that have been established by the operator.
As an aside, we offer the following comment on the operation of the architecture just described. It would be possible for the computer mechanism of the Embedded System and the Oscillographic Application it executes to be intimately involved in controlling every little detail—for all of the ‘smarts’ to be in that program, as it were, and for all the hardware to be as ‘dumb’ as possible. That puts a large burden on the program, and it may not be economical for it to run fast enough to properly control a ‘scope that takes high speed measurements. Accordingly, hardware blocks shown in
Now consider the flowcharts 16-19 of
One might wonder why we bother with the notion of “threads” instead of simply saying “Here are these flowcharts . . . ”, particularly when it is rightly suspected that one microprocessor core can only execute one instruction at a time. Our motivation comes from the following considerations. In a traditional flowcharting environment you put one finger at one place on one flowchart, and that describes at some level of abstraction what the system is doing and will do next. If there is an urge for another and separate activity to proceed, then another finger is needed. Things can get fairly complicated rather quickly if the separate flowcharts are allowed or expected to influence each other. This is particularly so if a time slicing/context switching mechanism (as in Unix or Linux) is used to ‘simultaneously’ execute the different processes. What is more, there might be more than one processor core, or special purpose autonomous hardware mechanisms that run fast (e.g., state machines, FPGAs) and that are dedicated to executing just one flowchart. The overhead for achieving the simultaneity (whether real or faux) is almost never visible at the flowchart level, and even inter-flowchart communication mechanisms, such as flags, semaphores and mailboxes, are apt to conceal as much truth as they afford value in terms of utility. This notion of threads is a generalization that acknowledges that those fussy issues do exist, but says that they belong to a particular implementation in a particular environment, and that if we agree to operate at a useful level of abstraction, we can keep the familiar flowcharts, with the understanding that: we might need many fingers; that flowcharts can and do appear and disappear under the control of some environmental overhead mechanism that we don't need to study; and that the rate of progress for one flowchart is not necessarily the same as for another. The useful grouping of such related activities/processes into coherent separate unified descriptions (our simplified flowcharts of
Now consider the flowchart 16 in
Step 21 is followed by qualifier 22, which asks if the H/W Trigger has been met. If the answer is NO, then a loop 23 is formed by waiting at qualifier 22 until the answer is YES. For the duration of this loop we can say that loop 23 ‘continues’ sampling.
Eventually, there will (presumably) be a YES answer at qualifier 22. This does not necessarily mean that the acquisition record is complete. For example, if the operator has (previously) specified that the Time Reference is to be at the middle of the Acquisition Record, then at the time of the H/W Trigger only half of the desired Acquisition Record has been obtained, and the process of sampling and storing needs to continue to obtain the remaining half. Thus it is that qualifier 24 asks if the Acquisition Record is full, and if the answer is NO, then a loop 25 is formed that, as did loop 23, continues sampling and storing until the answer is YES.
Upon a YES answer at qualifier 24, step 26 ‘suspends’ sampling and storing, but without re-configuring the combination of the Time Base 14, Digitizer 4 and Acquisition Memory 5.
Next, qualifier 28 asks if a S/W Trigger has been set up, which would indicate an intent of operating the DSO with a Composite Trigger. If the answer is NO, then the thread proceeds to step 29, where the trace corresponding to the content of the Acquisition Record is displayed for at least a brief period of time. Following that, step 75 examines the Acquisition Record to perform any automatic measurements that might have been specified. If the operator should press the STOP key, then the thread of flowchart 16 is abandoned, and the displayed trace would remain visible until the operator does something else. On the other hand, if there is no STOP and RUN remains in force, then the thread is producing a ‘live’ display, which is obtained by returning to step 21 at the conclusion of the brief time associated with the display at step 29 (this is the ‘RESUME’ idea mentioned above). The purpose of the brief delay at step 29 is so that the trace will be displayed for at least a perceptible length of time and thus actually be visible. In the live display situation the trace will remain displayed until it is replaced by one associated with the next trigger event. The rate of apparent change in the displayed trace is thus limited by the sum of the brief delay and the time required to obtain a full Acquisition Record having an associated trigger event.
The operation just described for a NO answer at qualifier 28 can be described as automatically honoring a H/W Trigger, since one did occur, and no S/W Trigger has been specified and no Composite Triggering is being attempted.
The answer at qualifier 28 might be YES, which is a case that we are particularly interested in. We can say that a YES answer at qualifier 28 is a provisional honoring of a H/W Trigger in anticipation of a possible Composite Trigger, the occurrence of which will now ultimately depend upon conditions within the Acquisition Record.
Accordingly, in the case of a YES answer for qualifier 28, the next step 30 is to examine the Acquisition Record for the existence of the condition described by the S/W Trigger criterion. The location of the Acquisition Record to be examined might be either the Acquisition Memory 5 or the Acquisition Buffer 73. We shall have more to say about what the examination criteria might be, but for now it is sufficient to think of them as certain properties of a waveform that can be discovered as present or that can be measured. Once the Acquisition Record has been examined by step 30, a decision can be made at qualifier 31 as to whether the S/W Trigger criterion (whatever it is) has been met. If the answer is NO, then the specified S/W Trigger condition has not been met, the opportunity to perform a Composite Trigger is declined, and the entire process thus far described for the thread begins again with a transition back to step 21, so that another candidate Acquisition Record can be obtained for continued operation under the Composite Trigger regime.
On the other hand, if the answer at qualifier 31 is YES, then an instance of Composite Triggering (H/W Trigger then a S/W Trigger) has been achieved. At this juncture, a decision with qualifier 32 is made as to the nature of the S/W Trigger. If it is a ‘zone’ type S/W Trigger (to be explained in due course) then the Time Reference is left set to where it was located in step 27, which location is for the H/W Trigger. This is accomplished by the NO branch from qualifier 32 leading directly to step 29. Otherwise, the Time Reference is set by step 33 to the location in the Acquisition Record that met the S/W Trigger criterion.
The ‘live display’ versus STOP remarks made above pertaining to the H/W Trigger only (NO at qualifier 28) apply equally to the YES branch from qualifier 31 for Composite Trigger operation.
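Gathering the steps and qualifiers just described, the thread of flowchart 16 can be paraphrased in code (a sketch only; every helper name here is our own placeholder, not an actual firmware interface):

```python
# Sketch of the acquisition thread of flowchart 16; the step and
# qualifier numbers from the flowchart are noted in comments.
def acquisition_thread(scope):
    while scope.running:                          # RUN remains in force
        scope.resume_sampling()                   # step 21
        while not scope.hw_trigger_met():         # qualifier 22
            pass                                  # loop 23: keep sampling
        while not scope.record_full():            # qualifier 24
            pass                                  # loop 25: finish the record
        scope.suspend_sampling()                  # step 26 (no re-configuring)
        scope.set_time_reference_to_hw_trigger()  # step 27
        if scope.sw_trigger_specified():          # qualifier 28: YES
            location = scope.examine_record()     # step 30
            if location is None:                  # qualifier 31: NO
                continue                          # decline; back to step 21
            if not scope.sw_trigger_is_zone():    # qualifier 32: not 'zone'
                scope.set_time_reference(location)    # step 33
        scope.display_trace_briefly()             # step 29
        scope.perform_automatic_measurements()    # step 75
```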
We turn now to
Without further ado, consider
What we see in
The example set out in
Once the MEASUREMENT button 36 has been clicked, a dialog box 38 pertaining to a Measurement S/W Trigger appears. Within that is a drop-down menu box 39 that allows the user to select which automatic measurement to use for the S/W Trigger. There are two generally equivalent ways for choices to appear in the list of the drop-down menu for box 39. The first is for the list to simply be a long one that contains all possibilities. That works, but might be awkward, in which case it might be a ‘self-subdividing’ list of a sort that is already well known. The other possibility is one that is actually implemented in the Agilent ‘scopes mentioned above. It is that there already is, for the prior art that uses automatic measurements, a manner of indicating what measurement to make on which channel. Furthermore, it often makes sense, and it is allowed, for there to be several measurements specified. In the Agilent ‘scopes, the specified measurements and their results are indicated with legends in regions 50 and 60, as in
In this example case the selection for the S/W Trigger is “+width” upon channel one. According to our notion of an ‘automatic measurement’ this is sufficient information to produce a measured parameter value (provided, of course, that channel one is in use and there is indeed a H/W Trigger . . . ). As far as a S/W Trigger criterion goes, some additional condition must be specified to allow a S/W trigger decision to be made based on the value of that measured parameter. Accordingly, menu boxes 41 and 42 allow the specification of minimum and maximum limits, respectively, for that measured parameter value. Menu box 40 (Trigger When) allows the choices “OUTSIDE LIMITS” and “INSIDE LIMITS” (not shown). The case of exact equality is excluded as improbable (and of relatively little utility in a typically noisy environment where time and voltage measurements are made to three or four digits).
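A minimal sketch of that decision, assuming the measured parameter value is already in hand (the function and its argument names are our own, invented for illustration):

```python
# Sketch: deciding a measurement-style S/W Trigger from min/max limits.
def sw_trigger_met(value, min_limit, max_limit, trigger_when="OUTSIDE LIMITS"):
    """Return True if the measured parameter value meets the criterion.

    Exact equality with a limit is not treated specially, matching the
    observation that it is improbable in a noisy environment.
    """
    inside = min_limit < value < max_limit
    return not inside if trigger_when == "OUTSIDE LIMITS" else inside

# Example: +width measured as 42 ns, limits of 40 ns and 50 ns, trigger
# when outside the limits: no S/W Trigger for this Acquisition Record.
fired = sw_trigger_met(42e-9, 40e-9, 50e-9, "OUTSIDE LIMITS")   # False
```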
To continue, the Composite Trigger specification is established and in effect as soon as the necessary information has been specified. There are, however, some additional things the operator might wish to specify. He may, for example, wish to limit the portion of the Acquisition Record that is inspected by step 30 in
Here are some other things that are of interest in
Looking briefly now at
Finally, as regards a ‘measurement style’ S/W Trigger, note
Refer now to
Now, while a serial bit pattern is always just a sequence of binary ones and zeros, not all descriptions of those bits are in binary. There are other notations, and these include octal, hexadecimal, ASCII, etc., and the symbols used to denote their values include much more than simply ‘1’ and ‘0’. These other notations are often considerably more convenient than regular binary, and are the remaining additional choices for the drop-down menu of box 69. If a different notation is specified in box 69, then indicia drawn from the corresponding collection of symbols is allowed in box 70, and that indicia is then properly construed as the described sequence of binary bits.
Finally, the search of the Acquisition Record will normally begin at its earliest portion, and proceed to its latest portion, so that the earliest portion of the waveform that satisfies the stated criteria is the ‘one that is found.’ If this is not the behavior desired, then just as with the automatic measurement example of
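The construal of other notations as binary bits, and the earliest-first search, might be sketched as follows (illustrative only; how the serial bit stream itself is recovered from the Acquisition Record is a separate matter not shown here):

```python
# Sketch: construing indicia given in another notation as binary bits,
# then searching a recovered bit stream from its earliest portion onward.
def pattern_to_bits(text, notation):
    """Translate the indicia of box 70 into a string of '1's and '0's."""
    if notation == "binary":
        return text
    if notation == "hex":
        return "".join(format(int(ch, 16), "04b") for ch in text)
    if notation == "octal":
        return "".join(format(int(ch, 8), "03b") for ch in text)
    if notation == "ascii":
        return "".join(format(ord(ch), "08b") for ch in text)
    raise ValueError("unknown notation")

def find_pattern(bit_stream, text, notation):
    """Return the earliest index at which the pattern occurs, or -1."""
    return bit_stream.find(pattern_to_bits(text, notation))

# Example: "A5" in hex is construed as the bit sequence 10100101.
index = find_pattern("0010100101110", "A5", "hex")   # index == 2
```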
We now turn our attention to a Composite Trigger based upon the specification of one or more ‘zones’ for a S/W Trigger. A zone is a closed region in the time/vertical signal space that is defined by the operator and that has significance according to whether the trace for the signal of interest does or does not enter (or touch) the zone. In one actual product, up to two zones can be defined, and a S/W Trigger can be specified as being a particular behavior with respect to those zones by a given signal. In that particular embodiment a zone must be rectangular, but these various limitations are mere simplifications, and a zone could, in principle, be one of a large number of zones, some of which are associated with different traces, and, be of any arbitrary closed shape. We shall have more to say about each of these topics at more convenient times.
Refer now to
We said above that a zone is a closed region in the time/vertical signal space. By this we mean that it is not a static location on the surface of the screen. So, suppose you had a stable display and outlined on the screen with a grease pencil a zone of interest. Ignoring for the moment the obvious problem of how the ‘scope is to decide if the trace intercepts a grease pencil drawing on a glass faceplate, consider what happens if the display settings are changed to pan or zoom without changing the underlying Acquisition Record. Unless the location and aspect ratio of the grease pencil outline change in a corresponding way, a zone would be of very limited use! Or suppose the user simply vertically repositions the trace with the vertical position control to better see the entire excursion of the signal. It seems clear that a zone ought to be a region within the coordinate system that contains the trace, and as such, can be assigned pixels to represent it, just as is done for the rendered trace itself, and that “it moves when the trace moves.” In fact, why not think of it as actually being an adjunct to the trace, almost as if it were part of it? Say, items representing the zone could be added to the acquisition record, and understood as such by the rendering process.
Well, almost. Recall that the Acquisition Memory 5 is closely coupled to the Digitizer 4, operates at very high speed according to an interleaved architecture, and is a semi-autonomous ‘circular buffer’ mechanism, to boot. We are properly quite reluctant to interfere with this carefully engineered high performance aspect of the ‘scope's architecture. And upon further consideration, we appreciate that different locations in the Acquisition Memory may hold data that represents the same waveform feature from one instance of trace capture to the next, anyway. Evidently, we cannot associate particular static locations in the Acquisition Memory with a zone, even if we wanted to, because even if the captured waveform is ‘the same,’ we won't know where the next instance of the H/W Trigger will fall relative to the address space of the Acquisition Memory until it actually happens. Hmm. But we CAN say where a zone of such and such size and shape ought to be relative to the H/W Trigger event, and we CAN have an analytical description of that zone that is of a form that does not need to reside in the Acquisition Memory, proper.
Suppose, then, that for the sake of simplicity and illustration, we limit a zone to being a rectangular region having sides parallel to the horizontal (time) and vertical (voltage) axes of the trace for Channel One. Such a rectangle will have a lower left corner (PLL), and can be fully described according to any of various formats, one of which is [PLL, Height, Duration]. Let's say we have a list of such zones for each trace.
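In code, such a per-trace list might look like this (a sketch only; the field names and units are our own choices for illustration):

```python
# Sketch: one rectangular zone described relative to the trace's axes.
# PLL is the lower-left corner: (time offset from the H/W Trigger, voltage).
from dataclasses import dataclass

@dataclass
class Zone:
    pll: tuple[float, float]   # lower-left corner (seconds, volts)
    height: float              # vertical extent, in volts
    duration: float            # horizontal extent, in seconds

# A list of such zones is kept for each trace, e.g.:
zones_for_channel_one = [Zone(pll=(-2e-6, 0.5), height=1.0, duration=4e-6)]
```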
Now suppose that there has been a H/W Trigger and we wish to determine if some S/W Trigger condition involving zones is also met. There are different ways to address this issue, and depending upon the approach chosen, certain sneaky complications can arise. A brief digression will serve as an example of the sort of mischief that is afoot.
Consider the question: “Does this nut thread onto that screw?” Now, if the nut and screw are really at hand (e.g., lying loose on a table), and are neither too large nor too small to be handled, the easiest way to answer the question is to try the proposed operation using the actual items themselves. Leaving aside any extraneous difficulties associated with the actor doing the manipulation (blindness, lost hands in an accident, etc.), what we are getting at here is that the tangible items themselves will combine in nature to reveal the answer, and we really don't need to know anything at all about them, except how to attempt their manipulation. They will either do it according to some objective standard(s) of satisfaction (to be decided upon), or they won't. We don't need any information about the parts to find the answer if we can actually use the parts for the operation of interest. So, if a drunken machinist is turning out arbitrarily sized nuts and screws (arbitrary in the sense that, absent any notion of step size except what our tools can measure, as in how many different diameters are there between ⅛″ and 1″?), and we are given one of each, a foolproof way to get the answer is to simply try it. Such a trial is a form of analog computation akin to discovering what 2+3 is by dropping two marbles into an empty bag, followed by dropping in three more, and then inspecting the result. (To bring this line of reasoning to a quick and merciful end, we realize immediately that a waveform to be measured and a zone defined by a user are not marbles that a ‘scope can scoop up and drop into a bag . . . .)
Now suppose we don't have the tangible items on hand, and are instead told by some interested interlocutors information about them. If we are told the nut is 4-40 (#4 in diameter, forty threads per inch) and the screw is ¼-20, we can, upon consulting with either our memory or available reference data, conclude that there is no way that this nut will thread onto that screw. If we thought that we were going to be confronted with these sorts of (well defined) questions on a regular basis, we might save much time by simply compiling ahead of time a look-up table or some other codified rule that we would rely upon for an authoritative answer. It is important to realize that the form of information given has a lot to do with whether or not this can be done. Probably only a small fraction of the parts turned out by our drunken machinist could be classified in the manner of this particular example (assuming someone actually devised a way to do so), and most of the time we would be unable to answer the question.
Responding to our protestations of difficulty, our interlocutors agree to be more reasonable. They give us two files stored in memory, with the admonition: “This is all that can and ever will be known about this nut and that screw. We must know the answer, as the fate of the universe hangs in the balance, etc.” So, it seems we are faced with construing two descriptions as bona fide virtual replacements for the real things, and do the best we can to mimic nature by using certain assumptions. That seems fair enough, and it doesn't take us long to come to that conclusion. And after a bit more consideration, we further realize that it matters a great deal to us how the items are described. One of the files is suspiciously short (it contains only the ASCII characters “nut, RH, 4-40”) and the other is several million bytes long and appears to contain an ASCII “screw, RH:” followed by an enormous amount of numerical data formatted according to some protocol whose rules are promised to us, if we but read this other document. We decide that we still have a dilemma. We either need a way to reliably discover if the long file is equivalent to “screw, RH, 4-40” or a way to turn the short file into a second long one of the same protocol as the first long one. That is, unless we can have recourse to some outside authority (fat chance!), the two descriptions need to be of the same sort if we are to have any hope of ourselves comparing two arbitrary items. And that is assuming that we can develop a practical and reasonably reliable self-sufficient comparison process that operates by inspecting the two files, even if they are of the same type. To ensure that we appreciate our situation, our interlocutors offer a final word of advice: “We hired that machinist you mentioned. Don't be fooled by the existence of the short file—there is NO guarantee that the item described by the LONG file fits the 4-40, 6-32, 10-32, 10-24 paradigm . . . .” Evidently, converting a long file to a short file paradigm is a tenuous option, and we console ourselves with the knowledge that it is not too difficult to convert any short file we may be given to a long format version, and then rely upon a robust programmatic comparison mechanism. One of our engineering staff is heard to snort: “Well, that's what computers are for . . . .” But then there is another voice from the back of the room: “Maybe there will never be any short descriptions—they might all be long. And then might it not happen that even two identical screws could have non-identical long file descriptions, say, they started at different points on the screw . . . . The files only MEAN the same thing; but they themselves are NOT the same!” Indeed, there are some sneaky complications afoot! We suspect that this is so even after having gone to the trouble of ensuring that both descriptions are truly commensurable (constructed according to the same paradigm and format).
Leaving now the fable of the nut and the screw to return to the realm of digital ‘scopes, their traces and user defined zones, and as a convenient point of departure, suppose that we have some memory at our disposal. We are allowed to treat it as if it were a Frame Buffer, at least as far as being able to store therein a bit mapped output from the Rendering Mechanism. Now, for each zone in a list of zones for a trace, render a region of the trace that has the same, or perhaps slightly more, duration. Now, render into the same memory the corresponding zone. (We may have to add to the Rendering Mechanism a zone appreciation mechanism, since a zone's description is not, after all, necessarily in the form of an Acquisition Record!) Now ask the question: “Do any of the pixels in one collection overlap pixels in the other (i.e., occupy the same location)?” One way to answer that question is to set a (previously cleared) flag meaning “YES” anytime a value for an illuminated pixel is written to a location that already contains an illuminated pixel value. By doing this for each zone, we would then be in a position to answer such questions as “Did the trace for Channel One go through this zone?” or “. . . through this zone and not through that one?” or “. . . through both?” That is, we are in a position now to decide upon a zone-based S/W Trigger, which may then be described as some Boolean expression involving intersections of one or more zones and its trace. And at this level, we can further see how this would work for more than one trace. In such a case we would say that we have several traces, each with a list of associated zones, and we evaluate a more complex Boolean expression involving several traces and their respectively associated zones.
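A sketch of the flag idea, using a set of pixel locations in place of a scratch frame buffer (all names are ours; a real implementation would set the flag inside the Rendering Mechanism's pixel-write path rather than in Python):

```python
# Sketch of the 'common pixel' test: for each zone, note whether any
# trace pixel lands on a location already illuminated for the zone.
def zones_intersected(trace_pixels, zone_pixel_lists):
    """Return, per zone, whether any trace pixel overlaps a zone pixel."""
    results = []
    for zone_pixels in zone_pixel_lists:
        bitmap = set(zone_pixels)         # 'render' the zone
        flag = False
        for p in trace_pixels:            # 'render' the trace region
            if p in bitmap:
                flag = True               # write hit an illuminated pixel
                break
        results.append(flag)
    return results

# With per-zone answers in hand, a zone-based S/W Trigger is a Boolean
# expression over them, e.g. 'through zone 1 AND NOT through zone 2':
hits = zones_intersected({(3, 4), (4, 5)}, [[(4, 5)], [(9, 9)]])
fired = hits[0] and not hits[1]           # True for this toy data
```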
We appreciate that what we have done here is to convert both the trace and the zone to a common pixel level description. (These would not necessarily have to be pixels that we need to display, or that are even displayable—think of a ‘pixel’ here as a ‘quantized thing.’) The rules of the universe appear to require both the (‘real’) trace and the (‘real’) zone to be continuous, so if their common pixel level descriptions (which are merely discrete quantized descriptions and are only ‘approximately’ continuous, even if we ‘connect the dots’) are sufficiently dense compared to events in the continuous domain, we feel comfortable with the idea that intersection will probably produce at least one common pixel for the two descriptions.
Well, maybe. There again is that voice from the back of the room: “You've zoomed way out and then defined the zone. How is a high speed glitch in the vicinity of your zone rendered? There are only 1024 horizontal pixel locations across the whole screen, you've spent it showing around a millisecond of signal, which is about a microsecond per pixel, and Charlie says the glitch is only a few nanoseconds long . . .” If this technique of comparison at the pixel level were to be our plan, and it were to be carried out at the visible level, then we would want assurances that the rendering mechanism won't lead us astray (i.e., the glitch is not carelessly filtered out). We consult with the rendering department, and are told that this is not necessarily fatal, as the rendering mechanism can be given criteria that function as a form of peak detection/persistence for just such emergencies, so that if we are vigilant a short glitch or other significant feature will not fall ‘through the crack of the floor boards,’ as it were: an identifiable (but necessarily disproportionate) placeholder of some sort will appear in the collection of rendered pixels to ensure a common overlap between the set of pixels used to describe the zone and those used for the trace.
We begin to suspect that this ‘common pixel’ approach, while ‘operative,’ is not, in its simplest form, anyway, totally reliable. It appears to lend itself to exceptions based on corner cases that may lead to confusion and a disgusted operator. On the other hand, it has one nice attraction, in that if we were determined to have a zone of arbitrarily defined shape (say, it was traced on the screen with mouse motion), then there are known ways to obtain a list of pixels that occupy the interior and perimeter of that zone. We leave this here for the moment, and will revisit these ideas once we have more to work with concerning the definition of a zone.
Continuing with our high level description of ‘scope operation, if there is no S/W Trigger, then the results are discarded, and operation continues. If the S/W Trigger condition is met, then the desired screen's worth of trace (as specified by the operator, say, relative to the H/W Trigger) is rendered into the real Frame Buffer and displayed. Any zones that fall within the displayed section of the trace are drawn by the GUI thread 18 of
Continuing now with
Refer now to
The ‘CANCEL’ choice in menu 84 deletes the entire zone associated with that instance of the menu. The ‘WAVEFORM ZOOM’ choice changes the timing and/or voltage scaling of the waveform, rather than creating a zone.
In this connection, our illustration has had a trace displayed on the screen, which allows us to visually confirm that a zone is being specified in an appropriate location relative to that trace. This is certainly convenient, but is not, in principle anyway, absolutely necessary. Recall that we said that a zone was just a closed region in the display space, located relative to the H/W Trigger. If we knew, either from experience, wonderfully good guess-work, or hard analysis, just what the description of a suitable zone was, then one could imagine a zone-oriented GUI that had a menu that simply let us key that stuff in, sans mouse tracks. To be sure, ‘that stuff’ would likely NOT be a description rooted in the common pixel level (life is too short, and how would we get such a thing, anyway?). If we were able to use instead a compact and analytically precise description of some easy zone, such as a rectangle, things would be somewhat easier, although such a ‘manual’ system would still likely not be too convenient to use. We might often get disgusted when things don't work as supposed, owing to incorrect assumptions or to errors attributable to the sheer complexity of trying to keep mental track of all that stuff, some or much of which might be off-screen. After all, making life easier is supposedly what GUIs are all about. This view will, no doubt, add to the appreciation of why we have chosen to illustrate the creation of ‘ZONE 1’ with a GUI and against the backdrop of a trace of the sort (i.e., it is an actual instance of one) that is related to the proposed zone. In this mode of operation we are using an existing instance of the trace as a convenient placeholder, or template, in lieu of an actual (yet to be acquired) trace whose particular shape will figure into the zone-oriented S/W Trigger.
Furthermore, it will be appreciated that the automatic determination by a programmatic system of the coordinates describing a visibly displayed object, such as a rectangle associated with, and created by a mouse controlled screen pointer, is a conventional accomplishment, and involves techniques known in themselves. In the case where a rectangle is created as indicated, it might be described with a collection of parameters that represent the Trace Number, Zone Number (for that trace), an Initial Starting Point, a Width, and a Height: [TN, ZN, PIS, W, H]. For a four trace ‘scope TN would range from one to four (or from zero to three or from A to D for persons of a certain other persuasion), ZN would range from one to however many, PIS would be an (X,Y) pair where the X is a signed offset in time from the H/W Trigger and Y is a voltage, W is a signed excursion away from X and H is a signed excursion away from Y. As mentioned above, the discovery of (X, Y) and of W and H from housekeeping parameters maintained by the ‘scope and the motion of the mouse is a task whose accomplishment is known in the art.
As a further digression in connection with the definition of the size, shape and location of a zone, it can also be appreciated that the limitation of having a zone be a well behaved (think: easy to characterize) rectangle can be removed with the aid of techniques borrowed from the computer graphics arts that serve the community of solid modelers and CAD (Computer Aided Design) users (i.e., techniques found in their software and special purpose hardware packages). So, for example, if our ‘scope user were to be allowed to describe a zone by tracing an irregular but useful outline with a mouse controlled cursor, the zone's perimeter can be construed as a collection of linked line segments. This in turn amounts to a collection of one or more polygons occupying the interior of the zone, and the computer graphics art is replete with ways to perform polygon fill operations. That is, it is known how to find the collection of pixels, described in some arbitrary coordinate system, that occupy the perimeter and interior of a given polygon. (The task required here would not tax those arts in the least—they can even do it for collections of adjoining polygons that lie on a curved surface, where parts of that surface are to be trimmed away at the intersection with yet another surface . . . .) Once the collection of such pixels is known it is not difficult to detect their intersection or not with those of a nearby trace. (We have already alluded to one way: sharing of a common pixel location. Another is to detect an intersection of a line segment formed by two adjacent pixels in the trace with a line segment on the boundary of a polygon belonging to the zone.) The principal difference between this more general shape for a zone and the earlier well behaved rectangle is that the rectangle can be more simply described symbolically as a [PIS, W, H] and the temptation is to then use simple comparisons to horizontal and vertical ranges to ascertain if any (think: each) ‘nearby’ point on the trace falls within the rectangle, or that none do.
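The second of the two intersection tests alluded to parenthetically above, between a trace segment and a polygon boundary segment, can be sketched with the standard orientation test (our own illustrative code; collinear corner cases are ignored for brevity):

```python
# Sketch: does the segment between two adjacent trace pixels intersect a
# segment on the boundary of a zone polygon?
def orientation(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    return (orientation(p1, p2, q1) != orientation(p1, p2, q2) and
            orientation(q1, q2, p1) != orientation(q1, q2, p2))

# Example: a trace segment crossing one edge of a zone polygon.
crossed = segments_intersect((0, 0), (2, 2), (0, 2), (2, 0))   # True
```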
We also realize that there is a significant operational difference between comparing a trace segment expressed as a complete (think: ‘long’) list of discrete values (whether as measured or as rendered into pixel locations) against a (long) comparably formatted “complete” list of values representing an arbitrary zone, on the one hand, and on the other, comparing against a compact (analytical) description for a ‘well behaved’ zone that is tempting precisely because it is brief. It is not so much that one way is right and the other is wrong. It is more that each has its own type of sneaky mischief. We needn't get stuck here, and it is sufficient to suggest some of the traps. No list of points will ever exhaust those that might be needed along a line, or worse, within a region. So examining two lists to find commonality can't be trusted absolutely to produce a proper intersection/non-intersection determination. We find ourselves on the one hand invoking Nyquist and bandwidth limitations, while on the other hand complaining about the large amounts of memory needed to always render at a resolution commensurate with bandwidth. Smart rendering can help, as can some other techniques.
Now at this point it will not be surprising if we observe that comparison at the common pixel level is outwardly convenient (since a ‘scope already has a Rendering Mechanism for producing such pixels, whether to be displayed or not), but is ‘rendering sensitive,’ as it were. We note also that if we were intent upon it, we could consume much memory and render to some other minutely quantized level of description that is never displayed as such. To implement such a ‘split personality’ might be confusing, even if we were careful to be consistent, since there may arise questions concerning the possibility that the displayed results (rendered one way) might not match the invisible results (rendered another way) upon which Composite Trigger decisions are based. Furthermore, add-on rules for the different renderings may or may not always solve the problem, although in general the results can be quite satisfactory. We suspect that the price for this outward convenience is higher than thought at first glance.
Finally, it seems that no matter how we proceed, we eventually do end up using a variant of one or both of these two decisions: “Is this described location (i.e., a pixel-like item, whether displayable in the Frame Buffer or a non displayed resident in some ‘comparison space’) shared with another (comparable) collection of described locations?” and “Does this line segment (known by its end points) intersect any of those (similarly known) line segments?” At the end of the day, we begin to appreciate why computer graphics systems have such a voracious appetite for processing power: they take such baby steps, and there are so very many to take . . . .
It is now easy to appreciate the relative ease with which the notion of a zone can be implemented by a rectangular region R of XLEFT to XRIGHT and of YUP to YDOWN [described analytically as the points (XL, YD), (XR, YD), (XR, YU), (XL, YU)] by asking if each member (XP, YP) of the Acquisition Record (perhaps after some digital signal processing for fidelity purposes, but not yet as rendered for Frame Buffer use!) meets the condition:
(XL ≤ XP ≤ XR) AND (YD ≤ YP ≤ YU)
If this logical conjunction is TRUE, then we can declare that the trace has entered or touched the region R. Equally important, we can also be absolutely certain that, if that logical conjunction is FALSE, then the trace did not enter or touch the zone. Note that only simple comparisons on one axis at a time are needed: there is no need to check, say, the vertical axis unless the time axis condition is already met. Furthermore, we can state these comparisons in terms of actual times (relative to the time reference) and voltages. This is asking if members of the Acquisition Record fall within the shadow, as it were, of a range, which is somewhat different than asking if two disparate descriptions become quantized into a use of the same descriptor (pixel location). Upon reflection, we appreciate that the orthogonal nature of the display space axes, and a requirement that the sides of the rectangle be parallel to those axes, allows us to detect that two line segments intersect without going through the pain of having to construct (solve for) the intersection. We decide that, for those reasons, we prefer rectangular zones, or ‘composite zones’ made up of a reasonable number of nicely oriented adjacent rectangles.
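Stated as code operating directly on Acquisition Record entries (a sketch, with the record assumed ordered in time, as it is; the function name is our own):

```python
# Sketch of the rectangle test of the preceding conjunction, applied to
# Acquisition Record entries in actual time/voltage units; the vertical
# comparison is skipped unless the time-axis condition is already met.
def trace_touches_zone(record, xl, xr, yd, yu):
    """record: iterable of (time, voltage) pairs, time relative to the
    time reference. Returns True as soon as any entry falls within R."""
    for xp, yp in record:
        if xl <= xp <= xr:            # time axis first
            if yd <= yp <= yu:        # vertical axis only when needed
                return True
        elif xp > xr:                 # record is time-ordered, so we are
            break                     # past the zone; no need to continue
    return False
```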
We can similarly find the comparable answer for that trace and another zone, as well as for another trace and any zone. In a multiple zone case, it is then just a matter of recognizing a desired corresponding pattern of TRUE and FALSE for however many of those zones that have been invoked as part of a Composite Trigger specification.
Now consider
In
We turn now to another manner of S/W Trigger that can be used as part of a Composite Trigger condition: lack of monotonicity on an edge. With reference then to
Once again the operator has conjured the INFINISCAN Mode menu 95, and then selected (clicked on) the button marked ‘NON-MONOTONIC EDGE.’ With that done, the system produces the menu portion 96, which is specific to that choice. Within menu portion 96 are three choices for edges: rising, falling, or either. In this case, the operator clicked on the button 98 for ‘either.’
At this point we must digress slightly to establish what is meant by an ‘edge.’ There are other automatic measurements that the ‘scope can make, and pursuant to those the system needs to know the excursion limits of the signal of interest. That is, its maximum and minimum values. For various reasons it is typical for the respective 90% and 10% values of those two excursion limits to be called the ‘measurement limits.’ Such measurement limits can be found automatically using the 90%/10% criterion, found automatically using a different percentage, or simply specified as this voltage and that voltage. (The IEEE has also published a standard 181-2003 that defines what an ‘edge’ is.) We here stipulate that the identification or specification of such measurement limits is conventional, and we will consider an edge to be points along the trace that lie between the measurement limits. If the Y value (voltage) of an entry in the Acquisition Record falls within the measurement limits it can be considered to lie on either a rising or falling edge, and forming the ΔY for successive pixels will indicate which.
One way to proceed is to traverse the Acquisition Record and maintain in running fashion useful indicators about whether the current location is part of an edge, and if so, whether rising or falling. Now we need an operational definition of non-monotonic. We might look for sign reversals for a piece-wise first derivative that are inspired by inflections in the trace. This approach can open the door to ‘noise on flat spots’ looking like a failure to be monotonic. Filtering by digital signal processing can be a cure for this, but its description is more suited to the frequency domain than to the natural time domain of a waveform. A hysteresis value H is the easy cure for this in the time domain, and leads us instead to this definition: If, for the duration of a falling edge, a voltage value for a subsequent point along that edge is greater than H plus the voltage value of any prior point on the edge, then the edge is non-monotonic. For a rising edge, if the voltage value of a subsequent point is less than the voltage of any prior point along the edge as diminished by H, then that edge is non-monotonic. We now have most of the tools we need to implement a S/W Trigger based on a non-monotonic edge.
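The two definitions translate directly into code (a sketch; edge_points stands for the successive voltages of Acquisition Record entries already identified as belonging to one edge, and the function names are our own):

```python
# Sketch of the hysteresis definitions just given, one edge at a time.
def falling_edge_is_non_monotonic(edge_points, h):
    lowest_so_far = edge_points[0]
    for v in edge_points[1:]:
        if v > lowest_so_far + h:     # rose by more than H above a prior point
            return True
        lowest_so_far = min(lowest_so_far, v)
    return False

def rising_edge_is_non_monotonic(edge_points, h):
    highest_so_far = edge_points[0]
    for v in edge_points[1:]:
        if v < highest_so_far - h:    # fell by more than H below a prior point
            return True
        highest_so_far = max(highest_so_far, v)
    return False
```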
The hysteresis value H is supplied/edited by manipulating the content of box 99 in the menu portion 96.
For the example of
Finally, we touch briefly on the detection of a runt excursion for use as the S/W component of a Composite Trigger. The operational definition is this. If a waveform descends through the lower measurement limit, then later rises by H and then descends again to the lower measurement limit without having first reached the upper measurement limit, it is a runt. If a waveform rises through the upper measurement limit, then later falls by H and then rises again to the upper measurement limit without having first reached the lower measurement limit, it is also a runt.
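The falling case of this definition might be sketched as a small state machine (the rising case is symmetric; the names, and the interpretation of ‘rises by H’ as relative to the low point reached, are our assumptions):

```python
# Sketch: scan successive voltages for the falling-case runt excursion.
def has_low_runt(voltages, lower, upper, h):
    state = "idle"
    low_point = None
    for v in voltages:
        if v >= upper:                 # upper limit reached: not a runt,
            state, low_point = "idle", None   # so cancel any candidate
        elif state == "idle":
            if v <= lower:             # descended through the lower limit
                state, low_point = "below", v
        elif state == "below":
            low_point = min(low_point, v)
            if v >= low_point + h:     # later rose by H
                state = "rebounded"
        elif state == "rebounded":
            if v <= lower:             # descended again to the lower limit
                return True            # runt excursion found
    return False

# Example: dips below lower (0.2 V), rebounds by more than H (0.3 V),
# dips again without ever reaching upper (1.8 V): a runt.
runt = has_low_runt([2.0, 0.1, 0.6, 0.1], lower=0.2, upper=1.8, h=0.3)  # True
```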