This disclosure relates to infrared imaging, and more specifically to updating non-uniformity calibration tables for infrared imaging.
An infrared imaging system can operate in an environment where the system views a target scene through a window while the system is flying through the atmosphere. For example, an infrared imaging system can be used for missile-based imaging or other high-velocity imaging platforms (e.g., airplane-based platforms). Infrared imagers typically consist of an n×m sensor array of photon-collecting cells producing an n×m array of output pixel values. Because these cells vary in their offset and gain responses to a given input flux, the digitization of their output will contain false pixel-to-pixel variations that must be corrected if this output is to accurately represent the input scene. Typical infrared imagers correct the pixel-to-pixel variation using n×m arrays of gain and offset correction values. In cases where the collecting cells' responses to flux do not change over time, these calibration tables can be calculated in the factory; however, in the more common case where the offset and/or gain responses do change over time, these calibration tables must be calculated either during or just prior to use. In this case they are typically generated using a uniform reference source that is introduced into the optical path of the imaging device, such as a shutter or a flag. In particular, a solenoid- or motor-driven shutter or flag (a thin emissive metal plate) is moved into the path of the infrared camera, causing a spatially uniform, plate-temperature-dependent spectrum of radiation to be projected onto the infrared camera's sensor. This technique is suitable in implementations and use cases where: the imaged scene temperature is similar to the camera's internal temperature; the time during which the camera is blind as a result of the movement of the plate is tolerable; only a single level of reference flux from a fixed plate temperature is required to update the offset correction (i.e., new gain correction coefficients are not necessary); and the negative impact of a moving mechanism is acceptable. However, there are several applications where this technique is not suitable, such as missile-based imaging systems.
Various aspects of at least one example are discussed below with reference to the accompanying figures, which are not intended to be drawn to scale. The figures are included to provide an illustration and a further understanding of the various aspects and examples and are incorporated in and constitute a part of this specification but are not intended to limit the scope of the disclosure. The drawings, together with the remainder of the specification, serve to explain principles and operations of the described and claimed aspects and examples. For purposes of clarity, not every component may be labeled in every figure.
Techniques for non-uniformity correction are provided for use in an infrared imaging system. For example, the techniques described herein are useful when the gain or offset response characteristics of an infrared imager can be observed to have changed so much that application of the current calibration tables no longer produces a sufficiently accurate representation of the input scene. The techniques described herein are also useful when the currently observed scene conditions deviate from a calibration point so much (e.g., beyond a given threshold) that the use of the available calibration data (obtained, for example, during factory calibration) to provide non-uniformity correction of the current scene would be expected to be inaccurate.
The techniques as described herein are particularly well-suited, according to some embodiments, to an imaging system integrated into a missile or another high-velocity platform for seeking and tracking the missile/platform position during its flight to a target or destination. Other high-speed applications can benefit as well, as will be appreciated. In an embodiment, the on-board imaging system includes an infrared camera, a projecting optic, and an infrared light source. As will be appreciated in light of this disclosure, the infrared light source (e.g., LED) can be used for infrared camera response calibration to minimize residual non-uniformity correction errors in the image that arise either from the imaged scene light flux deviating from a calibration point, thereby reducing the applicability of the stored calibration data, or from drift in the offset and gain response of the camera's pixels since the time of initial calibration, which is typical for infrared imagers. The light source is projected into the optical path of the infrared camera (e.g., an infrared focal plane array) from a position located nominally at a pupil plane of the imaging system. To minimize both the amount of the target scene blocked by the projection components and the added emissions from those components, the light source can be coupled into a suitable transmission medium such as an optical fiber or waveguide or other light-guiding structure having a relatively small spatial cross-section with respect to the field of view of the projecting optic and camera. The output of the light-guiding structure can be positioned at the pupil plane with the output numerical aperture configured such that the light emitted from the light-guiding structure subtends the field of view as seen by the camera from the pupil plane. To achieve a calibrated response from the camera, the light source can be tuned to one or more known intensity levels, and each pixel's response to each of those known intensity levels can be used to calculate the pixel-specific coefficients of the camera response function under the current conditions. The pixel-specific coefficients may be stored in, for example, n×m non-uniformity calibration tables. The calibration tables can then be used or otherwise made available to correct for non-uniformity, as will be appreciated.
An example infrared imaging system includes an infrared sensor configured to receive infrared light comprising both target scene and background flux, an analog-to-digital converter (ADC) to convert the collected signal into image data values, and an infrared light source configured to provide calibrating light to adjust the background flux. The light source is positioned such that an output of the light source is at a pupil of the infrared imaging system. The response of the camera to the corresponding light source as described herein is not limited to a single spectral wavelength band. In an example, the light source includes a first infrared light source configured to output a first light signal at a first wavelength and a first output intensity, a second infrared light source configured to output a second light signal at a second wavelength and a second output intensity, and a beam combiner configured to combine the first light signal and the second light signal into a combined beam and direct the combined beam to a light-guide structure, wherein the light-guide structure is configured to transmit the combined beam from the beam combiner to the pupil of the infrared imaging system. The infrared imaging system can further include at least one image processing device configured to receive the image data generated by the infrared sensor and ADC, calculate the input scenic flux for each wavelength, and determine whether the scene fluxes are sufficiently corrected by the pixel response function coefficients of the closest calibration tables. If the processing device determines that the scene fluxes are not sufficiently corrected by the pixel response function coefficients of the closest calibration tables, the system can generate one or more updated calibration tables for the infrared imaging system.
When subjected to dynamic environments, such as a missile seeking a target, the response uniformity of a camera in an on-board infrared imaging system can be impacted by changes in pixel-to-pixel response to the infrared light collected by the imaging system. Left uncorrected, any pixel-to-pixel response non-uniformity of a given infrared imaging system (and particularly with respect to cooled infrared imaging systems) substantially limits the ability to obtain an accurate understanding of the flux distribution of an imaged scene. The pixel-to-pixel differences, in their offset and gain response, can form a veiling image overlaid onto a target scene image, reducing the ability to discern objects in the underlying image, particularly low-contrast objects that may be of interest. This fixed pattern noise can be addressed by characterizing the response function at each pixel and correcting for the degradation to the image resulting from pixel-to-pixel variations. This calibration process is generally referred to as non-uniformity correction.
Further, once an infrared imaging system is calibrated with respect to a specific set of scene intensities, a non-uniformity of response can re-emerge as a fixed pattern noise and degrade the ability to interpret the image. The non-uniformity correction approach typically makes assumptions about how the camera response changes with respect to input flux. Often a linear response is assumed over a set flux range, and at least two calibration flux levels are chosen against which the camera response at those levels is measured. In such an example, offset and gain terms are calculated for each pixel such that the corrected output of the pixels at either of these calibration flux levels is the same for every pixel. This two-point non-uniformity correction is applied to correct for scene flux levels between the two selected calibration fluxes, and typically also just above and below the flux interval. The non-uniformity correction data is therefore most directly applicable when imaging scenes with the same flux as the calibration points. Because the assumption of linearity is only approximate, imaged flux levels that differ from the calibration levels can only be imperfectly corrected, typically with a residual non-uniformity correction error that increases with the distance of the flux level from a calibration point. Drift of the pixel offset and gain response over time can also be an additional source of residual error.
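For illustration only, this two-point computation can be sketched in Python/NumPy as follows. This is a minimal sketch, not part of the original disclosure; the notation anticipates the pixel normalization equation ra = mp·rp + bp used later in this description:

```python
import numpy as np

def two_point_nuc(frame_lo, frame_hi):
    """Compute per-pixel gain (mp) and offset (bp) tables from two
    uniform-flux calibration frames, so that the corrected output
    ra = mp*rp + bp equals the array-average response at both
    calibration flux levels."""
    target_lo = frame_lo.mean()  # common target response at the low flux level
    target_hi = frame_hi.mean()  # common target response at the high flux level
    span = frame_hi - frame_lo
    span = np.where(span == 0, np.nan, span)  # guard dead pixels with no response span
    gain = (target_hi - target_lo) / span     # mp
    offset = target_lo - gain * frame_lo      # bp
    return gain, offset

def apply_nuc(raw_frame, gain, offset):
    """Apply ra = mp*rp + bp element-wise across the n x m array."""
    return gain * raw_frame + offset
```

With only two calibration frames, the fit is exact at both flux levels, which is why the residual error grows as the imaged flux departs from those levels.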
In practice there are a number of challenges to creating calibration coefficients for a pixel's response to a known flux that is meant to be close to the flux of a later imaged scene. Generally, the attempt is made by, for example: 1) a factory or manufacturer calibration of the system response under anticipated operational and scene flux conditions, creating gain and offset correction coefficients for each pixel; and 2) replacing the factory offset calibration coefficients via periodic imaging of the uniform flux from an electro-mechanical metal flag or shutter, at the camera's internal ambient temperature, brought into position near the pupil, creating new offset correction coefficients from the pixel-to-pixel variations in response.
However, offset correction coefficients created for the ambient camera temperature flux may not necessarily correct offset non-uniformities at the higher flux level of the target. In the case where pixel gains have drifted, the new offset correction coefficients will necessarily include correction for drift in gain, and so can have useful "accuracy" only near the temperature at which they are calculated. Missile-based seeking or imaging is an application where the mean scene temperature is much lower than, and the temperature of the target to be examined much higher than, the camera's internal ambient temperature. In addition, there are several applications where using a mechanically operated component to generate the scene used to perform a fresh offset calibration can negatively impact the ability of the imaging system to acquire images accurately and in a timely manner. For missile-based imaging, the operational period is too brief and dynamic to tolerate seconds of non-imaging calibration time, and the limited reliability of a moving mechanism in a high-shock, high-vibration environment makes such mechanisms impractical. As such, the techniques as described herein use an on-demand calibration source that requires no moving parts.
System and Device Architecture
It should be noted that the window 115 is shown by way of example only and, in some examples, the window can be generalized to any aperture or path providing a line of sight between the infrared imaging device 105 and the target scene 110.
Additionally, as shown in
In system 100 as shown in
In order to account for non-uniformity in the imaging data, and as will be appreciated in light of this disclosure, an infrared imaging system can include a calibration source that provides, for example, controllable irradiation levels that provide for updating non-uniformity calibration tables of a mono or multiband infrared imager. In certain implementations that include a two-color infrared focal plane array (FPA), the FPA can be illuminated with light from a pair of infrared LEDs, each within a color band of interest. The LEDs can be coupled into a light-guide structure that will not impair the field of view of the system 120, such as a multimodal optical fiber, and projected from one of the imaging system's pupils. This approach enables a non-uniformity calibration source that is all solid-state, can provide calibration information in a single camera frame with multi-spectral illumination, has controlled effective temperatures, and is minimally invasive. As will be further appreciated, the uniformity of the FPA illumination can be tailored by adjusting various factors such as, for example, the LED coupling angles, fiber optic core diameter, and fiber optic projection angle.
As further shown in
As further shown in
Referring back to
As shown in the example embodiment of
Additionally, each of the LEDs 305 and 310 can be configured to output a particular operational intensity. For example, each of the LEDs 305 and 310 can be configured to output a signal between about 180 mW and about 220 mW. In certain implementations, each of the LEDs 305 and 310 can be configured to output a different signal intensity. For example, LED 305 can be configured to output a signal intensity of about 180 mW while LED 310 is configured to output a signal intensity of about 220 mW. In some examples, in response to an instruction from controller 300, one or both of LEDs 305 and 310 can be configured to alter their signal intensities during operation.
As further shown in
In certain implementations, the LED controller 300 can be operably connected to a processing device such as an image processing device (e.g., image processing device 125 as shown in
It should be noted that, as described above, the infrared system as shown in
However, in some examples, the calibration source optics can be projected directly from or close to the cold stop aperture of the infrared imaging system without using a re-imaging optical system. For example, the output of the calibration source can be directly inserted into a vacuum assembly of the infrared imaging device and positioned adjacent to the cold stop aperture. As shown in
However, in order to include the stiffening ferrule 407 in the vacuum assembly, the vacuum assembly may be modified to include an opening for the stiffening ferrule. Similarly, to provide atmospheric integrity for the vacuum assembly, the opening around the stiffening ferrule can be sealed and/or insulated.
Depending upon the speed of the computing device, the number of pixels in the focal plane, and the number of frames that must be collected to perform the calibration algorithm, a similar process as that shown in
For example, the response characteristics of the infrared imager can be measured and illustrated in a line graph, as shown in
As further shown in
To reduce the size of the residuals, the response values can be calibrated piece-wise over multiple flux intervals. For the example shown in
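While the referenced figure is not reproduced here, the selection logic for piece-wise calibration can be sketched as follows. This is an illustrative Python sketch only; the segment data structure is an assumption, not part of the original disclosure:

```python
def apply_piecewise_nuc(raw_frame, est_flux, segments):
    """Select and apply the two-point correction whose calibration
    interval contains the estimated scene flux, keeping the residual
    non-uniformity error small across a wide flux range.

    `segments` is assumed to be a list of
    (flux_lo, flux_hi, gain_table, offset_table) tuples, one per
    piece-wise calibration interval."""
    for flux_lo, flux_hi, gain, offset in segments:
        if flux_lo <= est_flux <= flux_hi:
            return gain * raw_frame + offset
    raise ValueError("estimated flux is outside all calibration intervals")
```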
Referring back to
The second type of failure, drift of imaging system pixel response, can be detected in two ways. The first is by examining the corrected imaging system response to a known signal intensity and determining whether the absolute response error exceeds some threshold; this is generally not possible for certain imaging systems, such as missiles in flight, which cannot create a uniform background of known flux level to use as input to the imaging system. However, the embodiments described herein can be used to determine whether the average response has changed. The system can estimate the current background flux from the average background response (counts); calculate and command a light level that adds a known delta flux; measure the average response to this new flux; calculate the expected average response to this new flux (using the previously established average gain value); and finally, determine whether the difference between the actual and calculated average response exceeds a threshold, triggering a recalibration.
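A minimal sketch of this average-response check follows (Python/NumPy; the delta-flux value, threshold, and function name are hypothetical placeholders, and a locally linear counts-per-flux response is assumed):

```python
import numpy as np

# Hypothetical values for illustration; real ones would come from the
# imager's radiometric model and stored calibration data.
DELTA_FLUX = 1.0e14             # commanded known increase in flux
MEAN_RESPONSE_THRESHOLD = 10.0  # max tolerated error in mean counts

def average_response_drift(frame_before, frame_after, avg_gain):
    """Return True if the measured change in mean counts, after adding a
    known delta flux from the calibration source, deviates from the
    change predicted by the previously established average gain by more
    than the threshold, indicating a recalibration should be triggered."""
    measured_delta = frame_after.mean() - frame_before.mean()
    expected_delta = avg_gain * DELTA_FLUX  # counts predicted by stored gain
    return abs(measured_delta - expected_delta) > MEAN_RESPONSE_THRESHOLD
```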
In addition to the above method, which determines whether there has been a change in overall average imaging system response, there is a second method of determining non-uniformity calibration table failure that measures how much individual pixels' responses have drifted away from each other. In this method, the processing device examines the pixel-to-pixel differences for corrected pixels known to be subject to identical input intensities, whether by using a large area known to receive only background flux or, in a case where there is a non-uniform scene, by shifting the imaging device in object space so as to collect information about specific sample points in object space from multiple pixels. If the average standard deviation of these values exceeds a determined threshold, for example the maximum standard deviation allowable at that average flux level, the non-uniformity calibration tables are determined to be insufficiently accurate, triggering a recalibration.
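The corresponding pixel-spread check might be sketched as follows (the threshold function is an assumed stand-in for the maximum allowable standard deviation at a given average flux level):

```python
def pixel_spread_drift(corrected_region, max_std_at_flux):
    """Return True if corrected pixels known to see identical input flux
    (e.g., a large background-only area) have spread apart by more than
    the maximum standard deviation allowable at that average flux level,
    indicating the calibration tables are insufficiently accurate."""
    return corrected_region.std() > max_std_at_flux(corrected_region.mean())
```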
Referring again to
In such an example, and provided that the imaging device in this embodiment can be temporarily pointed at a featureless background, the processing device can perform a calibration from calibration temperatures/flux levels corresponding to 1) the current background flux, 2) the maximum estimated scene flux, and optionally 3) a flux value in between. Based upon the calculations, the computing device can step through 535 these light levels, pausing to collect image data at each level. The computing device can then turn off the calibration light source and process 540 the collected image data to calculate new mp and bp values for the pixel normalization equation ra = mp·rp + bp, as well as a new flux threshold for determining future flux range problems. For example, as shown in
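A compact sketch of this recalibration pass is shown below (Python/NumPy; `capture_at` is a hypothetical callback that commands the calibration light source to a flux level and returns the resulting raw frame):

```python
import numpy as np

def recalibrate(capture_at, flux_levels):
    """Fit new per-pixel coefficients for ra = mp*rp + bp from frames
    captured while stepping the calibration source through the given
    flux levels (e.g., current background flux, maximum estimated scene
    flux, and optionally one level in between)."""
    frames = np.stack([capture_at(level) for level in flux_levels])  # (k, n, m)
    targets = frames.mean(axis=(1, 2))  # array-average response per level
    # Per-pixel least-squares fit of target = mp * raw + bp; with two
    # levels this reduces to the standard two-point correction.
    raw_mean = frames.mean(axis=0)
    dev = frames - raw_mean
    var = (dev ** 2).sum(axis=0)
    var = np.where(var == 0, np.nan, var)  # guard dead pixels
    cov = (dev * (targets - targets.mean())[:, None, None]).sum(axis=0)
    mp = cov / var
    bp = targets.mean() - mp * raw_mean
    return mp, bp
```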
It should be noted that the specific values for ranges of interest, pixel responses, thresholds, and signal levels as shown in
As noted above, because the infrared LEDs included in the calibration source are able to change intensity quickly, the computing device can be configured to verify that a change in output signal intensity at the calibration source results in an appropriate change to the response (counts) while continuing to monitor for changes in pixel response that exceed a determined threshold. For example, based upon the speed of the intensity changes of the infrared LEDs, the computing device can verify the change to the flux in a single frame as captured by the camera, thereby reducing the amount of time the computing device is not imaging the target scene. This is a valuable improvement over traditional mechanical solutions, as the time used by mechanically positioned calibration devices typically consumes many frame periods, during which the system is blind to the target scene. In certain applications, such as seeking by a missile or similar munition, as the missile approaches the target scene, noticeable changes in the target scene occur more quickly and the guidance system of the missile has less time to apply course corrections. As such, it is advantageous to minimize the amount of time the camera is not able to accurately image the target.
In certain implementations, the computing device 700 can include any combination of a processor 710, a memory 720, a storage system 730, and an input/output (I/O) system 740. As can be further seen, a bus and/or interconnect 705 is also provided to allow for communication between the various components listed above and/or other components not shown. Other componentry and functionality not reflected in the block diagram of
The processor 710 can be any suitable processor, and may include one or more coprocessors or controllers, such as an audio processor, a graphics processing unit, or a hardware accelerator, to assist in control and processing operations associated with computing device 700. In some embodiments, the processor 710 can be implemented as any number of processor cores. The processor (or processor cores) can be any type of processor, such as, for example, a microprocessor, an embedded processor, a digital signal processor (DSP), a graphics processor (GPU), a network processor, a field-programmable gate array, or other device configured to execute code. The processors can be multithreaded cores in that they may include more than one hardware thread context (or "logical processor") per core. Processor 710 can be implemented as a complex instruction set computer (CISC) or a reduced instruction set computer (RISC) processor.
The memory 720 can be implemented using any suitable type of digital storage including, for example, flash memory and/or random-access memory (RAM). In some embodiments, the memory 720 can include various layers of memory hierarchy and/or memory caches as are known to those of skill in the art. The memory 720 can be implemented as a volatile memory device such as, but not limited to, a RAM, dynamic RAM (DRAM), or static RAM (SRAM) device. The storage system 730 can be implemented as a non-volatile storage device such as, but not limited to, one or more of a hard disk drive (HDD), a solid-state drive (SSD), a universal serial bus (USB) drive, an optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network accessible storage device.
In certain implementations, the memory 720 can include operating instructions 725 that, when executed by processor 710, can cause the processor to perform one or more of the process steps and functions as described herein. For example, if computing device 700 represents the computing device as described above in regard to
In certain implementations, the storage system 730 can be configured to store one or more calibration tables 735 including, for example, the original calibration table and any newly calculated calibration tables.
The I/O system 740 can be configured to interface between various I/O devices and other components of the computing device 700. I/O devices may include, but not be limited to, a user interface 742 and a network interface 744.
It will be appreciated that in some embodiments, the various components of computing device 700 can be combined or integrated in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software.
The various embodiments disclosed herein can be implemented in various forms of hardware, software, firmware, and/or special purpose processors. For example, in one embodiment at least one non-transitory computer readable storage medium has instructions encoded thereon that, when executed by one or more processors, cause one or more of the methodologies disclosed herein to be implemented. Other componentry and functionality not reflected in the illustrations will be apparent in light of this disclosure, and it will be appreciated that other embodiments are not limited to any particular hardware or software configuration. Thus, in other embodiments the computing device 700 can include additional, fewer, or alternative subcomponents as compared to those included in the example embodiment of
The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.
Example 1 includes an infrared imaging system. The system includes an infrared sensor configured to receive light emitted by a target scene to generate image data based on the received light; a light source configured to provide calibrating light to augment scene flux by precise and specifiable amounts, the light source positioned such that an output of the light source is at a pupil of the infrared imaging system, the light source including a first infrared light source configured to output a first light signal at a first wavelength and a first output intensity, and a light-guide structure configured to transmit the first light signal to the pupil of the infrared imaging system; and at least one image processing device. The at least one image processing device is configured to receive the image data generated by the infrared sensor, determine whether image non-uniformities at a current scene flux can be corrected by existing calibration tables, and if the image non-uniformities cannot be corrected by the existing calibration tables, generate an updated calibration table for the infrared imaging system.
Example 2 includes the subject matter of Example 1, wherein the light source further includes a second infrared light source configured to output a second light signal at a second wavelength and a second output intensity and a beam combiner configured to combine the first light signal and the second light signal into a combined beam and direct the combined beam to the light-guide structure, wherein the light-guide structure is configured to transmit the combined beam from the beam combiner to the pupil of the infrared imaging system.
Example 3 includes the subject matter of Example 2, wherein the light source further includes a light controller configured to control operation of the first infrared light source and the second infrared light source.
Example 4 includes the subject matter of Example 3, wherein the first and second infrared light sources are first and second infrared LEDs, respectively, and the light controller is further configured to alter at least one of the first wavelength and the first output intensity of the first signal output by the first infrared LED and alter at least one of the second wavelength and the second output intensity of the second signal output by the second infrared LED.
Example 5 includes the subject matter of any of the preceding Examples, wherein the infrared sensor is integrated into a cryocooled infrared imaging device.
Example 6 includes the subject matter of Example 5, wherein the infrared sensor further includes a baffle surrounding an infrared focal plane array and defining a cold stop aperture that limits an amount of light that reaches the infrared focal plane array.
Example 7 includes the subject matter of Example 6, wherein the cryocooled infrared imaging device includes a vacuum assembly configured to house the infrared sensor.
Example 8 includes the subject matter of Example 7, wherein the output of the light source is positioned within the vacuum assembly and adjacent to the cold stop aperture.
Example 9 includes the subject matter of any of the preceding Examples, wherein the infrared imaging system further includes a re-imaging optical system positioned between the infrared sensor and the target scene and configured to condition light emitted by the target scene.
Example 10 includes the subject matter of Example 9, wherein the output of the light source is positioned adjacent to a lens of the re-imaging optical system.
Example 11 includes an infrared imaging system. The infrared imaging system includes a cryocooled infrared imaging device configured to receive light emitted by a target scene to generate image data based on the received light; a re-imaging optical system positioned between the cryocooled infrared imaging device and the target scene and configured to condition light emitted by the target scene; a light source having an output positioned adjacent to a lens of the re-imaging optical system, the light source including a first infrared light source configured to output a first light signal at a first wavelength and a first output intensity, and a light-guide structure configured to transmit the first light signal to a pupil of the infrared imaging system; and at least one image processing device. The at least one image processing device is configured to receive the image data generated by the cryocooled infrared imaging device, determine whether image non-uniformities at a current scene flux can be corrected by existing calibration tables, and if the image non-uniformities cannot be corrected by the existing calibration tables, generate an updated calibration table for the infrared imaging system.
Example 12 includes the subject matter of Example 11, wherein the light source further includes a second infrared light source configured to output a second light signal at a second wavelength and a second output intensity and a beam combiner configured to combine the first light signal and the second light signal into a combined beam and direct the combined beam to the light-guide structure, wherein the light-guide structure is configured to transmit the combined beam from the beam combiner to the pupil of the infrared imaging system.
Example 13 includes the subject matter of Example 12, wherein the light source further includes a light controller configured to control operation of the first infrared light source and the second infrared light source, and wherein the first and second infrared light sources are first and second infrared LEDs, respectively, and the light controller is further configured to alter at least one of the first wavelength and the first output intensity of the first signal output by the first infrared LED and alter at least one of the second wavelength and the second output intensity of the second signal output by the second infrared LED.
Example 14 includes the subject matter of any of preceding Examples 11-13, wherein the cryocooled infrared imaging device includes a baffle surrounding an infrared focal plane array and defining a cold stop aperture that limits an amount of light that reaches the infrared focal plane array.
Example 15 includes the subject matter of Example 14, wherein the cryocooled infrared imaging device further includes a vacuum assembly configured to house an infrared sensor, wherein the output of the light source is positioned within the vacuum assembly and adjacent to the cold stop aperture.
Example 16 includes a computer program product including one or more non-transitory machine-readable mediums encoded with instructions that when executed by one or more processors cause a process to be carried out for providing non-uniformity correction in an infrared imaging system. The process includes generating a non-uniformity calibration table, monitoring image data captured by an infrared sensor, calculating at least one characteristic of the image data that corresponds to one or more errors in non-uniformity correction for comparison against at least one threshold, and if the at least one characteristic exceeds the at least one threshold, generating an updated non-uniformity calibration table for the infrared imaging system by initiating a non-uniformity calibration table process, generating one or more adjusted operating parameters for a light source corresponding to scene flux values of the updated non-uniformity calibration table, and causing transmission of the one or more adjusted operating parameters to a light controller of the light source, thereby altering an output of the light source.
Example 17 includes the subject matter of Example 16, wherein generating the updated non-uniformity calibration table includes updating an existing non-uniformity calibration table based upon current operating conditions.
Example 18 includes the subject matter of Example 16 or 17, wherein generating the updated non-uniformity calibration table includes providing instructions to the light source to step through a plurality of output intensities of an infrared output as generated by the light source, measuring a response by each pixel of the infrared sensor to each of the plurality of output intensities, determining an average scenic flux value for each of the plurality of output intensities, and determining a correspondence between the measured pixel response and the average scenic flux value for each of the plurality of output intensities.
Example 19 includes the subject matter of any of preceding Examples 16-18, wherein the light source includes the light controller, a first infrared light source operably coupled to the light controller and configured to output a first light signal at a first wavelength and a first output intensity, a second infrared light source operably coupled to the light controller and configured to output a second light signal at a second wavelength and a second output intensity, a beam combiner configured to combine the first light signal and the second light signal into a combined beam and direct the combined beam to a light-guide structure, and wherein the light-guide structure is configured to transmit the combined beam from the beam combiner to a pupil of the infrared imaging system.
Example 20 includes the subject matter of Example 19, wherein the light controller is configured to receive the one or more adjusted operating parameters from the one or more processors, determine changes to at least one of the first signal and the second signal based upon the one or more adjusted operating parameters, and adjust at least one of the first signal and the second signal based upon the determined changes.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. In addition, various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood in light of this disclosure. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications. It is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto. Future filed applications claiming priority to this application may claim the disclosed subject matter in a different manner and may generally include any set of one or more elements as variously disclosed or otherwise demonstrated herein.
Terms used in the present disclosure and in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including, but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes, but is not limited to,” etc.).
Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations.
In addition, even if a specific number of an introduced claim recitation is explicitly recited, such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two widgets,” without other modifiers, means at least two widgets, or two or more widgets). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” or “one or more of A, B, and C, etc.” is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc.
All examples and conditional language recited in the present disclosure are intended as pedagogical examples to aid the reader in understanding the present disclosure and are to be construed as being without limitation to such specifically recited examples and conditions. Although example embodiments of the present disclosure have been described in detail, various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the present disclosure. Accordingly, it is intended that the scope of the present disclosure be limited not by this detailed description, but rather by the claims appended hereto.