The present disclosure is generally related to latency profiling and, more particularly, to a method and apparatus of a latency profiling mechanism.
Unless otherwise indicated herein, approaches described in this section are not prior art to the claims listed below and are not admitted to be prior art by inclusion in this section.
When data, whether transmitted in frames or packets, is processed through two (or more) different pipelines, each having one or more processing stages, the time at which one pipeline finishes processing the data may differ from the time at which another pipeline finishes processing that data. The difference in time between the two pipelines in processing the same data is referred to as latency. However, the latency between the two pipelines may not necessarily remain constant and, rather, may vary (e.g., increase or decrease) for one reason or another, such as performance degradation or an abnormality that occurs in any number of the processing stages in the pipelines.
The following summary is illustrative only and is not intended to be limiting in any way. That is, the following summary is provided to introduce concepts, highlights, benefits and advantages of the novel and non-obvious techniques described herein. Select, not all, implementations are further described below in the detailed description. Thus, the following summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.
In one example implementation, a method may involve monitoring at least one attribute associated with each of one or more frames of images by tracking a respective identifier of each of the one or more frames as each of the one or more frames is processed through a first pipeline of one or more processing stages of an image processing device. The method may also involve obtaining one or more indications related to one or more performance indices in the first pipeline of one or more processing stages based at least in part on the monitoring of the at least one attribute.
In another example implementation, a method may involve assigning a respective identifier to each of one or more frames of a plurality of frames of images. The method may also involve recording a starting time and an ending time of processing of each of the one or more frames as each of the one or more frames is processed through a pipeline of one or more processing stages of an image processing device. The method may further involve obtaining one or more indications of a per-stage latency for each processing stage of the pipeline of one or more processing stages.
In yet another example implementation, a device may include a memory unit and a processing unit. The memory unit may be configured to store data therein. The processing unit may be coupled to a plurality of processing modules and the memory unit. The processing unit may be configured to monitor at least one attribute associated with each of one or more frames by tracking a respective identifier of each of the one or more frames as each of the one or more frames is processed through a first pipeline of one or more processing stages. The processing unit may also be configured to obtain one or more indications related to one or more performance indices in the first pipeline of one or more processing stages based at least in part on a result of the monitoring.
In still another example implementation, a device may include a memory unit and a processing unit. The memory unit may be configured to store data therein. The processing unit may be coupled to a plurality of processing modules and the memory unit. The processing unit may be configured to assign a respective identifier to each of one or more frames. The processing unit may also be configured to store, in the memory unit, data of a starting time and an ending time of processing of the one or more frames for each processing stage of a pipeline of one or more processing stages. The processing unit may further be configured to receive an indication that there is a condition related to one or more performance indices in the pipeline.
The proposed latency profiling mechanism, whether implemented as a method or an apparatus, provides insight into the per-stage latency of a pipeline of processing stages. That is, the proposed latency profiling mechanism allows a bird's-eye view of each processing stage of a given pipeline and enables analysis of latency to identify one or more issues associated with one or more processing stages of a pipeline. Moreover, based on the identified latency, a signal or interrupt may be sent to notify one or more processing stages to accelerate for overdrive or to decelerate for voltage frequency scaling. Advantageously, a device may remain in a low-power state as usual while keeping high-performance operation(s) if necessary. Moreover, results of the latency profiling may be used as performance indices to improve the design of processing modules in the future.
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of the present disclosure. The drawings illustrate implementations of the disclosure and, together with the description, serve to explain the principles of the disclosure. It is appreciable that the drawings are not necessarily drawn to scale, as some components may be shown out of proportion to their size in actual implementation in order to clearly illustrate the concept of the present disclosure.
For better appreciation of the benefits and advantages of techniques, mechanisms, methods, devices, apparatuses and systems according to the present disclosure, detailed description of various implementations is provided in the context of Wi-Fi display (hereinafter referred to as “WFD”). However, those skilled in the art would appreciate that the inventive concepts described herein may be utilized in any other suitable context and/or application. For example, the inventive concepts described herein may be utilized in any wireless or wired communication, not limited to Wi-Fi, and/or any types of display devices or electronic devices.
In the context of WFD, in which the same multimedia content (e.g., video) may be displayed, played or otherwise presented by a source device (e.g., a smartphone) and streamed via Wi-Fi to a sink device (e.g., a television) to be displayed, played or otherwise presented by the sink device, the concept of WFD latency refers to a difference in time in displaying, playing or otherwise presenting the same multimedia content between the two devices after the multimedia content is processed by two pipelines of processing stages. Using video content as an example, data of the video content may be propagated through one pipeline of one or more processing stages to be displayed on the source device and through another pipeline of one or more processing stages to be displayed on the sink device.
Given the difference between the amounts of processing time through the two pipelines, the same video content may be displayed on the source device at a first point in time and displayed on the sink device at a second point in time different from the first point in time. The difference between the first point in time and the second point in time is the WFD latency. The WFD latency between the two pipelines may not necessarily be fixed as it may vary (e.g., increase or decrease) for one reason or another such as performance degradation or abnormality that occurs in one of the pipelines, for example.
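Purely as an illustration of the definition above (the frame identifiers and millisecond values below are hypothetical, not taken from this disclosure), the WFD latency for each frame is simply the sink display time minus the source display time for that frame:

```python
# Toy illustration: WFD latency is the per-frame difference between the
# time a frame is displayed on the sink device and the time the same
# frame is displayed on the source device (values in milliseconds).

source_display_ms = {0: 100, 1: 267, 2: 434}   # frame_id -> display time
sink_display_ms   = {0: 900, 1: 1075, 2: 1251}

wfd_latency_ms = {fid: sink_display_ms[fid] - source_display_ms[fid]
                  for fid in source_display_ms}
print(wfd_latency_ms)  # {0: 800, 1: 808, 2: 817}
```

Note that the latency varies from frame to frame (800, 808, 817 ms in this made-up example), mirroring the observation that WFD latency is not necessarily fixed.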
For a vendor that provides a device having the above-described pipelines, at least from the perspective of quality assurance (QA) staff as well as research and development (RD) staff of the vendor, it may be necessary to identify cause(s) of the variation in WFD latency between the two pipelines in order to troubleshoot and/or improve the design of one or more processing stages of the pipelines. Oftentimes, though, what the vendor may be able to observe is merely a portion of each pipeline of concern, from user space to kernel space, and it is difficult to gain an overview of the overall WFD latency.
Advantageously, implementations of the present disclosure utilize a systematic, uniform profiling mechanism believed to ease efforts in the evaluation of WFD latency for QA and RD. Under the latency profiling mechanism in accordance with the present disclosure, each frame may be embedded with a unique token, which is passed across processes/threads as well as across user/kernel space drivers. Moreover, each processing stage or module of a given pipeline may define its own stage, e.g., each processing stage may be profiled with trace points added. The profiling data may then be collected from each processing stage and processed to provide numerical and/or graphical information for analysis, display, report generation and/or other purposes. For example, profiling results may be provided to a latency monitor, which may be either a software module or a processing module implemented on-chip or off-chip, for the latency monitor to analyze the collected information, e.g., a starting time and an ending time in processing a given frame at each processing stage, to determine per-stage latency of a pipeline, e.g., a display pipeline for WFD. With the per-stage latency known, the latency monitor may send a signal/interrupt to notify one or more processing stages or modules to accelerate, e.g., for overdrive, or to decelerate, e.g., for voltage frequency scaling. Accordingly, a device may remain in a low-power state as usual while keeping high-performance operation(s) if necessary. Moreover, results of the latency profiling may be used as performance indices to improve the design of processing modules in the future.
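The kind of bookkeeping described above can be sketched as follows; this is a minimal, hypothetical Python model (class and stage names are illustrative only, not part of the claimed subject matter) in which each frame carries a unique token and every stage logs its trace points, from which per-stage and pipeline latency can be derived:

```python
# Hypothetical sketch of the profiling bookkeeping described above.
# Each frame carries a unique token; every processing stage logs a
# (token, stage, start, end) record at its trace points.

from collections import defaultdict

class LatencyProfiler:
    def __init__(self):
        # records[token][stage] = (start_time, end_time)
        self.records = defaultdict(dict)

    def trace(self, token, stage, start_time, end_time):
        """Record the trace points emitted by one processing stage."""
        self.records[token][stage] = (start_time, end_time)

    def per_stage_latency(self, token):
        """Per-stage latency: time spent inside each stage for one frame."""
        return {stage: end - start
                for stage, (start, end) in self.records[token].items()}

    def pipeline_latency(self, token):
        """End-to-end latency: earliest start to latest end across stages."""
        spans = self.records[token].values()
        return max(e for _, e in spans) - min(s for s, _ in spans)

profiler = LatencyProfiler()
profiler.trace(token=7, stage="encode", start_time=0.0, end_time=4.0)
profiler.trace(token=7, stage="transmit", start_time=4.0, end_time=9.0)

print(profiler.per_stage_latency(7))   # {'encode': 4.0, 'transmit': 5.0}
print(profiler.pipeline_latency(7))    # 9.0
```

In such a sketch, the latency monitor would consume the per-stage figures to decide which stage, if any, to accelerate or decelerate.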
Utilizing implementation(s) of the latency profiling mechanism of the present disclosure, QA staff may be able to troubleshoot according to a report of the latency profiling mechanism. Moreover, RD staff may be able to have a bird's-eye view of an entire pipeline as well as each processing stage thereof, which would aid in analyzing latency issues more efficiently. Furthermore, the profiling results may be used as a post-silicon performance index by a hardware designer in improving the hardware design of one or more processing stages/modules of a given pipeline.
Referring to
In example framework 100, modules 104 and 106 may form a first pipeline of processing stages, with each of modules 104 and 106 functioning as a respective processing stage. That is, modules 104 and 106 may form processing stage 0 and processing stage 11, respectively. Similarly, modules 108, 110, 112 and 114 may form a second pipeline of processing stages, with each of modules 108, 110, 112 and 114 functioning as a respective processing stage. That is, modules 108, 110, 112 and 114 may form processing stage 21, processing stage 22, processing stage 23 and processing stage 24, respectively.
In an example scenario of WFD, the first pipeline of processing stages (processing stages 0 and 11) may process one or more frames of a plurality of frames of images for display on a source device, which may be a mobile device such as a smartphone. The second pipeline of processing stages (processing stages 0, 21, 22, 23 and 24) may process the same one or more frames for display on a sink device, which may be a television that is wirelessly coupled to the source device via Wi-Fi.
In the example scenario shown in
Each processing stage of the first and second pipelines may perform a respective function different from that of the other processing stages. Accordingly, the amount of time for a given processing stage to process a given amount of data, e.g., a frame of image-related data, may differ from one processing stage to another processing stage.
Referring to
Referring to
Additionally, latency profiling mechanism 120 may monitor at least one attribute associated with a respective duplicate frame of each of the one or more frames by tracking a respective identifier of the respective duplicate frame as the respective duplicate frame is processed through the second pipeline of the image processing device. Latency profiling mechanism 120 may also obtain one or more indications related to one or more performance indices in each of the first and second pipelines based at least in part on the monitoring of the one or more attributes. Moreover, results of any, some or all of the operations of latency profiling mechanism 120, e.g., from the monitoring operation and/or the obtaining operation, may be provided to a display device which displays the results.
Latency profiling mechanism 120 may be configured to determine whether there is a condition related to the one or more performance indices in either or both of the first pipeline and the second pipeline. The condition may include (1) a fluctuation in frame rate through the one or more processing stages of the first pipeline, (2) an increase in processing time through the one or more processing stages of the first pipeline, or (3) both of the above. Furthermore, latency profiling mechanism 120 may also be configured to adjust at least one processing stage of the first pipeline and/or the second pipeline in response to determining that there is the condition related to the one or more performance indices in the first pipeline and/or the second pipeline. For instance, as shown in
Additionally or alternatively, latency profiling mechanism 120 may be configured to embed the respective identifier in metadata associated with each of the one or more frames.
In monitoring the attribute(s) associated with each of the one or more frames, latency profiling mechanism 120 may obtain different values of the attribute(s) respectively corresponding to the different stages of the one or more processing stages of the respective pipeline. The respective timestamp paired with the respective identifier of each of the one or more frames may indicate, for example, a starting time and an ending time of processing of a respective frame by each processing stage. The one or more performance indices may include, for example, a per-stage latency for each processing stage of the respective pipeline.
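A minimal model of the identifier-and-timestamp pairing described above might look like the following (the field names `frame_id` and `stamps` are made up for illustration and are not defined by this disclosure):

```python
# Minimal model (field names are illustrative only) of embedding a
# frame identifier in metadata and pairing it with the starting and
# ending time of processing at each stage.

class Frame:
    def __init__(self, frame_id, payload):
        self.metadata = {"frame_id": frame_id, "stamps": {}}
        self.payload = payload

def enter_stage(frame, stage, now):
    # Starting time of processing at this stage, keyed by stage name.
    frame.metadata["stamps"][stage] = {"start": now}

def leave_stage(frame, stage, now):
    # Ending time of processing at this stage.
    frame.metadata["stamps"][stage]["end"] = now

frame = Frame(frame_id=42, payload=b"...")
enter_stage(frame, "stage_0", 100)
leave_stage(frame, "stage_0", 103)

stamps = frame.metadata["stamps"]["stage_0"]
print(frame.metadata["frame_id"], stamps["end"] - stamps["start"])  # 42 3
```

Because the timestamps travel with the frame's own metadata, each stage's dwell time can be recovered later without any side channel.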
Latency profiling mechanism 120 may accomplish the above by performing a number of operations. For instance, latency profiling mechanism 120 may assign a respective identifier to each of the one or more frames of images. Latency profiling mechanism 120 may also record a starting time and an ending time of processing of each of the one or more frames as each of the one or more frames is processed through the first pipeline and the second pipeline. Latency profiling mechanism 120 may further obtain one or more indications of a per-stage latency for each processing stage of the pipeline of one or more processing stages. Additionally, latency profiling mechanism 120 may determine whether there is a condition related to the per-stage latency for at least one processing stage of the pipeline of one or more processing stages. Such determined condition may include, for example, (1) a fluctuation in frame rate through the one or more processing stages of the pipeline, (2) an increase in processing time through the one or more processing stages of the pipeline, or (3) both of the above.
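The condition determination described above could be modeled roughly as follows; the threshold values, tolerance, and function names here are hypothetical choices for illustration, not prescribed by this disclosure:

```python
# Hypothetical model of the condition check described above: flag a
# fluctuation in frame rate or an increase in per-stage processing time,
# then emit an accelerate signal for the affected stage.

def detect_condition(frame_intervals, stage_latencies,
                     jitter_tolerance=0.25, latency_budget=10.0):
    """Return a list of (condition, detail) tuples for a pipeline.

    frame_intervals: inter-frame times observed at the pipeline output.
    stage_latencies: mapping of stage name -> latest per-stage latency.
    """
    conditions = []
    if frame_intervals:
        mean = sum(frame_intervals) / len(frame_intervals)
        # Frame-rate fluctuation: any interval deviating beyond tolerance.
        if any(abs(t - mean) > jitter_tolerance * mean for t in frame_intervals):
            conditions.append(("frame_rate_fluctuation", mean))
    for stage, latency in stage_latencies.items():
        # Processing-time increase: stage exceeds its latency budget.
        if latency > latency_budget:
            conditions.append(("processing_time_increase", stage))
    return conditions

def adjust(conditions):
    """Map detected conditions to accelerate signals for lagging stages."""
    return ["accelerate:" + detail for kind, detail in conditions
            if kind == "processing_time_increase"]

conds = detect_condition([16.6, 16.7, 33.2], {"encode": 12.5, "mux": 3.1})
print(conds)
print(adjust(conds))  # ['accelerate:encode']
```

In this sketch, the doubled inter-frame interval (33.2 vs. roughly 16.6) trips the frame-rate condition, and the over-budget "encode" stage receives an accelerate signal.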
Example apparatus 300 may include at least those components shown in
Memory unit 304 may be a random access memory (RAM) or any suitable memory device configured to store data therein.
Multimedia processing modules 308 may, similar to processing modules 102, 104, 106, 108, 110, 112 and 114 of example framework 100, form a first pipeline of one or more processing stages and a second pipeline of one or more processing stages, with each multimedia processing module 308 functioning as a respective processing stage. For instance, in the context of WFD, multimedia processing modules 308 form a first pipeline of one or more processing stages to process one or more frames of images to be displayed by a source device as well as a second pipeline of one or more processing stages to process the same one or more frames of images to be streamed via Wi-Fi to a sink device to be displayed by the sink device.
Processing unit 302 may be communicatively coupled to memory unit 304, system clock 306 and each of the multimedia processing modules 308. In particular, processing unit 302 may receive, from system clock 306, a clock signal indicative of time. Processing unit 302 may receive data from each of the multimedia processing modules 308 and store such data in memory unit 304. For instance, processing unit 302 may receive, from each of the multimedia processing modules 308, data related to a starting time and an ending time associated with processing a given frame by the respective multimedia processing module, and processing unit 302 may store such data in memory unit 304. Referring to
In some implementations, processing unit 302 may be configured to monitor at least one attribute associated with each of the one or more frames. Processing unit 302 may accomplish this by, for example, tracking a respective identifier of each of the one or more frames as each of the one or more frames is processed through the first pipeline. Processing unit 302 may also be configured to obtain one or more indications related to one or more performance indices in the first pipeline based at least in part on a result of the monitoring.
In some implementations, in monitoring the at least one attribute associated with the one or more frames, processing unit 302 may be configured to obtain different values of the at least one attribute respectively corresponding to the different stages of the one or more processing stages of the first pipeline.
In some implementations, processing unit 302 may be further configured to monitor at least one attribute associated with a respective duplicate frame of each of the one or more frames by tracking a respective identifier of the respective duplicate frame as the respective duplicate frame is processed through the second pipeline.
In some implementations, the at least one attribute associated with each of the one or more frames may include a respective timestamp paired with the respective identifier of each of the one or more frames, a respective checksum value which varies after being processed by each processing stage, or both. In some implementations, the respective timestamp paired with the respective identifier of each of the one or more frames may indicate a starting time and an ending time of processing of a respective frame.
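The checksum attribute mentioned above, which varies after each processing stage, could be modeled purely for illustration as a running checksum into which each stage folds its own name (the use of CRC-32 and the stage names here are assumptions, not part of this disclosure):

```python
# Illustrative only: a per-frame checksum attribute that changes after
# each processing stage, so the sequence of values shows which stages
# the frame has actually traversed.

import zlib

def stage_checksum(previous_checksum, stage_name):
    # Fold the stage's name into the running CRC-32 checksum.
    return zlib.crc32(stage_name.encode(), previous_checksum)

checksum = 0
trail = []
for stage in ("stage_0", "stage_21", "stage_22"):
    checksum = stage_checksum(checksum, stage)
    trail.append(checksum)

# Each stage yields a distinct value: the trail is a processing fingerprint.
print(trail)
```

A frame whose checksum trail deviates from the expected sequence would indicate that it skipped, repeated, or was corrupted by some stage.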
In some implementations, the one or more performance indices in the first pipeline may include a per-stage latency for each processing stage of the one or more processing stages of the first pipeline.
In some implementations, processing unit 302 may be further configured to embed the respective identifier in metadata associated with each of the one or more frames.
In some implementations, processing unit 302 may be configured to determine whether there is a condition related to the one or more performance indices in the first pipeline. Additionally, processing unit 302 may also be configured to adjust at least one processing stage of the one or more processing stages of the first pipeline in response to determining that there is the condition related to the one or more performance indices in the first pipeline. For instance, processing unit 302 may send a signal to any of the processing stages to accelerate or decelerate the processing stage that is being adjusted. In some implementations, the condition related to the one or more performance indices in the first pipeline may include a fluctuation in frame rate through the one or more processing stages of the first pipeline, an increase in processing time through the one or more processing stages of the first pipeline, or both.
In some implementations, processing unit 302 may be configured to assign a respective identifier to each of one or more frames of images. Processing unit 302 may also be configured to store, in memory unit 304, data of a starting time and an ending time of processing of the one or more frames for each processing stage of the first pipeline and the second pipeline formed by the processing modules. Processing unit 302 may further be configured to receive an indication that there is a condition related to one or more performance indices in the first pipeline and/or the second pipeline.
In the configuration described above, in which processing unit 302, memory unit 304, system clock 306 and multimedia processing modules 308 are disposed within a boundary, perimeter, housing, casing or enclosure of apparatus 300, processing unit 302, memory unit 304 and system clock 306 together perform functions similar or identical to those of latency profiling mechanism 120 of example framework 100.
Alternatively, in another configuration, apparatus 300 may also include latency monitoring unit 310 which is external to the boundary, perimeter, housing, casing or enclosure in which processing unit 302, memory unit 304 and system clock 306 are disposed. Memory unit 304 may be communicatively coupled to latency monitoring unit 310, e.g., through a universal serial bus (USB) port or any suitable communication port. With latency monitoring unit 310, at least some of the functions of latency profiling mechanism 120 of example framework 100 may be performed by latency monitoring unit 310. For example, latency monitoring unit 310 may be configured to determine whether there is a condition related to the one or more performance indices in the first pipeline and/or the second pipeline.
Example apparatus 400 may include at least those components shown in
In the configuration shown in
Memory unit 404 may be a random access memory (RAM) or any suitable memory device configured to store data therein.
Multimedia processing modules 408 may, similar to processing modules 102, 104, 106, 108, 110, 112 and 114 of example framework 100, form a first pipeline of one or more processing stages and a second pipeline of one or more processing stages, with each multimedia processing module 408 functioning as a respective processing stage. For instance, in the context of WFD, multimedia processing modules 408 form a first pipeline of one or more processing stages to process one or more frames of images to be displayed by a source device as well as a second pipeline of one or more processing stages to process the same one or more frames of images to be streamed via Wi-Fi to a sink device to be displayed by the sink device.
Processing unit 402 may be communicatively coupled to memory unit 404, system clock 406, each of the multimedia processing modules 408 and latency monitoring unit 410. In particular, processing unit 402 may receive, from system clock 406, a clock signal indicative of time. Processing unit 402 may receive data from each of the multimedia processing modules 408 and store such data in memory unit 404. For instance, processing unit 402 may receive, from each of the multimedia processing modules 408, data related to a starting time and an ending time associated with processing a given frame by the respective multimedia processing module, and processing unit 402 may store such data in memory unit 404. Referring to
In some implementations, latency monitoring unit 410 may be configured to monitor at least one attribute associated with each of the one or more frames. Latency monitoring unit 410 may accomplish this by, for example, tracking a respective identifier of each of the one or more frames as each of the one or more frames is processed through the first pipeline. Latency monitoring unit 410 may also be configured to obtain one or more indications related to one or more performance indices in the first pipeline based at least in part on a result of the monitoring.
In some implementations, in monitoring the at least one attribute associated with the one or more frames, latency monitoring unit 410 may be configured to obtain different values of the at least one attribute respectively corresponding to the different stages of the one or more processing stages of the first pipeline.
In some implementations, latency monitoring unit 410 may be further configured to monitor at least one attribute associated with a respective duplicate frame of each of the one or more frames by tracking a respective identifier of the respective duplicate frame as the respective duplicate frame is processed through the second pipeline.
In some implementations, the at least one attribute associated with each of the one or more frames may include a respective timestamp paired with the respective identifier of each of the one or more frames, a respective checksum value which varies after being processed by each processing stage, or both. In some implementations, the respective timestamp paired with the respective identifier of each of the one or more frames may indicate a starting time and an ending time of processing of a respective frame.
In some implementations, the one or more performance indices in the first pipeline may include a per-stage latency for each processing stage of the one or more processing stages of the first pipeline.
In some implementations, latency monitoring unit 410 may be further configured to embed the respective identifier in metadata associated with each of the one or more frames.
In some implementations, latency monitoring unit 410 may be configured to determine whether there is a condition related to the one or more performance indices in the first pipeline. Additionally, latency monitoring unit 410 may also be configured to adjust at least one processing stage of the one or more processing stages of the first pipeline in response to determining that there is the condition related to the one or more performance indices in the first pipeline. For instance, latency monitoring unit 410 may send a signal to any of the processing stages to accelerate or decelerate the processing stage that is being adjusted. In some implementations, the condition related to the one or more performance indices in the first pipeline may include a fluctuation in frame rate through the one or more processing stages of the first pipeline, an increase in processing time through the one or more processing stages of the first pipeline, or both.
In some implementations, processing unit 402 may be configured to assign a respective identifier to each of one or more frames of images. Processing unit 402 may also be configured to store, in memory unit 404, data of a starting time and an ending time of processing of the one or more frames for each processing stage of a given pipeline of one or more processing stages formed by the processing modules. Processing unit 402 may further be configured to receive an indication that there is a condition related to one or more performance indices in the pipeline.
Block 510 (Monitor at least an attribute associated with a frame which is processed through a first pipeline of processing stages) may refer to latency profiling mechanism 120 monitoring at least one attribute associated with each of one or more frames of images by tracking a respective identifier of each of the one or more frames as each of the one or more frames is processed through a first pipeline of one or more processing stages of an image processing device.
Block 520 (Obtain one or more indications related to one or more performance indices in the first pipeline) may refer to latency profiling mechanism 120 obtaining one or more indications related to one or more performance indices in the first pipeline of one or more processing stages based at least in part on the monitoring of the at least one attribute.
In some implementations, in monitoring the at least one attribute associated with the one or more frames, example process 500 may involve latency profiling mechanism 120 obtaining different values of the at least one attribute respectively corresponding to the different stages of the one or more processing stages of the first pipeline.
In some implementations, example process 500 may further involve latency profiling mechanism 120 monitoring at least one attribute associated with a respective duplicate frame of each of the one or more frames by tracking a respective identifier of the respective duplicate frame as the respective duplicate frame is processed through a second pipeline of one or more processing stages of the image processing device.
In some implementations, the at least one attribute associated with each of the one or more frames may include a respective timestamp paired with the respective identifier of each of the one or more frames, a respective checksum value which varies after being processed by each processing stage, or both. In some implementations, the respective timestamp paired with the respective identifier of each of the one or more frames may indicate a starting time and an ending time of processing of a respective frame.
In some implementations, the one or more performance indices in the first pipeline of one or more processing stages may include a per-stage latency for each processing stage of the one or more processing stages of the first pipeline.
In some implementations, example process 500 may further involve latency profiling mechanism 120 embedding the respective identifier in metadata associated with each of the one or more frames.
Additionally or alternatively, example process 500 may involve latency profiling mechanism 120 determining whether there is a condition related to the one or more performance indices in the first pipeline of one or more processing stages. In some implementations, the condition related to the one or more performance indices in the first pipeline of one or more processing stages may include a fluctuation in frame rate through the one or more processing stages of the first pipeline, an increase in processing time through the one or more processing stages of the first pipeline, or both. In some implementations, example process 500 may further involve latency profiling mechanism 120 adjusting at least one processing stage of the one or more processing stages of the first pipeline in response to determining that there is the condition related to the one or more performance indices in the first pipeline of one or more processing stages.
In some implementations, example process 500 may further involve latency profiling mechanism 120 providing data of one or more results of the monitoring, the obtaining, or both the monitoring and the obtaining to a display device which displays the one or more results. Additionally or alternatively, example process 500 may also involve latency profiling mechanism 120 providing data of one or more results of at least one of the monitoring, the obtaining, or the determining to a display device which displays the one or more results.
Block 610 (Assign an identifier to at least a frame of a plurality of frames of images) may refer to latency profiling mechanism 120 assigning a respective identifier to each of one or more frames of a plurality of frames of images.
Block 620 (Record a starting time and an ending time of processing of the frame as the frame is processed through a pipeline of one or more processing stages) may refer to latency profiling mechanism 120 recording a starting time and an ending time of processing of each of the one or more frames as each of the one or more frames is processed through a pipeline of one or more processing stages of an image processing device.
Block 630 (Obtain one or more indications of a per-stage latency for each processing stage of the pipeline) may refer to latency profiling mechanism 120 obtaining one or more indications of a per-stage latency for each processing stage of the pipeline of one or more processing stages.
In some implementations, example process 600 may further involve latency profiling mechanism 120 determining whether there is a condition related to the per-stage latency for at least one processing stage of the pipeline of one or more processing stages. In some implementations, the condition may include a fluctuation in frame rate through the one or more processing stages of the pipeline, an increase in processing time through the one or more processing stages of the pipeline, or both.
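The per-stage condition described above may be illustrated, for the "increase in processing time" case, by comparing each stage's observed latency against a nominal baseline. The function name, the baseline mapping, and the factor of 1.5 are hypothetical values chosen for demonstration.

```python
def flag_slow_stages(latencies, baseline, factor=1.5):
    """Illustrative per-stage condition check.

    latencies: stage name -> most recent per-stage latency (seconds).
    baseline:  stage name -> nominal per-stage latency (seconds).
    Returns the stages whose latency exceeds `factor` times baseline,
    i.e., stages exhibiting the example 'increase in processing time'
    condition for at least one processing stage of the pipeline.
    """
    return [
        stage
        for stage, observed in latencies.items()
        if stage in baseline and observed > factor * baseline[stage]
    ]
```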
The herein-described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
Further, with respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
Moreover, it will be understood by those skilled in the art that, in general, terms used herein, and especially in the appended claims, e.g., bodies of the appended claims, are generally intended as “open” terms, e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc. It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to implementations containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an,” e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more;” the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number, e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations. 
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention, e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc. In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention, e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc. It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
From the foregoing, it will be appreciated that various implementations of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various implementations disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
The present application is a non-provisional patent application claiming the priority benefit of U.S. Provisional Patent Application No. 62/061,839, filed on 9 Oct. 2014, which is incorporated by reference in its entirety.
Number | Date | Country
---|---|---
62061839 | Oct 2014 | US