The present disclosure is generally related to visual display technologies and, more particularly, to realization of a simulated transparent display for a device, which could be a portable device or a stationary device.
Unless otherwise indicated herein, approaches described in this section are not prior art to the claims listed below and are not admitted to be prior art by inclusion in this section.
Electronic visual display devices of various sizes and kinds have become prevalent in modern life. For example, information displays, interactive touch screens, monitors, televisions, bulletin boards, public signage, indoor and outdoor commercial displays, and the like, have been widely used in stores, workplaces, train stations, airports, and other public areas. In addition, most personal electronic devices, such as mobile phones, tablet computers, laptop computers, and the like, include one or more displays integrated therein. While showing intended visual content (e.g., text(s), graphic(s), image(s), picture(s), and/or video(s)), however, each of these displays is opaque in nature. That is, while being able to see the intended visual content shown on the display, a user of the display is not able, even partially, to “see through” the display and see the object(s) and surrounding behind it. This opaque nature of existing displays inevitably precludes applications that could otherwise be realized with a transparent or semi-transparent display.
The following summary is illustrative only and is not intended to be limiting in any way. That is, the following summary is provided to introduce concepts, highlights, benefits and advantages of the novel and non-obvious techniques described herein. Select implementations are further described below in the detailed description. Thus, the following summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.
An objective of the present disclosure is to propose novel schemes pertaining to simulated transparency, by which a transparent or semi-transparent display may be simulated using an otherwise opaque display.
In one aspect, an apparatus may include a memory configured to store one or more sets of instructions. The apparatus may also include a processor coupled to the memory and configured to execute the one or more sets of instructions. Upon executing the one or more sets of instructions, the processor may be configured to receive data of an image of a surrounding of a display. Moreover, the processor may also be configured to construct a see-through window of the image. When presented on the display, the see-through window may substantially match the surrounding and thus create a visual effect with which at least a portion of the display is substantially transparent to a user. Furthermore, the processor may further be configured to present the see-through window on the display.
In another aspect, a method of simulating a display to be substantially transparent to a user may involve capturing a first image of a surrounding of the display with a first camera, the first image having a first viewing angle. The method may also involve capturing a second image of the user with a second camera, with the second image having a second viewing angle. The method may also involve constructing a see-through window of the first image. When presented on the display, the see-through window may substantially match the surrounding and thus create a visual effect with which at least a portion of the display is substantially transparent to the user.
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of the present disclosure. The drawings illustrate implementations of the disclosure and, together with the description, serve to explain the principles of the disclosure. It is appreciable that the drawings are not necessarily drawn to scale, as some components may be shown out of proportion to their size in actual implementation in order to clearly illustrate the concept of the present disclosure.
Detailed embodiments and implementations of the claimed subject matter are disclosed herein. However, it shall be understood that the disclosed embodiments and implementations are merely illustrative of the claimed subject matter, which may be embodied in various forms. The present disclosure should therefore not be construed as limited to the exemplary embodiments and implementations set forth herein; rather, these exemplary embodiments and implementations are provided so that the description of the present disclosure is thorough and complete and will fully convey the scope of the present disclosure to those skilled in the art. In the description below, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments and implementations.
With a transparent or semi-transparent display in accordance with the present disclosure, a viewer or user of the display would be able to see the object(s) and surrounding that would otherwise be partially or completely blocked by an opaque display. This feature is especially useful for portable devices such as mobile phones and tablet devices. For example, with a transparent or semi-transparent display of a mobile device, a user who is using the mobile device while walking down a street would be able to see an obstacle in the street that would otherwise be hidden by the device (and over which he or she might trip or stumble), thereby enhancing the safety of the user when using the mobile device while moving. In addition, the transparent or semi-transparent display of the mobile device in accordance with the present disclosure would create a unique user experience that is otherwise not attainable with an opaque display.
Under the proposed schemes, an opaque display may be simulated to serve as a transparent or semi-transparent display to a user. Through a computation performed by a special-purpose processor, a see-through window may be constructed by an apparatus using the processor and having an otherwise opaque display such that, when displayed on the opaque display and seen from a viewpoint of the user, the see-through window would substantially match a surrounding of the display, thereby creating a visual effect with which at least a portion of the display appears to be transparent to the user. Moreover, under the proposed schemes, by adaptively and continually updating the see-through window, the display would appear to be transparent even if there is a relative movement between any two of the user, the display, and the surrounding. Furthermore, under the proposed schemes, a transparency setting may be determined for the see-through window to simulate a semi-transparent display that may blend the see-through window with one or more other displaying objects such as one or more graphical user interface (GUI) objects and/or one or more augmented reality (AR) objects.
In some embodiments, at least a part of the see-through window may be blurred to create a visual effect that mimics a single depth of focus of human eyes. For example, see-through window 490 includes a partial image 4739 of a pedestrian and an image 4749 of a dog. When presented on display 480, see-through window 490 may have an area around or encompassing partial image 4739 (the pedestrian) blurred out to some extent such that, when presented along with image 4749 (the dog), which is not blurred out, a visual effect is created that mimics a single depth of focus of human eyes, with the focus on the dog rather than on the pedestrian.
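By way of a non-limiting illustration, the selective blur may be realized as in the following Python sketch. OpenCV is assumed to be available; the region coordinates and kernel size are illustrative placeholders, not values from the disclosure:

```python
import cv2
import numpy as np

def blur_region(window: np.ndarray, x: int, y: int, w: int, h: int,
                ksize: int = 31) -> np.ndarray:
    """Blur one rectangular area of the see-through window, leaving the
    rest sharp to mimic a single depth of focus of human eyes."""
    out = window.copy()
    roi = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return out

# Illustrative usage: blur the area around the pedestrian while the dog
# (elsewhere in the frame) stays in focus.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
result = blur_region(frame, x=40, y=60, w=200, h=300)
```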
In some embodiments, the see-through window may include one or more other displaying objects which do not have counterparts in the actual surrounding. Such one or more other objects may include one or more icons, one or more buttons, one or more graphical user interface (GUI) objects, and/or one or more augmented reality (AR) objects. This feature is illustrated by see-through window 590 presented on display 580, described below.
In some embodiments, a color temperature setting of see-through window 590 may be determined and/or adjusted, and see-through window 590 may be presented on display 580 with the color temperature setting. In some embodiments, the color temperature setting may be adjusted or determined such that see-through window 590 may have a color temperature that is closer to that of surrounding 570, thereby enhancing the matching between see-through window 590 and surrounding 570.
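A minimal sketch of one possible color-temperature matching step is given below; the simple per-channel gain approach is an assumption for illustration, not a required implementation:

```python
import numpy as np

def match_color_temperature(window: np.ndarray, ambient_rgb) -> np.ndarray:
    """Scale each channel so the window's average color moves toward the
    ambient (surrounding) color, e.g., as reported by an RGB light sensor."""
    win_mean = window.reshape(-1, 3).mean(axis=0)        # current average RGB
    gains = np.asarray(ambient_rgb, dtype=np.float32) / np.maximum(win_mean, 1e-6)
    adjusted = window.astype(np.float32) * gains
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```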
In some embodiments, even when the surrounding is changing, the user is moving to a different location, and/or the display is being moved to a different location, the above schemes of construction and presentation of the see-through window may be adaptively and continually repeated so that the display stays substantially transparent to the user in response to a relative movement of any of the user, the display, and the surrounding with respect to any other thereof.
In scenario 100, the transparency-simulating apparatus may include a main camera 181 that faces surrounding 170. The transparency-simulating apparatus may receive data of an image of surrounding 170 taken by main camera 181 to create a see-through window such as see-through window 490 of FIG. 4.
An image of a surrounding, such as image 699 of FIG. 6, may be captured by the main camera and serve as the basis from which the see-through window is constructed.
To simplify computation and analysis of the 3D pyramid, the spatial relationships among the user, the display, and the surrounding may be analyzed in a two-dimensional cross-section, as reflected in the calculations below.
As described below, a see-through window such as see-through window 890 may be constructed out of an image such as image 899 by a cropping process (such as cropping process 810), a cropping-and-deforming process (such as cropping-and-deforming process 820), or both.
In some embodiments, instead of constructing see-through window 890 out of image 899, a new image may be taken and used to re-calculate an updated set of cropping parameters and/or an updated set of deforming parameters. That is, the original (i.e., non-zoomed-in) image 899 may serve as a “preview image” of the surrounding, and the set(s) of cropping/deforming parameters generated from the “preview image” may serve as a first-pass calculation used to find or determine an optical zoom setting of the camera. With the camera adjusted to use the determined optical zoom setting, an updated (i.e., a zoomed-in version of) image 899, or the “zoomed image” of the same surrounding, is taken and used to calculate the updated set(s) of cropping/deforming parameters. The final see-through window to be presented on the display is then constructed from the zoomed image using the updated set(s) of cropping/deforming parameters. The purpose of this two-step approach that involves an adjustment of the optical zoom setting of the camera is to maximize a pixel resolution of the see-through window.
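The two-step zoom procedure may be sketched as follows; the camera interface (`capture`, `set_optical_zoom`) and the `compute_crop` helper are hypothetical names used only for illustration:

```python
def construct_window_two_pass(camera, compute_crop, max_zoom):
    """Two-pass construction: use a preview image to pick an optical zoom
    that maximizes the pixel resolution of the final see-through window."""
    preview = camera.capture()                  # pass 1: preview image
    crop = compute_crop(preview)                # first-pass crop region
    # Zoom in as far as possible while the crop region still fits in the
    # frame, so the final crop retains as many native sensor pixels as
    # possible.
    zoom = min(max_zoom, 1.0 / max(crop.width_fraction, crop.height_fraction))
    camera.set_optical_zoom(zoom)
    zoomed = camera.capture()                   # pass 2: zoomed image
    updated_crop = compute_crop(zoomed)         # recompute for new view angle
    return updated_crop.apply(zoomed)
```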
(Bt/H1)=(Bb/H1)=0.5*[1−(1+D1/D2)*(Hd/(2*D1*tan(θ1/2)))] [EQ1]
Given that viewing angle θ1 is a known value as part of the image, and height Hd of display 980 is also a known value, the cropping parameters (Bt/H1) and (Bb/H1) can be readily calculated as long as D1 (distance between display 980 and focal plane 999 of the surrounding) and D2 (distance between display 980 and user 960) are known.
In some embodiments, either or both of D1 and D2 can be substituted by some predetermined, typical value(s) for common situations. That is, even though D1 and D2 may not be readily known, some typical values may be used for them for the calculation of the cropping parameters. For example, if display 980 is a cell phone, a typical value for D2 may be 0.5 meters, and a typical value for D1 may be 5-10 meters. Even if the predetermined typical values do not match the actual situation, a certain degree of simulated transparency, though not perfect, can still be realized according to an implementation of the present disclosure.
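Using the typical values above, EQ1 can be evaluated directly, as in this Python sketch; the display height and viewing angle are illustrative assumptions, not values from the disclosure:

```python
import math

def crop_fraction_eq1(d1: float, d2: float, hd: float, theta1_deg: float) -> float:
    """EQ1: symmetric top/bottom crop fraction (Bt/H1 = Bb/H1)."""
    theta1 = math.radians(theta1_deg)
    return 0.5 * (1 - (1 + d1 / d2) * (hd / (2 * d1 * math.tan(theta1 / 2))))

# Typical values from the text: D2 = 0.5 m (user to phone), D1 = 5 m
# (phone to surrounding). Hd = 0.14 m and a 70-degree viewing angle are
# assumed for illustration; prints roughly 0.39, i.e., ~39% cropped from
# each of the top and bottom of the image.
print(crop_fraction_eq1(d1=5.0, d2=0.5, hd=0.14, theta1_deg=70.0))
```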
In some embodiments, D1 may be determined more accurately if the camera is a dual-lens or multi-lens camera. With the two or more lenses of the camera, a series of focusing operations may be performed to better estimate D1. That is, images of the same object may be taken at different focal lengths and analyzed, yielding a more accurate estimation of D1. In some embodiments, a dedicated distance detection sensor may be used along with the camera to obtain a more accurate estimation of D1.
In some embodiments, the construction of the see-through window may incorporate 3D modeling of the surrounding if the camera is a dual-lens or multi-lens camera. Using a 3D modeling terminology, the camera may perform an operation of “segmentation” on the surrounding. Specifically, the camera may take multiple images, rather than just a single image, of the surrounding. Each of the multiple images of the surrounding may be taken on a respectively different focal plane with respect to the camera. That is, each image of the surrounding may be taken at a respectively different focal plane 999 of FIG. 9.
A set of related angles, such as angles θ, θt and θb of FIG. 10, may be derived as follows:
tan θ=(2*Et/H2−1)*tan(θ2/2) [EQ2]
cot θt=0.5*(Hd/D2)+tan θ [EQ3]
cot θb=0.5*(Hd/D2)−tan θ [EQ4]
H1=2*D1*tan(θ1/2) [EQ5]
H2=2*D2*tan(θ2/2) [EQ6]
Accordingly, the set of cropping parameters can be readily derived as:
(Bt/H1)=0.5*(1−Hd/D1)−(D1/H1)*cot θt [EQ7]
(Bb/H1)=0.5*(1−Hd/D1)−(D1/H1)*cot θb [EQ8]
Among the various parameters, viewing angle θ1 is a known value as part of the image of the surrounding, and viewing angle θ2 is also a known value as part of the image of user 1060. Height Hd of display 1080 is known, and (Et/H2) can be derived from an analysis of the image of user 1060. Alternatively, a predetermined typical value may be used for (Et/H2). Therefore, the cropping parameters (Bt/H1) and (Bb/H1) can be readily calculated as long as D1 (distance between display 1080 and focal plane 1099 of the surrounding) and D2 (distance between display 1080 and user 1060) are known.
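The full set of equations EQ2 through EQ8 may be transcribed directly, as in this Python sketch; it is a transcription of the equations as printed, with angles assumed to be given in degrees and no input validation:

```python
import math

def crop_fractions(d1, d2, hd, theta1_deg, theta2_deg, et_over_h2):
    """Asymmetric crop fractions (Bt/H1, Bb/H1) per EQ2-EQ8.

    d1: display-to-surrounding distance; d2: display-to-user distance;
    hd: display height; theta1_deg/theta2_deg: viewing angles of the main
    and front images; et_over_h2: the user's eye position as a fraction of
    the front-image height (Et/H2).
    """
    t1 = math.radians(theta1_deg)
    t2 = math.radians(theta2_deg)
    h1 = 2 * d1 * math.tan(t1 / 2)                        # EQ5
    tan_theta = (2 * et_over_h2 - 1) * math.tan(t2 / 2)   # EQ2
    cot_theta_t = 0.5 * (hd / d2) + tan_theta             # EQ3
    cot_theta_b = 0.5 * (hd / d2) - tan_theta             # EQ4
    bt = 0.5 * (1 - hd / d1) - (d1 / h1) * cot_theta_t    # EQ7
    bb = 0.5 * (1 - hd / d1) - (d1 / h1) * cot_theta_b    # EQ8
    return bt, bb
```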
Various techniques for determining or estimating D1 are disclosed above, and the same techniques may be applied to the determining or estimating of D2. That is, D2 may be substituted by some predetermined, typical value, such as 0.5 meters. Alternatively, D2 may be determined or estimated more accurately provided that the front camera is a dual-lens or multi-lens camera and that a series of focusing operations is performed. Alternatively, D2 may be determined or estimated using a dedicated distance detection sensor. In addition, when display 1080 is a cell phone that is typically used by a single particular user, a size or area of head 1161 of the user on image 1155, as shown in FIG. 11, may be used to estimate D2.
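As one possible realization of the head-size approach, the sketch below uses the inverse-square relationship between apparent area and distance; the calibration constants (reference area and reference distance) are assumptions supplied by the implementer:

```python
def estimate_d2_from_head_area(head_area_px: float,
                               ref_area_px: float,
                               ref_distance_m: float = 0.5) -> float:
    """Apparent head area scales with 1/distance^2, so a single calibrated
    reference (head area measured at a known distance) yields D2 for any
    new frame of the same user."""
    return ref_distance_m * (ref_area_px / head_area_px) ** 0.5
```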
When the main camera is offset by a distance S1 from the center of the display, the cropping parameters may be adjusted accordingly:
(Bt/H1)=0.5*(1−Hd/D1)−(D1/H1)*cot θt+S1/H1 [EQ7S]
(Bb/H1)=0.5*(1−Hd/D1)−(D1/H1)*cot θb−S1/H1 [EQ8S]
Note that S1 is typically a known design parameter value.
Likewise, when the front camera is offset by a distance S2 from the center of the display, EQ2 may be adjusted as:
tan θ=(2*Et/H2−1−2*S2/H2)*tan(θ2/2) [EQ2S]
With EQ2S replacing EQ2, the cropping parameter equations EQ7 and EQ8 remain valid for spatial relationship 1300. Note that S2 is typically a known design parameter value.
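Combining the offset-adjusted equations, a sketch accounting for both camera offsets S1 and S2 might look as follows (a direct transcription of EQ2S, EQ7S, and EQ8S as printed, with angles assumed in degrees):

```python
import math

def crop_fractions_with_offsets(d1, d2, hd, theta1_deg, theta2_deg,
                                et_over_h2, s1, s2):
    """Crop fractions (Bt/H1, Bb/H1) when the main camera is offset by S1
    and the front camera by S2 from the center of the display."""
    t1 = math.radians(theta1_deg)
    t2 = math.radians(theta2_deg)
    h1 = 2 * d1 * math.tan(t1 / 2)                                     # EQ5
    h2 = 2 * d2 * math.tan(t2 / 2)                                     # EQ6
    tan_theta = (2 * et_over_h2 - 1 - 2 * s2 / h2) * math.tan(t2 / 2)  # EQ2S
    cot_theta_t = 0.5 * (hd / d2) + tan_theta                          # EQ3
    cot_theta_b = 0.5 * (hd / d2) - tan_theta                          # EQ4
    bt = 0.5 * (1 - hd / d1) - (d1 / h1) * cot_theta_t + s1 / h1       # EQ7S
    bb = 0.5 * (1 - hd / d1) - (d1 / h1) * cot_theta_b - s1 / h1       # EQ8S
    return bt, bb
```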
Transparency-simulating apparatus 1500 may include a special-purpose processor 1510 implemented in the form of one or more integrated-circuit (IC) chips and any supporting electronics, and may be installed in an electronic device or system carried by user 160, such as a computer (e.g., a personal computer, a tablet computer, a personal digital assistant (PDA)), a cell phone, a game console, and the like. In other words, transparency-simulating apparatus 1500 may be implemented in or as a portable device or a stationary device. Processor 1510 may be communicatively connected to various other operational components of transparency-simulating apparatus 1500 through communication bus 1590. Communication bus 1590 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal components of transparency-simulating apparatus 1500. For instance, transparency-simulating apparatus 1500 may also include a memory device 1520, and processor 1510 may communicatively connect to memory device 1520 through communication bus 1590. Memory device 1520 may be configured to store data, firmware and software programs therein. For example, memory device 1520 may store one or more sets of instructions such that, when processor 1510 executes the instructions, processor 1510 may be configured to receive data of images of a surrounding and of a user, to construct a see-through window for the image of the surrounding according to the methods described above, and to present the see-through window on a display.
In some embodiments, transparency-simulating apparatus 1500 may also include a main camera 1530 (such as camera 181 of FIG. 1) facing the surrounding and a front camera 1540 facing the user, each of which may be communicatively connected to processor 1510 through communication bus 1590.
In some embodiments, transparency-simulating apparatus 1500 may include a display 1550 (such as displays 480 and 580) which may be the display that the see-through window may be presented on. In some embodiments, either or both of main camera 1530 and front camera 1540 may be integrated with display 1550. In some embodiments, transparency-simulating apparatus 1500 may include an ambient light sensor 1560 that provides red-green-blue (RGB) data of the surrounding. Ambient light sensor 1560 may send the RGB data to processor 1510 through communication bus 1590. Processor 1510 may be configured to determine a color temperature setting based on either the image of the surrounding or RGB data from ambient light sensor 1560, and to present the see-through window on display 1550 with the color temperature setting.
In some embodiments, cameras 1530 and 1540 may adaptively and continually capture images and send them to processor 1510. Processor 1510 may be configured to adaptively and continually construct the see-through window and present it on the display. Accordingly, the display stays substantially transparent to the user, even in response to a relative movement of any of the user, the display, and the surrounding with respect to any other thereof.
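The adaptive update may be expressed as a simple loop, as sketched below; the camera and display interfaces (`capture`, `is_on`, `present`) and the `build_window` helper are hypothetical placeholders:

```python
def transparency_loop(main_cam, front_cam, display, build_window):
    """Continually rebuild and present the see-through window so the
    display stays substantially transparent as the user, the device, or
    the surrounding moves."""
    while display.is_on():
        surrounding_img = main_cam.capture()   # image of the surrounding
        user_img = front_cam.capture()         # image of the user
        window = build_window(surrounding_img, user_img)
        display.present(window)
```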
In some embodiments, the image may be associated with a viewing angle, and may be captured by main camera 1530 or front camera 1540 with the viewing angle. In constructing the see-through window, processor 1510 may be configured to perform a number of operations. For instance, processor 1510 may determine a first spatial relationship denoting a location of the surrounding with respect to display 1550. Processor 1510 may also determine a second spatial relationship denoting a location of the user with respect to display 1550. Processor 1510 may also compute a set of cropping parameters, a set of deforming parameters, or both, based on the first spatial relationship, the second spatial relationship, the viewing angle of the image, and a dimension of display 1550. Processor 1510 may apply the set of cropping parameters, the set of deforming parameters, or both, to the image to generate the see-through window.
In some embodiments, in determining the first spatial relationship, processor 1510 may be configured to determine the first spatial relationship using a predetermined first distance, with the first distance denoting the location of the surrounding with respect to display 1550. Moreover, in determining the second spatial relationship, processor 1510 may be configured to determine the second spatial relationship using a predetermined second distance and a predetermined set of angles, with the second distance and the set of angles collectively denoting the location of the user with respect to display 1550.
In some embodiments, the image of the surrounding may be a first image, and the viewing angle of the image may be a first viewing angle. Processor 1510 may be further configured to receive data of an image of the user from front camera 1540. The image of the user may be a second image, and may be associated with a second viewing angle. The second image may be captured by front camera 1540 with the second viewing angle.
In some embodiments, in computing the set of cropping parameters, the set of deforming parameters, or both, processor 1510 may be configured to compute the set of cropping parameters, the set of deforming parameters, or both, based on the first spatial relationship, the second spatial relationship, the first viewing angle, the dimension of display 1550, and the second viewing angle.
In some embodiments, in determining the first spatial relationship, processor 1510 may be configured to determine the first spatial relationship using a first distance, the first distance denoting the location of the surrounding with respect to display 1550. The first distance may be estimated either by main camera 1530 performing focusing operations on the surrounding or by processor 1510 analyzing the first image. Moreover, in determining the second spatial relationship, processor 1510 may be configured to determine the second spatial relationship using a second distance and a set of angles, the second distance and the set of angles collectively denoting the location of the user with respect to display 1550. The second distance and the set of angles may be estimated either by front camera 1540 performing focusing operations on the user or by processor 1510 analyzing the second image.
In some embodiments, in analyzing the second image, processor 1510 may be configured to determine positions of eyes of the user, a spacing between the eyes of the user, an area of a head of the user as captured in the second image, or a combination of two or more thereof, by applying one or more face detection techniques to the second image.
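One way to obtain eye positions and head area is with stock OpenCV Haar cascades, as in this sketch (the cascade files named here ship with the opencv-python package; the detection thresholds are illustrative):

```python
import cv2

def eye_geometry(user_img):
    """Locate the user's face and eyes; returns eye centers in image
    coordinates plus the face-box area (usable for D2 estimation)."""
    gray = cv2.cvtColor(user_img, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Eye boxes are reported relative to the face region; convert the box
    # centers to absolute image coordinates.
    eyes = [(x + ex + ew // 2, y + ey + eh // 2)
            for ex, ey, ew, eh in eye_cascade.detectMultiScale(gray[y:y+h, x:x+w])]
    return {"eyes": eyes, "head_area": w * h}
```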
In some embodiments, the image may be a preview image. In such cases, in constructing the see-through window, processor 1510 may be configured to perform a number of operations. For instance, processor 1510 may determine a first spatial relationship denoting a location of the surrounding with respect to display 1550, and also determine a second spatial relationship denoting a location of the user with respect to display 1550. Processor 1510 may compute a first set of cropping parameters, a first set of deforming parameters, or both, based on the first spatial relationship, the second spatial relationship, a viewing angle of the preview image, and a dimension of display 1550. Processor 1510 may determine an optical zoom setting of main camera 1530, which captures the image, based on the first set of cropping parameters, the first set of deforming parameters, or both, such that the optical zoom setting maximizes a pixel resolution of the see-through window. Processor 1510 may receive data of a zoomed image of the surrounding from the camera, with the optical zoom setting applied to the camera. Processor 1510 may compute a second set of cropping parameters, a second set of deforming parameters, or both, based on the first spatial relationship, the second spatial relationship, a viewing angle of the zoomed image, and the dimension of display 1550. Processor 1510 may apply the second set of cropping parameters, the second set of deforming parameters, or both, to the zoomed image to generate the see-through window.
In some embodiments, in presenting the see-through window on display 1550, processor 1510 may be configured to determine a color temperature setting for the see-through window. Additionally, processor 1510 may present the see-through window on display 1550 with the color temperature setting.
In some embodiments, in presenting the see-through window on display 1550, processor 1510 may be configured to blur at least a part of the see-through window to create a second visual effect of a substantially single depth of focus of human eyes. Moreover, processor 1510 may present the see-through window on display 1550 with the second visual effect.
In some embodiments, in presenting the see-through window on display 1550, processor 1510 may be configured to determine a transparency setting for the see-through window. Moreover, processor 1510 may present the see-through window on display 1550 with the transparency setting along with one or more other displaying objects. The one or more other displaying objects may include one or more icons, one or more buttons, one or more GUI objects, or one or more AR objects.
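The transparency setting may be applied as a standard alpha blend, as in this sketch; it assumes the GUI/AR overlay has already been rendered to an image of the same size as the see-through window:

```python
import numpy as np

def blend(window: np.ndarray, overlay: np.ndarray, alpha: float) -> np.ndarray:
    """Blend GUI/AR content over the see-through window; alpha is the
    transparency setting (0.0 = window only, 1.0 = overlay only)."""
    out = (window.astype(np.float32) * (1.0 - alpha)
           + overlay.astype(np.float32) * alpha)
    return out.clip(0, 255).astype(np.uint8)
```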
At 1610, process 1600 may involve processor 1510 capturing a first image (such as image 699) of a surrounding (such as surrounding 170 or 470) of a display (such as display 180) with a first camera (such as main camera 181), the first image having a first viewing angle (such as viewing angle 710 of FIG. 7). Process 1600 may also involve processor 1510 capturing a second image of the user with a second camera (such as front camera 1540), with the second image having a second viewing angle. Block 1610 may be followed by block 1620.
At 1620, process 1600 may involve processor 1510 constructing a see-through window (such as see-through window 490 or 690) of the first image. When presented on the display, the see-through window substantially matches the surrounding and creates a visual effect with which at least a portion of the display may be substantially transparent to the user (such as the visual effect with which at least a portion of display 480 may be substantially transparent with see-through window 490 presented). Block 1620 may begin with sub-block 1630.
At 1630, process 1600 may involve processor 1510 estimating a first distance (such as D1 of FIG. 9) denoting a location of the surrounding with respect to the display, thereby determining the first spatial relationship.
At 1635, process 1600 may involve processor 1510 estimating a second distance (such as D2 of FIG. 9) and a set of angles collectively denoting a location of the user with respect to the display, thereby determining the second spatial relationship.
At 1640, process 1600 may involve processor 1510 computing a set of cropping parameters (such as 881, 882, 883 and 884 of cropping process 810), a set of deforming parameters, or a combination of both (such as cropping-and-deforming process 820). The computing of the cropping parameters, the deforming parameters, or a combination of both, may be based on the first spatial relationship (such as D1 of FIG. 9), the second spatial relationship, the first viewing angle, the second viewing angle, and a dimension of the display.
At 1645, process 1600 may involve processor 1510 applying the set of cropping parameters, the set of deforming parameters, or both, to the first image (such as image 699) to generate the see-through window (such as see-through window 690 of FIG. 6).
At 1650, process 1600 may involve processor 1510 presenting the see-through window on the display. When presented on the display, the see-through window substantially matches the surrounding and creates a visual effect with which at least a portion of the display may be substantially transparent to the user (such as the visual effect with which display 480 may be substantially transparent with see-through window 490 presented). Block 1650 may begin with sub-block 1660.
At 1660, process 1600 may involve processor 1510 determining a color temperature setting for the see-through window. For example, a color temperature setting of see-through window 590 of FIG. 5 may be determined and/or adjusted such that see-through window 590 has a color temperature closer to that of surrounding 570.
At 1665, process 1600 may involve processor 1510 blurring at least a part of the see-through window to create a visual effect of a substantially single depth of focus of human eyes. For example, processor 1510 may blur, for see-through window 490 of FIG. 4, an area around or encompassing partial image 4739 (the pedestrian) while leaving image 4749 (the dog) unblurred, thereby mimicking a single depth of focus with the focus on the dog.
At 1670, process 1600 may involve processor 1510 determining a transparency setting for the see-through window. For example, processor 1510 may determine and/or adjust a transparency setting of see-through window 590 of FIG. 5 such that see-through window 590 may be blended with one or more other displaying objects.
At 1675, process 1600 may involve processor 1510 presenting the see-through window on the display. In some embodiments, process 1600 may involve processor 1510 presenting the see-through window on the display with the color temperature setting determined in sub-block 1660. In some embodiments, process 1600 may involve processor 1510 presenting the see-through window on the display with the visual effect of the substantially single depth of focus of human eyes as created in sub-block 1665. In some embodiments, process 1600 may involve processor 1510 presenting the see-through window on the display with the transparency setting determined in sub-block 1670 along with one or more other displaying objects. The one or more other displaying objects may include one or more icons, one or more buttons, one or more GUI objects, or one or more AR objects. Process 1600 may return from 1675 to 1610, forming a process loop therein.
By forming the process loop, process 1600 may involve processor 1510 adaptively and continually repeating the capturing of the first and second images, the constructing of the see-through window, and the presenting of the see-through window on the display such that the display appears to be substantially transparent to the user in response to a relative movement of any of the user, the display, and the surrounding with respect to any other thereof.
In some embodiments, in constructing the see-through window, process 1600 may involve a number of operations (e.g., performed by apparatus 1500). For instance, process 1600 may involve determining a first spatial relationship denoting a location of the surrounding with respect to the display. Process 1600 may also involve determining a second spatial relationship denoting a location of the user with respect to the display. Process 1600 may further involve computing a set of cropping parameters, a set of deforming parameters, or a combination of both, based on the first spatial relationship, the second spatial relationship, the first viewing angle, the second viewing angle, and a dimension of the display. Process 1600 may additionally involve applying the set of cropping parameters, the set of deforming parameters, or both, to the first image to generate the see-through window.
In some embodiments, in determining the first spatial relationship, process 1600 may involve estimating a first distance using main camera 1530, the first distance denoting the location of the surrounding with respect to the display. Additionally, in determining the second spatial relationship, process 1600 may involve estimating a second distance and a set of angles using either or both of the second camera (e.g., front camera 1540) and the second image, with the second distance and the set of angles collectively denoting the location of the user with respect to the display.
In some embodiments, each of the first and second cameras may be integrated with the display. In such cases, the computing of the set of cropping parameters, the set of deforming parameters, or both, may be further based on a respective offset between each of the first and second cameras and a center of the display.
In some embodiments, in presenting the see-through window on the display, process 1600 may involve determining a color temperature setting for the see-through window. Process 1600 may also involve presenting the see-through window on the display with the color temperature setting.
In some embodiments, in presenting the see-through window on the display, process 1600 may involve blurring at least a part of the see-through window to create a second visual effect of a substantially single depth of focus of human eyes. Moreover, process 1600 may involve presenting the see-through window on the display with the second visual effect.
In some embodiments, in presenting the see-through window on the display, process 1600 may involve determining a transparency setting for the see-through window. Furthermore, process 1600 may involve presenting the see-through window on the display with the transparency setting along with one or more other displaying objects. The one or more other displaying objects may include one or more icons, one or more buttons, one or more GUI objects, or one or more AR objects.
In some embodiments, process 1600 may also involve adaptively and continually repeating the capturing of the first and second images, the constructing of the see-through window, and the presenting of the see-through window on the display such that the display appears to be substantially transparent to the user in response to a relative movement of any of the user, the display, and the surrounding with respect to any other thereof.
The herein-described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
Further, with respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
Moreover, it will be understood by those skilled in the art that, in general, terms used herein, and especially in the appended claims, e.g., bodies of the appended claims, are generally intended as “open” terms, e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc. It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to implementations containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an,” e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more;” the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number, e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations. Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention, e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc. In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention, e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc. It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
From the foregoing, it will be appreciated that various implementations of the present disclosure are described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various implementations disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
The present disclosure claims the priority benefit of U.S. Provisional Patent Application Ser. No. 62/242,434, filed on 16 Oct. 2015, which is incorporated by reference in its entirety.