The present disclosure relates to a video user interface for an electronic device and an associated method for use in determining depth information relating to a scene which is disposed in front of a display of the electronic device using a front-facing camera disposed behind the display, wherein the front-facing camera comprises an image sensor and a lens.
It is known to use front-facing cameras and 3D sensors provided with a mobile electronic device such as a mobile phone for taking “selfie” pictures, making video calls, and unlocking the mobile electronic device with facial biometry information. However, in order to accommodate such front-facing cameras and 3D sensors, known mobile electronic devices generally require a dedicated area, such as a notch on a front surface of the mobile electronic device, thereby reducing the area of the front surface of the mobile electronic device which is available for displaying images. For example, referring to
Similarly, mobile electronic devices which incorporate a 3D sensor including a projector and a detector such as a camera also require a dedicated area, such as a notch on a front surface of the mobile electronic device, thereby reducing the area of the front surface of the mobile electronic device which is available for displaying images. In order to accommodate front-facing cameras and 3D sensors, some known mobile electronic devices even comprise articulated or pop-up parts which increase the complexity of the mobile electronic devices.
According to a first aspect of the present disclosure there is provided a video user interface for an electronic device for use in determining depth information relating to a scene, the video user interface comprising: a display; a spatial filter defining a coded aperture; an image sensor; and a lens, wherein the image sensor is configured to capture an image of the scene disposed in front of the display through the coded aperture and the lens.
The video user interface may comprise a processing resource which is configured to determine depth information relating to each of one or more regions of the scene based at least in part on the captured image and calibration data.
It should be understood that the method used by the processing resource to determine depth information relating to each of one or more regions of the scene is described in more detail in Sections 3, 4 and 5 of “Image and Depth from a Conventional Camera with a Coded Aperture”, Levin et al., ACM Transactions on Graphics, Vol. 26, No. 3, Article 70, pp. 70-1 to 70-9, which is incorporated herein by reference in its entirety. Moreover, it should be understood that protection may be sought for any of the features described in Sections 3, 4 and 5 of Levin et al.
The calibration data may comprise a plurality of calibration images of a plurality of calibration scenes and a corresponding plurality of measured depth values, wherein each calibration scene includes a point light source located at a different one of the measured depths and each calibration scene is captured by the image sensor through the coded aperture and the lens.
The measured depth value of the point light source in a corresponding calibration scene may comprise a measured distance from any part of the video user interface to the point light source in the corresponding calibration scene. For example, the measured depth value of the point light source in the corresponding calibration scene may comprise a distance from the lens of the video user interface to the point light source in the corresponding calibration scene. The measured depth value of the point light source in the corresponding calibration scene may comprise a distance from a focal plane of the lens of the video user interface to the point light source in the corresponding calibration scene, wherein the focal plane of the lens is defined such that different light rays which emanate from a point in the focal plane of the lens are focused to the same point on the image sensor.
The relative positions of the spatial filter, the image sensor and the lens when the image sensor captures the images of the point light source in the corresponding calibration scene through the coded aperture and the lens for the generation of the calibration data may be the same as the relative positions of the spatial filter, the image sensor and the lens when the image sensor captures the image of the scene through the coded aperture and the lens.
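By way of a purely illustrative sketch, the calibration data described above may be organised in software as a collection of calibration images paired with their measured depth values. The class name, field names and use of NumPy arrays below are assumptions introduced for illustration only and are not taken from the disclosure.

```python
# A minimal sketch of one way the calibration data described above could be
# held in software. Each calibration image is an image of a point light source
# captured through the coded aperture and the lens, and each entry is paired
# with the measured depth of that point light source. Names are illustrative.
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class CalibrationData:
    """Calibration images of a point light source and their measured depths."""
    calibration_images: List[np.ndarray] = field(default_factory=list)
    measured_depths: List[float] = field(default_factory=list)  # e.g. metres from the lens

    def add(self, image: np.ndarray, depth: float) -> None:
        """Record one calibration image together with its measured depth value."""
        self.calibration_images.append(image.astype(np.float64))
        self.measured_depths.append(depth)
```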
The captured image of the scene and the depth information relating to each region of the scene may together constitute a depth image or a depth map of the scene.
The processing resource may be configured to generate a depth image of the scene based on the determined depth information relating to each of one or more regions of the scene.
The depth information relating to each of the one or more regions of the scene may comprise a distance from any part of the video user interface to each of the one or more regions of the scene. For example, the depth information relating to each of one or more regions of the scene may comprise a distance from the lens of the video user interface to each of one or more regions of the scene. The depth information relating to each of one or more regions of the scene may comprise a distance from a focal plane of the lens of the video user interface to each of one or more regions of the scene, wherein the focal plane of the lens is defined such that different light rays which emanate from a point in the focal plane of the lens are focused to the same point on the image sensor.
The video user interface may be suitable for use in the generation of a depth image of the scene without sacrificing any area of the display, or by sacrificing a reduced area of the display relative to conventional video user interfaces which incorporate conventional 3D sensors. Such a video user interface may allow the generation of depth information relating to the scene using a single image sensor i.e. without any requirement for two or more image sensors, or without any requirement for a projector and an image sensor.
The video user interface may be imperceptible to, or may be effectively hidden from, a user of the electronic device.
The video user interface may be suitable for use in the recognition of one or more features in the scene. For example, such a video user interface may be suitable for use in the recognition of one or more features of a user such as one or more facial features of a user of the electronic device in the scene, for example for facial unlocking of the electronic device. Such a video user interface may allow emojis, or one or more other virtual elements, to be superimposed on top of an image of the scene captured by the image sensor through the coded aperture and the lens. Such a video user interface may allow the generation of an improved “selfie” image captured by the image sensor through the coded aperture and the lens. Such a video user interface may allow emojis, or one or more other virtual elements, to be superimposed on top of the “selfie” image captured by the image sensor through the coded aperture and the lens.
The spatial filter may comprise a binary spatial filter.
The spatial filter may comprise a plurality of spatial filter pixels, wherein the plurality of spatial filter pixels defines the coded aperture.
The spatial filter may comprise a plurality of opaque spatial filter pixels.
The plurality of opaque spatial filter pixels may define one or more gaps therebetween, wherein the one or more gaps define the coded aperture.
The spatial filter may comprise a plurality of transparent spatial filter pixels, wherein the plurality of transparent spatial filter pixels define the coded aperture.
At least some of the opaque spatial filter pixels may be interconnected or contiguous.
All of the opaque spatial filter pixels may be interconnected or contiguous. Imposing the constraint that all of the opaque spatial filter pixels of the candidate coded aperture are interconnected may make manufacturing of the spatial filter which defines the coded aperture easier or simpler, or may facilitate manufacturing of the spatial filter which defines the coded aperture according to a specific manufacturing process.
At least some of the opaque spatial filter pixels may be non-contiguous.
At least some of the transparent spatial filter pixels may be interconnected or contiguous.
At least some of the transparent spatial filter pixels may be non-contiguous.
The spatial filter may comprise a 2D array of spatial filter pixels, wherein the 2D array of spatial filter pixels defines the coded aperture.
The spatial filter may comprise a uniform 2D array of spatial filter pixels, wherein the uniform 2D array of spatial filter pixels defines the coded aperture.
The spatial filter may comprise an n×n array of spatial filter pixels, wherein the spatial filter pixels define the coded aperture and wherein n is an integer. n may be less than or equal to 100, n may be less than or equal to 20, n may be less than or equal to 15, n may be less than or equal to 13 and/or n may be less than or equal to 11.
The spatial filter may comprise an n×m array of spatial filter pixels, wherein the spatial filter pixels define the coded aperture and wherein n and m are integers. m may be less than or equal to 100, m may be less than or equal to 20, m may be less than or equal to 15, m may be less than or equal to 13 and/or m may be less than or equal to 11.
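Purely by way of illustration, such a binary spatial filter may be represented in software as an n×n array of spatial filter pixels. The 7×7 pattern below is arbitrary and is not a coded aperture taken from the disclosure or from Levin et al.; it merely shows the data representation assumed by the later sketches.

```python
# Illustrative binary spatial filter represented as an n x n array of spatial
# filter pixels, where 1 denotes a transparent pixel and 0 an opaque pixel.
# The pattern itself is arbitrary and not taken from the disclosure.
import numpy as np

coded_aperture = np.array([
    [0, 0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 1, 1, 0],
], dtype=np.uint8)

# The transparent pixels (value 1) define the coded aperture; here n = 7,
# which satisfies, for example, the n <= 11 bound mentioned above.
assert coded_aperture.shape[0] == coded_aperture.shape[1] <= 11
```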
The display may comprise a light emitting diode (LED) display such as an organic light emitting diode (OLED) display.
The display and the image sensor may be synchronized so that the display emits light and the image sensor captures the image of the scene at different times. Synchronization of the display and the image sensor in this way may avoid any light from the display being captured by the image sensor to thereby prevent light from the display altering, corrupting or obfuscating the captured image of the scene.
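A minimal sketch of such synchronization is given below. The display.off(), display.on() and sensor.capture() calls are hypothetical driver functions introduced for illustration; they are not part of any real API, and the settling delay and exposure time are arbitrary assumptions.

```python
# Sketch of time-multiplexing the display and the image sensor so that the
# sensor does not integrate light emitted by the display. All driver calls
# below are hypothetical placeholders.
import time


def capture_with_display_blanked(display, sensor, exposure_s: float = 0.02):
    """Blank the display, expose the image sensor, then restore the display."""
    display.off()                       # hypothetical: stop light emission
    time.sleep(0.001)                   # assumed settling time for emission to decay
    frame = sensor.capture(exposure_s)  # hypothetical: integrate for exposure_s seconds
    display.on()                        # hypothetical: resume displaying images
    return frame
```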
The display may be at least partially transparent. The spatial filter may be disposed behind the display.
An area of the display may be at least partially transparent. The spatial filter may be disposed behind the at least partially transparent area of the display.
The spatial filter may be disposed between the display and the lens.
The spatial filter may be disposed between the lens and the image sensor.
The spatial filter may be integrated with the lens, for example wherein the spatial filter is integrated within a body of the lens or disposed on a surface of the lens.
The spatial filter may be disposed on a rear surface of the display on an opposite side of the display to the scene.
The display may define the spatial filter.
The display may comprise one or more at least partially transparent areas and one or more at least partially opaque areas. The spatial filter may be defined by the one or more at least partially transparent areas and the one or more at least partially opaque areas of the display. The plurality of spatial filter pixels may be defined by the one or more at least partially transparent areas and the one or more at least partially opaque areas of the display.
The one or more at least partially transparent areas of the display and/or the one or more at least partially opaque areas of the display may be temporary or transitory.
The display may comprise a plurality of light emitting pixels.
The light emitting pixels may define the spatial filter.
The light emitting pixels may define the one or more at least partially transparent areas of the display and/or the one or more at least partially opaque areas of the display.
The display may comprise one or more gaps between the light emitting pixels.
The one or more gaps between the light emitting pixels may define the spatial filter.
The one or more gaps between the light emitting pixels may define the one or more at least partially transparent areas of the display and/or the one or more at least partially opaque areas of the display.
Using the display to define the spatial filter may have a minimal impact upon a quality of an image displayed by the display. Using the display to define the spatial filter may be imperceptible to a user of the electronic device.
The image sensor may be sensitive to visible light, for example the image sensor may comprise a visible image sensor or an RGB image sensor.
The image sensor may be sensitive to infra-red light such as near infra-red (NIR) light, for example the image sensor may comprise an infra-red image sensor.
The video user interface may comprise a plurality of image sensors. For example, the video user interface may comprise an infra-red image sensor defined by, or disposed behind, the display for use in generating a depth image of a scene disposed in front of the display and a separate visible image sensor defined by, or disposed behind, the display for capturing conventional images of the scene disposed in front of the display.
The video user interface may comprise a source, emitter or projector of infra-red light for illuminating the scene with infra-red light. The source, emitter or projector of infra-red light may be disposed behind the display. Use of a source, emitter or projector of infra-red light in combination with an infra-red image sensor for use in generating a depth image of a scene disposed in front of the display may provide improved depth information relating to the scene.
The geometry of the coded aperture may be selected so as to maximize a divergence parameter value.
The divergence parameter may be defined so that the greater the divergence parameter value calculated for a given coded aperture geometry, the better the discrimination that may be achieved between regions of different depths in the image of the scene captured by the image sensor when using the given coded aperture geometry. Consequently, selecting the coded aperture geometry so as to maximize the calculated divergence parameter value provides the maximum level of depth discrimination.
The coded aperture geometry may be selected by: generating, for example randomly generating, a plurality of different candidate coded aperture geometries; calculating a divergence parameter value for each candidate coded aperture geometry; and selecting the candidate coded aperture geometry which has the maximum calculated divergence parameter value.
Calculating the divergence parameter value for each candidate coded aperture geometry may comprise: applying a plurality of different scale factor values to the geometry of the candidate coded aperture to obtain a plurality of scaled versions of the candidate coded aperture; calculating a divergence parameter value for each different pair of scaled versions of the candidate coded aperture selected from the plurality of scaled versions of the candidate coded aperture; and identifying the divergence parameter value for the candidate coded aperture geometry as the minimum divergence parameter value calculated for any different pair of scaled versions of the candidate coded aperture.
The plurality of different scale factor values may be selected from a predetermined range of scale factor values, wherein each scale factor value corresponds to a different depth of a point source in a corresponding calibration scene selected from a predetermined range of depths of the point source.
Calculating the divergence parameter value for each different pair of scaled versions of the candidate coded aperture may comprise calculating the divergence parameter value based on a statistical blurry image intensity distribution for each of the two scaled versions of the candidate coded aperture of each different pair of scaled versions of the candidate coded aperture.
The divergence parameter may comprise a Kullback-Leibler divergence parameter D_{KL} defined by:

D_{KL}[P_{k1}(y), P_{k2}(y)] = ∫_y P_{k1}(y) [log P_{k1}(y) − log P_{k2}(y)] dy

where y is a blurry image, such as a simulated blurry image, of a point light source captured by the image sensor through the candidate coded aperture, and P_{k1}(y) and P_{k2}(y) are the statistical blurry image intensity distributions of the blurry image y at different scale factor values k1 and k2 corresponding to different depths of the point light source in a scene. Each of the statistical blurry image intensity distributions P_{k1}(y) and P_{k2}(y) may follow a Gaussian distribution.
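One way in which D_{KL} might be evaluated in practice is sketched below, under the simplifying assumption (in the spirit of Section 2 of Levin et al.) that the blurry image y is zero-mean Gaussian with independent frequency components, whose per-frequency variance is |F_k(ω)|² σ_x² + η², where F_k is the Fourier transform of the blur kernel given by the scaled coded aperture, σ_x² is a prior variance on the sharp image and η² is the sensor noise variance. The factorisation, function names and parameter values are assumptions introduced for illustration and are not a verbatim reproduction of the analysis of Levin et al.

```python
# Sketch: Kullback-Leibler divergence between the blurry-image distributions
# P_k1(y) and P_k2(y), each modelled as a zero-mean Gaussian whose frequency
# components are independent. The closed form for zero-mean Gaussians,
# 0.5 * (v1/v2 - 1 + ln(v2/v1)), is summed over the independent components.
import numpy as np


def blurry_variances(kernel: np.ndarray, shape, sigma_x2: float, eta2: float) -> np.ndarray:
    """Per-frequency variance of the blurry image for one blur kernel."""
    F = np.fft.fft2(kernel, s=shape)
    return (np.abs(F) ** 2) * sigma_x2 + eta2


def kl_divergence(kernel_k1, kernel_k2, shape=(64, 64), sigma_x2=1.0, eta2=1e-3):
    """D_KL[P_k1(y), P_k2(y)] for the Gaussian model described above."""
    v1 = blurry_variances(kernel_k1, shape, sigma_x2, eta2)
    v2 = blurry_variances(kernel_k2, shape, sigma_x2, eta2)
    return 0.5 * np.sum(v1 / v2 - 1.0 + np.log(v2 / v1))
```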
It should be understood that the method described above for selecting the geometry of the coded aperture is described in more detail in Section 2 of Levin et al. and that protection may be sought for any of the features described in Section 2 of Levin et al.
According to an aspect of the present disclosure there is provided an electronic device comprising the video user interface as described above.
The electronic device may be mobile and/or portable. For example, the electronic device may comprise a phone such as a mobile phone, a cell phone, or a smart phone, or the electronic device may comprise a tablet or a laptop.
According to an aspect of the present disclosure there is provided a method for use in determining depth information relating to a scene using a video user interface, wherein the video user interface comprises a display, a spatial filter defining a coded aperture, an image sensor and a lens, and the method comprises: using the image sensor to capture an image of the scene disposed in front of the display through the coded aperture and the lens.
The method may comprise determining depth information relating to each of one or more regions of the scene based at least in part on the captured image and calibration data.
The calibration data may comprise a plurality of calibration images of a plurality of calibration scenes and a corresponding plurality of measured depth values, wherein each calibration scene includes a point light source located at a different one of the measured depths and each calibration scene is captured by the image sensor through the coded aperture and the lens.
The measured depth of the point light source in the corresponding calibration scene may comprise a measured distance from any part of the video user interface to the point light source in the corresponding calibration scene. For example, the measured depth of the point light source in the corresponding calibration scene may comprise a measured distance from the lens of the video user interface to the point light source in the corresponding calibration scene. The measured depth of the point light source in the corresponding calibration scene may comprise a measured distance from a focal plane of the lens of the video user interface to the point light source in the corresponding calibration scene, wherein the focal plane of the lens is defined such that different light rays which emanate from a point in the focal plane of the lens are focused to the same point on the image sensor.
The relative positions of the spatial filter, the image sensor and the lens when the image sensor captures the images of the point light source in the corresponding calibration scene through the coded aperture and the lens for the generation of the calibration data may be the same as the relative positions of the spatial filter, the image sensor and the lens when the image sensor captures the image of the scene through the coded aperture and the lens.
The captured image of the scene and the depth information relating to each region of the scene may together constitute a depth image or a depth map of the scene.
The depth information relating to each of the one or more regions of the scene may comprise a distance from any part of the video user interface to each of the one or more regions of the scene. For example, the depth information relating to each of one or more regions of the scene may comprise a distance from the lens of the video user interface to each of one or more regions of the scene. The depth information relating to each of one or more regions of the scene may comprise a distance from a focal plane of the lens of the video user interface to each of one or more regions of the scene, wherein the focal plane of the lens is defined such that different light rays which emanate from a point in the focal plane of the lens are focused to the same point on the image sensor.
The method may comprise:
Deblurring the captured image y of the scene using each calibration image f_k may comprise deconvolving the captured image y of the scene using each calibration image f_k.
It should be understood that the method for use in generating the depth image of the scene is described in more detail in Sections 3, 4 and 5 of Levin et al. and that protection may be sought for any of the features described in Sections 3, 4 and 5 of Levin et al.
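A rough sketch of the deblur-and-compare idea is given below. For simplicity it uses Wiener deconvolution in place of the sparse-prior deconvolution of Levin et al., and it scores each calibrated depth by how well re-blurring the deconvolved image with the corresponding calibration image f_k reproduces the captured image y; the scoring rule and parameter values are assumptions, not the method of the disclosure. In practice the scoring would be carried out per region of the scene rather than globally, so that each region receives its own depth estimate.

```python
# Sketch: deconvolve the captured image y with each calibration image (blur
# kernel) f_k, re-blur the result, and select the calibrated depth whose
# kernel best re-explains y. Wiener deconvolution is used as a simplification.
import numpy as np
from scipy.signal import fftconvolve


def wiener_deconvolve(y: np.ndarray, f_k: np.ndarray, nsr: float = 1e-2) -> np.ndarray:
    """Deconvolve y with kernel f_k using a simple Wiener filter."""
    F = np.fft.fft2(f_k, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(F) * Y / (np.abs(F) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))


def estimate_depth(y: np.ndarray, calibration_images, measured_depths) -> float:
    """Return the calibrated depth whose kernel best re-explains the image y."""
    errors = []
    for f_k in calibration_images:
        x_k = wiener_deconvolve(y, f_k)
        y_k = fftconvolve(x_k, f_k, mode="same")   # re-blur the deconvolved estimate
        errors.append(np.mean((y - y_k) ** 2))     # residual reconstruction error
    return measured_depths[int(np.argmin(errors))]
```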
The method may comprise using the determined depth information relating to each of one or more regions of the scene to generate an all-focus image of the scene. It should be understood that the all-focus image of the scene may be generated using the method described at Section 5.2 of Levin et al. and that protection may be sought for any of the features described in Section 5.2 of Levin et al.
Similarly, the method may comprise using the determined depth information relating to each of one or more regions of the scene to generate a re-focused image of the scene. It should be understood that the re-focused image of the scene may be generated using the method described at Section 5.4 of Levin et al. and that protection may be sought for any of the features described in Section 5.4 of Levin et al.
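A minimal sketch of composing an all-focus image from such depth information is given below: each pixel is taken from the deconvolution computed with the blur kernel of its estimated depth. This is a simplification of the approach of Section 5.2 of Levin et al.; the per-pixel label map and the stack of deconvolved images are assumed to have been produced as in the previous sketch, and a re-focused image could similarly be built by re-blurring each region with a kernel corresponding to a chosen target depth.

```python
# Sketch: compose an all-focus image by selecting, for every pixel, the value
# from the deconvolved image corresponding to that pixel's estimated depth.
import numpy as np


def compose_all_focus(deconvolved_stack: np.ndarray, depth_labels: np.ndarray) -> np.ndarray:
    """deconvolved_stack: (K, H, W) deconvolved images; depth_labels: (H, W) indices in [0, K)."""
    _, H, W = deconvolved_stack.shape
    rows, cols = np.indices((H, W))
    return deconvolved_stack[depth_labels, rows, cols]
```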
The method may comprise performing a calibration procedure to generate the calibration data.
Performing the calibration procedure may comprise:
It should also be understood that the calibration procedure is described in more detail in Section 5.1 of Levin et al. and that protection may be sought for any of the features described in Section 5.1 of Levin et al.
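Purely as an illustration, the calibration procedure may be carried out along the lines of the sketch below: a point light source is placed at each of a series of measured depths and an image of it is captured through the coded aperture and the lens at each depth. The capture_point_source_image() helper is a hypothetical stand-in for the physical capture step, the depth range is an arbitrary assumption, and the normalisation of each calibration image to unit sum is a conventional treatment of blur kernels rather than a step stated in the disclosure.

```python
# Sketch of generating the calibration data: one calibration image of the
# point light source per measured depth, paired with that measured depth.
import numpy as np


def run_calibration(capture_point_source_image, depths_m=np.linspace(0.2, 1.0, 9)):
    """Capture one calibration image per measured depth of the point light source."""
    calibration_images, measured_depths = [], []
    for depth in depths_m:
        image = capture_point_source_image(depth)      # hypothetical capture call
        calibration_images.append(image / image.sum()) # normalise as a blur kernel (assumption)
        measured_depths.append(float(depth))
    return calibration_images, measured_depths
```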
The method may comprise generating a depth image of the scene based on the determined depth information relating to each of one or more regions of the scene.
According to an aspect of the present disclosure there is provided a method for recognizing one or more features in a scene, comprising:
The one or more features in the scene may comprise one or more features of a user of the electronic device in the scene.
The one or more features in the scene may comprise one or more facial features of a user of the electronic device in the scene.
According to an aspect of the present disclosure there is provided a method for unlocking an electronic device, the method comprising:
The method may comprise unlocking the electronic device in response to recognizing one or more features of a user of the electronic device in the scene.
The method may comprise unlocking the electronic device in response to recognizing one or more facial features of a user of the electronic device in the scene.
It should be understood that any one or more of the features of any one of the foregoing aspects of the present disclosure may be combined with any one or more of the features of any of the other foregoing aspects of the present disclosure.
A video user interface for an electronic device and associated methods will now be described by way of non-limiting example only with reference to the accompanying drawings of which:
Referring initially to
Referring to
In use, the image sensor 116 captures an image of the scene 130 disposed in front of the mobile electronic device 102 through the display 108, the coded aperture of the spatial filter 118, and the lens 114, and the processing resource 120 processes the image captured by the image sensor 116 to determine depth information relating to each of one or more regions of the scene 130.
The processing resource 120 synchronizes the display 108 and the image sensor 116 so that the display 108 emits light and the image sensor 116 captures the image of the scene 130 at different times. Synchronization of the display 108 and the image sensor 116 in this way may avoid any light from the display 108 being captured by the image sensor 116 to thereby prevent light from the display 108 altering, corrupting or obfuscating the captured image of the scene 130.
The image of the scene 130 captured by the image sensor 116 and the depth information relating to each region of the scene 130 may together constitute a depth image or a depth map of the scene 130. The depth information relating to each of the one or more regions of the scene 130 may comprise a distance from any part of the video user interface 104 to each of the one or more regions of the scene 130. For example, the depth information relating to each of one or more regions of the scene 130 may comprise a distance from the lens 114 of the video user interface 104 to each of one or more regions of the scene 130. The depth information relating to each of one or more regions of the scene 130 may comprise a distance from a focal plane of the lens 114 of the video user interface 104 to each of one or more regions of the scene 130, wherein the focal plane of the lens 114 is defined such that different light rays which emanate from a point in the focal plane of the lens 114 are focused to the same point on the image sensor 116.
As will be described in more detail below, the processing resource 120 is configured to determine depth information relating to each of one or more regions of the scene 130 based at least in part on the image of the scene 130 captured by the image sensor 116 and calibration data. As will be understood by one skilled in the art, the spatial filter 118 allows light to reach the image sensor 116 in a specifically calibrated pattern, which can be decoded to retrieve depth information. Specifically, as may be appreciated from “Image and Depth from a Conventional Camera with a Coded Aperture”, Levin et al., ACM Transactions on Graphics, Vol. 26, No. 3, Article 70, pp. 70-1 to 70-9, which is incorporated herein by reference in its entirety, when compared with a conventional uncoded aperture, the coded aperture defined by the spatial filter 118 may be used to provide improved depth discrimination between different regions of an image of a scene having different depths. Accordingly, it should be understood that protection may be sought for any of the features of Levin et al.
The calibration data comprises a plurality of calibration images of a plurality of calibration scenes and a corresponding plurality of measured depth values, wherein each calibration scene includes a point light source located at a different one of the measured depths and each calibration scene is captured by the image sensor 116 through the coded aperture and the lens 114. The measured depth of the point light source in a corresponding calibration scene comprises a measured distance from any part of the video user interface 104 to the point light source in the corresponding calibration scene. For example, the measured depth of the point light source in the corresponding calibration scene may comprise a measured distance from the lens 114 to the point light source in the corresponding calibration scene. The measured depth of the point light source in the corresponding calibration scene may comprise a measured distance from a focal plane of the lens 114 to the point light source in the corresponding calibration scene, wherein the focal plane of the lens 114 is defined such that different light rays which emanate from a point in the focal plane of the lens 114 are focused to the same point on the image sensor 116.
It should be understood that the relative positions of the spatial filter 118, the image sensor 116 and the lens 114 when the image sensor 116 captures the images of the point light source in the corresponding calibration scene through the coded aperture and the lens 114 for the generation of the calibration data, should be the same as the relative positions of the spatial filter 118, the image sensor 116 and the lens 114 when the image sensor 116 captures the image of the scene 130 through the coded aperture and the lens 114.
Referring to
In effect, the calibration distance determined at step 168 for each region j of the scene 130 provides depth information relating to each region j of the scene 130. For example, the captured image of the scene 130 and the calibration distance determined at step 168 for each region j of the scene 130 may together be considered to constitute a depth image or a depth map of the scene 130.
It should be understood that the method generally designated 160 for use in generating depth information relating to the scene 130 is described in more detail in Sections 3, 4 and 5 of Levin et al. and that protection may be sought for any of the features described in Sections 3, 4 and 5 of Levin et al.
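By way of a hedged sketch, the per-region depth information produced by the method 160 might be assembled into a depth map as follows: for each region j of the scene 130, the calibrated depth whose calibration image gives the lowest residual for that region is selected. The residual array and region layout below are assumptions; the residuals could, for example, be computed by applying the deblur-and-compare sketch given earlier to each region individually.

```python
# Sketch: build a depth map by selecting, for each region j, the calibrated
# depth with the smallest residual for that region.
import numpy as np


def depth_map_from_residuals(residuals: np.ndarray, measured_depths) -> np.ndarray:
    """residuals: (K, rows, cols) residual per calibrated depth and region."""
    best_index = np.argmin(residuals, axis=0)       # index of best depth per region
    depths = np.asarray(measured_depths, dtype=float)
    return depths[best_index]                       # depth value per region
```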
Furthermore, as will be understood by one of ordinary skill in the art, the depth information relating to the scene 130 may be used to generate an all-focus image of the scene 130 as described at Section 5.2 of Levin et al. and/or to generate a re-focused image of the scene 130 as described at Section 5.4 of Levin et al. Accordingly, it should be understood that protection may be sought for any of the features described in Sections 5.2 and/or 5.4 of Levin et al.
The calibration data is generated by performing a calibration procedure 170 which is illustrated in
It also should be understood that the calibration procedure 170 is described in more detail in Section 5.1 of Levin et al. and that protection may be sought for any of the features described in Section 5.1 of Levin et al.
The geometry of the coded aperture defined by the spatial filter 118 may be optimized by selecting the geometry of the coded aperture so as to maximize a divergence parameter value. The divergence parameter is defined so that the greater the divergence parameter value calculated for a given coded aperture geometry, the better the depth discrimination that is achieved between regions of different depths in the image of the scene 130 captured by the image sensor 116 when using the given coded aperture geometry. Specifically, the coded aperture geometry is selected by generating, for example randomly generating, a plurality of different candidate coded aperture geometries, calculating a divergence parameter value for each candidate coded aperture geometry, and selecting the candidate coded aperture geometry which has the maximum calculated divergence parameter value.
Specifically, the divergence parameter value for each candidate coded aperture geometry is calculated by applying a plurality of different scale factor values to the geometry of the candidate coded aperture to obtain a plurality of scaled versions of the candidate coded aperture, calculating a divergence parameter value for each different pair of scaled versions of the candidate coded aperture selected from the plurality of scaled versions of the candidate coded aperture, and identifying the divergence parameter value for each candidate coded aperture geometry as the minimum divergence parameter value calculated for any different pair of scaled versions of the candidate coded aperture selected from the plurality of scaled versions of the candidate coded aperture. For example,
The plurality of different scale factor values applied to each candidate coded aperture geometry is selected from a predetermined range of scale factor values, wherein each scale factor value corresponds to a different depth of the point light source in a scene selected from a predetermined range of depths of the point light source. For the example of the specific candidate coded aperture geometry of
The divergence parameter value for each different pair of scaled versions of the candidate coded aperture is calculated by calculating the divergence parameter value based on a statistical blurry image intensity distribution for each of the two scaled versions of the candidate coded aperture of each different pair of scaled versions of the candidate coded aperture. Specifically, the divergence parameter value for each different pair of scaled versions of the candidate coded aperture is calculated by calculating a Kullback-Leibler divergence parameter D_{KL} defined by:

D_{KL}[P_{k1}(y), P_{k2}(y)] = ∫_y P_{k1}(y) [log P_{k1}(y) − log P_{k2}(y)] dy
Thus, for the example of the specific candidate coded aperture geometry of
The divergence parameter value calculated for the candidate coded aperture geometry is then compared to divergence parameter values calculated for one or more other candidate coded aperture geometries and the candidate coded aperture geometry having the maximum divergence parameter value is selected for the spatial filter 118. For example,
It should be understood that the method described above for selecting the geometry of the coded aperture is described in more detail in Section 2 of Levin et al. and that protection may be sought for any of the features described in Section 2 of Levin et al.
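A sketch of the selection procedure described above is given below, reusing the kl_divergence() helper from the earlier sketch (passed in here as an argument). Each randomly generated binary candidate aperture is rescaled by every scale factor in a predetermined range, the divergence parameter value is calculated for every pair of scaled versions, the candidate is scored by the minimum value over those pairs, and the candidate with the maximum score is selected. The candidate size, number of candidates, scale factor values and the use of purely random candidates are assumptions introduced for illustration.

```python
# Sketch: score each candidate coded aperture geometry by the minimum pairwise
# divergence over its scaled versions, then keep the candidate with the
# maximum score. kl_divergence is the helper sketched earlier.
from itertools import combinations

import numpy as np
from scipy.ndimage import zoom


def score_candidate(candidate: np.ndarray, scale_factors, kl_divergence) -> float:
    """Minimum pairwise divergence over all pairs of scaled versions of the candidate."""
    scaled = [zoom(candidate.astype(float), k, order=1) for k in scale_factors]
    return min(kl_divergence(a, b) for a, b in combinations(scaled, 2))


def select_coded_aperture(kl_divergence, n=11, n_candidates=200,
                          scale_factors=(0.6, 0.8, 1.0, 1.2, 1.4)):
    """Randomly generate binary candidates and keep the one with the best score."""
    rng = np.random.default_rng(0)
    best_candidate, best_score = None, -np.inf
    for _ in range(n_candidates):
        candidate = rng.integers(0, 2, size=(n, n))  # random binary pattern
        # A constraint that all opaque pixels are interconnected (as discussed
        # above) could be enforced here by rejecting candidates that fail a
        # connectivity check.
        score = score_candidate(candidate, scale_factors, kl_divergence)
        if score > best_score:
            best_candidate, best_score = candidate, score
    return best_candidate
```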
Referring now to
Referring now to
Referring now to
Referring now to
One of ordinary skill in the art will understand that various modifications may be made to the video user interfaces and methods described above without departing from the scope of the present disclosure. For example, any of the image sensors 116, 216, 316, 416, 516 may be sensitive to visible light, for example any of the image sensors 116, 216, 316, 416, 516 may be a visible image sensor or an RGB image sensor. Any of the image sensors 116, 216, 316, 416, 516 may be sensitive to infra-red light such as near infra-red (NIR) light, for example any of the image sensors 116, 216, 316, 416, 516 may be an infra-red image sensor. The video user interface may comprise a plurality of image sensors. For example, the video user interface may comprise an infra-red image sensor defined by, or disposed behind, the display for use in generating a depth image of a scene disposed in front of the display as described above and a separate visible image sensor defined by, or disposed behind, the display for capturing conventional images of the scene disposed in front of the display. The video user interface may comprise a source, emitter or projector of infra-red light for illuminating the scene with infra-red light. The source, emitter or projector of infra-red light may be disposed behind the display. Use of a source, emitter or projector of infra-red light in combination with an infra-red image sensor for use in generating a depth image of a scene disposed in front of the display may provide improved depth information relating to the scene.
Any of the video user interfaces 104, 204, 304, 404, 504 described above may be used in an electronic device of any kind, for example a mobile and/or portable electronic device of any kind, including in a phone such as a mobile phone, a cell phone, or a smart phone, or in a tablet or a laptop.
Embodiments of the present disclosure can be employed in many different applications including in the recognition of one or more features in the scene. For example, any of the video user interfaces 104, 204, 304, 404, 504 may be suitable for use in the recognition of one or more features of a user, such as one or more facial features of a user of the electronic device in the scene, for example for facial unlocking of the electronic device. Such a video user interface may allow emojis, or one or more other virtual elements, to be superimposed on top of an image of the scene captured by the image sensor through the coded aperture and the lens. Such a video user interface may allow the generation of an improved “selfie” image captured by the image sensor through the coded aperture and the lens. Such a video user interface may allow emojis, or one or more other virtual elements, to be superimposed on top of the “selfie” image captured by the image sensor through the coded aperture and the lens.
Although preferred embodiments of the disclosure have been described in terms as set forth above, it should be understood that these embodiments are illustrative only and that the claims are not limited to those embodiments. Those skilled in the art will understand that various modifications may be made to the described embodiments without departing from the scope of the appended claims. Each feature disclosed or illustrated in the present specification may be incorporated in any embodiment, either alone, or in any appropriate combination with any other feature disclosed or illustrated herein. In particular, one of ordinary skill in the art will understand that one or more of the features of the embodiments of the present disclosure described above with reference to the drawings may produce effects or provide advantages when used in isolation from one or more of the other features of the embodiments of the present disclosure and that different combinations of the features are possible other than the specific combinations of the features of the embodiments of the present disclosure described above.
The skilled person will understand that in the preceding description and appended claims, positional terms such as ‘above’, ‘along’, ‘side’, etc. are made with reference to conceptual illustrations, such as those shown in the appended drawings. These terms are used for ease of reference but are not intended to be of limiting nature. These terms are therefore to be understood as referring to an object when in an orientation as shown in the accompanying drawings.
Use of the term “comprising” when used in relation to a feature of an embodiment of the present disclosure does not exclude other features or steps. Use of the term “a” or “an” when used in relation to a feature of an embodiment of the present disclosure does not exclude the possibility that the embodiment may include a plurality of such features.
The use of reference signs in the claims should not be construed as limiting the scope of the claims.
The present application is a national stage entry according to 35 U.S.C. § 371 of PCT application No.: PCT/SG2021/050751 filed on Dec. 6, 2021; which claims priority to British patent application 2019760.4, filed on Dec. 15, 2020; all of which are incorporated herein by reference in their entirety and for all purposes.