This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2019-0006885, filed on Jan. 18, 2019, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
The disclosure relates generally to an encoding apparatus and method for reconstructing a three-dimensional (3D) image based on depth information of an object in a structured depth camera system.
3D technology is an image processing technology that produces a stereoscopic perception for the human eyes. Compared to two-dimensional (2D) images on a plane, images to which the 3D technology is applied (hereinafter, “3D images”) offer a person the sensory experience of viewing a real object. To render a 3D image, a depth is considered in addition to the factors (color and luminance) required to render a 2D image.
The 3D technology may find its use in accurate user face identification, realistic 3D avatars, virtual makeup, virtual dressing, 3D photography, gesture recognition, 3D content creation for virtual reality (VR), support of accurate and realistic augmented reality (AR), scene understanding, and 3D scanning.
A 3D image may be reconstructed by combining a picture (2D image) captured by an image sensor of a camera with depth information. The depth information may be measured or obtained by, for example, a 3D sensing camera. In general, 3D sensing may be achieved by using multiple cameras or a depth camera.
In a multiple camera-based scheme, visible light is captured by two or more cameras and depth information is measured based on the captured visible light. A depth camera-based scheme measures depth information using at least one projector and at least one camera. Structured light (SL) and time of flight (ToF) are the schemes mainly adopted for depth cameras.
The SL scheme was proposed to increase the computation accuracy of corresponding points between stereo images in a traditional stereo vision. A depth camera system adopting the SL scheme is referred to as a structured depth camera system.
In the SL scheme, a projector projects a pattern with some regularity onto an object (or a subject) or a scene (or imaging surface) to be restored in 3D, a camera captures the projected pattern, and correspondences between the pattern and the image are obtained.
That is, the projector may radiate (or project) light (or a laser beam) of a source pattern (or a radiation pattern). The camera may receive radiated light reflected from one or more objects, and a transformation between a recognition pattern (or a reflection pattern) and the source pattern is analyzed based on the received light, thereby obtaining depth information.
The transformation of the recognition pattern may be caused by the surface shape of a scene. The scene shape may be a presentation of the depth of, for example, an object or background existing in the scene of the camera.
In the ToF scheme, a time taken for light densely radiated by a projector to be reflected and received by a camera is measured and depth information is obtained based on the time measurement.
The SL scheme is feasible when the distance to an object is short (within 10 meters (m)) because its recognition rate is low for a far object. On the other hand, the ToF scheme is feasible for a far object (at a distance of 10 m or more) because it offers a higher recognition rate for a far object than the SL scheme.
In general, the SL scheme may obtain depth information by using any uniform pattern, a grid, or feature points for a simple scene, such as a plane or a scene with little depth fluctuation. However, the SL scheme may have limitations in estimating the depth of an object in a complex scene that includes objects with various or large depth differences.
The present disclosure has been made to address the above-mentioned problems and disadvantages, and to provide at least the advantages described below.
According to an aspect of the present disclosure, an apparatus for measuring a depth in a structured depth camera system including a projector and a camera for receiving light radiated by a source pattern from the projector and reflected from one or more objects in a scene is provided. The apparatus includes an interface unit configured to receive an electrical signal based on light received from the camera, and at least one processor configured to obtain depth information of the scene based on the electrical signal received from the interface unit. The at least one processor is configured to identify a recognition pattern corresponding to the light received by the camera based on the electrical signal, to obtain a total distance value between properties of a partial source pattern corresponding to a target fragment in the source pattern and properties of a partial recognition pattern corresponding to the target fragment in the recognition pattern, and to estimate depth information of the target fragment based on the obtained total distance value.
In accordance with another aspect of the present disclosure, a method of measuring a depth in a structured depth camera system including a projector and a camera for receiving light radiated by a source pattern from the projector and reflected from one or more objects in a scene is provided. The method includes identifying a recognition pattern corresponding to the light received by the camera based on an electrical signal, obtaining a total distance value between properties of a partial source pattern corresponding to a target fragment in the source pattern and properties of a partial recognition pattern corresponding to the target fragment in the recognition pattern, and estimating depth information of the target fragment based on the obtained total distance value.
The above and other aspects, features and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Various embodiments of the present disclosure are described with reference to the accompanying drawings. However, various embodiments of the present disclosure are not limited to particular embodiments, and it should be understood that modifications, equivalents, and/or alternatives of the embodiments described herein can be variously made. With regard to description of drawings, similar components may be marked by similar reference numerals.
An encoding apparatus and method for obtaining depth information of objects having various or large depth differences is provided. The encoding apparatus and method may be used for reconstructing a 3D image based on the obtained depth information in a structured depth camera system.
Additionally, an apparatus and method for providing redundant encoding of a pattern for a structured depth camera is provided.
In addition, an apparatus and method for correcting errors related to detecting and classifying feature points from a recognition pattern of a camera in a structured depth camera system is provided.
The terms “have”, “may have”, “include”, or “may include” may signify the presence of a feature (e.g., a number, a function, an operation, or a component, such as a part), but does not exclude the presence of one or more other features.
As used herein, the expressions “A or B”, “at least one of A and B”, “at least one of A or B”, “one or more of A and B”, and “one or more of A or B” may include any and all combinations of one or more of the associated listed items. Terms such as “A or B”, “at least one of A and B”, or “at least one of A or B” may refer to any and all of the cases where at least one A is included, where at least one B is included, or where both of at least one A and at least one B are included.
Terms such as “1st”, “2nd”, “first” or “second” may be used for the names of various components irrespective of sequence and/or importance. These expressions may be used to distinguish one component from another component. For example, a first user equipment (UE) and a second UE may indicate different UEs irrespective of sequence or importance. For example, a first component may be referred to as a second component and vice versa without departing from the scope of the disclosure.
It is to be understood that when an element (e.g., a first element) is referred to as being “operatively” or “communicatively” “coupled with”, “coupled to”, “connected with” or “connected to” another element (e.g., a second element), the element can be directly coupled with/to another element or coupled with/to another element via an intervening element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being “directly coupled with”, “directly coupled to”, “directly connected with” or “directly connected to” another element (e.g., a second element), it should be understood that there is no intervening element (e.g., a third element).
As used herein, the expressions “configured to” or “set to” may be interchangeably used with the expressions “suitable for”, “having the capacity to”, “designed to”, “adapted to”, “made to”, or “capable of”. The expressions “configured to” or “set to” should not be construed to only mean “specifically designed to” in hardware. Instead, the expression “a device configured to” may mean that the device is “capable of” operating together with another device or other components. For example, a “processor configured to perform A, B, and C” or “a processor set to perform A, B, and C” may mean a dedicated processor (e.g., an embedded processor) for performing a corresponding operation or a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor (AP)) which may perform corresponding operations by executing one or more software programs which are stored in a memory device.
The terms as used in the disclosure are provided to describe specific embodiments, but are not intended to limit the scope of other embodiments. Singular forms of terms and expressions may include plural referents unless the context clearly indicates otherwise. Technical or scientific terms and expressions used in the disclosure may have meanings as generally understood by those of ordinary skill in the art. In addition, terms and expressions, as generally defined in dictionaries, may be interpreted as having the same or similar meanings. Unless otherwise defined, terms and expressions should not be interpreted as having ideally or excessively formal meanings. Additionally, the terms and expressions, as defined in the disclosure, may not be interpreted as excluding embodiments of the disclosure.
Accordingly, a method of obtaining depth information of a target fragment by using the property values of feature points in the target fragment in a radiation pattern of a projector and using the property values of the feature points in the target fragment in a recognition pattern of a camera in a structured depth camera system may be provided.
Thus, a method of obtaining distance values between a source pattern and a recognition pattern, such that the accuracy of depth information of a background and one or more objects in a scene is improved in a structured depth camera system may be provided.
The accuracy of the total distance value of one fragment may be improved by adjusting property-based distance values between feature points of a source pattern corresponding to the fragment and feature points of a recognition pattern corresponding to the fragment, using weights.
Referring to
The projector 110 may be apart from the camera 120 by a predetermined distance (hereinafter, a “baseline”). The baseline may produce a parallax point between the projector 110 and the camera 120.
The projector 110 may radiate light by using a source pattern. The source pattern may be defined in real time or predefined in various manners. The source pattern may be defined, for example, by defining multiple feature points in the single scene 130 and assigning multiple property values to the respective feature points.
The light that the projector 110 has radiated by using the source pattern may be reflected from the background and the one or more objects P in the scene 130.
The camera 120 may receive the light reflected from the background and the one or more objects P in the scene 130. The light received at the camera 120 may have a recognition pattern. The camera 120 may convert the received light to an electrical signal or obtain the recognition pattern of the received light, and then output the signal or recognition pattern. The recognition pattern may have the multiple feature points defined in the scene 130 and each of the feature points may have multiple property values.
The light that the projector 110 has radiated by using the source pattern may be reflected at different reflection angles according to the depths of the background and the one or more objects P in the scene 130. For similar depths across the background in the scene, for example, the light may be reflected at a predictable reflection angle based on the parallax point. On the other hand, the object P having a large depth difference in the scene 130 may reflect the light at a reflection angle that is difficult to predict. The light reflected from the object P having a large depth difference in the scene 130 may affect other reflected light, causing an error in depth information which is to be estimated later.
A method of correcting an error that may occur in depth information is disclosed. The method is based on multiple properties of feature points included in a source pattern and a recognition pattern.
Referring to
The source pattern, which is an optical pattern of light radiated by the projector 110, may be different from the recognition pattern, which is an optical pattern captured by the camera 120. The pattern discrepancy may be caused by the parallax between a position in which the projector 110 radiates an optical pattern (source pattern) and a position in which the camera 120 captures an optical pattern (recognition pattern).
The source pattern used to radiate light by the projector 110 may be predetermined. The recognition pattern captured by the received light at the camera 120 may be analyzed based on the predetermined pattern. The analysis result may be used to measure the depths of the surfaces (i.e., a scene fragment) of the background and the one or more objects in the scene 130.
The optical patterns, which are the source pattern and the recognition pattern, may be generated by laser interference or projection. However, optical pattern generation is not limited to any particular scheme.
Referring to
The source pattern that the projector uses to radiate light may be reflected from a background and one or more objects in a scene and captured by the camera. The source pattern of the projector may not match the source pattern captured by the camera.
For example, the source pattern of the projector may be transformed to the recognition pattern captured by the camera in view of the parallax between the projector and the camera. The source pattern of the projector may include a plurality of first pattern fragments. The recognition pattern captured by the camera may include a plurality of second pattern fragments. The plurality of first pattern fragments may be reflected from fragments in the scene and transformed to the plurality of second pattern fragments.
For example, the source pattern is patterned in the pattern mask 170 and the recognition pattern is captured in the camera frame 180.
The recognition pattern captured by the camera may be transformed according to the depth of a corresponding scene fragment in view of a parallax effect at the position of the camera. Herein, the scene may refer to an area in which an optical pattern is reflected. Further, a scene fragment may refer to part of the scene. Accordingly, scene fragments may be specified according to the surfaces of the background and the one or more objects.
Referring to
The projector 410 may generate beams based on a source pattern P1. The beams may be projected onto a scene 440, a background, or an object. The scene 440 may refer to an area onto which the beams generated by the projector 410 are projected. The object may refer to an item existing in the scene. The background and the one or more objects in the scene may have different depths.
The beams generated by the projector 410 may be, but are not limited to, infrared (IR) beams, beams in various frequencies such as visible light, or ultraviolet (UV) light beams.
The camera 420 may receive the beams radiated based on the source pattern P1 and reflected from the scene 440, identify whether there is any reflected beam, and detect the brightness or intensity of the reflected beam. The camera 420 may convert the detected beam to an electrical signal and provide the electrical signal so that depth information of the scene is estimated. The camera 420 may detect a recognition pattern based on the received beam or the converted electrical signal and provide the detected recognition pattern so that the depth information of the scene is estimated.
The camera 420 may detect IR beams. However, the frequency band of a beam detectable by the camera 420 is not limited to an IR band. The camera 420 may be configured to detect a beam in the frequency band of the beams radiated from the projector 410. The camera 420 may be a 2D camera capable of detecting a signal in the frequency band of a reflected beam and capable of capturing a specific area (e.g., the area of the scene 440). The 2D camera may be configured as an array of photosensitive pixels. The photosensitive pixels of the 2D camera may be divided into pixels used to detect a ToF and pixels used to detect a reflected pattern. At least one of the photosensitive pixels in the 2D camera may be used to detect both the ToF and the reflected pattern.
The measuring device 430 may control operations of the projector 410 and the camera 420, and may obtain depth information of the background and/or the one or more objects in the scene 440 based on the reflected beam detected by the camera 420.
Referring to
The measuring device 500 may include an interface unit 510, a processor 520, and a memory 530. While the measuring device 500 is shown as including a single processor 520 in
The interface unit 510 may receive an electrical signal from light received by the camera. The electrical signal may be generated by converting light reflected from a background and/or one or more objects in a scene by the camera 420.
The at least one processor 520 may obtain the depth of the scene based on the electrical signal received from the interface unit 510.
The at least one processor 520 may identify a recognition pattern corresponding to the light received by the camera based on the electrical signal received through the interface unit 510. The at least one processor 520 may obtain a total distance value between properties of a partial source pattern corresponding to a target fragment in the source pattern and properties of a partial recognition pattern corresponding to the target fragment in the recognition pattern. The at least one processor 520 may estimate depth information of the target fragment based on the obtained total distance value.
The at least one processor 520 may identify per-property distance values of each feature point included in the target fragment based on the partial source pattern and the partial recognition pattern. The at least one processor 520 may obtain a total distance value by applying valid weight coefficients to the per-property distance values and summing the weighted per-property distance values of the respective feature points.
The at least one processor 520 may calculate a valid weight coefficient for a corresponding property (a jth property) by multiplying a normalization coefficient νj for the property (the jth property) by a distance-based weight πi for a corresponding feature point (an ith feature point) in a target fragment.
A first table that defines per-property distance values, a second table that defines per-property normalization coefficients, and a third table that defines distance-based weights for respective feature points included in one fragment may be recorded (stored) in the memory 530. The at least one processor 520 may obtain the total distance value based on information available from the memory 530, that is, the per-property distance values, the per-property normalization coefficients, and the distance-based weights of each feature point.
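The three tables described above may be held together in a simple container. A minimal sketch in Python follows, in which the table layouts and field names are assumptions rather than part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class FragmentDistanceTables:
    """Tables held in the memory 530 for obtaining a total distance value.

    The concrete layouts are illustrative assumptions:
    - property_distances: first table, mapping a property and a pair of
      property values to a per-property distance value
    - normalization: second table, normalization coefficient nu_j per property
    - distance_weights: third table, distance-based weight pi_i per feature point
    """
    property_distances: dict
    normalization: list
    distance_weights: list

# Example with the normalization coefficients named in the text
# (nu_1 = 1 for shape, nu_2 = 2.5 for color):
tables = FragmentDistanceTables(
    property_distances={("color", "green", "red"): 7},
    normalization=[1.0, 2.5],
    distance_weights=[1.0] * 9,  # one weight per feature point in a fragment
)
```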
Referring to
The identification of the recognition pattern may include identifying feature points included in the recognition pattern and multiple properties of each of the feature points. The feature points may be located in the recognition pattern corresponding to one fragment in which distance values are obtained for depth estimation. The feature points may correspond to intersection points of a center cell and cells arranged in the form of a chessboard in a chessboard-shaped recognition pattern.
The electronic device obtains a total distance value in step 620. The total distance value may be obtained by calculating a distance value for each pair of feature points in one-to-one correspondence, that is, two corresponding feature points between the source pattern and the recognition pattern, and summing the distance values of all pairs of feature points. The electronic device may consider valid weight coefficients in summing the calculated distance values of all pairs. The valid weight coefficient for a feature point may be used to determine the influence that the calculated distance value for the corresponding pair of feature points will have on the total distance value. The valid weight coefficients may be determined by taking into consideration the influence of a depth value or a probability of error occurrence. The influence of the depth value may be an influence when estimating the depth value. The probability of error occurrence may be a possibility that an error is generated in a pattern radiated by the projector and received by the camera.
A valid weight coefficient ωi,j may be defined as the product of a normalization coefficient νj for a corresponding property (a jth property) and a distance-based weight πi for a corresponding feature point (an ith feature point). The normalization coefficient νj may be configured as a unique value for each property, and the distance-based weight πi may be configured based on a distance from a center cell.
The source pattern may be a partial source pattern corresponding to a part of one fragment, and the recognition pattern may also be a partial recognition pattern corresponding to the part of the fragment.
Once the total distance value is obtained as described above, the electronic device estimates depth information of a background and/or one or more objects in the fragment or the scene based on the total distance value in step 630.
Referring to
Therefore, 12 combinations may be produced from the first property 701 and the second property 702, and thus 12 types of feature points are available for one fragment.
Referring to
According to one embodiment, a blue square first feature point 801 is located at the center vertex, and a yellow square second feature point 802 is located towards an upper left position of the first feature point 801. Third to ninth feature points may be located in a clockwise direction from the position of the second feature point. The third feature point 803 is a green circle, the fourth feature point 804 is a yellow square, the fifth feature point 805 is a green triangle, the sixth feature point 806 is a blue circle, the seventh feature point 807 is a yellow circle, the eighth feature point 808 is a red triangle, and the ninth feature point 809 is a green triangle.
In addition, if size is used as a third property, feature points included in one fragment may be distinguished by a predetermined number of different sizes. For example, if three different sizes are defined for the third property, 36 types in total may be defined to distinguish feature points from one another.
Referring to
According to an embodiment, a blue triangular first feature point 811 is located at the center, and a green circular second feature point 812 is located towards an upper left position of the first feature point 811. Third to ninth feature points may be located in a clockwise direction from the position of the second feature point 812. The third feature point 813 is a green square, the fourth feature point 814 is a yellow square, the fifth feature point 815 is a red square, the sixth feature point 816 is a blue triangle, the seventh feature point 817 is a green triangle, the eighth feature point 818 is a green triangle, and the ninth feature point 819 is a red circle.
If size is used as a third property, feature points included in one fragment may be distinguished by a predetermined number of different sizes. For example, if three different sizes are defined for the third property, 36 types in total may be defined to distinguish feature points from one another.
As described above, it may be assumed that the projector has radiated light by the source pattern illustrated in
Referring to
In
Referring to
If the feature point in the source pattern (or the feature point in the recognition pattern) is green and the feature point in the recognition pattern (or the feature point in the source pattern) is red, the difference δ2 between the feature points is defined as “7”. If the feature point in the source pattern (or the feature point in the recognition pattern) is green and the feature point in the recognition pattern (or the feature point in the source pattern) is blue, the difference δ2 between the feature points is defined as “5”. If the feature point in the source pattern (or the feature point in the recognition pattern) is blue and the feature point in the recognition pattern (or the feature point in the source pattern) is red, the difference δ2 between the feature points is defined as “6”.
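The pairwise color distances above may be kept in a small symmetric lookup table. The following sketch in Python covers only the color pairs stated in the text, and the distance of 0 for identical colors is an assumption:

```python
# Pairwise distance values delta_2 for the color property, as given in the
# text: green-red = 7, green-blue = 5, blue-red = 6.
COLOR_DISTANCE = {
    ("green", "red"): 7,
    ("green", "blue"): 5,
    ("blue", "red"): 6,
}

def color_distance(a: str, b: str) -> int:
    """Distance value between two color property values (symmetric)."""
    if a == b:
        return 0  # assumed: identical colors have zero distance
    return COLOR_DISTANCE.get((a, b)) or COLOR_DISTANCE.get((b, a))
```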
In
For each of the first property 701 (shape) for which the distance values are defined in
A normalization coefficient νj for a jth property may be assigned based on the likelihood of the property being changed. That is, the normalization coefficient νj may, for example, lead to a higher weight being assigned for a property which is less likely to be changed, and a lower weight being assigned for a property which is more likely to be changed.
Accordingly, the normalization coefficient ν1 for the first property 701 (shape) may be assigned as “1” and the normalization coefficient ν2 for the second property 702 (color) may be assigned as “2.5”. This is based on the premise that the second property 702 (color) is less likely to experience an error caused by change or distortion than the first property 701 (shape). That is, it is determined that the color property is less likely to be altered (changed) than the shape property due to light reflecting from an object.
Referring to
Considering that a smaller distance is more likely to affect a difference value, weights are assigned based on distances in the above manner. Further, even when the number of feature points included in one fragment increases, distance-based weights may be similarly assigned.
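The distance-based weighting may be sketched as follows. The concrete weight values are illustrative assumptions; the text only requires that feature points closer to the center receive larger weights:

```python
def distance_based_weight(dx: int, dy: int) -> float:
    """Distance-based weight pi_i for a feature point at offset (dx, dy)
    from the center of a fragment. The values 1.0 / 0.5 / 0.25 are
    illustrative; only their ordering (closer means larger) follows the text."""
    if dx == 0 and dy == 0:
        return 1.0   # the center feature point itself
    if abs(dx) + abs(dy) == 1:
        return 0.5   # directly above, under, to the left, or to the right
    return 0.25      # diagonal neighbors and farther feature points
```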
Accordingly, based on
The 9 feature points of the source pattern are then sequentially selected, and the feature points at the same positions in the recognition pattern are selected. The difference values between the first property 701 of the feature points selected in the source pattern and the first property 701 of the feature points selected in the recognition pattern may be calculated based on the difference values defined for the first property 701, as illustrated in
The first property 701 of a feature point 802 (the feature point in the upper left position of the center feature point) is square in the source pattern of
Accordingly, the first property 701 difference values of all feature points may be identified as listed in Table 1, below. In Table 1, the feature points are indexed sequentially, starting from the upper left position of
The second property 702 of the feature point 802 (the feature point in the upper left position of the center feature point) is yellow in the source pattern of
Accordingly, the second property 702 difference values of all feature points may be identified as listed in Table 2, below. In Table 2, the feature points are indexed sequentially, starting from the upper left position.
The total distance value of the fragment 800 may be obtained by adding the sum of the identified first property 701 distance values of the respective feature points and the sum of the identified second property 702 distance values of the respective feature points. In obtaining the total distance value, a valid weight coefficient ωi,j may be assigned to each feature point, for each property. A valid weight coefficient may be obtained, for example, from a normalization coefficient νj assigned to a property and a distance-based weight πi assigned to a feature point.
A total distance value Δ between a source pattern and a recognition pattern for one fragment may be obtained by using Equation (1), below.
Δ = Σi Σj δi,j × ωi,j    (1)
Here, i represents the index of a feature point in the fragment, j represents the index of a property, δi,j represents the distance value of an ith feature point for a jth property, and ωi,j represents the valid weight coefficient of the ith feature point for the jth property. The index i applies equally to the source pattern and the recognition pattern.
The valid weight coefficient ωi,j of the ith feature point for the jth property in Equation (1) may be calculated by using Equation (2), below.
ωi,j = νj × πi    (2)
Here, νj is a normalization coefficient for the jth property and πi is a distance-based weight for the ith feature point.
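Equations (1) and (2) may be combined into a short function. A sketch in Python follows, where the argument layouts (lists indexed by feature point i and property j) are assumptions:

```python
def total_distance(delta, nu, pi):
    """Total distance value Delta for one fragment, per Equations (1)-(2):
    omega[i][j] = nu[j] * pi[i], and Delta is the weighted sum of the
    per-property distance values delta[i][j] over all feature points i
    and properties j."""
    return sum(
        delta[i][j] * nu[j] * pi[i]
        for i in range(len(pi))
        for j in range(len(nu))
    )

# Hypothetical distance values for two feature points and two properties,
# with the normalization coefficients nu_1 = 1 and nu_2 = 2.5 from the text:
result = total_distance(
    delta=[[1, 2], [3, 4]],
    nu=[1.0, 2.5],
    pi=[1.0, 0.5],
)
```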
The normalization coefficient νj for the jth property may be assigned based on a possible change in the property. That is, a relatively high weight may be assigned to a property having a low likelihood of change for the normalization coefficient νj, and a relatively low weight may be assigned to a property having a high likelihood of change for the normalization coefficient νj.
For example, the normalization coefficient ν1 for the first property 701 (shape) may be assigned as “1”, whereas the normalization coefficient ν2 for the second property 702 (color) may be assigned as “2.5”. The assignment is based on the premise that the second property 702 (color) is less likely to experience errors caused by change or variation than the first property 701 (shape). That is, it is determined that light reflection from an object leads to a lower likelihood of alteration in the second property 702 (color) than in the first property 701 (shape).
The total distance value obtained in the manner as proposed above may be as listed in Table 3, below.
According to Table 3, a total distance value Δ of 48.625 may be obtained by summing the per-property distance values of the respective feature points.
Thus, the accuracy of determining distance values representing the depths of a background and one or more objects in a scene may be improved.
A method of detecting a source pattern, corresponding to a known fragment, from a recognition pattern, corresponding to a whole scene, is described below.
Referring to
Referring to
According to an embodiment, 3-level brightness (black, gray and white) is used as a property for feature points. Further, 2-level brightness (black and gray) is used as a property for cells. The properties of the feature points and cells should be used such that each feature point or cell may be distinguished from neighboring feature points or cells. A center cell at the center of the pattern may also be used as a feature point. In this case, the feature point property may be used for the center cell, which will be treated as a feature point.
Referring to
According to an embodiment, a second highest weight P1 is assigned to feature points corresponding to the vertices of the center cell, a third highest weight P2 is assigned to feature points corresponding to the vertices of cells above, under, to the left, and to the right of the center cell, and a lowest weight P3 is assigned to the remaining vertices of cells in diagonal directions to the center cell.
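The position-based weighting above can be sketched for a small vertex grid. This is an illustrative assumption: the 4×4 vertex grid (3×3 cells) and the numeric values of P1, P2, and P3 are not from the disclosure; only the ordering (center-cell vertices highest, diagonal-cell vertices lowest) follows the text.

```python
# Illustrative weight assignment for vertices of a 4 x 4 vertex grid (3 x 3 cells).
# Weight values are hypothetical; only their ordering follows the description.

P1, P2, P3 = 3, 2, 1  # center-cell vertices get the highest of these weights

def vertex_weight(row, col, n=4):
    """Weight for the vertex at (row, col) in an n x n vertex grid."""
    center = (n - 1) / 2.0
    # count coordinates lying on the outer border of the grid
    on_border = sum(1 for c in (row, col) if abs(c - center) == center)
    if on_border == 0:
        return P1  # vertex of the center cell
    if on_border == 1:
        return P2  # vertex of a cell above, under, left, or right of the center cell
    return P3      # vertex of a cell diagonal to the center cell

weights = [[vertex_weight(r, c) for c in range(4)] for r in range(4)]
for row in weights:
    print(row)
```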
Referring to
For example, an identification index “0” may be assigned to the center cell, an identification index “1” may be assigned to a feature point in the upper left position of the center cell, and identification indexes “2”, “3” and “4” may sequentially be assigned to feature points at the same distance level in a clockwise direction.
Identification indexes “5” and “6” may then be assigned to two feature points above the center cell, respectively, indexes “7” and “8” may be assigned to two feature points to the right of the center cell, respectively, indexes “9” and “10” may be assigned to two feature points under the center cell, respectively, and indexes “11” and “12” may be assigned to two feature points to the left of the center cell, respectively.
Subsequently, an identification index “13” may be assigned to the feature point at the upper left corner of the pattern, an identification index “14” may be assigned to the feature point at the upper right corner of the pattern, an identification index “15” may be assigned to the feature point at the lower right corner of the pattern, and an identification index “16” may be assigned to the feature point at the lower left corner of the pattern.
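The indexing scheme above can be captured as a lookup table. The vertex coordinates below (rows/columns 0..3 on a 4×4 vertex grid) are an assumed layout for illustration; the disclosure specifies only the assignment order, not concrete coordinates.

```python
# Hypothetical coordinate layout for the identification-index scheme;
# positions on the 4 x 4 vertex grid are assumptions, the ordering is from the text.

index_to_vertex = {
    1: (1, 1), 2: (1, 2), 3: (2, 2), 4: (2, 1),     # center-cell vertices, clockwise
    5: (0, 1), 6: (0, 2),                           # two feature points above
    7: (1, 3), 8: (2, 3),                           # two feature points to the right
    9: (3, 2), 10: (3, 1),                          # two feature points under
    11: (2, 0), 12: (1, 0),                         # two feature points to the left
    13: (0, 0), 14: (0, 3), 15: (3, 3), 16: (3, 0), # pattern corners, clockwise
}
# Index 0 denotes the center cell itself rather than a vertex.
print(len(index_to_vertex))  # 16 indexed feature points plus the center cell
```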
Referring to
Referring to
In
In
Total distance values for the five candidate patterns are calculated based on the known pattern of
It may be noted from Table 4 that, among the five candidate patterns, the candidate pattern Key-point E is closest in features to the known pattern.
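Selecting the closest candidate reduces to taking the minimum total distance. In this sketch the candidate names and distance values are placeholders, not the actual Table 4 data; only the selection rule (smallest total distance wins) follows the text.

```python
# Placeholder total distance values for a few candidate patterns.
candidates = {
    "Key-point A": 61.0,
    "Key-point B": 55.5,
    "Key-point E": 48.625,
}

# The candidate with the smallest total distance is closest to the known pattern.
best = min(candidates, key=candidates.get)
print(best)  # Key-point E
```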
The property values of the feature points which are not identifiable in the pattern of
? | ? | ? | 2 | 1 | 2
In Table 5, distance values Δ for the identification indexes of 8, 13, and 16 (corresponding to the identification indexes of
For example, since the property value of a target feature point for the identification index of 8 (corresponding to the identification indexes of
Since the property value of a target feature point for the identification index of 13 (corresponding to the identification indexes of
Finally, since the property value of a target feature point for the identification index of 16 (corresponding to the identification indexes of
As is apparent from the foregoing description, the classification error rate of a structured depth camera system may be decreased simply by modifying a source pattern in a projector. That is, the number of accurately reconfigured 3D points may be increased, and holes and artifacts may be decreased.
While the present disclosure has been particularly shown and described with reference to certain embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
10-2019-0006885 | Jan 2019 | KR | national

Number | Name | Date | Kind
---|---|---|---
8326020 | Lee et al. | Dec 2012 | B2
9270386 | Bar-On | Feb 2016 | B2
9635339 | Campbell et al. | Apr 2017 | B2
20120242829 | Shin et al. | Sep 2012 | A1
20140056508 | Lee et al. | Feb 2014 | A1
20140293009 | Nakazato | Oct 2014 | A1
20170353326 | Hashiura et al. | Dec 2017 | A1
20180321384 | Lindner et al. | Nov 2018 | A1
20190011721 | Gordon | Jan 2019 | A1
20190017814 | Je et al. | Jan 2019 | A1
20190340776 | Nash | Nov 2019 | A1

Number | Date | Country
---|---|---
3 293 481 | Mar 2018 | EP
10-2013-0035291 | Apr 2013 | KR
10-2015-0041901 | Apr 2015 | KR
10-2015-0101749 | Sep 2015 | KR
10-2018-0126475 | Nov 2018 | KR

Entry
---
International Search Report dated Mar. 17, 2020 issued in counterpart application No. PCT/KR2019/016953, 8 pages.

Number | Date | Country
---|---|---
20200234458 A1 | Jul 2020 | US