The invention relates to a generation method for a stereoscopic image, a displaying method and an electronic apparatus, and more particularly, to a generation method for a multi-view auto-stereoscopic image, a displaying method and an electronic apparatus.
When human eyes view an object, the left eye and the right eye respectively capture views of the same object, wherein a disparity exists between the image captured by the left eye and the image captured by the right eye, and thereby the brain is able to fuse the images captured by the left and right eyes to form a stereoscopic image based on the disparity.
A conventional generation method for an auto-stereoscopic image, and a display device therewith, utilize a pair of image capturing devices slightly spaced apart to capture a left view and a right view of the same object, wherein a disparity exists between the left view and the right view. Next, the left view and the right view are interlaced to render an interlaced image, wherein a lenticular lens layer refracts the left-view components of the interlaced image to the left eye and the right-view components of the interlaced image to the right eye, respectively. Therefore, by fusing the left-view components refracted to the left eye with the right-view components refracted to the right eye, the brain perceives a realistic stereoscopic image of the object.
To achieve a realistic stereoscopic image of the object, a conventional display device requires at least two cameras to capture the left view and the right view of the object. To achieve a multi-view auto-stereoscopic image, additional cameras corresponding to the multiple viewpoints are required to capture a corresponding number of views. As the number of viewpoints increases, the number of required cameras increases, which raises cost and image-processing time. Besides, the conventional pair of cameras must be arranged along the orientation from the left eye to the right eye so that a comprehensive stereoscopic image can be rendered. That is, if the cameras are arranged along any other orientation, the disparity information resulting from the captured views cannot be comprehended by the brain to form a stereoscopic impression, which restricts the utility of the display device.
It is further noticed that the interlacing procedure adopted by the conventional display device can only segment an image along a direction parallel to an edge of the image. Since the segmentation direction must be parallel to the orientation of the lenticular lenses for the interlaced image to be properly refracted to the eyes, the conventional display device is restricted to a portrait displaying mode for the eyes to perceive the refracted interlaced image properly, which is also unfavorable to the utility of the display device.
To solve the aforementioned problem, the invention discloses a generation method for a multi-view auto-stereoscopic image which includes a first image capturing device capturing a first view, a second image capturing device capturing a second view, a processing unit computing a disparity between the first view and the second view, the processing unit disregarding the second view, the processing unit generating a first disparity map based on the first view and the disparity, and the processing unit rendering N first disparity images based on the first disparity map along a rendering direction. N is a positive integer.
According to an embodiment of the invention, the processing unit rendering N first disparity images includes the processing unit computing a virtual disparity along the rendering direction based on the first disparity map, and the processing unit computing the N first disparity images based on the first disparity map and the virtual disparity.
According to an embodiment of the invention, each of the N first disparity images includes a plurality of valid pixels and a plurality of holes. The generation method of the invention further includes the processing unit generating an image processing window including at least one part of the plurality of valid pixels and at least one part of the plurality of holes, and the processing unit filling the at least one part of the plurality of holes based on the at least one part of the plurality of valid pixels. The at least one part of the plurality of valid pixels is beside the at least one part of the plurality of holes, and the at least one part of the plurality of holes is adjacent to a window edge of the image processing window.
According to an embodiment of the invention, the generation method of the invention further includes the processing unit rendering M second disparity images based on the Nth first disparity image along a reverse rendering direction opposite to the rendering direction. M is a positive integer and equals N, and the Mth second disparity image is substantially identical to the first view. The processing unit rendering M second disparity images based on the Nth first disparity image along the reverse rendering direction includes the processing unit computing a reverse virtual disparity along the reverse rendering direction, and the processing unit computing the M second disparity images based on the Nth first disparity image and the reverse virtual disparity.
According to an embodiment of the invention, each of the M second disparity images includes a plurality of valid pixels and a plurality of holes. The generation method of the invention further includes the processing unit generating an image processing window including at least one part of the plurality of valid pixels and at least one part of the plurality of holes, and the processing unit filling the at least one part of the plurality of holes based on the at least one part of the plurality of valid pixels. The at least one part of the plurality of valid pixels is beside the at least one part of the plurality of holes, and the at least one part of the plurality of holes is adjacent to a window edge of the image processing window.
According to an embodiment of the invention, the generation method of the invention further includes the processing unit segmenting the first view into a first view strip set along a segmenting direction obliquely intersecting an edge of the first view, the processing unit respectively segmenting the N first disparity images into N first disparity image strip sets along the segmenting direction, the processing unit respectively segmenting the M second disparity images into M second disparity image strip sets along the segmenting direction, and the processing unit interlacing the first view strip set, the N first disparity image strip sets, and the M second disparity image strip sets for rendering a display image.
According to an embodiment of the invention, the generation method of the invention further includes the processing unit rendering P third disparity images based on the Mth second disparity image along the reverse rendering direction. P is a positive integer. The processing unit rendering the P third disparity images based on the Mth second disparity image along the reverse rendering direction includes the processing unit computing a reverse virtual disparity along the reverse rendering direction, and the processing unit computing the P third disparity images based on the Mth second disparity image and the reverse virtual disparity.
According to an embodiment of the invention, each of the P third disparity images includes a plurality of valid pixels and a plurality of holes. The generation method of the invention further includes the processing unit generating an image processing window including at least one part of the plurality of valid pixels and at least one part of the plurality of holes, and the processing unit filling the at least one part of the plurality of holes based on the at least one part of the plurality of valid pixels. The at least one part of the plurality of valid pixels is beside the at least one part of the plurality of holes, and the at least one part of the plurality of holes is adjacent to a window edge of the image processing window.
According to an embodiment of the invention, the generation method of the invention further includes the processing unit rendering Q fourth disparity images based on the Pth third disparity image along the rendering direction. Q is a positive integer and equals P. The Qth fourth disparity image is substantially identical to the first view. The processing unit rendering Q fourth disparity images based on the Pth third disparity image along the rendering direction includes the processing unit computing a virtual disparity along the rendering direction, and the processing unit computing the Q fourth disparity images based on the Pth third disparity image and the virtual disparity.
According to an embodiment of the invention, each of the Q fourth disparity images comprises a plurality of valid pixels and a plurality of holes. The generation method of the invention further includes the processing unit generating an image processing window including at least one part of the plurality of valid pixels and at least one part of the plurality of holes, and the processing unit filling the at least one part of the plurality of holes based on the at least one part of the plurality of valid pixels. The at least one part of the plurality of valid pixels is beside the at least one part of the plurality of holes, and the at least one part of the plurality of holes is adjacent to a window edge of the image processing window.
According to an embodiment of the invention, the generation method of the invention further includes the processing unit segmenting the first view into a first view strip set along a segmenting direction obliquely intersecting an edge of the first view, the processing unit respectively segmenting the N first disparity images into N first disparity image strip sets along the segmenting direction, the processing unit respectively segmenting the M second disparity images into M second disparity image strip sets along the segmenting direction, the processing unit respectively segmenting the P third disparity images into P third disparity image strip sets along the segmenting direction, the processing unit respectively segmenting the Q fourth disparity images into Q fourth disparity image strip sets along the segmenting direction, and the processing unit interlacing the first view strip set, the N first disparity image strip sets, the M second disparity image strip sets, the P third disparity image strip sets, and the Q fourth disparity image strip sets for rendering a display image.
According to an embodiment of the invention, the generation method of the invention further includes the processing unit computing a depth of the first disparity map by analyzing the first disparity map, and the processing unit normalizing the depth with respect to a depth displaying allowance of a display device for allowing the depth of the first disparity map to be compatible with the depth displaying allowance of the display device.
The invention further discloses a displaying method for a multi-view auto-stereoscopic image which includes providing a first disparity image and at least one second disparity image, a processing unit segmenting the first disparity image into a first disparity image strip set along a segmenting direction obliquely intersecting an edge of the first disparity image, the processing unit segmenting the at least one second disparity image into at least one second disparity image strip set along the segmenting direction, the processing unit interlacing the first disparity image strip set and the at least one second disparity image strip set for rendering a display image, and the processing unit controlling a display device to display the display image. The segmenting direction is decomposed into a first component along a first direction and a second component along a second direction perpendicular to the first direction, and each of the first component and the second component is greater than zero. The display device has a portrait displaying mode and a landscape displaying mode. The segmenting direction includes a portrait direction and a landscape direction substantially perpendicular to the portrait direction. When the display device displays the display image at the portrait displaying mode, the processing unit segments the first disparity image and the at least one second disparity image along the portrait direction. When the display device displays the display image at the landscape displaying mode, the processing unit segments the first disparity image and the at least one second disparity image along the landscape direction.
According to an embodiment of the invention, the displaying method of the invention further includes providing an oblique lenticular lens layer including a plurality of oblique lenticular lenses, and disposing the oblique lenticular lens layer on the display device with the orientation of each of the plurality of oblique lenticular lenses substantially parallel to the segmenting direction. An orientation of each of the plurality of oblique lenticular lenses obliquely intersects an edge of the oblique lenticular lens layer.
The invention further discloses an electronic apparatus which includes a display device for displaying a display image, and an oblique lenticular lens layer disposed on the display device and including a plurality of oblique lenticular lenses. The display image is a multi-view auto-stereoscopic image, and the display image includes a first disparity image strip set and at least one second disparity image strip set. The first disparity image strip set and the at least one second disparity image strip set obliquely intersect an edge of the display image. An orientation of each of the plurality of oblique lenticular lenses obliquely intersects an edge of the oblique lenticular lens layer. An orientation of the first disparity image strip set and an orientation of the at least one second disparity image strip set are substantially parallel to the orientation of each of the plurality of oblique lenticular lenses. The display device has a portrait displaying mode and a landscape displaying mode. When the display device displays the display image at the portrait displaying mode, the orientation of the first disparity image strip set and the orientation of the at least one second disparity image strip set are substantially parallel to a portrait direction. When the display device displays the display image at the landscape displaying mode, the orientation of the first disparity image strip set and the orientation of the at least one second disparity image strip set are substantially parallel to a landscape direction. The portrait direction is substantially perpendicular to the landscape direction.
According to the generation method for a multi-view auto-stereoscopic image of the invention, after the disparity information has been computed based on the raw first view and second view, all raw information other than the first view can be disregarded; only the first view and the computed disparity information are required to render a plurality of disparity images via a recursive algorithm for fusing the multi-view auto-stereoscopic image. That is, the generation method requires only a pair of image capturing devices to capture views for computing the disparity information, and further requires only one of the captured views for fusing the stereoscopic image. This provides convenience in photographing, since the rendering direction adopted for the disparity information is independent of the installation orientation of the image capturing devices. Therefore, the invention not only saves equipment cost and image-processing time but also improves convenience in photographing. Besides, the electronic apparatus with the generation method and the displaying method of the invention can obliquely segment an image for interlacing, and the interlaced image can be displayed on the display device with the compatible oblique lenticular lens layer. Therefore, by utilizing the generation method and the displaying method, the display device of the electronic apparatus can be disposed at an arbitrary installation orientation, including the portrait displaying mode and the landscape displaying mode, to effectively present the stereoscopic image, which greatly improves the utility of the display device.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. In the following discussion and claims, the system components are differentiated not by their names but by their function and structure differences. In the following discussion and claims, the terms “include” and “comprise” are used in an open-ended fashion and should be interpreted as “include but is not limited to”. Also, the term “couple” or “link” is intended to mean either an indirect or a direct mechanical or electrical connection. Thus, if a first device is coupled or linked to a second device, that connection may be through a direct mechanical or electrical connection, or through an indirect mechanical or electrical connection via other devices and connections.
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., is used with reference to the orientation of the Figure(s) being described. The components of the present invention can be positioned in a number of different orientations. As such, the directional terminology is used for purposes of illustration and is in no way limiting. Accordingly, the drawings and descriptions will be regarded as illustrative in nature and not as restrictive.
Please refer to
As shown in
Please refer to
Step 100: the first image capturing device 98 capturing the first view 1;
Step 110: the second image capturing device 99 capturing the second view 2;
Step 120: the processing unit 90 computing a disparity D0 between the first view 1 and the second view 2;
Step 129: the processing unit 90 disregarding the second view 2;
Step 130: the processing unit 90 generating a first disparity map 3 based on the first view 1 and the disparity D0;
Step 140: the processing unit 90 rendering N first disparity images 11 based on the first disparity map 3 along a rendering direction X1, where N is a positive integer.
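Steps 120 through 140 above can be sketched in code. The following Python sketch is illustrative only: the names `warp_by_disparity` and `render_first_disparity_images` are hypothetical, and the choice of virtual disparity as D0/N is an assumption for the sketch (the patent only requires the virtual disparity to follow the rendering direction X1).

```python
import numpy as np

def warp_by_disparity(view, disparity):
    """Shift each pixel horizontally by its per-pixel disparity.

    Target positions that no source pixel lands on stay at -1; these
    are the 'holes' addressed by the later hole-filling step."""
    h, w = view.shape
    out = np.full((h, w), -1, dtype=int)
    for y in range(h):
        for x in range(w):
            nx = x + int(disparity[y, x])
            if 0 <= nx < w:
                out[y, nx] = view[y, x]
    return out

def render_first_disparity_images(first_view, d0, n):
    """Steps 130-140 sketch: once the disparity D0 is computed, the
    second view is no longer needed; each of the N first disparity
    images is rendered from its predecessor using a virtual disparity,
    assumed here to be D0/N so that the Nth image carries roughly the
    full disparity D0."""
    virtual = np.round(d0 / n).astype(int)  # virtual disparity (assumption)
    images, current = [], first_view
    for _ in range(n):
        current = warp_by_disparity(current, virtual)
        images.append(current)
    return images
```

A real implementation would use a dense stereo matcher to obtain D0 and a sub-pixel warp; the integer shift above only illustrates how holes open at disoccluded positions.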
Please refer to
Next, the processing unit 90 computes the disparity D0 based on the difference between the first view 1 and the second view 2 (Step 120) and then generates the first disparity map 3 (as shown in
After the first disparity map 3 is generated, the processing unit 90 computes the N first disparity images 11 based on the first disparity map 3 along the rendering direction X1, wherein N is a positive integer (Step 140). The processing unit 90 only requires the first view 1 and the disparity D0 for generating the first disparity map 3 and the subsequent disparity images, such as the N first disparity images 11. As a result, the second view 2 and any information related to the second view 2 can be disregarded (Step 129) after the disparity D0 is computed. In practical application, after the disparity D0 is computed, the processing unit 90 can perform the subsequent algorithms without using the second view 2 or any information related to the second view 2.
Please refer to
Repeating the abovementioned process, the processing unit 90 applies the virtual disparity D1 to the first one of the N first disparity images 11 for rendering the second one of the N first disparity images 11 and then applies the virtual disparity D1 to the second one of the N first disparity images 11 for rendering the third one of the N first disparity images 11 and so on until the Nth one of the N first disparity images 11 is rendered. The Nth one of the N first disparity images 11 can be regarded as a terminal image along the rendering direction X1 starting from the first disparity map 3 (or the first view 1). As shown in
Please refer to FIG. 18.
Step 200: the processing unit 90 rendering the M second disparity images 12 based on the Nth first disparity image 11 along a reverse rendering direction X2 opposite to the rendering direction X1, where M is a positive integer and equals N, and the Mth second disparity image 12 is substantially identical to the first view 1;
Step 230: the processing unit 90 segmenting the first view 1 into a first view strip set 19 along a segmenting direction Y obliquely intersecting an edge 15 of the first view 1;
Step 240: the processing unit 90 respectively segmenting the N first disparity images 11 into N first disparity image strip sets 61 along the segmenting direction Y;
Step 250: the processing unit 90 respectively segmenting the M second disparity images 12 into M second disparity image strip sets 62 along the segmenting direction Y;
Step 260: the processing unit 90 interlacing the first view strip set 19, the N first disparity image strip sets 61 and the M second disparity image strip sets 62 for rendering a display image 8.
Please refer to
It should be noticed that the reverse virtual disparity D2 is opposite in direction to the virtual disparity D1. Therefore, if the (N-1)th and the Nth of the N first disparity images 11 are respectively regarded as views viewed by the human's left and right eyes, the Nth of the N first disparity images 11 and the first of the M second disparity images 12 can be regarded as views viewed by the human's right and left eyes, as shown in
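The reverse rendering of Step 200 can be sketched as follows. This is a simplified illustration, not the claimed method: `shift_columns` assumes a uniform integer disparity per image, and the function names are hypothetical.

```python
import numpy as np

def shift_columns(img, d):
    """Uniform-disparity warp: move every pixel d columns to the right
    (or left for negative d); vacated positions become -1 holes."""
    h, w = img.shape
    out = np.full((h, w), -1, dtype=int)
    for x in range(w):
        nx = x + d
        if 0 <= nx < w:
            out[:, nx] = img[:, x]
    return out

def render_second_disparity_images(nth_image, d1, m):
    """Step 200 sketch: the reverse virtual disparity D2 is the
    negation of the virtual disparity D1, so applying it M times to
    the Nth first disparity image walks the viewpoint back until the
    Mth second disparity image approximates the first view."""
    images, current = [], nth_image
    for _ in range(m):
        current = shift_columns(current, -d1)  # D2 opposes D1
        images.append(current)
    return images
```

Warping forward and then back restores the interior content while leaving a residual hole column, which is why the Mth second disparity image is only substantially, not exactly, identical to the first view.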
Please refer to
Step 300: the processing unit 90 rendering the P third disparity images 13 based on the Mth of the M second disparity images 12 along the reverse rendering direction X2, where P is a positive integer and the reverse rendering direction X2 is opposite to the rendering direction X1;
Step 400: the processing unit 90 rendering Q fourth disparity images 14 from the Pth of the P third disparity images 13 along the rendering direction X1, where Q is a positive integer equal to P and the Qth of the fourth disparity images 14 is substantially identical to the first view 1;
Step 430: the processing unit 90 segmenting the first view 1 into a first view strip set 19 along a segmenting direction Y obliquely intersecting an edge 15 of the first view 1;
Step 440: the processing unit 90 respectively segmenting the N first disparity images 11 into N first disparity image strip sets 61 along the segmenting direction Y;
Step 450: the processing unit 90 respectively segmenting the M second disparity images 12 into M second disparity image strip sets 62 along the segmenting direction Y;
Step 460: the processing unit 90 respectively segmenting the P third disparity images 13 into P third disparity image strip sets 63 along the segmenting direction Y;
Step 470: the processing unit 90 respectively segmenting the Q fourth disparity images 14 into Q fourth disparity image strip sets 64 along the segmenting direction Y;
Step 480: the processing unit 90 interlacing the first view strip set 19, the N first disparity image strip sets 61, the M second disparity image strip sets 62, the P third disparity image strip sets 63, and the Q fourth disparity image strip sets 64 for rendering a display image 8.
Please refer to
In the process of computing the P third disparity images 13, the processing unit 90 computes a reverse virtual disparity D2 along the reverse rendering direction X2 and applies the computed reverse virtual disparity D2 to the Mth one of the M second disparity images 12 to sequentially render the P third disparity images 13. Next, the processing unit 90 computes a virtual disparity D1 along the rendering direction X1 and applies the computed virtual disparity D1 to the Pth one of the P third disparity images 13 to render the Q fourth disparity images 14. The process of generating the P+Q disparity images (as shown in
Since the Qth of the Q fourth disparity images 14 is substantially identical to the first disparity map 3 (or the first view 1), the recursive algorithm can be regarded as completed after the Qth one of the Q fourth disparity images 14 is rendered. That is, according to the embodiment of the invention, the abovementioned rendering process from the Mth one of the M second disparity images 12 to the Qth one of the Q fourth disparity images 14 can be regarded as a cycle of the recursive algorithm, and the rendering process for the N+M+P+Q disparity images can be regarded as a bilateral recursive algorithm. The bilateral recursive algorithm and the unilateral recursive algorithm mainly differ in the ranges of their fields of vision but both achieve the stereoscopic effect, which is not illustrated here for the sake of simplicity. It should be noticed that the virtual disparity D1 and the reverse virtual disparity D2 adopted in the rendering process for the P+Q disparity images can be different from or identical in magnitude to the virtual disparity D1 and the reverse virtual disparity D2 adopted in the rendering process for the N+M disparity images, and P and Q can respectively be different from or identical to M and N.
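The sequence of viewpoint offsets visited by the bilateral recursion can be made concrete with a small sketch. The function name and the unit-step parameterization are assumptions for illustration; what matters is that the path runs N steps forward, M = N steps back, P further steps backward, and Q = P steps forward, returning to the first view.

```python
def bilateral_offsets(n, p):
    """Viewpoint offsets visited by the bilateral recursive algorithm:
    N steps along the rendering direction, M = N steps back to the
    first view, P further steps along the reverse direction, and
    finally Q = P steps forward, ending at the first view again."""
    offsets, pos = [], 0
    for step in [+1] * n + [-1] * (n + p) + [+1] * p:
        pos += step
        offsets.append(pos)
    return offsets
```

The offsets show why the bilateral algorithm widens the field of vision relative to the unilateral one: viewpoints are generated on both sides of the first view rather than on one side only.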
Please refer to
Next, as shown in
It should be noticed that the Qth one of the Q fourth disparity images 14 (also called the last one of the Q fourth disparity images 14) is substantially identical to the first view 1, and therefore the processing unit 90 can disregard the Qth one of the Q fourth disparity images 14 before the interlacing step. That is, in the practical interlacing step, the processing unit 90 adopts only Q−1 fourth disparity image strip sets to be interlaced with the other disparity image strip sets, but the invention is not limited thereto. In another embodiment, the processing unit 90 can replace the first view 1 with the Qth of the Q fourth disparity images 14 for interlacing. How image strip sets are interlaced to render a display image has been disclosed by conventional algorithmic principles, and is not illustrated here for the sake of simplicity. However, the image strip sets adopted by the conventional algorithmic principles are generated with a segmentation direction parallel to an edge of each of the images, in contrast to the obliquely segmented image strip sets disclosed by the present application.
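Oblique segmentation and interlacing can be sketched as an assignment of each display pixel to one view, with the view index following a slanted line. This is a simplified model: the integer `slope` parameter is an assumption (a real lenticular pitch is generally fractional and sub-pixel interlacing would operate per color channel), and `oblique_interlace` is a hypothetical name.

```python
import numpy as np

def oblique_interlace(views, slope):
    """Interlace equally sized views into one display image with
    oblique strips: the view shown at column x of row y follows the
    slanted index (x + slope * y) mod len(views), so strip boundaries
    run obliquely across the image, matching slanted lenticular
    lenses rather than an image edge."""
    num = len(views)
    h, w = views[0].shape
    out = np.empty((h, w), dtype=views[0].dtype)
    for y in range(h):
        for x in range(w):
            out[y, x] = views[(x + slope * y) % num][y, x]
    return out
```

Setting `slope = 0` degenerates to the conventional edge-parallel interlacing; any non-zero slope yields strips that obliquely intersect the image edges, as required for the oblique lenticular lens layer.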
Please refer to
For example, the processing unit 90 can generate a mask (also called a computing matrix) of a particular size which covers at least one part of the disocclusion area. The mask can be a low-pass filter that smooths the covered disocclusion area, such as through an averaging procedure, but is not limited thereto. In practical application, the processing unit 90 disposes the disocclusion area consisting of holes on an edge of the mask for performing the hole-filling algorithm to generate a more realistic display effect. That is, by taking the hole with the most neighboring valid pixels as a target pixel, the processing unit 90 computes a value via the smoothing procedure performed on the valid pixels inside the mask and fills the target pixel with the computed value, converting the target pixel from a hole into a valid pixel. The valid pixel converted from the hole can subsequently be utilized as one of the inputs of the smoothing procedure for filling the next hole, and so on. By repeating the aforementioned hole-filling algorithm, the entire disocclusion area can be gradually converted into valid pixels, starting from the side of the disocclusion area neighboring the valid pixels and proceeding towards the inside (or the other side) of the disocclusion area. Therefore, as illustrated by an arrow indicating a hole-filling sequence J in
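A minimal sketch of this hole-filling idea follows. It differs from the described procedure in one simplification: holes are visited in raster order rather than by most-neighboring-valid-pixels first, and the hole marker `-1` and the function name are assumptions for illustration.

```python
import numpy as np

def fill_holes(img, hole=-1, win=1):
    """Sweep the image and replace each hole that has at least one
    valid pixel inside its (2*win+1)-sized window with the mean of
    those valid pixels; newly filled pixels become inputs for later
    holes, so filling proceeds from the edge of the disocclusion
    area inwards."""
    img = img.astype(float)
    mask = img == hole
    h, w = img.shape
    while mask.any():
        progressed = False
        for y in range(h):
            for x in range(w):
                if not mask[y, x]:
                    continue
                ys = slice(max(0, y - win), y + win + 1)
                xs = slice(max(0, x - win), x + win + 1)
                valid = img[ys, xs][~mask[ys, xs]]
                if valid.size:
                    img[y, x] = valid.mean()
                    mask[y, x] = False
                    progressed = True
        if not progressed:  # fully hollow image: nothing to propagate
            break
    return img
```

Because each filled pixel immediately counts as valid for its neighbors, the disocclusion area is consumed from its valid-pixel border inwards, mirroring the hole-filling sequence J described above.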
Please refer to
Since the first disparity map 3 can be regarded as a diagram with function of indicating the disparity D0 based on the first view 1 (and the second view 2) and with substantially identical features and sizes to the first view 1, the depth T of the first view 1 can substantially be regarded as identical to a depth of the first disparity map 3. Besides, each of the disparity images is rendered by superimposing a virtual disparity value or a reverse virtual disparity value onto the base disparity value of the first disparity map 3 and superimposed disparity values do not affect relative depths between objects shown in each of the disparity images, so a depth of each of the disparity images is substantially identical to the depth T of the first disparity map 3. Therefore, a normalizing procedure K normalizing the depth T of the first disparity map 3 according to the depth displaying allowance t of the display device 91 can also be utilized by the processing unit 90 to normalize the depths of all the views and disparity images so that the depths of all the views and disparity images can be compatible with the depth displaying allowance t of the display device 91.
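The normalizing procedure K amounts to a linear rescaling of the depth range, which can be sketched directly. The function name is hypothetical; the linear mapping is an assumption consistent with the requirement that relative depths between objects be preserved.

```python
import numpy as np

def normalize_depth(depth_map, allowance):
    """Normalizing procedure K (sketch): linearly rescale the depth T
    of the first disparity map into [0, t], the depth displaying
    allowance t of the display device, preserving relative depths."""
    d = np.asarray(depth_map, dtype=float)
    span = d.max() - d.min()
    if span == 0:
        return np.zeros_like(d)  # a flat scene needs no depth range
    return (d - d.min()) / span * allowance
```

Since the same scaling applies to every disparity image, normalizing the first disparity map once suffices to make all rendered views compatible with the display device.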
Please refer to
Step 500: providing the first disparity image 11 and the at least one second disparity image 12;
Step 510: the processing unit 90 segmenting the first disparity image 11 into a first disparity image strip set 61 along a segmenting direction Y obliquely intersecting an edge 15 of the first disparity image 11;
Step 520: the processing unit 90 segmenting the at least one second disparity image 12 into at least one second disparity image strip set 62 along the segmenting direction Y;
Step 530: the processing unit 90 interlacing the first disparity image strip set 61 and the at least one second disparity image strip set 62 for rendering a display image 8;
Step 540: the processing unit 90 controlling a display device 91 to display the display image 8;
Step 550: providing an oblique lenticular lens layer 92 including a plurality of oblique lenticular lenses 920, an orientation Z of each of the plurality of oblique lenticular lenses 920 obliquely intersects an edge 925 of the oblique lenticular lens layer 92;
Step 560: disposing the oblique lenticular lens layer 92 on the display device 91 with the orientation Z of each of the plurality of oblique lenticular lenses 920 substantially parallel to the segmenting direction Y.
Please refer to
Next, the processing unit 90 interlaces the first disparity image strip set 61 and the second disparity image strip set 62 to render the display image 8 (Step 530). As for how the displaying method 7 of the invention segments and interlaces the first disparity image and the second disparity image, please refer to the above paragraphs regarding the generation methods 6, 6′ and the electronic apparatus 9, which is not repeated here for the sake of simplicity. After interlacing, the processing unit 90 controls a display device 91 to display the display image 8 (Step 540). By disposing the oblique lenticular lens layer 92 on the display device 91 (Step 550 and Step 560), the display device 91 can display an auto-stereoscopic image of the display image 8 through the oblique lenticular lens layer 92. The oblique lenticular lens layer 92 includes a plurality of oblique lenticular lenses 920, and an orientation Z of each of the plurality of oblique lenticular lenses 920 obliquely intersects an edge 925 of the oblique lenticular lens layer 92.
It should be noticed that the orientation Z of each of the plurality of oblique lenticular lenses 920 is substantially parallel to the segmenting direction Y. The segmenting direction Y can be decomposed into a first component Y1 along a first direction 921 and a second component Y2 along a second direction 922 perpendicular to the first direction 921, and each of the first component Y1 and the second component Y2 is greater than zero, as shown in
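The decomposition of the segmenting direction Y into components Y1 and Y2 can be checked with a short sketch. The angle parameterization below is a hypothetical choice for illustration (the patent specifies only that both components be greater than zero, not any particular angle).

```python
import math

def segmenting_components(angle_deg):
    """Decompose a segmenting direction, given here as an angle
    measured from the first direction, into its components Y1 and Y2
    along the first and second directions."""
    rad = math.radians(angle_deg)
    return math.cos(rad), math.sin(rad)

def is_oblique(angle_deg, eps=1e-9):
    """The direction is oblique only when both components exceed
    zero, i.e. it is parallel to neither image edge."""
    y1, y2 = segmenting_components(angle_deg)
    return y1 > eps and y2 > eps
```

Angles of 0 and 90 degrees correspond to edge-parallel segmentation, which is exactly the conventional case the oblique segmenting direction avoids.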
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Foreign Application Priority Data: Taiwan Patent Application No. 107123901, filed Jul. 2018.
This application claims the benefit of U.S. Provisional Patent Application No. 62/535,239, filed on Jul. 21, 2017, which is hereby incorporated by reference in its entirety.