This application generally relates to measuring and reconstructing the shapes of physical objects, including objects that have specular surfaces.
Objects that are composed of a highly-glossy material, such as specular objects, have reflection characteristics that differ from objects that are composed of a diffuse material. For example, a diffuse material reflects light from a directional light source in virtually all directions, but a highly-glossy material reflects light from a directional light source primarily in only one direction or a few directions. These reflections from a highly-glossy material are specular reflections and are caused by the shiny surface of the highly-glossy material, which often has a mirror-like surface finish.
Some embodiments of a device comprise one or more computer-readable storage media and one or more processors that are coupled to the one or more computer-readable media. The one or more processors are configured to cause the device to obtain encoded images of an object; generate respective light-modulating-device-pixel indices for areas of the images based on the encoded images; generate respective coordinates of points on the object based on the light-modulating-device-pixel indices; generate respective surface normals at the points based on the light-modulating-device-pixel indices; map the respective coordinates of the points to a spherical image sensor, thereby producing respective spherical-coordinate representations of the points; generate spherical-coordinate representations of the respective surface normals based on the spherical coordinates of the points; and generate reconstructed surface coordinates based on the spherical-coordinate representations of the respective surface normals.
Some embodiments of a method comprise obtaining respective surface normals for points on an object, mapping the surface normals to a spherical image sensor, and generating reconstructed surface coordinates based on the surface normals that have been mapped to the spherical image sensor.
Some embodiments of one or more computer-readable media store instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations that comprise obtaining respective spherical coordinates of points on an object, obtaining respective spherical-coordinate representations of surface normals at the points on the object, and generating reconstructed surface coordinates based on the respective spherical coordinates and on the respective spherical-coordinate representations of the surface normals.
The following paragraphs describe certain explanatory embodiments. Other embodiments may include alternatives, equivalents, and modifications. Additionally, the explanatory embodiments may include several novel features, and a particular feature may not be essential to some embodiments of the devices, systems, and methods that are described herein.
In this embodiment, the light-modulating devices 120 are electronically-controllable light-diffusing panels, which are electronically controllable between a transparent mode and a diffuse mode. An example of an electronically-controllable light-diffusing panel is a liquid-crystal display (LCD) panel, which has programmable pixels that modulate a backlight. Another example of an electronically-controllable light-diffusing panel is electrochromic glass. Electrochromic glass includes a layer that has light-transmission properties that are switchable between a transparent mode, in which the layer is completely or almost completely transparent, and a diffuse mode, in which the layer assumes a frosted or opaque appearance.
The light source 125 may provide uniform or nearly-uniform area illumination, for example when the light source 125 is a panel that is composed of a high density of light-producing pixels. In some embodiments, the light source 125 is a backlight from a common display device (e.g., an LCD display, an LED display). Also, in some embodiments, the light source 125 is an imaging projector that has programmable, luminous pixels.
The light source 125 and the light-modulating devices 120 output light rays {right arrow over (r)}. As described herein, a light ray {right arrow over (r)} includes two components: an illumination light ray {right arrow over (r)}in, which travels from the light source 125 through the light-modulating devices 120 to the surface of the object 130, and a reflected light ray {right arrow over (r)}re, which is the reflection of the illumination light ray {right arrow over (r)}in from the surface of the object 130. Each light ray {right arrow over (r)}, its illumination light ray {right arrow over (r)}in, and its reflected light ray {right arrow over (r)}re may be described or identified by the intersections of the illumination light ray {right arrow over (r)}in with the two light-modulating devices 120. For example, a light ray {right arrow over (r)} is described by [u, v] and [s, t] in
The image-capturing device 110 captures the reflected light ray {right arrow over (r)}re. The reflected light ray {right arrow over (r)}re and the illumination light ray {right arrow over (r)}in are both part of the same light ray {right arrow over (r)}, and the reflected light ray {right arrow over (r)}re is composed of light from the illumination light ray {right arrow over (r)}in that has been reflected from a point 131 on the surface of the object 130. The image-capturing device 110 generates an image from captured reflected light rays {right arrow over (r)}re. A light-modulating-device-pixel (LMD-pixel) index of the region (e.g., one pixel, a contiguous set of pixels) of the captured image that includes the point 131 on the surface describes the two LMD pixels that transmitted the light ray {right arrow over (r)} between the light source 125 and the point 131. In this example, the light-modulating-device-pixel index includes pixel (s, t) and pixel (u, v) in the region of the image that includes the point 131. The LMD-pixel indices of an image may be represented by one or more index maps.
Furthermore, because information about the shape of the object 130 is obtained by capturing reflections from it, and because the reflections are viewpoint dependent, in order to recover the full surface of the object 130, the measurement system 10 can observe the object's reflections from multiple points of view (viewpoints), for example by using one or more additional image-capturing devices 110 or by observing the object 130 in different poses (e.g., by rotating the object). In the example embodiment of
The measurement system 10 may calibrate the positions of the light-modulating devices 120 and the image-capturing device 110, as well as the rotating stage 135 in embodiments that include the rotating stage 135. In some embodiments, the calibration procedure includes generating calibration information, such as one or more transformations (e.g., one or more transformation matrices). A transformation may define a rotation and a translation from an image-capturing device 110 to a rotating stage 135, from an image-capturing device 110 to an object 130, from an image-capturing device 110 to a light source 125, or from an image-capturing device 110 to a light-modulating device 120. Also, a transformation may define a rotation and a translation between different poses of the object 130. For example, some embodiments of the measurement system 10 generate the calibration information as described in U.S. patent application Ser. No. 15/279,130.
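For example, a transformation between a coordinate frame a and a coordinate frame b can be written, in a generic form that is not specific to any particular calibration procedure, as a rotation matrix $R_{ab}$ and a translation vector $t_{ab}$ that map a point $p_a$ in frame a to the corresponding point $p_b$ in frame b:

$p_b = R_{ab}\,p_a + t_{ab}$, or, in homogeneous coordinates, $\begin{bmatrix} p_b \\ 1 \end{bmatrix} = \begin{bmatrix} R_{ab} & t_{ab} \\ 0^{\top} & 1 \end{bmatrix}\begin{bmatrix} p_a \\ 1 \end{bmatrix}$.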
The example embodiment of a measurement system 10 in
Some embodiments of the one or more measurement devices 100 use a spherical-coordinate system to combine the coordinates of the points on the surface of the object or to combine the surface normals. For example, because the respective surface normals or the respective coordinates in each image may be scaled differently, when combining the surface normals or coordinates, the one or more measurement devices 100 may bring the respective surface normals or coordinates from the different images into a uniform scale.
A point (x, y, z) in the traditional Cartesian coordinate system can be converted to spherical coordinates as follows:
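For example, with θ taken as the azimuthal angle and ϕ as the polar angle (the convention used throughout this description), a standard form of this conversion is

$r = \sqrt{x^{2} + y^{2} + z^{2}}$, (1)

$\theta = \arctan\!\left(\frac{y}{x}\right)$, and (2)

$\phi = \arccos\!\left(\frac{z}{r}\right)$. (3)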
Any vector can be represented as a linear combination of three mutually-orthogonal unit vectors, {circumflex over (r)}, {circumflex over (θ)}, and {circumflex over (ϕ)}, which are the unit vectors in spherical coordinates. At any point in space, unit vector {circumflex over (r)} points in the radially outward direction, unit vector {circumflex over (ϕ)} is perpendicular to unit vector {circumflex over (r)} and points in the direction of increasing polar angle ϕ, and unit vector {circumflex over (θ)} points in the direction of increasing azimuthal angle θ and is orthogonal to both unit vector {circumflex over (r)} and unit vector {circumflex over (ϕ)}. For example, vector {right arrow over (V)} can be represented in terms of these unit vectors as described by the following:
$\vec{V} = V_r\,\hat{r} + V_\theta\,\hat{\theta} + V_\phi\,\hat{\phi}$, (4)
where Vr, Vθ, and Vϕ are the components of {right arrow over (V)} along unit vector {circumflex over (r)}, unit vector {circumflex over (θ)}, and unit vector {circumflex over (ϕ)}, respectively.
Let the unit vectors in Cartesian coordinates along the x-axis, y-axis and z-axis be unit vector {circumflex over (x)}, unit vector ŷ, and unit vector {circumflex over (z)}, respectively. If the components of a vector {right arrow over (V)} along these axes are Vx, Vy, and Vz, then the vector {right arrow over (V)} can be represented as follows:
$\vec{V} = V_x\,\hat{x} + V_y\,\hat{y} + V_z\,\hat{z}$. (5)
The representation of this vector {right arrow over (V)} can be converted to a representation that uses spherical coordinates, for example as described by the following:
$V_r = V_x\cos(\theta)\sin(\phi) + V_y\sin(\theta)\sin(\phi) + V_z\cos(\phi)$, (6)

$V_\theta = -V_x\sin(\theta) + V_y\cos(\theta)$, and (7)

$V_\phi = V_x\cos(\theta)\cos(\phi) + V_y\sin(\theta)\cos(\phi) - V_z\sin(\phi)$. (8)
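As a concrete illustration of equations (6)-(8), the following sketch converts the Cartesian components of a vector to its spherical components at a point with azimuthal angle θ and polar angle ϕ. Python with NumPy is assumed, and the function and variable names are illustrative only.

```python
import numpy as np

def cartesian_vector_to_spherical(v, theta, phi):
    """Convert a vector's Cartesian components (vx, vy, vz) to its spherical
    components (vr, vtheta, vphi) at a point with azimuthal angle theta and
    polar angle phi, following equations (6)-(8)."""
    vx, vy, vz = v
    vr = (vx * np.cos(theta) * np.sin(phi)
          + vy * np.sin(theta) * np.sin(phi)
          + vz * np.cos(phi))
    vtheta = -vx * np.sin(theta) + vy * np.cos(theta)
    vphi = (vx * np.cos(theta) * np.cos(phi)
            + vy * np.sin(theta) * np.cos(phi)
            - vz * np.sin(phi))
    return vr, vtheta, vphi

# At the pole (phi = 0) the z-axis is the radial direction, so a unit z vector
# should come out purely radial:
print(cartesian_vector_to_spherical((0.0, 0.0, 1.0), theta=0.0, phi=0.0))
```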
Some embodiments of the measurement system 10 convert the representations of any coordinates and surface normals to representations in spherical coordinates, for example using the conversions described above. Furthermore, the remainder of this description presents all derivations in a spherical-coordinate system. Additionally, the following description uses a point cloud to represent a collection of coordinates, although some embodiments of the measurement system 10 use another representation of a collection of coordinates.
The mapping from the object's surface to the spherical image sensor 315 can be achieved by radially projecting each point on the surface of the object 330 onto the spherical image sensor 315. As a result, both the surface point as well as the pixel to which it is mapped have the same azimuthal angle θ and the same polar angle ϕ. In the spherical-coordinate system, this mapping is an orthographic projection.
Let P be a point on the surface of the object that has three-dimensional coordinates (r, θ, ϕ). This surface point P is projected radially onto the surface of the spherical image sensor 315 by extending the line connecting the center of projection 316 and the surface point P to meet the surface of the spherical image sensor 315, as shown in
Given the point cloud of the object 330, a suitable center of projection (e.g., the centroid of the point cloud) can be selected, such as the center of projection 316 that is shown in
The surface of the three-dimensional object 330 can be completely described by specifying the radial distance of the surface points from the origin at each azimuthal angle θ and polar angle ϕ. Consequently, the radial distance R of the surface points of the object 330 can be modeled as a function of the azimuthal angle θ and the polar angle ϕ. The task of determining the object's shape reduces to finding the surface function R(θ, ϕ). The surface point P of the object 330 that maps to an image pixel l has the coordinates (R(θ, ϕ), θ, ϕ), although the surface function R(θ, ϕ) may need to be determined.
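A minimal sketch of this radial projection, assuming a point cloud stored as an N×3 NumPy array and using the centroid of the cloud as the center of projection (all names are illustrative), is the following:

```python
import numpy as np

def project_to_spherical_sensor(points):
    """Radially project a point cloud (an N x 3 array of Cartesian coordinates)
    onto a notional spherical image sensor. The centroid of the cloud serves as
    the center of projection, and each point is expressed as (r, theta, phi)
    about that center, so a surface point and the sensor pixel it maps to share
    the same azimuthal angle theta and polar angle phi."""
    center = points.mean(axis=0)                        # center of projection
    p = points - center                                 # coordinates relative to the center
    r = np.linalg.norm(p, axis=1)                       # radial distance
    theta = np.arctan2(p[:, 1], p[:, 0])                # azimuthal angle (atan2 form of eq. (2))
    phi = np.arccos(np.clip(p[:, 2] / r, -1.0, 1.0))    # polar angle (eq. (3))
    return np.column_stack([r, theta, phi]), center
```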
Consider an image pixel l with an azimuthal angle θ and a polar angle ϕ. Let {right arrow over (N)} be the surface normal that is associated with this pixel. Let Nr, Nθ, and Nϕ be the components of the surface normal {right arrow over (N)} along vector {circumflex over (r)}, vector {circumflex over (θ)}, and vector {circumflex over (ϕ)}, respectively. The surface normal {right arrow over (N)} can be described as follows:
$\vec{N} = N_r\,\hat{r} + N_\theta\,\hat{\theta} + N_\phi\,\hat{\phi}$. (9)
The surface normal {right arrow over (N)} at point P on the surface of the object 330 can also be expressed in terms of the gradient of the surface function R(θ, ϕ). In the spherical-coordinate system, the surface normal {right arrow over (N)} can be expressed as follows:
Expressing equation (9) in the form of equation (10) yields the following expression of the surface normal {right arrow over (N)} that is associated with a pixel:
Equating the corresponding components of the surface normal {right arrow over (N)} in equation (10) and equation (11) produces the following partial differential equations:
These equations can be further simplified by using the following substitution: g(θ, ϕ)=log(R(θ, ϕ)). Note that this implicitly assumes that R(θ, ϕ)>0. This is valid because the center of projection 316 can be selected such that it does not lie on the surface of the object 330. Accordingly, the partial differential equations can be described as follows:
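For example, assuming an outward-pointing unit normal and the standard spherical-gradient expression for a surface R(θ, ϕ), equating the components and substituting g(θ, ϕ) = log(R(θ, ϕ)) leads to relations of the form

$\frac{\partial g(\theta,\phi)}{\partial \theta} = -\sin(\phi)\,\frac{N_\theta}{N_r}$ and $\frac{\partial g(\theta,\phi)}{\partial \phi} = -\frac{N_\phi}{N_r}$,

in which the signs depend on the chosen orientation of the normal.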
$R(\theta, \phi) = e^{g(\theta, \phi)}$. (16)
Note that the function g(θ, ϕ) can be determined only up to a constant value c. Therefore,
$R(\theta, \phi) = e^{g(\theta, \phi) + c}$, (17)

$R(\theta, \phi) = e^{c}\,e^{g(\theta, \phi)}$, and (18)

$R(\theta, \phi) = k\,e^{g(\theta, \phi)}$, (19)
where k = e^c is a scale ambiguity. Thus, if point P on the surface of the object 430 has the coordinates (R(θ, ϕ), θ, ϕ), then the coordinates of point P's reconstruction Pr can be described by (k·e^{g(θ, ϕ)}, θ, ϕ).
In some embodiments, because the spherical-normal-vector integration reconstructs the entire object 430 in one pass or iteration, there is only a single scale ambiguity k to be resolved. This can be done by determining the ground truth for one point on the surface of the object 430.
One technique to measure the ground truth of a point on the object's surface includes determining the object's convex hull. In the regions where the convex hull coincides with the object, the hull is a good estimate of the ground truth.
Consider the point P, which is in a place where the convex hull 542 coincides with the object 530. Let the measured coordinates of this point P be (Ractual, θ, ϕ). Let point Q be the corresponding point, which is the point that has the same azimuthal angle θ and the same polar angle ϕ, on the reconstructed point cloud 540. Let the radial distance of point Q from the origin be Rrecon. Then the scale factor k can be determined as follows:
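$k = \frac{R_{actual}}{R_{recon}}$. (20)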
The reconstructed point cloud 540 can now be brought to the same scale as the object 530 by multiplying the radial-distance coordinate (R) of each point on the reconstructed point cloud 540 with the scale factor k. Note that the convex hull 542 is just one possible way to determine a ground-truth point. Some embodiments of measurement systems use other methods. Also, in some embodiments, at least one point is necessary to determine the scale factor k. And to account for measurement errors and to increase the robustness of the scale-factor determination, the ground-truth measurement can be extended to a set of points (rather than just a single point). Thus, some embodiments of measurement systems calculate the scale factor k using multiple points on the object (e.g., points on the convex hull 542 where the convex hull 542 coincides with the object 530) and the corresponding points on the reconstructed point cloud 540.
Furthermore, in order to solve partial differential equations (14) and (15) numerically, the surface normal {right arrow over (N)} may be sampled on a uniform grid in the θ−ϕ plane. However, in many practical applications, this sampling is not uniform. To solve the partial differential equations (14) and (15), uniformly-sampled data can be obtained from this set of non-uniform, scattered data points.
For example, each component of the surface normals {right arrow over (N)} can be modeled as a function of the azimuthal angle θ and the polar angle ϕ. This two-dimensional function can be determined by fitting a surface through the available non-uniform samples using Delaunay triangulation. Delaunay triangulation approximates the surface using triangles. The available sample points act as vertices of these triangles. For any point which lies in the interior of a triangle, the function value is obtained through linear interpolation of the function values at the vertices of the triangle. Using this method, uniformly-spaced surface normals {right arrow over (N)} can be generated.
However, if each component of the surface normals {right arrow over (N)} was interpolated separately, then the surface normals {right arrow over (N)} may not be unit length. To rectify this, at each sample point, the surface normal {right arrow over (N)} can be divided by its norm. This produces a set of uniformly-spaced, unit-length surface normals {right arrow over (N)}, which can be used to solve the partial differential equations. Also, these operations can be performed in the Cartesian domain and, once they are finished, all the resulting surface normals {right arrow over (N)} can be converted to spherical coordinates.
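A sketch of this resampling step, assuming SciPy's Delaunay-based linear interpolation (scipy.interpolate.griddata with method='linear'), is shown below; the array names and grid resolution are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

def resample_normals(theta, phi, normals, n_theta=360, n_phi=180):
    """Interpolate scattered surface normals onto a uniform grid in the
    theta-phi plane and renormalize them to unit length.

    theta, phi : 1-D arrays of the scattered sample angles.
    normals    : N x 3 array of surface-normal components at those samples.
    """
    grid_theta, grid_phi = np.meshgrid(
        np.linspace(theta.min(), theta.max(), n_theta),
        np.linspace(phi.min(), phi.max(), n_phi))
    samples = np.column_stack([theta, phi])
    # Linear interpolation over the Delaunay triangulation of the samples,
    # applied to each normal component separately; grid locations outside the
    # convex hull of the samples become NaN.
    grid_n = np.dstack([
        griddata(samples, normals[:, i], (grid_theta, grid_phi), method='linear')
        for i in range(3)])
    # Per-component interpolation does not preserve unit length, so renormalize.
    grid_n /= np.linalg.norm(grid_n, axis=2, keepdims=True)
    return grid_theta, grid_phi, grid_n
```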
Each point on the surface of the object 630 is associated with an R-buffer value. If the mapping between the pixel of the spherical image sensor 615 (sensor pixel) and the point on the surface is unique (for example, point D in
Normal-vector integration may be performed using only the points that have the same R-buffer value. Accordingly, points with an R-buffer value of 1 are integrated in the first iteration or pass, points with an R-buffer value of 2 are integrated in the second iteration or pass, and so on. After each iteration or pass of the integration, the scale-factor ambiguity can be resolved. Finally, the results of all of the integration iterations or passes can be combined to produce a single reconstructed point cloud that models the object 630.
The effectiveness and robustness of three embodiments of measurement systems were tested. In one embodiment, a synthetically-generated ellipsoid was used for quantitative testing. In order to check the performance with more complex models, one embodiment used the Stanford Bunny model, and another embodiment used a horse model.
For quantitative evaluation of the algorithm, the relative RMS error between the reconstructed point cloud in
In Table 1, a noise standard deviation of zero indicates that no noise was added to the point cloud. The relative RMS error in the reconstructed point cloud (the reconstruction) is just 0.4% when the input point cloud is not noisy. This shows that the reconstruction is very accurate. Even when the noise standard deviation is 0.1, the error is less than 1%. This indicates that the embodiment of the algorithm can be used for de-noising and smoothing noisy point clouds.
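One common definition of such a relative RMS error, given here only as an illustration, is

$E_{rel} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(\frac{R_i^{recon} - R_i^{gt}}{R_i^{gt}}\right)^{2}}$,

where M is the number of points, $R_i^{recon}$ is the reconstructed radial distance of point i, and $R_i^{gt}$ is the corresponding ground-truth radial distance.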
This test case was challenging because of the following complications: (1) In many cases, multiple 3D object points were mapped to the same sensor pixel, and (2) some of the 3D points were degenerate (i.e., their surface normals were perpendicular to the radial direction). To overcome these complications, the point cloud of the bunny was reconstructed in three parts, each of which had a respective center of projection: the head, the body, and the tail. For each part, one ground-truth point was used as a reference in order to calculate the correct scale factor. The three reconstructed point clouds were then combined to form a single reconstructed point cloud. Qualitatively, the reconstructed point clouds closely resemble the ground truth. The embodiment of a reconstruction algorithm was even able to handle problematic concave areas, such as the inner ears of the bunny.
Similar to the Stanford Bunny, this test case was also difficult because of the following complications: (1) In many cases, multiple 3D object points were mapped to the same sensor pixel, and (2) some of the 3D points were degenerate (i.e., the surface normals were perpendicular to the radial direction). To resolve these complications, the point cloud was reconstructed in six smaller point clouds, each of which had a respective center of projection: the head, the body, and the four legs. In each case, one ground-truth point was used as a reference in order to calculate the correct scale factor. The reconstructed smaller point clouds were then combined to form a single reconstructed point cloud.
Furthermore, although the operational flows that are described herein are performed by a measurement device, some embodiments of these operational flows are performed by two or more measurement devices or by one or more other specially-configured computing devices.
In block B1300, a measurement device obtains images 1312 of an object, and the measurement device decodes the images 1312 and generates respective LMD-pixel indices 1331 (e.g., LMD-pixel-index maps) for each of the images 1312. The images 1312 show the object from different viewpoints.
The LMD-pixel indices describe the LMD pixels ((s, t) and (u, v) in
Next, in block B1305, the measurement device performs ray triangulation based on the LMD-pixel indices 1331 to generate a respective normal field 1332 or point cloud for each image 1312 of the object. For example, for a light ray {right arrow over (r)}, the measurement device may triangulate its illumination light ray {right arrow over (r)}in and its reflected light ray {right arrow over (r)}re to determine the surface normal of the point on the object's surface that reflected the light ray {right arrow over (r)}.
Thus, the measurement device calculates a respective surface normal {right arrow over (n)} for each of a plurality of points on the surface of the object based on the direction of the illumination light ray {right arrow over (r)}in of the specular reflection at the point and on the direction of the reflected light ray {right arrow over (r)}re of the specular reflection at the point. For example, some embodiments calculate the surface normal {right arrow over (n)} as described by the following:
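With $\hat{d}_{in}$ denoting a unit vector along the propagation direction of the illumination light ray (toward the surface point) and $\hat{d}_{re}$ a unit vector along the propagation direction of the reflected light ray (away from the surface point), a standard relation that is consistent with this description is the normalized half-vector

$\vec{n} = \frac{\hat{d}_{re} - \hat{d}_{in}}{\lVert \hat{d}_{re} - \hat{d}_{in} \rVert}$,

where the notation $\hat{d}_{in}$ and $\hat{d}_{re}$ is introduced here only for illustration.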
However, in other embodiments, the normal fields 1332 can be generated by other means. For example, in embodiments that measure the shapes of diffuse objects, the normal fields 1332 may be generated using coordinates that were obtained from time-of-flight cameras, structured-light scanners, or coded-aperture cameras that captured images of the object. Also for example, some embodiments use photometric stereo techniques that rely solely on normal-field calculation. And surface normals can also be estimated using only a three-dimensional point cloud.
The flow then moves to block B1310, where the measurement device performs normal-field integration on the normal fields 1332 (or the point cloud) to generate combined surface coordinates 1333. This may include converting coordinates from Cartesian coordinates to spherical coordinates. The combined surface coordinates 1333 are the spherical coordinates of respective points on the surface of the object (e.g., a point cloud of spherical coordinates) and collectively describe an integrated surface. To accomplish this, some embodiments of the measurement device combine all of the normal fields 1332 or point clouds into a single normal field or point cloud and then perform computations that can be described by some or all of equations (1)-(20). The combined surface coordinates 1333, which may be a point cloud, describe the relative positions of respective points on the surface in a uniform scale.
Additionally, in block B1310, the measurement device may separate the normal fields 1332 (or the point cloud) into different sets of normals (or different point clouds), each of which may have a different center of projection. Each of the different sets of normals (or point clouds) can be described by respective combined surface coordinates 1333. For example, if a line from the center of projection to the spherical image sensor would pass through two points in the normal fields, then those two points can be separated into different sets of normals. Accordingly, each set of normals (or each point cloud) can include only points for which a line from the center of projection to the spherical image sensor would pass through only one point, and each set of normals can include normals from different normal fields 1332. Also for example, if the surface normals {right arrow over (N)} at some points are orthogonal to the radial-direction vector {circumflex over (r)} for a particular center of projection, then these points can be separated out to form a second set of points. Another center of projection can then be selected for the points in the second set so that the surface normals {right arrow over (N)} for these points are not orthogonal to the radial-direction vector {circumflex over (r)}.
After block B1310, the flow proceeds to block B1315, where revised scale-factor calculation is performed based on the combined surface coordinates 1333. This scale-factor calculation produces a revised scale factor 1334. In order to calculate the revised scale factor 1334, some embodiments of the measurement device compare a measurement of one or more dimensions of the object as described by the combined surface coordinates 1333 to another measurement of the one or more dimensions of the object (e.g., a measurement that is input by a user, a measurement from a time-of-flight camera, a measurement from a structured-light scanner, a measurement from a coded-aperture camera). Thus, if the object as described by the combined surface coordinates 1333 is 1 cm on the x-axis, and a measurement of the physical object on the x-axis is 2 cm, then the revised scale factor 1334 may be two times the uniform scale factor. In embodiments that separate the normal fields into different sets of normal fields and generate respective combined surface coordinates 1333 for the sets, a respective revised scale factor 1334 can be calculated for each set.
After block B1315, the flow proceeds to block B1320, where revised surface coordinates 1335 are calculated based on the combined surface coordinates 1333 and on the revised scale factor 1334. Also, in embodiments that separate the normal fields 1332 into different sets of normal fields and generate respective combined surface coordinates 1333 for the sets, respective revised coordinates can be calculated for each set and then combined to produce the revised surface coordinates 1335.
Additionally, some embodiments obtain the coordinates and a surface normal for each point by using other means. For example, some embodiments obtain the coordinates and surface normals from other sources than encoded images, such as measurements from a stereoscopic camera, a time-of-flight camera, etc. Some of these sources may provide the coordinates and the surface normals without requiring the decoding of images.
Next, in block B1415, the measurement device selects a center of projection. The center of projection may be selected such that, based on the collection of points, the center of projection does not appear to lie on the surface of the object. Then, in block B1420, the measurement device uses the center of projection to map the Cartesian coordinates of the points to a spherical image sensor, thereby producing a collection (e.g., a point cloud) of spherical coordinates for the points. For example, the measurement device may calculate the spherical coordinates for a point as described by equations (1)-(3), using the Cartesian coordinates of the point and using the center of projection as the origin.
Next, in block B1425, the measurement device converts the surface normals from Cartesian coordinates to spherical coordinates based on the surface normals and on the spherical coordinates for the points, for example as described by equations (5)-(8). Thus, at the end of block B1425, the measurement device has coordinates and a surface normal for each point, and the coordinates and surface normal are represented in spherical coordinates.
The flow then proceeds to block B1430, where the measurement device generates reconstructed surface coordinates based on the spherical representations of the normal vectors or on the spherical representations of the coordinates (e.g., the azimuthal angle θ and the polar angle ϕ), for example as described by one or more of equations (12)-(19). In some embodiments, the measurement device directly calculates the reconstructed surface coordinates for the points when numerically solving one or more of equations (12)-(15).
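A highly simplified sketch of this step is shown below. It assumes the gradient relations given above for g(θ, ϕ), uses a crude cumulative-sum integration rather than the least-squares or Poisson-type solvers that a practical implementation would likely use, and all names are illustrative. It consumes the uniform grid produced by the resampling sketch above (and assumes that grid contains no NaN entries).

```python
import numpy as np

def integrate_normals(grid_theta, grid_phi, grid_n):
    """Recover g(theta, phi) = log(R) from spherical normal components by a
    crude cumulative integration, exponentiate to obtain R(theta, phi) up to
    the global scale factor k, and return reconstructed Cartesian points."""
    n_r, n_th, n_ph = grid_n[..., 0], grid_n[..., 1], grid_n[..., 2]
    d_theta = grid_theta[0, 1] - grid_theta[0, 0]    # grid spacing in theta
    d_phi = grid_phi[1, 0] - grid_phi[0, 0]          # grid spacing in phi
    # Partial derivatives of g, assuming the gradient relations sketched above.
    dg_dtheta = -np.sin(grid_phi) * n_th / n_r
    dg_dphi = -n_ph / n_r
    # Crude path integration: down the first column in phi, then across in theta.
    g = np.cumsum(dg_dphi[:, :1], axis=0) * d_phi
    g = g + np.cumsum(dg_dtheta, axis=1) * d_theta
    radius = np.exp(g)                               # R(theta, phi), up to the scale factor k
    # Convert back to Cartesian points for the reconstructed cloud.
    x = radius * np.sin(grid_phi) * np.cos(grid_theta)
    y = radius * np.sin(grid_phi) * np.sin(grid_theta)
    z = radius * np.cos(grid_phi)
    return np.dstack([x, y, z])
```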
The flow then moves to block B1435, where the measurement device calculates a scale factor for the reconstructed surface coordinates. For example, in some embodiments, the measurement device calculates the scale factor using one or more points where a convex hull coincides with the object and the corresponding points in the spherical coordinates. Also, some embodiments of the measurement device calculate the scale factor using one or more obtained measurements of the object.
Finally, in block B1440, the measurement device generates rescaled, reconstructed surface coordinates (e.g., a point cloud of reconstructed surface coordinates) based on the reconstructed surface coordinates and on the scale factor. Depending on the embodiment, the reconstructed surface coordinates may include more points than the number of points in the collection of spherical coordinates, an equal number of points to the number of points in the collection of spherical coordinates, or fewer points than the number of points in the collection of spherical coordinates.
Next, in block B1515, the measurement device selects a center of projection. The center of projection may be selected such that, based on the collection of points, the center of projection does not appear to lie on the surface of the object. Then, in block B1520, the measurement device uses the center of projection to map the Cartesian coordinates of the points to a spherical image sensor, thereby producing a collection (e.g., a point cloud) of spherical coordinates for the points. For example, the measurement device may calculate the spherical coordinates for a point as described by equations (1)-(3), using the Cartesian coordinates of the point and using the center of projection as the origin.
Next, in block B1525, the measurement device converts the surface normals from Cartesian coordinates to spherical coordinates based on the surface normals and on the spherical coordinates for the points, for example as described by equations (5)-(8). Thus, at the end of block B1525, the measurement device has coordinates and a surface normal for each point, and the coordinates and surface normal are represented in spherical coordinates.
The flow then moves to block B1530, where the measurement device assigns a respective R-buffer value to the points. For example, in some embodiments, all points for which a line from the center of projection to the spherical image sensor passes through only that point are assigned the same R-buffer value (e.g., 1, 2, 3, 4). And if a line from the center of projection to the spherical image sensor passes through multiple points, then the R-buffer value of the point closest to the center of projection is set to one value, the R-buffer value of the next closest point is set to another value, and so on. Also, in some embodiments, the same R-buffer value is assigned to the points in a group of points, for example all the points in an ear of the bunny in
The flow then proceeds to block B1535, where the measurement device selects the first R-buffer value Bv, which is 1 in this example. Then, in block B1540, the measurement device generates reconstructed surface coordinates for the points that have been assigned the current R-buffer value Bv (which is 1 in the first iteration of block B1540) based on the spherical representations of the normal vectors or on the spherical representations of the coordinates (e.g., the azimuthal angle θ and the polar angle ϕ) of the points, for example as described by equations (12)-(19). Following, in block B1545, the measurement device calculates a scale factor for the current R-buffer value Bv. The flow then advances to block B1550, where the measurement device generates rescaled, reconstructed surface coordinates for the current R-buffer value Bv based on the reconstructed surface coordinates and on the scale factor.
Next, in block B1555, the measurement device determines if blocks B1540-B1550 have been performed for all R-buffer values. If they have not (block B1555=No), then the flow moves to block B1560, where the next R-buffer value Bv is selected (Bv=Bv+1), and the flow returns to block B1540. Otherwise (block B1555=Yes) the flow moves to block B1565, where the measurement device combines the rescaled, reconstructed surface coordinates for all of the R-buffer values.
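As an illustration of the R-buffer assignment performed in block B1530, a minimal sketch (with illustrative names) is the following; points are binned by sensor pixel and ranked by radial distance within each bin.

```python
import numpy as np
from collections import defaultdict

def assign_r_buffer_values(r, theta, phi, n_theta=360, n_phi=180):
    """Assign an R-buffer value to each point: points that fall in the same
    sensor-pixel bin (the same theta/phi bin) are ordered by radial distance,
    and the point closest to the center of projection gets value 1, the next
    closest gets value 2, and so on."""
    theta_bins = np.digitize(theta, np.linspace(theta.min(), theta.max(), n_theta))
    phi_bins = np.digitize(phi, np.linspace(phi.min(), phi.max(), n_phi))
    pixels = defaultdict(list)
    for index, pixel in enumerate(zip(theta_bins, phi_bins)):
        pixels[pixel].append(index)
    r_buffer = np.zeros(len(r), dtype=int)
    for indices in pixels.values():
        for rank, index in enumerate(sorted(indices, key=lambda j: r[j]), start=1):
            r_buffer[index] = rank
    return r_buffer
```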
Also, some embodiments separate the points into groups of points without using R-buffer values, perform operations similar to blocks B1540-B1550 for each group, and combine the results similar to block B1565.
Next, in block B1615, the measurement device selects a first center of projection Cp and corresponding points. The corresponding points are points that have a surface normal {right arrow over (N)} that is not orthogonal to the radial-direction vector {circumflex over (r)} of the currently-selected center of projection.
Then, in block B1620, the measurement device uses the currently-selected center of projection to map the Cartesian coordinates of the corresponding points to a spherical image sensor, thereby producing a collection (e.g., a point cloud) of spherical coordinates for the corresponding points. For example, the measurement device may calculate the spherical coordinates for a point as described by equations (1)-(3), using the Cartesian coordinates of the point and using the currently-selected center of projection as the origin.
Next, in block B1625, the measurement device converts the surface normals of the corresponding points from Cartesian coordinates to spherical coordinates based on the surface normals and on the spherical coordinates for the points, for example as described by equations (5)-(8). Thus, at the end of block B1625, the measurement device has coordinates and a surface normal for each point, the coordinates and surface normal are represented in spherical coordinates, and the spherical coordinates use the currently-selected center of projection as the origin.
The flow then moves to block B1630, where the measurement device generates reconstructed surface coordinates for the corresponding points based on the spherical representations of the normal vectors and on the spherical representations of the coordinates (e.g., the azimuthal angle θ and the polar angle ϕ) of the corresponding points, for example as described by equations (12)-(19). Following, in block B1635, the measurement device calculates a scale factor for the currently-selected center of projection Cp. The flow then advances to block B1640, where the measurement device generates rescaled, reconstructed surface coordinates for the currently-selected center of projection Cp based on the reconstructed surface coordinates and on the scale factor.
Next, in block B1645, the measurement device determines if blocks B1620-B1640 have been performed for all centers of projection. If not (block B1645=No), then the flow moves to block B1650, where the next center of projection (Cp=Cp+1) and its corresponding points are selected, and the flow returns to block B1620. Otherwise (block B1645=Yes) the flow moves to block B1655.
In block B1655, the measurement device aligns all the rescaled, reconstructed surface coordinates to the same center of projection. In some embodiments, this is performed by selecting one center of projection as the point of reference, and shifting the center of projection of all the rescaled, reconstructed surface coordinates that used another center of projection. Finally, in block B1660, the measurement device combines the aligned rescaled, reconstructed surface coordinates.
Additionally, some operational flows use multiple R-buffer values (e.g., as described in
The measurement device 1700 includes one or more processors 1701, one or more I/O components 1702, and storage 1703. Also, the hardware components of the measurement device 1700 communicate by means of one or more buses or other electrical connections. Examples of buses include a universal serial bus (USB), an IEEE 1394 bus, a PCI bus, an Accelerated Graphics Port (AGP) bus, a Serial AT Attachment (SATA) bus, and a Small Computer System Interface (SCSI) bus.
The one or more processors 1701 include one or more central processing units (CPUs), which include microprocessors (e.g., a single core microprocessor, a multi-core microprocessor); one or more graphics processing units (GPUs); one or more application-specific integrated circuits (ASICs); one or more field-programmable gate arrays (FPGAs); one or more digital signal processors (DSPs); or other electronic circuitry (e.g., other integrated circuits). The I/O components 1702 include communication components (e.g., a GPU, a network-interface controller) that communicate with input and output devices, which may include a keyboard, a display device, a mouse, a printing device, a touch screen, a light pen, an optical-storage device, a scanner, a microphone, a drive, a controller (e.g., a joystick, a control pad), and the network 1799. In some embodiments, the I/O components 1702 also include specially-configured communication components that communicate with the image-capturing device 1710, the two or more light-modulating devices 1720, and the light source 1725.
The storage 1703 includes one or more computer-readable storage media. As used herein, a computer-readable storage medium, in contrast to a mere transitory, propagating signal per se, refers to a computer-readable media that includes a tangible article of manufacture, for example a magnetic disk (e.g., a floppy disk, a hard disk), an optical disc (e.g., a CD, a DVD, a Blu-ray), a magneto-optical disk, magnetic tape, and semiconductor memory (e.g., a non-volatile memory card, flash memory, a solid-state drive, SRAM, DRAM, EPROM, EEPROM). Also, as used herein, a transitory computer-readable medium refers to a mere transitory, propagating signal per se, and a non-transitory computer-readable medium refers to any computer-readable medium that is not merely a transitory, propagating signal per se. The storage 1703, which may include both ROM and RAM, can store computer-readable data or computer-executable instructions.
The measurement device 1700 also includes a decoding module 1703A, a coordinate-calculation module 1703B, a coordinate-conversion module 1703C, an integration module 1703D, a rescaling module 1703E, and a communication module 1703F. A module includes logic, computer-readable data, or computer-executable instructions, and may be implemented in software (e.g., Assembly, C, C++, C#, Java, BASIC, Perl, Visual Basic), hardware (e.g., customized circuitry), or a combination of software and hardware. In some embodiments, the devices in the system include additional or fewer modules, the modules are combined into fewer modules, or the modules are divided into more modules. When the modules are implemented in software, the software can be stored in the storage 1703.
The decoding module 1703A includes instructions that, when executed, or circuits that, when activated, cause the measurement device 1700 to decode images and determine LMD-pixel indices, for example as performed in block B1300 in
The coordinate-calculation module 1703B includes instructions that, when executed, or circuits that, when activated, cause the measurement device 1700 to calculate surface normals (e.g., normal fields) or three-dimensional Cartesian coordinates of points on the surface of an object, for example as performed in block B1305 in
The coordinate-conversion module 1703C includes instructions that, when executed, or circuits that, when activated, cause the measurement device 1700 to convert Cartesian coordinates to spherical coordinates, for example as performed in blocks B1415-B1425 in
The integration module 1703D includes instructions that, when executed, or circuits that, when activated, cause the measurement device 1700 to generate combined surface coordinates based on a set of surface normals or a set of spherical coordinates, for example as performed in block B1310 in
The rescaling module 1703E includes instructions that, when executed, or circuits that, when activated, cause the measurement device 1700 to generate a revised scale factor and generate revised surface coordinates, for example as performed in blocks B1315 and B1320 in
The communication module 1703F includes instructions that, when executed, or circuits that, when activated, cause the measurement device 1700 to communicate with one or more other devices, for example the image-capturing device 1710, the two or more light-modulating devices 1720, and the light source 1725.
The image-capturing device 1710 includes one or more processors 1711, one or more I/O components 1712, storage 1713, a communication module 1713A, and an image-capturing assembly 1714. The image-capturing assembly 1714 includes one or more image sensors, one or more lenses, and an aperture. The communication module 1713A includes instructions that, when executed, or circuits that, when activated, cause the image-capturing device 1710 to receive a request for an image from a requesting device, retrieve a requested image from the storage 1713, or send a retrieved image to the requesting device (e.g., the measurement device 1700).
At least some of the above-described devices, systems, and methods can be implemented, at least in part, by providing one or more computer-readable media that contain computer-executable instructions for realizing the above-described operations to one or more computing devices that are configured to read and execute the computer-executable instructions. The systems or devices perform the operations of the above-described embodiments when executing the computer-executable instructions. Also, an operating system on the one or more systems or devices may implement at least some of the operations of the above-described embodiments.
Furthermore, some embodiments use one or more functional units to implement the above-described devices, systems, and methods. The functional units may be implemented in only hardware (e.g., customized circuitry) or in a combination of software and hardware (e.g., a microprocessor that executes software).
The scope of the claims is not limited to the above-described embodiments and includes various modifications and equivalent arrangements. Also, as used herein, the conjunction “or” generally refers to an inclusive “or,” though “or” may refer to an exclusive “or” if expressly indicated or if the context indicates that the “or” must be an exclusive “or.”
This application claims the benefit of U.S. Application No. 62/433,088, which was filed on Dec. 12, 2016, and the benefit of U.S. Application No. 62/450,888, which was filed on Jan. 26, 2017.