IMAGE PROCESSING WITH ITERATIVE CLOSEST POINT (ICP) TECHNIQUE

Information

  • Patent Application
  • 20190188872
  • Publication Number
    20190188872
  • Date Filed
    December 18, 2017
  • Date Published
    June 20, 2019
Abstract
In various embodiments of an image processing method and apparatus, first and second point clouds representing respective images of a scene/object from different viewpoints are obtained. Extracted feature points from the first point cloud are matched with extracted feature points from the second point cloud, using depth-based weighting, as part of an ICP initiation process. The first and second point clouds are then further ICP processed using results of the initiation process to generate at least one coordinate-transformed point cloud.
Description
TECHNICAL FIELD

The present disclosure relates generally to image processing, and more particularly to image processing of images obtained by a depth camera using an iterative closest point (ICP) technique.


DISCUSSION OF THE RELATED ART

Iterative Closest Point (ICP) is an algorithm employed to minimize the difference between two clouds of points. ICP is often used to reconstruct 2D or 3D surfaces from different scans in medical imaging, 3D rendering of real world objects, localizing robots, and so forth.


In ICP-based image processing, one point cloud, often called the reference cloud, is kept fixed, while the other point cloud undergoes a coordinate transformation for a best match to the reference cloud. The classical ICP algorithm can be summarized as follows:


Given first and second point clouds P and P′, which may represent respective images of an object or scene taken from a camera from different vantage points:


For each point $p_i \in P$:


First, find the closest point $q_i = p'_{c(i)} \in P'$.


Next, find the rotation $R$ and translation $t$ that minimize $\sum_i \left[ R p_i + t - q_i \right]^2$.


Then, update the positions of all the points $p$ of $P$ according to $p_{\text{new}} = R p + t$.


Finally, reiterate until convergence.


Accordingly, ICP iteratively revises a transformation, composed of a rotation and a translation, to minimize the distances between corresponding points of the first and second point clouds. The reference (first) point cloud and the coordinate-transformed second point cloud thereby become substantially aligned.
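As a brief illustration only, the classical procedure above can be sketched in a few lines of NumPy. The code below is a minimal, generic classical ICP with brute-force nearest-neighbor search and a standard Kabsch/SVD rigid fit; it is not the enhanced method described later, and the iteration count and convergence threshold are illustrative assumptions.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping paired points P onto Q
    (Kabsch/SVD solution). P and Q are (N, 3) arrays of corresponding points."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t

def classical_icp(P, P_prime, max_iters=50, tol=1e-6):
    """Classical ICP: iteratively align point cloud P to the fixed reference P_prime."""
    P = P.copy()
    prev_err = np.inf
    for _ in range(max_iters):
        # 1. For each p_i in P, find the closest q_i in P_prime (brute force here).
        d2 = ((P[:, None, :] - P_prime[None, :, :]) ** 2).sum(axis=-1)
        Q = P_prime[d2.argmin(axis=1)]
        # 2. Find R and t minimizing sum_i [R p_i + t - q_i]^2.
        R, t = best_rigid_transform(P, Q)
        # 3. Update all points: p_new = R p + t.
        P = P @ R.T + t
        # 4. Reiterate until the mean residual distance converges.
        err = np.sqrt(((P - Q) ** 2).sum(axis=-1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return P
```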


While ICP is a valuable tool in various image processing applications, conventional ICP has been found to be imprecise under certain conditions.


SUMMARY

An image processing method and apparatus may employ depth-based weighting in an iterative closest point (ICP) process to generate a coordinate-transformed point cloud.


In various embodiments of a method and apparatus according to the technology, first and second point clouds representing respective images of a scene/object from different viewpoints are obtained. Extracted feature points from the first point cloud are matched with extracted feature points from the second point cloud. An initial rotation and translation of the first point cloud with respect to the second point cloud may be determined, to initially align the first and second point clouds, using depth-based weighting of the feature points. ICP processing may then be performed using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.


Each of the first and second point clouds may be a point cloud obtained from a stereo camera.


The ICP processing may involve performing a depth weighted based alignment of corresponding points of the first and second point clouds, where points at depths closer to a viewpoint are weighted higher than points further from the viewpoint.


Depth regularization may be performed on each of the first and second point clouds prior to the matching of the feature points.


In an aspect, an electronic device includes memory and at least one processor coupled to the memory. The at least one processor executes instructions to: perform depth regularization on each of first and second point clouds; determine an initial rotation and translation of the first point cloud with respect to the second point cloud to initially align the first and second point clouds; and perform iterative closest point (ICP) processing using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.


In another aspect, a system includes: at least one camera configured to capture images of a scene from each of first and second viewpoints and obtain, respectively, first and second point clouds corresponding to the scene; and image processing apparatus including memory and at least one processor coupled to the memory. The at least one processor may execute instructions to: match feature points extracted from a first point cloud with feature points extracted from a second point cloud; determine an initial rotation and translation of the first point cloud with respect to the second point cloud to initially align the first and second point clouds, using depth-based weighting of the feature points; and perform iterative closest point (ICP) processing using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.


In still another aspect, a non-transitory computer-readable recording medium stores instructions that, when executed by at least one processor, implement an image processing method. The method may include: obtaining first and second point clouds representing respective images of a scene from different viewpoints; matching feature points extracted from the first point cloud with feature points extracted from the second point cloud; determining an initial rotation and translation of the first point cloud with respect to the second point cloud to initially align the first and second point clouds, using depth-based weighting of the feature points; and performing iterative closest point (ICP) processing using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects and features of the present technology will become more apparent from the following detailed description, taken in conjunction with the accompanying drawings in which like reference numerals indicate like elements or features, wherein:



FIG. 1 illustrates an example system according to the present technology;



FIG. 2 is a flow chart illustrating an example method for processing image point clouds in accordance with an embodiment of the present technology;



FIG. 3A depicts an experimental point cloud;



FIG. 3B is an experimental disparity map of a point cloud without regularization;



FIG. 3C is an experimental disparity map of a point cloud with regularization;



FIG. 4 is a flow chart of an example ICP processing method according to an embodiment of the present technology; and



FIG. 5 is a functional block diagram of an example image processing apparatus 10 according to the present technology.





DETAILED DESCRIPTION

The following description, with reference to the accompanying drawings, is provided to assist in a comprehensive understanding of certain exemplary embodiments of the new technology disclosed herein for illustrative purposes. The description includes various specific details to assist a person of ordinary skill in the art in understanding the technology, but these details are to be regarded as merely illustrative. For the purposes of simplicity and clarity, descriptions of well-known functions and constructions may be omitted when their inclusion might obscure appreciation of the technology by a person of ordinary skill in the art.


The present technology may use an enhanced ICP-based processing approach to find rotation and translation of a camera between positional states at which first and second image frames are captured. The framework may be adapted to stereo based depth from a dual camera sensor. The depth information may be synchronized with gray information that can be coupled with the ICP process.


The present technology may use a method to smooth a depth map and to weight every 3D point contribution to an ICP process with parameters from a stereo system. Gray level information may be used to improve classical ICP.


The present technology may build a 3D point cloud and compute a camera motion from a pair of stereo camera sensors that each produce depth information.


Hereinafter, for ease of explanation, processing operations of the present technology will be described as occurring at the pixel level. A pixel, however, is but one example of an image element. Thus, the below-described processing may alternatively be performed using larger image element units, such as macroblocks, rather than pixels. The use of larger image elements may reduce processing complexity, but at the expense of accuracy/resolution.



FIG. 1 is an example system 5 according to the present technology. System 5 includes a first camera C1 located at a first position x1, y1, z1 in three-dimensional space. First camera C1 may be a depth camera, which is a camera capable of measuring the depth of image elements in a scene as well as capturing the scene's image information. Preferably, first camera C1 is a stereo camera or a pair of cameras functioning as a stereo camera. In the alternative, first camera C1 may measure depth using a technique such as infrared or sonar. First camera C1 is oriented so that its field of view surrounds the camera's optical axis A1. The field of view has a vertex at a viewpoint V1, which may be a point on axis A1 and considered to coincide with the first position.


First camera C1 may capture a first image of a scene including one or more objects O. The first image may be represented by a first point cloud, which is provided to an image processing apparatus 10. A point cloud is generally defined as a set of data points in some coordinate system. In embodiments of the present technology, each point of the point cloud represents an image element such as a pixel, and may be characterized with a gray level or a luminance/color value, and also a depth value, to thereby form a 3D point cloud. Object O has feature points such as fi and fj. First camera C1 may obtain a depth map of distances between the vertex V1 and the surface points of the scene objects.


If first camera C1 is a stereo camera, it captures both a left image and a right image of the scene. The first point cloud may be a point cloud representing the left image, the right image, or a composite image of the left and right images. With the left and right images, the camera may obtain depth information of objects in the scene using a disparity map based on differences in positions of common points in the left and right images. Such depth information may be relative to the coordinates of the viewpoint V1.


A second depth camera C2, which may also be a stereo camera, may be located at a second position x2, y2, z2 corresponding to a second viewpoint V2, and may capture a second image of the same scene while being oriented along a second optical axis A2. Alternatively, instead of a second camera C2, the first camera C1 may be moved to the second position x2, y2, z2 and capture the second image. (Hereafter, camera C2 is understood to refer to the second camera or to the first camera having moved to the second position.) Camera C2 may likewise capture depth information using a disparity map based on left and right images. In either case, a second point cloud representing the second image is provided to image processing apparatus 10. The second point cloud may likewise be a point cloud of a left image, a right image, or a composite image of the left and right images taken by camera C2. Additional images of the scene from different perspectives may be captured by the camera(s), and additional point clouds obtained, following movement to different viewpoints. Although shown separately, first camera C1, second camera C2, image processing apparatus 10, user interface 12 and display device 14 may all be part of a common electronic device such as a portable communication device or a medical imaging apparatus.


Image processing apparatus 10 may utilize the enhanced ICP-based processing according to the techniques described herein to generate at least one coordinate-transformed point cloud. For instance, the second point cloud may be coordinate-transformed based on the ICP processing, so that it may be precisely aligned with the first point cloud. Since camera C2 is situated at the second position, it is translated by a distance t relative to the first position, where t is a vector distance between V2 and V1, or (x2-x1), (y2-y1), (z2-z1). Note that the coordinates of the viewpoints V1 and V2 may not be initially known, so they are just initially approximated. Further, at the time of image capture, the second camera C2 may have been rotated by a rotation R with respect to the optical axis A1 of the first camera C1, i.e., the optical axis A2 of the second camera C2's field of view is rotated with respect to the axis A1. Hence, the rotation R may be a vector rotation having components along three orthogonal axes. The coordinate-transformed point cloud may be transformed by the ICP processing in terms of R and t, with six degrees of freedom—three for the rotation and three for the translation. For instance, the ICP processing may result in the second point cloud having some or all of its points shifted based on computed values for R and t. Image processing apparatus 10 may output the transformed point cloud to a database and/or to a display 14.


In addition, image processing apparatus 10 may build a database of transformed point clouds, and may also generate 3D rendered images based on the transformed point clouds. For instance, once at least the first and second point cloud images have been captured and aligned using ICP-based processing, image processing apparatus 10 may perform panoramic or other combinatorial image processing to build a database of 3D rendered composite images representing the scene/object(s) O from more than two viewpoints. A user interface 12 may thereafter allow a user to view a rendered image from a selected viewpoint on display 14.


Methods and apparatus in accordance with the present technology described below have been conceived with the aim of alleviating one or more shortcomings of conventional ICP based image processing. For example, ICP needs to find correspondences between points, and is very sensitive to the initialization state. Conventional ICP may generate inaccurate transformations if such initialization is imprecise.


Further, conventional ICP does not converge if the two frames are generated from viewpoints that are too far away from one another. Conventional ICP also may not take occlusion into consideration and does not handle missing points or miscorrespondences very well.


Additionally, in a stereo system, the foreground depth is more accurate than background depth. Thus, with the use of conventional ICP, the background can introduce noise into the system. Furthermore, with previous ICP approaches, correspondences are found via a closest neighbor computation, where two planes with a sliding translation can lead to bad matches.


The present technology may obviate some or all of the above problems via one or more of: (i) improved initialization via feature point matching; (ii) weighting the contribution of each pixel as a function of its depth; (iii) ignoring points located on a plane or whose closest point is too far away; and (iv) incorporating gray level information in the matching algorithm. Specific embodiments to achieve one or more of these ends will now be described.



FIG. 2 is a flow chart illustrating an example computer-implemented image processing method 100 that processes image point clouds in accordance with an embodiment. The method involves an enhanced ICP process that may be used for computing a rotation and translation of the camera(s) between first and second image frames captured from different viewpoints. The method first obtains 102 first and second point clouds P and P′ representing respective first and second image frames. These point clouds may be obtained from one or more stereo cameras or image sensors imaging a common object(s) or scene from different viewpoints, e.g., as in the environment illustrated in FIG. 1. In other cases, the point clouds may be obtained by other suitable means, such as over a network from a remote source.


Depth regularization may then be performed 104 on each of the first and second point clouds. This operation may serve to remove noise and implement an edge preserving smoothing of an input depth map associated with a point cloud. For instance, in the case of stereo cameras, stereo systems provide a disparity map between two input pictures. The real depth is inversely proportional to the disparity, so that the farther away the imaged object is from the camera, the smaller is the disparity. Thus, the disparity map provides a depth map for the points in each image. Disparity is discrete, and would typically lead to stratification of the obtained depth map if depth regularization were not performed.
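As a brief illustration of the inverse relation between disparity and depth noted above, the following sketch converts a stereo disparity map to a depth map; the focal length and baseline values in the usage comment are illustrative assumptions for a generic stereo rig, not parameters from the disclosure.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m, eps=1e-6):
    """Depth is inversely proportional to disparity: z = f * B / d.
    Larger disparity means a closer object; small disparities map to large depths."""
    return (focal_length_px * baseline_m) / np.maximum(disparity_px, eps)

# Example usage with assumed calibration values (700 px focal length, 10 cm baseline):
# depth_map = disparity_to_depth(disparity_map, focal_length_px=700.0, baseline_m=0.10)
```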


The following optimization problem may be solved in accordance with the present technology to carry out the depth regularization and generate smooth edge preserving depth:





$$\min_{\tilde{d}} \int \Big( \big|\tilde{d}(x) - d(x)\big| + \mu\,\big|\nabla \tilde{d}(x)\big|\, G\big(|\nabla d(x)|\big) \Big)\, dx. \qquad \text{(eqn. 1)}$$


In eqn. 1, $x$ may represent a pixel coordinate or index of an input depth map, where the integration is performed over the range ($dx$) of $x$ in a point cloud or image frame. $d(x)$ represents a depth value at a position $x$ of the input depth map, and $\tilde{d}(x)$ represents a modified (i.e., regularized) depth value at the position $x$. After eqn. 1 is solved, the modified depth values $\tilde{d}(x)$ over the range of $x$ may be substituted for $d(x)$ in the subsequent ICP processing. The norm $|\tilde{d}(x) - d(x)|$ is thus indicative of the closeness of the modified depth values to the original depth values.


Eqn. 1 is solved when a set of $\tilde{d}(x)$ over the range of $x$ (or an expression for $\tilde{d}(x)$) is obtained that generates a minimum result for the integration over $dx$. The resulting set of $\tilde{d}(x)$ may provide a depth map which is less noisy than the original one, but with sharp edges preserved between objects at different depths. In eqn. 1, the symbol $\nabla$ represents the gradient of its argument, and $|\nabla d(x)|$ represents the change in depth at a pixel coordinate $x$ relative to the depth at one or more adjacent pixel coordinates. For a high-noise depth map, $|\nabla d(x)|$ is on average higher than for a low-noise depth map, because the noise artificially changes the depth values throughout the point cloud; the integral of $|\nabla d(x)|$ over the range of $x$ is thus higher for the high-noise depth map. The integral of $|\nabla \tilde{d}(x)|$ is minimized over the range of $x$ in the solution of eqn. 1, so as to minimize the total overall pixel-to-pixel depth variation (e.g., in a 2D image frame). At the same time, the norm $|\tilde{d}(x) - d(x)|$ integrated over $dx$ is minimized as part of the solution to eqn. 1. With this approach, when a large difference in depth exists between adjacent pixel coordinates, the difference is recognized as a boundary between a near object and a far object, rather than as noise. Thus, large changes in depth are not made under this condition, so as to preserve strong edges between the objects. On the other hand, noise may be recognized (and reduced through processing based on the algorithm) when the depth change between adjacent pixels is relatively small.


The symbol $\mu$ denotes a constant, and $G(|\nabla d(x)|)$ is preferably a decreasing function of its input (the gradient of the depth at $x$), e.g., a function inversely proportional to the gradient $|\nabla d(x)|$. As an example, $G(|\nabla d(x)|)$ may be set as $1/|\nabla d(x)|$. Thus, if the gradient $|\nabla d(x)|$ is relatively large, $G(|\nabla d(x)|)$ is relatively small, and vice versa. With these considerations, eqn. 1 may be solved to thereby transform the input depth map into a regularized depth map comprised of a set of regularized depth values $\tilde{d}(x)$ over the range of $x$. The optimization problem may be solved with processing using a Euler-Lagrange algorithm approach or an equivalent.
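As one hedged numerical sketch of this step, the routine below runs plain gradient descent on a smoothed surrogate of eqn. 1, with $G$ set to an inverse-gradient function. The disclosure suggests an Euler-Lagrange approach; the step size, iteration count, and smoothing constants here are illustrative assumptions, not the patented solver.

```python
import numpy as np

def regularize_depth(d, mu=0.1, step=0.1, iters=200, eps=1e-3):
    """Edge-preserving depth regularization in the spirit of eqn. 1:
    minimize |d_tilde - d| + mu * |grad d_tilde| * G(|grad d|),
    with G chosen here as 1 / (|grad d| + eps) so strong depth edges are smoothed less.
    Both L1 terms are smoothed with a small eps to allow plain gradient descent."""
    gy0, gx0 = np.gradient(d)
    G = 1.0 / (np.sqrt(gx0**2 + gy0**2) + eps)   # small weight at strong edges of the input map
    d_tilde = d.astype(float).copy()
    for _ in range(iters):
        gy, gx = np.gradient(d_tilde)
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # Descent direction of the smoothness term: div(G * grad d_tilde / |grad d_tilde|)
        div = np.gradient(G * gx / mag, axis=1) + np.gradient(G * gy / mag, axis=0)
        # Descent direction of the (smoothed) L1 data-fidelity term
        fidelity = (d_tilde - d) / np.sqrt((d_tilde - d) ** 2 + eps)
        d_tilde -= step * (fidelity - mu * div)
    return d_tilde
```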


It is also possible to use another layer of filtering in conjunction with the regularization performed according to eqn. 1. For instance, a Kalman filtering process may be additionally used.


By way of example, FIG. 3A shows a simulated point cloud image to exemplify an image captured with a stereo camera. An object 30 is assumed to be relatively close to the camera viewpoint, i.e., situated at a shallow depth, whereas scene elements 32 and 34 are assumed to be relatively far. FIG. 3B depicts an experimental disparity map of the point cloud image of FIG. 3A without regularization. Dark areas of the disparity map indicate the presence of objects that are relatively close to the viewpoint, while bright areas indicate objects relatively far from the viewpoint. It is seen that this map is significantly stratified. On the other hand, after smoothing with processing based on eqn. 1 above, a smoother disparity map as shown in FIG. 3C is generated. Exemplary relative depth levels of scene elements 30, 32 and 34 are illustrated in each figure.


Referring still to FIG. 2, a feature point extraction process is performed 106 following the depth regularization. As noted earlier, a conventional ICP algorithm is very sensitive to the initial conditions, since the correspondences are found by taking the closest points; this can lead to mismatches and to a local-minimum trap in the convergence. Accordingly, in the present embodiment, feature point extraction and matching is performed during an initialization process to improve initial alignment of the point clouds. The feature point extraction entails finding matching keypoints between the first and second point clouds (i.e., between two frames). Feature point matching between two frames may be done with a standard method, such as scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) keypoint detection and matching, as sketched below.
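The sketch below illustrates this step with OpenCV's SIFT detector and a brute-force matcher applied to the gray-level frames. Availability of `cv2.SIFT_create` depends on the OpenCV build, and the cap of 20 matches echoes the 10-20 feature points mentioned later only as an assumed setting.

```python
import cv2
import numpy as np

def match_feature_points(img1, img2, max_matches=20):
    """Detect and match SIFT keypoints between two gray-level frames, returning
    the pixel coordinates of matched feature points in each frame."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2
```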


Following feature points extraction, a guided ICP process 108 is performed to derive a rotation and translation of the second point cloud with respect to the first. An example process is described in reference to FIG. 4. The guided ICP process 108 may generally involve determining rotation and translation differences between the matching feature points using depth-based weighting, as part of an iterative closest point (ICP) initiation process. Thereafter, further ICP processing of the first and second point clouds is performed based on results of the initiation process, to generate at least one coordinate-transformed point cloud, i.e., an adjusted point cloud in which at least some of the points are moved. The adjusted point cloud and an updated camera position may be output at 110.



FIG. 4 is a flow chart of an example computer-implemented method that implements processing operations in the feature point extraction 106 and the ICP process 108 of FIG. 2. The method is based in part on the recognition that, considering the depth of image elements captured in a stereo camera system, the precision of the depth is inversely proportional to the depth itself. Hence, points located relatively far away from the camera tend to be noisier and should have a reduced influence on the criterion to be minimized. Therefore, the contribution of the points may be scaled as a function of the depth, whereby a weighting scheme based on depth is provided.


Accordingly, in operation 202, the method finds initial values for rotation $R$ and translation $t$ of the second point cloud with respect to the first point cloud (or vice versa) that minimize distances between matching feature points extracted from the first and second point clouds, so as to initially align the first and second point clouds and initialize the overall ICP process. The rotation and translation between the point clouds are preferably computed using a depth-based weighting function, where feature points closer to the viewpoint of the associated point cloud (having shallower depths) are weighted higher than those further away. More specifically, in an original image $I$ from which a first point cloud $P$ is extracted, the method may first find feature points $f_i$ using a SIFT key-point detector. The same may be done on an image $I'$ from which a second point cloud $P'$ is extracted, to create feature points $f'_i$. The feature points may be denoted in the image in $P$ by $p^f_i$, and in the image in $P'$ by $p'^f_i$.


Next, the process may match $p^f_i$ and $p'^f_i$ such that $p^f_i$ corresponds to the closest matching feature point in $P'$. The matching feature points are hereafter denoted $p^f_{m(i)}$ and $p'^f_{m(i)}$. The feature point matching may take gray levels or pixel colors of the points into account. The number of matching feature points, which are subsequently used in the ICP initialization for initial alignment of the point clouds, is typically several orders of magnitude smaller than the number of points in each point cloud. For instance, in an example, the initialization may use only 10-20 feature points, whereas a point cloud may contain hundreds or thousands of points.


Thereafter, the process may find an initial rotation $R$ and initial translation $t$ that minimize distances between the matching feature points using a depth-based weighting function. For instance, initial values of $R$ and $t$ may be found to minimize an average or median distance between corresponding matching feature points of the first and second point clouds. To this end, an optimization routine to compute optimum values of $R$ and $t$ may be run to find a minimum of the following expression:





$$\sum_i w\big(d^f_i\big)\,\big[ R\, p^f_{m(i)} + t - p'^f_{m(i)} \big]^2 \qquad (2)$$


where $w(\cdot)$ is a predetermined decreasing function, and $d^f_i$ is the depth of the matching feature point $p^f_{m(i)}$, preferably the regularized depth $\tilde{d}^f_i$ for the feature point obtained through the regularization process described earlier.


Expression (2) may be understood as follows: assume $p^f_{m(i)}$ is a first point located at coordinates referenced to an origin of the first point cloud, where the origin may be the viewpoint (V1 or V2) of the point cloud. When a rotation is applied to the first point, which is expressed as $R\, p^f_{m(i)}$, a vector beginning at the origin and ending at the first point $p^f_{m(i)}$ is rotated by $R$. This results in the first point being shifted to another point in space coinciding with the end point of the rotated vector, and defined by another set of coordinates. The shifted point is then shifted again by the translation $t$ (so that the first point can be considered "twice shifted"). The position of the matching feature point $p'^f_{m(i)}$ of the second point cloud is assumed, for the sake of comparison, to also have coordinates referenced to the same reference origin (e.g., the viewpoint). The distance between the location of the twice-shifted point and that of the matching feature point $p'^f_{m(i)}$ of the second point cloud is then determined.


The expression $R\, p^f_{m(i)} + t - p'^f_{m(i)}$ therefore represents the distance between: (i) the location of the first feature point of the first point cloud, after being twice shifted via the applied rotation and translation; and (ii) the location of the matching feature point of the second point cloud. This distance is squared, and the square is multiplied by the depth-based weight $w(d^f_i)$. The calculation is repeated for every matching feature point pair, and the results are summed. An optimization process finds optimum initial values of $R$ and $t$ that arrive at a minimum for the summed results. The initial rotation $R$ and translation $t$ may then be applied to all the points of the first or second point cloud to set up a preliminary alignment of the two point clouds, as sketched below.
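A minimal sketch of this depth-weighted initialization follows: a weighted Kabsch/SVD fit minimizes the weighted sum of squared distances between matched feature points, with a weight that decreases with depth. The specific weight form 1/(1+d) and the occlusion down-weighting factor are assumptions; the disclosure only requires $w(\cdot)$ to be decreasing.

```python
import numpy as np

def depth_weight(depth, occluded=None):
    """Decreasing weight w(d): shallower (closer) points count more.
    The 1/(1+d) form and the 0.5 occlusion factor are assumed, not specified."""
    w = 1.0 / (1.0 + depth)
    if occluded is not None:
        w = np.where(occluded, 0.5 * w, w)
    return w

def weighted_rigid_transform(src, dst, weights):
    """Weighted least-squares R, t minimizing sum_i w_i * [R src_i + t - dst_i]^2
    (weighted Kabsch/SVD). src, dst: (N, 3) matched 3D points; weights: (N,)."""
    w = weights / weights.sum()
    src_mean = (w[:, None] * src).sum(axis=0)
    dst_mean = (w[:, None] * dst).sum(axis=0)
    H = (src - src_mean).T @ (w[:, None] * (dst - dst_mean))
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Initial alignment from matched 3D feature points of P and P' and their (regularized) depths:
# R0, t0 = weighted_rigid_transform(feat_P, feat_P_prime, depth_weight(feat_depths))
```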


Thus, in accordance with an embodiment of the present technology, the ICP process is initialized in a manner that uses depth weighting of the matching feature points, thereby initially aligning (or approximately aligning) the first point cloud with the second point cloud. Feature points closer to the image sensor contribute to the initial alignment with higher weights than those further away.


It is noted here that the weighting in expression (2) may also take occlusions into account. Feature points that are known to be occluded by other image elements in at least one of the images $I$ or $I'$ may be assigned lower weights in comparison to non-occluded points located at the same depths. In other words, the weighting function $w(\cdot)$ may take a smaller value in the case of an occlusion.


Following the initialization, at operation 204, for each point $p_i$ of the first point cloud, the closest point $q_i$ of the second point cloud is found. That is, for each point $p_i \in P$, the closest point $q_i = p'_{c(i)} \in P'$ is determined.


The next operation 206 may find, starting from the initial alignment at 202, an incremental change in rotation $R$ and translation $t$ between the first and second point clouds that attains a best depth-weighted alignment of the point clouds. That is, a further rotation $R$ and translation $t$ may be found which, when applied to all the points $p_i$ or $q_i$ of the first or second point cloud, substantially aligns the two point clouds by minimizing an average or median distance between the respective closest points $p_i$, $q_i$. More specifically, this operation may find the rotation $R$ and translation $t$, in a depth-weighted manner, that minimize the following expression:





$$\sum_i w\big(d_i,\, |p_i - q_i|\big)\,\big[ R\, p_i + t - q_i \big]^2 \qquad (3)$$


where $w(x, y)$ is a decreasing function of $(x, y)$ (with $x$ representing $d_i$ and $y$ representing $|p_i - q_i|$), $d_i$ is preferably a regularized depth of $p_i$ or $q_i$ (or an average regularized depth of $p_i$ and $q_i$), and $R\, p_i$ represents a rotation applied to point $p_i$, so as to rotate a vector beginning at a reference origin and ending at point $p_i$, similar to that discussed above for expression (2). The translation $t$ may be applied after the rotation to yield a "twice shifted" point $p_i$. The difference between the twice-shifted point $p_i$ and the closest point $q_i$ is then squared, and the result is multiplied by the weighting function $w(x, y)$.


In expression (3), since $w(x, y)$ is a decreasing function, $w(d_i, |p_i - q_i|)$ applies a weight that is (i) inversely correlated with the depth of $p_i$ and/or $q_i$; and (ii) positively correlated with how well $p_i$ and $q_i$ are already aligned. For example, if $p_i$ and $q_i$ are aligned prior to an additional position shift by a further application of $R$ and $t$, the norm $|p_i - q_i|$ is zero and $(x, y)$ is a relatively short vector, so the weighting factor is higher than for the case of $p_i$ and $q_i$ being misaligned prior to the additional shift.


In an embodiment, $w(x, y)$ may be 0 if $p_i$ is located on a planar surface. (That is, points determined to be located on a planar surface may be given zero weights and thus ignored.) In addition, points that are known to represent occlusions may be assigned lower weights than non-occluded points at the same depth.


When the additional rotation and translation are found at operation 206 based on the optimization routine that minimizes expression (3), the positions of all the points of the first (or second) point cloud are then updated (208) by applying the rotation $R$ and translation $t$ to the positions of the points. In the case of the first point cloud being updated relative to the second point cloud, this may be expressed as updating the position of all the points $p$ of $P$ according to $p_{\text{new}} = R p + t$, where $p_{\text{new}}$ denotes a coordinate-transformed position for a given point $p$.


Lastly, the process is reiterated 210 until convergence by repeating operations 204, 206 and 208 until a predetermined metric for convergence is satisfied. The result may be an optimum coordinate transformation of the points $p$ of one or both of the point clouds. Thus, the points $p_{\text{new}}$ are iteratively adjusted until their locations are optimized for an alignment between the first and second point clouds.
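Putting the pieces together, the sketch below runs a guided ICP loop in the spirit of expression (3), starting from the feature-based initialization and reusing the `weighted_rigid_transform` helper from the earlier sketch. The weight form, convergence test, and planar-point handling are illustrative assumptions rather than the claimed method.

```python
import numpy as np

def depth_weighted_icp(P, P_prime, depths, R0, t0, max_iters=30, tol=1e-6,
                       planar_mask=None):
    """Guided ICP sketch: alternate closest-point search (operation 204) with a
    weighted rigid fit (operation 206), weighting each pair by a function that
    decreases with depth and with the current point-to-point distance."""
    P = P @ R0.T + t0                                    # preliminary alignment (operation 202)
    prev_err = np.inf
    for _ in range(max_iters):
        # Operation 204: for each p_i, find the closest q_i in P'.
        d2 = ((P[:, None, :] - P_prime[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        Q = P_prime[idx]
        dist = np.sqrt(d2[np.arange(len(P)), idx])
        # Decreasing weight in depth and in |p_i - q_i| (assumed functional form).
        w = 1.0 / ((1.0 + depths) * (1.0 + dist))
        if planar_mask is not None:
            w = np.where(planar_mask, 0.0, w)            # ignore points on a planar surface
        # Operation 206: incremental R, t minimizing the weighted sum of squares.
        R, t = weighted_rigid_transform(P, Q, w)
        # Operation 208: update all points, p_new = R p + t.
        P = P @ R.T + t
        # Operation 210: reiterate until the weighted mean residual converges.
        err = (w * dist).sum() / (w.sum() + 1e-12)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return P
```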


Accordingly, various embodiments of the present technology adapt ICP to a stereo system using a depth edge-preserving regularization, a smart weighting, and the incorporation of gray level information into the framework.


The processing of method 100 may be performed by at least one processor of image processing apparatus 10. The at least one processor may be dedicated hardware circuitry, or, a general purpose processor that is converted to a special purpose processor by executing program instructions loaded from memory.



FIG. 5 is a functional block diagram of an example image processing apparatus 10 according to the present technology. Apparatus 10 includes one or more processors 50 that perform the processing in the above-described methods. Processor(s) 50 may include a depth regularization engine 54, a feature point extraction and matching engine 56 and an ICP processing engine 58. Input point clouds P and P′ from one or more stereo cameras or other sources are received by an input interface 52 and provided to depth regularization engine 54. There, the depth maps are smoothed, and the point clouds with smoothed depth maps are provided to feature point extraction and matching engine 56, which may perform the above-described operations 106 and 202. ICP processing engine 58 may perform the guided ICP process 108, involving operations 204, 206, 208 and 210. A memory 60 may be used by one or more of the engines 54, 56 and 58 for necessary storage. Memory 60 may also store program instructions read and executed by processor(s) 50 to carry out its operations. An output interface 62 outputs coordinate-transformed point clouds for displaying and/or further processing (e.g., 3D rendering from selected viewpoints as controlled by a user interface 12 seen in FIG. 1). A database 64 may be used to store output point clouds, 3D rendered images, etc.


Image processing apparatus 10 may be included as part of an electronic device having other functionality (as mentioned earlier in connection with FIG. 1). Some examples of the electronic device include but are not limited to a camera, a medical imaging apparatus, a portable electronic device, a personal computer, a notebook computer, a smart phone, a tablet, a smart TV, a set-top box, and a robot. Any of these examples may include a camera providing the point clouds, where the camera may be a depth camera such as a stereo camera. A portable electronic device may be sized and configured to be easily carried in a typical user's single hand.


In some cases, it may be beneficial to perform ICP processing by utilizing the above-described depth regularization even without the matching and depth-based weighting of the feature points and/or the subsequently processed depth-based weighting of points of the point clouds. In other cases, it may be beneficial to perform ICP processing by utilizing the depth-based weighting of the feature points even without the depth regularization.


Exemplary embodiments of the present technology have been described herein with reference to signal arrows, block diagrams and algorithmic expressions. Each block of the block diagrams, and combinations of blocks in the block diagrams, and operations according to the algorithmic expressions can be implemented by hardware accompanied by computer program instructions. Such computer program instructions may be stored in a non-transitory computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block/schematic diagram.


The term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a central processing unit (CPU) and/or other processing circuitry (e.g., digital signal processor (DSP), microprocessor, etc.). Moreover, a “processor” includes computational hardware and may refer to a multi-core processor that contains multiple processing cores in a computing device. Various elements associated with a processing device may be shared by other processing devices.


The above-described methods according to the present technology can be implemented in hardware, firmware or via the use of software or computer code that can be stored in a recording medium such as a CD ROM, RAM, a floppy disk, a hard disk, or a magneto-optical disk or computer code downloaded over a network originally stored on a remote recording medium or a non-transitory machine readable medium and to be stored on a local recording medium, so that the methods described herein can be rendered using such software that is stored on the recording medium using a general purpose computer, or a special processor or in programmable or dedicated hardware, such as an ASIC or FPGA. As would be understood in the art, the computer, the processor, microprocessor controller or the programmable hardware include memory components, e.g., RAM, ROM, Flash, etc. that may store or receive software or computer code that when accessed and executed by the computer, processor or hardware implement the processing methods described herein. In addition, it would be recognized that when a general purpose computer accesses code for implementing the processing shown herein, the execution of the code transforms the general purpose computer into a special purpose computer for executing the processing described herein.


While the technology described herein has been particularly shown and described with reference to example embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the claimed subject matter as defined by the following claims and their equivalents.

Claims
  • 1. An image processing method comprising: executing, by at least one processor, instructions read from a memory for: obtaining first and second point clouds representing respective images of a scene from different viewpoints; matching feature points extracted from the first point cloud with feature points extracted from the second point cloud; determining an initial rotation and translation of the first point cloud with respect to the second point cloud to initially align the first and second point clouds, using depth-based weighting of the feature points; and performing iterative closest point (ICP) processing using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.
  • 2. The image processing method of claim 1, wherein each of the first and second point clouds is a point cloud obtained from a stereo camera.
  • 3. The image processing method of claim 1, wherein performing the ICP processing comprises performing a depth weighted based alignment of corresponding points of the first and second point clouds, wherein points at depths closer to a viewpoint are weighted higher than points further from the viewpoint.
  • 4. The image processing method of claim 1, further comprising performing depth regularization on each of the first and second point clouds prior to the matching of the feature points.
  • 5. The image processing method of claim 4, wherein the depth regularization is performed by computing a regularized depth $\tilde{d}(x)$ over a range of image element positions $x$ that satisfies the following expression: $\min_{\tilde{d}} \int \big( |\tilde{d}(x) - d(x)| + \mu\,|\nabla \tilde{d}(x)|\, G(|\nabla d(x)|) \big)\, dx$, where $d(x)$ is a depth value of an image element at position $x$ of an input depth map, $\nabla$ represents a gradient, $\mu$ is a constant, and $G(|\nabla d(x)|)$ is a decreasing function of $|\nabla d(x)|$.
  • 6. (canceled)
  • 7. The image processing method of claim 1, further comprising performing depth regularization on each of the first and second point clouds prior to the matching of the feature points, wherein: each of the first and second point clouds is a point cloud obtained from a stereo camera; and performing the ICP processing comprises performing a depth weighted based alignment of corresponding points of the first and second point clouds, wherein points at depths closer to a viewpoint are weighted higher than points further from the viewpoint.
  • 8. The image processing method of claim 1, wherein the matching of feature points is based at least in part on gray levels of the feature points in the first and second point clouds.
  • 9. The image processing method of claim 1, wherein the feature points are extracted using scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) keypoint detection and matching.
  • 10. The image processing method of claim 1, wherein said determining an initial rotation and translation comprises determining rotation $R$ and translation $t$ that minimize $\sum_i w(d^f_i)\,[R\, p^f_{m(i)} + t - p'^f_{m(i)}]^2$, where $p^f_{m(i)}$ is a matching feature point of the first point cloud, $p'^f_{m(i)}$ is a matching feature point of the second point cloud which matches $p^f_{m(i)}$, $w(\cdot)$ is a decreasing function, and $d^f_i$ is the depth of the matching feature point $p^f_{m(i)}$.
  • 11. The image processing method of claim 1, wherein said performing ICP processing includes assigning zero weights to points that are determined to be located on a planar surface.
  • 12. The image processing method of claim 1, wherein said performing ICP processing includes assigning a lower weight to a first point that is part of an occlusion, relative to a weight assigned to a second point having the same depth value.
  • 13. The image processing method of claim 1, wherein the ICP processing includes determining a further rotation $R$ and a further translation $t$ that minimize the following expression: $\sum_i w(d_i, |p_i - q_i|)\,[R\, p_i + t - q_i]^2$, where $p_i$ is a point of the first point cloud, $q_i$ is a point of the second point cloud that is closest to $p_i$, $w(x, y)$ is a decreasing function of $(x, y)$, where $x$ represents $d_i$ and $y$ represents $|p_i - q_i|$, $d_i$ is a depth of $p_i$ or $q_i$, and $R\, p_i$ represents a rotation applied to the point $p_i$.
  • 14. An electronic device comprising: memory; and at least one processor coupled to the memory and executing instructions to: perform depth regularization on each of first and second point clouds; determine an initial rotation and translation of the first point cloud with respect to the second point cloud to initially align the first and second point clouds; and perform iterative closest point (ICP) processing using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.
  • 15. The electronic device of claim 14, wherein the at least one processor executes further instructions to: following the depth regularization, match feature points extracted from the first point cloud with feature points extracted from the second point cloud, and determine the initial rotation and translation using depth-based weighting of the feature points.
  • 16. The electronic device of claim 14, further comprising an input interface configured to receive the first and second point clouds from a stereo camera.
  • 17. The electronic device of claim 14, wherein the ICP processing comprises a process of depth weighted based alignment of corresponding points of the first and second point clouds, wherein points at depths closer to a viewpoint are weighted higher than points further from the viewpoint.
  • 18. The electronic device of claim 14, wherein the depth regularization is performed by computing a regularized depth $\tilde{d}(x)$ over a range of image element positions $x$ that satisfies the following expression: $\min_{\tilde{d}} \int \big( |\tilde{d}(x) - d(x)| + \mu\,|\nabla \tilde{d}(x)|\, G(|\nabla d(x)|) \big)\, dx$, where $d(x)$ is a depth value of an image element at position $x$ of an input depth map, $\nabla$ represents a gradient, $\mu$ is a constant, and $G(|\nabla d(x)|)$ is a decreasing function of $|\nabla d(x)|$.
  • 19. The electronic device of claim 14, wherein: each of the first and second point clouds is a point cloud obtained from at least one stereo camera; and the ICP processing comprises a process of depth weighted based alignment of corresponding points of the first and second point clouds, wherein points at depths closer to a viewpoint are weighted higher than points further from the viewpoint.
  • 20. The electronic device of claim 14, wherein the ICP processing includes determining a further rotation $R$ and a further translation $t$ that minimize the following expression: $\sum_i w(d_i, |p_i - q_i|)\,[R\, p_i + t - q_i]^2$, where $p_i$ is a point of the first point cloud, $q_i$ is a point of the second point cloud that is closest to $p_i$, $w(x, y)$ is a decreasing function of $(x, y)$, where $x$ represents $d_i$ and $y$ represents $|p_i - q_i|$, $d_i$ is a depth of $p_i$ or $q_i$, and $R\, p_i$ represents a rotation applied to the point $p_i$.
  • 21-25. (canceled)
  • 26. A system comprising: at least one camera configured to capture images of a scene from each of first and second viewpoints and obtain, respectively, first and second point clouds corresponding to the scene; and image processing apparatus comprising: memory; and at least one processor coupled to the memory and executing instructions to: match feature points extracted from a first point cloud with feature points extracted from a second point cloud; determine an initial rotation and translation of the first point cloud with respect to the second point cloud to initially align the first and second point clouds, using depth-based weighting of the feature points; and perform iterative closest point (ICP) processing using the initial rotation and translation, to generate at least one coordinate-transformed point cloud.
  • 27. (canceled)