The disclosed subject matter relates to computer vision and, specifically, stereo image reconstruction.
Stereo image reconstruction constitutes an important task in computer vision and has wide-ranging applications, including, for example, three-dimensional (3D) mapping, robot and drone navigation, and computer-assisted driving. Stereo image reconstruction generally involves processing a pair of two-dimensional (2D) images taken from slightly different viewpoints to compute the scene depth, a third dimension, based on the difference in apparent location of an image feature between the two images. The key problem addressed by any stereo image reconstruction method is to match a pixel in one image (herein the “base” image) to a pixel showing the same object or feature in the other image (herein the “match” image). Once the input image pair has been rectified using a stereo rectification process, the depth of the object or feature can be straightforwardly calculated from the one-dimensional (1D) horizontal displacement between the two matching pixels in the rectified images, often called the “disparity.”
Among many available stereo matching algorithms, semi-global matching (SGM) has become a popular technique due to its efficiency and parallelizability, which render it suitable for real-time stereo image reconstruction on field-programmable gate arrays (FPGAs) and graphics processing units (GPUs). In SGM (as in other stereo matching algorithms), each pixel in the base image is assigned a matching cost as a function of the disparity between that pixel and a corresponding pixel of the match image. For any given disparity value, the matching cost measures the degree of dissimilarity (e.g., in terms of intensity or color values) between the corresponding pair of pixels of the base and match images, or between respective image patches surrounding the two corresponding pixels. By jointly minimizing the cost function for all pixels in conjunction with an additional term that penalizes differences in disparity between adjacent or neighboring pixels in the image (to encourage smooth variations of the disparity across the image and thereby to reduce the effect of image noise), the true disparities for the pixels can be estimated. While “global stereo matching” techniques to solve this 2D optimization problem exist, the problem is NP-hard and, thus, computationally expensive to solve exactly. SGM, therefore, approximates the 2D optimization by multiple independent 1D optimization problems corresponding to multiple canonical scanline directions in the image. The 1D optimization problems can be solved exactly in polynomial time, using dynamic programming, by aggregating matching costs along the various scanline directions. The resulting multiple aggregated costs for each pixel are then summed over all scanline directions to compute a final aggregated pixelwise cost for each disparity, and a winner-take-all (WTA) strategy is used to select for each pixel the disparity value with the minimum aggregated cost.
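By way of illustration, the final summation across scanline directions and the WTA selection can be sketched as follows (a simplified NumPy sketch; the array layout, with aggregated scanline costs stacked along the first axis, is an assumption made for this example):

```python
import numpy as np

def wta_disparity(scanline_costs):
    """Sum per-scanline aggregated costs and pick, per pixel, the
    disparity with the minimum summed cost (winner-take-all).

    scanline_costs: array of shape (R, H, W, D) holding, for each of
    R scanline directions, the aggregated cost of each of D candidate
    disparities at every pixel of an H x W image.
    """
    summed = scanline_costs.sum(axis=0)   # (H, W, D) final aggregated cost
    return np.argmin(summed, axis=-1)     # (H, W) disparity map

# Toy example: 2 scanline directions, a 1x2 image, 3 candidate disparities.
costs = np.array([[[[3., 1., 2.], [5., 4., 0.]]],
                  [[[2., 0., 4.], [1., 3., 2.]]]])
disp = wta_disparity(costs)  # summed costs [[5,1,6],[6,7,2]] -> disparities [[1, 2]]
```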
The disparities of the individual pixels can be assembled into a disparity map for the stereo image pair. The summation across scanline directions, which serves in part to reduce streaking artifacts in the disparity map, works well for some images, but is ineffective for images capturing weakly textured slanted surfaces, and inadequate when the individual scanline optimizations yield inconsistent results. Accordingly, an improved stereo matching algorithm is desirable.
Described herein is an approach to stereo matching that utilizes a learned predictive model, trained based on stereo image pairs and associated ground-truth disparity maps, to fuse a plurality of proposal cost volumes each representing a pixelwise matching cost as a function of disparity. In various embodiments, at least some of the proposal cost volumes are generated by 1D scanline optimizations as known from SGM, but the ad-hoc steps of aggregating matching costs across scanline directions followed by WTA disparity selection are replaced by determining, for each pixel, the minimum-cost disparity values for all of the proposal cost volumes individually, and then selecting one among these minimum-cost disparity values (herein also “disparity proposals”) for the respective pixel. The selection may be achieved by using the predictive model to compute pixelwise posterior probability distributions over the disparity proposals, and picking the disparity proposal with the highest posterior probability. Heuristically, this procedure selects the “best” among the various disparity proposals. In some embodiments, a refined disparity estimate for each pixel is obtained by averaging over a subset of the disparity proposals that are close in value to the selected “best” disparity, weighted by the respective posterior probabilities. The sum of these posterior probabilities may be assigned to the disparity estimate for the pixel as a confidence measure. A subsequent spatial filtering step may enhance the spatial smoothness of the disparity map and confidence map by computing the median of the disparity and confidence measure, respectively, over local neighborhoods of pixels with similar intensity or color value.
The predictive model generally operates on pixelwise feature vectors of disparity and cost values sampled from the proposal cost volumes. For the sake of computational tractability, the sampling may be sparse to yield “light-weight” feature vectors. For example, in some embodiments, each feature vector includes (merely) the minimum-cost disparity values of all proposal cost volumes and, for each of the minimum-cost disparity values, the respective matching cost values of all proposal cost volumes. Beneficially, such a feature representation is fixed-size regardless of the range of disparity values, rendering the model generalizable. The predictive model may be a classification or regression model. Random forest classifiers, for instance, have proven to provide a good trade-off between accuracy and efficiency. However, the model may also be, for example, a support vector machine, decision tree, neural network, etc.
The foregoing will be more readily understood from the following detailed description of various embodiments, in particular, when taken in conjunction with the accompanying drawings.
Described herein are systems, methods, devices, and computer program products for computer vision and, more specifically, stereo image reconstruction.
The system 100 further includes computing hardware 104, including one or more general-purpose processors configured by suitable software and/or one or more special-purpose processors, to process the stereo images 103 in accordance with the methods described herein below. In some embodiments, the computing hardware 104 includes one or more general-purpose computers, with memory storing software programs for execution by one or more central processing units (CPUs) and/or GPUs. In other embodiments, the images are processed in whole or in part by an FPGA, application-specific integrated circuit (ASIC), digital signal processor (DSP), or other type of special-purpose processor, which, beneficially, can drastically reduce power requirements as compared with implementations using general-purpose hardware. The overall computational functionality may also be distributed between special-purpose and programmed general-purpose processors.
The computing hardware 104 receives the images 103 from the image-acquisition hardware 102 via one or more suitable interfaces and/or connectors 106, depending on the types of devices and the manner in which they are integrated. In some embodiments, the computing hardware 104 and image-acquisition hardware 102 may both be part of the same stand-alone device (that is, a self-contained device capable of operating without further hardware). For example, an FPGA or other processor may be integrated as an embedded device into a virtual-reality headset or robot equipped with a stereo camera. Similarly, a laptop, tablet, smart phone, or other personal computing device may have an embedded camera. In these embodiments, an internal device bus may transport the camera output signals from the camera(s) to the processor(s) of the stand-alone device. Alternatively, the computing hardware 104 may be provided as one or more devices separate from the image-acquisition hardware 102, and the image data may be transmitted via a wired or wireless communication link. For instance, a stereo camera device may send its camera output signals via a cable (e.g., a USB cable, a Gigabit ethernet cable, a ribbon cable, etc.) or via Bluetooth to a nearby (e.g., desktop, laptop, or tablet) computer. As another example, stereo image pairs may be uploaded from the image-acquisition hardware 102 to the internet, from which the computing hardware 104 may download them via a wired or wireless network connection. Other configurations of the image-acquisition and computation hardware 102, 104 may occur to those of ordinary skill in the art, the above non-limiting examples being intended merely to provide a flavor of various application contexts.
As shown in
As depicted, the functional blocks may include a proposal cost volume generator 108 that processes the stereo images 103 to create multiple proposal cost volumes 109 for each image pair 103, and a machine-learning module 110 that fuses the proposal cost volumes 109 to generate a single disparity map and associated confidence map 111 for each image pair. The proposal cost volume generator 108 may, for instance, be or include a conventional SGM scanline optimizer that generates a proposal cost volume for each of multiple canonical scanline directions in a manner briefly explained below. The machine-learning module 110 may include a featurizer 112 that samples the proposal cost volumes 109 to generate a feature vector 113 for each image pixel, a prediction model 114 that generates initial estimates of the disparity and associated confidence measure for each pixel, and, optionally, an averaging module 116 that averages over disparity proposals in the vicinity of the initial disparity estimate to obtain refined disparity estimates. A spatial-filtering module 118 may perform post-processing to smoothen the disparity map and associated confidence map. The smoothened maps 119 may be provided as output, e.g., to a downstream processing module or a display device (not shown).
The machine-learning module 110 may also include a learning module 120 that implements a machine-learning algorithm to train the prediction model 114 (i.e., determine the values of adjustable model parameters 121) in a supervised manner based on training data including, in addition to the stereo image pairs 103, associated ground-truth disparity maps 122 (e.g., as obtained with another, highly accurate but possibly computationally expensive stereo image reconstruction algorithm, or by other means, such as time-of-flight or other direct depth measurements). Note, however, that once training of the prediction model 114 has been completed (i.e., all adjustable parameters of the model have been set), the learning module 120 is no longer needed. Accordingly, a computer-vision system 100 intended for use solely during the inference phase may be preloaded with a trained prediction model 114 and omit the learning module 120.
Turning now to
Turning to
In various embodiments, one or more of the proposal cost volumes Kn({right arrow over (p)},d) are obtained by 1D scanline optimization along scanline directions {right arrow over (r)}, such as, e.g., any or all of the eight canonical directions, which include, in (x, y)-coordinate notation, horizontal directions (1,0) and (−1,0), vertical directions (0,1) and (0,−1), and diagonal directions (1,1), (1,−1), (−1,1), and (−1,−1). The proposal cost volume L{right arrow over (r)}({right arrow over (p)},d) for any given scanline direction {right arrow over (r)} is computed recursively by aggregating pixelwise matching costs along the scanline (usually starting with a pixel on the image border), e.g., in accordance with:
L{right arrow over (r)}({right arrow over (p)},d)=C{right arrow over (p)}(d)+mind′(L{right arrow over (r)}({right arrow over (p)}−{right arrow over (r)},d′)+V(d,d′)).
Herein, C{right arrow over (p)}(d) is a unary data term that encodes the cost, or penalty, of assigning pixel {right arrow over (p)} to disparity d, and V(d,d′) is a pairwise smoothness term that penalizes disparity differences between neighboring pixels {right arrow over (p)} and {right arrow over (p)}−{right arrow over (r)} in the scanline. The unary term C{right arrow over (p)}(d) may be computed from the pixel intensities within a fixed-size image region 𝒩{right arrow over (p)} (or “patch”) centered at {right arrow over (p)} in the base image and a corresponding patch centered at pixel {right arrow over (p)}−{right arrow over (d)} (where {right arrow over (d)} indicates the magnitude d as well as the direction of the disparity) in the match image.
The unary term may, for example, take the form of the sum, over the image region 𝒩{right arrow over (p)}, of the absolute intensity differences between a pixel {right arrow over (q)} in the base image and a corresponding pixel {right arrow over (q)}−{right arrow over (d)} in the match image:
C{right arrow over (p)}(d)=Σ{right arrow over (q)}ϵ𝒩{right arrow over (p)}|Ibase({right arrow over (q)})−Imatch({right arrow over (q)}−{right arrow over (d)})|.
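For illustration only, this sum-of-absolute-differences (SAD) unary term might be computed for a single pixel of a rectified grayscale pair as follows (a minimal NumPy sketch with assumed function and parameter names; patches extending past the image border are not handled):

```python
import numpy as np

def sad_cost(base, match, x, y, d, radius=1):
    """SAD matching cost C_p(d) for pixel p = (x, y) of the base image
    at disparity d: sum of absolute intensity differences between the
    square patch centered at p in the base image and the patch centered
    at (x - d, y) in the match image."""
    bp = base[y - radius:y + radius + 1, x - radius:x + radius + 1]
    mp = match[y - radius:y + radius + 1, x - d - radius:x - d + radius + 1]
    return np.abs(bp.astype(float) - mp.astype(float)).sum()

# A 5x5 pair in which the match image is the base image shifted left by
# one pixel, so the true disparity is 1 everywhere (away from the border):
base = np.arange(25, dtype=float).reshape(5, 5)
match = np.roll(base, -1, axis=1)
c1 = sad_cost(base, match, x=2, y=2, d=1)  # -> 0.0 (correct disparity)
c0 = sad_cost(base, match, x=2, y=2, d=0)  # -> 9.0 (wrong disparity)
```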
Other suitable forms of the unary term C{right arrow over (p)}(d) include, for example, a normalized cross-correlation (NCC) of the pixel intensities of the corresponding image patches, and the output of a matching-cost convolutional neural network (MC-CNN), specifically, either the fast or the accurate variant (MC-CNN-fast, MC-CNN-acrt); these and other cost terms are well-known in the art. The smoothness term V(d,d′) is chosen, in some embodiments, to have the following specific form:
V(d,d′)=0 if d=d′; V(d,d′)=P1 if |d−d′|=1; and V(d,d′)=P2 if |d−d′|>1, with constant penalties P2≥P1>0,
which favors first-order smoothness, that is, has a preference for fronto-parallel surfaces. However, other forms of V(d,d′) may also be used.
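A single left-to-right (direction (1,0)) scanline optimization of this kind can be sketched for one image row as follows (a NumPy sketch assuming the classic SGM penalty constants P1 and P2; function and variable names are assumptions):

```python
import numpy as np

def scanline_costs(unary, P1=1.0, P2=4.0):
    """Aggregate costs along one image row for scanline direction (1, 0).

    unary: array of shape (W, D) with the unary term C_p(d) for each of
    W pixels and D candidate disparities.  Returns L_r(p, d), computed
    recursively as C_p(d) plus the cheapest transition from the previous
    pixel under the smoothness penalty V (0 for equal disparity, P1 for
    a one-step disparity change, P2 for larger jumps), minus the previous
    pixel's minimum cost to keep the values bounded.
    """
    W, D = unary.shape
    L = np.empty((W, D))
    L[0] = unary[0]
    for x in range(1, W):
        prev = L[x - 1]
        best = prev.min()
        for d in range(D):
            same = prev[d]                                            # V = 0
            step = min(prev[max(d - 1, 0)], prev[min(d + 1, D - 1)]) + P1
            jump = best + P2                                          # V = P2
            L[x, d] = unary[x, d] + min(same, step, jump) - best
    return L

# Two pixels, three disparities: the second pixel's cheap d = 2 is reached
# from the first pixel's cheap d = 0 only by paying the P2 jump penalty.
unary = np.array([[0., 10., 10.],
                  [10., 10., 0.]])
L = scanline_costs(unary)   # L[1] = [10., 11., 4.]
```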
In some embodiments, proposal cost volumes are computed for fewer than all eight canonical 1D scanline directions. For instance, to facilitate further processing of computed portions of the proposal cost volumes (e.g., computation of pixelwise feature vectors) before the computation of the proposal cost volumes in their entirety is completed, a subset of five scanline directions that all fall within a semicircle of directions (e.g., the two horizontal directions (1,0) and (−1,0), the vertical top-to-bottom direction (0,−1), and the two diagonal top-to-bottom directions (1,−1) and (−1,−1)) may be selected.
Conversely, the proposal cost volumes Kn({right arrow over (p)},d) used in method 300 are not necessarily limited to the 1D-scanline proposal cost volumes L{right arrow over (r)}({right arrow over (p)},d) in one or more of the eight canonical directions. In some embodiments, an SGM cost volume generated by aggregating over the individual scanline directions, S({right arrow over (p)},d)=Σ{right arrow over (r)}L{right arrow over (r)}({right arrow over (p)},d), serves itself as one of multiple proposal cost volumes. Further, in addition to proposal cost volumes computed by 1D scanline optimization for the base image, the set of proposal cost volumes also includes, in some embodiments, one or more proposal cost volumes obtained by 1D scanline optimizations performed for the match image (in other words, computed with reversed roles of the two images as base and match images). (Scanline optimizations computed, for the same scanline direction, for the base image versus the match image differ, for example, in the pixels at the image margin that constitute the start and end points of a scanline. Further differences can result from pixels in the base image being occluded in the match image or vice versa.) The set of proposal cost volumes may be further extended by repeating the computations for different image resolutions (e.g., full resolution, half resolution, and quarter resolution for three proposal cost volumes for each scanline direction and/or other type of proposal).
Each proposal cost volume Kn({right arrow over (p)},d) inherently provides an associated disparity proposal for each pixel, corresponding to the disparity value with minimum cost:
d*{right arrow over (p)}(n)=argmind Kn({right arrow over (p)},d).
The different disparity proposals, such as the disparities of different 1D scanline solutions (e.g., as reflected in the disparity maps of
Accordingly, with renewed reference to
({circumflex over (d)}{right arrow over (p)}, {circumflex over (c)}{right arrow over (p)})=F({right arrow over (f)}{right arrow over (p)}) with {circumflex over (d)}{right arrow over (p)}ϵℝ≥0 and {circumflex over (c)}{right arrow over (p)}ϵ[0,1].
In some embodiments, the feature vector {right arrow over (f)}{right arrow over (p)} is a concatenation of the locations of the pixelwise WTA solutions (i.e., the minimum-cost disparities) d*{right arrow over (p)}(n) of all proposal cost volumes and the corresponding matching costs Km({right arrow over (p)},d*{right arrow over (p)}(n)) in all proposal cost volumes, where both n and m run over the number N of proposal cost volumes. The feature vectors are, in these embodiments, composed of N disparity proposals and N² sparsely sampled costs, such that {right arrow over (f)}{right arrow over (p)}ϵℝN+N². Intuitively, a disparity proposal d*{right arrow over (p)}(n) is plausible if the corresponding sampled costs Km({right arrow over (p)},d*{right arrow over (p)}(n)) are low.
Other combinations of disparities and matching costs sparsely sampled from the proposal cost volumes may be used for the feature vectors. In general, sparse sampling—as distinguished from a naïve concatenation of the matching costs for all disparities across all proposal cost volumes—is beneficial for two reasons. First, it provides a light-weight feature representation and corresponding predictive model that entail small runtime overhead and, thus, largely retain the computational efficiency that renders SGM attractive. For example, with eight 1D-scanline proposal cost volumes and |𝒟|=256 possible disparity values, the above-defined feature vector has only 8+8²=72 dimensions, whereas the naïve feature representation would have 8·256=2048 dimensions. Second, using a fixed-size feature representation, that is, a feature representation that is independent of the size of the disparity range 𝒟, renders the model generalizable to scenes with varying disparity ranges, allowing the range to differ, e.g., between the training and inference phases.
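The sparse sampling described above can be sketched as follows (a NumPy sketch; the array names and the (N, H, W, D) layout are assumptions made for this example):

```python
import numpy as np

def featurize(volumes):
    """Build the per-pixel feature vector from N proposal cost volumes.

    volumes: array of shape (N, H, W, D).  For each pixel, the feature
    vector concatenates the N minimum-cost disparities d*(n) with the
    N*N costs K_m(p, d*(n)) sampled at those disparities, yielding
    features of shape (H, W, N + N*N) regardless of the disparity
    range D.
    """
    N, H, W, D = volumes.shape
    d_star = np.argmin(volumes, axis=-1)          # (N, H, W) proposals
    feats = np.empty((H, W, N + N * N))
    feats[..., :N] = np.moveaxis(d_star, 0, -1)   # the N disparity proposals
    for n in range(N):                            # cost of proposal n ...
        for m in range(N):                        # ... sampled in volume m
            feats[..., N + n * N + m] = np.take_along_axis(
                volumes[m], d_star[n][..., None], axis=-1)[..., 0]
    return feats

# Two 1x1-pixel proposal cost volumes over three candidate disparities:
vols = np.array([[[[3., 1., 2.]]],
                 [[[0., 5., 4.]]]])
feats = featurize(vols)   # feats[0, 0] = [1, 0, 1, 5, 3, 0]
```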
Given a training set of stereo image pairs and associated ground-truth disparity maps, the predictive model F can be learned in a supervised manner in multiple ways, depending on the type of model. Classifier models have been shown to provide good results, but regression models may, in principle, also be used. Further, among various types of classifiers well-known in the art, such as support vector machines, decision trees, random forests, neural networks, k-nearest-neighbor (k-NN) algorithms, etc., random forests have been found to provide a good trade-off between accuracy and efficiency. In brief, a random-forest classifier includes an ensemble of decision trees whose leaf nodes each correspond to one of the possible classifications, in the instant case, one of the proposals 1 . . . N. Each decision tree bifurcates at each of its internal nodes into two branches, partitioning the input, as represented by points in a suitable feature space (e.g., corresponding to the (N+N²)-dimensional feature vectors {right arrow over (f)}{right arrow over (p)}), into two groups using a suitable split function. The parameters of the split functions are adjusted during training, based on training data with known classifications, to select splits that are discriminative with respect to the classifications. The proper classification of a feature vector {right arrow over (f)}{right arrow over (p)} derived from a set of proposal cost volumes can be straightforwardly determined from the ground-truth disparity by identifying the proposal cost volume whose minimum-cost proposal comes closest. The model can be trained using any of various well-known algorithms for training random forests, for instance, a greedy algorithm.
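As a concrete (purely hypothetical) sketch, such a classifier could be trained with scikit-learn; the helper below derives the ground-truth class of each pixel as described, while the feature matrix here is random placeholder data standing in for the featurized pixels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_labels(d_star, d_true):
    """Class label per pixel: the index n of the proposal cost volume
    whose minimum-cost disparity d*(n) comes closest to the ground truth.
    d_star: (N, H, W) disparity proposals; d_true: (H, W) ground truth."""
    return np.argmin(np.abs(d_star - d_true[np.newaxis].astype(float)), axis=0)

# Hypothetical training data: one row per pixel, N + N^2 = 6 features.
rng = np.random.default_rng(0)
X = rng.random((100, 6))                           # placeholder feature vectors
proposals = rng.integers(0, 8, size=(2, 10, 10))   # N = 2 proposals per pixel
truth = rng.integers(0, 8, size=(10, 10))
y = make_labels(proposals, truth).ravel()          # 100 class labels
model = RandomForestClassifier(n_estimators=32, random_state=0).fit(X, y)
posteriors = model.predict_proba(X)                # pixelwise posterior over proposals
```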
The disparity proposal d*{right arrow over (p)}(n*) with the highest posterior probability, n*=argmaxn ρ{right arrow over (p)}(n), is then selected, in operation 404, as the initial disparity estimate for pixel {right arrow over (p)}. To refine this initial estimate, in operation 406, a subset of disparity proposals close to the initial estimate, together with their corresponding posterior probabilities, is determined:
𝒫*{right arrow over (p)}={(d*{right arrow over (p)}(k),ρ{right arrow over (p)}(k)) | k=1 . . . N Λ |d*{right arrow over (p)}(k)−d*{right arrow over (p)}(n*)|<ϵd}.
When multiple proposal cost volumes agree on a solution for a pixel, the subset 𝒫*{right arrow over (p)} contains multiple elements, even for small thresholds ϵd of the permissible disparity difference. The refined (and in some embodiments final) pixelwise disparity estimates {circumflex over (d)}{right arrow over (p)} are computed, in operation 408, by averaging over the disparity proposals d*{right arrow over (p)}(k) within the subset 𝒫*{right arrow over (p)}, weighted by the respective posterior probabilities ρ{right arrow over (p)}(k):
{circumflex over (d)}{right arrow over (p)}=Σkρ{right arrow over (p)}(k)d*{right arrow over (p)}(k)/Σkρ{right arrow over (p)}(k).
Further, the associated confidence measures {circumflex over (c)}{right arrow over (p)} are determined by summing the posterior probabilities within the subset:
{circumflex over (c)}{right arrow over (p)}=Σkρ{right arrow over (p)}(k).
Note that the refined disparity estimates have sub-pixel precision, that is, {circumflex over (d)}{right arrow over (p)} can assume real, non-integer values. Moreover, all operations of method 308 are fully parallelizable at the pixel level, which renders them suitable for FPGA implementations.
With disparity and confidence maps computed based on the pixelwise feature vectors, the method 308 is complete at 410. The disparity map and/or confidence map may be output (e.g., on screen), immediately provided as input to downstream computations (e.g., depth computations for depth sensing or 3D image-reconstruction), or stored in a computer-readable medium for later use.
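For a single pixel, operations 404 through 408 amount to the following (a plain-Python sketch with assumed names; eps_d stands for the disparity-difference threshold ϵd):

```python
def refine_pixel(proposals, posteriors, eps_d=1.0):
    """Fuse the disparity proposals of one pixel.

    proposals: list of N minimum-cost disparities d*(n);
    posteriors: predicted posterior probability of each proposal.
    Selects the proposal with the highest posterior, averages all
    proposals within eps_d of it weighted by their posteriors, and
    returns the refined disparity together with the summed posterior
    mass as a confidence measure.
    """
    n_best = max(range(len(proposals)), key=lambda n: posteriors[n])
    subset = [(d, p) for d, p in zip(proposals, posteriors)
              if abs(d - proposals[n_best]) < eps_d]
    confidence = sum(p for _, p in subset)
    disparity = sum(d * p for d, p in subset) / confidence
    return disparity, confidence

# Three proposals agree near d = 10; the outlier at d = 25 is excluded:
d_hat, c = refine_pixel([10.0, 10.5, 25.0, 10.0],
                        [0.4, 0.2, 0.1, 0.3], eps_d=1.0)
# d_hat ~ 10.11 (sub-pixel precision), c ~ 0.9
```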
Referring again to
𝒩{right arrow over (p)}={{right arrow over (q)} | ∥{right arrow over (q)}−{right arrow over (p)}∥<ϵ{right arrow over (p)} Λ {circumflex over (c)}{right arrow over (q)}>ϵc Λ |I({right arrow over (p)})−I({right arrow over (q)})|<ϵI},
is defined for each pixel. The local neighborhood 𝒩{right arrow over (p)} is centered around the pixel {right arrow over (p)} (with distances between pixels {right arrow over (p)} and {right arrow over (q)} being smaller than a specified distance threshold ϵ{right arrow over (p)}) and includes only pixels {right arrow over (q)} having similar image intensity (with an absolute intensity difference between the pixels {right arrow over (p)} and {right arrow over (q)} being smaller than a specified intensity-difference threshold ϵI) and having associated confidence measures {circumflex over (c)}{right arrow over (q)} above a specified confidence threshold ϵc. The spatially filtered disparity and confidence estimates are given by the medians:
{tilde over (d)}{right arrow over (p)}=median {circumflex over (d)}{right arrow over (q)} and {tilde over (c)}{right arrow over (p)}=median {circumflex over (c)}{right arrow over (q)}, with {right arrow over (q)}ϵ𝒩{right arrow over (p)}.
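A direct (unoptimized) sketch of this confidence- and intensity-gated median filter, with hypothetical threshold names, might read:

```python
import numpy as np

def guided_median(disp, conf, intensity, eps_p=2.0, eps_i=10.0, eps_c=0.5):
    """Median-filter disparity and confidence maps over local
    neighborhoods restricted to nearby pixels (distance < eps_p) with
    similar intensity (difference < eps_i) and sufficient confidence
    (> eps_c).  Pixels with an empty neighborhood are left unchanged.
    """
    H, W = disp.shape
    out_d, out_c = disp.copy(), conf.copy()
    r = int(eps_p)
    for y in range(H):
        for x in range(W):
            ds, cs = [], []
            for qy in range(max(0, y - r), min(H, y + r + 1)):
                for qx in range(max(0, x - r), min(W, x + r + 1)):
                    if ((qy - y) ** 2 + (qx - x) ** 2 < eps_p ** 2
                            and conf[qy, qx] > eps_c
                            and abs(intensity[y, x] - intensity[qy, qx]) < eps_i):
                        ds.append(disp[qy, qx])
                        cs.append(conf[qy, qx])
            if ds:
                out_d[y, x] = np.median(ds)
                out_c[y, x] = np.median(cs)
    return out_d, out_c

# A single disparity outlier amid uniform intensity is replaced by the
# median of its neighborhood:
disp = np.full((3, 3), 5.0)
disp[1, 1] = 100.0
fd, fc = guided_median(disp, np.ones((3, 3)), np.zeros((3, 3)))
```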
In some embodiments, following computation of the disparity map (with or without spatial filtering, as the case may be) or portions thereof, depths associated with the estimated disparities are computed in operation 312. For a disparity d computed from a pair of stereo images taken with two cameras with focal length f spaced a distance B apart (or a single camera moved by a distance B between acquisition of the two images), the depth z can be computed according to:
z=f·B/d.
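In code, the conversion is a one-liner guarded against zero disparity (a NumPy sketch; the infinite-depth convention for unmatched, zero-disparity pixels is an assumption):

```python
import numpy as np

def disparity_to_depth(disp, focal_length_px, baseline_m):
    """Depth z = f * B / d per pixel; pixels with non-positive disparity
    (no valid match) are assigned infinite depth."""
    disp = np.asarray(disp, dtype=float)
    return np.where(disp > 0,
                    focal_length_px * baseline_m / np.maximum(disp, 1e-12),
                    np.inf)

# E.g., with f = 700 px and B = 0.12 m, a 42-pixel disparity is about 2 m away:
depth = disparity_to_depth([[42.0, 0.0]], 700.0, 0.12)
```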
The depth computation, which completes the method 300 at 314, may flow into downstream methods, where it can be used in various ways and produce various outputs, depending on the particular application. For example, in some applications, a three-dimensional image or model of the imaged scene is reconstructed based on the two-dimensional stereo image(s) and the associated depths; for this purpose, data may be assembled across a set of more than two images that collectively cover the scene from a larger range of angles than a mere two images can afford (possibly from all angles). Such three-dimensional images or models can be used, for instance, in virtual-reality or augmented-reality applications (e.g., to provide, as visual output in an on-screen display, a virtual tour through a building or other space previously captured in images, or in gaming to place virtual objects properly within a scene filmed by a user-worn camera), in security applications where depth information may, for instance, improve facial recognition, or in engineering or similar applications that may use a three-dimensional model of an object as input. Depth sensing based on stereo reconstruction can also be used in various automation and navigation tasks. For example, robots and self-driving vehicles may use the determined depth of objects to avoid collisions, grab objects, etc. Yet another task employing depth sensing is motion tracking, e.g., as used in security applications to detect intruders or in human-machine interaction contexts to facilitate gesture-based control of a machine. As will be appreciated by those of ordinary skill in the art, disparities and depths need not necessarily be computed across the entire original image range in all applications.
Rather, in some contexts, it may suffice to compute a disparity map and determine depth for an image region corresponding to a certain object of interest, e.g., as determined from the 2D images by edge detection and other means (and possibly confirmed by depth determinations around a margin of the detected edge).
In general, the operations, algorithms, and methods described herein may be implemented in any suitable combination of software, hardware, and/or firmware, and the provided functionality may be grouped into a number of components, modules, or mechanisms. Modules and components can constitute either software components (e.g., code embodied on a non-transitory machine-readable medium) or hardware-implemented components. A hardware-implemented component is a tangible unit capable of performing certain operations and can be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client, or server computer system) or one or more processors can be configured by software (e.g., an application or application portion) as a hardware-implemented component that operates to perform certain operations as described herein.
In various embodiments, a hardware-implemented component can be implemented mechanically or electronically. For example, a hardware-implemented component can comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware-implemented component can also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware-implemented component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations.
Accordingly, the term “hardware-implemented component” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily or transitorily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware-implemented components are temporarily configured (e.g., programmed), each of the hardware-implemented components need not be configured or instantiated at any one instance in time. For example, where the hardware-implemented components comprise a general-purpose processor configured using software, the general-purpose processor can be configured as respective different hardware-implemented components at different times. Software can accordingly configure a processor, for example, to constitute a particular hardware-implemented component at one instance of time and to constitute a different hardware-implemented component at a different instance of time.
Hardware-implemented components can provide information to, and receive information from, other hardware-implemented components. Accordingly, the described hardware-implemented components can be regarded as being communicatively coupled. Where multiple such hardware-implemented components exist contemporaneously, communications can be achieved through signal transmission (e.g., over appropriate circuits and buses that connect the hardware-implemented components). In embodiments in which multiple hardware-implemented components are configured or instantiated at different times, communications between such hardware-implemented components can be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware-implemented components have access. For example, one hardware-implemented component can perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware-implemented component can then, at a later time, access the memory device to retrieve and process the stored output. Hardware-implemented components can also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The various operations of example methods described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors can constitute processor-implemented components that operate to perform one or more operations or functions. The components referred to herein can, in some example embodiments, comprise processor-implemented components.
Similarly, the methods described herein can be at least partially processor-implemented. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented components. The performance of certain of the operations can be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors can be located in a single location (e.g., within an office environment, or a server farm), while in other embodiments the processors can be distributed across a number of locations.
The one or more processors can also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations can be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
Example embodiments can be implemented in digital electronic circuitry, in computer hardware, firmware, or software, or in combinations of them. Example embodiments can be implemented using a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable medium for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
A computer program can be written in any form of description language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
In example embodiments, operations can be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method operations can also be performed by, and apparatus of example embodiments can be implemented as, special-purpose logic circuitry, e.g., an FPGA or an application-specific integrated circuit (ASIC).
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In embodiments deploying a programmable computing system, it will be appreciated that both hardware and software architectures merit consideration. Specifically, it will be appreciated that the choice of whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or a combination of permanently and temporarily configured hardware can be a design choice. Below are set out hardware (e.g., machine) and software architectures that can be deployed, in various example embodiments.
The example computer system 500 includes a processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 504, and a static memory 506, which communicate with each other via a bus 508. The computer system 500 can further include a video display 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 500 also includes an alpha-numeric input device 512 (e.g., a keyboard or a touch-sensitive display screen), a user interface (UI) navigation (or cursor control) device 514 (e.g., a mouse), a disk drive unit 516, a signal generation device 518 (e.g., a speaker), and a network interface device 520.
The disk drive unit 516 includes a machine-readable medium 522 on which are stored one or more sets of data structures and instructions 524 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 524 can also reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500, with the main memory 504 and the processor 502 also constituting machine-readable media.
While the machine-readable medium 522 is shown in an example embodiment to be a single medium, the term “machine-readable medium” can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 524 or data structures. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions 524 for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions 524. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media 522 include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 524 can be transmitted or received over a communication network 526 using a transmission medium. The instructions 524 can be transmitted using the network interface device 520 and any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi and WiMax networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions 524 for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
The following numbered examples are provided as illustrative embodiments.
Example 1. A stereo-matching method comprising: computing, with one or more hardware processors, from a pair of two-dimensional digital stereo images, a plurality of proposal cost volumes each representing a pixelwise cost as a function of disparity; computing, with the one or more hardware processors, pixelwise feature vectors comprising disparity and cost values sparsely sampled from the proposal cost volumes; and using a predictive model to compute, with the one or more hardware processors, from the pixelwise feature vectors, disparity estimates for pixels within a base image of the pair of stereo images.
Example 2. The method of example 1, wherein computing the pixelwise feature vectors comprises, for each of the feature vectors, selecting, from each of the plurality of proposal cost volumes, a respective minimum-cost disparity value.
Example 3. The method of example 2, wherein computing the pixelwise feature vectors further comprises, for each of the feature vectors, sampling the cost values of all of the proposal cost volumes at each of the minimum-cost disparity values.
Example 4. The method of example 2 or example 3, wherein computing the disparity estimates comprises: selecting, for each of the pixels, one of the minimum-cost disparity values from the plurality of proposal cost volumes based on posterior probabilities associated, for that pixel, with the plurality of proposal cost volumes.
Example 5. The method of example 4, wherein, for each of the pixels, the selected one of the minimum-cost disparity values constitutes an initial disparity estimate, and wherein computing the disparity estimates further comprises: computing, for each of the pixels, a refined disparity estimate as a weighted average of minimum-cost disparity values in a vicinity of the initial disparity estimate, the weights in the weighted average being the posterior probabilities associated with the proposal cost volumes corresponding to those minimum-cost disparity values.
Example 6. The method of any one of examples 1-5, further comprising: computing, using the predictive model, confidence measures associated with the disparity estimates for the pixels.
Example 7. The method of example 6, further comprising: computing, for each of the pixels, a spatially filtered disparity estimate as a median of the disparity estimates of pixels within a local neighborhood, the local neighborhood comprising pixels with similar image intensity and with associated confidence measures above a specified threshold.
Example 8. The method of any one of examples 1-7, wherein the predictive model is a classifier model.
Example 9. The method of example 8, wherein the classifier model is a random-forest model.
Example 10. The method of any one of examples 1-9, wherein the proposal cost volumes comprise proposal cost volumes obtained by one-dimensional scanline optimizations for a plurality of scanline directions.
Example 11. The method of example 10, wherein all of the scanline directions associated with the one-dimensional scanline optimizations fall within a single semicircle of directions.
Example 12. The method of any one of examples 1-11, further comprising computing depths associated with the disparity estimates.
Example 13. A computer-vision system comprising: at least one hardware processor configured to perform operations comprising: computing, from a pair of two-dimensional digital stereo images comprising a base image and a match image, a plurality of proposal cost volumes each representing a pixelwise cost as a function of disparity; computing pixelwise feature vectors comprising disparity and cost values sparsely sampled from the proposal cost volumes; and using a predictive model to compute, from the pixelwise feature vectors, disparity estimates for pixels within the base image.
Example 14. The system of example 13, wherein the at least one hardware processor comprises at least one of a field-programmable gate array or an application-specific integrated circuit.
Example 15. The system of example 14, wherein the proposal cost volumes are obtained by one-dimensional scanline optimizations for a plurality of scanline directions, all of the scanline directions falling within a single semicircle of directions.
Example 16. The system of example 13, wherein the at least one hardware processor comprises a general-purpose computer processor, the system further comprising memory storing instructions for execution by the general-purpose computer processor, at least some of the operations being performed by execution of the instructions.
Example 17. The system of any one of examples 13-16, further comprising a stereo camera device.
Example 18. The system of example 17, wherein the stereo camera device and the at least one hardware processor are integrated into a stand-alone device.
Example 19. The system of any one of examples 13-18, wherein each of the pixelwise feature vectors comprises a minimum-cost disparity value from each of the plurality of proposal cost volumes, and wherein computing the disparity estimates further comprises: selecting, for each of the pixels, from among the minimum-cost disparity values of the plurality of proposal cost volumes, the minimum-cost disparity value of the proposal cost volume having the highest associated posterior probability for that pixel.
Example 20. One or more machine-readable media storing instructions for execution by one or more hardware processors, execution of the instructions causing the one or more hardware processors to perform operations comprising: computing, from a pair of two-dimensional digital stereo images comprising a base image and a match image, a plurality of proposal cost volumes each representing a pixelwise cost as a function of disparity; computing pixelwise feature vectors comprising disparity and cost values sparsely sampled from the proposal cost volumes; and using a predictive model to compute, from the pixelwise feature vectors, disparity estimates for pixels within the base image.
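To make the sparse feature construction of examples 2 and 3 concrete, the following Python/NumPy sketch builds, for each pixel, a feature vector holding the K minimum-cost disparities (one per proposal cost volume) together with the costs of all K volumes sampled at each of those disparities. The function name, the (K, H, W, D) array layout, and the feature ordering are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def sparse_feature_vectors(cost_volumes):
    """Build per-pixel feature vectors from K proposal cost volumes.

    cost_volumes: array of shape (K, H, W, D), giving a pixelwise cost
    per candidate disparity for each proposal (e.g., per scanline
    direction). Returns features of shape (H, W, K + K*K): for each
    pixel, the K minimum-cost disparities followed by the costs of all
    K volumes sampled at each of those K disparities.
    """
    K, H, W, D = cost_volumes.shape
    # Minimum-cost disparity in each proposal volume: shape (K, H, W).
    d_min = cost_volumes.argmin(axis=3)
    feats = np.empty((H, W, K + K * K), dtype=np.float32)
    feats[..., :K] = np.moveaxis(d_min, 0, -1)
    # Cross-sample: cost of volume j at the winning disparity of volume i.
    rows, cols = np.mgrid[0:H, 0:W]
    for i in range(K):
        for j in range(K):
            feats[..., K + i * K + j] = cost_volumes[j, rows, cols, d_min[i]]
    return feats
```

Sampling only K + K*K values per pixel, rather than the full K*D costs, is what keeps the feature vectors sparse and the downstream predictive model cheap to evaluate.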
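Examples 4 and 5 select among the proposal disparities by posterior probability and then refine the selection by a posterior-weighted average near the initial estimate. The sketch below assumes the per-pixel posteriors have already been produced by the predictive model, and interprets "vicinity" as a fixed disparity radius; both the function name and that radius interpretation are illustrative assumptions.

```python
import numpy as np

def refine_disparity(d_min, posteriors, radius=2.0):
    """Select and refine per-pixel disparities from K proposals.

    d_min:      (K, H, W) minimum-cost disparity per proposal volume.
    posteriors: (K, H, W) per-pixel posterior probability that each
                proposal's disparity is correct (from the classifier).
    Returns (initial, refined) estimates, each of shape (H, W).
    """
    # Initial estimate: disparity of the proposal with the highest posterior.
    best = posteriors.argmax(axis=0)                       # (H, W)
    rows, cols = np.mgrid[0:d_min.shape[1], 0:d_min.shape[2]]
    initial = d_min[best, rows, cols].astype(np.float32)
    # Refinement: posterior-weighted average of the proposal disparities
    # lying within `radius` of the initial estimate.
    near = np.abs(d_min - initial) <= radius               # (K, H, W)
    w = posteriors * near
    refined = (w * d_min).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)
    return initial, refined
```

The weighted average yields sub-pixel disparity estimates, since it blends the integer-valued proposal disparities according to their posteriors.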
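The confidence-gated median filtering of examples 6 and 7 might be sketched as follows. The window size, the thresholds, and the use of absolute intensity difference as the "similar image intensity" test are illustrative choices, not values recited in the examples.

```python
import numpy as np

def filtered_median(disparity, confidence, intensity, *,
                    win=2, conf_thresh=0.5, int_thresh=10.0):
    """Spatially filter disparities with a confidence- and intensity-gated
    median. For each pixel, take the median of disparities in a
    (2*win+1)^2 neighborhood, keeping only neighbors whose confidence
    exceeds conf_thresh and whose intensity is within int_thresh of the
    center pixel's intensity. Pixels with no qualifying neighbor keep
    their original disparity.
    """
    H, W = disparity.shape
    out = disparity.astype(np.float32)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - win), min(H, y + win + 1)
            x0, x1 = max(0, x - win), min(W, x + win + 1)
            d = disparity[y0:y1, x0:x1]
            keep = ((confidence[y0:y1, x0:x1] > conf_thresh) &
                    (np.abs(intensity[y0:y1, x0:x1] - intensity[y, x])
                     <= int_thresh))
            if keep.any():
                out[y, x] = np.median(d[keep])
    return out
```

Gating on both confidence and intensity similarity suppresses low-confidence outliers while avoiding smoothing across object boundaries, where intensity typically changes abruptly.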
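The depth computation of example 12 follows the standard rectified-stereo relation depth = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity; this relation is textbook stereo geometry rather than something recited in the examples, and the function name is illustrative.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert disparity (pixels) to depth (meters) for a rectified
    stereo pair: depth = focal_px * baseline_m / disparity.
    Zero or negative disparities (no valid match) map to infinity."""
    d = np.asarray(disparity, dtype=np.float64)
    return np.where(d > 0,
                    focal_px * baseline_m / np.maximum(d, 1e-12),
                    np.inf)
```

Note the inverse relationship: depth resolution degrades quadratically with distance, since a one-pixel disparity step spans an ever-larger depth interval as d shrinks.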
Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.