Output of a neural network method for deep odometry assisted by static scene optical flow

Information

  • Patent Grant
  • Patent Number
    10,552,979
  • Date Filed
    Wednesday, September 13, 2017
  • Date Issued
    Tuesday, February 4, 2020
Abstract
A method of visual odometry for a non-transitory computer readable storage medium storing one or more programs is disclosed. The one or more programs include instructions which, when executed by a computing device, cause the computing device to perform steps comprising: performing data alignment among sensors including a LiDAR, cameras and an IMU-GPS module; collecting image data and generating point clouds; processing, in the IMU-GPS module, a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; and establishing an optical flow for visual odometry.
Description
PRIORITY/RELATED DOCUMENTS

This patent application claims priority to, and incorporates by reference in their entireties, the following co-pending patent applications filed on Sep. 13, 2017: (1) “Data Acquisition and Input of Neural Network Method for Deep Odometry Assisted by Static Scene Optical Flow;” (2) “Data Acquisition and Input of Neural Network System for Deep Odometry Assisted by Static Scene Optical Flow;” (3) “Neural Network Architecture Method for Deep Odometry Assisted by Static Scene Optical Flow;” (4) “Neural Network Architecture System for Deep Odometry Assisted by Static Scene Optical Flow;” (5) “Output of a Neural Network System for Deep Odometry Assisted by Static Scene Optical Flow;” (6) “Training and Testing of a Neural Network Method for Deep Odometry Assisted by Static Scene Optical Flow;” and (7) “Training and Testing of a Neural Network System for Deep Odometry Assisted by Static Scene Optical Flow,” all with the same inventor(s).


FIELD OF THE DISCLOSURE

The field of the disclosure is in general related to autonomous vehicles and, in particular, to a method and system for deep odometry assisted by static scene optical flow.


BACKGROUND OF THE DISCLOSURE

In recent years, an increasing amount of interest and research effort has been put toward intelligent or autonomous vehicles. With the continuous progress in autonomous technology, robot sensors are generating increasing amounts of real-world data. Autonomous vehicle research is highly dependent on vast quantities of real-world data for development, testing and validation of algorithms before deployment on public roads. However, the cost of processing and analyzing these data, including developing and maintaining a suitable autonomous vehicle platform, regular calibration and data collection procedures, and storing the collected data, is so high that few research groups can manage it. Following the benchmark-driven approach of the computer vision community, a number of vision-based autonomous driving datasets have been released. Some existing datasets, however, may not generalize well to different environments. Moreover, hand-crafted features may be employed to extract keypoints and descriptors, and to find matching points from which motion parameters are solved. Such feature-based methods fail when a scene has no salient keypoints.


BRIEF SUMMARY OF THE DISCLOSURE

Various objects, features, aspects and advantages of the present disclosure will become more apparent from the following detailed description of embodiments, along with the accompanying drawings in which like numerals represent like components.


Embodiments of the present disclosure provide a method of visual odometry for a non-transitory computer readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a computing device, cause the computing device to perform steps comprising: performing data alignment among sensors including a LiDAR, cameras and an IMU-GPS module; collecting image data and generating point clouds; processing a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; and establishing an optical flow for visual odometry.


In an embodiment, the method further includes: receiving a first image of a first pair of image frames, and extracting representative features from the first image of the first pair in a first convolution neural network (CNN); and receiving a second image of the first pair, and extracting representative features from the second image of the first pair in the first CNN.


In another embodiment, the method further includes: merging, in a first merge module, outputs from the first CNN; and decreasing feature map size in a second CNN.


In yet another embodiment, the method further includes: generating a first flow output for each layer in a first deconvolution neural network (DNN).


In still another embodiment, the method further includes: merging, in a second merge module, outputs from the second CNN and the first DNN, and generating a first motion estimate.


In yet still another embodiment, the method further includes: generating a second flow output for each layer in a second DNN, the second flow output serving as a first optical flow prediction.


In still yet another embodiment, the method further includes: in response to the first motion estimate, generating a first set of motion parameters associated with the first pair in a recurrent neural network (RNN).


In a further embodiment, the method further includes: training the visual odometry model by using at least one of the first optical flow prediction and the first set of motion parameters.


In an embodiment, the method further includes: receiving a first image of a second pair of image frames, and extracting representative features from the first image of the second pair in the first CNN; and receiving a second image of the second pair, and extracting representative features from the second image of the second pair in the first CNN.


In another embodiment, the method further includes: merging, in the first merge module, outputs from the first CNN; and decreasing feature map size in the second CNN.


In yet another embodiment, the method further includes: generating a first flow output for each layer in the first DNN.


In still another embodiment, the method further includes: merging, in the second merge module, outputs from the second CNN and the first DNN, and generating a second motion estimate.


In yet still another embodiment, the method further includes: generating a second flow output for each layer in the second DNN, the second flow output serving as a second optical flow prediction.


In still yet another embodiment, the method further includes: in response to the second motion estimate and the first set of motion parameters, generating a second set of motion parameters associated with the second pair in the RNN.


In a further embodiment, the method further includes: training the visual odometry model by using at least one of the second optical flow prediction and the second set of motion parameters.





BRIEF DESCRIPTION OF THE DRAWINGS

It should be noted that the drawing figures may be in simplified form and might not be to precise scale. In reference to the disclosure herein, for purposes of convenience and clarity only, directional terms such as top, bottom, left, right, up, down, over, above, below, beneath, rear, front, distal, and proximal are used with respect to the accompanying drawings. Such directional terms should not be construed to limit the scope of the embodiment in any manner.



FIG. 1 is a flow diagram showing a method of visual odometry, in accordance with an embodiment;



FIG. 2 is a block diagram of a system for visual odometry, in accordance with an embodiment;



FIG. 3A is a block diagram showing the system illustrated in FIG. 2 in more detail;



FIG. 3B is a schematic block diagram showing operation of the system illustrated in FIG. 3A;



FIG. 4 is a flow diagram showing a method for visual odometry, in accordance with still another embodiment;



FIG. 5 is a flow diagram showing a method for visual odometry, in accordance with yet another embodiment;



FIG. 6 is a flow diagram showing a method of visual odometry, in accordance with yet still another embodiment;



FIG. 7 is a flow diagram showing a method of visual odometry, in accordance with a further embodiment;



FIGS. 8A and 8B are flow diagrams showing a method of visual odometry, in accordance with a still further embodiment; and



FIG. 9 is a block diagram of a system for generating a ground truth dataset for motion planning, in accordance with some embodiments.





DETAILED DESCRIPTION OF THE EMBODIMENTS

The present disclosure and its various embodiments can now be better understood by turning to the following detailed description of the embodiments, which are presented as illustrative examples of the subject matter defined in the claims. It is expressly understood that the subject matter as defined by the claims may be broader than the illustrated embodiments described below.


Any alterations and modifications in the described embodiments, and any further applications of principles described in this document are contemplated as would normally occur to one of ordinary skill in the art to which the disclosure relates. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, when an element is referred to as being “connected to” or “coupled to” another element, it may be directly connected to or coupled to the other element, or intervening elements may be present.


In the drawings, the shape and thickness may be exaggerated for clarity and convenience. This description will be directed in particular to elements forming part of, or cooperating more directly with, an apparatus in accordance with the present disclosure. It is to be understood that elements not specifically shown or described may take various forms. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.


In the drawings, the figures are not necessarily drawn to scale, and in some instances the drawings have been exaggerated and/or simplified in places for illustrative purposes. One of ordinary skill in the art will appreciate the many possible applications and variations of the present disclosure based on the following illustrative embodiments of the present disclosure.


The appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be appreciated that the following figures are not drawn to scale; rather, these figures are merely intended for illustration.


It will be understood that singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, relative terms, such as “bottom” and “top,” may be used herein to describe one element's relationship to other elements as illustrated in the Figures.


Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.


Many alterations and modifications may be made by those having ordinary skill in the art without departing from the spirit and scope of the embodiment. Therefore, it must be understood that the illustrated embodiment has been set forth only for the purposes of example and that it should not be taken as limiting the embodiment as defined by the following claims. For example, notwithstanding the fact that the elements of a claim are set forth below in a certain combination, it must be expressly understood that the embodiment includes other combinations of fewer, more, or different elements, which are disclosed herein even when not initially claimed in such combinations.


The words used in this specification to describe the embodiment and its various embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification structure, material or acts beyond the scope of the commonly defined meanings. Thus if an element can be understood in the context of this specification as including more than one meaning, then its use in a claim must be understood as being generic to all possible meanings supported by the specification and by the word itself.


The definitions of the words or elements of the following claims therefore include not only the combination of elements which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result.


In this sense it is therefore contemplated that an equivalent substitution of two or more elements may be made for any one of the elements in the claims below or that a single element may be substituted for two or more elements in a claim. Although elements may be described above as acting in certain combinations and even initially claimed as such, it is to be expressly understood that one or more elements from a claimed combination can in some cases be excised from the combination and that the claimed combination may be directed to a subcombination or variation of a subcombination.


Reference is now made to the drawings wherein like numerals refer to like parts throughout.


As used herein, the term “wireless” refers to wireless communication to a device or between multiple devices. Wireless devices may be anchored to a location and/or hardwired to a power system, depending on the needs of the business, venue, event or museum. In one embodiment, wireless devices may be enabled to connect to the Internet, but do not need to transfer data to and from the Internet in order to communicate within the wireless information communication and delivery system.


As used herein, the term “Smart Phone” or “smart phone” or “mobile device(s)” or “cellular phone” or “cellular” or “mobile phone” or the like refers to a wireless communication device that includes, but is not limited to: an integrated circuit (IC), chip set, chip, or system-on-a-chip including a low noise amplifier, power amplifier, Application Specific Integrated Circuit (ASIC), digital integrated circuits, a transceiver, receiver, or transmitter; dynamic, static or non-transitory memory device(s); one or more computer processor(s) to process received and transmitted signals, for example, to and from the Internet and other wireless devices, and to provide communication within the wireless information communication and delivery system, including sending, broadcasting, and receiving information, signal data and location data; a bus line; an antenna to transmit and receive signals; and a power supply such as a rechargeable battery or power storage unit. The chip or IC may be constructed (“fabricated”) on a “die” cut from, for example, a Silicon, Sapphire, Indium Phosphide, or Gallium Arsenide wafer. The IC may be, for example, analogue or digital on a chip or a hybrid combination thereof. Furthermore, digital integrated circuits may contain anything from one to thousands or millions of signal inverters and logic gates, e.g., “and”, “or”, “nand” and “nor” gates, flipflops, multiplexors, etc., on a square area that occupies only a few millimeters. The small size of ICs allows these circuits to provide high-speed operation, low power dissipation, and reduced manufacturing cost compared with more complicated board-level integration.


As used herein, the terms “wireless”, “wireless data transfer,” “wireless tracking and location system,” “positioning system” and “wireless positioning system” refer without limitation to any wireless system that transfers data or communicates or broadcasts a message, which communication may include location coordinates or other information using one or more devices, e.g., wireless communication devices.


As used herein, the terms “module” or “modules” refer without limitation to any software, software program(s), firmware, or actual hardware or combination thereof that has been added on, downloaded, updated, transferred or originally part of a larger computation or transceiver system that assists in or provides computational ability including, but not limited to, logic functionality to assist in or provide communication broadcasts of commands or messages, which communication may include location coordinates or communications between, among, or to one or more devices, e.g., wireless communication devices.



FIG. 1 is a flow diagram showing a method 100 of visual odometry, in accordance with an embodiment.


In some embodiments in accordance with the present disclosure, a non-transitory computer readable storage medium is provided. The non-transitory computer readable storage medium stores one or more programs. When the one or more programs are executed by a processing unit of a computing device, i.e., a computing device that is part of a vehicle, the computing device is caused to conduct specific operations set forth below in accordance with some embodiments of the present disclosure.


In some embodiments, as illustrated in FIG. 9, examples of the non-transitory computer readable storage medium include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM). In certain embodiments, the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In some embodiments, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).


In some embodiments in accordance with the present disclosure, in operation, a client application is transmitted to the computing device upon a request of a user, for example, via a first client device 910 such as a smart phone (see FIG. 9), which downloads the application from a computer server. In operation, the application is installed in the vehicle. Accordingly, specific functions, such as calibrating sensors, synchronizing time, and sending and receiving calibration files for data alignment purposes, may be executed by the user through the computing device.


In particular, referring to FIG. 1, in operation 102, data alignment, which includes sensor calibration and time synchronization, is performed. A vehicle is equipped with multiple complementary sensors which require calibration in order to represent sensed information in a common coordinate system. In an embodiment, sensors employed in the method include a light detection and ranging (LiDAR) sensor, one or more cameras such as monocular cameras or stereo cameras, and an inertial navigation module. The LiDAR sensor and the cameras are mounted on the roof of the vehicle. LiDAR sensors have become increasingly common in both industrial and robotic applications. LiDAR sensors are particularly desirable for their direct distance measurements and high accuracy. In an embodiment according to the present disclosure, the LiDAR sensor is equipped with multiple beams rotating simultaneously at varying angles, for example, a 64-beam rotating LiDAR. The multiple-beam LiDAR provides at least an order of magnitude more data than a single-beam LiDAR and enables new applications in mapping, object detection and recognition, scene understanding, and simultaneous localization and mapping (SLAM).


The inertial navigation module in an embodiment according to the present disclosure includes a global navigation satellite system (GNSS)-inertial measurement unit (IMU) module or an IMU-global positioning system (GPS) module. The GNSS satellite signals are used to correct or calibrate a solution from the IMU. The benefits of using GNSS with an IMU are that the IMU may be calibrated by the GNSS signals and that the IMU can provide position and angle updates at a quicker rate than GNSS. For highly dynamic vehicles, the IMU fills in the gaps between GNSS positions. Additionally, GNSS may lose its signal, and the IMU can continue to compute the position and angle during the period of lost GNSS signal. The two systems are complementary and are often employed together. An integrated navigation system consisting of an IMU and GPS is usually preferred due to its reduced dependency on a GPS-only navigator in areas prone to poor signal reception or affected by multipath. The performance of the integrated system largely depends upon the quality of the IMU and the integration methodology. Considering the restricted use of high-grade IMUs and their associated price, low-cost IMUs are becoming the preferred choice for civilian navigation purposes. MEMS-based inertial sensors have made the development of civilian land vehicle navigation possible because they offer small size and low cost.


The data alignment among the sensors includes calibrating intrinsic parameters of the camera, and calibrating extrinsic parameters between the camera and the inertial navigation module. Moreover, the transformation between the inertial navigation module and the LiDAR coordinate system may be achieved by a method similar to that described in “Unsupervised Calibration for Multi-beam Lasers” by Levinson, Jesse and Sebastian Thrun, Experimental Robotics, Springer Berlin Heidelberg, 2014. Modifications made in the method 100 include, for example, that the intrinsic parameters of each beam are calibrated in advance using a supervised method. Also, LiDAR scans are collected in the form of a sweep. A sweep is defined as a scan coverage of the LiDAR sensor rotating from 0 degrees to 360 degrees. Moreover, motion distortion within the sweep is corrected assuming that the angular and linear velocity of the LiDAR motion is constant. In an embodiment, a means for alleviating motion distortion is applied to the LiDAR points so that every point in the same LiDAR sweep is associated with an identical timestamp.
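By way of illustration only, the following Python sketch shows one way the constant-velocity correction of motion distortion described above could be implemented. The function name undistort_sweep, the use of SciPy rotation interpolation, and all variable names are assumptions of this sketch and are not part of the disclosed method.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def undistort_sweep(points, timestamps, pose_start, pose_end):
    """Re-express every point of one LiDAR sweep in the sweep-end frame,
    assuming constant linear and angular velocity during the sweep.

    points     : (N, 3) points in the sensor frame at their capture times
    timestamps : (N,) per-point capture times
    pose_start : (4, 4) sensor pose (in a local frame) at the start of the sweep
    pose_end   : (4, 4) sensor pose at the end of the sweep
    """
    t0, t1 = float(timestamps.min()), float(timestamps.max())
    alpha = (timestamps - t0) / max(t1 - t0, 1e-9)      # fraction of the sweep

    # Interpolate rotation (slerp) and translation (linearly) for each point.
    key_rots = Rotation.from_matrix(np.stack([pose_start[:3, :3], pose_end[:3, :3]]))
    rots = Slerp([0.0, 1.0], key_rots)(alpha)
    trans = (1.0 - alpha)[:, None] * pose_start[:3, 3] + alpha[:, None] * pose_end[:3, 3]

    # Move each point into the local frame, then back into the sweep-end frame.
    pts_local = rots.apply(points) + trans
    R_end, t_end = pose_end[:3, :3], pose_end[:3, 3]
    return (pts_local - t_end) @ R_end                   # rows are R_end.T @ (p - t_end)
```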


It is assumed that the environment is generally static and contains some 3D features, i.e., it is not just smooth ground. In order to achieve an accurate calibration, LiDAR measurements are recorded as the vehicle transitions through a series of known poses. Global pose information is irrelevant, as there is no existing map, so only local pose information is required. Local pose data may be acquired in any number of ways, e.g. from a wheel encoder and IMU, from an integrated GPS/IMU system, or from a GPS system with real-time corrections.


Furthermore, the transformation between the cameras and the LiDAR coordinate system may be calibrated using a method similar to that described in “Automatic Camera and Range Sensor Calibration Using a Single Shot” by Geiger, Andreas, et al., Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE, 2012. Modifications made in the method 100 include, for example, that the intrinsic parameters of the cameras are calibrated in advance using a method described in “A Flexible New Technique for Camera Calibration” by Z. Zhang, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000. Also, the cameras include monocular cameras, which are calibrated by multiple shots instead of a single shot. Moreover, registration is made by minimizing reprojection error and translation norm.
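For illustration, the reprojection error that the registration described above seeks to minimize could be computed as in the following Python sketch; the calibrated LiDAR-to-camera rotation R, translation t, intrinsic matrix K, and the function name are assumptions of this sketch.

```python
import numpy as np

def reprojection_error(K, R, t, lidar_points, image_points):
    """Mean reprojection error of LiDAR points against their observed pixels.

    K            : (3, 3) camera intrinsic matrix (e.g. from Zhang's method)
    R, t         : (3, 3) rotation and (3,) translation from LiDAR to camera frame
    lidar_points : (N, 3) 3D points in the LiDAR frame (assumed in front of the camera)
    image_points : (N, 2) corresponding pixel observations
    """
    cam = lidar_points @ R.T + t            # points expressed in the camera frame
    uv = cam @ K.T                          # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    return float(np.mean(np.linalg.norm(uv - image_points, axis=1)))
```

Registration could then be formulated as minimizing this quantity, optionally together with a penalty on the translation norm, over the extrinsic parameters R and t.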


In addition to the calibration and transformation, time synchronization among the LiDAR sensor, cameras and inertial navigation module is achieved. Specifically, time synchronization between the LiDAR sensor and the inertial navigation module, between the inertial navigation module and the cameras, and between the LiDAR sensor and the cameras is achieved. In an embodiment, a time trigger is used to synchronize the LiDAR and cameras to ensure data alignment.
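As a minimal sketch of how the synchronized streams could be associated once the time trigger has been applied, assuming each sensor reading carries a timestamp (all names below are hypothetical):

```python
import numpy as np

def associate(cam_times, lidar_times, pose_times, tol=0.02):
    """Pair each camera frame with the nearest LiDAR sweep and vehicle pose in time.

    All arguments are 1-D arrays of timestamps in seconds; tol is the maximum
    accepted offset. Returns a list of (camera_idx, lidar_idx, pose_idx) triples.
    """
    matches = []
    for i, tc in enumerate(cam_times):
        j = int(np.argmin(np.abs(lidar_times - tc)))
        k = int(np.argmin(np.abs(pose_times - tc)))
        if abs(lidar_times[j] - tc) <= tol and abs(pose_times[k] - tc) <= tol:
            matches.append((i, j, k))
    return matches
```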


After data alignment is performed, in operation 104, these sensors are used to collect data in an environment. In an embodiment, images of the environment are captured by the cameras at approximately 30 Hz. LiDAR scans are collected in the form of a sweep at approximately 20 Hz. Vehicle poses, including position and orientation, are collected in an “east north up” (ENU) coordinate system by the inertial navigation module at approximately 50 Hz.


In operation 106, based on the data from the sensors, machine learning is performed in a visual odometry model. Inputs to the visual odometry model for machine learning include images obtained by the cameras and point clouds obtained by the LiDAR. In an embodiment, for monocular cameras, consecutive RGB image frames are input in pairs. In another embodiment, for stereo cameras, RGB images with depth information (RGB-D) are input. In machine learning, convolutional neural networks (CNNs) have become popular in many fields of computer vision. CNNs have been widely applied to classification, and recently presented architectures also allow per-pixel predictions such as semantic segmentation or depth estimation from single images. In an embodiment, as will be further discussed, a method of training CNNs end-to-end to learn to predict an optical flow field from a pair of images is disclosed.
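As a small illustration of the input format described above, consecutive frames could be grouped into pairs as follows; the array layout and function name are assumptions of this sketch.

```python
import numpy as np

def make_pairs(frames):
    """Group a time-ordered image sequence into consecutive pairs.

    frames : (T, H, W, C) array of RGB (or RGB-D) images
    returns: (T - 1, 2, H, W, C) array in which pair i holds frames i and i + 1
    """
    return np.stack([frames[:-1], frames[1:]], axis=1)
```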


In operation 108, a prediction of static optical flow for a pair of input image frames is generated. Moreover, in operation 110, a set of motion parameters for estimating a motion between the pair of input image frames is generated.


Subsequently, in operation 112, the visual odometry model is trained by using at least one of the prediction of static optical flow and the motion parameters.



FIG. 2 is a block diagram of a system 200 for visual odometry, in accordance with an embodiment.


Referring to FIG. 2, the system 200 includes a visual odometry model 24. The visual odometry model 24 includes one or more neural networks 241 and one or more merge modules 242. The neural networks may further include convolution neural networks (CNNs), deconvolution neural networks (DNNs) and a recurrent neural network (RNN). Moreover, the merge modules may include merge layers in the neural networks. In operation, the visual odometry model 24 receives images 201 from a camera and point clouds 202 from a LiDAR. Given vehicle poses 203 from an IMU-GPS module, the images 201 and the point clouds 202 are used to train the visual odometry model 24. The images 201, input in pairs to the visual odometry model 24, are matched against the point clouds 202. The visual odometry model 24, in response to the images 201 and point clouds 202, generates a prediction of static optical flow 207 and a set of motion parameters 208, which in turn may be used to train the visual odometry model 24.



FIG. 3A is a block diagram showing the system 200 illustrated in FIG. 2 in more detail.


Referring to FIG. 3A, the visual odometry model 24 in the system 200 includes a first neural network 31, a second neural network 35 and a third neural network 38. The first neural network 31 further includes a first CNN 311, a second CNN 312, and a first merge module 310 between the first CNN 311 and the second CNN 312. The first CNN 311 is configured to, in response to a pair of consecutive image frames 201, extract representative features from the pair of consecutive image frames 201. The first merge module 310 is configured to merge the representative features. In an embodiment, the representative features are merged by a patch-wise correlation, which is similar to that described in “Flownet: Learning Optical Flow with Convolutional Networks” by Fischer et al., arXiv preprint arXiv:1504.06852 (hereinafter referred to as “the reference”). In another embodiment, the representative features are merged by a simple concatenation. The second CNN 312 then decreases the feature map size. An output of the second CNN 312 constitutes a portion of a motion estimate 206.
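The following PyTorch-style sketch illustrates the structure just described, using a simple concatenation for the first merge module; a patch-wise correlation layer as in the cited reference could be substituted. All class names, layer sizes and channel counts are illustrative assumptions, not the disclosed network.

```python
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Shared-weight feature extractor applied to each image of a pair, a merge
    by concatenation, and a strided CNN that reduces the feature map size."""

    def __init__(self):
        super().__init__()
        self.first_cnn = nn.Sequential(          # per-image feature extraction
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        )
        self.second_cnn = nn.Sequential(         # shrinks the merged feature maps
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, img1, img2):
        f1 = self.first_cnn(img1)                # features of the first image
        f2 = self.first_cnn(img2)                # features of the second image
        merged = torch.cat([f1, f2], dim=1)      # merge module (concatenation variant)
        return self.second_cnn(merged)           # coarse features for the motion estimate
```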


The second neural network 35 further includes a first DNN 351, a second DNN 352 and a second merge module 350. The first DNN 351 is configured to, in response to an output from the second CNN 312, generate a first flow output for each layer at a first resolution. The first flow output, which may have a relatively low resolution, constitutes another portion of the motion estimate 206. The second merge module 350 is configured to, in response to the output from the second CNN 312 and the first flow output from the first DNN 351, merge these outputs by, for example, a patch-wise correlation or alternatively a simple concatenation as previously discussed, resulting in the motion estimate 206. The second DNN 352 is configured to, in response to the first flow output from the first DNN 351, generate a second flow output for each layer at a second resolution. The second resolution is higher than the first resolution. The second flow output, which may have a relatively high resolution, serves as a static scene optical flow.
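Continuing the illustrative sketch, the two deconvolution stages and the second merge module could be arranged as follows; all layer sizes and the merging detail (resizing the coarse flow and concatenating it with the encoder output) are assumptions of the sketch rather than the disclosed design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowDecoder(nn.Module):
    """Two deconvolution stages: the first produces a coarse flow that is merged
    with the encoder output into a motion estimate; the second refines the coarse
    flow into a higher-resolution static-scene flow."""

    def __init__(self, in_ch=512):
        super().__init__()
        self.first_dnn = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 2, 3, padding=1),     # coarse (u, v) flow
        )
        self.second_dnn = nn.Sequential(
            nn.ConvTranspose2d(2, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 3, padding=1),      # higher-resolution (u, v) flow
        )

    def forward(self, encoder_out):
        coarse_flow = self.first_dnn(encoder_out)
        # Second merge module: resize the coarse flow to the encoder resolution
        # and concatenate it with the encoder output to form the motion estimate.
        flow_rs = F.interpolate(coarse_flow, size=encoder_out.shape[-2:],
                                mode='bilinear', align_corners=False)
        motion_estimate = torch.cat([encoder_out, flow_rs], dim=1)
        static_flow = self.second_dnn(coarse_flow)
        return motion_estimate, static_flow
```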


The third neural network 38 includes an RNN. An RNN refers to a general type of neural network where the layers operate not only on the input data but also on delayed versions of the hidden layers and/or output. In this manner, an RNN has an internal state which it can use as “memory” to keep track of past inputs and its corresponding decisions. In an embodiment, the third neural network 38 includes a Long Short-Term Memory (LSTM) architecture. The LSTM architecture is employed to allow the RNN to learn longer-term trends. This is accomplished through the inclusion of gating cells which allow the neural network to selectively store and “forget” memories. The third neural network 38 is configured to, in response to the motion estimate 206 from the second merge module 350 and a set of motion parameters associated with an immediately previous pair of consecutive image frames (shown in FIG. 3B), generate a set of motion parameters 208 for the current pair of image frames 201.
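A minimal sketch of such a recurrent stage is shown below, assuming the motion parameters form a 6-DoF vector and the LSTM hidden state carries information from the previous pair; the pooling step, dimensions and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MotionRNN(nn.Module):
    """LSTM that maps each motion estimate to 6-DoF motion parameters
    (3 translations, 3 rotations), with a hidden state carried across pairs."""

    def __init__(self, feat_dim=514, hidden=256):   # 514 = 512 encoder channels + 2 flow channels
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)         # collapse the spatial grid
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)            # [tx, ty, tz, roll, pitch, yaw]

    def forward(self, motion_estimate, state=None):
        # motion_estimate: (B, C, H, W) merged features for one pair of frames
        x = self.pool(motion_estimate).flatten(1).unsqueeze(1)   # (B, 1, C): one time step
        out, state = self.lstm(x, state)
        return self.head(out[:, -1]), state          # motion parameters and new memory
```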



FIG. 3B is a schematic block diagram showing operation of the system 200 illustrated in FIG. 3A.


Referring to FIG. 3B, the first CNN 311 receives a first image 211 of a first pair of consecutive image frames 201 at time t1, and extracts representative features from the first image 211 of the first pair 201. Subsequently, the first CNN 311 receives a second image 212 of the first pair 201 at time t2, and extracts representative features from the second image 212 of the first pair 201. The extracted representative features are merged by the first merge module 310 and the merged features are reduced in feature map size by the second CNN 312. Next, the first DNN 351 generates a low-resolution flow output based on the output of the second CNN 312. The second merge module 350 generates a first motion estimate 261 by merging the output of the second CNN 312 and the low-resolution flow output of the first DNN 351. The second DNN 352 generates a first static scene optical flow 271 based on the low-resolution flow output of the first DNN 351. The RNN 38 generates a first set of motion parameters 281 based on the first motion estimate 261 and a set of motion parameters associated with an immediately previous pair of consecutive image frames.


Similarly, the first CNN 311 receives a first image 251 of a second pair of consecutive image frames at the time t2, and extracts representative features from the first image 251 of the second pair. Subsequently, the first CNN 311 receives a second image 252 of the second pair at time t3, and extracts representative features from the second image 252 of the second pair. The extracted representative features are merged by the first merge module 310 and the merged features are reduced in feature map size by the second CNN 312. Next, the first DNN 351 generates a low-resolution flow output based on the output of the second CNN 312. The second merge module 350 generates a second motion estimate 262 by merging the output of the second CNN 312 and the low-resolution flow output of the first DNN 351. The second DNN 352 generates a second static scene optical flow 272 based on the low-resolution flow output of the first DNN 351. The RNN 38 generates a second set of motion parameters 282 based on the second motion estimate 262 and a set of motion parameters 281 associated with the first pair of consecutive image frames 201.
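The per-pair processing just described could be driven by a loop such as the following, which continues the hypothetical sketches above; the recurrent state passed between iterations plays the role of the motion parameters associated with the immediately previous pair.

```python
import torch

# Placeholder frames standing in for images 211, 212, 251, 252, ...
images = [torch.randn(1, 3, 128, 416) for _ in range(4)]

encoder, decoder, rnn = SiameseEncoder(), FlowDecoder(), MotionRNN()
state = None                                      # memory carried from the previous pair
flows, motions = [], []

for t in range(len(images) - 1):
    features = encoder(images[t], images[t + 1])  # pair (t, t + 1)
    motion_estimate, static_flow = decoder(features)
    params, state = rnn(motion_estimate, state)   # previous pair informs the current one
    flows.append(static_flow)
    motions.append(params)
```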


In some existing approaches, hand-crafted features are employed to extract keypoints and descriptors and to find matching points from which motion parameters are solved. Such feature-based approaches may fail when the scene has no salient keypoints. In the present disclosure, an end-to-end trained deep network is employed for estimating motion parameters. The sequence-learning sub-network can eliminate accumulated errors. Flow is used to enhance motion estimation because motion has a strong connection with flow. The method according to the present disclosure gives the visual odometry model a higher generalization ability. Since the whole network is trained end-to-end, no hand-crafted features are required. The network adapts well to new scenarios or scenes, whereas feature-based methods fail in new scenarios and redesigning features costs significant effort and time. The deep network is also well suited to large amounts of data, because its model capacity and fully learned representation can handle big data well. Since the price of GPS devices continues to fall, GPS signals can be added to the deep network, and other signals can be added into the model as well. The present disclosure proposes a flow prediction for predicting motion and employs flow as additional information to enhance motion estimation. In addition, the designed structure can easily fuse additional signals, such as GPS signals.



FIG. 4 is a flow diagram showing a method 400 for visual odometry, in accordance with still another embodiment.


Referring to FIG. 4, in operation 402, data alignment among sensors including a LiDAR, cameras and an inertial navigation module such as an IMU-GPS module is performed.


In operation 404, image data are obtained from the camera and point clouds are obtained from the LiDAR.


In operation 406, in the IMU-GPS module, a pair of consecutive images in the image data is processed to recognize pixels corresponding to a same point in the point clouds.


Subsequently, in operation 408, an optical flow for visual odometry is established.


In this way, the LiDAR data, whose accuracy improves the learning process by increasing the IMU-GPS module's ability to accurately locate pixels in a pair of consecutive image frames, helps the IMU-GPS module learn to establish an optical flow: each time it processes image data, it generates a more precise optical flow. Consequently, with sufficient training, the IMU-GPS module is able to generate precise optical flows reflecting the movement of a vehicle.



FIG. 5 is a flow diagram showing a method 500 for visual odometry, in accordance with yet another embodiment.


Referring to FIG. 5, in operation 502, representative features from a pair of input images are extracted in a first convolution neural network (CNN).


In operation 504, outputs from the first CNN are merged in a first merge module. The outputs include the representative features of a first image of the pair and the representative features of a second image of the pair. Moreover, the merge may be achieved by a patch-wise correlation or a simple concatenation.


Next, in operation 506, the merged features are reduced in feature map size in a second CNN. The output of the second CNN constitutes a portion of a motion estimate.


In operation 508, a first flow output for each layer is generated in a first deconvolution neural network (DNN) at a first resolution. In an embodiment, the first flow output has a relatively low resolution. The first flow output constitutes another portion of the motion estimate.


In operation 510, outputs from the second CNN and the first DNN are merged in a second merge module, resulting in a motion estimate.


In operation 512, a second flow output for each layer is generated in a second DNN at a second resolution higher than the first resolution. In an embodiment, the second flow output has a relatively high resolution and serves as a static scene optical flow.


In operation 514, accumulated errors are reduced in a recurrent neural network (RNN). The RNN, by sequence learning and prediction, generates a set of motion parameters for estimating motion between the pair of consecutive input images.



FIG. 6 is a flow diagram showing a method 600 of visual odometry, in accordance with yet still another embodiment.


Referring to FIG. 6, in operation 602, in response to a first image of a pair of consecutive image frames, representative features are extracted from the first image of the pair in a first convolution neural network (CNN) in a visual odometry model.


Next, in operation 604, in response to a second image of the pair, representative features are extracted from the second image of the pair in the first CNN.


In operation 606, outputs from the first CNN are merged in a first merge module.


In operation 608, merged features are reduced in feature map size in a second CNN.


In operation 610, a first flow output for each layer is generated in a first deconvolution neural network (DNN).


In operation 612, outputs from the second CNN and the first DNN are then merged in a second merge module.


Next, in operation 614, a second flow output for each layer is generated in a second DNN. The second flow output serves as an optical flow prediction.


In operation 616, a set of motion parameters associated with the pair is generated in a recurrent neural network (RNN) in response to the motion estimate from the second merge module and a set of motion parameters associated with an immediately previous pair of input images.


In operation 618, the visual odometry model is trained by using at least one of the optical flow prediction and the set of motion parameters.
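As an illustration of this training operation, a single training step could combine a loss on the optical flow prediction with a loss on the set of motion parameters, as in the following sketch; the specific loss functions, the weight w_flow and the source of the ground-truth targets (for example, correspondences established from the LiDAR point clouds) are assumptions of this sketch.

```python
import torch.nn.functional as F

def training_step(optimizer, pred_flow, gt_flow, pred_motion, gt_motion, w_flow=0.5):
    """One optimization step combining the optical flow prediction loss with the
    motion parameter loss; loss forms and weight are illustrative only."""
    loss = w_flow * F.l1_loss(pred_flow, gt_flow) + F.mse_loss(pred_motion, gt_motion)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```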


In operation 620, it is determined whether the visual odometry model is sufficiently trained. If affirmative, in operation 622, the trained visual odometry model may enter a test mode. If not, then in operation 624, another pair of consecutive image frames is received. Moreover, in operation 626, the set of motion parameters is provided to the RNN.



FIG. 7 is a flow diagram showing a method 700 of visual odometry, in accordance with a further embodiment.


Referring to FIG. 7, in operation 702, in response to images in pairs, a prediction of static scene optical flow for each pair of the images is generated in a visual odometry model through deep learning.


In operation 704, a set of motion parameters for each pair of the images is generated in the visual odometry model.


In operation 706, the visual odometry model is trained by using the prediction of static scene optical flow and the motion parameters.


In operation 708, motion between a pair of consecutive image frames is predicted by the trained visual odometry model.



FIGS. 8A and 8B are flow diagrams showing a method 800 of visual odometry, in accordance with a still further embodiment.


Referring to FIG. 8A, in operation 802, a first image of a first pair of image frames is received, and representative features are extracted from the first image of the first pair in a first convolution neural network (CNN).


In operation 804, a second image of the first pair is received, and representative features are extracted from the second image of the first pair in the first CNN.


In operation 806, outputs from the first CNN are merged in the first merge module.


In operation 808, the merged features are decreased in feature map size in a second CNN.


In operation 810, a first flow output for each layer is generated in a first deconvolution neural network (DNN).


In operation 812, outputs from the second CNN and the first DNN are merged in a second merge module, resulting in a first motion estimate.


In operation 814, a second flow output for each layer is generated in a second DNN. The second flow output serves as a first optical flow prediction.


In operation 816, in response to the first motion estimate, a first set of motion parameters associated with the first pair is generated in a recurrent neural network (RNN).


Subsequently, in operation 818, the visual odometry model is trained by using at least one of the first optical flow prediction and the first set of motion parameters.


Referring to FIG. 8B, in operation 822, a first image of a second pair of image frames is received, and representative features are extracted from the first image of the second pair in the first CNN.


In operation 824, a second image of the second pair is received, and representative features are extracted from the second image of the second pair in the first CNN.


In operation 826, outputs from the first CNN are merged in the first merge module.


In operation 828, the merged features are decreased in feature map size in the second CNN.


In operation 830, a first flow output for each layer is generated in the first DNN.


In operation 832, outputs from the second CNN and the first DNN are merged in the second merge module, resulting in a second motion estimate.


In operation 834, a second flow output for each layer is generated in the second DNN. The second flow output serves as a second optical flow prediction.


In operation 836, in response to the second motion estimate and the first set of motion parameters, a second set of motion parameters associated with the second pair is generated in the RNN.


In operation 838, the visual odometry model is trained by using at least one of the second optical flow prediction and the second set of motion parameters.



FIG. 9 is a block diagram of a system 900 for generating a ground truth dataset for motion planning, in accordance with some embodiments.


Referring to FIG. 9, the system 900 includes a processor 901, a computer server 902, a network interface 903, an input and output (I/O) device 905, a storage device 907, a memory 909, and a bus or network 908. The bus 908 couples the network interface 903, the I/O device 905, the storage device 907 and the memory 909 to the processor 901.


Accordingly, the processor 901 is configured to enable the computer server 902, e.g., Internet server, to perform specific operations disclosed herein. It is to be noted that the operations and techniques described herein may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described embodiments, e.g., the processor 901, the computer server 902, or the like, may be implemented within one or more processing units, including one or more microprocessing units, digital signal processing units (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components.


The term “processing unit” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit including hardware may also perform one or more of the techniques of the present disclosure.


In some embodiments in accordance with the present disclosure, the computer server 902 is configured to utilize the I/O port 905 to communicate with external devices via a network 908, such as a wireless network. In certain embodiments, the I/O port 905 is a network interface component, such as an Ethernet card, an optical transceiver, a radio frequency transceiver, or any other type of device that can send and receive data from the Internet. Examples of network interfaces may include Bluetooth®, 3G and WiFi® radios in mobile computing devices as well as USB. Examples of wireless networks may include WiFi®, Bluetooth®, and 3G. In some embodiments, the Internet server 902 is configured to utilize the I/O port 905 to wirelessly communicate with a client device 910, such as a mobile phone, a tablet PC, a portable laptop or any other computing device with internet connectivity. Accordingly, electrical signals are transmitted between the computer server 902 and the client device 910.


In some embodiments in accordance with the present disclosure, the computer server 902 is a virtual server capable of performing any function a regular server has. In certain embodiments, the computer server 902 is another client device of the system 900. In other words, there may not be a centralized host for the system 900, and the client devices 910 in the system are configured to communicate with each other directly. In certain embodiments, such client devices 910 communicate with each other on a peer-to-peer (P2P) basis.


The processor 901 is configured to execute program instructions that include a tool module configured to perform a method as described and illustrated with reference to FIGS. 1, 4 through 7, 8A and 8B. Accordingly, the tool module is configured to execute the operations including: performing data alignment among sensors including a LiDAR, cameras and an IMU-GPS module; collecting image data and generating point clouds; processing, in the IMU-GPS module, a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; and establishing an optical flow for visual odometry.


The network interface 903 is configured to access program instructions and data accessed by the program instructions stored remotely through a network (not shown).


The I/O device 905 includes an input device and an output device configured for enabling user interaction with the system 900. In some embodiments, the input device comprises, for example, a keyboard, a mouse, and other devices. Moreover, the output device comprises, for example, a display, a printer, and other devices.


The storage device 907 is configured for storing program instructions and data accessed by the program instructions. In some embodiments, the storage device 907 comprises, for example, a magnetic disk and an optical disk.


The memory 909 is configured to store program instructions to be executed by the processor 901 and data accessed by the program instructions. In some embodiments, the memory 909 comprises a random access memory (RAM) and/or some other volatile storage device and/or read only memory (ROM) and/or some other non-volatile storage device including other programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a solid state drive (SSD), a compact disc ROM (CD-ROM), a floppy disk, a cassette, magnetic media, optical media, or other computer readable media. In certain embodiments, the memory 909 is incorporated into the processor 901.


Thus, specific embodiments and applications have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the disclosed concepts herein. The embodiment, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalent within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements. The claims are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, what can be obviously substituted and also what essentially incorporates the essential idea of the embodiment.

Claims
  • 1. A method of visual odometry for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the following steps comprising: performing data alignment among sensors including a light detection and ranging (LiDAR) sensor, cameras, and an IMU-GPS module; collecting image data and generating point clouds; processing a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; establishing an optical flow for visual odometry; receiving a first image of a first pair of image frames, and extracting representative features from the first image of the first pair in a first convolution neural network (CNN); receiving a second image of the first pair, and extracting representative features from the second image of the first pair in the first CNN; merging, in a first merge module, outputs from the first CNN; decreasing feature map size in a second CNN; generating a first flow output for each layer in a first deconvolution neural network (DNN); and merging, in a second merge module, outputs from the second CNN and the first DNN to generate a first motion estimate.
  • 2. The method according to claim 1 further comprising: generating a second flow output for each layer in a second DNN, the second flow output serves as a first optical flow prediction.
  • 3. The method according to claim 2 further comprising: in response to the first motion estimate, generating a first set of motion parameters associated with the first pair in a recurrent neural network (RNN).
  • 4. The method according to claim 3 further comprising: training the visual odometry model by using at least one of the first optical flow prediction and the first set of motion parameters.
  • 5. A method of visual odometry for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the following steps comprising: performing data alignment among sensors including a light detection and ranging (LiDAR) sensor, cameras and an IMU-GPS module; collecting image data and generating point clouds; processing a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; establishing an optical flow for visual odometry; receiving a first image of a second pair of image frames, and extracting representative features from the first image of the second pair in a first convolutional neural network (CNN); receiving a second image of the second pair, and extracting representative features from the second image of the second pair in the first CNN; merging, in a first merge module, outputs from the first CNN; decreasing feature map size in a second CNN; generating a first flow output for each layer in a first deconvolutional neural network (DNN); and merging, in the second merge module, outputs from the second CNN and the first DNN to generate a second motion estimate.
  • 6. The method according to claim 5 further comprising: generating a second flow output for each layer in the second DNN, the second flow output serves as a second optical flow prediction.
  • 7. The method according to claim 6 further comprising: in response to the second motion estimate and the first set of motion parameters, generating a second set of motion parameters associated with the second pair in the RNN.
  • 8. The method according to claim 7 further comprising: training the visual odometry model by using at least one of the second optical flow prediction and the second set of motion parameters.
  • 9. A system for visual odometry, the system comprising: an internet server, comprising: an I/O port, configured to transmit and receive electrical signals to and from a client device; a memory; one or more processing units; and one or more programs stored in the memory and configured for execution by the one or more processing units, the one or more programs including instructions for: performing data alignment among sensors including a light detection and ranging (LiDAR) sensor, cameras and an IMU-GPS module; collecting image data and generating point clouds; processing, in the IMU-GPS module, a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; establishing an optical flow for visual odometry; receiving a first image of a first pair of image frames, and extracting representative features from the first image of the first pair in a first convolution neural network (CNN); receiving a second image of the first pair and extracting representative features from the second image of the first pair in the first CNN; merging, in a first merge module, outputs from the first CNN; decreasing a feature map size in a second CNN; generating a first flow output for each layer in a first deconvolution neural network (DNN); and merging, in a second merge module, outputs from the second CNN and the first DNN to generate a first motion estimate.
  • 10. The system according to claim 9 further comprising: generating a second flow output for each layer in a second DNN, the second flow output serves as a first optical flow prediction.
  • 11. The system according to claim 10 further comprising: in response to the first motion estimate, generating a first set of motion parameters associated with the first pair in a recurrent neural network (RNN).
  • 12. The system according to claim 11 further comprising: training the visual odometry model by using at least one of the first optical flow prediction and the first set of motion parameters.
  • 13. A system for visual odometry, the system comprising: an internet server, comprising: an I/O port, configured to transmit and receive electrical signals to and from a client device; a memory; one or more processing units; and one or more programs stored in the memory and configured for execution by the one or more processing units, the one or more programs including instructions for: performing data alignment among sensors including a light detection and ranging (LiDAR) sensor, cameras and an IMU-GPS module; collecting image data and generating point clouds; processing, in the IMU-GPS module, a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds; establishing an optical flow for visual odometry; receiving a first image of a second pair of image frames, and extracting representative features from the first image of the second pair in a first convolution neural network (CNN); receiving a second image of the second pair and extracting representative features from the second image of the second pair in the first CNN; merging, in a first merge module, outputs from the first CNN; decreasing feature map size in a second CNN; generating a first flow output for each layer in a first deconvolutional neural network (DNN); and merging, in the second merge module, outputs from the second CNN and the first DNN to generate a second motion estimate.
  • 14. The system according to claim 13 further comprising: generating a second flow output for each layer in the second DNN, the second flow output serves as a second optical flow prediction.
  • 15. The system according to claim 14 further comprising: in response to the second motion estimate and the first set of motion parameters, generating a second set of motion parameters associated with the second pair in the RNN.
  • 16. The system according to claim 15 further comprising: training the visual odometry model by using at least one of the second optical flow prediction and the second set of motion parameters.
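The claims above wire together a fixed sequence of modules: a shared first CNN that extracts features from each image of a pair, a first merge module that combines the two feature maps, a second CNN that decreases the feature map size, a first deconvolutional network (DNN) that emits a flow output at each layer, a second merge module that produces a motion estimate, and a recurrent neural network (RNN) that converts successive motion estimates into motion parameters. The following is a minimal sketch of that data flow, assuming a PyTorch implementation; every module name, channel width, layer count, and the six-parameter pose head are illustrative assumptions rather than the configuration recited in the claims.

```python
# Minimal sketch only, assuming a PyTorch implementation; module names, channel
# widths, layer counts, and the 6-DoF pose head are illustrative assumptions,
# not the claimed network's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureCNN(nn.Module):
    """First CNN: extracts representative features from one image of a pair."""
    def __init__(self, in_channels=3, out_channels=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
        )

    def forward(self, image):
        return self.layers(image)


class EncoderCNN(nn.Module):
    """Second CNN: progressively decreases the feature map size."""
    def __init__(self, in_channels=128):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, merged_features):
        return self.layers(merged_features)


class FlowDNN(nn.Module):
    """Deconvolutional network: emits a flow output at each upsampling layer."""
    def __init__(self, in_channels=512):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(in_channels, 256, kernel_size=4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1)
        self.flow1 = nn.Conv2d(256, 2, kernel_size=3, padding=1)
        self.flow2 = nn.Conv2d(128, 2, kernel_size=3, padding=1)

    def forward(self, encoded):
        x1 = F.relu(self.up1(encoded))
        x2 = F.relu(self.up2(x1))
        # One flow output per deconvolution layer (coarse to fine).
        return [self.flow1(x1), self.flow2(x2)], x2


class OdometryRNN(nn.Module):
    """RNN: maps per-pair motion estimates to a sequence of motion parameters."""
    def __init__(self, feature_dim=512, hidden_dim=256, num_params=6):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_params)  # e.g. 3 translation + 3 rotation

    def forward(self, motion_estimates, state=None):
        out, state = self.lstm(motion_estimates, state)
        return self.head(out), state


class DeepOdometrySketch(nn.Module):
    """Illustrative end-to-end wiring of the components named in claims 9-16."""
    def __init__(self):
        super().__init__()
        self.feature_cnn = FeatureCNN()     # first CNN, shared by both images
        self.encoder = EncoderCNN()         # second CNN
        self.flow_dnn = FlowDNN()           # first DNN
        self.motion_head = nn.Sequential(   # second merge module -> motion estimate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512 + 128, 512))
        self.rnn = OdometryRNN()

    def forward(self, img_a, img_b, rnn_state=None):
        # First merge module: concatenate the pair's feature maps.
        merged = torch.cat([self.feature_cnn(img_a), self.feature_cnn(img_b)], dim=1)
        encoded = self.encoder(merged)
        flows, refined = self.flow_dnn(encoded)
        # Second merge module: combine second-CNN and first-DNN outputs.
        resized = F.interpolate(encoded, size=refined.shape[-2:])
        motion_estimate = self.motion_head(torch.cat([resized, refined], dim=1))
        # RNN: motion estimate for this pair -> motion parameters (claims 11 and 15).
        params, rnn_state = self.rnn(motion_estimate.unsqueeze(1), rnn_state)
        return flows, params, rnn_state
```

Training as recited in claims 12 and 16 could then combine a loss on the per-layer flow outputs (the optical flow prediction) with a loss on the RNN's motion parameters, for example flow_loss + pose_loss; the particular losses and their weighting are likewise assumptions, since the claims only require that at least one of the two signals be used.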
US Referenced Citations (187)
Number Name Date Kind
6535114 Suzuki et al. Mar 2003 B1
6777904 Degner et al. Aug 2004 B1
6975923 Spriggs Dec 2005 B2
7103460 Breed Sep 2006 B1
7689559 Canright Mar 2010 B2
7742841 Sakai et al. Jun 2010 B2
7783403 Breed Aug 2010 B2
7844595 Canright Nov 2010 B2
8041111 Wilensky Oct 2011 B1
8064643 Stein Nov 2011 B2
8082101 Stein Dec 2011 B2
8164628 Stein Apr 2012 B2
8175376 Marchesotti May 2012 B2
8271871 Marchesotti Sep 2012 B2
8346480 Trepagnier et al. Jan 2013 B2
8378851 Stein Feb 2013 B2
8392117 Dolgov Mar 2013 B2
8401292 Park Mar 2013 B2
8412449 Trepagnier et al. Apr 2013 B2
8478072 Aisaka Jul 2013 B2
8553088 Stein Oct 2013 B2
8706394 Trepagnier et al. Apr 2014 B2
8718861 Montemerlo et al. May 2014 B1
8788134 Litkouhi et al. Jul 2014 B1
8908041 Stein Dec 2014 B2
8917169 Schofield Dec 2014 B2
8963913 Baek Feb 2015 B2
8965621 Urmson et al. Feb 2015 B1
8981966 Stein Mar 2015 B2
8983708 Choe et al. Mar 2015 B2
8993951 Schofield Mar 2015 B2
9002632 Emigh Apr 2015 B1
9008369 Schofield Apr 2015 B2
9025880 Perazzi May 2015 B2
9042648 Wang May 2015 B2
9088744 Grauer et al. Jul 2015 B2
9111444 Kaganovich Aug 2015 B2
9117133 Barnes Aug 2015 B2
9117147 Frazier Aug 2015 B2
9118816 Stein Aug 2015 B2
9120485 Dolgov Sep 2015 B1
9122954 Srebnik Sep 2015 B2
9134402 Sebastian Sep 2015 B2
9145116 Clarke Sep 2015 B2
9147255 Zhang Sep 2015 B1
9156473 Clarke Oct 2015 B2
9176006 Stein Nov 2015 B2
9179072 Stein Nov 2015 B2
9183447 Gdalyahu Nov 2015 B1
9185360 Stein Nov 2015 B2
9191634 Schofield Nov 2015 B2
9214084 Grauer et al. Dec 2015 B2
9219873 Grauer et al. Dec 2015 B2
9233659 Rosenbaum Jan 2016 B2
9233688 Clarke Jan 2016 B2
9248832 Huberman Feb 2016 B2
9248835 Tanzmeister Feb 2016 B2
9251708 Rosenbaum Feb 2016 B2
9277132 Berberian Mar 2016 B2
9280711 Stein Mar 2016 B2
9282144 Tebay et al. Mar 2016 B2
9286522 Stein Mar 2016 B2
9297641 Stein Mar 2016 B2
9299004 Lin Mar 2016 B2
9315192 Zhu Apr 2016 B1
9317033 Ibanez-guzman et al. Apr 2016 B2
9317776 Honda Apr 2016 B1
9330334 Lin May 2016 B2
9342074 Dolgov et al. May 2016 B2
9347779 Lynch May 2016 B1
9355635 Gao May 2016 B2
9365214 Shalom Jun 2016 B2
9418549 Kang et al. Aug 2016 B2
9428192 Schofield Aug 2016 B2
9436880 Bos Sep 2016 B2
9438878 Niebla Sep 2016 B2
9443163 Springer Sep 2016 B2
9446765 Shalom Sep 2016 B2
9459515 Stein Oct 2016 B2
9466006 Duan Oct 2016 B2
9476970 Fairfield Oct 2016 B1
9490064 Hirosawa Nov 2016 B2
9494935 Okumura et al. Nov 2016 B2
9507346 Levinson et al. Nov 2016 B1
9513634 Pack et al. Dec 2016 B2
9531966 Stein Dec 2016 B2
9535423 Debreczeni Jan 2017 B1
9538113 Grauer et al. Jan 2017 B2
9547985 Tuukkanen Jan 2017 B2
9549158 Grauer et al. Jan 2017 B2
9555803 Pawlicki Jan 2017 B2
9568915 Berntorp et al. Feb 2017 B1
9599712 Van Der Tempel et al. Mar 2017 B2
9600889 Boisson et al. Mar 2017 B2
9602807 Crane et al. Mar 2017 B2
9620010 Grauer et al. Apr 2017 B2
9625569 Lange Apr 2017 B2
9628565 Stenneth et al. Apr 2017 B2
9649999 Amireddy et al. May 2017 B1
9690290 Prokhorov Jun 2017 B2
9701023 Zhang et al. Jul 2017 B2
9712754 Grauer et al. Jul 2017 B2
9720418 Stenneth Aug 2017 B2
9723097 Harris et al. Aug 2017 B2
9723099 Chen et al. Aug 2017 B2
9723233 Grauer et al. Aug 2017 B2
9726754 Massanell et al. Aug 2017 B2
9729860 Cohen et al. Aug 2017 B2
9738280 Rayes Aug 2017 B2
9739609 Lewis Aug 2017 B1
9746550 Nath et al. Aug 2017 B2
9753128 Schweizer et al. Sep 2017 B2
9753141 Grauer et al. Sep 2017 B2
9754490 Kentley et al. Sep 2017 B2
9760837 Nowozin et al. Sep 2017 B1
9766625 Boroditsky et al. Sep 2017 B2
9769456 You et al. Sep 2017 B2
9773155 Shotton et al. Sep 2017 B2
9779276 Todeschini et al. Oct 2017 B2
9785149 Wang et al. Oct 2017 B2
9805294 Liu et al. Oct 2017 B2
9810785 Grauer et al. Nov 2017 B2
9823339 Cohen Nov 2017 B2
9870624 Narang Jan 2018 B1
9953236 Huang et al. Apr 2018 B1
9971352 Mudalige May 2018 B1
10147193 Huang et al. Dec 2018 B2
20040239756 Aliaga et al. Dec 2004 A1
20060188131 Zhang et al. Aug 2006 A1
20070070069 Samarasekera Mar 2007 A1
20070230792 Shashua Oct 2007 A1
20080144925 Zhu Jun 2008 A1
20080249667 Horvitz et al. Oct 2008 A1
20090040054 Wang et al. Feb 2009 A1
20090208106 Dunlop Aug 2009 A1
20100049397 Liu et al. Feb 2010 A1
20100067745 Kovtun Mar 2010 A1
20100204964 Pack Aug 2010 A1
20100226564 Marchesotti Sep 2010 A1
20100281361 Marchesotti Nov 2010 A1
20100305755 Heracles Dec 2010 A1
20100315505 Michalke Dec 2010 A1
20110206282 Aisaka Aug 2011 A1
20110282622 Canter Nov 2011 A1
20120105639 Stein May 2012 A1
20120106800 Khan May 2012 A1
20120140076 Rosenbaum Jun 2012 A1
20120170801 De Oliveira et al. Jul 2012 A1
20120274629 Baek Nov 2012 A1
20120281904 Gong et al. Nov 2012 A1
20130182909 Rodriguez-Serrano Jul 2013 A1
20130298195 Liu Nov 2013 A1
20140145516 Hirosawa May 2014 A1
20140198184 Stein Jul 2014 A1
20140270484 Chandraker et al. Sep 2014 A1
20150062304 Stein Mar 2015 A1
20150353082 Lee Dec 2015 A1
20160037064 Stein Feb 2016 A1
20160055237 Tuzel Feb 2016 A1
20160094774 Li Mar 2016 A1
20160129907 Kim May 2016 A1
20160165157 Stein Jun 2016 A1
20160210528 Duan Jul 2016 A1
20160266256 Allen Sep 2016 A1
20160292867 Martini Oct 2016 A1
20160321381 English et al. Nov 2016 A1
20160334230 Ross et al. Nov 2016 A1
20160350930 Lin et al. Dec 2016 A1
20160375907 Erban Dec 2016 A1
20170053412 Shen et al. Feb 2017 A1
20170213134 Beyeler Jul 2017 A1
20170277197 Liao Sep 2017 A1
20170301111 Zhao Oct 2017 A1
20180046187 Martirosyan Feb 2018 A1
20180047147 Viswanathan Feb 2018 A1
20180053056 Rabinovich Feb 2018 A1
20180121762 Han May 2018 A1
20180136660 Mudalige May 2018 A1
20180137633 Chang May 2018 A1
20180157918 Levkova Jun 2018 A1
20180232906 Kim Aug 2018 A1
20180239969 Lakehal-ayat Aug 2018 A1
20180260956 Huang et al. Sep 2018 A1
20180365835 Yan et al. Dec 2018 A1
20190079534 Zhu et al. Mar 2019 A1
20190236393 Wang et al. Aug 2019 A1
20190259176 Dai Aug 2019 A1
Foreign Referenced Citations (32)
Number Date Country
103984915 Aug 2014 CN
105518744 Apr 2016 CN
105574505 May 2016 CN
106096568 Nov 2016 CN
106971178 Jul 2017 CN
1754179 Feb 2007 EP
2448251 May 2012 EP
2463843 Jun 2012 EP
2463843 Jul 2013 EP
2579211 Oct 2013 EP
2761249 Aug 2014 EP
2463843 Jul 2015 EP
2448251 Oct 2015 EP
2946336 Nov 2015 EP
299365 Mar 2016 EP
3081419 Oct 2016 EP
WO2005098739 Oct 2005 WO
WO2005098751 Oct 2005 WO
WO2005098782 Oct 2005 WO
WO2010109419 Sep 2010 WO
WO2013045612 Apr 2013 WO
WO2014111814 Jul 2014 WO
WO2014111814 Jul 2014 WO
WO2014201324 Dec 2014 WO
WO2015083009 Jun 2015 WO
WO2015103159 Jul 2015 WO
WO2015125022 Aug 2015 WO
WO201518600 Dec 2015 WO
WO2015186002 Dec 2015 WO
WO2016135736 Sep 2016 WO
WO2017013875 Jan 2017 WO
WO2018015716 Jan 2018 WO
Non-Patent Literature Citations (49)
Entry
P. Fischer, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852 [cs.CV], Apr. 26, 2015 (Year: 2015).
Deep Convolutional Neural Network for Image Deconvolution, Li Xu et al., p. 1-9, 2014, retrieved from the internet at https://papers.nips.cc/paper/5485-deep-convolutional-neural-network-for-image-deconvolution.pdf. (Year: 2014).
Athanasiadis, Thanos et al., “Semantic Image Segmentation and Object Labeling”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 3, Mar. 2007.
Cordts, Marius et al. The Cityscapes Dataset for Semantic Urban Scene Understanding, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, 2016.
Dai, Jifeng et al., (Microsoft Research), “Instance-aware Semantic Segmentation via Multi-task Network Cascades”, CVPR 2016.
Fischer, et al. Flownet: Learning Optical Flow with Convolutional Networks. arXiv preprint arXiv:1504.06852. May 4, 2015.
Gould, et al. “Decomposing a Scene into Geometric and Semantically Consistent Regions,” 2009 IEEE 12th International Conference on Computer Vision (ICCV), 2009 (Year: 2009).
Geiger, Andreas et al., “Automatic Camera and Range Sensor Calibration using a single Shot”, Robotics and Automation (ICRA), pp. 1-8, 2012 IEEE International Conference.
Guarneri, P. et al. "A Neural-Network-Based Model for the Dynamic Simulation of the Tire/Suspension System While Traversing Road Irregularities," in IEEE Transactions on Neural Networks, vol. 19, No. 9, pp. 1549-1563, Sep. 2008.
Hou, Xiaodi and Zhang, Liqing, “Dynamic Visual Attention: Searching for Coding Length Increments”, Advances in Neural Information Processing Systems, vol. 21, pp. 681-688, 2008.
Hou, Xiaodi and Zhang, Liqing, “Saliency Detection: A Spectral Residual Approach”, Computer Vision and Pattern, CVPR'07—IEEE Conference, pp. 1-8, 2007.
Hou, Xiaodi and Zhang, Liqing, “Thumbnail Generation Based on Global Saliency”, Advances in Cognitive Neurodynamics, ICCN 2007, pp. 999-1003, Springer Netherlands, 2008.
Hou, Xiaodi et al. “Color Conceptualization”, Proceedings of the 15th ACM International Conference on Multimedia, pp. 265-268, ACM, 2007.
Hou, Xiaodi et al., “Boundary Detection Benchmarking: Beyond F-Measures”, Computer Vision and Pattern Recognition, CVPR'13, vol. 2013, pp. 1-8, IEEE, 2013.
Hou, Xiaodi et al., "A Meta-Theory of Boundary Detection Benchmarks", arXiv preprint arXiv:1302.5985, 2013.
Hou, Xiaodi et al., “A Time-Dependent Model of Information Capacity of Visual Attention”, International Conference on Neural Information Processing, pp. 127-136, Springer Berlin Heidelberg, 2006.
Hou, Xiaodi et al., “Image Signature: Highlighting Sparse Salient Regions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 1, pp. 194-201, 2012.
Hou, Xiaodi, “Computational Modeling and Psychophysics in Low and Mid-Level Vision”, California Institute of Technology, 2014.
Huval, Brody et al. "An Empirical Evaluation of Deep Learning on Highway Driving", arXiv:1504.01716v3 [cs.RO] Apr. 17, 2015.
International Application No. PCT/US19/35207, International Search Report and Written Opinion dated Aug. 22, 2019.
International Application No. PCT/US2018/037644, International Search Report and Written Opinion dated Nov. 22, 2018.
Jain, Suyog Dutt, Grauman, Kristen, "Active Image Segmentation Propagation", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Jun. 2016.
Kendall, Alex, Gal, Yarin, "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", arXiv:1703.04977v1 [cs.CV] Mar. 15, 2017.
Levinson, Jesse et al., Experimental Robotics, Unsupervised Calibration for Multi-Beam Lasers, pp. 179-194, 12th Ed., Oussama Khatib, Vijay Kumar, Gaurav Sukhatme (Eds.) Springer-Verlag Berlin Heidelberg 2014.
Zhou, Bolei and Hou, Xiaodi and Zhang, Liqing, "A Phase Discrepancy Analysis of Object Motion", Asian Conference on Computer Vision, pp. 225-238, Springer Berlin Heidelberg, 2010.
Li, Yanghao and Wang, Naiyan and Liu, Jiaying and Hou, Xiaodi, "Demystifying Neural Style Transfer", arXiv preprint arXiv:1701.01036, 2017.
Li, Yanghao and Wang, Naiyan and Liu, Jiaying and Hou, Xiaodi, "Factorized Bilinear Models for Image Recognition", arXiv preprint arXiv:1611.05709, 2016.
Li, Yanghao et al., “Revisiting Batch Normalization for Practical Domain Adaptation”, arXiv preprint arXiv:1603.04779, 2016.
Li, Yin et al., “The Secrets of Salient Object Segmentation”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280-287, 2014.
MacAodha, Oisin, et al., "Hierarchical Subquery Evaluation for Active Learning on a Graph", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
Wang, Panqu et al., "Understanding Convolution for Semantic Segmentation", arXiv preprint arXiv:1702.08502, 2017.
Office Action from Chinese Application No. 201711313905.4, dated Mar. 27, 2019.
Paszke, Adam et al. Enet: A deep neural network architecture for real-time semantic segmentation. CoRR, abs/1606.02147, 2016.
Ramos, Sebastian et al., "Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep Learning and Geometric Modeling", arXiv:1612.06573v1 [cs.CV] Dec. 20, 2016.
Richter, Stephan R. et al., "Playing for Data: Ground Truth from Computer Games", Intel Labs, European Conference on Computer Vision (ECCV), Amsterdam, the Netherlands, 2016.
Schroff, Florian et al. (Google), “FaceNet: A Unified Embedding for Face Recognition and Clustering”, CVPR 2015.
Spinello, Luciano, et al., “Multiclass Multimodal Detection and Tracking in Urban Environments”, Sage Journals, vol. 29 issue: 12, pp. 1498-1515 Article first published online: Oct. 7, 2010; issue published: Oct. 1, 2010.
Wei, Junqing et al. “A Prediction- and Cost Function-Based Algorithm for Robust Autonomous Freeway Driving”, 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, USA, Jun. 21-24, 2010.
Welinder, P. et al. "The Multidimensional Wisdom of Crowds"; http://www.vision.caltech.edu/visipedia/papers/WelinderEtalNIPS10.pdf, 2010.
Xu, Li et al. Deep Convolutional Neural Network for Image Deconvolution, p. 1-9, (2014) [retrieved from the internet at https://papers.nips.cc/paper/5485-deep-convolutional-neural-network-for-image-deconvolution.pdf. (Year: 2014)].
Yang et al., "Dense Captioning with Joint Inference and Visual Context," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2017 (Year: 2017).
Yang, C. et al., “Neural Network-Based Motion Control of an Underactuated Wheeled Inverted Pendulum Model,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 25, No. 11, pp. 2004-2016, Nov. 2014.
Yu, Kai et al., "Large-scale Distributed Video Parsing and Evaluation Platform", Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, China, arXiv:1611.09580v1 [cs.CV] Nov. 29, 2016.
Zhang, Z. et al. A Flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence (vol. 22, Issue 11, Nov. 2000).
Farenzena, M. et al., "Person Re-identification by Symmetry-Driven Accumulation of Local Features," IEEE Conference on Computer Vision and Pattern Recognition, pp. 2360-2367, Dec. 31, 2010.
Ahmed, Ejaz et al. An Improved Deep Learning Architecture for person re-identification. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3908-3916 Jun. 12, 2015.
Khan, Furqan M. et al. Multi-Shot Person Re-Identification Using Part Appearance Mixture, IEEE Winter Conference on Applications of Computer Vision, pp. 605-614, Mar. 31, 2017.
Zheng, L. et al., “Scalable Person Re-identification: A Benchmark,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 1116-1124.
McLaughlin, N. et al. "Recurrent Convolutional Network for Video-Based Person Re-identification," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 1325-1334.
Related Publications (1)
Number Date Country
20190080470 A1 Mar 2019 US