The present disclosure relates generally to machine learning systems. More particularly, the present disclosure relates to implementing systems and methods for training and/or using machine learning models and algorithms.
Modern day vehicles have at least one on-board computer and have internet/satellite connectivity. The software running on these on-board computers monitors and/or controls operations of the vehicles. The vehicles also comprise cameras, radars and LiDAR sensors for detecting objects in proximity thereto. The cameras capture images of the scenes in proximity to the vehicles. The LiDAR sensors generate LiDAR datasets that measure the distance from the vehicle to the objects at a plurality of different times. These images and distance measurements can be used by machine learning models and/or algorithms for identifying objects, tracking movements of the objects, making predictions as to the objects' trajectories, and planning paths of travel for the vehicle based on the predicted object trajectories.
The present disclosure concerns implementing systems and methods for training and/or using a machine learning model or algorithm. The methods comprise: obtaining, by a computing device, a training data set comprising a collection of training examples (each training example comprising data point(s) (e.g., pixel values from an image or values of other sensor data) and a true value for a property to be predicted by the machine learning model/algorithm); selecting, by the computing device, a first subset of training examples from the collection of training examples based on, for example, at least one of a derivative vector of a loss function for each training example of the collection of training examples and an importance of each training example relative to other training examples; training, by the computing device, the machine learning model/algorithm using the first subset of training examples; and/or using the machine learning model/algorithm which has been trained to control operations of a mobile platform (e.g., an autonomous vehicle, articulating arm or other robotic device). A total number of training examples in the first subset of training examples is unequal to a total number of training examples in the collection of training examples. In this way, a portion of the training examples in the training data set are used to train the machine learning model/algorithm.
In some scenarios, the first subset of training examples is selected based on norms of derivative vectors of the loss function. Each derivative vector is determined for a respective training example of the collection of training examples contained in the training data set. For example, the training examples can be ranked in accordance with the norms of the derivative vectors of the loss function associated therewith. A given number of training examples with the best rankings are selected for inclusion in the first subset of training examples. All other training examples of the training data set are excluded from the first subset of training examples.
In those or other scenarios, the methods also comprise: selecting, by the computing device, a second subset of training examples based on at least one of a derivative vector of the loss function for each training example of the collection of training examples and an importance of each training example relative to the other training examples. The first subset of training examples is used in a first epoch of the training process and the second subset of training examples is used in a second epoch of the training process. A total number of training examples in the second subset is different than (e.g., greater than) the total number of training examples in the first subset.
Implementing systems of the above-described methods for image-based perception can include, but are not limited to, a processor and a non-transitory computer-readable storage medium comprising programming instructions that are configured to cause the processor to implement a method for training and/or using a machine learning model or algorithm.
The present solution will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures.
As used in this document, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” means “including, but not limited to.” Definitions for additional terms that are relevant to this document are included at the end of this Detailed Description.
An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement. The memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions.
The terms “memory,” “memory device,” “data store,” “data storage facility” and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices.
The terms “processor” and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions. Except where specifically stated otherwise, the singular term “processor” or “processing device” is intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process.
The term “vehicle” refers to any moving form of conveyance that is capable of carrying one or more human occupants and/or cargo and is powered by any form of energy. The term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like. An “autonomous vehicle” is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator. An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle.
In this document, when terms such as “first” and “second” are used to modify a noun, such use is simply intended to distinguish one item from another, and is not intended to require a sequential order unless specifically stated. In addition, terms of relative position such as “vertical” and “horizontal”, or “front” and “rear”, when used, are intended to be relative to each other and need not be absolute, and only refer to one possible position of the device associated with those terms depending on the device's orientation.
The term “spatial feature map” as used herein refers to a spatial-relational construct of an object. The spatial feature map is output from a function that converts or otherwise transforms a feature vector in one space (e.g., an image domain) into a feature vector in another space (e.g., a high-dimensional domain). For example, the function can return a spatial feature map comprising [a first detected feature identifier, a first detected feature classification, a first detected feature location in an image, a strength of a link from the first detected feature to a real object, a second detected feature identifier, a second detected feature classification, a second detected feature location in an image, a strength of a link from the second detected feature to the real object, . . . ] from an input vector [a first pixel identifier, a first pixel location, a first pixel color, a second pixel identifier, a second pixel location, a second pixel color, . . . ]. Each strength value of the spatial feature map can comprise a probabilistic strength of relation between the feature and a certain detected object (e.g., vehicle, pedestrian, bicycle, dog, etc.) in an image.
Machine learning models and/or algorithms can be used in various applications. For example, machine learning models and/or algorithms can be employed in image-based machine learning systems. Such image-based machine learning systems can be implemented in robotic systems (e.g., autonomous vehicles and articulating arms). The robotic systems may use the machine learning models and/or algorithms for various purposes, such as extracting features from multi-camera views and fusing the resulting perception features for cuboid association, using loss functions that iteratively process data points over multiple cycles.
In feature extraction scenarios, the machine learning models and/or algorithms may be trained to facilitate generation of spatial feature maps using captured images. The machine learning models and/or algorithms can include, but are not limited to, Convolutional Neural Networks (CNNs) and/or Recurrent Neural Networks (RNNs). Images may be input into trained CNN(s) and/or RNN(s) to produce output spatial feature maps. Each trained CNN/RNN takes a Red Green Blue (RGB) image as an input, and optionally outputs one or more predictions such as the class of the 2D image (e.g., a person, a vehicle, a cyclist, a dog, etc.). The class of the image is determined based on learned data patterns during training of the CNN/RNN. Each spatial feature map indicates a location and a strength of each detected feature in an image. The features can include, but are not limited to, edges, vertical lines, horizontal lines, bends and/or curves. A certain combination of features in a certain area of an image can indicate that a larger, more complex feature may exist in the image. For example, a spatial feature map could detect a cyclist from a combination of line features and circle features in an area of an image.
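By way of a non-limiting illustration only (the layer sizes, names, and framework below are assumptions and not the particular network of the present solution), a small convolutional feature extractor that maps an RGB image to a spatial feature map can be sketched in Python/PyTorch as follows.

    import torch
    import torch.nn as nn

    # Illustrative two-layer convolutional feature extractor (assumed sizes).
    feature_extractor = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB image in, 16 feature channels out
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 32 feature channels out
        nn.ReLU(),
    )

    rgb_image = torch.rand(1, 3, 224, 224)             # stand-in for a captured camera image
    spatial_feature_map = feature_extractor(rgb_image)
    # spatial_feature_map has shape (1, 32, 224, 224): a strength value for each of
    # 32 learned features (edges, lines, curves, ...) at every image location.

Each channel of such a map plays the role of a detected feature, and the per-location activations play the role of the feature strengths described above.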
The machine learning models and/or algorithms are trained in accordance with a novel process. For example, a machine learning model fθ(x) is trained with a training data set comprising a collection of training examples (x0, y0), (x1, y1), . . . , (xn, yn), where each component x0, x1, . . . , xn represents a sensor output (e.g., an image) comprising a collection of data points d1, d2, . . . , dr (e.g., pixel values for the image) and each component y0, y1, . . . , yn represents a label or ground truth. n and r are integers. The terms “label” and “ground truth” as used here both refer to a true value for a property to be predicted by the machine learning models/algorithms (e.g., a type of object (such as a cyclist), a 3D size of an object (e.g., a predicted cuboid), or a position of the object in an image). The training process generally involves processing the training examples iteratively during a plurality of epochs e0, e1, . . . , ew of a training process. w is an integer.
The term “epoch” as used herein in relation to conventional training processes refers to a period of time in which the full collection of training examples (x0, y0), (x1, y1), . . . , (xn, yn) is processed during the training of a machine learning model/algorithm. In contrast, the term “epoch” as used herein in relation to the present solution refers to a period of time in which a subset of the training examples (xi, yi) is processed during the training of a machine learning model/algorithm for optimizing a loss function, where i is an integer equal to or greater than zero and the subset comprises less than all of the training examples (x0, y0), (x1, y1), . . . , (xn, yn) of the training data set. The importance of the difference in these definitions for epoch will become evident as the discussion progresses.
In each such epoch of a conventional training process, all training examples of a training data set are iterated through to optimize a loss function l(yi, fθ(xi)). This iterative process may be costly, resource intensive and time consuming because the number of training examples n that are processed can be large (e.g., >1e6). In such conventional solutions, the training of a specific machine learning model/algorithm can take a relatively long amount of time (e.g., hours or days) on expensive and energy intensive equipment. This is undesirable in some applications.
The present solution provides a novel training process for machine learning models/algorithms with a reduced computation time, less resource intensity and/or an equivalent or improved loss function between observed and predicted outputs. The novel training process of the present solution involves obtaining a training data set (x0, y0), (x1, y1), . . . , (xn, yn). The training data set can be created using image(s) or other sensor data generated by one or more sensor(s) (e.g., cameras and/or LiDAR systems) on a mobile platform (e.g., an autonomous vehicle). The labels or ground truth values yi may be manually defined for each data point xi.
Next, a derivative vector θ of a loss function lθ(x) is determined for each training example (xi, yi). Techniques for determining a derivative vector of a loss function are well known. In some scenarios, the loss function lθ(x) generally involves comparing a true value yi with a predicted value yp to obtain an output representing a distance D (e.g., a Euclidean distance) between the true value yi and the predicted value yp. The distance D output from performance of the loss function lθ(x) should be small. The derivative vector θ is randomly initialized for use in a first epoch e0 in which a first subset of training examples is analyzed, and changed over all epochs e1, . . . , ew for the other subsets of training examples to iteratively improve the output of the loss function. The derivative vector θ may be changed in accordance with a known backpropagation algorithm in a direction opposite to the derivative of the loss function (i.e., in the direction of steepest descent of the loss function). The backpropagation algorithm generally computes the gradient of the loss function with respect to the weights of the neural network for a set of input-output training examples, often called a training batch.
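A minimal sketch of computing a per-training-example derivative vector norm with a generic automatic-differentiation library is given below. It assumes a PyTorch model and loss function; it illustrates the idea rather than the specific implementation of the present solution.

    import torch

    def per_example_grad_norms(model, loss_fn, examples):
        # Return the norm of the loss gradient (the "derivative vector")
        # for each training example (x_i, y_i).
        norms = []
        for x_i, y_i in examples:
            model.zero_grad()
            loss = loss_fn(model(x_i), y_i)   # distance between predicted and true values
            loss.backward()                   # backpropagation through the network weights
            sq_sum = sum((p.grad ** 2).sum() for p in model.parameters()
                         if p.grad is not None)
            norms.append(torch.sqrt(sq_sum).item())
        return norms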
The norms of the derivative vectors θ may then be used to select a subset si of the training examples in a training data set that should be used in an epoch of the training process. The importance of each training example relative to the other training examples in a given training data set may additionally or alternatively be used to select the training examples to be contained in the subset si. The subsets of training examples si are then used in the plurality of epochs e0, e1, . . . , ew to train a machine learning model or algorithm. The importance of each training example can be based on, for example, an uncertainty of each training example given the other training examples in the training data set, a confidence score for the label of the training example, types of features that data points in a training example are associated with (e.g., a line, curve, etc.), sizes of the features, and/or relative locations of features.
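One way such an importance score could be formed (an illustrative assumption, since the exact weighting is left open here) is as a blend of each example's derivative-vector norm and the confidence score of its label:

    def importance_scores(grad_norms, label_confidences, w_grad=0.7, w_conf=0.3):
        # Hypothetical importance measure: blend the (normalized) gradient norm
        # with the label confidence. The weights are assumptions, not prescribed values.
        max_norm = max(grad_norms) or 1.0
        return [w_grad * (g / max_norm) + w_conf * c
                for g, c in zip(grad_norms, label_confidences)]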
The training example subset selection process is performed such that a total number of training examples in a subset for first epoch(s) (e.g., epochs e0, e1, . . . , e5) is different (e.g., is less) than the total number of training examples in subsets for subsequent second epoch(s) (e.g., epochs e6, e7, . . . , e20). The total number of training examples may be increased or otherwise varied for subsequent epochs. For example, five percent of the total training examples of a training data set are to be used in a first epoch e0, while twenty percent of the total training examples of the training data set are to be used in a second subsequent epoch e1, . . . , or ew. These percentages can be predefined or dynamically determined based on a total number of training examples in the training data set and/or a confidence value associated with a label of each training example (e.g., reflecting the accuracy of the data).
The training examples of the training data set may be selected for inclusion in a subset si based on their rankings. The training examples can be ranked in accordance with the norms of their derivative vectors. The training examples with the greatest or best rankings are then selected for inclusion in the subset si. For example, M training examples of the training data set are selected which have the greatest or best ranking, and therefore are contained in a subset s1. N training examples are selected which have the greatest or best ranking, and therefore are contained in a subset s2. N may be the same as or different than M. In effect, only a subset of the training examples is used in each epoch as compared to prior art systems which use all training examples in every epoch. This feature of the present solution reduces computational time of the training process.
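The ranking-and-selection step, together with the growing per-epoch subset sizes described above (e.g., five percent in epoch e0 and twenty percent thereafter), can be sketched as follows; the helper names and schedule values are illustrative assumptions rather than the disclosure's implementation.

    import numpy as np

    def select_subset(scores, fraction):
        # Rank training examples by score (e.g., derivative-vector norm or
        # importance) and keep the best-ranked fraction for this epoch.
        n_select = max(1, int(len(scores) * fraction))
        ranked = np.argsort(scores)[::-1]        # largest score first
        return ranked[:n_select]                 # indices into the training data set

    # Assumed schedule: 5% of the examples in epoch e0, 20% in later epochs.
    schedule = [0.05] + [0.20] * 19
    # for epoch, fraction in enumerate(schedule):
    #     subset = select_subset(scores, fraction)
    #     train_one_epoch(model, training_set, subset)   # hypothetical helper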
The above described training process has a reduced training time without compromising inference performance, and provides many novel features. The novel features include, but are not limited to, the dynamic use of training data depending on the importance thereof and the production of side effect information about the training examples by collecting the derivative information per training example.
The present solution will be described below in the context of an autonomous vehicle application. The present solution is not limited to autonomous vehicle applications. The present solution can be used in other applications such as other robotic applications (e.g., to control an articulating arm).
Referring now to
A user 122 of the computing device 110 can perform user-software interactions to access the sensor data 124 and use the sensor data to generate training data sets 126 for machine learning model(s) or algorithm(s) 128. Each training data set 126 comprises a plurality of training examples (x0, y0), (x1, y1), . . . , (xn, yn). The user 122 can manually define the labels or ground truth values yi for each data point xi. The training data set 126 is then stored in datastore 112 (e.g., a database) and/or used by the computing device 110 during a training process to train the machine learning model(s)/algorithm(s) 128 to, for example, facilitate scene perception by another mobile platform using loss functions that iteratively process training examples over multiple cycles. The scene perception can be achieved via feature extraction using multi-camera views, object detection using the extracted features and/or object prediction (e.g., predicted cuboids and associations of predicted cuboids with detected objects). The training process will be described in detail below.
Once trained, the machine learning model(s)/algorithm(s) 128 is(are) deployed on the other mobile platforms such as vehicle 1021. Vehicle 1021 can travel along a road in a semi-autonomous or autonomous manner. Vehicle 1021 is also referred to herein as an Autonomous Vehicle (AV). The AV 1021 can include, but is not limited to, a land vehicle (as shown in
When an object is detected during scene perception, AV 1021 performs operations to: generate one or more possible object trajectories for the detected object; and analyze at least one of the generated possible object trajectories to determine whether or not there is at least a threshold possibility or likelihood that a collision will occur between the AV and the object if the AV is to follow a given vehicle trajectory. If not, the AV 1021 is caused to follow the given vehicle trajectory. If so, the AV 1021 is caused to (i) follow another vehicle trajectory with a relatively low probability of collision with the object or (ii) perform a maneuver to reduce the probability of collision with the object or avoid collision with the object (e.g., brake and/or change direction of travel).
Referring now to
As shown in
Operational parameter sensors that are common to both types of mobile platforms include, for example: a position sensor 236 such as an accelerometer, gyroscope and/or inertial measurement unit; a speed sensor 238; and an odometer sensor 240. The mobile platform also may have a clock 242 that the system uses to determine mobile platform time during operation. The clock 242 may be encoded into an on-board computing device, it may be a separate device, or multiple clocks may be available.
The mobile platform also will include various sensors that operate to gather information about the environment in which the mobile platform is traveling. These sensors may include, for example: a location sensor 260 (e.g., a Global Positioning System (GPS) device); and image-based perception sensors such as one or more cameras 262. The sensors also may include environmental sensors 268 such as a precipitation sensor and/or ambient temperature sensor. The image-based perception sensors may enable the mobile platform to detect objects that are within a given distance range of the mobile platform 200 in any direction, while the environmental sensors collect data about environmental conditions within the mobile platform's area of travel.
During operations, information is communicated from the sensors to the on-board computing device 220. The on-board computing device 220 can (i) cause the sensor information to be communicated from the mobile platform to an external device (e.g., computing device 110 of
Geographic location information may be communicated from the location sensor 260 to the on-board computing device 220, which may then access a map of the environment that corresponds to the location information to determine known fixed features of the environment such as streets, buildings, stop signs and/or stop/go signals.
In some scenarios, the on-board computing device 220 detects a moving object and performs operations when such a detection is made. For example, the on-board computing device 220 may generate one or more possible object trajectories for the detected object, and analyze the possible object trajectories to assess the risk of a collision between the object and the AV if the AV was to follow a given platform trajectory. If the risk does not exceed an acceptable threshold, then the on-board computing device 220 may cause the mobile platform 200 to follow the given platform trajectory. If the risk exceeds the acceptable threshold, the on-board computing device 220 performs operations to: (i) determine an alternative platform trajectory and analyze whether the collision can be avoided if the mobile platform follows this alternative platform trajectory; or (ii) cause the mobile platform to perform a maneuver (e.g., brake, accelerate, or swerve).
Referring now to
Computing device 300 may include more or fewer components than those shown in
Some or all components of the computing device 300 can be implemented as hardware, software and/or a combination of hardware and software. The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuits can include, but are not limited to, passive components (e.g., resistors and capacitors) and/or active components (e.g., amplifiers and/or microprocessors). The passive and/or active components can be adapted to, arranged to and/or programmed to perform one or more of the methodologies, procedures, or functions described herein.
As shown in
At least some of the hardware entities 314 perform actions involving access to and use of memory 312, which can be a Random Access Memory (RAM), a disk drive, flash memory, a Compact Disc Read Only Memory (CD-ROM) and/or another hardware device that is capable of storing instructions and data. Hardware entities 314 can include a disk drive unit 316 comprising a computer-readable storage medium 318 on which is stored one or more sets of instructions 320 (e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein. The instructions 320 can also reside, completely or at least partially, within the memory 312 and/or within the CPU 306 during execution thereof by the computing device 300. The memory 312 and the CPU 306 also can constitute machine-readable media. The term “machine-readable media”, as used here, refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 320. The term “machine-readable media”, as used here, also refers to any medium that is capable of storing, encoding or carrying a set of instructions 320 for execution by the computing device 300 and that cause the computing device 300 to perform any one or more of the methodologies of the present disclosure.
Referring now to
Method 400 begins with 402 and continues with 404 where a computing device obtains a training data set (e.g., training data set 126 of
Next in 406, a derivative vector θ of a loss function lθ(x) is determined for each training example (xi, yi). Techniques for determining derivative vectors for loss functions are well known. In some scenarios, the loss function lθ(x) involves comparing a true value yi with a predicted value yp to obtain an output representing a distance (e.g., a Euclidean distance) between the true value yi and the predicted value yp. The distance output by the loss function lθ(x) should be small. The derivative vector θ may be randomly initialized for use in a first epoch e0 in which a first subset of training examples s0 is analyzed, and changed over all epochs e1, . . . , ew for the other subsets of training examples si, . . . , sn to iteratively improve the output of the loss function. The derivative vector θ may be changed in accordance with a known backpropagation algorithm in a direction opposite to the derivative of the loss function (i.e., in the direction of steepest descent of the loss function). The backpropagation algorithm generally computes the gradient of the loss function with respect to the weights of the neural network for a set of input-output training examples, often called a training batch.
Upon completing 406, the computing device performs operations to select a subset of training examples that should be used in each epoch of the training process based on the norms of the derivative vectors θ associated with the training examples and/or the importance of each training example relative to the other training examples in the given training data set. The training example subset selection process is performed such that a total number of training examples in a subset for first epoch(s) (e.g., epochs e0, e1, . . . , e5) is different (e.g., is less) than the total number of training examples in subsets for subsequent second epoch(s) (e.g., epochs e6, e7, . . . , e20). The total number of training examples may be increased or otherwise varied for subsequent epochs. For example, five percent of the total training examples of the training data set are to be used in a first epoch e0, while twenty percent of the total training examples of the training data set are to be used in a second subsequent epoch e1, . . . , or ew. These percentages can be predefined or dynamically determined based on a total number of training examples in the training data set and/or confidence values associated with labels of the training examples.
The training examples may be selected for inclusion in a subset si based on their rankings. The training examples can be ranked in accordance with the norms of their derivative vectors. The training examples with the greatest or best rankings are then selected for inclusion in the subset si. For example, M training examples of a training data set are selected which have the greatest or best ranking, and therefore are contained in a subset s1. N training examples are selected which have the greatest or best ranking, and therefore are contained in a subset s2. N may be the same as or different than M. In effect, only a subset of the training examples is used in each epoch as compared to prior art systems which use all training examples in every epoch. This feature of the present solution reduces computational time of the training process.
The subsets are then used in 410 to train the machine learning model or algorithm. Techniques for training machine learning models/algorithms using training data are known. Subsequently, 412 is performed where method 400 ends or other operations are performed (e.g., return to 402).
Referring now to
Referring now to
During the training process of
As shown in
In 606, spatial feature maps are generated by the computing device using the images captured in 604. The images can be used by the trained machine learning model/algorithm (e.g., a CNN) to generate the spatial feature maps. For example, images are input into a trained CNN to produce output spatial feature maps. The trained machine learning model/algorithm can apply filters or feature detectors to the images to produce the spatial feature maps. For example, a trained CNN takes an RGB image as an input, and optionally outputs the class of the 2D image (e.g., a person, a vehicle, a cyclist, a dog, etc.). The class of the image is determined based on learned data patterns during training of the CNN. Each spatial feature map indicates a location and a strength of each detected feature in an image. The features can include, but are not limited to, edges, vertical lines, horizontal lines, bends and/or curves. A certain combination of features in a certain area of an image can indicate that a larger, more complex feature may exist in the image. For example, a spatial feature map could detect a cyclist (e.g., cyclist 114 of
In 608, predicted cuboids are defined at each location of an object in the images based on the spatial feature maps. Each predicted cuboid comprises an oriented 3D box encompassing features that are associated with a given object. Techniques for defining predicted cuboids from spatial feature maps are well known, and any such known technique can be used in 608. One such technique uses linear regression of the features' 3D coordinates to learn edges of an object in an image, and uses the edges to define a predicted 3D cuboidal shape for the object. The predicted 3D cuboidal shape defined for the object is referred to as a predicted cuboid.
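As a greatly simplified stand-in for such regression-based techniques (and not the method of the present solution), an oriented box can be fit to the 3D coordinates of an object's features with a principal-axes computation:

    import numpy as np

    def predicted_cuboid(points_3d):
        # points_3d: (N, 3) array of 3D coordinates of features linked to one object.
        # Fit an oriented box: principal axes from the centered points, then
        # extents along those axes. A simplification of edge/regression-based methods.
        center = points_3d.mean(axis=0)
        centered = points_3d - center
        _, _, axes = np.linalg.svd(centered, full_matrices=False)  # rows are the box axes
        extents = centered @ axes.T
        half_sizes = (extents.max(axis=0) - extents.min(axis=0)) / 2.0
        return {"center": center, "axes": axes, "half_sizes": half_sizes}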
In 610, each predicted cuboid is associated with a given object. This association can be generally made by: determining whether two or more of the predicted cuboids should be associated with a same detected object; and assigning the predicted cuboids to detected objects based on results of the determinations. The assignment can be made, for example, by storing object identifiers in a datastore to be associated with the predicted cuboids.
In some scenarios, the determination as to whether predicted cuboids should be associated with the same object is made by generating a feature embedding from a region of the spatial feature maps for each predicted cuboid. The parameters for generating these feature embeddings can be learned via, for example, a triplet or quadruplet loss algorithm. The embeddings are then used to obtain values for the visual features of each object in the images. These visual feature values are compared to each other to determine whether they match to a certain amount or degree (e.g., 70%, or the difference between two visual feature values is less than a threshold value). The generation of an additional embedding trained with, for example, a triplet loss algorithm addresses different angles of the objects and any occlusion of the objects. The feature embedding can be generated by applying a function (e.g., a 2D convolution function) point-wise to each point of the spatial feature map included in the predicted cuboid so as to transform the same into a data point feature embedding (e.g., a visual descriptor of what the object is). Thus, the term “feature embedding” as used herein refers to a vector representation of visual and spatial features extracted from an image.
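A minimal sketch of such a point-wise (1×1) convolution applied to the region of a spatial feature map covered by a predicted cuboid, followed by a simple visual-similarity test, is shown below. The embedding size, the 32-channel input, and the 70%-style similarity threshold are assumptions.

    import torch
    import torch.nn as nn

    embed = nn.Conv2d(32, 64, kernel_size=1)    # point-wise transform of each map location

    def region_embedding(feature_map, region):
        # feature_map: (1, 32, H, W) spatial feature map; region: (top, bottom, left, right)
        # bounds of the predicted cuboid's 2D footprint in the image.
        top, bottom, left, right = region
        crop = feature_map[:, :, top:bottom, left:right]
        return embed(crop).mean(dim=(2, 3)).squeeze(0)   # one 64-dimensional descriptor

    def same_visual_features(emb_a, emb_b, threshold=0.7):
        # Cosine similarity as one possible "same visual features" test (assumed).
        sim = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0)
        return sim.item() >= threshold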
Triplet loss algorithms are well known. The triplet loss is a machine learning loss function in which, during training, a baseline (anchor) input is compared to a positive input and a negative input. The distance from the baseline input to the positive input is minimized, and the distance from the baseline input to the negative input is maximized. The triplet loss can be described using a Euclidean distance function as shown by the following mathematical equation (1).
L(A, P, N) = max(∥f(A) − f(P)∥² − ∥f(A) − f(N)∥² + α, 0)   (1)
where A is an anchor input, P is a positive input of a same class as A, N is a negative input of a different class as A, α is a margin between positive and negative pairs, and f is a feature embedding.
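Mathematical equation (1) translates directly into code. The following sketch assumes f is an embedding network and uses squared Euclidean distances, consistent with the equation above.

    import torch

    def triplet_loss(f, anchor, positive, negative, alpha=0.2):
        # Equation (1): pull the anchor toward the positive example and push it
        # away from the negative example by at least the margin alpha.
        d_pos = (f(anchor) - f(positive)).pow(2).sum()
        d_neg = (f(anchor) - f(negative)).pow(2).sum()
        return torch.clamp(d_pos - d_neg + alpha, min=0.0)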
Next, the computing device determines a difference between each set of two feature embeddings. For example, an L1 or L2 distance function can be used to determine this difference. The L1 distance function may be defined by the following mathematical equation (2).
L1 = Σ |y1 − y2|   (2)

where L1 represents results from performing the L1 distance function, y1 represents an embedding derived from one feature map, y2 represents an embedding derived from a second feature map, and the summation is taken over the elements of the embeddings. The L2 distance function may be defined by the following mathematical equation (3).

L2 = Σ (y1 − y2)²   (3)

where L2 represents results from performing the L2 distance function.
The computing device also determines a difference between coordinates of each set of predicted cuboids. Methods for determining differences between coordinates are well known. If the differences are less than respective threshold values, then the computing device concludes that the predicted cuboids should be associated with the same object. If the differences are greater than or equal to the respective threshold values, then the computing device concludes that the predicted cuboids should not be associated with the same object.
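Putting the two tests together, the association decision could be sketched as follows; the threshold values and helper names are assumptions and not the disclosure's exact criteria.

    def should_associate(emb_a, emb_b, center_a, center_b,
                         emb_threshold=1.0, center_threshold=0.5):
        # L2 distance between the two feature embeddings (equation (3)).
        emb_dist = sum((a - b) ** 2 for a, b in zip(emb_a, emb_b))
        # Difference between the predicted cuboids' center coordinates.
        center_dist = sum((a - b) ** 2 for a, b in zip(center_a, center_b)) ** 0.5
        # Associate the cuboids with the same object only if both differences
        # fall below their respective (assumed) threshold values.
        return emb_dist < emb_threshold and center_dist < center_threshold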
Once the object-cuboid associations have been made in 610 of
The predictions (e.g., cuboids) generated during method 600 can be used by a mobile platform for object trajectory prediction, general scene understanding, platform trajectory generation, and/or collision avoidance. A block diagram is provided in
In block 702, a location of the mobile platform is detected. This detection can be made based on sensor data output from a location sensor (e.g., location sensor 260 of
In block 704, an object is detected within proximity of the mobile platform. This detection is made based on sensor data output from a camera (e.g., camera 262 of
In block 706, a platform trajectory is generated using the information from blocks 702 and 704. Techniques for determining a platform trajectory are well known in the art. Any known or to be known technique for determining a platform trajectory can be used herein without limitation. For example, in some scenarios, such a technique involves determining a trajectory for the mobile platform that would pass the object when the object is in front of the mobile platform, the object has a heading direction that is aligned with the direction in which the mobile platform is moving, and the object has a length that is greater than a threshold value. The present solution is not limited to the particulars of this scenario. The platform trajectory 724 can be determined based on the information 720, the image-based perception information 722, and/or a road map 726 which is pre-stored in a datastore of the mobile platform. The platform trajectory 724 may represent a smooth path that does not have abrupt changes that would otherwise cause passenger discomfort. For example, the platform trajectory is defined by a path of travel along a given lane of a road in which the object is not predicted to travel within a given amount of time. The platform trajectory 724 is then provided to block 708.
In block 708, a steering angle and velocity command is generated based on the platform trajectory 724. The steering angle and velocity command are provided to block 710 for dynamics control.
Although the present solution has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the present solution may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present solution should not be limited by any of the above described embodiments. Rather, the scope of the present solution should be defined in accordance with the following claims and their equivalents.