This patent document pertains generally to tools (systems, apparatuses, methodologies, computer program products, etc.) for image processing, object tracking, vehicle control systems, and autonomous driving systems, and more particularly, but not by way of limitation, to a system and method for online real-time multi-object tracking.
Multi-Object Tracking (MOT) is a popular topic in computer vision that has received significant attention in recent years in both research and industry. MOT has a variety of applications in security and surveillance, video communication, and self-driving or autonomous vehicles.
Multi-object tracking can be divided into two categories: online MOT and offline MOT. The difference between these two kinds of tracking is that online tracking can only use the information of previous image frames for inference, while offline tracking can use the information of a whole video sequence. Although offline tracking can perform much better than online tracking, in some scenarios, such as self-driving cars, only online tracking can be used, because later image frames are not available to perform inference for the current image frame.
Recently, some online MOT systems have achieved state-of-the-art performance by using deep learning methods, such as Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM). However, none of these methods can achieve real-time speed while maintaining high performance. Moreover, other purported real-time online MOT systems, such as those using only Kalman filters or a Markov Decision Process (MDP), cannot achieve sufficient performance to be used in practice. Therefore, an improved real-time online MOT system with better performance is needed.
A system and method for online real-time multi-object tracking are disclosed. In various example embodiments described herein, we introduce an online real-time multi-object tracking system that achieves state-of-the-art performance at a real-time speed of over 30 frames per second (FPS). The example system and method for online real-time multi-object tracking as disclosed herein can provide an online real-time MOT method in which each object is modeled by a finite state machine (FSM). Matching objects among image frames in a video feed can be considered a transition in the finite state machine. Additionally, the various example embodiments can also extract motion features and appearance features for each object to improve tracking performance. Moreover, a Kalman filter can be used to reduce noise in the object detection results.
In the example embodiment, each object in a video feed is modeled by a finite state machine, and the whole tracking process is divided into four stages: 1) similarity calculation, 2) data association, 3) state transition, and 4) post processing. In the first stage, the similarity between an object template or previous object data and an object detection result is calculated. Data indicative of this similarity is used for data association in the second stage. The data association of the second stage can use the similarity data to find the optimal or best matching between previous object data and the object detection results in the current image frame. Then, each object transitions its state according to the results of the data association. Finally, a post processing operation is used to smooth the bounding boxes for each object in the final tracking output.
The various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which:
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It will be evident, however, to one of ordinary skill in the art that the various embodiments may be practiced without these specific details.
As described in various example embodiments, a system and method for online real-time multi-object tracking are described herein. An example embodiment disclosed herein can be used in the context of an in-vehicle control system 150 in a vehicle ecosystem 101. In one example embodiment, an in-vehicle control system 150 with a real-time multi-object tracking module 200 resident in a vehicle 105 can be configured like the architecture and ecosystem 101 illustrated in
Referring now to
In an example embodiment as described herein, the in-vehicle control system 150 can be in data communication with a plurality of vehicle subsystems 140, all of which can be resident in a user's vehicle 105. A vehicle subsystem interface 141 is provided to facilitate data communication between the in-vehicle control system 150 and the plurality of vehicle subsystems 140. The in-vehicle control system 150 can be configured to include a data processor 171 to execute the real-time multi-object tracking module 200 for processing image data received from one or more of the vehicle subsystems 140. The data processor 171 can be combined with a data storage device 172 as part of a computing system 170 in the in-vehicle control system 150. The data storage device 172 can be used to store data, processing parameters, and data processing instructions. A processing module interface 165 can be provided to facilitate data communications between the data processor 171 and the real-time multi-object tracking module 200. In various example embodiments, a plurality of processing modules, configured similarly to real-time multi-object tracking module 200, can be provided for execution by data processor 171. As shown by the dashed lines in
The in-vehicle control system 150 can be configured to receive or transmit data from/to a wide-area network 120 and network resources 122 connected thereto. An in-vehicle web-enabled device 130 and/or a user mobile device 132 can be used to communicate via network 120. A web-enabled device interface 131 can be used by the in-vehicle control system 150 to facilitate data communication between the in-vehicle control system 150 and the network 120 via the in-vehicle web-enabled device 130. Similarly, a user mobile device interface 133 can be used by the in-vehicle control system 150 to facilitate data communication between the in-vehicle control system 150 and the network 120 via the user mobile device 132. In this manner, the in-vehicle control system 150 can obtain real-time access to network resources 122 via network 120. The network resources 122 can be used to obtain processing modules for execution by data processor 171, data content to train internal neural networks, system parameters, or other data.
The ecosystem 101 can include a wide area data network 120. The network 120 represents one or more conventional wide area data networks, such as the Internet, a cellular telephone network, satellite network, pager network, a wireless broadcast network, gaming network, WiFi network, peer-to-peer network, Voice over IP (VoIP) network, etc. One or more of these networks 120 can be used to connect a user or client system with network resources 122, such as websites, servers, central control sites, or the like. The network resources 122 can generate and/or distribute data, which can be received in vehicle 105 via in-vehicle web-enabled devices 130 or user mobile devices 132. The network resources 122 can also host network cloud services, which can support the functionality used to compute or assist in processing image input or image input analysis. Antennas can serve to connect the in-vehicle control system 150 and the real-time multi-object tracking module 200 with the data network 120 via cellular, satellite, radio, or other conventional signal reception mechanisms. Such cellular data networks are currently available (e.g., Verizon™, AT&T™, T-Mobile™, etc.). Such satellite-based data or content networks are also currently available (e.g., SiriusXM™, HughesNet™, etc.). The conventional broadcast networks, such as AM/FM radio networks, pager networks, UHF networks, gaming networks, WiFi networks, peer-to-peer networks, Voice over IP (VoIP) networks, and the like are also well-known. Thus, as described in more detail below, the in-vehicle control system 150 and the real-time multi-object tracking module 200 can receive web-based data or content via an in-vehicle web-enabled device interface 131, which can be used to connect with the in-vehicle web-enabled device receiver 130 and network 120. In this manner, the in-vehicle control system 150 and the real-time multi-object tracking module 200 can support a variety of network-connectable in-vehicle devices and systems from within a vehicle 105.
As shown in
Referring still to
Referring still to
The vehicle 105 may include various vehicle subsystems such as a vehicle drive subsystem 142, vehicle sensor subsystem 144, vehicle control subsystem 146, and occupant interface subsystem 148. As described above, the vehicle 105 may also include the in-vehicle control system 150, the computing system 170, and the real-time multi-object tracking module 200. The vehicle 105 may include more or fewer subsystems and each subsystem could include multiple elements. Further, each of the subsystems and elements of vehicle 105 could be interconnected. Thus, one or more of the described functions of the vehicle 105 may be divided up into additional functional or physical components or combined into fewer functional or physical components. In some further examples, additional functional and physical components may be added to the examples illustrated by
The vehicle drive subsystem 142 may include components operable to provide powered motion for the vehicle 105. In an example embodiment, the vehicle drive subsystem 142 may include an engine or motor, wheels/tires, a transmission, an electrical subsystem, and a power source. The engine or motor may be any combination of an internal combustion engine, an electric motor, steam engine, fuel cell engine, propane engine, or other types of engines or motors. In some example embodiments, the engine may be configured to convert a power source into mechanical energy. In some example embodiments, the vehicle drive subsystem 142 may include multiple types of engines or motors. For instance, a gas-electric hybrid car could include a gasoline engine and an electric motor. Other examples are possible.
The wheels of the vehicle 105 may be standard tires. The wheels of the vehicle 105 may be configured in various formats, including a unicycle, bicycle, tricycle, or a four-wheel format, such as on a car or a truck, for example. Other wheel geometries are possible, such as those including six or more wheels. Any combination of the wheels of vehicle 105 may be operable to rotate differentially with respect to other wheels. The wheels may represent at least one wheel that is fixedly attached to the transmission and at least one tire coupled to a rim of the wheel that could make contact with the driving surface. The wheels may include a combination of metal and rubber, or another combination of materials. The transmission may include elements that are operable to transmit mechanical power from the engine to the wheels. For this purpose, the transmission could include a gearbox, a clutch, a differential, and drive shafts. The transmission may include other elements as well. The drive shafts may include one or more axles that could be coupled to one or more wheels. The electrical system may include elements that are operable to transfer and control electrical signals in the vehicle 105. These electrical signals can be used to activate lights, servos, electrical motors, and other electrically driven or controlled devices of the vehicle 105. The power source may represent a source of energy that may, in full or in part, power the engine or motor. That is, the engine or motor could be configured to convert the power source into mechanical energy. Examples of power sources include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, fuel cell, solar panels, batteries, and other sources of electrical power. The power source could additionally or alternatively include any combination of fuel tanks, batteries, capacitors, or flywheels. The power source may also provide energy for other subsystems of the vehicle 105.
The vehicle sensor subsystem 144 may include a number of sensors configured to sense information about an environment or condition of the vehicle 105. For example, the vehicle sensor subsystem 144 may include an inertial measurement unit (IMU), a Global Positioning System (GPS) transceiver, a RADAR unit, a laser range finder/LIDAR unit, and one or more cameras or image capture devices. The vehicle sensor subsystem 144 may also include sensors configured to monitor internal systems of the vehicle 105 (e.g., an O2 monitor, a fuel gauge, an engine oil temperature). Other sensors are possible as well. One or more of the sensors included in the vehicle sensor subsystem 144 may be configured to be actuated separately or collectively in order to modify a position, an orientation, or both, of the one or more sensors.
The IMU may include any combination of sensors (e.g., accelerometers and gyroscopes) configured to sense position and orientation changes of the vehicle 105 based on inertial acceleration. The GPS transceiver may be any sensor configured to estimate a geographic location of the vehicle 105. For this purpose, the GPS transceiver may include a receiver/transmitter operable to provide information regarding the position of the vehicle 105 with respect to the Earth. The RADAR unit may represent a system that utilizes radio signals to sense objects within the local environment of the vehicle 105. In some embodiments, in addition to sensing the objects, the RADAR unit may additionally be configured to sense the speed and the heading of the objects proximate to the vehicle 105. The laser range finder or LIDAR unit may be any sensor configured to sense objects in the environment in which the vehicle 105 is located using lasers. In an example embodiment, the laser range finder/LIDAR unit may include one or more laser sources, a laser scanner, and one or more detectors, among other system components. The laser range finder/LIDAR unit could be configured to operate in a coherent (e.g., using heterodyne detection) or an incoherent detection mode. The cameras may include one or more devices configured to capture a plurality of images of the environment of the vehicle 105. The cameras may be still image cameras or motion video cameras.
The vehicle control system 146 may be configured to control operation of the vehicle 105 and its components. Accordingly, the vehicle control system 146 may include various elements such as a steering unit, a throttle, a brake unit, a navigation unit, and an autonomous control unit.
The steering unit may represent any combination of mechanisms that may be operable to adjust the heading of vehicle 105. The throttle may be configured to control, for instance, the operating speed of the engine and, in turn, control the speed of the vehicle 105. The brake unit can include any combination of mechanisms configured to decelerate the vehicle 105. The brake unit can use friction to slow the wheels in a standard manner. In other embodiments, the brake unit may convert the kinetic energy of the wheels to electric current. The brake unit may take other forms as well. The navigation unit may be any system configured to determine a driving path or route for the vehicle 105. The navigation unit may additionally be configured to update the driving path dynamically while the vehicle 105 is in operation. In some embodiments, the navigation unit may be configured to incorporate data from the real-time multi-object tracking module 200, the GPS transceiver, and one or more predetermined maps so as to determine the driving path for the vehicle 105. The autonomous control unit may represent a control system configured to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 105. In general, the autonomous control unit may be configured to control the vehicle 105 for operation without a driver or to provide driver assistance in controlling the vehicle 105. In some embodiments, the autonomous control unit may be configured to incorporate data from the real-time multi-object tracking module 200, the GPS transceiver, the RADAR, the LIDAR, the cameras, and other vehicle subsystems to determine the driving path or trajectory for the vehicle 105. The vehicle control system 146 may additionally or alternatively include components other than those shown and described.
Occupant interface subsystems 148 may be configured to allow interaction between the vehicle 105 and external sensors, other vehicles, other computer systems, and/or an occupant or user of vehicle 105. For example, the occupant interface subsystems 148 may include standard visual display devices (e.g., plasma displays, liquid crystal displays (LCDs), touchscreen displays, heads-up displays, or the like), speakers or other audio output devices, microphones or other audio input devices, navigation interfaces, and interfaces for controlling the internal environment (e.g., temperature, fan, etc.) of the vehicle 105.
In an example embodiment, the occupant interface subsystems 148 may provide, for instance, means for a user/occupant of the vehicle 105 to interact with the other vehicle subsystems. The visual display devices may provide information to a user of the vehicle 105. The user interface devices can also be operable to accept input from the user via a touchscreen. The touchscreen may be configured to sense at least one of a position and a movement of a user's finger via capacitive sensing, resistance sensing, or a surface acoustic wave process, among other possibilities. The touchscreen may be capable of sensing finger movement in a direction parallel or planar to the touchscreen surface, in a direction normal to the touchscreen surface, or both, and may also be capable of sensing a level of pressure applied to the touchscreen surface. The touchscreen may be formed of one or more translucent or transparent insulating layers and one or more translucent or transparent conducting layers. The touchscreen may take other forms as well.
In other instances, the occupant interface subsystems 148 may provide means for the vehicle 105 to communicate with devices within its environment. The microphone may be configured to receive audio (e.g., a voice command or other audio input) from a user of the vehicle 105. Similarly, the speakers may be configured to output audio to a user of the vehicle 105. In one example embodiment, the occupant interface subsystems 148 may be configured to wirelessly communicate with one or more devices directly or via a communication network. For example, a wireless communication system could use 3G cellular communication, such as CDMA, EVDO, GSM/GPRS, or 4G cellular communication, such as WiMAX or LTE. Alternatively, the wireless communication system may communicate with a wireless local area network (WLAN), for example, using WIFI®. In some embodiments, the wireless communication system 146 may communicate directly with a device, for example, using an infrared link, BLUETOOTH®, or ZIGBEE®. Other wireless protocols, such as various vehicular communication systems, are possible within the context of the disclosure. For example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices that may include public or private data communications between vehicles and/or roadside stations.
Many or all of the functions of the vehicle 105 can be controlled by the computing system 170. The computing system 170 may include at least one data processor 171 (which can include at least one microprocessor) that executes processing instructions stored in a non-transitory computer readable medium, such as the data storage device 172. The computing system 170 may also represent a plurality of computing devices that may serve to control individual components or subsystems of the vehicle 105 in a distributed fashion. In some embodiments, the data storage device 172 may contain processing instructions (e.g., program logic) executable by the data processor 171 to perform various functions of the vehicle 105, including those described herein in connection with the drawings. The data storage device 172 may contain additional instructions as well, including instructions to transmit data to, receive data from, interact with, or control one or more of the vehicle drive subsystem 142, the vehicle sensor subsystem 144, the vehicle control subsystem 146, and the occupant interface subsystems 148.
In addition to the processing instructions, the data storage device 172 may store data such as image processing parameters, training data, roadway maps, and path information, among other information. Such information may be used by the vehicle 105 and the computing system 170 during the operation of the vehicle 105 in the autonomous, semi-autonomous, and/or manual modes.
The vehicle 105 may include a user interface for providing information to or receiving input from a user or occupant of the vehicle 105. The user interface may control or enable control of the content and the layout of interactive images that may be displayed on a display device. Further, the user interface may include one or more input/output devices within the set of occupant interface subsystems 148, such as the display device, the speakers, the microphones, or a wireless communication system.
The computing system 170 may control the function of the vehicle 105 based on inputs received from various vehicle subsystems (e.g., the vehicle drive subsystem 142, the vehicle sensor subsystem 144, and the vehicle control subsystem 146), as well as from the occupant interface subsystem 148. For example, the computing system 170 may use input from the vehicle control system 146 in order to control the steering unit to avoid an obstacle detected by the vehicle sensor subsystem 144 and the real-time multi-object tracking module 200, move in a controlled manner, or follow a path or trajectory based on output generated by the real-time multi-object tracking module 200. In an example embodiment, the computing system 170 can be operable to provide control over many aspects of the vehicle 105 and its subsystems.
Although
Additionally, other data and/or content (denoted herein as ancillary data) can be obtained from local and/or remote sources by the in-vehicle control system 150 as described above. The ancillary data can be used to augment, modify, or train the operation of the real-time multi-object tracking module 200 based on a variety of factors including the context in which the user is operating the vehicle (e.g., the location of the vehicle, the specified destination, direction of travel, speed, the time of day, the status of the vehicle, etc.), and a variety of other data obtainable from a variety of sources, local and remote, as described herein.
In a particular embodiment, the in-vehicle control system 150 and the real-time multi-object tracking module 200 can be implemented as in-vehicle components of vehicle 105. In various example embodiments, the in-vehicle control system 150 and the real-time multi-object tracking module 200 in data communication therewith can be implemented as integrated components or as separate components. In an example embodiment, the software components of the in-vehicle control system 150 and/or the real-time multi-object tracking module 200 can be dynamically upgraded, modified, and/or augmented by use of the data connection with the mobile devices 132 and/or the network resources 122 via network 120. The in-vehicle control system 150 can periodically query a mobile device 132 or a network resource 122 for updates or updates can be pushed to the in-vehicle control system 150.
System and Method for Online Real-Time Multi-Object Tracking
A system and method for online real-time multi-object tracking are disclosed. In various example embodiments described herein, we introduce an online real-time multi-object tracking system that achieves state-of-the-art performance at a real-time speed of over 30 frames per second (FPS). The example system and method for online real-time multi-object tracking as disclosed herein can provide an online real-time MOT method in which each object is modeled by a finite state machine (FSM). Matching objects among image frames in a video feed can be considered a transition in the finite state machine. Additionally, the various example embodiments can also extract motion features and appearance features for each object to improve tracking performance. Moreover, a Kalman filter can be used to reduce noise in the object detection results.
In the example embodiment, each object in a video feed is modeled by a finite state machine, and the whole tracking process is divided into four stages: 1) similarity calculation, 2) data association, 3) state transition, and 4) post processing. In the first stage, the similarity between an object template or previous object data and an object detection result is calculated. Data indicative of this similarity is used for data association in the second stage. The data association of the second stage can use the similarity data to find the optimal or best matching between previous object data and the object detection results in the current image frame. Then, each object transitions its state according to the results of the data association. Finally, a post processing operation is used to smooth the bounding boxes for each object in the final tracking output.
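For illustration, the following is a minimal sketch of how the per-object finite state machine described above might be represented in code. The class and attribute names (TrackState, TrackedObject, lost_frames, etc.) are illustrative assumptions and are not taken from the disclosure.

```python
from enum import Enum, auto


class TrackState(Enum):
    """Per-object states described in the example embodiment."""
    INITIALIZED = auto()
    TRACKED = auto()
    LOST = auto()
    REMOVED = auto()


class TrackedObject:
    """Minimal container for one tracked object and its state machine."""

    def __init__(self, object_id, bbox):
        self.object_id = object_id
        self.bbox = bbox                  # bounding box from the detector
        self.state = TrackState.INITIALIZED
        self.templates = []               # stored appearance-feature templates
        self.lost_frames = 0              # consecutive frames spent in LOST

    def transition(self, new_state):
        # A state transition corresponds to the outcome of data association
        # for the current image frame.
        self.state = new_state
```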
Initialized State
A new object detected in the video feed that has never been tracked before is set to the initialized state in its finite state machine. Thus, when a new object is detected by image analysis and object detection, the new object is initialized in the initialized state as a new tracking object. Because there may be some false positives in the detection results, it is possible that the new object is a false positive object detection. In order to filter out false positive object detections, we use a learning-based method (such as XGBoost, a Support Vector Machine, etc.) to train a classifier (here called the initialization classifier), so that we can determine whether the detection result is a false positive. The features we use to train the initialization classifier include both vision features and bounding box information related to the detection result. Specifically, given a detection result (e.g., a bounding box position and a confidence score), vision features such as Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT) can be extracted from the bounding box of the detected object. Then, the vision feature can be combined with the detection confidence score and fed to the classifier.
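By way of example, a minimal sketch of such an initialization classifier is shown below, here using HOG features and an SVM (the disclosure also mentions XGBoost and SIFT as options). The fixed crop size, helper names, and decision threshold are illustrative assumptions, not values stated in the disclosure.

```python
import numpy as np
from skimage.feature import hog               # HOG vision feature, as mentioned above
from skimage.transform import resize
from sklearn.svm import SVC                   # one of the learning-based options


def detection_feature(crop_gray, confidence):
    """Feature vector for one detection: vision feature + detection confidence."""
    patch = resize(crop_gray, (128, 64))      # fixed size so HOG vectors align
    vision_feature = hog(patch, orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([vision_feature, [confidence]])


def train_initialization_classifier(crops, confidences, labels):
    """labels: 1 for a real object, 0 for a false-positive detection."""
    X = np.stack([detection_feature(c, s) for c, s in zip(crops, confidences)])
    clf = SVC(probability=True)
    clf.fit(X, np.asarray(labels))
    return clf


def is_false_positive(clf, crop_gray, confidence, threshold=0.5):
    """Judge whether a new detection is a false positive."""
    prob_real = clf.predict_proba([detection_feature(crop_gray, confidence)])[0, 1]
    return prob_real < threshold
```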
By using the initialization classifier, we can determine whether a new object detection result is a false positive. If the new object detection result is a real object (e.g., not a false positive), the new object detection result transitions from the initialized state to the tracked state (described below). If the new object detection result is not a real object (e.g., a false positive), the new object detection result transitions from the initialized state to the removed state (also described below).
Tracked State
When a new image frame is received, objects currently in the tracked state need to be processed to determine if the objects currently in the tracked state can remain in the tracked state or should transition to the lost state. This determination depends on the matching detection results from a comparison of a prior image frame with the detection results for the new image frame. Specifically, given a new image frame from the video feed, the example embodiment can match all tracked and lost objects from the prior image frame with the detection results in the new image frame (this is called data association). As a result of this matching or data association process, some previously tracked objects may be lost in the new image frame. Other previously tracked objects may continue to be tracked in the current image frame. Other previously lost objects may re-appear to be tracked again. The detailed matching strategy is described below in connection with the description of the feature extraction and template updating strategies.
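The disclosure does not name a particular assignment algorithm for this matching step; the sketch below uses the Hungarian method (via scipy) over a combined similarity matrix as one common choice, with an illustrative minimum-similarity gate.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(similarity, min_similarity=0.3):
    """similarity: (num_tracked_or_lost_objects, num_detections) score matrix.

    Returns matched (object_index, detection_index) pairs plus the indices
    of unmatched objects and unmatched detections.
    """
    num_obj, num_det = similarity.shape
    if num_obj == 0 or num_det == 0:
        return [], list(range(num_obj)), list(range(num_det))
    # linear_sum_assignment minimizes cost, so negate the similarity scores.
    rows, cols = linear_sum_assignment(-similarity)
    matches = []
    unmatched_obj, unmatched_det = set(range(num_obj)), set(range(num_det))
    for r, c in zip(rows, cols):
        if similarity[r, c] >= min_similarity:    # reject weak matches
            matches.append((r, c))
            unmatched_obj.discard(r)
            unmatched_det.discard(c)
    return matches, sorted(unmatched_obj), sorted(unmatched_det)
```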
Feature Extraction
In an example embodiment, there are two kinds of features used for object data association: a motion feature and an appearance feature. For the motion feature, a Kalman filter is maintained for each object in the tracking history. When a new image frame is received, the Kalman filter can predict a bounding box position for an object in the new image frame according to the trajectory of the object. Then, the example embodiment can determine a similarity (or difference) score between the bounding box position predicted for an object by the Kalman filter and the position of each bounding box for objects detected in the detection results for the new image frame. In a tracking system of an example embodiment, we use Intersection Over Union (IOU) as the similarity score, because IOU can describe the shape similarity between two bounding boxes. This similarity score is considered the motion feature or motion similarity of a detected object.
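A minimal IOU computation is sketched below; the corner-coordinate box format (x1, y1, x2, y2) is an assumption, and the predicted box is taken to come from the object's Kalman filter as described above.

```python
def iou(box_a, box_b):
    """Intersection Over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Motion similarity for one (object, detection) pair:
#   predicted_box -- bounding box predicted by the object's Kalman filter
#   detected_box  -- bounding box from the detector in the new image frame
# motion_similarity = iou(predicted_box, detected_box)
```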
The second part of the feature extraction used in an example embodiment is the appearance feature for each object. The appearance feature is a key feature for distinguishing one object from another. The appearance feature can be extracted by a pre-trained convolutional neural network (CNN). In other embodiments, the appearance feature can be extracted by use of hand-crafted features or vision features, such as HOG and SIFT. Different features are suitable for different scenarios or applications of the technology. As such, the methods used for appearance feature extraction can vary to obtain the best performance for different applications. Once the appearance feature for a current object is extracted, the appearance feature of the object can be used to determine an appearance similarity (or difference) relative to the appearance features of previous objects and prior detection results from prior image frames.
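The disclosure does not fix a particular metric for comparing appearance features; the sketch below uses cosine similarity between a detection's feature (e.g., a CNN embedding) and an object's stored templates, taking the best match, which is one plausible reading rather than the stated method.

```python
import numpy as np


def appearance_similarity(feature, templates):
    """Best cosine similarity between a detection's appearance feature and
    an object's stored appearance-feature templates."""
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(np.dot(a, b) / denom) if denom > 0 else 0.0

    if not templates:
        return 0.0
    return max(cosine(feature, t) for t in templates)
```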
Template Updating
If a currently detected object is successfully matched with a bounding box of a previously detected object, the example embodiment can update the appearance feature for the current object as its template. Specifically, the example embodiment can obtain the appearance feature extracted from the matching object bounding box and use the extracted appearance feature as the new template of the current object. In the example embodiment, we do not directly replace the old template with the new appearance feature; instead, the example embodiment keeps several templates (usually three) for each object that has ever been tracked.
When a template is updated, the example embodiment can set a similarity threshold and a bounding box confidence threshold. Only appearance features satisfying the following two conditions can be used to update an old template: First, the similarity score between the appearance feature for the current object and the old template should be less than the similarity threshold. This is because a low similarity score usually means the object's appearance has changed significantly in the current image frame, so the template should be updated. Second, the detection bounding box confidence level should be higher than the bounding box confidence threshold. This is because we need to avoid false positives in the detection results, and a bounding box with a low confidence level is more likely to be a false positive.
If an appearance feature is selected to be a new template, the example embodiment can determine which of the old templates should be replaced. There are several strategies that can be used for this purpose, such as a Least Recent Use (LRU) strategy, or a strategy that just replaces the oldest template in the template pool.
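A sketch of this template-updating rule is shown below. The threshold values, the pool size of three, and the oldest-first replacement policy are illustrative placeholders (the disclosure also mentions an LRU strategy), and the per-object template list is assumed to be kept as in the earlier state-machine sketch.

```python
def maybe_update_templates(templates, feature, similarity, det_confidence,
                           similarity_threshold=0.7,
                           confidence_threshold=0.6,
                           max_templates=3):
    """Add `feature` to the template pool only if both conditions above hold.

    templates: list of appearance features kept for one object.
    similarity: appearance similarity between `feature` and the old template(s).
    det_confidence: confidence score of the matching detection bounding box.
    """
    if similarity < similarity_threshold and det_confidence > confidence_threshold:
        if len(templates) >= max_templates:
            templates.pop(0)              # replace the oldest template in the pool
        templates.append(feature)
    return templates
```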
Lost State
Similar to the tracked state, there are three different kinds of transitions an object can make from the lost state. First, if the object is successfully matched with a detection result in the current image frame, the object will transition back to the tracked state from the lost state. Second, if there is no matching for this object, the object will remain in the lost state. Third, if the object has remained in the lost state for a number of cycles that is greater than a threshold, the object transitions from the lost state to the removed state, where the object is considered to have disappeared.
Because there is no matching detection result for an object in the lost state, the example embodiment does not update the appearance feature (e.g., the template) for these lost objects. However, the example embodiment does need to keep predicting the bounding box position by use of the Kalman filter, because the example embodiment can use the motion feature for lost objects to perform data association in case a lost object re-appears in a new image frame. This is called a blind update.
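A minimal sketch of such a blind update is given below: the lost object's Kalman filter runs its prediction step only, with no measurement correction, since there is no matching detection. The state layout and matrices are assumptions; the disclosure only states that positions keep being predicted for lost objects.

```python
import numpy as np


def kalman_blind_update(x, P, F, Q):
    """Prediction-only step for a lost object (no measurement is available).

    x -- state vector (e.g., bounding box position and velocity)
    P -- state covariance
    F -- state transition (motion) model
    Q -- process noise covariance
    """
    x = F @ x                  # propagate the state with the motion model
    P = F @ P @ F.T + Q        # grow the uncertainty accordingly
    return x, P
```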
Removed State
In an example embodiment, there are only two ways for an object to transition into the removed state. First, an object in the initialized state for which a detection result is considered to be a false positive transitions into the removed state. Second, an object that has remained in the lost state for too many cycles transitions into the removed state and is considered to have disappeared from the camera view.
In various example embodiments, a threshold can be used to determine if an object has disappeared. In some embodiments, the larger the threshold is, the higher the tracking performance will be, because an object sometimes disappears for a while and then comes back into view again. However, a larger threshold leads to lower tracking speed, because more objects remain in the lost state and more processing overhead is needed to perform object matching during data association. As such, there is a trade-off between performance and speed.
Tracking Process
Similarity Calculation
With reference to block 310 shown in
Data Association
With reference to block 320 shown in
State Transition
With reference to blocks 330 shown in
Post Processing
Because almost all related tracking methods directly use the bounding boxes of detection results as the final output for each image frame, and the detection results for each object may be unstable, there may be some variation in the final output. In order to avoid this problem and make the final output smoother, some modifications to the detection results can be made in the example embodiment. Specifically, we can use the weighted average of the detection result and the prediction of the Kalman filter as the final tracking output, which can improve the tracking performance both in benchmark results and in visualization.
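A minimal sketch of this smoothing step follows; the equal weighting is an illustrative placeholder, as the disclosure does not specify the weights.

```python
import numpy as np


def smooth_output(detected_box, predicted_box, detection_weight=0.5):
    """Final tracking output: weighted average of the detection result and the
    Kalman-filter prediction for the same object."""
    detected_box = np.asarray(detected_box, dtype=float)
    predicted_box = np.asarray(predicted_box, dtype=float)
    return detection_weight * detected_box + (1.0 - detection_weight) * predicted_box
```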
Referring now to
As used herein and unless specified otherwise, the term “mobile device” includes any computing or communications device that can communicate with the in-vehicle control system 150 and/or the real-time multi-object tracking module 200 described herein to obtain read or write access to data signals, messages, or content communicated via any mode of data communications. In many cases, the mobile device 130 is a handheld, portable device, such as a smart phone, mobile phone, cellular telephone, tablet computer, laptop computer, display pager, radio frequency (RF) device, infrared (IR) device, global positioning device (GPS), Personal Digital Assistants (PDA), handheld computers, wearable computer, portable game console, other mobile communication and/or computing device, or an integrated device combining one or more of the preceding devices, and the like. Additionally, the mobile device 130 can be a computing device, personal computer (PC), multiprocessor system, microprocessor-based or programmable consumer electronic device, network PC, diagnostics equipment, a system operated by a vehicle 119 manufacturer or service technician, and the like, and is not limited to portable devices. The mobile device 130 can receive and process data in any of a variety of data formats. The data format may include or be configured to operate with any programming format, protocol, or language including, but not limited to, JavaScript, C++, iOS, Android, etc.
As used herein and unless specified otherwise, the term “network resource” includes any device, system, or service that can communicate with the in-vehicle control system 150 and/or the real-time multi-object tracking module 200 described herein to obtain read or write access to data signals, messages, or content communicated via any mode of inter-process or networked data communications. In many cases, the network resource 122 is a data network accessible computing platform, including client or server computers, websites, mobile devices, peer-to-peer (P2P) network nodes, and the like. Additionally, the network resource 122 can be a web appliance, a network router, switch, bridge, gateway, diagnostics equipment, a system operated by a vehicle 119 manufacturer or service technician, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” can also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The network resources 122 may include any of a variety of providers or processors of network transportable digital content. Typically, the file format that is employed is Extensible Markup Language (XML), however, the various embodiments are not so limited, and other file formats may be used. For example, data formats other than Hypertext Markup Language (HTML)/XML or formats other than open/standard data formats can be supported by various embodiments. Any electronic file format, such as Portable Document Format (PDF), audio (e.g., Motion Picture Experts Group Audio Layer 3—MP3, and the like), video (e.g., MP4, and the like), and any proprietary interchange format defined by specific content sites can be supported by the various embodiments described herein.
The wide area data network 120 (also denoted the network cloud) used with the network resources 122 can be configured to couple one computing or communication device with another computing or communication device. The network may be enabled to employ any form of computer readable data or media for communicating information from one electronic device to another. The network 120 can include the Internet in addition to other wide area networks (WANs), cellular telephone networks, satellite networks, over-the-air broadcast networks, AM/FM radio networks, pager networks, UHF networks, other broadcast networks, gaming networks, WiFi networks, peer-to-peer networks, Voice Over IP (VoIP) networks, metro-area networks, local area networks (LANs), other packet-switched networks, circuit-switched networks, direct data connections, such as through a universal serial bus (USB) or Ethernet port, other forms of computer-readable media, or any combination thereof. On an interconnected set of networks, including those based on differing architectures and protocols, a router or gateway can act as a link between networks, enabling messages to be sent between computing devices on different networks. Also, communication links within networks can typically include twisted wire pair cabling, USB, Firewire, Ethernet, or coaxial cable, while communication links between networks may utilize analog or digital telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, cellular telephone links, or other communication links known to those of ordinary skill in the art. Furthermore, remote computers and other related electronic devices can be remotely connected to the network via a modem and temporary telephone link.
The network 120 may further include any of a variety of wireless sub-networks that may further overlay stand-alone ad-hoc networks, and the like, to provide an infrastructure-oriented connection. Such sub-networks may include mesh networks, Wireless LAN (WLAN) networks, cellular networks, and the like. The network may also include an autonomous system of terminals, gateways, routers, and the like connected by wireless radio links or wireless transceivers. These connectors may be configured to move freely and randomly and organize themselves arbitrarily, such that the topology of the network may change rapidly. The network 120 may further employ one or more of a plurality of standard wireless and/or cellular protocols or access technologies including those set forth herein in connection with network interface 712 and network 714 described in the figures herewith.
In a particular embodiment, a mobile device 132 and/or a network resource 122 may act as a client device enabling a user to access and use the in-vehicle control system 150 and/or the real-time multi-object tracking module 200 to interact with one or more components of a vehicle subsystem. These client devices 132 or 122 may include virtually any computing device that is configured to send and receive information over a network, such as network 120 as described herein. Such client devices may include mobile devices, such as cellular telephones, smart phones, tablet computers, display pagers, radio frequency (RF) devices, infrared (IR) devices, global positioning devices (GPS), Personal Digital Assistants (PDAs), handheld computers, wearable computers, game consoles, integrated devices combining one or more of the preceding devices, and the like. The client devices may also include other computing devices, such as personal computers (PCs), multiprocessor systems, microprocessor-based or programmable consumer electronics, network PC's, and the like. As such, client devices may range widely in terms of capabilities and features. For example, a client device configured as a cell phone may have a numeric keypad and a few lines of monochrome LCD display on which only text may be displayed. In another example, a web-enabled client device may have a touch sensitive screen, a stylus, and a color LCD display screen in which both text and graphics may be displayed. Moreover, the web-enabled client device may include a browser application enabled to receive and to send wireless application protocol messages (WAP), and/or wired application messages, and the like. In one embodiment, the browser application is enabled to employ HyperText Markup Language (HTML), Dynamic HTML, Handheld Device Markup Language (HDML), Wireless Markup Language (WML), WMLScript, JavaScript™, EXtensible HTML (xHTML), Compact HTML (CHTML), and the like, to display and send a message with relevant information.
The client devices may also include at least one client application that is configured to receive content or messages from another computing device via a network transmission. The client application may include a capability to provide and receive textual content, graphical content, video content, audio content, alerts, messages, notifications, and the like. Moreover, the client devices may be further configured to communicate and/or receive a message, such as through a Short Message Service (SMS), direct messaging (e.g., Twitter), email, Multimedia Message Service (MMS), instant messaging (IM), internet relay chat (IRC), mIRC, Jabber, Enhanced Messaging Service (EMS), text messaging, Smart Messaging, Over the Air (OTA) messaging, or the like, between another computing device, and the like. The client devices may also include a wireless application device on which a client application is configured to enable a user of the device to send and receive information to/from network resources wirelessly via the network.
The in-vehicle control system 150 and/or the real-time multi-object tracking module 200 can be implemented using systems that enhance the security of the execution environment, thereby improving security and reducing the possibility that the in-vehicle control system 150 and/or the real-time multi-object tracking module 200 and the related services could be compromised by viruses or malware. For example, the in-vehicle control system 150 and/or the real-time multi-object tracking module 200 can be implemented using a Trusted Execution Environment, which can ensure that sensitive data is stored, processed, and communicated in a secure way.
The example computing system 700 can include a data processor 702 (e.g., a System-on-a-Chip (SoC), general processing core, graphics core, and optionally other processing logic) and a memory 704, which can communicate with each other via a bus or other data transfer system 706. The mobile computing and/or communication system 700 may further include various input/output (I/O) devices and/or interfaces 710, such as a touchscreen display, an audio jack, a voice interface, and optionally a network interface 712. In an example embodiment, the network interface 712 can include one or more radio transceivers configured for compatibility with any one or more standard wireless and/or cellular protocols or access technologies (e.g., 2nd (2G), 2.5G, 3rd (3G), 4th (4G) generation, and future generation radio access for cellular systems, Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), LTE, CDMA2000, WLAN, Wireless Router (WR) mesh, and the like). Network interface 712 may also be configured for use with various other wired and/or wireless communication protocols, including TCP/IP, UDP, SIP, SMS, RTP, WAP, CDMA, TDMA, UMTS, UWB, WiFi, WiMax, Bluetooth®, IEEE 802.11x, and the like. In essence, network interface 712 may include or support virtually any wired and/or wireless communication and data processing mechanisms by which information/data may travel between a computing system 700 and another computing or communication system via network 714.
The memory 704 can represent a machine-readable medium on which is stored one or more sets of instructions, software, firmware, or other processing logic (e.g., logic 708) embodying any one or more of the methodologies or functions described and/or claimed herein. The logic 708, or a portion thereof, may also reside, completely or at least partially within the processor 702 during execution thereof by the mobile computing and/or communication system 700. As such, the memory 704 and the processor 702 may also constitute machine-readable media. The logic 708, or a portion thereof, may also be configured as processing logic or logic, at least a portion of which is partially implemented in hardware. The logic 708, or a portion thereof, may further be transmitted or received over a network 714 via the network interface 712. While the machine-readable medium of an example embodiment can be a single medium, the term “machine-readable medium” should be taken to include a single non-transitory medium or multiple non-transitory media (e.g., a centralized or distributed database, and/or associated caches and computing systems) that store the one or more sets of instructions. The term “machine-readable medium” can also be taken to include any non-transitory medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
This application is a continuation of U.S. patent application Ser. No. 16/868,400, filed May 6, 2020, which is a continuation of U.S. patent application Ser. No. 15/906,561, filed Feb. 27, 2018, each of which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6084870 | Wooten et al. | Jul 2000 | A |
6263088 | Crabtree et al. | Jul 2001 | B1 |
6594821 | Banning et al. | Jul 2003 | B1 |
6777904 | Degner | Aug 2004 | B1 |
6975923 | Spriggs | Dec 2005 | B2 |
7103460 | Breed | Sep 2006 | B1 |
7689559 | Canright | Mar 2010 | B2 |
7742841 | Sakai et al. | Jun 2010 | B2 |
7783403 | Breed | Aug 2010 | B2 |
7844595 | Canright | Nov 2010 | B2 |
8041111 | Wilensky | Oct 2011 | B1 |
8064643 | Stein | Nov 2011 | B2 |
8082101 | Stein | Dec 2011 | B2 |
8164628 | Stein | Apr 2012 | B2 |
8175376 | Marchesotti | May 2012 | B2 |
8271871 | Marchesotti | Sep 2012 | B2 |
8346480 | Trepagnier et al. | Jan 2013 | B2 |
8378851 | Stein | Feb 2013 | B2 |
8392117 | Dolgov | Mar 2013 | B2 |
8401292 | Park | Mar 2013 | B2 |
8412449 | Trepagnier et al. | Apr 2013 | B2 |
8478072 | Aisaka | Jul 2013 | B2 |
8553088 | Stein | Oct 2013 | B2 |
8706394 | Trepagnier et al. | Apr 2014 | B2 |
8718861 | Montemerlo et al. | May 2014 | B1 |
8788134 | Litkouhi | Jul 2014 | B1 |
8908041 | Stein | Dec 2014 | B2 |
8917169 | Schofield | Dec 2014 | B2 |
8963913 | Baek | Feb 2015 | B2 |
8981966 | Stein | Mar 2015 | B2 |
8983708 | Choe et al. | Mar 2015 | B2 |
8993951 | Schofield | Mar 2015 | B2 |
9002632 | Emigh | Apr 2015 | B1 |
9008369 | Schofield | Apr 2015 | B2 |
9025880 | Perazzi | May 2015 | B2 |
9042648 | Wang | May 2015 | B2 |
9081385 | Ferguson et al. | Jul 2015 | B1 |
9088744 | Grauer et al. | Jul 2015 | B2 |
9111444 | Kaganovich | Aug 2015 | B2 |
9117133 | Barnes | Aug 2015 | B2 |
9118816 | Stein | Aug 2015 | B2 |
9120485 | Dolgov | Sep 2015 | B1 |
9122954 | Srebnik | Sep 2015 | B2 |
9134402 | Sebastian | Sep 2015 | B2 |
9145116 | Clarke | Sep 2015 | B2 |
9147255 | Zhang | Sep 2015 | B1 |
9156473 | Clarke | Oct 2015 | B2 |
9176006 | Stein | Nov 2015 | B2 |
9179072 | Stein | Nov 2015 | B2 |
9183447 | Gdalyahu | Nov 2015 | B1 |
9185360 | Stein | Nov 2015 | B2 |
9191634 | Schofield | Nov 2015 | B2 |
9214084 | Grauer et al. | Dec 2015 | B2 |
9219873 | Grauer et al. | Dec 2015 | B2 |
9233659 | Rosenbaum | Jan 2016 | B2 |
9233688 | Clarke | Jan 2016 | B2 |
9248832 | Huberman | Feb 2016 | B2 |
9248835 | Tanzmeister | Feb 2016 | B2 |
9251708 | Rosenbaum | Feb 2016 | B2 |
9277132 | Berberian | Mar 2016 | B2 |
9280711 | Stein | Mar 2016 | B2 |
9282144 | Tebay et al. | Mar 2016 | B2 |
9286522 | Stein | Mar 2016 | B2 |
9297641 | Stein | Mar 2016 | B2 |
9299004 | Lin | Mar 2016 | B2 |
9315192 | Zhu | Apr 2016 | B1 |
9317033 | Ibanez-guzman | Apr 2016 | B2 |
9317776 | Honda | Apr 2016 | B1 |
9330334 | Lin | May 2016 | B2 |
9342074 | Urmson | May 2016 | B2 |
9347779 | Lynch | May 2016 | B1 |
9355635 | Gao | May 2016 | B2 |
9365214 | Ben Shalom | Jun 2016 | B2 |
9399397 | Mizutani | Jul 2016 | B2 |
9418549 | Kang et al. | Aug 2016 | B2 |
9438878 | Niebla | Sep 2016 | B2 |
9446765 | Ben Shalom | Sep 2016 | B2 |
9459515 | Stein | Oct 2016 | B2 |
9466006 | Duan | Oct 2016 | B2 |
9476970 | Fairfield | Oct 2016 | B1 |
9483839 | Kwon et al. | Nov 2016 | B1 |
9490064 | Hirosawa | Nov 2016 | B2 |
9494935 | Okumura et al. | Nov 2016 | B2 |
9507346 | Levinson et al. | Nov 2016 | B1 |
9513634 | Pack et al. | Dec 2016 | B2 |
9531966 | Stein | Dec 2016 | B2 |
9535423 | Debreczeni | Jan 2017 | B1 |
9538113 | Grauer et al. | Jan 2017 | B2 |
9547985 | Tuukkanen | Jan 2017 | B2 |
9549158 | Grauer et al. | Jan 2017 | B2 |
9555803 | Pawlicki | Jan 2017 | B2 |
9568915 | Berntorp | Feb 2017 | B1 |
9587952 | Slusar | Mar 2017 | B1 |
9599712 | Van Der Tempel et al. | Mar 2017 | B2 |
9600889 | Boisson et al. | Mar 2017 | B2 |
9602807 | Crane et al. | Mar 2017 | B2 |
9612123 | Levinson et al. | Apr 2017 | B1 |
9620010 | Grauer et al. | Apr 2017 | B2 |
9625569 | Lange | Apr 2017 | B2 |
9628565 | Stenneth et al. | Apr 2017 | B2 |
9649999 | Amireddy et al. | May 2017 | B1 |
9652860 | Maali et al. | May 2017 | B1 |
9669827 | Ferguson et al. | Jun 2017 | B1 |
9672446 | Vallespi-gonzalez | Jun 2017 | B1 |
9690290 | Prokhorov | Jun 2017 | B2 |
9701023 | Zhang et al. | Jul 2017 | B2 |
9712754 | Grauer et al. | Jul 2017 | B2 |
9720418 | Stenneth et al. | Aug 2017 | B2 |
9723097 | Harris | Aug 2017 | B2 |
9723099 | Chen | Aug 2017 | B2 |
9723233 | Grauer et al. | Aug 2017 | B2 |
9726754 | Massanell et al. | Aug 2017 | B2 |
9729860 | Cohen et al. | Aug 2017 | B2 |
9738280 | Rayes | Aug 2017 | B2 |
9739609 | Lewis | Aug 2017 | B1 |
9746550 | Nath | Aug 2017 | B2 |
9753128 | Schweizer et al. | Sep 2017 | B2 |
9753141 | Grauer et al. | Sep 2017 | B2 |
9754490 | Kentley et al. | Sep 2017 | B2 |
9766625 | Boroditsky et al. | Sep 2017 | B2 |
9769456 | You et al. | Sep 2017 | B2 |
9773155 | Shotton et al. | Sep 2017 | B2 |
9779276 | Todeschini et al. | Oct 2017 | B2 |
9785149 | Wang et al. | Oct 2017 | B2 |
9805294 | Liu et al. | Oct 2017 | B2 |
9810785 | Grauer et al. | Nov 2017 | B2 |
9823339 | Cohen | Nov 2017 | B2 |
9953236 | Huang et al. | Apr 2018 | B1 |
10147193 | Huang et al. | Dec 2018 | B2 |
10223806 | Luo et al. | Mar 2019 | B1 |
10223807 | Luo et al. | Mar 2019 | B1 |
10410055 | Wang et al. | Sep 2019 | B2 |
20030174773 | Comaniciu et al. | Sep 2003 | A1 |
20070183661 | El-maleh et al. | Aug 2007 | A1 |
20070183662 | Wang et al. | Aug 2007 | A1 |
20070230792 | Shashua | Oct 2007 | A1 |
20070286526 | Abousleman et al. | Dec 2007 | A1 |
20080249667 | Horvitz | Oct 2008 | A1 |
20090040054 | Wang et al. | Feb 2009 | A1 |
20090087029 | Coleman et al. | Apr 2009 | A1 |
20100049397 | Lin | Feb 2010 | A1 |
20100111417 | Ward et al. | May 2010 | A1 |
20100226564 | Marchesotti | Sep 2010 | A1 |
20100281361 | Marchesotti | Nov 2010 | A1 |
20110142283 | Huang et al. | Jun 2011 | A1 |
20110206282 | Aisaka | Aug 2011 | A1 |
20110247031 | Jacoby | Oct 2011 | A1 |
20120041636 | Johnson et al. | Feb 2012 | A1 |
20120105639 | Stein | May 2012 | A1 |
20120140076 | Rosenbaum | Jun 2012 | A1 |
20120274629 | Baek | Nov 2012 | A1 |
20120314070 | Zhang et al. | Dec 2012 | A1 |
20130051613 | Bobbitt | Feb 2013 | A1 |
20130083959 | Owechko et al. | Apr 2013 | A1 |
20130182134 | Grundmann et al. | Jul 2013 | A1 |
20130204465 | Phillips et al. | Aug 2013 | A1 |
20130266187 | Bulan et al. | Oct 2013 | A1 |
20130329052 | Chew | Dec 2013 | A1 |
20140072170 | Zhang et al. | Mar 2014 | A1 |
20140104051 | Breed | Apr 2014 | A1 |
20140142799 | Ferguson et al. | May 2014 | A1 |
20140143839 | Ricci | May 2014 | A1 |
20140145516 | Hirosawa | May 2014 | A1 |
20140198184 | Stein | Jul 2014 | A1 |
20140321704 | Partis | Oct 2014 | A1 |
20140334668 | Saund | Nov 2014 | A1 |
20150062304 | Stein | Mar 2015 | A1 |
20150310370 | Burry et al. | Oct 2015 | A1 |
20150353082 | Lee | Dec 2015 | A1 |
20160026787 | Nairn et al. | Jan 2016 | A1 |
20160037064 | Stein | Feb 2016 | A1 |
20160094774 | Li | Mar 2016 | A1 |
20160118080 | Chen | Apr 2016 | A1 |
20160129907 | Kim | May 2016 | A1 |
20160165157 | Stein | Jun 2016 | A1 |
20160210528 | Duan | Jul 2016 | A1 |
20160275766 | Venetianer et al. | Sep 2016 | A1 |
20160321381 | English | Nov 2016 | A1 |
20160334230 | Ross et al. | Nov 2016 | A1 |
20160342837 | Hong | Nov 2016 | A1 |
20160347322 | Clarke et al. | Dec 2016 | A1 |
20160375907 | Erban | Dec 2016 | A1 |
20170053169 | Cuban et al. | Feb 2017 | A1 |
20170124476 | Levinson et al. | May 2017 | A1 |
20170134631 | Zhao | May 2017 | A1 |
20170177951 | Yang et al. | Jun 2017 | A1 |
20170301104 | Qian et al. | Oct 2017 | A1 |
20170305423 | Green | Oct 2017 | A1 |
20180151063 | Pun et al. | May 2018 | A1 |
20180158197 | Dasgupta et al. | Jun 2018 | A1 |
20180260956 | Huang et al. | Sep 2018 | A1 |
20180283892 | Behrendt et al. | Oct 2018 | A1 |
20180373980 | Huval | Dec 2018 | A1 |
20180374233 | Zhou | Dec 2018 | A1 |
20190025853 | Julian et al. | Jan 2019 | A1 |
20190065863 | Luo et al. | Feb 2019 | A1 |
20190065864 | Yu | Feb 2019 | A1 |
20190066329 | Luo et al. | Feb 2019 | A1 |
20190066330 | Luo et al. | Feb 2019 | A1 |
20190108384 | Wang et al. | Apr 2019 | A1 |
20190132391 | Thomas et al. | May 2019 | A1 |
20190132392 | Liu et al. | May 2019 | A1 |
20190180469 | Gu | Jun 2019 | A1 |
20190197321 | Hughes | Jun 2019 | A1 |
20190210564 | Han et al. | Jul 2019 | A1 |
20190210613 | Sun et al. | Jul 2019 | A1 |
20190236950 | Li et al. | Aug 2019 | A1 |
20190266420 | Ge et al. | Aug 2019 | A1 |
Number | Date | Country |
---|---|---|
106340197 | Jan 2017 | CN |
106781591 | May 2017 | CN |
108010360 | May 2018 | CN |
2608513 | Sep 1977 | DE |
1754179 | Feb 2007 | EP |
2448251 | May 2012 | EP |
2463843 | Jun 2012 | EP |
2463843 | Jul 2013 | EP |
2761249 | Aug 2014 | EP |
2463843 | Jul 2015 | EP |
2448251 | Oct 2015 | EP |
2946336 | Nov 2015 | EP |
2993654 | Mar 2016 | EP |
3081419 | Oct 2016 | EP |
100802511 | Feb 2008 | KR |
2005098739 | Oct 2005 | WO |
2005098751 | Oct 2005 | WO |
2005098782 | Oct 2005 | WO |
2010109419 | Sep 2010 | WO |
2013045612 | Apr 2013 | WO |
2014111814 | Jul 2014 | WO |
2014111814 | Jul 2014 | WO |
2014166245 | Oct 2014 | WO |
2014201324 | Dec 2014 | WO |
2015083009 | Jun 2015 | WO |
2015103159 | Jul 2015 | WO |
2015125022 | Aug 2015 | WO |
2015186002 | Dec 2015 | WO |
2015186002 | Dec 2015 | WO |
2016090282 | Jun 2016 | WO |
2016135736 | Sep 2016 | WO |
2017013875 | Jan 2017 | WO |
2017079460 | May 2017 | WO |
2019040800 | Feb 2019 | WO |
2019084491 | May 2019 | WO |
2019084494 | May 2019 | WO |
2019140277 | Jul 2019 | WO |
2019168986 | Sep 2019 | WO |
Entry |
---|
Ahn, Kyoungho, Hesham Rakha, “The Effects of Route Choice Decisions on Vehicle Energy Consumption and Emissions”, Virginia Tech Transportation Institute, Blacksburg, VA 24061, pp. 1-34, date unknown. |
Athanasiadis, Thanos, et al., “Semantic Image Segmentation and Object Labeling”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 3, pp. 1-15, Mar. 2007. |
Bar-Hillel, Aharon et al. “Recent progress in road and lane detection: a survey.” Machine Vision and Applications 25 (2011): pp. 727-745. |
Barth, Matthew et al., “Recent Validation Efforts for a Comprehensive Modal Emissions Model”, Transportation Research Record 1750, Paper No. 01-0326, College of Engineering, Center for Environmental Research and Technology, University of California, Riverside, CA 92521, pp. 1-11, date unknown. |
Carle, Patrick J.F., "Global Rover Localization by Matching Lidar and Orbital 3D Maps", IEEE, Anchorage Convention District, Anchorage, Alaska, US, pp. 1-6, May 3-8, 2010. |
Caselitz, T. et al., "Monocular camera localization in 3D LiDAR maps," Computer Vision—ECCV 2014, Lecture Notes in Computer Science, vol. 8690, pp. 1-6, Springer, Cham. |
Chinese Application No. 201980015452.8, First Office Action dated Sep. 14, 2021, pp. 1-16. |
Cordts, Marius et al., “The Cityscapes Dataset for Semantic Urban Scene Understanding”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, pp. 1-29, 2016. |
Dai, Jifeng, et al. (Microsoft Research), “Instance-aware Semantic Segmentation via Multi-task Network Cascades”, CVPR, pp. 1, 2016. |
Engel, J. et al., "LSD-SLAM: Large-Scale Direct Monocular SLAM," pp. 1-16, Munich. |
Geiger, Andreas et al., “Automatic Camera and Range Sensor Calibration using a single Shot”, Robotics and Automation (ICRA), pp. 1-8, 2012 IEEE International Conference. |
Guarneri, P. et al., “A Neural-Network-Based Model for the Dynamic Simulation of the Tire/Suspension System While Traversing Road Irregularities,” in IEEE Transactions on Neural Networks, vol. 19, No. 9, pp. 1549-1563, Sep. 2008. |
Hou, Xiaodi and Harel, Jonathan and Koch, Christof, “Image Signature: Highlighting Sparse Salient Regions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 1, pp. 194-201, 2012. |
Hou, Xiaodi and Yuille, Alan and Koch, Christof, “Boundary Detection Benchmarking: Beyond F-Measures”, Computer Vision and Pattern Recognition, CVPR'13, vol. 2013, pp. 1-8, IEEE, 2013. |
Hou, Xiaodi and Zhang, Liqing, “A Time-Dependent Model of Information Capacity of Visual Attention”, International Conference on Neural Information Processing, pp. 127-136, Springer Berlin Heidelberg, 2006. |
Hou, Xiaodi and Zhang, Liqing, “Color Conceptualization”, Proceedings of the 15th ACM International Conference on Multimedia, pp. 265-268, ACM, 2007. |
Hou, Xiaodi and Zhang, Liqing, “Dynamic Visual Attention: Searching For Coding Length Increments”, Advances in Neural Information Processing Systems, vol. 21, pp. 681-688, 2008. |
Hou, Xiaodi and Zhang, Liqing, “Saliency Detection: A Spectral Residual Approach”, Computer Vision and Pattern Recognition, CVPR'07—IEEE Conference, pp. 1-8, 2007. |
Hou, Xiaodi and Zhang, Liqing, “Thumbnail Generation Based on Global Saliency”, Advances in Cognitive Neurodynamics, ICCN 2007, pp. 999-1003, Springer Netherlands, 2008. |
Hou, Xiaodi et al., “A Meta-Theory of Boundary Detection Benchmarks”, arXiv preprint arXiv:1302.5985, pp. 1-4, 2013. |
Hou, Xiaodi, “Computational Modeling and Psychophysics in Low and Mid-Level Vision”, California Institute of Technology, pp. 1-125, 2014. |
Huval, Brody et al., “An Empirical Evaluation of Deep Learning on Highway Driving”, arXiv:1504.01716v3 [cs.RO] pp. 1-7, Apr. 17, 2015. |
Jain, Suyog Dutt, Grauman, Kristen, "Active Image Segmentation Propagation", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, pp. 1-10, Jun. 2016. |
Kendall, Alex, et al., "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", arXiv:1703.04977v1 [cs.CV] pp. 1-11, Mar. 15, 2017. |
Levinson, Jesse et al., Experimental Robotics, Unsupervised Calibration for Multi-Beam Lasers, pp. 179-194, 12th Ed., Oussama Khatib, Vijay Kumar, Gaurav Sukhatme (Eds.) Springer-Verlag Berlin Heidelberg 2014. |
Li, Tian, “Proposal Free Instance Segmentation Based on Instance-aware Metric”, Department of Computer Science, Cranberry-Lemon University, Pittsburgh, PA., pp. 1-2, date unknown. |
Li, Yanghao et al., “Revisiting Batch Normalization for Practical Domain Adaptation”, arXiv preprint arXiv:1603.04779, pp. 1-12, 2016. |
Li, Yanghao, et al., “Demystifying Neural Style Transfer”, arXiv preprint arXiv:1701.01036, pp. 1-8, 2017. |
Li, Yanghao, et al., “Factorized Bilinear Models for Image Recognition”, arXiv preprint arXiv:1611.05709, pp. 1-9, 2016. |
Li, Yin and Hou, Xiaodi and Koch, Christof and Rehg, James M. and Yuille, Alan L., “The Secrets of Salient Object Segmentation”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280-287, 2014. |
MacAodha, Oisin, Campbell, Neill D.F., Kautz, Jan, Brostow, Gabriel J., “Hierarchical Subquery Evaluation for Active Learning on a Graph”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-8, 2014. |
Mur-Artal, R. et al., "ORB-SLAM: A Versatile and Accurate Monocular SLAM System," IEEE Transactions on Robotics, Oct. 2015, pp. 1147-1163, vol. 31, No. 5, Spain. |
Norouzi, Mohammad, et al., “Hamming Distance Metric Learning”, Departments of Computer Science and Statistics, University of Toronto, pp. 1-9, date unknown. |
Paszke, Adam et al., "ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation", CoRR, abs/1606.02147, pp. 1-10, 2016. |
PCT International Search Report and Written Opinion dated May 23, 2019, International Application No. PCT/US2019/19839, filed Feb. 27, 2019. |
Ramos, Sebastian, et al., “Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep Learning and Geometric Modeling”, arXiv: 1612.06573v1 [cs.CV] pp. 1-8, Dec. 20, 2016. |
Richter, Stephan R. et al., “Playing for Data: Ground Truth from Computer Games”, Intel Labs, European Conference on Computer Vision (ECCV), Amsterdam, the Netherlands, pp. 1-16, 2016. |
Sattler, T. et al., “Are Large-Scale 3D Models Really Necessary for Accurate Visual Localization?” CVPR, IEEE, 2017, pp. 1-10. |
Schindler, Andreas et al. “Generation of high precision digital maps using circular arc splines,” 2012 IEEE Intelligent Vehicles Symposium, Alcala de Henares, 2012, pp. 246-251. doi: 10.1109/IVS.2012.6232124. |
Schroff, Florian, et al., (Google), “FaceNet: A Unified Embedding for Face Recognition and Clustering”, CVPR, pp. 1-10, 2015. |
Somani, Adhiraj et al., "DESPOT: Online POMDP Planning with Regularization", Department of Computer Science, National University of Singapore, pp. 1-9, date unknown. |
Spinello, Luciano, Triebel, Rudolph, Siegwart, Roland, “Multiclass Multimodal Detection and Tracking in Urban Environments”, Sage Journals, vol. 29 Issue 12, pp. 1498-1515 (p. 18), Article first published online: Oct. 7, 2010; Issue published: Oct. 1, 2010. |
Szeliski, Richard, “Computer Vision: Algorithms and Applications” http://szeliski.org/Book/, pp. 1-2, 2010. |
Wang, Panqu and Chen, Pengfei and Yuan, Ye and Liu, Ding and Huang, Zehua and Hou, Xiaodi and Cottrell, Garrison, “Understanding Convolution for Semantic Segmentation”, arXiv preprint arXiv:1702.08502, pp. 1-10, 2017. |
Wei, Junqing et al., “A Prediction- and Cost Function-Based Algorithm for Robust Autonomous Freeway Driving”, 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA, pp. 1-6, Jun. 21-24, 2010. |
Welinder, Peter, et al., “The Multidimensional Wisdom of Crowds”; http://www.vision.caltech.edu/visipedia/papers/WelinderEtaINIPS10.pdf, pp. 1-9, 2010. |
Yang, C., et al., “Neural Network-Based Motion Control of an Underactuated Wheeled Inverted Pendulum Model,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 25, No. 11, pp. 2004-2016, Nov. 2014. |
Yu, Kai et al., “Large-scale Distributed Video Parsing and Evaluation Platform”, Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, China, arXiv:1611.09580v1 [cs.CV], pp. 1-7, Nov. 29, 2016. |
Zhang, Z. et al., "A Flexible New Technique for Camera Calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, Issue 11, pp. 1-5, Nov. 2000. |
Zhou, Bolei and Hou, Xiaodi and Zhang, Liqing, “A Phase Discrepancy Analysis of Object Motion”, Asian Conference on Computer Vision, pp. 225-238, Springer Berlin Heidelberg, 2010. |
Number | Date | Country
---|---|---
20220215672 A1 | Jul 2022 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 16868400 | May 2020 | US
Child | 17656415 | | US
Parent | 15906561 | Feb 2018 | US
Child | 16868400 | | US