The present disclosure relates to a method and system for camera-based detection, classification and tracking of distributed objects, and particularly to detecting, classifying and tracking moving objects along surface terrain through multiple zones without the transmission or storage of personally identifiable information.
The detection, classification and tracking of objects through space has a wide variety of applications. One such common application is in the monitoring and analysis of traffic patterns of people, vehicles, animals or other objects over terrain, for example, through city and suburban roads and intersections.
The detection and tracking of objects across surface terrain using cameras has been possible using overhead cameras, such as where the camera view angle is essentially perpendicular to the surface of the terrain being monitored. The ability to mount a camera directly overhead, however, is frequently difficult and costly because there are few overhead attachment points or they are not high enough to take in a significant undistorted field of view. As an alternative, it is possible to move the camera view angle off the perpendicular axis, for example, to place the camera on a lamp post along a road or at a street corner looking across the traffic area rather than down from overhead. As the camera angle deviates from the perpendicular, however, it becomes more difficult to identify the terrain surface and, more particularly, an object's path over the surface. One solution to this problem is to use multiple cameras to create stereoscopic vision from which the object's movement through space can be more readily calculated. This solution has drawbacks in that it requires multiple cameras for each area being monitored, greatly increasing hardware and installation costs.
In the particular field of traffic monitoring, more rudimentary systems are also known, but they are lacking in capabilities and usefulness. For example, data on the traffic patterns of an intersection has been collected through manual counting, depth sensors (e.g., infrared, radar, lidar, ultra-wide band), or the installation of a device such as a pneumatic road tube, a piezoelectric sensor or an inductive loop. Manual counting has safety risks associated with a human operator, and the counter collects a smaller sample size than other methods. Depth sensors and inductive loops are expensive. Moreover, all of these methods lack the ability to classify objects and track object paths; that is, these previous traffic monitoring methods and devices are limited in the amount of data they can collect. For example, it is difficult to distinguish between a truck and a car with the data from a pneumatic road tube, and an inductive loop cannot track pedestrians or bicycles. Finally, it is difficult or impossible to combine and evaluate the data from multiple traffic sensors in a manner that produces meaningful data to track traffic patterns.
The problems of known systems become particularly acute when the area to be monitored is large, for example when monitoring the traffic patterns of an entire cityscape. Specifically, to assess the usage volumes of streets, crosswalks, overpasses and the like, and the pathways of the objects traversing them over an entire cityscape, the system needs to track objects from one sensor zone to another. Typically, only camera-based systems have the capability to track paths, but they can only track continuous paths across zones if the zones overlap and objects can be handed off from one zone sensor to the other for tracking. This method, however, is exceedingly expensive as it requires full coverage of all areas without discontinuities.
The present disclosure addresses the above needs and the deficiencies of known methods of detecting, classifying and tracking distributed objects, such as is useful in vehicular and pedestrian traffic monitoring and prediction systems and methods. For example, the method and system disclosed herein may use a single side-mounted camera to monitor each zone or intersection, and track objects across multiple discontiguous zones while maintaining privacy; i.e., without storing or transmitting personally identifiable information about objects.
In a first aspect, a system and method are provided for detecting, classifying and tracking distributed objects in a single zone or intersection via a single camera with a field of view over the zone. The system and method include tracking objects transiting an intersection using a single camera sensor that acquires an image of the zone or cell, classifies an object or objects in the image, detects pixel coordinates of the objects in the image, transforms the pixel coordinates into a position in real space, and updates a tracker with the position of the object over time.
In a second aspect of the system and method, a plurality of zones or cells are monitored in a cityscape, wherein the plurality of zones may be discontiguous and do not overlap and wherein the paths from zone to zone are predicted through object characteristic and path probability analysis, without the storage or transfer of personally identifiable information related to any of the distributed objects.
A third aspect of the system and method is provided to configure and calibrate the sensor units for each zone using a calibration application running on a calibration device (e.g., mobile device/smartphone). The system and method includes mounting a sensor such that it can monitor a cell. A user scans a QR code on the sensor with a mobile device that identifies the specific sensor and transmits a request for an image to the sensor. The mobile device receives an image from the sensor and the user orients a camera on the phone to capture the same image as the sensor. The user captures additional data including image, position, orientation and similar data from the mobile device and produces a 3D structure from the additional data. The GPS position of the sensor or an arbitrary point is used as an origin to translate pixel coordinates into a position in real space.
While the disclosure above and the detailed disclosure below is presented herein by way of example in the context of a specific intersection, it will be understood by those of ordinary skill in the art that the concepts may be applied to other trafficked pathways where there is a beneficial advantage to track and predict traffic patterns of humans, animals, vehicles or other objects on streets, sidewalks, paths or other terrain or spaces. With the foregoing overview in mind, specific details will now be presented bearing in mind that these details are for illustrative purposes only and are not intended to be exclusive.
The accompanying drawings illustrate various non-limiting examples and innovative aspects of the system and method for camera-based detection, classification and tracking of distributed objects, calibration of the same and prediction of pathways through multiple disparate zones in accordance with the present description:
In simplified overview, an improved system and method for camera-based detection, classification and tracking of distributed objects is provided, as well as a system and method for calibrating the system and for predicting object paths across discontiguous camera view zones. While the concepts of the disclosure will be disclosed and described herein in the context of pedestrians and vehicles in a cityscape for ease of explanation, it will be apparent to those of skill in the art that the same principles and methods can be applied to many applications in which objects are traversing any terrain.
System Configuration
Referring to
In various embodiments, such as shown in
Sensor Calibration
Before the image sensor in each sensing unit can accurately track objects in its view (e.g., the intersection), the sensing unit must be calibrated so that an image from a single camera unit (i.e., without stereoscopic images or depth sensors) can be used to identify the positions of the objects on the terrain in its view field.
An exemplary method for calibrating the sensor unit is illustrated in the flow chart of
Referring to
Next, in step 203, the installer/user runs a calibration application on a mobile device. The calibration application is used to collect measurement data, as will be described in the following steps, for each sensor unit once it is fixed in position. In step 204, the calibration application is used to identify the specific sensor unit to be calibrated so that the measurement data can be associated with it. This may be accomplished in any number of ways, such as entry of a sensor unit serial number read from the body of the sensor unit, scanning a barcode or QR code on the sensor unit, or reading a unique identifier via RFID, Bluetooth, near field communication or other wireless communication.
Once the calibration application correctly identifies the sensor unit, the calibration application collects a sample image from the sensor unit in step 205. In an exemplary embodiment, the mobile device sends a request for the sample image to the cloud computer. The cloud computer requests the sample image from the sensor unit 101 over the internet and relays the sample image to the mobile device. In other embodiments, the calibration unit may connect to and directly request the sample image from the sensor unit 101, which then sends the sample image directly to the calibration unit. The installer uses the sample image as a guide for where to aim the mobile device when collecting images.
In step 206, the user orients the camera on the mobile device/calibration unit to take a first image that is substantially the same as the sample image. The calibration application uses a feature point matching algorithm, for example SIFT or SURF, to find tie points that match between the first image and the sample image. When a predetermined number of tie points is identified, the calibration application provides positive feedback to the user, such as by highlighting the tie points in the image, vibrating the phone or making a sound. In an exemplary embodiment, the identified tie points are distributed throughout the field of view of the sensor unit 101. In an exemplary embodiment, at least 50 to 100 tie points are identified.
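A hedged sketch of this tie-point matching step using OpenCV's SIFT implementation and Lowe's ratio test is shown below; the image file names are illustrative assumptions, and the 50-point check simply reflects the lower bound mentioned above.

```python
# Minimal sketch of tie-point matching between the sensor's sample image
# and the first image captured by the mobile device (assumed file names).
import cv2

sample_img = cv2.imread("sample_from_sensor.jpg", cv2.IMREAD_GRAYSCALE)
first_img = cv2.imread("first_mobile_capture.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(sample_img, None)
kp2, des2 = sift.detectAndCompute(first_img, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
tie_points = [pair[0] for pair in knn_matches
              if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

# Provide positive feedback once enough tie points are found
# (50 is the lower bound mentioned above).
if len(tie_points) >= 50:
    print(f"{len(tie_points)} tie points found - alignment is sufficient")
else:
    print("Not enough tie points - re-orient the camera")
```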
Upon receiving the positive feedback, in step 207 the calibration application preferably prompts the user to move the phone in a slow sweeping motion, keeping the camera oriented toward the sensor unit field of view (e.g., intersection). The sweeping process is illustrated in
In step 208, during the sweep the mobile device captures corresponding measurements of the mobile device's relative position to either the sample image or the previous image from the accelerometer, gyroscope and compass data. GPS coordinates may also be collected for each image.
As illustrated by
If a predetermined number of matching tie points is not detected, the calibration application instructs the user to re-orient the mobile device and perform an additional sweep 211. Afterwards, the process returns to step 208.
The installation is complete when a predetermined number of images and their corresponding measurements, from the accelerometer, gyroscope, compass etc., are collected 212. In an exemplary embodiment, at least 6 images are collected for the calibration. In alternate exemplary embodiments at least 6 to 12 images are collected.
In an exemplary embodiment, the sensor unit also obtains its longitude and latitude during the installation process. If the sensor unit does not include a GPS receiver, the user may hold the mobile device adjacent to the sensor unit and the application will transmit GPS coordinates to the sensor unit. If neither the sensor unit nor the mobile device has a GPS sensor, the longitude and latitude coordinates are determined later from a map and transmitted or entered into the sensor unit.
Once the calibration data are collected, including the N images, the N corresponding compass measurements, the N−1 corresponding measurements of the relative position of the mobile device obtained from the accelerometer and gyroscope, and the Kn tie points, a transform is created in the processing phase. This transform converts the pixel coordinates of an object in an image into real-world longitude and latitude coordinates.
In an exemplary embodiment, the calibration data is stored in the sensor unit or the cloud computer upon completion of the sensor unit calibration. The processing phase to calculate the transform is carried out on the sensor unit or the cloud computer. A structure from motion (SFM) algorithm may be used to calculate the 3D structure of the intersection. The relative position and orientation measurements of each image are used to align the SFM coordinate frame with an arbitrary real-world reference frame, such as East-North-Up (“ENU”), and rescale distances to a real-world measurement system such as meters or the like.
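The alignment and rescaling step can be expressed as a single similarity transform (rotation, uniform scale, translation) estimated from points whose real-world positions are known. A minimal sketch using the Umeyama method follows, with placeholder point values; the disclosure does not prescribe this particular estimator.

```python
# Sketch: estimate the similarity transform (scale, rotation, translation) that
# maps SFM coordinates onto an East-North-Up frame (Umeyama's method).
import numpy as np

def similarity_transform(src, dst):
    """Return scale s, rotation R, translation t with dst ~ s * R @ src + t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, sing, Vt = np.linalg.svd(cov)
    sign_fix = np.eye(src.shape[1])
    if np.linalg.det(U @ Vt) < 0:
        sign_fix[-1, -1] = -1.0            # avoid a reflection
    R = U @ sign_fix @ Vt
    s = np.trace(np.diag(sing) @ sign_fix) / src_c.var(axis=0).sum()
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Placeholder correspondences: SFM points and their known ENU positions (meters).
sfm_pts = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0], [0.3, 2.0, 0.1], [2.2, 1.9, 0.0]])
enu_pts = np.array([[0.0, 0.0, 0.1], [6.1, 0.4, 0.0], [1.4, 10.2, 0.5], [11.0, 9.8, 0.1]])

s, R, t = similarity_transform(sfm_pts, enu_pts)
aligned_enu = (s * (R @ sfm_pts.T)).T + t   # SFM points expressed in ENU meters
```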
The GPS position of the sensor unit or an arbitrary point in the sample image is used as the origin to translate the real-world coordinates previously obtained into latitude and longitude coordinates. In an exemplary embodiment, the GPS position and other metadata is stored in the Sensor Database 118 in the cloud computer.
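For illustration, a hedged sketch of translating ENU offsets (in meters) into latitude and longitude using the sensor unit's GPS position as the origin, based on a standard small-area equirectangular approximation; the origin coordinates are placeholders.

```python
# Sketch: convert East-North offsets in meters to latitude/longitude,
# using the sensor unit's GPS fix as the local origin.
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def enu_to_latlon(east_m, north_m, origin_lat, origin_lon):
    """Small-area approximation; adequate over a single intersection."""
    dlat = (north_m / EARTH_RADIUS_M) * (180.0 / math.pi)
    dlon = (east_m / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat)))) * (180.0 / math.pi)
    return origin_lat + dlat, origin_lon + dlon

# Example with a placeholder origin (the sensor unit's GPS position).
lat, lon = enu_to_latlon(east_m=12.5, north_m=-3.2, origin_lat=40.7128, origin_lon=-74.0060)
```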
An exemplary SFM algorithm is dense multi-view reconstruction. In this example, every pixel in the image sensor's field of view is mapped to the real-world coordinate system.
An additional exemplary SFM algorithm is a homography transform illustrated in
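Although the referenced figure is not reproduced here, a homography-based mapping from pixel coordinates to ground-plane coordinates can be sketched as follows; the four pixel/ground correspondences are placeholders that would come from the calibration data.

```python
# Sketch: map pixel coordinates of ground-contact points to ground-plane
# coordinates (ENU meters) with a planar homography.
import cv2
import numpy as np

# Placeholder correspondences: four pixel points and their ground positions.
pixel_pts = np.array([[120, 600], [900, 580], [820, 250], [200, 270]], dtype=np.float32)
ground_pts = np.array([[0.0, 0.0], [15.0, 0.0], [14.0, 22.0], [1.0, 21.0]], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, ground_pts)

def pixel_to_ground(u, v):
    """Transform one pixel coordinate into ground-plane meters."""
    pt = np.array([[[u, v]]], dtype=np.float32)   # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(pt, H)[0, 0]  # (east, north)

east, north = pixel_to_ground(512, 400)
```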
Once configured the sensor unit can track the path of distinct objects through each cell or intersection.
Detection and Tracking
In an exemplary embodiment illustrated in
The process of generating the path begins with the sensor unit taking a first image of the intersection at time t.
Referring to
The prediction module 602 predicts the path of objects identified in a second frame from time t−1. The predicted path of an object is based on the previous path of an object and its location in the second frame. Exemplary prediction modules 602 include a naïve model (e.g. Kalman Filter), a statistical model (e.g. particle filter) or a model learned from training data (e.g. recurrent neural network). Multiple models can be used as the sensor unit collects historical data. Additionally, multiple models can be used simultaneously and later selected by a user based on their accuracy.
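For concreteness, a minimal sketch of the constant-velocity Kalman filter option mentioned above, over the state [x, y, vx, vy]; the noise covariances are assumed placeholders that a production tracker would tune.

```python
# Sketch: constant-velocity Kalman filter used to predict an object's next position.
import numpy as np

class ConstantVelocityKalman:
    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])       # position and velocity
        self.P = np.eye(4) * 10.0                     # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only position is observed
        self.Q = np.eye(4) * 0.1                      # process noise (assumed)
        self.R = np.eye(2) * 1.0                      # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]                         # predicted position

    def update(self, x, y):
        z = np.array([x, y])
        residual = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ residual
        self.P = (np.eye(4) - K @ self.H) @ self.P

# One filter per tracked path: predict at time t, then update with the detection.
kf = ConstantVelocityKalman(x=3.0, y=5.0)
predicted = kf.predict()
kf.update(3.4, 5.6)
```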
The update module 603 attempts to combine the current object and location information from the first frame with the predicted path generated from the prediction module. If the current location of an object is sufficiently similar to the predicted position of a path the current location is added to the path. If an object's current location does not match an existing path a new path is created with a new unique path ID.
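As a hedged sketch of the update module's gating logic, simplified to greedy nearest-neighbor matching (which the disclosure does not prescribe): detections within an assumed distance gate extend an existing path, and all others start new paths with new unique IDs.

```python
# Sketch: greedy nearest-neighbor association of detections to predicted paths.
import itertools
import math

GATE_METERS = 2.0                   # assumed "sufficiently similar" threshold
path_id_counter = itertools.count(1)

def associate(detections, paths):
    """detections: list of (x, y); paths: dict path_id -> {"predicted": (x, y), "points": [...]}."""
    unmatched = []
    for det in detections:
        best_id, best_dist = None, GATE_METERS
        for pid, path in paths.items():
            dist = math.dist(det, path["predicted"])
            if dist < best_dist:
                best_id, best_dist = pid, dist
        if best_id is not None:
            paths[best_id]["points"].append(det)      # extend the existing path
        else:
            unmatched.append(det)
    for det in unmatched:                             # start a new path per unmatched detection
        paths[next(path_id_counter)] = {"predicted": det, "points": [det]}
    return paths
```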
In an exemplary embodiment, the sensor unit 101 transmits the path to the cloud computer 103 or other sensor units 101. The path may be transmitted after each iteration, at regular intervals (e.g. after every minute) or once the sensor unit 101 determines that the path is complete. A path is considered complete if the object has not been detected for a predetermined period of time or if the path took the object out of the sensor unit's field of view. The completion determination may be made by the cloud computer instead of the sensor unit.
The sensor unit 101 may transmit path data to the cloud computer 103 as a JSON text object to a web API over HTTP. Other transmission methods (e.g. MQTT) can be used. The object transmitted does not need to be text based.
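A minimal sketch of the JSON-over-HTTP transmission; the endpoint URL, identifiers and payload field names are illustrative assumptions rather than a published API.

```python
# Sketch: post a completed or in-progress path to the cloud computer's web API.
import requests

path_payload = {
    "sensor_id": "sensor-0101",          # assumed identifier format
    "path_id": "a1b2c3",                 # unique path ID
    "classification": "pedestrian",
    "points": [                          # positions over time (lat, lon, t)
        {"lat": 40.71280, "lon": -74.00600, "t": 1617600000.0},
        {"lat": 40.71282, "lon": -74.00597, "t": 1617600001.0},
    ],
}

# Hypothetical endpoint; MQTT or a binary encoding could be used instead.
response = requests.post("https://cloud.example.com/api/paths", json=path_payload, timeout=5)
response.raise_for_status()
```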
Coordinate Transformation
Next, the detection module 601, using the convolutional neural network, locates a point A where the object touches the ground near the bottom edge of the object bounding box. Then the detection module 601 locates a point B where the object touches the ground near the first vertical edge of the object bounding box. With points A and B identified, a first line is drawn between them. A second line is drawn that passes through point A and is perpendicular to the first line. A point C is located at the intersection of the second line and the second vertical edge of the bounding box. Points A, B and C define a base frame for the object. The position of the object in real space is any point on the base frame.
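The base-frame construction can be sketched geometrically as follows; the bounding box values, the example contact points, and the choice of the left edge as the "first" vertical edge are illustrative assumptions.

```python
# Sketch: derive base-frame points A, B, C (in pixel coordinates) from a
# bounding box and the detected ground-contact points of an object.
import numpy as np

def base_frame(bbox, ground_pts):
    """bbox = (x1, y1, x2, y2) with y increasing downward; ground_pts = Nx2 pixel points."""
    x1, y1, x2, y2 = bbox
    pts = np.asarray(ground_pts, dtype=float)

    A = pts[np.argmax(pts[:, 1])]          # contact point nearest the bottom edge
    B = pts[np.argmin(pts[:, 0])]          # contact point nearest the first (left) vertical edge

    # Second line: passes through A, perpendicular to line AB.
    ab = B - A
    perp = np.array([-ab[1], ab[0]])
    # Point C: intersection of that line with the second (right) vertical edge x = x2.
    if abs(perp[0]) < 1e-9:                # perpendicular is vertical; fall back to A's row
        C = np.array([x2, A[1]])
    else:
        t = (x2 - A[0]) / perp[0]
        C = A + t * perp

    return A, B, C                         # base frame; any point on it may serve as the position

A, B, C = base_frame((100, 50, 300, 400), [(120, 395), (180, 398), (260, 390)])
```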
Path Merging
An exemplary method for tracking an object from a first intersection to a second intersection is illustrated in
As described above, an object's path is tracked while transiting the intersection. The tracking begins at time t1. While the following steps describe a cloud computer merging paths from a first sensor unit and a second sensor unit, the process can be applied to a network of sensor units without a centralized cloud computer. The field of view on the ground of the sensor unit, or the cell, is modeled as a hexagon, square or any regular polygon. The object's predicted position is determined using a constant velocity model, a recurrent neural network or another similar method of time series prediction. An object's position is predicted based on the last known position of the object and the historical paths of other similarly classified objects.
The cloud computer begins the process of merging paths by receiving data from the sensor units at the internet gateway 111 via an API or message broker 112. The sensor event stream 113 is the sequence of object identities and positions, including their unique path ID, transmitted to the cloud computer. A track completion module 114 in the cloud computer monitors the paths in the intersection. A track prediction module 115 predicts the next location of the object based on the process described above. When the predicted location of a first object lies outside the field of view of the first sensor unit at a time tn, if there are no adjacent monitored intersections that include the predicted location of the object, the path is completed. The completed path is stored in the Track Database 117.
If there exists a monitored second intersection including the predicted location of the first object, the cloud computer searches for a second object with an associated path to merge. The second object and the first object from the first intersection must satisfy matching criteria for the merger to be successful. The matching criteria include the second object and the first object having the same classification, the tracking of the second object beginning between times t1 and tn within the timeframe of the track predictions, and the first position of the second object being within a radius r of the last known position of the first object. If the matching criteria are met, a track merging module 116 merges the first object with the second object by replacing the second object's unique path ID with the first object's unique path ID.
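A hedged sketch of this matching-criteria check follows; the radius value and the record fields are illustrative assumptions.

```python
# Sketch: test whether a candidate object in the second intersection can be
# merged with the first object's path.
import math

RADIUS_M = 30.0   # assumed merge radius r

def meets_matching_criteria(first_obj, second_obj, t1, tn):
    """Objects are dicts with 'classification', start time, and positions in meters."""
    same_class = first_obj["classification"] == second_obj["classification"]
    in_window = t1 <= second_obj["track_start"] <= tn
    dist = math.dist(first_obj["last_position"], second_obj["first_position"])
    return same_class and in_window and dist <= RADIUS_M

first_obj = {"classification": "car", "last_position": (110.0, 42.0), "path_id": "first-path-id"}
second_obj = {"classification": "car", "track_start": 17.0, "first_position": (118.0, 47.0)}
if meets_matching_criteria(first_obj, second_obj, t1=10.0, tn=20.0):
    second_obj["path_id"] = first_obj["path_id"]   # merge by adopting the first object's path ID
```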
The accuracy of the merging process is improved with the inclusion of object appearance information in addition to the identifying information. The object appearance information may include a histogram of oriented gradients or a convolutional neural network feature map.
If there are no tracked objects in the second intersection that meet the matching criteria of the first object, then the first path is completed.
If more than one object in the second intersection meets the matching criteria, a similarity metric D (e.g., mean squared distance) is calculated for each object meeting the matching criteria in the second intersection. A matching object is selected from the plurality of objects in the second intersection, based on the similarity metric exceeding a predetermined threshold, to merge with the first object.
The object appearance information may be incorporated into the similarity metric and the predetermined threshold. This improves accuracy when object mergers are attempted at a third, fourth or subsequent intersection.
If a plurality of matching objects have a similarity metric above the predetermined threshold, the object with the highest similarity metric is selected to merge with the first object. A high similarity metric is an indication that two objects are likely the same.
There exist additional methods of determining a matching object from a plurality of objects. The selection process may be treated as a combinatorial assignment problem, in which the similarity of each candidate pair of first and second objects is tested by building a similarity matrix. The matching object may also be determined by using the Hungarian algorithm or a similar method.
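A brief sketch of this combinatorial-assignment alternative using SciPy's implementation of the Hungarian algorithm; the similarity values and threshold are placeholders, with rows and columns corresponding to first-intersection objects and second-intersection candidates respectively.

```python
# Sketch: resolve many-to-many merge candidates with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# similarity[i, j] = similarity metric D between first-intersection object i
# and second-intersection candidate j (placeholder values).
similarity = np.array([[0.9, 0.2, 0.4],
                       [0.1, 0.8, 0.3]])

# The solver minimizes total cost, so negate the similarity matrix.
rows, cols = linear_sum_assignment(-similarity)

THRESHOLD = 0.5   # assumed predetermined threshold
merges = [(i, j) for i, j in zip(rows, cols) if similarity[i, j] >= THRESHOLD]
# merges -> [(0, 0), (1, 1)]: each first object paired with its best matching candidate
```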
In an exemplary embodiment, the process of merging a first and second object from different intersections is performed iteratively, resulting in paths for the first object spanning an arbitrary number of sensor-unit-monitored intersections.
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded on a non-transitory computer-readable storage media in a machine-readable format, or on other non-transitory media or articles of manufacture. In some examples, the signal bearing medium may encompass a computer-readable medium, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, memory. In some implementations, the signal bearing medium may encompass a computer recordable medium, such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, etc. In some implementations, the signal bearing medium may encompass a communications medium, such as, but not limited to, a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.). Thus, for example, the signal bearing medium may be conveyed by a wireless form of the communications medium.
The non-transitory computer readable medium could also be distributed among multiple data storage elements, which could be remotely located from each other. The computing device that executes some or all of the stored instructions could be a sensor unit. Alternatively, the computing device that executes some or all of the stored instructions could be another computing device, such as a cloud computer.
It should be understood that this description (including the figures) is only representative of some illustrative embodiments. For the convenience of the reader, the above description has focused on representative samples of all possible embodiments, and samples that teach the principles of the disclosure. The description has not attempted to exhaustively enumerate all possible variations. That alternate embodiments may not have been presented for a specific portion of the disclosure, or that further undescribed alternate embodiments may be available for a portion, is not to be considered a disclaimer of those alternate embodiments. One of ordinary skill will appreciate that many of those undescribed embodiments incorporate the same principles of the disclosure as claimed and others are equivalent.
Applicant hereby claims priority to provisional U.S. patent application Ser. No. 62/830,234 filed Apr. 5, 2019, entitled “System and Method for Camera-Based Distributed Object Detection, Classification and Tracking.” The entire contents of the aforementioned application are herein expressly incorporated by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US20/25605 | 3/29/2020 | WO | 00

Number | Date | Country
---|---|---
62830234 | Apr 2019 | US