Embodiments disclosed herein are in the field of video data processing, in particular object detection and pattern recognition.
Various techniques exist for processing digital video data for purposes of image recognition, pattern recognition, object detection, etc. Typically, video data is captured by one type of processing device and then analyzed by that device or by another type of processing device which has received the captured data. For example, one method includes acquiring visual image primitives from a video input comprising visual information relevant to a human activity. The primitives are temporally aligned to an optimally hypothesized sequence of primitives transformed from a sequence of transactions as a function of a distance metric between the observed primitive sequence and the transformed primitive sequence. Another method detects a moving target using a reference image and an inspection image from the images captured by one or more cameras. A moving target is detected from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to a location of an epipolar direction common to the two images, and any detected moving target is displayed on a display.
Current video data processing techniques typically operate on a single type of video input data. Aggregating data from multiple sources, including individual contributors, into a larger combined source has not previously been possible.
In addition, it has proven challenging to process large amounts of streaming video data effectively.
It would be desirable to have a system and method for receiving digital video data from multiple sources of different types, and for analyzing the raw data from the different sources as a single data source to determine facts about a scene, both at a point in time and over a period of time.
Embodiments described herein include a system and method for video data processing. Video data from multiple streaming sources is processed in order to determine the status of various aspects of environments. The video data processing system uses video streams to measure activity levels in the physical world. This provides information that enables people and businesses to interact more effectively and efficiently with physical locations and cities.
In an embodiment, the video data processing system uses input video streams from a variety of sources. Sources include existing video feeds such as security cameras, video feeds contributed by system users through old smartphones placed in a fixed location, simple webcams, or embedded sensors that contain a video feed and some video analysis software. The system includes a backend subsystem consisting of specially programmed processors executing software that manages video feeds, processes the video feeds into data, stores the data, and computes analytics and predictions.
Embodiments facilitate the processing and analysis of any possible video source, whatever its type or support. These sources include: existing public video cameras in the form of standard video streams; existing public video feeds in the form of .jpg files regularly updated on a website; simple webcams; security cameras installed for security purposes but whose feeds can be ingested by the video data processing system to extract data; and video streams or files coming from old cell phones that run a video sensing application specific to the video data processing system. The sensing application can produce either actual video streams or encoded video files, and pushes them to a hosted storage server such as an FTP server or Amazon S3. When a smartphone is used as a video sensor, the capture device is configured to stream data out through files. This solves a major problem of setting up cameras and exporting their feeds to a different network location on the internet.
The system thus provides a unified framework to intake video frames coming from these various described sources, and to unify their geolocation and time reference so as to be able to compare any geolocated or time stamped data extracted from them.
In an embodiment using a smartphone, consecutive video files on the smartphone are encoded, time stamped, and pushed to an FTP server to produce a stable stream of video content without requiring a video streaming server in the loop; a simple file server suffices.
These video feeds are produced by multiple types of entities, including: companies or entities that own video feeds and provide them for free (e.g., the DOT in New York); companies or entities (e.g., retailers) that own video feeds and provide them to the video data processing system in exchange for having them transformed into valuable data; companies or organizations that are paid for access to the video feeds they own and operate (e.g., earthcam); companies with whom there is no monetary exchange, e.g., they provide their feed in exchange for a minimal amount of data for free; and individual contributors who use old smartphones, or contribute old cell phones, which are hung on windows or wall surfaces. By running the sensing application on these old phones, new video data processing system video feeds are created.
Compiling video data from many different sources to create data insights and analytics yields a greater scaling network effect than any single data source can provide. This is made possible in part by aggregating data from multiple sources (including individual contributors) into a combined, stable source.
Embodiments include various video algorithms dedicated to transforming a video signal into data and measurements. Embodiments further include data algorithms that combine measurements from video feeds with lower resolution activity maps, weather information, and local event data, to infer place activity in space and time. An output interface includes tools to turn the data extracted from videos into human readable information and useful actions.
The input video sources 102 are very varied in nature and quality as previously described. A backend subsystem 104 receives video data streams from the input video sources 102. Feed management module 112 receives the video data streams. Other management modules include a worker management module 114, a locations management and geographic intelligence module 116, and a data storage module 118. As used herein, “worker” implies one or more servers and one or more processors for processing data. Workers can be distributed geographically, and processing tasks may be distributed among workers in any fashion. Data storage module 118 is shown as a single module existing in backend 104. However, actual data storage can be, and typically is, distributed anywhere over the internet. Data storage module 118 is thus a data management module and possibly actual data storage, but not all data will be stored locally.
Input video sources 102 also communicate with a contributor management module 110. Contributor management module 110 oversees and tracks the various input video sources, including their locations and “owners”. In some instances, individual owners are paid for making their video data available to the system. Video analysis workers 106 represent multiple special purpose processors tasked with executing video analysis worker processes as further described below. Analyzed video data is stored by data storage manager 118, and also further analyzed by data analytics module 108 as further described below. Data analytics module 108 represents special purpose processors executing data analytics processes. Data analytics module 108 further has access to external data sources 122, which provide data such as weather information, event information related to a location of the video data, etc. Data analytics module 108 may combine external data with the output of the video analysis workers 106 to produce more meaningful output data that is stored by data storage management 118 and output to user interface and user applications 120. User interface and applications 120 make processed video data available to users in a highly accessible form. User interface 120 is available in various embodiments on any computing device with processing capability, communication capability, and display capability, including personal computers and mobile devices.
In an embodiment, backend 104 is a multi-layered system whose roles include: registering all existing video streams and their sources; if the source is a contributor, storing availability and contact information to provide data or to pay them, based on the availability of their sensors; managing “worker” processes that process all video feeds in a different subsystem, and will report data to backend 104; gathering and storing data extracted from video streams; consolidating and merging all data from various sources (e.g., video measurements, weather APIs); packaging and serving data for applications or as an output of backend 104; and architecturally removing the dependency of the video algorithm processor on the various sources of data.
According to one aspect of backend 104, it serves to coordinate the distribution of all input sources and worker processes over different types of networks and environments.
Various applications APIs 220 can be used to allow various applications to communicate data to data APIs 224.
The video data processing system executes various video algorithms and various data algorithms. In an embodiment, the video algorithms are based on a layered stack of algorithms. In an embodiment, these algorithmic layers are based on the assumption that video feeds have a static viewpoint and an average frame rate greater than 0.2 frames per second, but embodiments are not so limited.
There is no solution today to continuously send video from a mobile phone to a server over long periods of time (think months, 24/7). Video streaming from mobile apps poses several challenges. Video streaming libraries for iOS and Android are of poor quality, badly supported, and/or unreliable, especially for purposes of the current embodiment, where the intent is to stream 24/7. With video streaming libraries one can publish a stream from a mobile app, but this stream needs to be captured by a streaming server and restreamed for consumption (e.g., by a video worker). A streaming server is a complex piece of infrastructure to maintain and non-trivial to scale.
A protocol according to an embodiment addresses these issues. The mobile app continuously captures video clip files of a given length L from its camera. The timestamp T at which the clip was captured is embedded in the file metadata. When a video clip is ready, it is uploaded directly to dedicated file storage in the cloud (many easily scalable distributed solutions are available for this kind of storage). A video worker that is processing a mobile app stream polls the file storage for the latest video clips. It downloads the latest clip and processes the video in it, using frame timestamps derived from the embedded timestamp T. When it is done, it cleans up the file storage and keeps polling until the next clip is available. A video worker will consider a stream broken if, after a given amount of time polling file storage, no new clip has arrived. When missing data or files are detected, the mobile application is remotely commanded to reduce its video bitrate if possible.
This results in a lossless video streaming protocol that is not entirely real time (clips are processed with a delay of at least L). For this use case the delay in real-time processing is acceptable, as long as the timestamp of any given clip or frame can be reconstructed.
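The clip-based protocol above can be sketched in a few lines. The sketch below is a minimal illustration, not the system's actual implementation: a local directory stands in for the cloud file storage, and the file-naming scheme, function names, and default timeouts are assumptions introduced for the example.

```python
import os
import time

BROKEN_STREAM_TIMEOUT = 60.0   # seconds of polling with no new clip

def upload_clip(storage_dir, clip_bytes, timestamp):
    """Uploader side: the capture timestamp T is encoded in the file name
    (standing in for file metadata) so the worker can reconstruct frame
    times without any streaming server in the loop."""
    path = os.path.join(storage_dir, "clip_%.3f.mp4" % timestamp)
    with open(path, "wb") as f:
        f.write(clip_bytes)
    return path

def poll_clips(storage_dir, process, timeout=BROKEN_STREAM_TIMEOUT,
               poll_interval=1.0, clock=time.monotonic, sleep=time.sleep):
    """Worker side: poll storage for the next clip, process it, clean up,
    and declare the stream broken if no clip arrives within `timeout`."""
    deadline = clock() + timeout
    while clock() < deadline:
        clips = sorted(os.listdir(storage_dir))   # fixed-width names sort by time
        if clips:
            oldest = clips[0]
            timestamp = float(oldest[len("clip_"):-len(".mp4")])
            path = os.path.join(storage_dir, oldest)
            with open(path, "rb") as f:
                process(f.read(), timestamp)      # derive frame times from T
            os.remove(path)                       # clean up before polling again
            deadline = clock() + timeout          # reset broken-stream timer
        else:
            sleep(poll_interval)
    return "stream broken"
```

A real deployment would replace the directory operations with calls to the hosted storage service (e.g., S3 listing and deletion), but the polling, timestamp-recovery, and cleanup logic is the same.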
Video Algorithms
Moving object detection is a layer that detects moving objects or moving parts in the image. It is based on estimating the background image of a fixed video stream by modeling each point using a Gaussian distribution of values on each channel of a color image, or the amplitude of the combined channels. Each pixel is then modeled as either: Gaussian distributions for all channels of the color image; or a Gaussian distribution for the pixel luminance, expressed as a linear combination of the three color channels.
Such a model is created and stored in memory for each coordinate point of an image. As new frames arrive in the system, the Gaussian model estimation is updated with the new values of each pixel at the same coordinate by storing the sum S of the pixel values over time, and the sum T of squared values. Given that the total number of observations is N, the mean of the Gaussian model can then be evaluated as S/N, and the square of the standard deviation (the variance) as T/N-(S/N)*(S/N).
In order to adjust the Gaussian values to potential changes in the mean and standard deviation, these values are computed on moving time windows. In order to reduce the complexity of computing all values over a moving window, a half-distance overlapping scheme is used. If M is the minimum window size (number of samples) over which the mean and standard deviation are to be estimated, two sets of overlapping sums and square sums are constantly stored: the current sum set and the future sum set. Each set holds the number of samples, the sum of values, and the sum of squared values. When the current set reaches M samples, the future set is reset, and is then updated with each new frame. When the current set reaches M*2 samples, the future set reaches M samples. The future set values are then copied into the current set values, and the future set is reset. This way, at any point in time after the first M samples, the estimation of the Gaussian model always has more than M samples, and it is adjusted over time windows of M*2 samples. M is typically set to values ranging from 10 to 1000 depending on applications and video frame rates.
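The half-distance overlapping scheme for one pixel channel can be sketched as follows; the class name and the default M are illustrative assumptions, not taken from the source.

```python
import math

class PixelGaussian:
    """Running Gaussian model of one pixel channel, estimated over moving
    windows with the half-distance overlapping scheme: the future set starts
    accumulating once the current set has M samples, and replaces the
    current set when the current set reaches 2*M samples."""

    def __init__(self, m=10):
        self.m = m
        self.cur = [0, 0.0, 0.0]   # [count N, sum S, sum of squares T]
        self.fut = [0, 0.0, 0.0]

    def update(self, value):
        self.cur[0] += 1
        self.cur[1] += value
        self.cur[2] += value * value
        if self.cur[0] > self.m:           # future set runs while cur is in (M, 2M]
            self.fut[0] += 1
            self.fut[1] += value
            self.fut[2] += value * value
        if self.cur[0] >= 2 * self.m:      # swap: future becomes current
            self.cur, self.fut = self.fut, [0, 0.0, 0.0]

    def mean(self):
        n, s, _ = self.cur
        return s / n                       # mean = S/N

    def std(self):
        n, s, t = self.cur
        return math.sqrt(max(t / n - (s / n) ** 2, 0.0))   # var = T/N - (S/N)^2
```

After the first M samples, `mean()` and `std()` are always backed by at least M observations while still tracking changes over windows of 2*M samples.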
Once a new frame comes in, for each pixel location in an image, it is first assessed whether the current value is part of the background or not. To do so, the normalized distance of the current pixel values is computed for each color channel with the background mean values for each channel. The normalized distance is the distance of the current point to the closest mean adjusted with the standard deviation for the background images. This distance is then normalized towards the amplitude of each channel or the average of all channels. The raw distance calculated from above is divided by a uniform factor of the average values.
If this normalized distance is greater than a predefined threshold, the pixel is classified as a foreground pixel and assigned to the moving objects. If not, the pixel is deemed part of the background; it is not assigned to the foreground masks but is used to update the current background models.
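The per-pixel classification step can be sketched as below. The exact normalization and threshold value are not specified in the source, so the one-standard-deviation adjustment and the default threshold here are illustrative assumptions.

```python
def is_foreground(pixel, means, stds, avg_amplitude, threshold=1.0):
    """Classify a pixel as foreground if its normalized distance to the
    background model exceeds a threshold. `pixel`, `means`, and `stds` are
    per-channel (e.g., R, G, B). The raw per-channel distance is adjusted
    by the channel standard deviation, summed, and then normalized by the
    average channel amplitude. Threshold value is illustrative."""
    distance = 0.0
    for value, mean, std in zip(pixel, means, stds):
        d = abs(value - mean) - std        # distance beyond one std deviation
        distance += max(d, 0.0)
    normalized = distance / max(avg_amplitude, 1e-6)   # normalize by amplitude
    return normalized > threshold
```

Pixels that return False would be fed back into the running background model; pixels that return True are added to the moving-object mask.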
At any point in time, the algorithm assumes that there could be a rapid change in background, so it maintains a candidate background point. That point is either updated or created for each point detected as a foreground point.
If the image is too large, it can be subsampled by an integer factor to evaluate a lower resolution version of the background. Also, the background statistics can be updated only once every n frames. This makes the algorithm run in real time regardless of the dimensions or frame rate of the video. The CPU occupancy of such a process is controlled and defined by these two parameters. This is a unique way to linearly adjust algorithm reactivity and accuracy based on available or desired computation power.
The object classification layer classifies moving foreground objects (described with reference to the previous layer) into classes of known objects or “noise”. In one embodiment, a customized version of the Haar Pyramid approach is used here. Once all moving objects have been detected, they are classified using a classic supervised learning approach, based on the Haar-like feature Cascade classification (as described in P. A. Viola, M. J. Jones: Robust Real-Time Face Detection. ICCV 2001).
According to embodiments, the system is trained and tested, and the algorithms run only on moving objects, thereby reducing the possibilities and variety of the training and input sets of images. In short the classification scheme only needs to recognize moving urban objects from each other, as opposed to recognizing one type of object from any other possible matrix of pixels.
A tracking layer detects the trajectory of one given object over time. The system uses a novel approach built on a holistic model of the trajectories in the image, based on existing known foreground objects or newly emerged objects.
An analysis layer uses the type and trajectory information to detect higher level, human readable data such as vehicle or pedestrian speed, and people entering or exiting a location. Inferences can also be drawn based on building layouts, vehicle traffic flows, and pedestrian traffic flows.
Data Algorithms: Line Analysis
Embodiments also include data algorithms that perform specific tasks based on the data obtained from the main stack of video algorithms above. As an example of a data algorithm, line analysis will be described in detail below.
Line analysis is a data algorithm that uses a video of a line to detect how many people wait in line and how long it takes them to go through the whole line. Embodiments analyze a waiting line in real time video sequences. The goal of the algorithm is the estimation of line attributes in real time that can be useful for somebody in the process of deciding whether to join the line. For example, estimations for the number of people that currently wait in the line and for the current wait time are extracted. The current wait time is an approximation of the time that a person will have to wait in the line if she joins it in this moment. With reference to
With reference to
Process (1.1) works specifically by starting with the current frame input. The current frame input is run through a non-linear time-domain high-pass filter which contains the processes Z^(-1), absolute difference, and binary threshold. After being run through the non-linear time-domain high-pass filter, the R, G, B planes with saturation are added. The output of this is run through the space-domain median filter. Once filtered, the output is run through either of two routes. In one instance the output is run through a non-linear time-domain low-pass filter which performs a time-domain low-pass filter and binary threshold. After running through the filter, a copy with the mask is made and the binary threshold is found. The output of this is considered a high-activity area and is combined with the low-activity areas produced by the other instance. In the other instance the output from the space-domain filter has its colors inverted and noisy frames rejected before running through the same non-linear time-domain low-pass filter described above. The output of this is the low-activity areas. The low-activity area is subtracted from the high-activity area to return the area with movement.
Process (1.2) starts with the inputs current frame and expected background. The absolute difference of the current frame and the expected background is found and then R, G, B planes with saturation are added. The absolute difference is then merged with Background (MADB) and the binary threshold of that is found.
Process (1.3) works specifically by starting with an activity mask as the input. The activity mask is sent through an opening process and then the mask is expanded. The MAM is introduced to the output of that process, and the mask areas where the background does not change are sent to be copied and combined with the expected background. After the MAM is introduced, the process also inverts the mask and takes the areas where the background does change to make a copy of the current frame using these mask areas. It also takes a copy of that mask and combines it with the expected background. The weighted sum of these copies is found and combined with the masked copy of the unchanged background.
Process (1.4) contour extraction starts with the input activity mask. An opening is applied on the activity mask and the output is run through the TC89 algorithm to return the activity contours.
With reference to
There can be situations in which the image of the waiting line has gaps. This can be due to people standing too far from each other, or because the line passes behind occluding objects, like trees or light poles. To cope with these situations, contours that lie after the line's first contour end point are sought. If they meet certain conditions, they are appended to the line's tail, the end point is updated, and the search process is repeated until no more potential line contours are found.
Referring to
The operation (2.1) find line first contour starts with the input activity contours. The activity contours are run through the operation to find contours that touch the user-defined line start box. The output of the operation is then sorted to find the one with the largest area. The output from this is the line first contour.
The operation (2.2), find optimal path from start point over contour, uses the line first contour as input. The line first contour is processed to extract the contour curvature. The output of this is run through a curvature low-pass filter. After the filter, the curvature local maxima are found. The output results then provide the path over the contour between the start point and a local maximum that maximizes the benefit score B. The output of this process is the optimal curve model.
The operation (2.3), extend path from end point of first contour over fragmented contours, operates by taking the optimal curve model as an input. The end point of the optimal curve model is found, and then the derivative at a local interval around the optimal curve end point is found. The next operation is the initialization step for an iterative process, where the current line first contour is stored in S, all other contours are stored in R, and the curve end point is added as the first element of the path set P; this first element is represented by assigning subscript index i to zero. The iterative process goes through all contour elements N in R that are close to the current line first contour S and do not imply a subtle turn; these two decisions are made based on the input threshold maps. The output is then analyzed for two outcomes: if the size of N==0, then the extended path (P) has been found; if not, then S=N and N is removed from R. The process recalculates the average, over all elements in N, of the element's farthest point from the current optimal curve end point. The derivative is then updated with this estimated average. The current optimal curve end point is then set to the point of the contour in N whose projection is highest over the derivative. The output of this process is then added to the extended path P, and the calculation of N using the threshold maps is iterated again.
The operation (2.4), update line model, starts with the extended path (P) as input. The path is subsampled to a fixed number of points. The subsamples are used to find the total length of the extended path (P), which is subtracted from its inverse, yielding a delta L that is input to a Gaussian estimator and used for normalization.
If normalized delta L is determined to be too high, then the curve model of the line has been found. If normalized delta L is not determined to be too high, the line model is updated with P before outputting the curve model of the line.
With reference to
The estimation of the number of people that wait in the line is the line integral along the line model of a people density function. Since the density function variable is a distance over the ground plane, a transformation from image pixels to ground plane distance units must be applied first. The transformation is pre-computed for the specific camera intrinsic and extrinsic parameters. The density function is numerically integrated and, therefore, a first super-sampling step is required to ensure proper accuracy of the result.
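The line-integral estimate can be sketched numerically as below. The pixel-to-ground transformation and the density function are stand-ins for the system's calibrated components, and the super-sampling factor is an illustrative assumption.

```python
def people_in_line(line_points, pixel_to_ground, density, supersample=4):
    """Estimate the number of people waiting as the numeric line integral of
    a people-density function along the line model. `line_points` are pixel
    coordinates of the line model in order; `pixel_to_ground` maps a pixel
    to ground-plane coordinates (metres), precomputed from the camera's
    intrinsic and extrinsic parameters; `density` returns people per metre
    at a ground point. All three are assumptions for this sketch."""
    # Super-sample the polyline so the numeric integration is accurate enough.
    samples = []
    for (x0, y0), (x1, y1) in zip(line_points, line_points[1:]):
        for k in range(supersample):
            t = k / supersample
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    samples.append(line_points[-1])
    # Transform to ground-plane units, then integrate density over distance.
    ground = [pixel_to_ground(p) for p in samples]
    total = 0.0
    for (gx0, gy0), (gx1, gy1) in zip(ground, ground[1:]):
        seg = ((gx1 - gx0) ** 2 + (gy1 - gy0) ** 2) ** 0.5   # metres
        mid = ((gx0 + gx1) / 2, (gy0 + gy1) / 2)
        total += density(mid) * seg                          # people = density * length
    return total
```

With an identity transform and a constant density of 0.5 people per metre, a 10-metre line yields an estimate of 5 people.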
In low quality video footage, it is sometimes impossible to distinguish individual people, so tracking waiting persons to know the time that it takes to travel the whole line is usually not viable. Instead, this approach to estimating the average wait time consists in dividing the line length by the average line speed. The line speed is estimated by computing the optical flow of a set of salient points into the line contours over each pair of consecutive frames.
The salient points are found by running the Shi-Tomasi corner detector [1] over the line contour areas. The optical flow is computed with a pyramidal version of the Lucas-Kanade algorithm [2]. Noisy flow vectors—those with impossible speeds for a person walking in a line—are removed from the resulting set. When dividing the line length by the average speed, the resulting wait time is a number of frames. This figure depends on the camera frame rate. The conversion to seconds is achieved by dividing the result by the camera frame rate. Since the system has to deal with variable frame rate video streams, there is a frame rate estimator block that provides this measure.
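The division of line length by average speed, with noise filtering and the frames-to-seconds conversion, reduces to a few lines. In this sketch the flow vectors are assumed to already be projected onto the line model as scalar displacements in pixels per frame, and the noise cut-off is an illustrative assumption.

```python
def average_wait_seconds(line_length_px, flow_vectors_px, frame_rate,
                         max_speed_px=20.0):
    """Estimate the average wait time: drop noisy flow vectors (speeds
    impossible for a person walking in a line), average the rest to get the
    line speed in pixels/frame, divide line length by speed to get a wait
    in frames, then convert to seconds using the estimated frame rate.
    `max_speed_px` is an illustrative noise cut-off, not from the source."""
    speeds = [abs(v) for v in flow_vectors_px if 0 < abs(v) <= max_speed_px]
    if not speeds:
        return None                               # no reliable motion estimate
    line_speed = sum(speeds) / len(speeds)        # pixels per frame
    wait_frames = line_length_px / line_speed     # frames to travel the line
    return wait_frames / frame_rate               # seconds
```

Because the intermediate result is in frames, the final division by the estimated frame rate is what makes the figure robust to variable-frame-rate streams.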
Referring to
Referring to 3.2, the average wait time is estimated by transforming the line contours and picking contours for the shortest length model, also using a transform of the curve model of the line. Mask contours are generated from the contours. Then a copy with the mask (using the current frame) is used to find Shi-Tomasi features in this masked frame. Also, an inverse-transformed copy with the mask (using the transformed current frame) is used to find the Shi-Tomasi features in the second copy. The Shi-Tomasi features from both copies are inputs to the Lucas-Kanade pyramidal optical flow, which computes the flow vectors. Then noisy flow vectors are filtered out and the resulting vectors are projected over the line model.
Using a camera-to-ground-plane transformation, the camera vectors are transformed to ground vectors and averaged, yielding a line speed estimate which is filtered for line speed outliers both with and without being run through a Gaussian estimator. The filtered line speed is run through the time-domain low-pass filter to obtain the average wait in frames (line_length divided by line_speed). The average wait frames are then filtered and converted from frames to seconds using a frame rate estimator, to yield the average wait.
Additional Data Algorithms
Queue Time Estimation
Referring to
This algorithm assumes a static camera is looking at a line with a static end point but potentially varying starting points or shapes of the line itself.
The goal is to estimate wait time in the line.
In an embodiment, the method includes three main modules: foreground segmentation, queueing line construction, and queueing time estimation.
A foreground segmentation module (1) builds a background model from a static video stream (see also the discussion of the object detection and tracking section) and segments out all foreground blobs in the scene.
A queue skeleton extraction module (2) constructs a line model based on consistency in shape and orientation of connected foreground blobs for each pre-selected start point of the queue. One approach used to define the line is a mathematical morphology skeleton extraction. It could also be a B-spline approximation or any other algorithm that extracts the center line of a surface.
Unrelated foreground in the scene is removed while discontinuous foreground blobs of the line are added, both determined based on their consistency with the line model. Once the line skeleton is extracted, we can compute the line length (in pixels) and the line area.
A motion speed estimation module (3) receives the output of the line skeleton extraction module (2). This module estimates the speed of the line, in pixels/second, at each point of the line. By using a standard tracking algorithm such as salient point and local feature tracking, gradient descent tracking, or video flow, we can compute image motion in the line area. We can then project each estimated motion vector onto the line skeleton, using a perpendicular projection. Each projection gives an indication of speed along the line. By averaging out all samples along the line, and filling empty areas with linear interpolation, we can estimate the speed of the line at any point of the line skeleton. Samples need to be regularly spaced to get a good estimation.
Wait time estimation (4) estimates how long someone would wait in line if they joined the line now. Given a speed estimate at each point of the line, in pixels per second, all we need to do is divide the distance between two pixels of the skeleton by the average speed between these two pixels. We define a sampling step S and start at the first pixel of the line, using this pixel (index n) and the n+S pixel to measure the estimate. We add up estimates until the second index reaches the end of the line.
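The summation over sampling steps described above can be sketched as follows; the function name and default step are illustrative assumptions.

```python
def queue_wait_seconds(skeleton, speeds, step=5):
    """Wait-time estimation over the line skeleton: for each pixel pair
    (n, n+S), sum distance / average speed. `skeleton` is a list of (x, y)
    pixel points in line order; `speeds` gives the line speed in pixels per
    second at each skeleton point (interpolated, as described above)."""
    total = 0.0
    n = 0
    while n + step < len(skeleton):
        (x0, y0), (x1, y1) = skeleton[n], skeleton[n + step]
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5      # pixels
        avg_speed = sum(speeds[n:n + step + 1]) / (step + 1)  # pixels/second
        if avg_speed > 0:
            total += dist / avg_speed                         # seconds for segment
        n += step
    return total
```

For a straight 10-pixel skeleton moving uniformly at 2 pixels per second, the estimate is 5 seconds.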
Automated segmentation of urban scenes: by analyzing where, on average, people or vehicles or bicycles are located in an image we can automatically detect where sidewalks, roads, bike routes and road crossings are.
Using the same algorithms as the ones described above, we have trajectory information about different types of elements of urban scenes. We focus on pedestrians, vehicles, public transportation vehicles, and bicycles. By adding up all trajectories in a map of the video feed (pixels corresponding to the video feed pixels), we can build a map of where each type of element moves and is located in the image. If it is pedestrians, these areas will be mostly sidewalks, crossroads, parks, and plazas. If it is vehicles, it will be roads. For bicycles it will be bicycle lanes. If we detect both vehicles and pedestrians at non-overlapping times, this will be crosswalks.
This map will be quite accurate even if the detection and classification of objects is not very accurate. We typically can work with detectors that have a <10% false positive rate and a >50% detection rate. Using this approach we can automatically build a map of the image, thus improving classification results. We can either post-process detection scores based on this map (a pedestrian detection score will be increased in a sidewalk area and decreased in non-classified areas such as buildings or sky), or we can adjust algorithm parameters based on image location, a prior optimization approach as opposed to the posterior approach just described.
Analyzing the scene viewed by a static camera that runs the algorithms previously described has several positive impacts:
1. reducing noise: if we know where streets and sidewalks and crosswalks are we can eliminate any detection that is not an expected object—too big or too fast for one of the expected objects for example. We can also remove all detections outside of these zones.
2. automating setup: if we can automatically detect streets, sidewalks, crosswalks and entrances of buildings, we can automatically set turnstiles or building entrances to start counting people there, without any manual intervention.
3. camera calibration: if we know what type of objects are on average in a given zone, we can estimate calibration of the camera by comparing the expected surface of that object, based on speed angle, with the real life estimated size of this object. We can create a map where each pixel contains its estimated dimensions in actual dimensions, in meters.
The output for this scene analysis is: a set of scene “flows” or zones that are zones where one type of object is in majority; the type of object for these zones; the average surface and speed of an object at each point of this zone.
In order to get to this scene analysis, an embodiment uses the following algorithm components:
1. Trajectory computation over a long period of time: Using the same algorithms as the ones described above, we have trajectory information about different types of elements of an urban scene. We let this algorithm run on enough video coming from this scene to extract a minimum of T trajectories of objects.
2. Trajectory clustering using a DBSCAN variant: DBSCAN is an algorithm for clustering points. We transpose it directly to classify trajectories. For that we define a distance between trajectories.
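Purely as an illustration of transposing DBSCAN to trajectories, the sketch below clusters trajectories with a pairwise distance. The distance used here (mean pointwise Euclidean distance over the common prefix of two trajectories) is an assumption for the example, not the metric defined by the embodiment, and eps/min_pts are illustrative parameters.

```python
import math

# Assumed illustrative distance: mean pointwise Euclidean distance over the
# common prefix of two trajectories (the embodiment defines its own metric).
def traj_distance(a, b):
    n = min(len(a), len(b))
    return sum(math.dist(a[i], b[i]) for i in range(n)) / n

def dbscan_trajectories(trajs, eps, min_pts):
    """DBSCAN transposed to trajectories: returns a cluster id per
    trajectory, or -1 for noise."""
    def neighbors(i):
        return [j for j in range(len(trajs))
                if j != i and traj_distance(trajs[i], trajs[j]) <= eps]

    labels = [None] * len(trajs)
    cluster = 0
    for i in range(len(trajs)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) + 1 < min_pts:
            labels[i] = -1            # noise (may become a border point later)
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border trajectory, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) + 1 >= min_pts:  # core trajectory: expand the cluster
                seeds.extend(jn)
        cluster += 1
    return labels
```

Two bundles of nearby trajectories end up in two clusters, and an isolated trajectory is labeled -1 (noise).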
Then, for each cluster, we use these trajectories to compute a map for a “scene flow”, or group of trajectories. We first compute a mask of all pixels that contain a trajectory point or that sit on a line between two consecutive points. We dilate this mask by a radius r, using morphological dilation with a square structuring element.
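The dilation step can be sketched as follows. This pure-Python set-based dilation is a stand-in for a morphological dilation with a (2r+1)×(2r+1) square structuring element; a real implementation would typically operate on a binary image with an image-processing library.

```python
# Pure-Python stand-in for morphological dilation of a pixel mask by a
# square structuring element of radius r.

def dilate(mask, r):
    """Dilate a set of (x, y) pixels by a (2r+1) x (2r+1) square element."""
    out = set()
    for (x, y) in mask:
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                out.add((x + dx, y + dy))
    return out

print(len(dilate({(5, 5)}, 1)))  # a single pixel grows into a 3x3 block: 9
```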
For each point of this mask, we look for trajectory points at the same coordinates. We compute an average and a standard deviation over all these points for the following measures: mask surface, mask width, mask height, speed norm average, speed norm standard deviation, speed angle average (−90 to 90 degrees), and speed angle standard deviation.
Once the directly computed points are available, we go through all points in the mask that do not have a value and compute one by spatial interpolation. In one embodiment, bilinear interpolation is used.
3. Ad-hoc or advanced cluster classification: Once we have all these clusters, we need to determine what type of objects they contain in the majority. There are two approaches: the ad-hoc approach and the object class approach.
Object class approach: if the resolution of the image is good enough to run a cascade classifier such as described above, we can get an estimated class for each object of each trajectory. We then take the majority class over the mask of the flow as described above, and assume this majority class is the main class of the flow. We do this for each flow.
Ad-hoc approach: if we assume we have two classes, one being vehicles and one being people, we can assume that there are two size classes for flows in the image. We have to keep in mind that there can be strong perspective effects, so only neighboring points can be compared to see if the objects they contain are larger or smaller than each other. If points are too far apart, the perspective effect might outweigh actual object size. So for each flow, we consider all of its contour points. For each contour point, we look for immediate neighbors that are in other flows. If we find some that are closer than a distance D, we compare the average object dimensions between that point and its immediate neighbors. If we find enough examples where the starting point is much larger, by a factor T, than its neighbors, we tag the point as “large”. If we find enough points where the starting point is much smaller, by a factor S, we tag the starting point as “small”. Over one contour of a flow, if more than a given percentage P of points are “large”, we tag the flow as vehicles. If more than a given percentage Q of points are “small”, we tag the flow as pedestrians. Otherwise we leave it untagged.
4. Calibration map: Once all flows are tagged, we can start building the calibration map. Take all flows with a tag. For each point of the flow, consider the average surface, width and height, expressed in pixels, of an object.
Now consider the typical width, height and depth of a person or a vehicle. This can depend on geography (in some places average people are smaller, in some places vehicles are larger). We can compute a projection model where, based on this 3D model of an object and the observed motion, we estimate the expected width, height and surface of the projected object, expressed in meters.
Now for each point of the flow, we can compute the ratio of square meters per square pixels for an object. By taking the square root of that number and dividing it by sqrt(2.0) we get the estimated dimension of a pixel (width and height), expressed in meters.
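The meters-per-pixel computation described above can be sketched directly. The object surfaces in the example (a vehicle of about 8 m² of projected surface observed over 3200 pixels) are illustrative assumptions.

```python
import math

# Sketch of the calibration-map step: square meters per square pixel for an
# object, then the estimated pixel dimension in meters.

def pixel_dimension_m(object_surface_m2, object_surface_px2):
    ratio = object_surface_m2 / object_surface_px2  # m^2 per px^2
    return math.sqrt(ratio) / math.sqrt(2.0)        # pixel width/height in m

print(round(pixel_dimension_m(8.0, 3200.0), 4))  # ~0.0354 m per pixel
```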
A post-processing step to remove noise, such as a median filter, can be useful to clean up the signal. A variant is to also use the width and height, in pixels and in meters, to obtain two other estimates of the pixel dimensions, and then average or median-filter the three estimates.
5. Speed/dimensions estimation: Now that we have an estimate of the real-life dimensions of a pixel, we can estimate the real-life dimensions and speeds of objects. We just have to multiply the dimensions in pixels by the meters-per-pixel ratio of the calibration map. An alternative is to use geometric angles to fine-tune this estimate: if the angle is 45 degrees, the real-life speed is (speed in pixels/sec)×(meters/pixel ratio)×cos 45°. If the angle is 0 or 90 degrees, the real-life speed is (speed in pixels/sec)×(meters/pixel ratio).
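The speed conversion, with the optional angle correction, can be sketched as follows; the meters-per-pixel value used in the example is an illustrative assumption.

```python
import math

# Sketch of the speed conversion with the optional angle correction.

def real_speed(speed_px_per_s, m_per_px, angle_deg=None):
    v = speed_px_per_s * m_per_px
    if angle_deg is not None and angle_deg not in (0, 90):
        v *= math.cos(math.radians(angle_deg))
    return v

print(real_speed(100.0, 0.05))      # 5.0 m/s, no angle correction
print(real_speed(100.0, 0.05, 45))  # 5.0 * cos 45 deg, about 3.54 m/s
```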
Object Detection and Classification.
A background/moving object detection process (1) takes as input an image from a static camera. Process (1) outputs groups of foreground pixels and a dynamic model of the background.
A goal of this layer is to detect moving objects or moving parts in the image. It is based on estimating the background image of a fixed video stream, modeling each point using a Gaussian distribution of values on each channel of a color image, or on the amplitude of the combined channels. For color images, the value of each channel is modeled as a Gaussian distribution. The Gaussian model, defined by its first two moments (mean and variance), is created and stored in memory for each coordinate point of the image.
In order to determine if a pixel p is part of the background or foreground, we compute a normalized distance metric of a pixel to the background value as the linear combination of the normalized distance for each channel. A normalized distance for a channel is defined as the absolute distance from the pixel channel value to the background value, divided by its standard deviation.
d(i)=|v(i)−b(i)|/sigma(i)
where (i) is the channel index, v the current pixel channel value, b the background channel value and sigma the current estimate of the background channel value standard deviation.
D=d(0)+d(1)+d(2) if the image is a 3 channel color image.
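The normalized distance D for a 3-channel pixel can be written directly from the two formulas above; the pixel, background, and sigma values in the example are illustrative.

```python
# Per-pixel normalized distance to the background model for a 3-channel
# color image, following the formulas above.

def normalized_distance(pixel, background, sigma):
    return sum(abs(v - b) / s for v, b, s in zip(pixel, background, sigma))

D = normalized_distance((120, 80, 60), (100, 80, 70), (10.0, 5.0, 5.0))
print(D)  # 20/10 + 0/5 + 10/5 = 4.0
```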
As new frames arrive in the system, we first compute the normalized distance of this pixel to the current background Gaussian model. If the value is less than a first threshold T1, we consider the pixel as part of the background and update the Gaussian model for this coordinate point with the current pixel values.
If the value is greater than T1, we create a new model for a new background candidate. Things might have changed in the image, and we need a new background candidate to adjust to these changes. If some background candidates are already available, we first compute the distances of the current pixel to the other candidates. If any distance is less than T1, we update the best matching candidate (the one with the lowest distance) with the current value. If no match is found, we create a new candidate.
If a candidate was not updated for a given period of time S, we cancel the background candidate.
Each candidate has a lifetime span, that is equal to the time elapsed between its creation and its last update. The lifetime span cannot be greater than a parameter E called eternity.
LS=MIN(E, t(updated)−t(created)).
If any of the candidate backgrounds has a longer lifetime span than the current background, we cancel the current background value and replace it with the new, longer lifetime value. This helps the model adjust to rapid background changes.
If the distance metric is greater than a different factor T2, we mark the pixel as being part of the foreground.
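The candidate logic of the preceding paragraphs can be sketched per pixel as follows. This is a simplified stand-in: the Gaussian update is reduced to a running mean with a fixed standard deviation, the candidate timeout S is omitted for brevity, and the thresholds T1/T2 and SIGMA are illustrative assumptions.

```python
# Simplified per-pixel sketch of the background/candidate logic above.

T1, T2 = 2.0, 4.0
SIGMA = 10.0  # fixed stand-in for the per-channel standard deviation

class PixelModel:
    def __init__(self, value, t):
        self.background = {"mean": value, "created": t, "updated": t}
        self.candidates = []

    def _distance(self, model, value):
        return abs(value - model["mean"]) / SIGMA

    def _lifetime(self, model, eternity=1e9):
        return min(eternity, model["updated"] - model["created"])

    def observe(self, value, t):
        """Return True if the pixel is foreground at time t."""
        if self._distance(self.background, value) < T1:
            self.background["mean"] = 0.9 * self.background["mean"] + 0.1 * value
            self.background["updated"] = t
            return False
        best = min(self.candidates, key=lambda c: self._distance(c, value),
                   default=None)
        if best is not None and self._distance(best, value) < T1:
            best["mean"] = 0.9 * best["mean"] + 0.1 * value
            best["updated"] = t
            if self._lifetime(best) > self._lifetime(self.background):
                self.background = best   # promote the longer-lived candidate
                self.candidates.remove(best)
        else:
            self.candidates.append({"mean": value, "created": t, "updated": t})
        return self._distance(self.background, value) > T2
```

A pixel that jumps from 100 to 200 is first flagged as foreground; once the 200-valued candidate outlives the old background, it is promoted and the pixel is treated as background again.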
In order to adjust the Gaussian values to potential changes in the mean and standard deviation, we estimate all Gaussian model values over overlapping time windows. To reduce the complexity of computing all values over moving averages, we use a half-overlapping scheme. If M is the minimum window size (number of samples) over which we want to estimate Gaussian models, we constantly store two sets of overlapping sums and square sums: the current sum set and the future sum set. Each set stores the number of samples, the sum of values, and the sum of squared values, from which mean and variance are computed. When the current set reaches M samples, we reset the future set and start updating it with each new frame. When the current set reaches M*2 samples, the future set has reached M samples. We then copy the future set values into the current set, and reset the future set. This way, at any point in time after the first M samples, we always have an estimation of the Gaussian model based on more than M samples, adjusted over time windows of M*2 samples. M is typically set to values ranging from 10 to 1000 depending on the application and video frame rate. As a result, outside of the starting period where fewer than M samples have been processed in total, all Gaussian model estimates rely on at least M samples.
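A minimal sketch of the half-overlapping scheme, keeping (count, sum, sum of squares) in a current and a future set; M and the sample values are illustrative.

```python
# Sketch of the half-overlapping window scheme: two accumulators of
# (count, sum, sum of squares); when the current set reaches 2*M samples,
# the future set holds M samples and takes over.

class HalfOverlappingGaussian:
    def __init__(self, m):
        self.m = m
        self.cur = [0, 0.0, 0.0]  # count, sum, sum of squares
        self.fut = [0, 0.0, 0.0]

    def add(self, x):
        targets = [self.cur]
        if self.cur[0] >= self.m:   # future set starts once cur has M samples
            targets.append(self.fut)
        for s in targets:
            s[0] += 1
            s[1] += x
            s[2] += x * x
        if self.cur[0] >= 2 * self.m:  # swap: future becomes current
            self.cur, self.fut = self.fut, [0, 0.0, 0.0]

    def mean(self):
        return self.cur[1] / self.cur[0]

    def variance(self):
        n, s, sq = self.cur
        return sq / n - (s / n) ** 2

g = HalfOverlappingGaussian(2)
for x in [1.0, 2.0, 3.0, 4.0]:
    g.add(x)
print(g.mean(), g.variance())  # estimated over the last M..2M samples
```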
In order to reduce computation cost, we can subsample the image spatially by a factor P. We can also subsample the time reference by another factor Q, updating the background statistics only once every Q frames. This significantly reduces the number of operations needed. However, the foreground estimation cannot be subsampled, so complexity is only reduced for the background estimation. This makes the algorithm run in real time whatever the dimensions or frame rate of a video. The CPU occupancy of such a process is controlled and defined by these two parameters. This is a unique way to linearly adjust algorithm reactivity and accuracy based on available or desired computation power.
An object pixel classification process (2) takes as input groups of foreground pixels. The output is one or more objects per group with an associated class.
The goal of this layer is to classify the foreground output of process (1) above into classes of known objects or “noise”. In an embodiment, a customized version of the AdaBoost cascade approach is used.
Once we have detected all moving objects, we classify them using a classic supervised learning approach, based on the AdaBoost cascade classification (described in P. Viola and M. Jones: Robust Real-Time Face Detection, ICCV 2001).
Embodiments of the method train, test and run the algorithm only on moving objects, thereby reducing the possibilities and variety of the training and input sets of images. In short, our classification scheme only needs to distinguish moving urban objects from each other, as opposed to recognizing one type of object from any other possible matrix of pixels.
This step also helps separate groups or aggregates in some cases: if a car and pedestrians are close to each other and detected as the same object, we will on many occasions be able to detect them separately, thus splitting the original object into two separate objects.
An object tracking process (3) takes as input an instance of one object at one point in time, with or without associated class. The output is a linked appearance of the same objects at different times, with trajectory and shape over time.
The goal of this layer is to connect the occurrence of the same object in consecutive frames so as to understand the object's movement in the image.
At each new frame, we try to match new foreground objects with existing, connected objects tracked in prior iterations, or if no match is found, we create a new object. We use a combination of shape, predicted position based on previous motion, and pixel content, to do the matching.
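The matching step above can be sketched as follows, reduced to nearest predicted position with a gating distance; the combination with shape and pixel content described above is omitted for brevity, and the gating threshold is an illustrative assumption.

```python
import math

# Sketch of the frame-to-frame matching step, using only predicted position.

def match_detections(tracks, detections, max_dist=50.0):
    """tracks: {track_id: (pred_x, pred_y)}; detections: [(x, y), ...].
    Returns (assignments {detection_index: track_id}, new_object_indices)."""
    assignments, new_objects = {}, []
    free = dict(tracks)
    for i, det in enumerate(detections):
        best_id, best_d = None, max_dist
        for tid, pred in free.items():
            d = math.dist(det, pred)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            new_objects.append(i)      # no match found: create a new object
        else:
            assignments[i] = best_id
            del free[best_id]          # each track matches at most once
    return assignments, new_objects
```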
An object trajectory analysis and classification process (4) takes as input objects with trajectories, and outputs high level information on objects.
The goal of this layer is to use the type and trajectory information to detect higher-level, human-readable data such as vehicle or pedestrian speed, and people entering or exiting a location. We can also infer building layouts based on the traffic flows of pedestrians and vehicles.
Using the same algorithms as the ones described above, we have trajectory information regarding different types of elements of urban scenes. For this data analysis, the focus is on pedestrians, vehicles, public transportation vehicles, and bicycles. By adding up all trajectories in a map of the video feed (pixels corresponding to the video feed pixels) we can build a map of where each type of element moves and appears in the image. For pedestrians, these areas will be mostly sidewalks, crossroads, parks, and plazas. For vehicles, they will be roads. For bicycles, they will be bicycle lanes. If both vehicles and pedestrians are detected at non-overlapping times, these areas will be crosswalks. This map will be quite accurate even if the detection and classification of objects is not very accurate. Embodiments typically work with detectors that have a false positive rate below 10% and a detection rate above 50%.
Using this approach a map of the image is built automatically, thus improving classification results. Detection scores can be post-processed based on this map: a pedestrian detection score in a sidewalk area is increased, and decreased in non-classified areas (buildings, sky, etc.). Alternatively, algorithm parameters can be adjusted based on image location, a prior optimization approach as opposed to the posterior approach described right before.
Automated Detection of Building Entrances Using Trajectories: Areas Where Many Trajectories Start or End Are Likely to Be Building Entrances.
The recognition and tracking algorithms described above are able to detect, recognize and track the trajectories of pedestrians, vehicles, and other types of urban elements. Based on the output of this algorithm, we are able to detect starting points and ending points of people or vehicles. We can detect areas of the video streams where more people or vehicles start or end their trajectories over long periods of time—typically 24 hours are required. These areas, when not on the border of the video streams, are areas where these elements appear or disappear.
Some of these areas will be areas of occlusion (a tree canopy, a large object hiding the view, etc.). In such cases there are clear borders to the start and end points of trajectories, and no trajectory will start or end, at all, where the occlusion is.
In cases where trajectories appear or disappear in a more scattered and distributed way, we are probably seeing a building entrance.
In order to automatically detect building entrances or exits, we represent all starting or ending points of trajectories on a map of the video stream. Then we run a local window analysis of the geographic distribution of these points. We can use moments, simple cross-point distances, or principal component analysis. Moments and distances have proven to be strong indicators of building entrances. This is extremely valuable for detecting buildings automatically, but also for counting people coming in and out. Every trajectory starting in that entrance area counts as one person exiting the building; every trajectory ending there counts as a person entering the building. By counting these entrances and exits continuously, and statistically correcting the numbers for detection errors, we can obtain a real-time count of occupancy and traffic in a given location. This is valid for people, cars, and any type of vehicle.
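The local window analysis can be approximated with a simple grid-count sketch: accumulate trajectory start/end points per cell and flag dense interior cells as candidate entrances. The cell size and count threshold are illustrative assumptions, and the moment/PCA analysis is omitted for brevity.

```python
# Grid-count sketch of the local window analysis over trajectory endpoints.

def entrance_cells(endpoints, width, height, cell=50, min_count=5):
    counts = {}
    for (x, y) in endpoints:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    max_cx, max_cy = width // cell - 1, height // cell - 1
    # keep dense cells that are not on the border of the video stream
    return [k for k, n in counts.items()
            if n >= min_count and 0 < k[0] < max_cx and 0 < k[1] < max_cy]
```

Six endpoints clustered around pixel (120, 120) in a 500×500 frame flag cell (2, 2) as a candidate entrance, while two endpoints in the border cell (0, 0) are ignored.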
Aspects of the systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). Some other possibilities for implementing aspects of the system include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the system may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.
It should be noted that the various functions or processes disclosed herein may be described as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of components and/or processes under the system described may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.
The above description of illustrated embodiments of the systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific embodiments of, and examples for, the systems components and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems, components and methods, as those skilled in the relevant art will recognize. The teachings of the systems and methods provided herein can be applied to other processing systems and methods, not only for the systems and methods described above.
The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the systems and methods in light of the above detailed description.
In general, in the following claims, the terms used should not be construed to limit the systems and methods to the specific embodiments disclosed in the specification and the claims, but should be construed to include all processing systems that operate under the claims. Accordingly, the systems and methods are not limited by the disclosure, but instead the scope of the systems and methods is to be determined entirely by the claims.
While certain aspects of the systems and methods are presented below in certain claim forms, the inventors contemplate the various aspects of the systems and methods in any number of claim forms. For example, while only one aspect of the systems and methods may be recited as embodied in machine-readable medium, other aspects may likewise be embodied in machine-readable medium. Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the systems and methods.
This application is a divisional application of U.S. patent application Ser. No. 15/134,245, filed Apr. 20, 2016. U.S. patent application Ser. No. 15/134,245 claims priority from the following U.S. Provisional Applications: No. 62/150,623, filed Apr. 21, 2015; 62/150,629, filed Apr. 21, 2015; 62/150,646, filed Apr. 21, 2015; 62/150,654, filed Apr. 21, 2015; 62/150,667, filed Apr. 21, 2015; and 62/150,692, filed Apr. 21, 2015. This application is also a continuation-in-part of U.S. patent application Ser. No. 14/727,321, filed Jun. 1, 2015 and a continuation-in-part of U.S. patent application Ser. No. 15/078,611, filed Mar. 23, 2016.
Number | Name | Date | Kind |
---|---|---|---|
5194908 | Lougheed | Mar 1993 | A |
6295321 | Lyu | Sep 2001 | B1 |
6587574 | Jeannin | Jul 2003 | B1 |
6987883 | Lipton et al. | Jan 2006 | B2 |
6999600 | Venetianer et al. | Feb 2006 | B2 |
7006128 | Xie | Feb 2006 | B2 |
7199798 | Echigo | Apr 2007 | B1 |
7221367 | Cardno | May 2007 | B2 |
7224852 | Lipton et al. | May 2007 | B2 |
7382244 | Donovan | Jun 2008 | B1 |
7391907 | Venetianer et al. | Jun 2008 | B1 |
7424175 | Lipton et al. | Sep 2008 | B2 |
7574043 | Porkili | Aug 2009 | B2 |
7613322 | Yin et al. | Nov 2009 | B2 |
7688349 | Flickner et al. | Mar 2010 | B2 |
7796780 | Lipton et al. | Sep 2010 | B2 |
7801330 | Zhang et al. | Sep 2010 | B2 |
7868912 | Venetianer et al. | Jan 2011 | B2 |
7932923 | Lipton et al. | Apr 2011 | B2 |
8325036 | Fuhr | Dec 2012 | B1 |
8331619 | Ikenoue | Dec 2012 | B2 |
8340349 | Salgian | Dec 2012 | B2 |
8340654 | Bratton et al. | Dec 2012 | B2 |
8369399 | Egnal et al. | Feb 2013 | B2 |
8401229 | Hassan-Shafique et al. | Mar 2013 | B2 |
8457401 | Lipton et al. | Jun 2013 | B2 |
8526678 | Liu et al. | Sep 2013 | B2 |
8564661 | Lipton et al. | Oct 2013 | B2 |
8582803 | Ding | Nov 2013 | B2 |
8594482 | Fan | Nov 2013 | B2 |
8599266 | Trivedi | Dec 2013 | B2 |
8625905 | Schmidt | Jan 2014 | B2 |
8649594 | Hua | Feb 2014 | B1 |
8654197 | Nizko | Feb 2014 | B2 |
8655016 | Brown | Feb 2014 | B2 |
8711217 | Venetianer et al. | Apr 2014 | B2 |
8823804 | Haering et al. | Sep 2014 | B2 |
8948458 | Hassan-Shafique et al. | Feb 2015 | B2 |
9213781 | Winter | Dec 2015 | B1 |
20020051057 | Yata | May 2002 | A1 |
20020124263 | Yokomizo | Sep 2002 | A1 |
20030053692 | Hong | Mar 2003 | A1 |
20030090751 | Itokawa | May 2003 | A1 |
20030215110 | Rhoads | Nov 2003 | A1 |
20040022227 | Lynch | Feb 2004 | A1 |
20040151342 | Venetianer | Aug 2004 | A1 |
20040161133 | Elazar | Aug 2004 | A1 |
20050002572 | Saptharishi | Jan 2005 | A1 |
20050169531 | Fan | Aug 2005 | A1 |
20050185058 | Sablak | Aug 2005 | A1 |
20050213836 | Hamilton | Sep 2005 | A1 |
20060007308 | Ide | Jan 2006 | A1 |
20060031291 | Beckemeyer | Feb 2006 | A1 |
20060078047 | Shu | Apr 2006 | A1 |
20060095539 | Renkis | May 2006 | A1 |
20060198608 | Girardi | Sep 2006 | A1 |
20060227862 | Campbell | Oct 2006 | A1 |
20060233535 | Honda | Oct 2006 | A1 |
20060290779 | Reverte | Dec 2006 | A1 |
20070024705 | Richter | Feb 2007 | A1 |
20070024706 | Brannon | Feb 2007 | A1 |
20070071403 | Urita | Mar 2007 | A1 |
20070016345 | Peterson | May 2007 | A1 |
20070127508 | Bahr | Jun 2007 | A1 |
20070127774 | Zhang | Jun 2007 | A1 |
20070147690 | Ishiwata | Jun 2007 | A1 |
20070177792 | Ma | Aug 2007 | A1 |
20070177800 | Connell | Aug 2007 | A1 |
20080030429 | Hailpern | Feb 2008 | A1 |
20080123955 | Yeh | May 2008 | A1 |
20080137950 | Park | Jun 2008 | A1 |
20080152122 | Idan | Jun 2008 | A1 |
20080263012 | Jones | Oct 2008 | A1 |
20080281518 | Dozier | Nov 2008 | A1 |
20080316327 | Steinberg | Dec 2008 | A1 |
20080316328 | Steinberg | Dec 2008 | A1 |
20090033745 | Yeredor | Feb 2009 | A1 |
20090034846 | Senior | Feb 2009 | A1 |
20090063205 | Shibasaki | Mar 2009 | A1 |
20090080864 | Rajakarunanayake | Mar 2009 | A1 |
20090103812 | Diggins | Apr 2009 | A1 |
20090141939 | Chambers | Jun 2009 | A1 |
20090147991 | Chau | Jun 2009 | A1 |
20090222388 | Hua | Sep 2009 | A1 |
20090268968 | Milov | Oct 2009 | A1 |
20090290023 | Lefort | Nov 2009 | A1 |
20100014717 | Rosenkrantz | Jan 2010 | A1 |
20100142927 | Lim | Jun 2010 | A1 |
20100177194 | Huang | Jul 2010 | A1 |
20100211304 | Hwang | Aug 2010 | A1 |
20100260385 | Chau | Oct 2010 | A1 |
20100290710 | Gagvani | Nov 2010 | A1 |
20100295999 | Li | Nov 2010 | A1 |
20100302346 | Huang | Dec 2010 | A1 |
20110007944 | Atrazhev | Jan 2011 | A1 |
20110018998 | Guzik | Jan 2011 | A1 |
20110125593 | Wright | May 2011 | A1 |
20110130905 | Mayer | Jun 2011 | A1 |
20110141227 | Bigiol | Jun 2011 | A1 |
20110152645 | Hoover | Jun 2011 | A1 |
20110153645 | Hoover | Jun 2011 | A1 |
20110184307 | Hulin | Jul 2011 | A1 |
20110280547 | Fan | Nov 2011 | A1 |
20110310970 | Lee | Dec 2011 | A1 |
20120008819 | Ding | Jan 2012 | A1 |
20120057640 | Shi | Mar 2012 | A1 |
20120075450 | Ding | Mar 2012 | A1 |
20120086568 | Scott | Apr 2012 | A1 |
20120106782 | Nathan | May 2012 | A1 |
20120127262 | Wu | May 2012 | A1 |
20120134535 | Pai | May 2012 | A1 |
20120179832 | Dolph | Jul 2012 | A1 |
20120182392 | Kearns | Jul 2012 | A1 |
20120262583 | Bernal | Oct 2012 | A1 |
20120304805 | Kim | Nov 2012 | A1 |
20130058537 | Chertok | Mar 2013 | A1 |
20130086389 | Suwald | Apr 2013 | A1 |
20130148848 | Lee | Jun 2013 | A1 |
20130166711 | Wang | Jun 2013 | A1 |
20130202165 | Wehnes | Aug 2013 | A1 |
20130251216 | Smowton | Sep 2013 | A1 |
20140003708 | Datta | Jan 2014 | A1 |
20140015846 | Campbell | Jan 2014 | A1 |
20140036090 | Black | Feb 2014 | A1 |
20140046588 | Maezawa | Feb 2014 | A1 |
20140052640 | Pitroda | Feb 2014 | A1 |
20140129596 | Howe | May 2014 | A1 |
20140132728 | Verano | May 2014 | A1 |
20140212002 | Curcio | Jul 2014 | A1 |
20140278068 | Stompolos | Sep 2014 | A1 |
20140294078 | Seregin | Oct 2014 | A1 |
20140307056 | Collet | Oct 2014 | A1 |
20140359576 | Rath | Dec 2014 | A1 |
20150006263 | Heier | Jan 2015 | A1 |
20150046127 | Chen | Feb 2015 | A1 |
20150070506 | Chattapadhyay | Mar 2015 | A1 |
20150077218 | Chakkaew | Mar 2015 | A1 |
20150138332 | Cheng | May 2015 | A1 |
20150227774 | Balch | Aug 2015 | A1 |
20150339532 | Sharma | Nov 2015 | A1 |
20150348398 | Williamson | Dec 2015 | A1 |
20150350608 | Winter | Dec 2015 | A1 |
20160012465 | Sharp | Jan 2016 | A1 |
20160019427 | Martin | Jan 2016 | A1 |
20160088222 | Jenny | Mar 2016 | A1 |
20160314353 | Winter | Oct 2016 | A1 |
20160334927 | Kim | Nov 2016 | A1 |
20170068858 | Winter | Mar 2017 | A1 |
20170070707 | Winter | Mar 2017 | A1 |
20170277956 | Winter | Sep 2017 | A1 |
20170277959 | Winter | Sep 2017 | A1 |
20180046315 | Kim | Feb 2018 | A1 |
20180165813 | Mai | Jun 2018 | A1 |
Number | Date | Country |
---|---|---|
101489121 | Jul 2009 | CN |
2015184440 | Dec 2015 | WO |
Entry |
---|
Parameswaran et al. “Design and Validation of a System for People Queue Statistics Estimation”, Jan. 2012, Springer, Video Analytics for Business Intelligence, p. 355-373. |
Makris et al., “Learning Semantic Scene Models From Observing Activity in Visual Surveillance”, Jun. 2005, IEEE, Trans. on Systems, Man, and Cybernetics—part B: Cybernetics, vol. 35, No. 3, p. 397-408. |
Estevez-Ayres et al., “Using Android Smartphones in a Service-Oriented Video Surveillance System”, Jan. 2011, IEEE Int. Conf. on Consumer Electronics, 2011, p. 887-888. |
Foresti et al., “Event Classification for Automatic Visual-based Surveillance of Parking Lots”, Aug. 2004, IEEE, Proceedings of the 17th Int. Conf. on Pattern Recognition, p. 1-4. |
Yang et al., “Multi-Target Tracking by Online Learning of Non-linear Motion Patterns and Robust Appearance Models”, Jun. 2012, IEEE, 2012 IEEE Conf. on Computer Vision and Pattern Recognition, p. 1918-1925. |
Magee, “Tracking multiple vehicle using foreground, background and motion models”, Feb. 2004., Elsevier, Image and Vision Computing, vol. 22, iss. 2, p. 143-155. |
Morris et al., “A Survey of Vision-Based Trajectory Learning and Analysis for Surveillance”, Aug. 2008, IEEE, IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, No. 8, p. 1114-1127. |
Borges, “Pedestrian Detection Based on Blob Motion Statistics”, Feb. 2013, IEEE, IEEE Trans. on Circuits and Systems for Video Technology, vol. 23, No. 2, p. 224-235. (Year: 2013). |
Miller, “Supervised Learning and Bayesian Classification”, Sep. 2011, University of Massachusets Amherst, CS370: Introduction to Computer Vision (<https://people.cs.umass.edu/˜elm/Teaching/370_S11/>), <https://people.cs.umass.edu/˜elm/Teaching/Docs/supervised.pdf>, p. 1-8. (Year: 2011). |
Franconeri et al., “A simple proximity heuristic allows tracking of multiple objects through occlusion”, Jan. 2012, Psychonomic Society, Attention, Perception, & Psychophysics, vol. 74, iss. 4, p. 691-702. (Year: 2012). |
Stephen et al., “A visual tracking system for the measurement of dynamic structural displacements”, Aug. 1991, Wiley, Concurrency and Computation: Practice and Experience, vol. 3, iss. 4, p. 357-366. (Year: 1991). |
Lefloch et al., “Real-time people counting system using a single video camera”, Feb. 2008, SPIE, Real-Time Image Processing 2008, Proc. SPIE, vol. 6811, p. 681109-1-681109-12. (Year: 2008). |
Cheung et al., “Robust techniques for background subtraction in urban traffic video”, Jan. 2004, SPIE< Visual Communications and Image Processing 2004, Proc. SPIE vol. 5308, p. 881-892. (Year: 2004). |
Wang et al., “A Novel Robust Statistical Method for Background Initialization and Visual Surveillance”, 2006, Springer, Computer Vision—ACCV 2006, LNCS 3851, p. 328-337. (Year: 2006). |
Xia et al., “A modified Gaussian mixture background model via spatiotemporal distribution with shadow detection”, Jan. 2015, Springer, Signal, Image and Video Processing, vol. 10, p. 343-350. (Year: 2015). |
Barnich et al., “Vibe: A Powerful Random Technique to Estimate the Background in Video Sequences”, Apr. 2009, 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, p. 945-948. (Year: 2009). |
H. Bannour, L. Hlaoua and B. Ayes, Survey of the Adequate Descriptor for Content-Based Image Retrieval on the Web: Global versus Local Features, Department of Information Sciences. |
E. Baudrier, G. Millon, F. Nicolier and S. Ruan, A Fast Binary-Image Comparison Method with Local-dissimilarity Quantification, Labratory Crestic, Troyes Cedex, France. |
D. Lisin, M. Mattar, M. Blaschko, M. Benfield and E. Learned-Miller, Combining Local and Global Image Features for Object Class Recognition, Computer Vision Lab, Department of Computer Science, University of Massachusetts. |
L. Paninski, Estimation of Entropy andMutual Information, Center for Neural Science, New York University, New York, NY 10003, U.S.A., accepted Nov. 27, 2002. |
Machado, D. People Counting Sytem using Existing Surveillance Video Camera: Nov. 2011, pp. 1-71. |
Terada, K. “A method of counting the passing people by using the stereo images” Image Processing, 1999, ICIP 99. Proceedings. 1999 International Conference, pp. 1-5. |
International Search Report and Written Opinion in PCT/US16/28511. |
International Search Report and Written Opinion in PCT/US16/28516. |
Machado, D., “People Counting System Using Existing Surveillance Video Camera” Nov. 2011, pp. 1-71. |
Lefloch, D., “Real-Time People Counting System Using Video Camera” Master of Computer Science, Image and Artificial Intelligence 2007 at UFR Sciences et Technique, pp. 1-5. |
Parameswaran et al., “Design and Validation of a System for People Queue Statistics Estimation”, Jan. 2012, Springer, Video Analytics for Business Intelligence, p. 355-373. |
Magee, “Tracking Multiple Vehicles Using Foreground, Background and Motion Models”, Feb. 2004, Elsevier, Image and Vision Computing, vol. 22, issue 2, p. 143-155. |
Stauffer et al., “Adaptive Background Mixture Models for Real-Time Tracking”, Jun. 1999, IEEE, 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Proceedings, p. 246-252. |
Xu et al., “Partial Observation vs. Blind Tracking through Occlusion”, Jan. 2003, British Machine Vision Conference (BMVC 2002), p. 777-786. |
Virtual Turnstile | Security Systems Technology | www.sstgroup.co.uk/solutions/access-control/virtual-turnstile pp. 1-2. |
Amended Claims of Related U.S. Appl. No. 15/288,085, submitted Jun. 13, 2018. |
Amended Claims of Related U.S. Appl. No. 15/288,224, submitted May 9, 2018. |
Placemeter Inc., PCT/US2015/033499 Application, “Notification of Transmittal of the International Search Report and the Written Opinion”, dated Oct. 28, 2015. |
Placemeter Inc., PCT/US2016/025816 Application, “Notification Concerning Transmittal of International Preliminary Report on Patentability”, dated Nov. 2, 2017. |
Placemeter Inc., PCT/US2016/025816 Application, “Notification of Transmittal of the International Search Report and the Written Opinion”, dated Jul. 11, 201. |
Placemeter Inc., PCT/US2016/028511 Application, “Notification Concerning Transmittal of International Preliminary Report on Patentability”, dated Nov. 2, 2017. |
Placemeter Inc., PCT/US2016/028511 Application, “Notification of Transmittal of the International Search Report and the Written Opinion”, dated Sep. 14, 2016. |
Placemeter Inc., EP 15798996.3 Application, “Communication Pursuant to Rule 164(1) EPC—Partial Supplementary European Search Report”, dated Nov. 27, 2017. |
Placemeter Inc., EP 15798996.3 Application, “Communication—Extended European Search Report”, dated Feb. 6, 2018. |
Almomani, R., et al., “Segtrack: A Novel Tracking System with Improved Object Segmentation”, IEEE 2013, Wayne State University, Department of Computer Science, Detroit, MI 48202, ICIP 2013, p. 3939-3943. |
Lucas, B., et al., “An Iterative Image Registration Technique with an Application to Stereo Vision”, Proceedings of Imaging Understanding Workshop 1981, Computer Science Department, Carnegie-Mellon University, p. 121-130. |
Russakoff, D., et al., “Image Similarity Using Mutual Information of Regions”, Dept. of Computer Science, Stanford University, ECCV 2004, LNCS 3023, Springer-Verlag, Berlin Heidelberg 2004, pp. 596-607. |
Shi, J., et al., “Good Features to Track”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR94), Seattle, Jun. 1994. |
Studholme, C., et al., “An Overlap Invariant Entropy Measure of 3D Medical Image Alignment”, 1999 Pattern Recognition Society, Published by Elsevier Sciences Ltd., Pattern Recognition 31 (1999), p. 71-86. |
Viola, P., et al., “Robust Real-Time Face Detection”, 2004 International Journal of Computer Vision 57(2), Kluwer Academic Publishers, Netherlands, p. 137-154. |
Ming Xu et al.; “Illumination-Invariant Motion Detection Using Colour Mixture Models”; Department of Electrical, Electronic and Information Engineering; City University, London EC1V 0HB; EPSRC under grant No. GR/M58030; BMVC 2001 doi:10.5244/C.15.18; pp. 163-172. |
Zakir Hussain et al.; “Moving Object Detection Based on Background Subtraction & Frame Differencing Technique”; IJARCCE—International Journal of Advanced Research in Computer and Communication Engineering; vol. 5, Issue 5, May 2016; pp. 817-819—(3) pages. |
Number | Date | Country
---|---|---
20170277956 A1 | Sep 2017 | US

 | Number | Date | Country
---|---|---|---
Parent | 15134245 | Apr 2016 | US
Child | 15288438 |  | US

 | Number | Date | Country
---|---|---|---
Parent | 15078611 | Mar 2016 | US
Child | 15134245 |  | US