The present invention relates generally to tracking systems. More particularly, the present invention relates to a system and method for providing real-time tracking.
The current systems and methods used for tracking can be classified into two categories: 1) Intra-Frame Processing (IntaF), which tracks individual crowd motions and behaviors within a sensor frame; and 2) Inter-Frame Processing (InteF), which performs anomaly tracking to understand crowd behaviors and individual motion patterns from frame to frame and analyzes trajectories to model normal and abnormal crowd behaviors. To accomplish these two aims, several methods, such as optical flow, the social force model, particle advection, hidden Markov models, artificial neural networks, and support vector machines, have been developed to establish frame trajectories and a crowd motion model that distinguishes between normal and abnormal crowd behaviors.
Some of the challenges and drawbacks found in these current methods and systems include: a) defining boundaries between normal and anomalous patterns and behavior is challenging, and a learning process is needed to separate them; b) the anomaly type differs across applications; c) labeled data for training and validation is difficult to obtain; d) false positives in anomaly detection increase dramatically when the data contains noise; e) normal patterns and behavior can change over time; f) if the camera capturing video is not stationary, most of the above methods cannot model crowd behavior; g) most of the current methods are designed for daytime use and do not work at night; and h) most of the existing methods are computationally expensive, need prior training, and are not designed for real-time applications or for embedding in an integrated system for carry-on use.
Accordingly, there is a need in the art for a method that allows for unsupervised, automatic, real-time detection of moving objects and anomalies from stationary and non-stationary sensors.
The foregoing needs are met, to a great extent, by a system for detection of an object including a source of image data, wherein said image data comprises a frame. The system also includes a real-time manifold learning (RML) system disposed on a fixed computer readable medium. The RML includes a first subsystem configured to provide prediction of motion patterns intra-frame, such that the object is detected moving within the frame. Additionally, the RML includes a second subsystem configured to provide prediction of motion patterns inter-frame, such that changes over time in a scene contained in the image data are predicted.
In accordance with an aspect of the present invention, a method for real-time tracking includes obtaining K sample frames by collecting a current frame (F(i)) and K−1 uniformly sampled frames from frame J to the current frame (i), where J=i−B. The method also includes applying nonlinear dimensional reduction to map the K sample frames, forming a manifold KSM(i), to a 2D embedded space and calculating the distance between the start and end points of the manifold to predict changes in the current frame compared to the past. Additionally, the method includes storing the calculated distance in array T as the ith value of T.
In accordance with another aspect of the present invention, a method for detecting an object includes obtaining image data, wherein said image data comprises a frame and performing a moving objects detection to find the object in the frame. The method also includes performing a pattern recognition to classify the object and executing incremental manifold learning on the image data. Additionally, the method includes processing the image data with a trackogram protocol and assessing data from the pattern recognition and trackogram protocol in a rule-based decision making.
The accompanying drawings provide visual representations, which will be used to more fully describe the representative embodiments disclosed herein and can be used by those skilled in the art to better understand them and their inherent advantages. In these drawings, like reference numerals identify corresponding elements and:
The presently disclosed subject matter now will be described more fully hereinafter with reference to the accompanying Drawings, in which some, but not all embodiments of the inventions are shown. Like numbers refer to like elements throughout. The presently disclosed subject matter may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Indeed, many modifications and other embodiments of the presently disclosed subject matter set forth herein will come to mind to one skilled in the art to which the presently disclosed subject matter pertains having the benefit of the teachings presented in the foregoing descriptions and the associated Drawings. Therefore, it is to be understood that the presently disclosed subject matter is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.
The present invention is directed to a system, hereinafter referred to as MovA, and methods therefor, where MovA utilizes an embodiment of the present invention, a real-time manifold learning (RML) system. The RML and its method of use are capable of being integrated into different devices, including but not limited to night vision cameras, drones and security cameras. The RML is suitable for real-time applications and can handle both small-scale and large-scale imagery data without the need to save all prior image data. Importantly, it only needs new data when the data becomes available, due to the RML's incremental learning of the motion pattern over time for anomaly detection. The incremental learning option of the RML is activated only when there is a need to capture videos and analyze object behavior over a long recording, where the image data grows over time. It is, therefore, a preferred embodiment of the present invention to have a system that is capable of incremental learning with no need for saving and/or processing prior data. This feature is essential for real-time anomaly tracking as per the present invention, as opposed to previous systems that are based on supervised machine learning techniques with a need to use prior data and/or human labeling of anomalies for some previous data. In another preferred embodiment of the present invention, the RML allows for unsupervised automatic detection of moving objects and anomalies in both day and night conditions.
More particularly, MovA is an unsupervised, automatic method based on non-linear methods for anomalous object detection from stationary and/or non-stationary sensors (e.g., cameras) and can be generalized to cover many scenarios. It comprises two “sub-systems” that enable the prediction and graphing of a motion pattern at the inter-frame and intra-frame levels in real time (no off-line processing), with the added ability to track moving objects in different frames. An example of an anomaly consists of a biker or skater going through a walking crowd or people “escaping” (see
As shown in
The method and system include a real-time manifold distance learning (MDL) system that can Detect, Track, Identify and Locate (DTTIL) objects. Second, a novel incremental non-linear dimensional reduction method (iNLDR) is also included. Finally, a preliminary demonstration of the method using very different input data sets is provided to illustrate the usefulness of MovA. The MDL methods can be integrated using embedded systems into remote devices, such as NV goggles, drones and security apparatus. The developed MDL has a reduced computational load, which makes it suitable for real-time applications, and the method can handle both small-scale and large-scale imagery data without the need to save all prior image data. MovA only needs new data when it becomes available for its incremental NLDR learning of the motion pattern over time. An incremental learning option can also be implemented for MDL, which can be activated for the InteF sub-system in situations where the user needs to capture videos and analyze object behavior from long-time recordings. Moreover, this system can be integrated with other novel detection systems, such as event-based imagers, to further reduce the data to be processed and minimize the required communication bandwidth between the imager and the embedded processing system.
This is an advantage, since most object DTTIL methods are based on supervised machine learning techniques and require some means by which to label or train the system, which could be difficult if quick decisions are required from the user. In addition, some methods rely on long-term data recording, which increases the data load. These drawbacks can dramatically reduce applicability to real-time applications. However, MovA overcomes these drawbacks with the iNLDR method, where both sub-systems can automatically DTTIL moving objects in both day and night conditions to alert the user to the need for action, if necessary.
The proposed system can provide critical capabilities for several military operational scenarios. For example, being able to detect multiple objects and identify them would lead to a potentially greater probability of hitting a high value target and reduce collateral damage. These systems could be used to reduce the clutter in NV goggles and highlight salient objects that would be defined for targeting. Moreover, this application of MovA could also be applied to detecting high value targets or anomalous movers in hyperspectral images or hyperspectral video streams.
iNLDR is a method that maps each image (frame) to a point in an embedded 2D space. This is accomplished using a novel unsupervised non-linear mathematical algorithm. Moreover, for ease of interpretation, a new model to visualize this data and generate two- or three-dimensional embedded maps can be used, according to the present invention, in which the most salient structures hidden in the high-dimensional data appear prominently. This allows the user to visualize factors not visible to the human observer, such as unknown characteristics between imaging datasets and other factors (see
However, when plotted in a high-dimensional space and reduced using the model, prominent related structures hidden in the high-dimensional space are revealed. Indeed, the embedded space is a unified description that captures both the appearance and the dynamics of the visual processes of the objects under interrogation. Moving into higher dimensions allows better separation of the different manifolds and better delineation of the differences in geodesic distances between manifolds, which suggests improved object detection and identification. Moreover, the iNLDR approach allows adaptation to subtle changes that current algorithms cannot detect.
For example, most standard probability-based identification methods can fail if the dimensionality is large or the training data set has some bias. In addition, current popular machine learning approaches, such as Support Vector Machines (SVM), need input parameters (such as kernel selection or the scale for radial basis function kernels) to obtain the correct hyperplane boundaries. These potential problems can be overcome using the MovA system. iNLDR is a modified version of the mathematical non-linear maps named Isomap, diffusion maps (DfM) and locally linear embedding (LLE). They have been modified for improved usability in real-time applications and incremental data mining. Compared to existing nonlinear dimensionality reduction techniques, which can be very slow, iNLDR is fast, needs new data only when it becomes available, and keeps the locations of previous data (frames) in the embedded space for future use (as illustrated in
Real-Time Manifold Learning: To visualize the underlying manifold of high-dimensional data, manifold learning and dimensionality reduction methods are used, as more than three dimensions cannot be visualized directly. By definition, a manifold is a topological space that is locally Euclidean, i.e., around every point there is a neighborhood that is topologically the same as the open unit ball in Euclidean space. Indeed, any object that can be “charted” is a manifold. Dimensionality reduction (DR) is the mathematical mapping of a high-dimensional manifold into a meaningful representation in a lower dimension using either linear or nonlinear methods. The intrinsic dimensionality of a data set or object is presumed to mean the lowest number of characteristics that can represent the structure of the data. Mathematically, a data set X⊂RD (an array of image pixels) has intrinsic dimensionality d<D if X can be defined by d points or parameters that lie on a manifold. Dimensionality reduction methods map the dataset X={x1, x2, . . . , xn}⊂RD (images) into a new dataset Y={y1, y2, . . . , yn}⊂Rd with dimensionality d, while retaining the geometry of the data as much as possible. Generally, the geometry of the manifold and the intrinsic dimensionality d of the dataset X are not known. In recent years, a large number of methods for dimensionality reduction and manifold learning have been proposed, which belong to two groups: linear and nonlinear. Some popular linear techniques are Principal Components Analysis, Linear Discriminant Analysis, and multidimensional scaling. There is a vast number of nonlinear techniques, such as Isomap, Locally Linear Embedding, Kernel PCA, diffusion maps, Laplacian Eigenmaps, and others. Nonlinear DR techniques have the ability to deal with complex nonlinear data, and many of them perform well on artificial tasks where linear techniques fail.
However, successful applications of nonlinear DR techniques on natural datasets are scarce. One of the important applications of manifold learning algorithms is to visualize image sets and classify images based on the embedded coordinates for object recognition. Some of these applications include face recognition, pose estimation, human activity recognition and tracking objects in a video, where manifold learning has shown promising results. Most of the studies in this area have demonstrated that, among the different nonlinear manifold learning methods, Diffusion Maps, Isomap, and Locally Linear Embedding (LLE) performed well on real datasets compared to other nonlinear techniques. Therefore, these three methods can be used to deal with object recognition, and will be described further herein.
Isomap: Dimensionality reduction methods map the dataset X into a new dataset Y with dimensionality d, while retaining the geometry of the data as much as possible. If the high-dimensional data lies on or near a curved manifold, Euclidean distance does not take into account the distribution of the neighboring data points and might consider two data points as near points, whereas their distance over the manifold is much larger than the typical inter-point distance. Isomap overcomes this problem by preserving pair-wise geodesic (or curvilinear) distances between data points. Geodesic distance (GD) is the distance between two points measured over the manifold. GDs between the data points can be computed by constructing a neighborhood graph G (every data point xi is connected with its k nearest neighbors xij). GDs are estimated using a shortest-path algorithm to find the shortest path between two points in the graph. GDs between all data points form a pair-wise GD matrix. The low-dimensional space Y is then computed by applying multidimensional scaling (MDS) while retaining the GD pairwise distances between the data points as much as possible. To do so, the error between the pairwise distances in the low-dimensional and high-dimensional representations of the data is minimized: Σ(∥xi−xij∥−∥yi−yij∥)2. This minimization can be performed using various methods, such as the eigen-decomposition of a pairwise distance matrix, the conjugate gradient method, or a pseudo-Newton method.
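The Isomap pipeline described above can be sketched in a few lines of Python. The function below is an illustrative sketch (parameter names are our own, not the patent's implementation): it builds the k-nearest-neighbor graph, approximates geodesic distances with Floyd-Warshall shortest paths, and applies classical MDS.

```python
import numpy as np

def isomap_2d(X, k=6):
    """Minimal Isomap sketch: kNN graph -> geodesic distances -> classical MDS.

    X: (n, D) data matrix. Returns an (n, 2) embedding. Assumes the kNN
    graph is connected; illustrative only."""
    n = X.shape[0]
    # Pairwise Euclidean distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Keep only edges to the k nearest neighbors (symmetrized).
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]
        G[i, nbrs] = d[i, nbrs]
        G[nbrs, i] = d[i, nbrs]
    # Floyd-Warshall shortest paths approximate geodesic distances.
    for m in range(n):
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    # Classical MDS on the squared geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:2]             # two largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

The eigen-decomposition route shown here is the simplest of the minimization options mentioned above; the conjugate gradient and pseudo-Newton alternatives trade memory for iteration count.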
Diffusion maps (DfM): Diffusion maps find the subspace that best preserves the so-called diffusion interpoint distances, based on defining a Markov random walk on a graph of the data termed the Laplacian graph. A Gaussian kernel function is used to estimate the weights (K) of the edges in the graph:

K(xi, xj)=exp(−∥xi−xj∥2/σ2).

In the next step, the matrix K is normalized so that its rows add up to 1:

P(xi, xj)=K(xi, xj)/Σk K(xi, xk),

where P represents the forward transition probability of a t-time-step random walk from one data point to another. The diffusion distance is defined as:

Dt(xi, xj)2=Σk (pt(xi, xk)−pt(xj, xk))2/φ(xk),

where pt denotes the t-step transition probabilities and φ is the stationary distribution of the random walk. In the diffusion distance, parts of the graph with high density have more weight. Also, pairs of data points with a high forward transition probability have a small diffusion distance. The diffusion distance is more robust to noise than the geodesic distance because it uses several paths through the graph. Based on spectral theory on the random walk, the low-dimensional representation Y can be obtained using the d nontrivial eigenvectors of the distance matrix D: Y={λ2v2, . . . , λdvd}. As the graph is fully connected, the eigenvector v1 of the largest eigenvalue (λ1=1) is discarded, and the eigenvectors are normalized by their corresponding eigenvalues.
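A minimal diffusion-map sketch following the description above might look as follows; the kernel width sigma and step count t are illustrative assumptions, not values from the text.

```python
import numpy as np

def diffusion_map_2d(X, sigma=1.0, t=1):
    """Minimal diffusion-map sketch: Gaussian kernel -> row-stochastic
    Markov matrix -> embed with the top non-trivial eigenvectors scaled
    by their eigenvalues. Illustrative only."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / sigma ** 2)                 # Gaussian kernel weights
    P = K / K.sum(axis=1, keepdims=True)         # rows sum to 1
    evals, evecs = np.linalg.eig(np.linalg.matrix_power(P, t))
    evals, evecs = evals.real, evecs.real
    order = np.argsort(-evals)
    evals, evecs = evals[order], evecs[:, order]
    # Discard the trivial eigenvector (eigenvalue 1); keep the next two,
    # normalized (scaled) by their eigenvalues.
    return evecs[:, 1:3] * evals[1:3]
```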
Locally Linear Embedding (LLE): As illustrated in the drawings, LLE reconstructs each data point as a locally linear combination of its neighbors. The reconstruction weights W are found by minimizing the reconstruction error

ε(W)=Σi ∥xi−Σj wij xij∥2,

subject to two constraints: Σj wij=1 for every i, and wij=0 when xj is not among the k nearest neighbors of xi, where X⊂RD (image pixels) is the input data, n is the number of points, and k is the neighborhood size. The optimal weight matrix W (n×k) subject to these constraints is found by solving a least-squares problem. Then, the embedded data (Y) is computed by calculating the eigenvectors corresponding to the smallest d nonzero eigenvalues of the matrix M=(I−W)T(I−W).
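A dependency-free sketch of these LLE steps follows; the small regularization term added to the local covariance is a standard numerical safeguard, not something specified in the text.

```python
import numpy as np

def lle_2d(X, k=5, reg=1e-3):
    """Minimal LLE sketch: reconstruct each point from its k neighbors with
    weights summing to 1, then embed via the bottom eigenvectors of
    (I - W)^T (I - W). Illustrative only."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]
        Z = X[nbrs] - X[i]                       # neighbors in local coordinates
        C = Z @ Z.T                              # local covariance
        C += np.eye(k) * reg * np.trace(C)       # regularize near-singular C
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                 # enforce weights summing to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:3]                          # skip the zero eigenvector
```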
Sub-System 1: Intra-Data Stream Processing (IntaF): In this sub-system, individual object movements are detected within a frame. IntaF is a two-stage process.
Moving Objects Detection: In this step, first, the current frame is registered with the previous frame. Second, individual moving objects are detected by subtracting the current frame from the previous frame to exclude static and stationary objects in the frame. Next, the subtracted frame is converted to a binary image using Otsu thresholding. Then, shape analysis is performed on the binary image by computing the following properties: a) Area; b) Orientation; c) Bounding Box; d) Centroid; e) Major Axis Length; and f) Minor Axis Length. Based on these features, a rule is defined to exclude small and line-shaped areas from the binary image and collect the centroid with a minimum bounding box (a box on which all the points of the identified object lie) for each identified moving object.
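Setting the registration step aside, the frame-differencing stage above can be illustrated roughly as follows. Otsu's threshold is computed from the intensity histogram, connected regions are labeled, and a simple minimum-area rule stands in for the full shape-analysis rules; the function and parameter names are our own.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    """Otsu's method on an 8-bit image: pick the threshold maximizing
    between-class variance of the intensity histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                         # class-0 probability
    mu = np.cumsum(p * np.arange(256))           # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

def detect_movers(prev, curr, min_area=20):
    """Frame-differencing detector sketch (assumes frames already registered).

    Subtracts the previous frame, binarizes with Otsu's threshold, labels
    connected regions, and keeps regions above a minimum area, returning
    each surviving region's centroid, bounding box, and area."""
    diff = np.abs(curr.astype(int) - prev.astype(int)).astype(np.uint8)
    binary = diff > otsu_threshold(diff)
    labels, n = ndimage.label(binary)
    objects = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        area = int((labels[sl] == idx).sum())
        if area < min_area:                      # drop small / line-like blobs
            continue
        cy, cx = ndimage.center_of_mass(labels == idx)
        objects.append({"centroid": (cy, cx), "bbox": sl, "area": area})
    return objects
```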
Pattern (Object) Recognition: In this step, the identified objects are classified within the frame using pattern recognition techniques. Currently, a vast number of pattern recognition methods have been developed to recognize objects in a set of images. These methods can be classified into two groups: supervised and unsupervised techniques. Popular supervised techniques, such as support vector machines (SVM) and artificial neural networks (NN), have been applied in several applications, such as face recognition, pose estimation, human body activity, etc. However, their major drawback is the need for prior training with manual labeling of objects. This can have detrimental effects on performance as the size of the training data increases, which limits supervised methods for real-time applications.
Object Recognition Manifold Learning: These three nonlinear manifold learning methods explained above can be used to deal with object recognition and tracking.
Manifold Learning Steps: 1) Reconstruct the data point cloud (X): suppose the number of image patches is N, and equalize the patch sizes to L1×L2. In this application of manifold learning, the number of dimensions is equal to the number of pixels. Therefore, the point cloud (X) will be a matrix of size L×N, where L=L1*L2. 2) Apply a nonlinear dimensionality reduction (manifold learning) algorithm to reduce the dimension from L to 2. This step returns a 2D matrix, P, of size N×2, where N is the number of objects detected by the rule-based image processing step. Each data point of P represents an image patch.
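The two steps above can be sketched as follows; PCA stands in for the nonlinear DR step purely to keep the sketch self-contained, and the function name is our own.

```python
import numpy as np

def embed_patches(patches):
    """Stack N equal-size image patches (L1 x L2) into an L x N point cloud
    and reduce each patch to 2-D. PCA is an illustrative stand-in for the
    nonlinear DR step."""
    X = np.stack([p.ravel() for p in patches], axis=1)   # shape (L, N)
    Xc = X - X.mean(axis=1, keepdims=True)
    # Project the N columns onto the top-2 principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = (U[:, :2].T @ Xc).T                              # shape (N, 2)
    return P
```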
Class Identification: After applying manifold learning, an additional step of class identification is applied to segment the manifold of objects obtained by the nonlinear DR techniques and to identify classes for normal and abnormal objects in the frame.
Steps: 1) Calculate the pair-wise distance matrix (D) for matrix P in the embedded space. 2) For each data point (Pi), find the nearest point by computing the minimum value of the distances in the corresponding row of matrix D, obtaining an array named Dmin of size N, where N is the number of detected objects. 3) Calculate the mean and the 95% confidence interval (CI) on the mean of Dmin; name the mean Dmean. 4) Look at the first row of matrix D and find the data points whose distances from the first data point are within the range [Dmean−CI, Dmean+CI]. Those points belong to class 1; remove their corresponding rows from matrix D. 5) Repeat step 4 for the remaining rows of matrix D. 6) Find which class has the highest population and label it the normal class. 7) Calculate the centroid of every class. 8) Calculate the distances between the centroid of the normal class (CN) and the other classes. 9) Find which object has the maximum distance from the other detected objects in the embedded space and label it the anomaly object, and the class it belongs to the anomaly class. 10) Abnormality Rank: Normalize the distances calculated in step 8 and report them as the abnormality rank (AR). AR=1 represents the most suspicious class of objects; one of its objects (the most suspicious object) has the maximum distance from the other detected objects in the embedded space. By applying these steps, all N detected objects will belong to a class, and the number of classes (Noj) varies based on object type and shape. The identified classes for all objects, their AR, and the most suspicious object are reported to the InteF sub-system.
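The steps above might be sketched as follows. The greedy grouping follows the stated [Dmean−CI, Dmean+CI] rule, the z-value for the 95% CI is the usual 1.96, and the function returns per-point class labels plus each class's abnormality rank; all names are illustrative.

```python
import numpy as np

def identify_classes(P, z=1.96):
    """Class-identification sketch on the 2-D embedding P (N x 2):
    group points greedily by the mean nearest-neighbor distance and its
    confidence interval, take the largest class as normal, and rank the
    other classes by centroid distance from the normal class."""
    N = len(P)
    D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    Dmin = np.sort(D, axis=1)[:, 1]              # nearest-neighbor distances
    Dmean = Dmin.mean()
    CI = z * Dmin.std(ddof=1) / np.sqrt(N)       # 95% CI on the mean
    labels = -np.ones(N, dtype=int)
    cls = 0
    remaining = list(range(N))
    while remaining:
        seed = remaining[0]
        # Points whose distance from the seed falls in [Dmean-CI, Dmean+CI].
        members = [j for j in remaining
                   if j == seed or Dmean - CI <= D[seed, j] <= Dmean + CI]
        labels[members] = cls
        remaining = [j for j in remaining if j not in members]
        cls += 1
    sizes = np.bincount(labels)
    normal = int(np.argmax(sizes))               # most populated class
    cents = np.array([P[labels == c].mean(axis=0) for c in range(cls)])
    dist = np.linalg.norm(cents - cents[normal], axis=1)
    AR = dist / dist.max() if dist.max() > 0 else dist   # abnormality rank
    return labels, AR
```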
Sub-System 2: Inter-Data Stream Processing (InteF): In this sub-system, changes in the frame are predicted over time, which allows anomalies in scenes to be detected over time. InteF is a two-stage process.
Trackogram: Real-Time Manifold Learning: Standard nonlinear DR methods are non-incremental techniques and cannot be used in real-time applications. Standard DR methods work only if all the frames are available, and they can be used off-line to map video trajectories from a high-dimensional space to a 2D embedded space. Segmenting and interpreting such a trajectory, which visualizes both global and sub-manifolds (sub-spaces), is hard and subjective. However, the proposed incremental DR technique, named Trackogram and described below, is designed to deal with sub- and local manifolds on-line and in real time.
In this step, the proposed real-time manifold learning algorithm is used to predict a real-time semantic and analytic crowd behavior descriptor using a manifold formed by a sub-sample of previous frames and current video frame. This means the manifold of video frames is recursively updated over time to track normal and abnormal crowd behavior in an unsupervised and automatic manner.
If the frame matrix size is N1×N2, L equals N1×N2. After the k-sample manifold, representing the manifold of video frames around the current frame, is mapped to the 2D embedded space, the distance between the start and end points of the embedded manifold is calculated to predict changes in the current frame compared to the past. The calculated distance in the embedded space is used as a semantic descriptor of the video frames, and tracking its change over time yields a graph of crowd behavior over time, which is referred to as the Trackogram (see
Trackogram Steps: 1—Wait until K frames have occurred. 2—Obtain the k-sample manifold: collect the current frame (F(i)) and k−1 uniformly sampled frames from frame J to the current frame (i), where J=i−B. B is a user-defined value and determines how far back to go to obtain the history of crowd behavior. 3—Apply nonlinear DR to map the k-sample manifold, KSM(i), to the 2D embedded space. 4—Calculate the distance between the start and end points of the embedded manifold to predict changes in the current frame compared to the past. Store the calculated distance in array T (the Trackogram) as the ith value of T. 5—When a new frame (F(i+1)) arrives, go to step 2 and incrementally update the k-sample manifold to obtain the updated manifold KSM(i+1); then repeat steps 3 and 4 for this new frame (F(i+1)).
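One possible sketch of the Trackogram loop follows; PCA is again an illustrative stand-in for the incremental nonlinear DR, and the parameter defaults are assumptions.

```python
import numpy as np

def trackogram_step(frames, i, K=5, B=20):
    """One Trackogram update, per the steps above: collect the current
    frame F(i) plus K-1 frames uniformly sampled back to J = i - B, map
    the K-sample manifold to 2-D, and return the distance between the
    start and end points of the embedded manifold."""
    J = max(i - B, 0)
    idx = np.unique(np.linspace(J, i, K).round().astype(int))
    X = np.stack([frames[j].ravel() for j in idx]).astype(float)  # K x L
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = Xc @ Vt[:2].T                           # K points in the 2-D space
    return float(np.linalg.norm(Y[-1] - Y[0])) # change vs. the past

def trackogram(frames, K=5, B=20):
    """Array T: the i-th value is the embedded start-to-end distance at frame i."""
    return np.array([trackogram_step(frames, i, K, B)
                     for i in range(K - 1, len(frames))])
```

A static scene yields a flat Trackogram, while steadily changing frames produce strictly positive distances, which is what the anomaly-detection stage below exploits.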
Anomaly Detection using Rule-Based Decision Making: To detect anomalies in crowd behavior over time, first calculate the derivative of the Trackogram (dT/dt) to detect sudden changes in crowd behavior, where t represents time. Then calculate the difference between the upper and lower envelopes of dT/dt and use it as the proposed anomaly index, which is a continuous index; a thresholding algorithm can be used to obtain a binary anomaly detection index. The IntaF sub-system provides this unit with the identified classes for all objects, their AR, and the most suspicious objects. If the anomaly index increases dramatically compared to the past (baseline) and stays high for two consecutive frames, this unit considers the most suspicious objects reported by the IntaF sub-system to be anomalies in the current frame. A summary of the algorithm and the rules set by this unit follows:
1—Find the derivative of T, dT/dt=T(t)−T(t−1), where t is the frame index of the current frame. 2—Find the upper and lower envelopes of dT/dt. 3—Calculate the anomaly index (Ax) as the difference between the upper and lower envelopes of dT/dt. 4—Calculate the average (Amean) and confidence interval (ACI) of the k previous values of the anomaly index (Ax). 5—If the anomaly index values for the current and previous frames are bigger than Amean+2*ACI, report the most suspicious objects in the current frame (reported by the IntaF sub-system) as the anomaly (anomalies) in the current frame.
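A rough sketch of these five rules follows; a rolling max/min stands in for the envelope extraction, which the text does not specify further, and the window sizes are assumptions.

```python
import numpy as np

def anomaly_flags(T, k=10, win=5, z=2.0):
    """Rule-based decision sketch on Trackogram array T.

    dT/dt is the first difference; a rolling max/min approximates the
    upper/lower envelopes; the anomaly index Ax is their difference. A
    frame is flagged when Ax exceeds Amean + 2*ACI (computed over the k
    previous values) for two consecutive frames."""
    dT = np.diff(T)                              # dT/dt = T(t) - T(t-1)
    n = len(dT)
    upper = np.array([dT[max(0, t - win):t + 1].max() for t in range(n)])
    lower = np.array([dT[max(0, t - win):t + 1].min() for t in range(n)])
    Ax = upper - lower                           # anomaly index
    flags = np.zeros(n, dtype=bool)
    for t in range(k + 1, n):
        hist = Ax[t - k:t]                       # k previous index values
        Amean = hist.mean()
        ACI = z * hist.std(ddof=1) / np.sqrt(k)  # confidence interval on mean
        thresh = Amean + 2 * ACI
        flags[t] = Ax[t] > thresh and Ax[t - 1] > thresh
    return Ax, flags
```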
PRELIMINARY DATA: To validate the proposed method and system, the method was applied to the following crowd activity benchmark datasets.
1) University of Minnesota dataset (UMN 2009): This dataset includes several video sequences of three different scenarios. The 3rd scenario, with a normal starting section and an abnormal ending section, was used. A group of people start running (anomalous behavior) after rotating randomly in a circle several times in the beginning part of the video.
2) University of California, San Diego Anomaly Dataset (UCSD 2010): This dataset includes several video sequences of four different scenarios: biker, wheelchair, cart and skater. A difficult anomaly case (skater and biker) was used to test the proposed methods. In the skater case, a skater enters the scene in frame 60 and remains in the scene until the end.
3) Night vision stereo sequences provided by Daimler AG in June 2007: This dataset includes several video sequences of seven different scenarios: Construction-Site, Crazy-Turn, Dancing-Light, Intern-On-Bike, Safe-Turn, Squirrel, and Traffic-Light. Another difficult anomaly case (Squirrel or Fox) was used to test the proposed methods. In the Squirrel case, a squirrel enters the scene.
Novel variational optical flow techniques, as well as efficient tracking techniques using kernel methods and particle filters, can also be used in conjunction with the present method. These approaches can be used alongside the iNLDR techniques to find motion anomalies that would point to suspicious or unusual activities. In this case, motion flow would be dimensionality-reduced via iNLDR, and then either Support Vector methods or geodesic-distance-based approaches would be used for recognition or discrimination. Additional techniques developed to find anomalies in high-dimensional data (in particular, hyperspectral data) based on machine learning techniques, and in particular Support Vector Data Description (SVDD), can also be used with the present invention. SVDD can be used in sub-manifold spaces representing scenes, 3D motion, or images to determine which instances behave as outliers.
Hyperspectral Imagery: In addition to applying the iNLDR method to NV goggles, these algorithms can also be used to solve detection and tagging problems in hyperspectral imagery. Hyperspectral imagery consists of high-resolution spectral information, providing thousands of bands for each image pixel, thereby encoding the reflectance of the object and/or material of interest across a specific swath of the EM spectrum, typically spanning the visible and IR ranges. Because it is able to see the fine spectral signature of materials, a hyperspectral camera can discriminate between fine changes in reflectance. However, because of the high dimensionality of the data (what is acquired is a data cube several times a second, each cube itself consisting of thousands of images, one per spectral band), this type of data is an ideal candidate for processing using DR algorithms.
Unfortunately, most dimensionality reduction algorithms are subject to the possibility of losing critical subspaces (features) that are most discriminative for anomaly detection or object recognition purposes. This is not the case for the iNLDR approach. Therefore, several strategies for performing anomaly/target detection while leveraging the iNLDR approach can also be used in conjunction with the present method, as follows: (a) performing anomaly detection directly in the dimensionality-reduced hyperspectral image space; comparing it to (b) existing methods relying on support vector data description (SVDD) or the RX detector applied directly in the original hyperspectral space; and finally comparing the two approaches to (c) performing SVDD or RX detection in the dimensionality-reduced space. Both global and local referentials can be used, in order to characterize global anomalies as well as fine anomalies that consist of subtle differences between groups (i.e., being able to distinguish a car among mostly trucks is a global anomaly, while being able to distinguish a specific blue Ford Explorer with a fine/abnormal variation of tint among a set of blue Ford Explorers is a local anomaly). Finding the local referential will be accomplished by clustering images in the submanifold and, for each cluster, finding a subset of images that can help define a local referential system. The interplay of the dimensionality reduction performed via iNLDR and the implicit increase in dimensionality brought about by the use of the SVDD with Gaussian radial basis functions allows for the definition of non-linear decision boundaries.
Image vs. Feature Spaces: Anomaly detection can also be carried out, not in the dimensionality-reduced image space, but instead in the iNLDR-dimensionality-reduced feature space. One possibility is to concatenate Scale Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF) feature vectors of salient image feature points found in the image. Such a comparison would allow object detection or anomaly detection to be performed by efficiently computing geodesic distances in the dimensionality-reduced feature space. To address the issue of how to combine these features in a way that is consistent across images and invariant to their location in the image, one possibility is to use a bag-of-visual-words (BOW) approach and to take as feature vectors the frequencies at which the visual words appear in the image. Another possibility, which still encodes the important information of object location, is to use spatial pyramids with BOW, as was proposed recently in combination with iNLDR.
The system and method can also be integrated into available embedded chips, such as Field Programmable Gate Arrays (FPGA). FPGAs provide a reconfigurable, massively parallel hardware framework on which such systems can be implemented. This enables fast computations that can out-perform Graphical Processor Units (GPU) if the problem is fine-grained parallelizable.
The MovA algorithm maps readily to the FPGA computation fabric, allowing the entire system to be realized on a medium- to large-scale FPGA. FPGAs can operate 100× to 1000× faster than a CPU. Furthermore, these FPGA systems are much more compact and use much less power than their CPU and GPU counterparts, allowing them to be embedded into mobile platforms such as robots and UAVs and wearable devices such as NV goggles.
The many features and advantages of the invention are apparent from the detailed specification, and thus, it is intended by the appended claims to cover all such features and advantages of the invention which fall within the true spirit and scope of the invention. Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
This application claims the benefit of U.S. Provisional Patent Application No. 61/651,748, filed on May 25, 2012, which is incorporated by reference, herein, in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2013/042869 | 5/25/2013 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
61651748 | May 2012 | US |