A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. Further, no references to third party patents or articles made herein are to be construed as an admission that the present invention is not entitled to antedate such material by virtue of prior invention.
The present invention relates generally to the field of automated systems and methods for traffic enforcement and more particularly to the acquisition of video files in connection with traffic signal light violations.
In the field of traffic enforcement, there exists a variety of systems and methods for acquiring and capturing data related to a traffic violation event, such as the capture of a video of the violation event, as well as the acquisition and delivery of other information about the traffic violation itself. The traffic violation may be any action that violates an operating law and more particularly, that violates a traffic signal light, such as a red light traffic violation, by traveling through an intersection in violation of the traffic light (i.e., after the light has already turned red).
It is desirable to detect, capture and store violation events via roadside traffic enforcement cameras, or other imaging devices. For example, when tracking a red light violation, it is desirable to capture a video of the violation, as well as any relevant information such as the location of the violation, the date, the duration of the violation, information identifying the violating vehicle and/or operator and any other pertinent information useful in proving that the violation occurred, such as the state of the traffic light signal.
According to current traffic enforcement systems, determination of traffic signal states (i.e., the color of the traffic light—red, yellow or green) is achieved using electronic devices that are electronically connected to the traffic light system and/or its controller. Such enforcement systems can sense the presence or absence of power being transmitted to a traffic signal head. For example, a module may be connected to a traffic signal input to measure the presence or absence of power to each signal disc. However, this method typically requires direct wiring between the traffic signal input and the traffic enforcement module to measure the presence or absence of power. Traffic signal controllers may vary greatly. Thus, it may prove difficult to provide a wired interface that accommodates the majority of light systems without the need for significant customization. Such custom installation increases the costs of providing an enforcement system.
Various attempts have been made to overcome the need for a hardwired connection between the signal and the enforcement system, such as providing an inductive toroidal coil, placed around the electrical wire that feeds each signal disc, to measure the presence or absence of power. However, this still requires a connection to the target traffic signal to determine the state of the signal. This requirement of a connection to the signal head, directly or indirectly, becomes even more problematic when the connections are either prohibited by law or made impossible or costly due to physical constraints. Connecting to a traffic signal light head to determine its state clearly has its disadvantages.
It is therefore highly desirable to provide a system and method for determining traffic signal states without requiring a wired connection to the traffic signal or need to sense the electrical state of the signal wiring. This approach should improve automated traffic enforcement by enabling intersections to be monitored without the need to hardwire into, or form another type of direct connection or communication with, the traffic light control system. In this manner, an intersection may be monitored once the enforcement cameras and associated enforcement system components are installed, without the need for additional connections to the traffic light signal itself.
This invention overcomes disadvantages of the prior art by providing a system and method for determining the state (e.g. red, green, yellow, red arrow, green arrow, etc.) of a traffic signal light using traffic enforcement cameras that are free of interconnection, wired or otherwise, to the controllers or wiring of the traffic signal system. In general, the invention herein provides a system and method for automatically predicting, tracking and capturing traffic violation events in which the traffic enforcement cameras include a signal camera provided to transmit images to a video signal sensing software module so that they can be used to determine the state of the traffic signal. This data can be used in compiling the overall information relating to the traffic violation, such as for generating a citation of the violation that includes images of the violation.
In an illustrative embodiment, there is provided a system and method for acquiring pertinent information related to a traffic violation event. More particularly, the system and method employs one traffic enforcement camera to capture a video file of the traffic signal violation event, while simultaneously employing a signal camera that provides images to a signal sensing module that employs machine vision search techniques to determine the state of the traffic signal. The method first monitors a particular roadside area for traffic violations. For example, there may be a plurality of video cameras each having a respective, discrete view of an intersection that is being monitored for, by way of example, a red light traffic violation. A prediction algorithm is employed to determine if a vehicle is a potential violator, and if so, a video of the violation is captured. Simultaneously, a signal video camera according to an illustrative embodiment captures images of the traffic signal light head, and transmits the images to a processing unit that runs a video signal-sensing software module to determine the active state of the light.
In the illustrative embodiment, the state (i.e., red, yellow or green) of the traffic signal is determined utilizing the hue, brightness, color intensity, shape and temporal changes detected by a traffic enforcement camera employing machine vision search techniques. Each of these factors is weighted differently according to a video signal-sensing algorithm or process to determine the active state of the traffic signal as being red, yellow or green.
Combining the state of the traffic light with the violation video creates a piece of evidence that is used to verify the violation of the traffic light. This information may be reviewed by traffic enforcement personnel to issue warnings and/or citations accordingly. When a citation is issued, these images may be provided directly thereon to automatically issue the citation having direct proof of the violation.
The invention description below refers to the accompanying drawings, of which:
In accordance with the present invention there is provided a video signal sensing system and method for the prediction, tracking and capturing of a video and other information related to a traffic violation event. More particularly, there is provided a system and method for acquiring and capturing information related to a traffic signal light violation, such as a red light violation, using traffic enforcement cameras to sense a video signal. A “red light violation” as used herein occurs when a vehicle passes the stop line while the designated traffic signal is red and then continues through the intersection.
Referring now to
The system employs a plurality of traffic enforcement cameras including a tracking camera 110, a signal camera 120 and an enforcement camera 130 to monitor the intersection 100 for possible violations. The tracking camera 110, as described in greater detail below with particular reference to
As shown in
An illustrative system employs environmentally sealed/protected pan, tilt, zoom and fixed mount video cameras mounted on existing traffic signal poles or additional poles provided at an intersection, onto which the cameras are mounted. These video cameras are the only devices required to perform the gathering of traffic enforcement evidence, as the video signal sensing is performed by a camera to detect a violation.
Referring again to
As will be discussed in further detail below, a variety of techniques can be employed to determine the light state. Some techniques can employ color identification, discerning between a bright contrasting field of red, green or yellow appearing within the overall field of view of the signal camera 120. Since the camera 120 and signal(s) are fixed with respect to each other, the signal camera 120 can be adapted and/or set to image a box that defines a narrow field around each signal so as to avoid false readings from, for example, the sun or a streetlight. Likewise, the vision system can search for particular ranges of wavelengths that are specifically characteristic of the particular signal colors. In an alternate, or complementary technique, the vision system is trained to determine whether a high-contrast brightness (grayscale, for example) appears in the top, middle, or bottom part of the signal's field of view, representing the appropriate signal state. In such systems, the color detection can be substituted with grayscale detection which determines levels of brightness rather than different colors.
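By way of a non-limiting illustration, the grayscale technique described above can be sketched as follows. The function name, the representation of the region of interest as rows of pixel intensities, and the band-to-state mapping are assumptions for illustration only, not the patented implementation:

```python
def state_from_brightness(roi_rows):
    """Return 'red', 'yellow', or 'green' depending on whether the
    brightest third of the signal head's field of view is the top,
    middle, or bottom band (a vertically mounted three-lens head is
    assumed)."""
    n = len(roi_rows)
    third = n // 3
    bands = {
        "red": roi_rows[:third],              # top lens
        "yellow": roi_rows[third:2 * third],  # middle lens
        "green": roi_rows[2 * third:],        # bottom lens
    }
    # Average intensity per band; the brightest band wins.
    avg = {s: sum(sum(r) for r in rows) / max(1, sum(len(r) for r in rows))
           for s, rows in bands.items()}
    return max(avg, key=avg.get)

# Example: a 6x3 grayscale ROI whose bottom rows are bright (green lit).
roi = [[10, 12, 11],
       [11, 10, 12],
       [12, 13, 11],
       [10, 11, 12],
       [200, 220, 210],
       [215, 205, 225]]
print(state_from_brightness(roi))  # green
```

In practice the bounding box around each lens would come from the fixed camera-to-signal geometry noted above, rather than a simple division into thirds.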
According to this embodiment, a single camera is used for the signal sensing of the illustrative system. This single camera has a view of the entire intersection, including the signal lights. As will be described in reference to
The signal camera 120 transmits a video as input to a processing unit 175 having a video signal sensing (VSS) software module 180 thereon. The processing unit 175 receives a video input from the signal camera 120 and then runs the VSS software module 180 to determine the active state of the signal. The method for implementing this is described below with reference to
Also shown in
Referring now to
In an illustrative embodiment, the tracking camera is placed at an optimal location such that it provides a clear image of the tracking field of view (shaded area 210), preferably with a view from approximately 100 feet before the stop line to 20 feet after the stop line. During installation, the camera 110 should be placed at a location that is 32 to 38 feet from the ground, as the higher the camera is placed, typically the better the view of the violation area. However, the location of the camera 110 can be varied as required to adapt to each location and/or intersection. It is typically desirable that the tracking camera be located no more than approximately 50 feet from the stop line to provide a clear and accurate view of the intersection 100.
Generally, the system employs at least one prediction unit responsible for predicting potential traffic violations and at least one violation unit in communication with the prediction unit for recording the violations. A prediction unit processes each video captured by a prediction camera so as to identify predicted violators. The prediction unit then sends a signal to the violation unit if it finds a high probability of violation events. The violation unit then records the high probability events. As previously described a more detailed discussion of the methods and systems for performing the prediction and tracking of traffic violation events, is found, for example, in commonly assigned U.S. Pat. No. 6,754,663.
Also shown in
Referring now to
Also note that the image frame 300 of
Referring now to
The signal camera provides a recording of vehicles approaching and passing the stop line from the rear in monitored lanes at the time of violations by obtaining a plurality of signal images. This view includes clearly visible signal lights. In an illustrative embodiment, this camera has a clear view from at least approximately 20 feet before the stop line to at least approximately 20 feet after the stop line, and also a clear view of the signal head controlling the monitored lanes. The lower the height of this camera, the better; it is preferably placed at approximately 17 feet, although up to approximately 20 feet is appropriate.
According to the illustrative system of
The signal camera field of view (the shaded region 410 of
Referring now to
In an alternate embodiment, the enforcement camera can be located on the opposite side of the intersection 100 than that depicted in
Reference is now made to
A second camera of the dual camera arrangement, the signal view camera 820, provides a view similar to the signal camera 120 of
As described above initially with reference to
Next, the VSS software module generally employs procedure step 930, which is a combination of five processes to determine the likelihood, or probability, of each phase being active based on a plurality of imaging factors, including hue (at procedure step 931), brightness (932), color intensity (933), shape match (934), and temporal changes (935), as will be described in greater detail below. Each process is adaptive, meaning that it continuously adjusts its parameters based on the image and the recognition results. The probabilities of the processes (931, 932, 933, 934 and 935) are combined by employing a probability combination process at step 940 to produce weighted average probabilities for each signal phase. The weight of each individual process is determined based on the recognition performance of its corresponding value from a previous image. Processes that have better recognition performance are weighted more heavily than those with worse recognition performance.
Finally, at step 950, a state determination process is employed that uses the signal phase with the maximum combined probability as the active state (red, yellow, or green) of the signal light. This can be determined by employing the following formula: s* = arg maxs fcombined(s)
According to the formula for determining signal head state, fcombined(s) is the combined probability for phase ‘s’ and is calculated by performing a weighted average of the probability of all five according to the following formula:
fcombined(s) = whue*fhue(s) + wbrightness*fbrightness(s) + wcolor*fcolor(s) + wshape*fshape(s) + wchange*fchange(s)
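The weighted combination and state determination of steps 940 and 950 can be sketched, by way of non-limiting example, as follows. The detector names and the explicit normalization by the sum of the weights are illustrative assumptions (the weights may instead be maintained so that they sum to one):

```python
def combined_probability(f, w, s):
    """Weighted average of the five detector probabilities for phase s.
    f[d][s] is detector d's probability for phase s; w[d] is its weight."""
    detectors = ("hue", "brightness", "color", "shape", "change")
    total = sum(w[d] for d in detectors)
    return sum(w[d] * f[d][s] for d in detectors) / total

def active_state(f, w, states=("red", "yellow", "green")):
    # Step 950: the phase with the maximum combined probability wins.
    return max(states, key=lambda s: combined_probability(f, w, s))

# Illustrative detector outputs for one frame, with uniform weights.
f = {
    "hue":        {"red": 0.8, "yellow": 0.1, "green": 0.1},
    "brightness": {"red": 0.7, "yellow": 0.2, "green": 0.1},
    "color":      {"red": 0.6, "yellow": 0.3, "green": 0.1},
    "shape":      {"red": 0.5, "yellow": 0.3, "green": 0.2},
    "change":     {"red": 0.9, "yellow": 0.05, "green": 0.05},
}
w = {d: 1.0 for d in f}
print(active_state(f, w))  # red
```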
The process by which the probability of each factor is determined by its respective detector will now be described. To determine fhue(s), so as to be used in the above equation, according to the hue determination process of step 931, the hue detector calculates an average hue value for all of the pixels within the bounding box (or directed camera) of the target signal head. It also estimates and tracks the average hue value and its variance separately for different signal states (i.e., red, yellow, or green). The Bayesian rule, as formulated in the following equation, calculates the probability of the average hue as representing a particular state of the traffic signal, such as red, yellow, or green:
According to the above equation for calculating hue probability, when the signal s is active,
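A non-limiting sketch of the hue process of step 931 follows, under the assumption that the tracked per-state hue statistics (mean and variance) are modeled as Gaussians and combined by Bayes' rule as described above. The function names, the example hue values, and the uniform priors are illustrative only:

```python
import math

def hue_likelihood(h, mean, var):
    # Gaussian likelihood of the observed average hue h under a
    # state's tracked hue mean and variance.
    return math.exp(-(h - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def f_hue(h, models, priors):
    # Bayes' rule: posterior probability of each signal state given
    # the average hue h within the signal head's bounding box.
    joint = {s: priors[s] * hue_likelihood(h, *models[s]) for s in models}
    z = sum(joint.values()) or 1.0
    return {s: p / z for s, p in joint.items()}

# Illustrative per-state hue models (mean, variance) and uniform priors;
# red hues are assumed to cluster near 0 on a 0-360 hue scale.
models = {"red": (0.0, 25.0), "yellow": (60.0, 25.0), "green": (120.0, 25.0)}
priors = {s: 1.0 / 3.0 for s in models}
posterior = f_hue(5.0, models, priors)
print(max(posterior, key=posterior.get))  # red
```

In operation, the per-state means and variances would be updated from frames in which the corresponding state was recognized, consistent with the adaptive updating described below.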
To compute the probability for brightness, fbrightness(s), the brightness detector, according to the brightness determination process of step 932, first identifies the bright pixels around each signal disc based on its location within the bounding box (or directed camera) of the target signal head and calculates its center of mass and the size of the bright area. Bright pixels are defined as pixels whose intensity values exceed a certain threshold. Once the bright area for each signal disc is identified, its center of mass is compared to the projected center of each signal disc based on the bounding box geometry and a probability value is calculated according to the following formula:
According to the above formula for computing fbrightness(s), (xc(j), yc(j)) is the center of mass for the bright pixels around signal disc j and m(j) is the corresponding size. According to the configuration, (x(j), y(j)) is the projected center of signal disc j. Note that a signal disc could have no bright area if it is not currently active. In that instance, the corresponding size m(j) would have a value of 0 and thus the probability of brightness would be zero (the state would not be active).
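The brightness process of step 932 can be sketched, by way of non-limiting example, as follows. Since the exact formula is not reproduced in the text above, the Gaussian falloff used to score the distance between the bright-pixel center of mass and the projected disc center is an assumption, as are the function name and the sigma parameter:

```python
import math

def f_brightness_disc(pixels, threshold, proj_center, sigma=5.0):
    """pixels: list of (x, y, intensity) within the bounding box.
    Finds the bright pixels, computes their center of mass, and scores
    its closeness to the projected disc center."""
    bright = [(x, y) for x, y, v in pixels if v > threshold]
    m = len(bright)
    if m == 0:
        return 0.0  # no bright area: the disc is not currently active
    xc = sum(x for x, _ in bright) / m
    yc = sum(y for _, y in bright) / m
    dx, dy = xc - proj_center[0], yc - proj_center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

# A lit disc: bright pixels clustered near the projected center (10, 10).
lit = [(10, 10, 250), (11, 10, 240), (10, 11, 245), (50, 50, 20)]
print(f_brightness_disc(lit, 128, (10, 10)))  # close to 1.0
print(f_brightness_disc(lit, 255, (10, 10)))  # 0.0: no pixel exceeds 255
```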
According to the color determination process of step 933, to compute fcolor(s), the color detector calculates average red, yellow and green values from pixels around the corresponding signal disc. Yellow is calculated as an average from red and green channels. The color probability is then calculated according to the following formula:
According to the above color probability formula,
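The color process of step 933 can be sketched as follows. The patent's exact color probability formula is not reproduced in the text above, so the normalization of the three channel averages into probabilities is an illustrative assumption; only the derivation of yellow as an average of the red and green channels is taken from the description:

```python
def f_color(pixels):
    """pixels: list of (r, g, b) values sampled around the disc of
    interest. Averages the channels; yellow is modeled as the mean of
    the red and green channels, per the description above."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    y = (r + g) / 2.0  # yellow from the red and green channels
    z = (r + y + g) or 1.0  # assumed normalization into probabilities
    return {"red": r / z, "yellow": y / z, "green": g / z}

# Pixels sampled around a lit red disc: a strong red channel dominates.
scores = f_color([(200, 30, 10), (210, 25, 15), (190, 35, 12)])
print(max(scores, key=scores.get))  # red
```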
To compute the probability of a shape match, fshape(s), the shape detector, according to the shape determination process of step 934, first converts the RGB subimage Icurr into a grayscale image and builds a shape model, I(s), for each active signal s using incremental averaging. I(s) represents an average value of how the signal head looks on a grayscale when the signal s is active. Once the shape model, I(s), for each signal is built, it compares the current grayscale image to each shape model and calculates the probability of each state being active based on the difference between the current grayscale image and the shape models, I(s), according to the following formula:
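The shape process of step 934 can be sketched, by way of non-limiting example, as follows. The incremental-averaging rate alpha and the mapping from image difference to a score are assumptions, since the formula itself is not reproduced above:

```python
def update_shape_model(model, gray, alpha=0.05):
    """Incremental (running) average of how the signal head looks in
    grayscale when a given state s is active; alpha is an assumed
    learning rate."""
    if model is None:
        return [row[:] for row in gray]  # first observation seeds the model
    return [[(1 - alpha) * m + alpha * g for m, g in zip(mr, gr)]
            for mr, gr in zip(model, gray)]

def shape_score(model, gray):
    """Similarity between the current grayscale image and a state's
    shape model I(s): higher when the mean absolute difference is small."""
    diffs = [abs(m - g) for mr, gr in zip(model, gray) for m, g in zip(mr, gr)]
    mad = sum(diffs) / len(diffs)  # mean absolute difference
    return 1.0 / (1.0 + mad)      # assumed difference-to-score mapping

gray = [[10.0, 200.0], [12.0, 195.0]]  # toy grayscale view of the head
model = update_shape_model(None, gray)
print(shape_score(model, gray))  # 1.0 for a perfect match
```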
According to the change determination process of step 935, to determine the probability based on temporal changes, fchange(s), the change detector first computes an average intensity, ī(s), around each signal disc when in the active state, s, and then estimates an average intensity, ī0(s), for each signal when it is not active. The change probability is calculated according to the following formula:
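The change process of step 935 can be sketched as follows. Because the change probability formula is not reproduced above, the soft comparison of the current intensity against the tracked active and inactive levels is an assumed form:

```python
def f_change(i_now, i_active, i_inactive):
    """Score how well the current average intensity i_now around a
    disc matches its tracked active level versus its inactive level.
    Approaches 1 near the active level, 0 near the inactive level."""
    d_on = abs(i_now - i_active)
    d_off = abs(i_now - i_inactive)
    return d_off / ((d_on + d_off) or 1.0)

print(f_change(200.0, 210.0, 30.0))  # near 1: matches the active level
print(f_change(30.0, 210.0, 30.0))   # 0.0: matches the inactive level
```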
After the probability values for each of the five detectors (931, 932, 933, 934 and 935) are calculated, they are averaged based on their corresponding weights by the combined probability process at step 940 by a combined detector. This produces a combined probability value for each signal, and the signal with the greatest combined probability value is selected as the current active signal, s*, to be used in the following formula, also depicted above:
Once the current active state, s*, is identified, it is compared with the signal with a maximum probability based on each individual detector. If the maximum probability signal, based on an individual detector, agrees with the maximum likelihood signal, based on the combined likelihood, its weight is increased. Otherwise, its weight is decreased. More specifically, for example, for the hue detector, the average hue value for the active signal and its variance will be updated using the current hue value. And for the shape detector, the shape model for the active signal is updated using the current grayscale image. Also, for the change detector, the average intensity values for the non-active signals are updated using the values from the current image.
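The adaptive weight update described above can be sketched, by way of non-limiting example, as follows. The multiplicative update, the rate eta, and the renormalization are assumptions; the description above states only that an agreeing detector's weight is increased and a disagreeing detector's weight is decreased:

```python
def update_weights(w, f, s_star, eta=0.05):
    """w: current weight per detector; f[d]: detector d's per-phase
    probabilities; s_star: the winner by combined probability."""
    new_w = {}
    for d, probs in f.items():
        best = max(probs, key=probs.get)
        # Agree with the combined winner -> weight up; disagree -> down.
        new_w[d] = w[d] * (1 + eta) if best == s_star else w[d] * (1 - eta)
    z = sum(new_w.values())
    return {d: wv / z for d, wv in new_w.items()}  # renormalize

# hue agreed with the combined winner "red"; shape preferred green.
f = {"hue": {"red": 0.9, "green": 0.1}, "shape": {"red": 0.2, "green": 0.8}}
w = {"hue": 0.5, "shape": 0.5}
new_w = update_weights(w, f, "red")
print(new_w["hue"] > new_w["shape"])  # True
```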
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Each of the various embodiments described above may be combined with other described embodiments in order to provide multiple features. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, the violation event described herein has been related primarily to a vehicle traveling through an intersection after the traffic light has already turned red. However, it is expressly contemplated that this has application in all areas of traffic enforcement, including, but not limited to, any situation in which the action of a driver subsequent to the change of a traffic signal may result in a traffic violation. Also, the depicted images relate to an intersection of two roads, however the teachings herein are applicable to any traffic light having multiple states that a driver and/or vehicle must obey such that a violation of the light results in a citation being issued to the operator. The detected state of the system can, likewise, be limited to those that either do or do not result in a violation (e.g. detect only red or detect only green/yellow). In general, the system and method herein can be implemented as hardware, software in the form of program instructions on a computer-readable medium, or a combination of hardware and software. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
This application claims the benefit of U.S. Provisional Application No. 61/298,948, filed Jan. 28, 2010, the content of which is incorporated by reference herein and relied upon.