The disclosed embodiments relate to data-processing systems and methods. The disclosed embodiments further relate to camera diagnostics. The disclosed embodiments also relate to strategic use of moving test targets for traffic camera diagnostics.
Numerous localities use traffic cameras for video surveillance, security applications, and transportation applications. Traffic cameras are also used for traffic monitoring, traffic management, and for fee collection and/or photo enforcement applications such as open road tolling, red light enforcement, and speed enforcement. For example, in an effort to curb red-light running and promote better driving, some localities have implemented automated traffic enforcement systems, such as red light monitoring and enforcement systems. Red light monitoring and enforcement systems can be predictive in nature. Such a system can predict whether a vehicle is going to run a red light by determining how fast the vehicle approaches an intersection and can then capture images of the vehicle running the red light.
Maintenance of vast quantities of traffic cameras is a challenging undertaking. These cameras are often not easily accessible, usually being mounted on a pole high up in the air to prevent vandalism or to obtain a better field of view. It is also difficult to set up and perform camera diagnostics, with the power and wiring for the cameras often located in the ground while the cameras are high up in the air. Further, there is often no display to view and analyze the immediately-acquired data during maintenance or diagnostics. It is also difficult to place test targets in the field of view (i.e., “FOV”) in the center of traffic without disturbing or disrupting traffic.
Prior proposed solutions fail to address the traffic camera diagnostics problem. One can use indirect information (e.g., the yield of an ALPR system or the frequency with which manual plate reading is needed can be an indirect indication of camera quality degradation), or use elements in the scene (e.g., static sharp edges found in the scene to test/track the focus of the camera) to perform some level of diagnostics. But the capability and accuracy of these options are very limited and often scene- and application-dependent.
Therefore, a need exists for controlled and specialized test targets placed in the FOV for traffic camera diagnostics. It is thus the objective of this invention to propose a cost-effective and accurate system to overcome the limitations of prior proposed solutions. Key advantages of this invention include cost savings (e.g., no need for lane/traffic stops, less manual intervention) and better diagnostics performance (e.g., use of controlled/specialized test targets in the FOV, more measurement points than static test targets, less scene dependency, etc.).
The following summary is provided to facilitate an understanding of some of the innovative features unique to the embodiments disclosed and is not intended to be a full description. A full appreciation of the various aspects of the embodiments can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
It is, therefore, one aspect of the disclosed embodiments to provide for improved data-processing systems and methods.
It is another aspect of the disclosed embodiments to provide for improved camera diagnostics.
It is a further aspect of the disclosed embodiments to provide for strategic use of moving test targets for traffic camera diagnostics.
The above and other aspects can be achieved as is now described. A method, system, and computer-usable tangible storage device for traffic camera diagnostics via strategic use of moving test targets are disclosed. The disclosed embodiments can comprise four modules: a moving test target management module, a moving test target detection and identification module, an image/video feature extraction module, and a sensor characterization and diagnostics module. A test vehicle can travel periodically through the FOV of traffic camera(s) of interest. The traffic camera(s) can then identify these test vehicles via matching of license plate numbers and identify test targets in video frames through pattern matching or barcode reading. The identified test targets are then analyzed to extract image and video features that can be used for sensor characterization, sensor health assessment, and sensor diagnostics. The disclosed embodiments thus provide for non-traffic-stop (i.e., non-traffic-interruption) traffic camera diagnostics.
The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the embodiments and, together with the detailed description, serve to explain the embodiments disclosed herein.
The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
The embodiments now will be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. The embodiments disclosed herein can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by one of skill in the art, one or more of the disclosed embodiments can be embodied as a method, system, or computer program usable medium or computer program product. Accordingly, the disclosed embodiments can in some instances take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “module”. Furthermore, the disclosed embodiments may take the form of a computer-usable medium, computer program product, or computer-readable tangible storage device storing computer program code, said computer program code comprising program instructions executable by a processor and embodied in a computer-usable storage medium. Any suitable computer-readable medium may be utilized, including hard disks, USB flash drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
Computer program code for carrying out operations of the present invention may be written in an object-oriented programming language (e.g., Java, C++, etc.). The computer program code for carrying out operations of the present invention, however, may also be written in conventional procedural programming languages, such as the “C” programming language, or in a programming environment such as, for example, Visual Basic.
The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN), a wide area network (WAN), a wireless data network (e.g., WiFi, WiMax, 802.xx), or a cellular network, or the connection may be made to an external computer via most third-party-supported networks (for example, through the Internet using an Internet Service Provider).
The disclosed embodiments are described in part below with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products and data structures according to embodiments of the invention. It will be understood that each block of the illustrations, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
As depicted in
Data-processing apparatus 100 can thus include CPU 110, ROM 115, and RAM 120, which are also coupled to a PCI (Peripheral Component Interconnect) local bus 145 of data-processing apparatus 100 through PCI Host Bridge 135. The PCI Host Bridge 135 can provide a low latency path through which processor 110 may directly access PCI devices mapped anywhere within bus memory and/or input/output (I/O) address spaces. PCI Host Bridge 135 can also provide a high bandwidth path for allowing PCI devices to directly access RAM 120.
A communications adapter 155, a small computer system interface (SCSI) 150, and an expansion bus-bridge 170 can also be attached to PCI local bus 145. The communications adapter 155 can be utilized for connecting data-processing apparatus 100 to a network 165. SCSI 150 can be utilized to control high-speed SCSI disk drive 160. The expansion bus-bridge 170, such as a PCI-to-ISA bus bridge, may be utilized for coupling ISA bus 175 to PCI local bus 145. Note that PCI local bus 145 can further be connected to a monitor 130, which functions as a display (e.g., a video monitor) for displaying data and information for a user and also for interactively displaying a graphical user interface (GUI) 185. A user actuates the appropriate keys on the GUI 185 to select data file options.
The embodiments described herein can be implemented in the context of a host operating system and one or more modules. Such modules may constitute hardware modules, such as, for example, electronic components of a computer system. Such modules may also constitute software modules. In the computer programming arts, a software “module” can be typically implemented as a collection of routines and data structures that performs particular tasks or implements a particular abstract data type.
Software modules generally can include instruction media storable within a memory location of an image processing apparatus and are typically composed of two parts. First, a software module may list the constants, data types, variables, routines, and the like that can be accessed by other modules or routines. Second, a software module can be configured as an implementation, which can be private (i.e., accessible perhaps only to the module), and that contains the source code that actually implements the routines or subroutines upon which the module is based. The term “module” as utilized herein can therefore generally refer to software modules or implementations thereof. Such modules can be utilized separately or together to form a program product that can be implemented through signal-bearing media, including transmission media and/or recordable media. Examples of such modules that can embody features of the present invention are a moving test target management module 205, a moving test target detection and identification module 215, an image/video feature extraction module 225, and a sensor characterization and diagnostics module 235, as depicted in
It is important to note that, although the embodiments are described in the context of a fully functional data-processing system (e.g., a computer system), those skilled in the art will appreciate that the mechanisms of the embodiments are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of signal-bearing media utilized to actually carry out the distribution. Examples of signal bearing media include, but are not limited to, recordable-type media such as media storage or CD-ROMs and transmission-type media such as analogue or digital communications links.
The interface 203 also serves to display traffic camera diagnostics, whereupon the user may supply additional inputs or terminate the session. In an embodiment, operating system 201 and interface 203 can be implemented in the context of a “Windows” system. It can be appreciated, of course, that other types of systems are possible. For example, rather than a traditional “Windows” system, other operating systems, such as, for example, Linux, may also be employed with respect to operating system 201 and interface 203. The software application 202 can include a moving test target management module 205, a moving test target detection and identification module 215, an image/video feature extraction module 225, and a sensor characterization and diagnostics module 235. The software application 202 can also be configured to communicate with the interface 203 and various components and other modules and features as described herein.
Note that the term module as utilized herein may refer to a collection of routines and data structures that perform a particular task or implement a particular abstract data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines, and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module. The term module may also simply refer to an application, such as a computer program designed to assist in the performance of a specific task, such as word processing, accounting, inventory management, music program scheduling, etc.
Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the disclosed method and system may be practiced with other computer system configurations, such as, for example, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, and the like.
The moving test target management module 205 ensures that relevant moving test targets will appear in the FOV of the traffic cameras of interest with sufficient frequency. Optionally, it can also provide 301, 302 the schedule and other information about test targets, test vehicles, etc. to the other modules. Interaction between this module and the others 215, 225, 235 is highly dependent on the capability of the other modules 215, 225, 235. At a minimum, moving test target management module 205 needs to determine where to send test targets and which test vehicles carry the test targets. The assignment can be completely random, based on the trip schedule of service representatives, or based on feedback from specific traffic camera(s). The test targets can be painted on the test vehicles, put on a trailer dragged by the test vehicles, or mounted on top of the test vehicles, etc.
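The assignment logic described above can be sketched as follows. All names and values are illustrative (not part of the disclosure), and the optional `feedback` argument stands in for the camera-driven feedback path described later in this specification:

```python
import random

def assign_targets(cameras, vehicles, targets, feedback=None, seed=None):
    """Pair each traffic camera of interest with a test vehicle and a test
    target. If a camera has requested a specific target type (feedback),
    honor it; otherwise choose randomly, as the disclosure permits."""
    rng = random.Random(seed)
    schedule = {}
    for cam in cameras:
        wanted = (feedback or {}).get(cam)
        target = wanted if wanted in targets else rng.choice(targets)
        schedule[cam] = {"vehicle": rng.choice(vehicles), "target": target}
    return schedule

# Hypothetical single-camera example: the camera has asked for a checkerboard.
plan = assign_targets(["cam_12"], ["ABC1234"],
                      ["line_pattern", "checkerboard"],
                      feedback={"cam_12": "checkerboard"}, seed=0)
```

A trip-schedule-driven or fully random assignment would simply omit the `feedback` mapping.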
Continuing with
Through the identification of test vehicles that carry the test targets, one can recognize the presence of test targets in video frames using pattern matching or barcode reading. In this case, moving test target management module 205 needs to communicate 302 the test vehicle's collected information (e.g. license plate numbers) to the moving test target detection and identification module 215. Automated License Plate Recognition (“ALPR”) technology can be used to locate a test vehicle. A barcode can be used to identify the specific type of the moving test targets.
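A minimal sketch of the plate-matching step, assuming the management module has communicated a list of test-fleet plate numbers; the normalization shown is illustrative, and a production ALPR pipeline might add edit-distance tolerance for misreads:

```python
def is_test_vehicle(recognized_plate, test_fleet):
    """Return True if an ALPR-read plate matches a communicated test-vehicle
    plate after normalizing case and spacing (exact match only)."""
    norm = recognized_plate.replace(" ", "").upper()
    fleet = {p.replace(" ", "").upper() for p in test_fleet}
    return norm in fleet
```

Frames flagged this way would then be searched for the test target itself via pattern matching or barcode reading.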
Through direct detection and identification of the test targets (similarly using pattern matching, barcode reading etc.), one can characterize, monitor, assess, and/or diagnose a sensed traffic camera. In this case, the moving test target management module 205 does not need to communicate 302 the test vehicle's collected information.
Through a direct communication between test vehicles and the traffic cameras, one can characterize, monitor, assess, and/or diagnose a sensed traffic camera. For example, the test vehicle can send a direct signal to each traffic camera (preferably a smart camera) when it enters its FOV.
The image/video feature extraction module 225 extracts image and/or video features from the sensed moving test targets. The image and/or video features can be communicated 304 to the sensor characterization and diagnostics module 235 to characterize, monitor, assess, and/or diagnose the traffic cameras. The image/video feature extraction module 225 analyzes the test targets. The analysis is test-target dependent and application dependent. Analysis can include, for example, use of line patterns for measuring the modulation transfer function (MTF), sensor focus, and sensor color-plane registration, or use of a checkerboard for understanding change of geometry distortion as an indication that the camera FOV has moved, etc.
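As one illustrative sketch of such an analysis (not the disclosure's specific method), the contrast of a periodic line pattern can serve as a single-frequency MTF proxy: as the camera defocuses, the bright/dark modulation of the pattern collapses. The two profiles below are synthetic:

```python
def line_pattern_contrast(profile):
    """Michelson contrast, (Imax - Imin) / (Imax + Imin), of an intensity
    profile sampled across a periodic line pattern. Lower contrast at a
    fixed pattern frequency indicates lower MTF (e.g., defocus)."""
    hi, lo = max(profile), min(profile)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

sharp = [10, 200, 10, 200, 10, 200]    # well-focused frame (synthetic)
blurred = [80, 130, 80, 130, 80, 130]  # defocused frame (synthetic)
```

A full MTF measurement (e.g., a slanted-edge analysis) would replace this proxy in practice, but the per-trip scalar it yields is what the diagnostics module tracks.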
Sensor characterization and diagnostic module 235 can use the above-mentioned extracted image/video features for sensor characterization, health monitoring, and diagnostics. The analyses done by this module are test-target dependent and application dependent. For example, the sensor characterization and diagnostic module 235 can track the resulting MTF or image blur over time to diagnose and/or prognose sensor degradation in focus or change of focus. As another example, the sensor characterization and diagnostic module 235 can track the amount of change in the geometry distortion over time to discover any FOV changes of the sensor, etc.
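One simple way to track such a feature over time, shown here only as an illustrative sketch, is a least-squares trend over the per-trip MTF proxy; a persistently negative slope would suggest gradual focus degradation:

```python
def focus_trend(samples):
    """Least-squares slope of successive MTF-proxy measurements
    (x = trip index). A negative slope indicates declining focus quality."""
    n = len(samples)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(samples) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, samples))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

degrading = [0.90, 0.86, 0.81, 0.75]  # synthetic contrast readings over trips
```

A deployed module would likely add smoothing and an alert threshold, but the trend itself is the prognostic signal.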
Though in the above discussion a feed-forward communication is described from moving test target management module 205 to the other modules 215, 225, 235, it is possible to have a feedback communication. In a feedback communication system, the traffic camera sensor(s) can request a specific set of test targets for diagnostics based on current diagnostic results (such as 305 in
The moving test target management module 205 also gathers and communicates 301, 302 additional information, such as the test vehicle's travelling schedule (e.g., route and time), speed, where the test targets are mounted, etc., to the other modules 215, 225, 235. For example, the schedule information can help moving test target detection and identification module 215 narrow down the search range of videos if the ALPR system fails. As another example, knowing the test vehicle's travelling speed can help sensor characterization and diagnostic module 235 parse out the contribution of sensor optical blur versus object motion blur in the observed test target blur. Although one can derive vehicle speed from reference marks on the moving test target directly, having the additional information available upfront can simplify or speed up the analysis, or the information can be used for verification.
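The speed-based parsing can be sketched as follows, under two stated assumptions: the motion smear equals the distance travelled during the exposure divided by the ground sample distance, and the optical and motion blur widths combine in quadrature (roughly true for independent Gaussian-like kernels). All parameter values are hypothetical:

```python
import math

def motion_blur_px(speed_mps, exposure_s, metres_per_pixel):
    """Blur smear (in pixels) attributable to vehicle motion alone:
    distance travelled during the exposure over the ground sample distance."""
    return speed_mps * exposure_s / metres_per_pixel

def optical_blur_px(observed_px, speed_mps, exposure_s, metres_per_pixel):
    """Residual (sensor optical) blur after removing the motion component,
    assuming the two blur widths add in quadrature."""
    m = motion_blur_px(speed_mps, exposure_s, metres_per_pixel)
    return math.sqrt(max(0.0, observed_px ** 2 - m ** 2))
```

For example, a vehicle at 20 m/s with a 1 ms exposure and 0.01 m/pixel ground sampling contributes a 2-pixel motion smear, and any observed blur beyond that is attributed to the sensor optics.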
Motion correction to compensate for the distortion from the test vehicle's travelling speed in the FOV can be performed before the analysis in sensor characterization and diagnostic module 235. For example, one can use existing motion correction techniques in video processing prior to the extraction of image/video features in the image/video feature extraction module 225. As another example, one can simply build a speed-compensation look-up table by collecting data from moving test targets at different test vehicle speeds.
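A speed-compensation look-up table of the kind mentioned above might be sketched as follows; the calibration pairs are hypothetical, and linear interpolation between the bracketing calibration speeds is just one reasonable choice:

```python
import bisect

def build_lut(measurements):
    """Sort (speed, correction) pairs collected from calibration runs of the
    moving test target at several known test-vehicle speeds."""
    return sorted(measurements)

def correction_for(lut, speed):
    """Look up a blur correction, linearly interpolating between the two
    bracketing calibration speeds and clamping outside the measured range."""
    speeds = [s for s, _ in lut]
    i = bisect.bisect_left(speeds, speed)
    if i == 0:
        return lut[0][1]
    if i == len(lut):
        return lut[-1][1]
    (s0, c0), (s1, c1) = lut[i - 1], lut[i]
    return c0 + (c1 - c0) * (speed - s0) / (s1 - s0)

lut = build_lut([(10, 1.0), (30, 3.0), (20, 2.0)])  # hypothetical calibration
```

Clamping at the ends of the table avoids extrapolating beyond speeds actually driven during calibration.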
A reference point can first be arbitrarily specified (but kept the same once chosen) such that Tc(i0,j0)=(0,0,z0). The FOV is then estimated by feeding the four corners of the image plane, (1,1), (1,N), (M,N), and (M,1), into the current camera calibration map Tc. If this task is performed repeatedly over a period of time for each camera (or selected cameras) out in the field, the estimated FOV can be collected and logged each time to monitor the change of FOV for each identified camera.
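The corner-projection step can be sketched as below. The calibration map `tc` used in the example is a hypothetical stand-in (a simple 0.02 m-per-pixel scaling with the reference point at pixel (1,1), with the z coordinate dropped under a flat-road approximation), not the Tc of any particular deployment:

```python
def estimate_fov_corners(tc, m, n):
    """Project the four image-plane corners (1,1), (1,N), (M,N), (M,1)
    through the camera calibration map Tc to ground coordinates."""
    return [tc(1, 1), tc(1, n), tc(m, n), tc(m, 1)]

def fov_area(corners):
    """Shoelace area of the projected FOV quadrilateral; logging this (and
    the corner positions) per trip reveals a moved camera over time."""
    a = 0.0
    for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
        a += x0 * y1 - x1 * y0
    return abs(a) / 2.0

# Hypothetical planar calibration: 0.02 m per pixel, reference at (1,1).
tc = lambda i, j: ((j - 1) * 0.02, (i - 1) * 0.02)
corners = estimate_fov_corners(tc, 480, 640)
```

Comparing the logged corner positions (or the enclosed area) across trips flags a camera whose FOV has drifted.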
For example, a periodic line pattern or a set of sharp texts can be painted on a board and mounted on a hitch, just like that shown in
Based on the foregoing, it can be appreciated that a number of different embodiments, preferred and alternative are disclosed herein. For example, in one embodiment, a method for traffic camera diagnostics via strategic use of at least one moving test target associated with at least one test vehicle is disclosed. The method can include steps for: positioning the at least one moving test target in a field of view of a traffic camera to diagnose the traffic camera; detecting a presence of the at least one moving test target by the traffic camera; extracting features of the at least one moving test target to analyze the extracted features of the at least one moving test target; and analyzing the extracted features of the at least one moving test target to characterize, monitor, assess, or diagnose the traffic camera.
In other embodiments, the method can include a step for identifying the at least one moving test target via at least one of pattern matching, barcode reading of a segment of an image of the at least one moving test target, layout of the at least one moving test target, and appearance of a sub-target element. In another embodiment, the method can include a step for identifying the test vehicle via automatic license plate recognition. In yet another embodiment, the method can include steps for: communicating information collected by the test vehicle for the at least one moving test target; communicating a traveling schedule of the test vehicle to narrow down a search range of the visual data if automatic license plate recognition of the test vehicle fails; and communicating a traveling speed of the test vehicle to parse out a contribution of sensor optical blur versus object motion blur for an observed test target blur.
In other embodiments, analyzing the extracted visual features of the at least one moving test target further comprises using at least one of line patterns for measuring at least one of sensor modulation transfer function, sensor focus, and sensor color-plane registration, and use of a checkerboard for understanding change of geometry distortion for an indication that the field of view for the traffic camera has moved. In still other embodiments, analyzing the extracted visual features of the at least one moving test target further comprises tracking a resulting camera modulation transfer function or image blur over time, or tracking the amount of change in the geometry distortion over time, to diagnose or prognose sensor degradation of the traffic camera.
In another embodiment, the method can include a step for compensating for distortion from a traveling speed of the test vehicle wherein the test vehicle is located in the field of view of the traffic camera. The method can further include a step for requesting another test target for additional diagnostics based on current diagnostic results. In another embodiment, steps are provided for monitoring a change of the field of view by collecting and logging an estimated field of view and performing traffic camera calibration identification for all collected positions of field of view frames.
In certain embodiments, diagnosis of the traffic camera comprises at least one of detecting a change in field of view and measuring optical blur with a line test pattern design. In other embodiments, the at least one moving test target comprises at least one of a fixed test target, a test target selected from a pre-determined collection of a plurality of test targets, and a test target created from a collection of a plurality of test target sub-elements. In another embodiment, the at least one moving test target is selected based on at least one of a result of a previous traffic diagnostic trip, pre-knowledge about a specific site of a traffic camera of interest, and a specific goal of a particular trip, wherein the goal comprises at least one of measuring camera blur and diagnosing a change in field of view of the traffic camera of interest.
In another embodiment, a system for traffic camera diagnostics via strategic use of at least one moving test target associated with at least one test vehicle is disclosed. The system can include a processor, a data bus coupled to the processor, and a computer-usable storage medium storing computer code, the computer-usable storage medium being coupled to the data bus. The computer program code can include program instructions executable by the processor and configured to position the at least one moving test target in a field of view of a traffic camera to diagnose the traffic camera; detect a presence of the at least one moving test target by the traffic camera; extract features of the at least one moving test target to analyze the extracted features of the at least one moving test target; and analyze the extracted features of the at least one moving test target to characterize, monitor, assess, or diagnose the traffic camera.
In other embodiments, the system can include program instructions to: identify the at least one moving test target via at least one of pattern matching, barcode reading of a segment of an image of the at least one moving test target, layout of the at least one moving test target, and appearance of a sub-target element; identify the test vehicle via automatic license plate recognition; compensate for distortion from a traveling speed of the test vehicle wherein the test vehicle is located in the field of view of the traffic camera; monitor a change of the field of view by collecting and logging an estimated field of view; perform traffic camera calibration identification for all collected positions of field of view frames; diagnose the traffic camera via at least one of change in field of view and optical blur with a line test pattern design; and request another test target for additional diagnostics based on current diagnostic results.
In another embodiment, the system can include program instructions to: communicate information collected by the test vehicle for the at least one moving test target; communicate a traveling schedule of the test vehicle to narrow down a search range of the visual data if automatic license plate recognition of the test vehicle fails; and communicate a traveling speed of the test vehicle to parse out a contribution of sensor optical blur versus object motion blur for an observed test target blur.
In embodiments including analyzing the extracted visual features of the at least one moving test target, additional program instructions can be provided to use at least one of line patterns for measuring at least one of sensor modulation transfer function, sensor focus, sensor color-plane registration, use of checkerboard for understanding change of geometry distortion for an indication that the field of view for the traffic camera moved; and track a resulting camera modulation transfer function or image blur over time to track the amount of changes in the geometry distortion over time to diagnose or prognose sensor degradation of the traffic camera.
In other embodiments, the at least one moving test target comprises at least one of a fixed test target, a test target selected from a pre-determined collection of a plurality of test targets, and a test target created from a collection of a plurality of test target sub-elements. In yet another embodiment, the at least one moving test target is selected based on at least one of a result of a previous traffic diagnostic trip, pre-knowledge about a specific site of a traffic camera of interest, and a specific goal of a particular trip, wherein the goal comprises at least one of camera blur and diagnosing a change in field of view of the traffic camera of interest.
In another embodiment, a computer-usable tangible storage device storing computer program code, the computer program code comprising program instructions executable by a processor for traffic camera diagnostics via strategic use of at least one moving test target associated with at least one test vehicle is disclosed. The computer program code can include program instructions executable by a processor to: position the at least one moving test target in a field of view of a traffic camera to diagnose the traffic camera; detect a presence of the at least one moving test target by the traffic camera; extract features of the at least one moving test target to analyze the extracted features of the at least one moving test target; and analyze the extracted features of the at least one moving test target to characterize, monitor, assess, or diagnose the traffic camera.
In some embodiments, the computer-usable tangible storage device can have program instructions to: identify the at least one moving test target via at least one of pattern matching, barcode reading of a segment of an image of the at least one moving test target, layout of the at least one moving test target, and appearance of a sub-target element; identify the test vehicle via automatic license plate recognition; compensate for distortion from a traveling speed of the test vehicle wherein the test vehicle is located in the field of view of the traffic camera; monitor a change of the field of view by collecting and logging an estimated field of view; perform traffic camera calibration identification for all collected positions of field of view frames; diagnose the traffic camera via at least one of change in field of view and optical blur with a line test pattern design; request another test target for additional diagnostics based on current diagnostic results; communicate information collected by the test vehicle for the at least one moving test target; communicate a traveling schedule of the test vehicle to narrow down a search range of the visual data if automatic license plate recognition of the test vehicle fails; communicate a traveling speed of the test vehicle to parse out a contribution of sensor optical blur versus object motion blur for an observed test target blur; use at least one of line patterns for measuring at least one of sensor modulation transfer function, sensor focus, and sensor color-plane registration, and use of a checkerboard for understanding change of geometry distortion for an indication that the field of view for the traffic camera moved; and track a resulting camera modulation transfer function or image blur over time, or the amount of change in the geometry distortion over time, to diagnose or prognose sensor degradation of the traffic camera.
In yet other embodiments, the at least one moving test target can comprise at least one of a fixed test target, a test target selected from a pre-determined collection of a plurality of test targets, and a test target created from a collection of a plurality of test target sub-elements. In another embodiment, the at least one moving test target is selected based on at least one of a result of a previous traffic diagnostic trip, pre-knowledge about a specific site of a traffic camera of interest, and a specific goal of a particular trip, wherein the goal comprises at least one of camera blur and diagnosing a change in field of view of the traffic camera of interest.
It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Furthermore, various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.