This application claims priority pursuant to 35 U.S.C. 119(a) to Indian Application No. 202211027975, filed May 16, 2022, which application is incorporated herein by reference in its entirety.
The present disclosure relates generally to camera-based monitoring systems, and more particularly to methods and systems for automatically configuring camera settings of such camera-based monitoring systems.
Camera-based monitoring systems are often used to monitor a monitored region, and to identify certain objects and/or certain events that occur in the monitored region. In one example, a surveillance system often includes one or more cameras configured to monitor a surveilled region. The surveillance system may identify certain objects and/or certain events that occur in the surveilled region. In another example, a traffic monitoring system may monitor vehicle traffic along a roadway or the like. In some traffic monitoring systems, a License Plate Recognition (LPR) algorithm is used to process images captured by one or more cameras of the traffic monitoring system to identify license plates of vehicles as they travel along the roadway.
In many camera-based monitoring systems, the quality of the images captured by the cameras can be important to help identify certain objects and/or certain events in the monitored region. The quality of the images is often dependent upon the interplay between the camera settings, such as shutter speed, shutter aperture, focus, pan, tilt, and zoom, the conditions in the monitored region such as available light, and characteristics of the objects such as object type, object distance, object size, and object speed. What would be desirable are methods and systems for automatically configuring camera settings of a camera-based monitoring system to obtain higher quality images.
The present disclosure relates generally to camera-based monitoring systems, and more particularly to methods and systems for automatically configuring camera settings of such camera-based monitoring systems. In one example, an illustrative system may include a camera having a field of view, a radar sensor having a field of view that at least partially overlaps the field of view of the camera, and a controller operatively coupled to the camera and the radar sensor. In some cases, the controller is configured to receive one or more signals from the radar sensor, identify an object of interest moving toward the camera based at least in part on the one or more signals from the radar sensor, determine a speed of travel of the object of interest based at least in part on the one or more signals from the radar sensor, determine a projected track of the object of interest, and determine a projected image capture window within the field of view of the camera at which the object of interest is projected to arrive based at least in part on the determined speed of travel of the object of interest and the projected track of the object of interest. In some cases, the projected image capture window corresponds to less than all of the field of view of the camera.
In some cases, the controller sends one or more camera setting commands to the camera, including one or more camera setting commands that set one or more of: a shutter speed camera setting based at least in part on the speed of travel of the object of interest, a focus camera setting to focus the camera on the projected image capture window, a zoom camera setting to zoom the camera to the projected image capture window, a pan camera setting to pan the camera to the projected image capture window, and a tilt camera setting to tilt the camera to the projected image capture window. The controller may further send an image capture command to the camera to cause the camera to capture an image of the projected image capture window. In some cases, the controller may localize a region of the projected image capture window that corresponds to part or all of the object of interest (e.g. license plate of a car) and set one or more image encoder parameters for that localized region to produce a higher quality image in that region. In some cases, the controller may change the encoder quantization value, which influences the degree of compression of an image or region of an image, thus affecting the quality of the image in the region.
Another example is found in a system that includes a camera having an operational range, a radar sensor having an operational range, wherein the operational range of the radar sensor is greater than the operational range of the camera, and a controller operatively coupled to the camera and the radar sensor. In some cases, the controller is configured to identify an object of interest within the operational range of the radar sensor using an output from the radar sensor, determine one or more motion parameters of the object of interest, set one or more camera settings for the camera based on the one or more motion parameters of the object of interest, and after setting the one or more camera settings for the camera, cause the camera to capture an image of the object of interest.
Another example is found in a method for operating a camera that includes identifying an object of interest using a radar sensor, wherein the object of interest is represented as a point cloud, tracking a position of the object of interest, and determining a projected position of the object of interest, wherein the projected position is within a field of view of a camera. In some cases, the method further includes determining a projected image capture window that corresponds to less than all of the field of view of the camera, wherein the projected image capture window corresponds to the projected position of the object of interest, setting one or more camera settings of the camera for capturing an image of the object of interest in the projected image capture window, and capturing an image of the object of interest when at least part of the object of interest is at the projected position and in the projected image capture window.
The preceding summary is provided to facilitate an understanding of some of the innovative features unique to the present disclosure and is not intended to be a full description. A full appreciation of the disclosure can be gained by taking the entire specification, claims, figures, and abstract as a whole.
The disclosure may be more completely understood in consideration of the following description of various examples in connection with the accompanying drawings, in which:
While the disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the disclosure to the particular examples described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
The following description should be read with reference to the drawings, in which like elements in different drawings are numbered in like fashion. The drawings, which are not necessarily to scale, depict examples that are not intended to limit the scope of the disclosure. Although examples are illustrated for the various elements, those skilled in the art will recognize that many of the examples provided have suitable alternatives that may be utilized.
All numbers are herein assumed to be modified by the term “about”, unless the content clearly dictates otherwise. The recitation of numerical ranges by endpoints includes all numbers subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.8, 4, and 5).
As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include the plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
It is noted that references in the specification to “an embodiment”, “some embodiments”, “illustrative embodiment”, “other embodiments”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is contemplated that the feature, structure, or characteristic may be applied to other embodiments whether or not explicitly described unless clearly stated to the contrary.
It is contemplated that the camera 12 may have a network address, which identifies a specific addressable location for that camera 12 on a network. The network may be a wired network, and in some cases, the network may be a wireless network communicating using any of a variety of different wireless communication protocols.
The illustrative system 10 further includes a radar sensor 14. In some cases, the radar sensor 14 may be contained within the housing of the camera 12, as indicated by the dashed lines, but this is not required. In some cases, the radar sensor 14 is separate from the camera 12. The radar sensor 14 may include a millimeter wave (mmWave) antenna 15 that may determine a Field of View (FOV) and an operational range, which together define, at least in part, the operational area in which the radar sensor 14 can reliably detect and/or identify objects of interest for the particular application at hand. The FOV of the radar sensor 14 may define a horizontal FOV for the radar sensor 14, and in some cases, may define a distance over which the radar sensor 14 may reliably detect and/or identify objects of interest for the particular application at hand. In some cases, the radar sensor 14 may have an operational range of 100-250 meters for detecting vehicles along a roadway. In some cases, the radar sensor 14 may have an operational range of 200-250 meters, or an operational range of 100-180 meters, or an operational range of 100-150 meters. These are just examples. In some cases, as described herein, the FOV of the radar sensor 14 at least partially overlaps the FOV of the camera 12. In some cases, the operational range of the FOV of the radar sensor 14 is greater than the operational range of the FOV of the camera 12 for detecting and/or identifying objects when applied to a particular application at hand. In some cases, the FOV of the radar sensor 14 may include a horizontal FOV that corresponds generally to a horizontal FOV of the camera 12, but this is not required.
The radar sensor 14 may transmit a radio wave and receive a reflection from an object of interest within the FOV. The radar sensor 14 may be used to detect the object of interest, and may also detect an angular position and distance of the object of interest relative to the radar sensor 14. The radar sensor 14 may also be used to detect a speed of travel of the object of interest. In some cases, the radar sensor 14 may be used to track the object of interest over time. Some example radar sensors may include Texas Instruments™ FMCW radar, imaging radar, light detection and ranging (Lidar), micro-doppler signature radar, or any other suitable radar sensors.
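By way of a non-limiting illustration, the relationship between a measured Doppler shift and the radial speed of an object of interest, and between a measured range/azimuth pair and a ground-plane position, may be sketched as follows. The helper names and the example 77 GHz carrier are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch: estimating radial speed and ground-plane position
# from radar measurements. Assumes the radar reports a Doppler shift (Hz),
# a carrier frequency (Hz), a range (m), and an azimuth angle (radians).
import math

SPEED_OF_LIGHT = 3.0e8  # meters per second

def radial_speed(f_doppler_hz: float, f_carrier_hz: float) -> float:
    """Radial speed (m/s) from the classic Doppler relation v = c * fd / (2 * fc)."""
    return SPEED_OF_LIGHT * f_doppler_hz / (2.0 * f_carrier_hz)

def ground_plane_position(range_m: float, azimuth_rad: float) -> tuple:
    """Cartesian (x, y) position of the object on the ground plane, radar at origin."""
    return (range_m * math.sin(azimuth_rad), range_m * math.cos(azimuth_rad))

# Example: a 77 GHz radar measuring a 17.1 kHz Doppler shift corresponds to
# roughly 33 m/s (about 120 kph) of radial speed.
if __name__ == "__main__":
    v = radial_speed(17.1e3, 77e9)
    x, y = ground_plane_position(150.0, math.radians(10.0))
    print(f"speed ~ {v:.1f} m/s, position ~ ({x:.1f}, {y:.1f}) m")
```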
The illustrative system 10 of
The illustrative system 10 of
In some cases, the controller 16 may be configured to classify the objects of interest into one of a plurality of classifications. The plurality of classifications may include a vehicle (e.g., a car, a van, a truck, a semi-truck, a motorcycle, a moped, and the like), a bicycle, a person, or the like. In some cases, more than one object of interest may be identified. For example, two vehicles may be identified, or a bicycle and a vehicle may be identified, or a person walking on the side of a road and a vehicle may be identified. These are just examples.
In some cases, the controller 16 may be configured to determine a projected future position of the object of interest based, at least in part, on the projected track of the object of interest. The controller 16 may determine a projected image capture window within the FOV of the camera 12 at which the object of interest is projected to arrive based, at least in part, on the determined speed of travel of the object of interest and the projected track of the object of interest. The projected image capture window may correspond to less than all of the FOV of the camera 12, but this is not required.
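As a purely illustrative sketch, the projection step may be treated as simple dead reckoning under a constant-velocity assumption. The Track fields, the rectangular FOV model, and all numeric values below are assumptions for illustration only.

```python
# Illustrative sketch: dead-reckoning a projected capture position under a
# constant-velocity assumption, then testing whether the projected position
# falls inside a (simplified) rectangular camera FOV on the ground plane.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # ground-plane position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float

def project(track: Track, dt_s: float) -> tuple:
    """Projected (x, y) position after dt_s seconds along the current track."""
    return (track.x + track.vx * dt_s, track.y + track.vy * dt_s)

def in_camera_fov(pos: tuple, fov_min: tuple, fov_max: tuple) -> bool:
    """True if the projected position lies inside the rectangular camera FOV."""
    return fov_min[0] <= pos[0] <= fov_max[0] and fov_min[1] <= pos[1] <= fov_max[1]

if __name__ == "__main__":
    vehicle = Track(x=0.0, y=180.0, vx=0.0, vy=-30.0)  # approaching at 30 m/s
    for dt in (1.0, 2.0, 3.0, 4.0):
        pos = project(vehicle, dt)
        if in_camera_fov(pos, fov_min=(-20.0, 40.0), fov_max=(20.0, 120.0)):
            print(f"projected capture position {pos} reached at t+{dt:.0f}s")
            break
```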
The controller 16 may include a memory 17. In some cases, the memory 17 may be configured to store relative FOV information of the camera 12 relative to the FOV of the radar sensor 14. The controller 16 may further include one or more camera settings 19. The one or more camera settings 19 may include, for example, one or more of a shutter speed camera setting, an aperture camera setting, a focus camera setting, a zoom camera setting, a pan camera setting, and a tilt camera setting. The controller 16 may be configured to send one or more camera setting 19 commands to the camera 12, and after the camera settings 19 have been set for the camera 12, the controller 16 may send an image capture command to the camera 12 to cause the camera 12 to capture an image of the projected image capture window. In some cases, the controller 16 may be configured to cause the camera 12 to capture an image of the object of interest when the object of interest reaches the projected future position. In some cases, the controller 16 may further localize the object of interest or part of the object of interest (e.g. license plate), and may set image encoder parameters to achieve a higher-quality image for that region of the image. In some cases, the controller 16 may adjust an encoder quantization value, which may impact a degree of compression of the image or part of the image of the projected image capture window, thereby creating a higher-quality image, but this is not required. In some cases, in post-processing after the image is captured, the text/characters in the license plate can be improved through well-known image enhancement techniques, when desired.
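For illustration only, region-of-interest (ROI) encoder tuning of the kind described above may be sketched as follows. Actual encoders expose vendor-specific ROI/quantization controls; the Encoder class below is a hypothetical stand-in, where a lower quantization parameter (QP) corresponds to less compression and a higher quality image in the region.

```python
# Hypothetical sketch of ROI-based encoder tuning. The Encoder class and its
# methods are illustrative stand-ins for vendor-specific ROI/QP controls.
from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    width: int
    height: int

class Encoder:
    def __init__(self, base_qp: int = 32):
        self.base_qp = base_qp          # quantization for the rest of the frame
        self.roi_qp_overrides = []      # (Region, qp) pairs

    def set_roi_quality(self, region: Region, qp: int) -> None:
        """Request a lower quantization value (higher quality) for one region."""
        self.roi_qp_overrides.append((region, qp))

encoder = Encoder(base_qp=32)
plate_region = Region(x=812, y=540, width=240, height=80)  # localized plate
encoder.set_roi_quality(plate_region, qp=18)  # compress the plate region less
```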
In some cases, the camera settings 19 may be determined using one or more motion parameters of the detected objects of interest, one or more of the radar signatures of the detected objects of interest and/or one or more classifications of the detected objects of interest. For example, the camera settings 19 may be based, at least in part, on the speed of travel of an object of interest detected in the FOV of the camera 12. In some cases, the shutter speed camera setting may have a linear correlation with the speed of travel of the object of interest. For example, the faster the speed of travel, the faster the shutter speed, which creates a shorter exposure of the camera 12, thereby reducing blur in the resulting image. To help compensate for the shorter exposure, the aperture camera setting may be increased. In some cases, the aperture camera setting may be based, at least in part, on the shutter speed camera setting and ambient lighting conditions. For example, when the shutter speed camera setting is set to a faster speed, the aperture may be set to a wider aperture to allow more light to hit the image sensor within the camera 12. In some cases, adjusting the aperture setting may be accomplished by adjusting an exposure level setting of the image sensor of the camera 12, rather than changing a physical aperture size of the camera 12.
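As one non-limiting possibility, the linear correlation described above may be realized by budgeting the motion blur: the exposure time is capped so the object moves no more than a fixed number of pixels during the exposure, and the aperture is widened to compensate for the shorter exposure. The blur budget and pixels-per-meter scale in the sketch below are illustrative assumptions.

```python
# Illustrative sketch: deriving shutter and aperture settings from object
# speed. The shutter time is capped so motion blur stays under a pixel
# budget; the aperture is widened to keep the total light constant
# (exposure is proportional to t / N^2 for f-number N).
import math

def shutter_time_s(speed_mps: float, pixels_per_meter: float,
                   max_blur_px: float = 2.0) -> float:
    """Longest exposure (s) keeping motion blur under max_blur_px pixels."""
    blur_px_per_s = speed_mps * pixels_per_meter
    return max_blur_px / blur_px_per_s

def aperture_f_number(base_f: float, base_shutter_s: float,
                      shutter_s: float) -> float:
    """f-number that offsets a changed exposure time: N2 = N1 * sqrt(t2/t1)."""
    return base_f * math.sqrt(shutter_s / base_shutter_s)

if __name__ == "__main__":
    t = shutter_time_s(speed_mps=33.3, pixels_per_meter=30.0)  # ~120 kph
    f = aperture_f_number(base_f=5.6, base_shutter_s=1 / 250, shutter_s=t)
    print(f"shutter ~ 1/{1 / t:.0f} s, aperture ~ f/{f:.1f}")  # ~1/500 s, ~f/4.0
```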
In some cases, the shutter speed camera setting and the aperture camera setting may be based, at least in part, on the time of day, the current weather conditions and/or current lighting conditions. For example, when there is more daylight (e.g., on a bright, sunny day at noon) the shutter speed may be faster and the aperture may be narrower than at a time of day with less light (e.g., at midnight when it is dark, or on a cloudy day). These are just examples.
In some cases, the controller 16 may be configured to set a focus camera setting to focus the camera 12 on the projected image capture window. In other cases, an autofocus feature of the camera 12 may be used to focus the camera on the object as the object reaches the projected image capture window. In some cases, the controller 16 may set a zoom camera setting to zoom the camera 12 to the projected image capture window. In some cases, the controller 16 may set a pan camera setting and a tilt camera setting to pan and tilt the camera 12 to capture the projected image capture window.
In some cases, the object of interest may be a vehicle traveling along a roadway, and the projected image capture window may include a license plate region of the vehicle when the vehicle reaches the projected image capture window. In this case, the controller 16 may send a camera setting command to the camera 12 to pan and tilt the camera 12 toward the projected image capture window before the vehicle reaches the projected image capture window, focus the camera 12 on the projected image capture window and zoom the camera 12 on the projected image capture window to enhance the image quality at or around the license plate of the vehicle. The controller 16 may send an image capture command to the camera 12 to capture an image of the license plate of the vehicle when the vehicle reaches the projected image capture window.
The controller 16 may be configured to initially identify an object of interest as a point cloud cluster from the signals received from the radar sensor 14. The position (e.g. an angular position and distance) of the object of interest may be determined from the point cloud cluster. The position of the object of interest may be expressed on a Cartesian coordinate ground plane, wherein the position of the object of interest is viewed from an overhead perspective. The controller 16 may be configured to determine a bounding box for the object of interest based, at least in part, on the point cloud. In such cases, as shown in
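By way of a non-limiting illustration, reducing a point cloud cluster to a position and a bounding box may be sketched as follows, assuming the radar sensor delivers each cluster as a list of (x, y) ground-plane points.

```python
# Illustrative sketch: deriving a centroid position and an axis-aligned
# bounding box from a radar point cloud cluster on the ground plane.
def cluster_position(points):
    """Centroid (x, y) of the cluster, used as the object's position."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def bounding_box(points):
    """Axis-aligned (min_x, min_y, max_x, max_y) box around the cluster."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

cluster = [(1.2, 98.5), (2.9, 99.1), (1.8, 101.7), (2.4, 100.2)]
print(cluster_position(cluster))  # -> (2.075, 99.875)
print(bounding_box(cluster))      # -> (1.2, 98.5, 2.9, 101.7)
```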
As shown in the example in
In some cases, a controller (e.g., controller 16) may be operatively coupled to the radar sensor and may include software that is configured to classify the object(s) of interest, as referenced by block 115. For example, the controller may be configured to receive signals from the radar sensor indicating the presence of the object(s) of interest within the operational range of the radar sensor, and the controller may determine the strength of the signals received by the radar sensor, as well as a speed of travel of the object(s) of interest, and/or a size of the point cloud cluster. In some cases, the speed of travel may indicate the type of object(s) of interest. For example, a person walking or riding a bicycle is not able to travel at a speed of 120 kph, so such a speed would indicate that the object of interest is likely a moving vehicle. In some cases, the strength of the signal may indicate a type of material present within the object(s) of interest. For example, the radar sensor may receive a strong signal from a metal object, such as a vehicle, while an object such as an article of clothing on a person may produce a weaker signal. Thus, using the strength of the signal, the speed of travel, and the size of the point cloud cluster, the controller may classify the object(s) of interest. In one example, the track(s) may be classified as one of a vehicle, a bicycle, a person, or the like.
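As a purely illustrative sketch, the classification described above may be expressed as rule-based thresholding of the radar-derived features. The thresholds below are placeholders that would be tuned for a particular deployment, not values from the disclosure.

```python
# Illustrative sketch: rule-based classification of a radar track from
# signal strength, speed of travel, and point cloud cluster size.
# All thresholds are illustrative placeholders.
def classify_track(signal_strength_db: float, speed_kph: float,
                   cluster_size: int) -> str:
    if speed_kph > 45.0:
        return "vehicle"  # too fast for a person or a typical bicycle
    if signal_strength_db > -40.0 and cluster_size > 30:
        return "vehicle"  # strong metallic return over a large cluster
    if 8.0 < speed_kph <= 45.0:
        return "bicycle"
    return "person"

print(classify_track(signal_strength_db=-35.0, speed_kph=120.0, cluster_size=60))
# -> "vehicle"
```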
As referenced at block 120, if a vehicle is determined to be present, the controller determines whether license plate recognition (LPR) is desired for any of the vehicles currently being tracked. In the example shown, if no LPR is desired for any of the vehicles currently being tracked, the method 100 does not proceed to block 130 but instead returns to block 105. If LPR is desired for any of the vehicles currently being tracked, the controller determines whether LPR has already been performed on all vehicles being tracked, as referenced by block 125. If the controller determines that LPR remains to be performed for at least one of the vehicles currently being tracked, the method moves on to block 130, where the controller calculates and sets the camera settings.
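For illustration only, blocks 120 through 130 may be condensed into a single pass over the active tracks, as in the following sketch. The VehicleTrack fields and helper callables are illustrative stubs rather than a definitive implementation of the flow chart.

```python
# Illustrative sketch condensing blocks 120-130: for each tracked vehicle
# that still needs license plate recognition, calculate and apply camera
# settings, then capture an image.
from dataclasses import dataclass

@dataclass
class VehicleTrack:
    kind: str            # "vehicle", "bicycle", "person", ...
    speed_kph: float
    lpr_desired: bool
    lpr_done: bool = False

def lpr_pass(tracks, apply_settings, capture):
    """One pass of blocks 120-130: find vehicles still awaiting LPR,
    set camera settings for each, then capture."""
    pending = [t for t in tracks
               if t.kind == "vehicle" and t.lpr_desired and not t.lpr_done]
    for vehicle in pending:         # an empty list returns to radar polling
        apply_settings(vehicle)     # block 130: calculate and set settings
        capture(vehicle)
        vehicle.lpr_done = True

tracks = [VehicleTrack("vehicle", 110.0, lpr_desired=True),
          VehicleTrack("person", 5.0, lpr_desired=False)]
lpr_pass(tracks,
         apply_settings=lambda v: print(f"settings for {v.speed_kph} kph vehicle"),
         capture=lambda v: print("capture image"))
```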
As discussed with reference to
The controller may compute a bounding box for each vehicle being tracked using the point cloud cluster, as referenced by block 135. As shown in
In
In some cases, when the camera is not a pan-tilt camera, the projected ROI may be cropped and resized, such as by scaling the image up to an original image dimension, to fit the image captured by the camera, as referenced by block 165, and a focus may then be performed on the projected ROI, as referenced by block 170.
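By way of a non-limiting illustration, the crop-and-rescale step of blocks 165 and 170 for a non-pan-tilt camera may be sketched with OpenCV as follows; the frame size and ROI coordinates are illustrative placeholders.

```python
# Illustrative sketch of block 165 for a non-pan-tilt camera: crop the
# projected ROI out of the full frame and scale it back up to the original
# image dimensions. ROI coordinates are placeholders.
import cv2
import numpy as np

def crop_and_rescale(frame: np.ndarray, roi: tuple) -> np.ndarray:
    """Crop (x, y, w, h) from the frame, then resize to the frame's size."""
    x, y, w, h = roi
    cropped = frame[y:y + h, x:x + w]
    height, width = frame.shape[:2]
    return cv2.resize(cropped, (width, height), interpolation=cv2.INTER_CUBIC)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in captured image
roi = (800, 500, 320, 180)                         # projected ROI placeholder
enlarged = crop_and_rescale(frame, roi)
assert enlarged.shape == frame.shape
```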
The method 200 may further include the controller setting one or more camera settings of the camera for capturing an image of the object of interest in the projected image capture window, as referenced by block 225. The one or more camera settings may include one or more of a shutter speed camera setting, an aperture camera setting, a focus camera setting, and a zoom camera setting. In some cases, the one or more camera settings may include one or more of a pan camera setting and a tilt camera setting. The controller may capture an image of the object of interest when at least part of the object of interest is at the projected position and in the projected image capture window, as referenced by block 230.
The method 300 may further include the controller sending one or more camera setting commands to the camera. The one or more camera setting commands may be configured to set one or more of a shutter speed camera setting, wherein the shutter speed camera setting may be based at least in part on the speed of travel of the object of interest, a focus camera setting to focus the camera on the projected image capture window, a zoom camera setting to zoom the camera to the projected image capture window, a pan camera setting to pan the camera to the projected image capture window, and a tilt camera setting to tilt the camera to the projected image capture window, as referenced by block 330. The controller may then be configured to send an image capture command to the camera to cause the camera to capture an image of the projected image capture window, as referenced by block 335.
Having thus described several illustrative embodiments of the present disclosure, those of skill in the art will readily appreciate that yet other embodiments may be made and used within the scope of the claims hereto attached. It will be understood, however, that this disclosure is, in many respects, only illustrative. Changes may be made in details, particularly in matters of shape, size, arrangement of parts, and exclusion and order of steps, without exceeding the scope of the disclosure. The disclosure's scope is, of course, defined in the language in which the appended claims are expressed.
Number | Date | Country | Kind |
---|---|---|---
202211027975 | May 2022 | IN | national |