The present invention relates generally to imaging and, more specifically, to a non-linear lens constructed to optically remap pixels of an image sensor from an area outside a desired zone to within the desired zone to form a desired pixel distribution within a captured image of the desired zone.
Many cameras have an X-Y aspect ratio that describes the relationship between the width (X) and the height (Y) of the image. Each aspect ratio corresponds to a specific resolution with a uniform distribution of pixels, each pixel having a fixed relationship between its width and its height. Typically, a camera sensor has a uniform pixel distribution and an aspect ratio that is defined by the total number of pixels in the X-Y plane of the camera sensor. The lens distributes the incident light to the pixels of the camera sensor to capture an image, and a processor processes the captured image to form an image that has a field of view with a given aspect ratio (e.g., the 4:3 standard, the 16:9 standard, etc.). The field of view typically encompasses or includes all of the area in front of the camera, with objects closer to the camera appearing larger than objects farther from the camera. The depth of field of existing cameras defines the area of the field of view that appears in focus.
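By way of a non-limiting illustration, the relationship between a sensor's pixel counts and its aspect ratio can be sketched in a few lines of Python; the pixel counts used here are hypothetical.

```python
from math import gcd

# Illustrative only: derive the aspect ratio of a sensor from its
# pixel counts in the X-Y plane (values here are hypothetical).
width_px, height_px = 1920, 1080   # total pixels along X and Y
d = gcd(width_px, height_px)
print(f"aspect ratio: {width_px // d}:{height_px // d}")  # -> 16:9
```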
Increasing the resolution of an image requires increasing the number of pixels, or the pixel density, of the camera sensor, which increases the cost to manufacture the camera. In some applications, such as surveillance cameras, only a relatively small portion of the image is the area of interest or the desired field of view. Sensors with a uniform distribution of pixels may therefore be unable to provide the desired resolution within the area of interest while, at the same time, many of the pixels that are processed are devoted to areas outside the area of interest and are not useful for identifying objects.
Existing cameras typically cannot optically separate out, or focus only on, the desired area 30 outlined by the frame 38 (e.g., the roadway) while eliminating or disregarding the undesired area(s) 34 (e.g., the tree line on each side of the roadway 32, as well as the skyline). Some cameras may use motion detection algorithms to block out portions of the image 24 from being monitored. Also, the uniform pixel distribution of the camera causes objects positioned relatively close to the camera, both inside and outside the frame 38, to appear with a higher resolution than is necessary to view objects in those areas. At the same time, objects relatively far from the camera appear with a lower resolution than is necessary to adequately view relevant objects.
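As a purely illustrative sketch of the software workaround described above (not of the disclosed lens), a captured image can be masked so that only a hand-drawn frame is monitored; the image size and frame coordinates below are hypothetical.

```python
import numpy as np

# Illustrative sketch of the conventional software workaround: pixels
# outside a hand-drawn frame are zeroed after capture, so they are
# still sensed and read out, just discarded downstream.
# The image and frame coordinates here are hypothetical.
image = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
mask = np.zeros(image.shape[:2], dtype=bool)
mask[400:900, 600:1300] = True            # frame bounding the roadway
masked = np.where(mask[..., None], image, 0)
print(f"pixels kept: {mask.mean():.1%}")  # fraction inside the frame
```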
The image 24, which is taken by positioning the existing camera relative to the roadway to monitor vehicles on the roadway 32 and identify specific vehicles via their respective license plates, cannot adequately identify both the vehicle 58 and the vehicle 62 in the same image 24.
In one aspect, the disclosure provides a system for capturing an image that includes a camera positioned relative to an area, a camera sensor having an X-Y plane and an aspect ratio defined by a quantity of pixels in the X-Y plane, and a lens constructed with a profile to optically capture a desired zone of the area. The lens remaps pixels associated with the area outside the desired zone to within the desired zone to form a desired pixel distribution within a captured image of the desired zone.
In another aspect, a method for capturing an image includes positioning a camera relative to an area, wherein the camera comprises a camera sensor having an X-Y plane and an aspect ratio defined by a quantity of pixels in the X-Y plane, and a lens constructed with a profile to optically capture a desired zone of the area. An image is captured using the camera sensor. The lens remaps pixels associated with the area outside the desired zone to within the desired zone to form a desired pixel distribution within the captured image of the desired zone.
Before any embodiments of the present invention are explained in detail, it should be understood that the invention is not limited in its application to the details of construction and the arrangement of components as set forth in the following description or as illustrated in the drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. It should be understood that the description of specific embodiments is not intended to limit the disclosure from covering all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
The non-linear lens may use non-linear optics to optically capture a distorted image of the desired area 130. For example, the non-linear lens 114 may be a non-spherical lens that is manufactured with optical clarity via 3D printing or another technology suitable for shaping the lens 114 for the application in which the system 100 will be used. The lens 114 may be constructed of one or more elements that cooperatively define the non-linear nature of the lens 114. For example, the lens 114 may be constructed of any combination of elements that are optically clear or reflective, that are optically liquid materials (e.g., to form a liquid lens), or that include microelectromechanical systems (MEMS). The unique profile of the non-linear lens 114 is designed for each specific application (e.g., monitoring a roadway at a tollbooth, monitoring a storage facility, an intersection, etc.) to remap or redistribute the pixels within the desired zone(s) 130 to form a desired pixel distribution within the desired zone 130 without wasting pixels on undesired zone(s) 134 outside of a frame 138.
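Although the remapping disclosed here is performed optically by the lens profile itself, its effect can be sketched as a digital analogue using a coordinate remap; the mapping function, file name, and use of OpenCV below are illustrative assumptions, not the lens design.

```python
import cv2
import numpy as np

# Digital analogue (illustrative only) of the optical remap: a
# coordinate map samples the top of the frame (far from the camera)
# sparsely in source space, spending more output pixels on distant
# regions. The mapping function is hypothetical, not the lens profile.
src = cv2.imread("roadway.jpg")          # hypothetical input image
h, w = src.shape[:2]
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
# Compress source sampling near y=0 so far-away rows expand in output.
map_y = (ys / h) ** 2 * h
remapped = cv2.remap(src, xs, map_y, interpolation=cv2.INTER_LINEAR)
```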
After the unique profile of the non-linear lens 114 is constructed for the specific application, the non-linear lens 114 is positioned relative to an area 122.
In some constructions, the non-linear lens 114 may arrange the pixels within the desired frame 138 in a plurality of rows whose pixel density increases as the desired zone 130 extends farther from the camera 110. For example, the desired frame 138 may include a first row of pixels 174 and a second row of pixels 178, each of which is positioned within the desired zone 130 of the area 122, but not necessarily in close proximity to the other. The non-linear lens 114 arranges the pixels such that the first row of pixels 174 has the same, or nearly the same, pixels per square foot as the second row of pixels 178. As a result, the first row of pixels 174, positioned relatively far from the camera 110, is expanded to form an adjusted first row of pixels 176 in the adjusted frame 140. At the same time, the second row of pixels 178 may be relatively unchanged (e.g., remain the same size, or be slightly reduced in size) after passing through the non-linear lens 114, or the pixels 178 may be adjusted to form an adjusted second row of pixels 180. Additionally, the pixels positioned beyond the second row of pixels 178 proximate an end 154 of the frame 138, and the pixels positioned between the first and second rows of pixels 174, 178, may be adjusted in the same manner by the lens 114. As a result, objects both in close proximity to and far away from the camera 110 and the lens 114 have relative clarity in the same image 124.
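A back-of-the-envelope calculation, assuming a simple pinhole model and hypothetical distances, suggests why the far row must be expanded: with a conventional lens, pixels per foot on the ground fall off roughly in proportion to distance.

```python
# Back-of-the-envelope sketch (pinhole assumption, hypothetical numbers):
# equalizing the near and far rows' pixels per foot requires expanding
# the far row by approximately the ratio of the two distances.
near_d, far_d = 20.0, 200.0        # feet from camera, hypothetical
near_ppf = 40.0                    # pixels per foot at the near row
far_ppf = near_ppf * near_d / far_d
scale = near_ppf / far_ppf         # expansion the remap must apply
print(f"far row: {far_ppf:.0f} px/ft; required expansion: {scale:.0f}x")
```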
Expansion of the frame 138 to match the aspect ratio of the camera sensor 118 spreads the pixels of the desired zone 130 to produce the adjusted image 124, which represents a slightly distorted view of the zone 130. As a result, objects positioned relatively far from the camera 110 are adjusted to be the same size, or approximately the same size, as objects positioned relatively close to the camera 110. Because the frame 138 is captured with a greater pixel density relatively far from the camera 110 than relatively close to the camera 110, the adjusted image 124 has a uniform final resolution.
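As a digital analogue only (the disclosure achieves this optically), the expansion of a trapezoidal frame to fill the sensor's aspect ratio resembles a perspective warp; the corner coordinates below are hypothetical.

```python
import cv2
import numpy as np

# Digital analogue (illustrative): stretch a trapezoidal frame bounding
# the roadway so it fills the sensor's full aspect ratio. Corner
# coordinates are hypothetical; the disclosure does this optically.
src = cv2.imread("roadway.jpg")
frame = np.float32([[850, 300], [1070, 300],      # far edge of frame
                    [300, 1000], [1620, 1000]])   # near edge of frame
full = np.float32([[0, 0], [1919, 0], [0, 1079], [1919, 1079]])
M = cv2.getPerspectiveTransform(frame, full)
adjusted = cv2.warpPerspective(src, M, (1920, 1080))
```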
In one example, after the pixels receive light passed through the non-linear lens 114, the pixels may be oriented in the wrong direction or distorted. As a result, the processor 120 may determine the correct positioning of the pixels to produce the final image for the processing system or a user. In some constructions, the system 100 may redistribute the pixels within the adjusted image 124 to further increase the resolution of specific areas of the desired zone 130 in the adjusted image 124. In another example, the processor 120 may include imaging software, such as OCR or machine vision, that processes the pixels on the camera sensor 118 to analyze and store the desired data from the adjusted image 124. In some constructions, the processor 120 may transmit the adjusted image 124 to a display over a communication network (e.g., wired, or wireless such as Bluetooth®, Wi-Fi™, etc.) for use by personnel assigned to monitor the desired zone 130.
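A minimal sketch of such post-processing, assuming an OpenCV rotation to correct orientation and the off-the-shelf Tesseract engine for OCR (both assumptions for illustration, not elements of the disclosure), might look like the following.

```python
import cv2
import pytesseract  # assumes the Tesseract OCR engine is installed

# Illustrative post-processing: re-orient the optically remapped image,
# then hand it to off-the-shelf OCR to read license plates. The file
# name, rotation direction, and choice of Tesseract are assumptions.
adjusted = cv2.imread("adjusted.jpg")
upright = cv2.rotate(adjusted, cv2.ROTATE_180)   # undo lens inversion
text = pytesseract.image_to_string(upright)
print(text)
```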
The camera 110 may be used to monitor objects (e.g., vehicles, people, people counting, safety, security, etc.) that are in the desired zone 130 of the field of view for many different applications. The desired zone 130 is predetermined by the application to which the system will be applied. As a result, the desired zone 130 and the construction of the non-linear lens 114 may vary based on the placement of the camera 110 or on the desired zone 130 of the area 122 the camera 110 is monitoring. While the desired zone 130 shown in the drawings is a continuous zone, it will be appreciated that the desired zone may include disjointed or unconnected portions of the area(s) to be monitored.
The non-linear lens 114 is constructed to optically capture and monitor the roadway 132 without capturing the undesired zone(s) 134 (in this example, the tree lines and skyline). The desired zone 130 is bounded by a frame 138 that includes a continuous section of the roadway 132 with the first vehicle 158 having the first license plate 166 and the second vehicle 162 having the second license plate 170.
After the non-linear lens 114 is constructed and positioned relative to the roadway 132 and the camera 110, the non-linear lens 114 expands the frame 138 (e.g., vertically, horizontally, or both vertically and horizontally) and, consequently, the pixels that are positioned within the frame 138, to define the adjusted image 124 that matches or fills the aspect ratio of the camera sensor 118.
While the system 100 is described and illustrated with regard to monitoring vehicles 158, 162 and their respective license plates 166, 170, it will be appreciated that the system can monitor any number of vehicles and/or license plates in the desired frame 138. Also, the system is applicable to various applications and should not be construed to apply only to vehicular or roadway monitoring.
The angular view 168 defines the extents of the image that are visible to the imaging system 100. For example, the angular view 168 includes a near area 170, a deep area 172, and a middle area 174 positioned between the near area 170 and the deep area 172. The angular view 168 is bounded by the surface 166 and a set upper limit (e.g., illustrated with line 176) that converges at an angular vantage point 178 in the deep area 172 of the image. The angle of the imaging system 100 relative to the surface 166 and the upper limit 176 of the imaging system 100 may be adjusted to encompass a larger or smaller area of the surface 166.
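Assuming a flat surface and a simple pinhole model (with hypothetical numbers throughout), the near and deep extents of the angular view follow from the camera height and the depression angles of the view's lower and upper limits.

```python
from math import radians, tan

# Flat-ground sketch (hypothetical numbers): the near and deep extents
# of the angular view follow from the camera mounting height and the
# depression angles of the view's lower and upper limits.
height_ft = 30.0                  # camera mounting height
lower_deg, upper_deg = 40.0, 5.0  # depression angles below horizontal
near_ft = height_ft / tan(radians(lower_deg))
deep_ft = height_ft / tan(radians(upper_deg))
print(f"ground coverage: {near_ft:.0f} ft to {deep_ft:.0f} ft")
```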
In one example, after the pixels pass through the non-linear lens 114 (illustrated by dashed lines), the adjusted image 124 may be oriented in the wrong direction or distorted. As a result, the processor 120 may determine and correct the positioning of the pixels to produce the adjusted image 124 for the user of the processing system. This can include the system 100 redistributing the pixels within the adjusted image 124 to increase the resolution of specific areas of the adjusted image 124. In some constructions, the processor 120 may transmit the adjusted image 124 to a display over a communication network (e.g., wired, or wireless such as Bluetooth®, Wi-Fi™, etc.) for use by personnel assigned to monitor the desired zone 130.
The imaging system 100 may be used to monitor objects (e.g., vehicles, people, people counting, safety, security, etc.) that are in the desired frame 138 of the field of view for many different applications. The desired zone 130 is predetermined by the application to which the system will be applied. As a result, the desired zone 130 and the construction of the non-linear lens 114 may vary based on the placement of the imaging system 100 or the desired zone 130 of the area 122 the imaging system 100 monitors. While the desired zone 130 shown in the drawings is a continuous zone, it will be appreciated that the desired zone for a particular application may include disjointed or unconnected portions of the area(s) to be monitored.
The lens 214 optically captures the paths 212, 216, 226, 228 of the intersection 232 (i.e., the desired zones 230 in this example) without wasting pixels on the undesired zone(s) 234 surrounding the intersection 232. The lens 214 may arrange the pixels within the paths 212, 216, 226, 228 so that each path 212, 216, 226, 228 has a desired pixel distribution consistent with what is described above.
For example, the lens 214 maps and expands each path 212, 216, 226, 228 onto one-quarter of the surface area of the camera sensor 218 so that the paths 212, 216, 226, 228 are positioned in the same orientation relative to each other in the adjusted image 224. In some constructions, a processor 220 may be used to map or rearrange the pixels to achieve the same orientation for the paths 212, 216, 226, 228. Imaging software, such as OCR or machine vision, may be used to analyze the adjusted image 224 to monitor the objects in each path 212, 216, 226, 228.
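As an illustrative digital analogue of this quadrant mapping (the disclosure performs it optically), each path's crop can be resized and tiled onto one-quarter of a hypothetical 1920x1080 sensor canvas.

```python
import cv2
import numpy as np

# Illustrative digital analogue: each monitored path is given one
# quadrant of the sensor so all four share a common orientation in the
# adjusted image. The per-path crop files here are hypothetical.
paths = [cv2.imread(f"path_{i}.jpg") for i in range(4)]
qh, qw = 540, 960                     # quadrant size for a 1080p sensor
tiles = [cv2.resize(p, (qw, qh)) for p in paths]
top = np.hstack(tiles[:2])
bottom = np.hstack(tiles[2:])
adjusted = np.vstack([top, bottom])   # 1920 x 1080 adjusted image
```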
In the illustrated construction, the lens 314 is constructed to optically capture the front desired zone 330a and the peripheral desired zones 330b, 330c. The lens 314 maps the zones 330a, 330b, 330c to capture objects directly in front of the camera 310 and objects at relatively wide angles (including objects positioned slightly behind the camera 310). More specifically, the lens 314 maps and arranges the pixels within the desired zones 330a, 330b, 330c so that each desired zone 330a, 330b, 330c has a desired pixel distribution consistent with what is described above.
In one non-limiting example, the camera 310 is mounted to a front part of a vehicle (e.g., a police car) to monitor the zone 330a directly in front of the vehicle and the peripheral zones 330b, 330c to the sides of the zone 330a. The peripheral zones 330b, 330c may include objects at relatively wide angles that are relatively close to the camera 310 (e.g., in the vehicle's blind spots), as well as objects that are relatively far away from the camera 310. The lens 314 optically captures the zones 330a, 330b, 330c and expands the zones 330a, 330b, 330c to define the adjusted image 324 that matches or fills the aspect ratio of the camera sensor 318. As a result, the adjusted image 324 allows the camera 310 to monitor objects in the peripheral zones 330b, 330c with the same or substantially the same clarity as objects in the zone 330a. The adjusted image 324 can be stored and/or transmitted to a display to allow a user to monitor the adjusted image 324 of the captured desired zones 330a, 330b, 330c. Additionally or alternatively, imaging software, such as OCR or machine vision, may be used to analyze the adjusted image 324 to detect objects such as license plates, people, etc., in the captured desired zones 330a, 330b, 330c of the adjusted image 324.
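A comparable digital sketch for the vehicle-mounted case, with hypothetical file names and sizes, expands the three zones to a common height and places them side by side so that peripheral objects receive the same clarity as objects straight ahead.

```python
import cv2
import numpy as np

# Illustrative digital analogue for the vehicle-mounted case: the front
# zone and the two peripheral zones are expanded to a common height and
# placed side by side. File names and sizes are hypothetical.
left = cv2.resize(cv2.imread("zone_left.jpg"), (480, 1080))
front = cv2.resize(cv2.imread("zone_front.jpg"), (960, 1080))
right = cv2.resize(cv2.imread("zone_right.jpg"), (480, 1080))
adjusted = np.hstack([left, front, right])   # fills a 1920x1080 sensor
```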
The embodiment(s) described above and illustrated in the figures are presented by way of example only and are not intended as a limitation upon the concepts and principles of the present disclosure. As such, it will be appreciated that variations and modifications to the elements and their configurations and/or arrangement exist within the scope of one or more independent aspects as described.
The present application claims priority to U.S. Provisional Application 62/894,468, filed Aug. 30, 2019, and U.S. Provisional Application 62/897,975, filed Sep. 9, 2019, the entire contents of which are hereby incorporated by reference.