Example embodiments in general relate to a vehicle identification system for detecting a vehicle entering a point of interest, capturing an image of the vehicle, and extracting an identifying feature such as a license plate from the captured image.
Any discussion of the related art throughout the specification should in no way be considered as an admission that such related art is widely known or forms part of common general knowledge in the field.
Automated license plate recognition has been used for many years. However, image processing is inherently complex and thus highly susceptible to errors or miscalculations. For example, sunlight glare, passing vehicles or pedestrians, or other conditions which inhibit the identification of a license plate may have a drastic negative impact on the reliability of systems using previous methods of automated license plate recognition.
Such a high error rate introduces unreliability into the system, which can be unacceptable for certain applications, such as traffic counting or identifying vehicles in parking spaces. Systems that rely purely on vehicle or license plate recognition for parking availability are known for their poor accuracy. By combining the capture of images with space-specific sensory information, the reliability of such systems can be greatly improved to overcome the shortcomings of existing, prior art license plate recognition systems.
An example embodiment is directed to a vehicle identification system. The vehicle identification system includes a sensor oriented towards a point of interest. Upon detection of a vehicle entering the point of interest, a camera will be directed to capture an image of the vehicle. The captured image will be processed so as to extract an identifying feature of the vehicle, such as a license plate. The system is configured to prevent false positives by rejecting extracted images which are obscured or which represent vehicles that have already been detected. The collected information may be used for many purposes, such as parking guidance, car counting, parking space availability, and vehicle location.
There has thus been outlined, rather broadly, some of the embodiments of the vehicle identification system in order that the detailed description thereof may be better understood, and in order that the present contribution to the art may be better appreciated. There are additional embodiments of the vehicle identification system that will be described hereinafter and that will form the subject matter of the claims appended hereto. In this respect, before explaining at least one embodiment of the vehicle identification system in detail, it is to be understood that the vehicle identification system is not limited in its application to the details of construction or to the arrangements of the components set forth in the following description or illustrated in the drawings. The vehicle identification system is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting.
Example embodiments will become more fully understood from the detailed description given herein below and the accompanying drawings, wherein like elements are represented by like reference characters, which are given by way of illustration only and thus are not limitative of the example embodiments herein.
The systems and methods described herein may be utilized to combine vehicle license plate data and sensory information to determine the location of a vehicle 12, such as within a point of interest in the form of a parking space 14. One or more sensors 20, 21, such as LIDAR sensors, are adapted to detect a vehicle 12 entering the parking space 14. Upon such a detection, a camera 30 is triggered to capture an image of the vehicle 12. That captured image is then processed, such as by a processing unit 40 locally or a control unit 50 remotely, to extract license plate data such as but not limited to the issuing state or country of the license plate and the license plate number. All such license plate data may be extracted from within the captured image and then communicated, along with the triggering sensor 20, 21 details, via a gateway or direct communication for correlation on a cloud-based system server such as a control unit 50.
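By way of illustration only, the following sketch shows one way the detect-capture-extract-communicate flow described above could be arranged in software. The hardware calls are stubbed out, and the function names, occupancy threshold, and event fields are assumptions rather than a disclosed implementation.

```python
# Illustrative sketch only: hardware calls are stubbed, and the names,
# threshold, and event fields are assumptions, not a disclosed design.
import time
from dataclasses import dataclass
from typing import Optional

OCCUPIED_BELOW_M = 4.0  # assumed LIDAR range reading indicating a parked vehicle

@dataclass
class PlateEvent:
    sensor_id: str
    space_id: str
    plate_number: Optional[str]  # None when extraction failed
    timestamp: float

def read_lidar_m(sensor_id: str) -> float:
    return 2.5  # stub: a real system would query the sensor 20, 21 here

def capture_image(space_id: str) -> bytes:
    return b"jpeg-bytes"  # stub: a real system would trigger the camera 30 here

def extract_plate_number(image: bytes) -> Optional[str]:
    return "ABC1234"  # stub: plate extraction; returns None when obscured

def upload(event: PlateEvent) -> None:
    print("uploading", event)  # stub: communicate to the control unit 50

def monitor_space(sensor_id: str, space_id: str, poll_s: float = 1.0) -> None:
    """Detect an arrival, capture an image, extract the plate, upload the event."""
    was_occupied = False
    while True:
        occupied = read_lidar_m(sensor_id) < OCCUPIED_BELOW_M
        if occupied and not was_occupied:  # a vehicle 12 has just arrived
            plate = extract_plate_number(capture_image(space_id))
            upload(PlateEvent(sensor_id, space_id, plate, time.time()))
        was_occupied = occupied
        time.sleep(poll_s)
```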
The information contained on the control unit 50 may then be used for further analysis and applications. For example, the information could be utilized in connection with “find my car” applications, in which a user will query the system 10 for the location of their vehicle 12. The information could also be utilized for billing purposes, such as by detecting how long a particular vehicle 12 is parked in a specific parking space 14 to determine the amount to be billed for parking. The information could also be utilized for parking space 14 monitoring, such as for displaying and monitoring the number of available parking spaces 14 in a given parking garage or lot. The information could also be utilized for guidance within a parking garage or lot, with guidance signs or the like being used to guide a vehicle 12 to an open parking space 14.
An example vehicle identification system 10 generally comprises a sensor 20 oriented towards a point of interest, wherein the sensor is adapted to detect a vehicle 12 positioned at or near the point of interest; a camera 30 oriented such that the point of interest is within a field of view of the camera 30, wherein the camera 30 is adapted to capture an image of the vehicle when the sensor detects the vehicle 12 positioned at or near the point of interest; and a processing unit 40 communicatively connected to the camera 30 and the sensor 20, wherein the processing unit 40 is adapted to extract license plate data from the image of the vehicle 12, wherein the processing unit 40 is adapted to reject the license plate data and instruct the camera 30 to capture an additional image of the vehicle 12 if the license plate data is not extracted from the image of the vehicle 12. A housing 42 may be provided for housing the sensor 20, the camera 30, and the processing unit 40. The sensor 20 may be comprised of a LIDAR sensor. The point of interest may be comprised of a parking space 14. The license plate data may comprise an image of the license plate of the vehicle 12.
The processing unit 40 may be adapted to determine if the vehicle 12 is parked in the point of interest based on a position of the license plate in the image of the license plate. The processing unit 40 may be adapted to reject the license plate image if the license plate image is below a threshold size. A control unit 50 may be communicatively connected to the processing unit 40. The control unit 50 may be remote with respect to the processing unit 40. The control unit 50 may comprise a memory, wherein the processing unit 40 is adapted to communicate the license plate data to the memory of the control unit 50. The control unit 50 may be adapted to identify the vehicle as a new vehicle if the license plate image does not match any of a plurality of reference images, with each of the reference images comprising vehicles 12 which have been previously identified by the control unit 50. The license plate data may comprise a license plate number.
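By way of example and without limitation, the rejection criteria described above might be expressed as follows; the pixel thresholds and the exact-match comparison are illustrative assumptions.

```python
# A minimal sketch of the two rejection criteria described above. The
# pixel thresholds and exact-match rule are illustrative assumptions.
MIN_PLATE_WIDTH_PX = 60
MIN_PLATE_HEIGHT_PX = 20

def plate_large_enough(width_px: int, height_px: int) -> bool:
    """Reject plate crops below a threshold size, e.g. a distant background vehicle."""
    return width_px >= MIN_PLATE_WIDTH_PX and height_px >= MIN_PLATE_HEIGHT_PX

def is_new_vehicle(plate_number: str, known_plates: set[str]) -> bool:
    """A vehicle is 'new' if its plate matches none previously stored by the control unit."""
    return plate_number not in known_plates
```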
A method of monitoring a parking space with the vehicle identification system 10 may comprise the steps of detecting the vehicle 12 positioned at or near the point of interest by the sensor 20; capturing the image of the vehicle 12 by the camera 30 when the sensor 20 detects the vehicle 12 positioned at or near the point of interest; extracting license plate data from the image of the vehicle 12 by the processing unit 40; and verifying the license plate data based on one or more criteria by the processing unit 40. The license plate data may comprise an identification of the sensor which detected the vehicle. The one or more criteria may be comprised of a size of a license plate image of the license plate data.
In another exemplary embodiment, a vehicle identification system 10 may comprise a first sensor 20 oriented towards a first point of interest, wherein the first sensor 20 is adapted to detect a first vehicle 12 positioned at or near the first point of interest; a second sensor 21 oriented towards a second point of interest, wherein the second sensor 21 is adapted to detect a second vehicle 12 positioned at or near the second point of interest; a camera 30 oriented such that both the first point of interest and the second point of interest are within a field of view of the camera 30, wherein the camera 30 is adapted to capture an image of the first vehicle 12 when the first sensor 20 detects the first vehicle 12 positioned at or near the first point of interest, wherein the camera 30 is adapted to capture an image of the second vehicle 12 when the second sensor 21 detects the second vehicle 12 positioned at or near the second point of interest; and a processing unit 40 communicatively connected to the first sensor 20, the second sensor 21, and the camera 30, wherein the processing unit 40 is adapted to extract a first license plate image from the image of the first vehicle 12 and a second license plate image from the image of the second vehicle 12. The first sensor 20 may be adjacent to the second sensor 21. The first sensor 20, the second sensor 21, and the camera 30 may each be angled downwardly. A housing 42 may be provided for housing the processing unit 40, the first sensor 20, the second sensor 21, and the camera 30.
In another exemplary embodiment, a vehicle identification system 10 may comprise a plurality of sensors 20, 21 each being oriented towards at least one of a plurality of points of interest; a plurality of cameras 30 each being oriented towards at least one of the plurality of points of interest, wherein at least one of the plurality of cameras 30 and at least one of the plurality of sensors 20, 21 is oriented towards each of the plurality of points of interest; a plurality of processing units 40, wherein at least one of the plurality of processing units 40 is communicatively connected to at least one of the plurality of sensors 20, 21 and at least one of the plurality of cameras 30; and a control unit 50 communicatively connected to each of the plurality of processing units 40; wherein each of the plurality of sensors 20, 21 is adapted to detect a vehicle 12 positioned at or near at least one of the plurality of points of interest; wherein each of the plurality of cameras 30 is adapted to obtain an image of the vehicle 12 when one of the plurality of sensors 20, 21 detects the vehicle 12 positioned at or near at least one of the plurality of points of interest; wherein the processing unit 40 is adapted to extract a license plate image from the image of the vehicle 12, wherein the processing unit 40 is adapted to transfer the license plate image of the vehicle 12 to the control unit 50, wherein the control unit 50 is adapted to associate the license plate image of the vehicle 12 with one of the plurality of points of interest.
As shown throughout the figures, the vehicle license plate verification system 10 generally utilizes one or more sensors 20, 21 in combination with a camera 30 to capture identifying information, such as license plate information, of any vehicle 12 entering a point of interest. The number of sensors 20, 21 utilized may vary in different embodiments.
The type of sensors 20, 21 may vary in different embodiments. By way of example and without limitation, the sensors 20, 21 may comprise light detection and ranging (LIDAR) sensors. LIDAR sensors provide high detection accuracy as well as a mounting position cohesive with that of a corresponding camera 30, allowing for a potential reduction in installation infrastructure. In other embodiments, the sensors 20, 21 may comprise RF sensors or the like. In another exemplary embodiment, the systems and methods described herein may utilize the sensor arrangement of the “Vehicle Flow Monitoring System” described in U.S. patent application Ser. No. 16/750,244, which is hereby incorporated by reference.
The positioning and orientation of the sensors 20, 21 may also vary in different embodiments. Generally, the sensors 20, 21 should be positioned to be oriented towards the points of interest being monitored. In some embodiments, each point of interest will have a single sensor 20 oriented towards it. In other embodiments, multiple sensors 20, 21 may be oriented towards a single point of interest.
In a preferred embodiment as shown in the figures, the sensors 20, 21 may be positioned directly above the center-line of the parking aisle 15, oriented towards a parking space 14. Such an embodiment is shown in
In the exemplary embodiment shown in
While the figures illustrate that the sensors 20, 21 are mounted to an overhead structure such as a ceiling 18, it should be appreciated that in various situations and locations such an overhead structure may not be available. For example, outdoor parking lots or the top level of a parking garage typically do not have overhead structures such as a ceiling 18 on which to mount the sensors 20, 21. In such embodiments, the sensors 20, 21 may be mounted on their own vertical support structures, such as a post or pole as is common with street and traffic lights.
The sensors 20, 21 may be adapted to take readings of the point of interest either continuously or periodically. For example, the sensors 20, 21 may be configured to take readings of the point of interest only at certain time intervals, such as every five seconds. In other embodiments, the sensors 20, 21 will be “always on” so as to continuously monitor the point of interest.
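By way of illustration, the periodic and continuous sampling modes might be realized as a single loop; the five-second interval mirrors the example above, and the callable interface is an assumption.

```python
# Sketch of the two sampling modes described above; a None interval
# approximates an "always on" sensor, otherwise sampling is periodic.
import time
from typing import Callable, Optional

def sample_loop(read_sensor: Callable[[], float],
                on_reading: Callable[[float], None],
                interval_s: Optional[float] = 5.0) -> None:
    while True:
        on_reading(read_sensor())   # hand the reading to the processing unit 40
        if interval_s is not None:
            time.sleep(interval_s)  # periodic mode; skipped in continuous mode
```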
The sensors 20, 21 may be communicatively connected to one or more cameras 30 such that, when one of the sensors 20, 21 detects a new vehicle in the point of interest, the relevant sensor 20, 21 (or a communicatively connected processing unit 40 as discussed below) will transmit an instruction to the camera 30 to capture an image of the point of interest. In some embodiments as discussed below and shown in
Thus, it should be appreciated that the field of view of a single camera 30 may cover the detection radii of multiple sensors 20, 21. Although the figures illustrate embodiments comprising two sensors 20, 21 per camera 30, additional sensors 20, 21 may be assigned to each camera 30 in alternate embodiments. For example, a camera 30 having a particularly wide field of view or positioned a sufficient distance away may be configured to capture images of an entire row of parking spaces 14.
The manner in which the sensors 20, 21 are communicatively connected to the camera 30 may vary in different embodiments. In some embodiments, a wired connection such as a serial or parallel connection may be used to connect each sensor 20, 21 to a camera 30. In other embodiments such as shown in the figures, the sensors 20, 21 may be wirelessly connected to the camera 30, such as by Bluetooth, RF, the Internet, or other communications protocols.
In other embodiments, the sensors 20, 21 may be communicatively connected to a processing unit 40 such as shown in
As shown throughout the figures, the system 10 may utilize one or more cameras 30 to capture images of any vehicles 12 in the point of interest being monitored by the sensors 20, 21. Various types of cameras 30 may be utilized, including cameras 30 with video and/or audio capabilities. The cameras 30 may be configured to continuously capture images or only periodically capture images. In a preferred embodiment, the cameras 30 are configured to capture images when (1) a sensor 20, 21 indicates the presence of a vehicle 12 in the point of interest or (2) a previously captured image was processed and the license plate image or data of the vehicle 12 was not identifiable or otherwise not successfully extracted.
The number of cameras 30 utilized for each point of interest may vary in different embodiments. Further, the number of cameras 30 used per sensor 20, 21 may also vary in different embodiments.
The positioning and orientation of the camera 30 may also vary in different embodiments. Generally, the camera 30 should be positioned and oriented towards the points of interest being monitored such that the points of interest are within the field of view of the camera 30. In some embodiments, each point of interest will have a single camera 30 oriented towards it. In other embodiments, multiple cameras 30 may be oriented towards a single point of interest. In yet further embodiments, multiple points of interest may be covered by a single camera 30 such as shown in
In a preferred embodiment as shown in the figures, the camera 30 may be positioned directly above the center-line of the parking aisle 15, oriented towards a parking space 14. Such an embodiment is shown in
In the exemplary embodiment shown in
While the figures illustrate that the camera 30 is mounted to an overhead structure such as a ceiling 18, it should be appreciated that in various situations and locations such an overhead structure may not be available. For example, outdoor parking lots or the top level of a parking garage typically do not have overhead structures such as a ceiling 18 on which to mount the camera 30. In such embodiments, the camera 30 may be mounted on its own vertical support structure, such as a post or pole as is common with street and traffic lights.
The camera 30 may be positioned adjacent to one or more sensors 20, 21 such as shown in the figures. In such embodiments, the camera 30 may be directly connected to, or even integrated with, the sensors 20, 21. In a wired configuration, the camera 30 may be connected to the sensors 20, 21 by a hardwired serial or parallel communication or the like.
In other embodiments, the camera 30 may be wirelessly connected to one or more sensors 20, 21, such as by Bluetooth, WiFi, RF, or the like. In such embodiments, the camera 30 may be positioned at a location which is distant with respect to the sensors 20, 21. For example, in some embodiments, the camera 30 could be positioned overhead and the sensor 20 could be positioned on the ground surface 19, with both the sensor 20 and the camera 30 being oriented towards the same point of interest.
While both the sensor 20 and camera 30 are preferably oriented towards the same point of interest, they are not necessarily oriented at the same angle or even in the same direction. For example, in some embodiments, the camera 30 could be positioned in front of a parking space 14 to capture a front license plate and the sensor 20 could be positioned over a center aisle 15 behind the parking space 14. In such an embodiment, the camera 30 and sensor 20 would be oriented in opposite directions while still maintaining the same point of interest in the field of view.
In some embodiments, the camera 30 may not be directly communicatively connected to the sensors 20, 21. In such embodiments, the camera 30 may be communicatively connected to the processing unit 40, with the processing unit 40 also being communicatively connected to the sensors 20, 21 such as shown in
As shown throughout the figures, exemplary embodiments of the system 10 may comprise a processing unit 40 which is communicatively connected to the camera 30 and to any sensors 20, 21 associated with that camera 30. The processing unit 40 may be configured to perform the necessary functions to process the images which are captured by the camera 30. In some embodiments, the processing unit 40 may function as a gateway between the camera 30 and any associated sensors 20, 21 so as to both process the data from the sensors 20, 21 to detect when a new vehicle 12 has arrived and to instruct the camera 30, such as through a control signal or instruction, to capture an image of the point of interest such as a parking space 14 when such a new vehicle 12 is detected by the sensors 20, 21.
By way of example, the processing unit 40 may be configured to extract a license plate image or data relating to a license plate from a captured image of a vehicle 12 positioned at or near a point of interest such as a parking space 14. In some embodiments, the actual image of the license plate may be extracted. In other embodiments, the processing unit 40 (or control unit 50 as discussed below) may instead or additionally extract data from the image, such as extracting letters and numbers via optical character recognition. Such extracted data may be utilized to maintain a database of occupied points of interest along with identifying information such as a vehicle license plate number without requiring the storage of the actual captured images. The processing unit 40 may also be configured to detect when a license plate image cannot be extracted from a particular image captured by the camera 30 and to, in those cases, instruct the camera 30 to capture additional images of the vehicle 12 until a license plate image can be extracted.
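By way of example only, the extraction step might be sketched as follows. The locate_plate() detector is hypothetical, and Tesseract (via the pytesseract package) is shown merely as a stand-in character recognizer; no particular engine is specified herein.

```python
# Sketch of the extraction step. locate_plate() is a hypothetical detector;
# pytesseract is an assumed stand-in OCR dependency, not a disclosed choice.
from typing import Any, Optional

def locate_plate(frame: Any) -> Optional[Any]:
    """Hypothetical plate detector; returns a cropped plate image or None."""
    return None  # stub

def extract_plate_text(frame: Any) -> Optional[str]:
    crop = locate_plate(frame)
    if crop is None:
        return None                    # no plate found: caller should re-capture
    import pytesseract                 # assumed OCR dependency
    text = pytesseract.image_to_string(crop).strip()
    return text or None                # an empty OCR result also counts as failure
```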
The processing unit 40 may comprise various computing devices and systems, such as by way of example a microcontroller or microprocessor. The processing unit 40 may comprise memory on which data, such as captured images of vehicles 12 or extracted license plate images, may be stored. The processing unit 40 may be positioned in the same housing 42 as the sensors 20, 21 and/or camera 30 such as shown in
The processing unit 40 may be communicatively connected to a large number of sensors 20, 21 and/or cameras 30. In some embodiments, the processing unit 40 may be communicatively connected to additional processing units 40 which are themselves communicatively connected to additional sensors 20, 21 and/or cameras 30. Such a distributed network of processing units 40, cameras 30, and sensors 20, 21 may be utilized to cover a large area, such as a large, multi-story parking garage. In such an embodiment, there may be a combination of ceiling-mounted, ground-mounted, pole-mounted, or otherwise mounted sensors 20, 21, cameras 30, and/or processing units 40. As discussed below, a central control unit 50 may be communicatively connected to such a distributed network of processing units 40, cameras 30, and sensors 20, 21 to manage and process the data from the entire network.
The manner in which the processing unit 40 is connected to the sensors 20, 21, cameras 30, additional processing units 40, and/or a control unit 50 may vary. For example, the processing unit 40 may be connected by a wired connection to a sensor 20 and a camera 30, and by a wireless connection to additional processing units 40 and/or a control unit 50. In other embodiments, the processing unit 40 may be standalone and wirelessly connected to the sensors 20, 21 and cameras 30 as well.
As shown in
As shown in
The control unit 50 may comprise various types of devices and systems capable of storing, processing, transmitting, and receiving data and images from the sensors 20, 21, cameras 30, and/or processing units 40 of the system 10. In this manner, a large number of points of interest, such as parking spaces 14, may be monitored in real-time by the control unit 50 based on the data and images received from the other components of the system 10.
The control unit 50 may, for example and without limitation, comprise a computer system such as a server computer, desktop computer, laptop computer, tablet computer, or mobile computer such as a smart phone. The control unit 50 may also comprise multiple such computer systems which are interconnected to form a distributed network. In some embodiments, the functions of the control unit 50, including but not limited to storage and processing of data and images, may be performed across multiple computer systems. In some embodiments, the data and images may be stored on and/or accessed from the cloud.
The systems and methods described herein may be utilized for a wide range of situations involving the monitoring of a point of interest. While the figures illustrate points of interest comprised of parking spaces 14 and objects being detected as vehicles 12, it should be appreciated that different points of interest and different types of objects could be supported by the methods and systems described herein. For example, in some embodiments, boat slips could be monitored, with the camera 30 being configured to photograph and extract an image of the boat's name or identification number. In yet other embodiments, aircraft hangars could be monitored, with the camera 30 being configured to photograph and extract an image of the aircraft's tail with its identification number.
Upon detection of the vehicle 12 at the point of interest, the camera 30 will be instructed to capture an image of the vehicle 12. The manner in which the camera 30 is instructed to capture the image of the vehicle 12 may vary in different embodiments. In an exemplary embodiment, a processing unit 40 may direct the camera 30 to capture an image of the vehicle 12 upon detection by the sensor 20, 21 by, for example, activating a subroutine. An image of the vehicle 12 is then captured by the camera 30.
Upon the capture of an image of an object such as a vehicle 12, the image will be processed to extract an image of an identifying feature of the object, such as, in the case of a vehicle 12, the license plate. In other embodiments, the image of the vehicle 12 may be transferred offsite, such as to a cloud-based control unit 50, for data processing and image extraction. Thus, either the processing unit 40 may extract the image of the identifying feature itself, or the image may be transferred to the control unit 50 for extraction.
Upon extraction of an identifying feature from the image, various data may be transferred to the control unit 50 to be saved in memory or processed further. By way of example and without limitation, such data as the captured image of the identifying feature, data representing the captured image of the identifying feature, data identifying the specific point of interest that the image was captured at, specific bay identifiers, timestamps, sensor 20, 21 identification, camera 30 identification, processing unit 40 identification, location, temperature, status of other points of interest at that time, and other data may be received by the control unit 50. In this manner, the control unit 50 may maintain a database of which points of interest are occupied by an object that is continuously updated in real-time.
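By way of illustration, the enumerated data items might be collected into a record such as the following; the field names and types are illustrative, not a disclosed schema.

```python
# One possible record for the data items enumerated above; field names
# and types are illustrative assumptions only.
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OccupancyRecord:
    space_id: str                 # the specific point of interest / bay identifier
    plate_image: Optional[bytes]  # captured image of the identifying feature
    plate_number: Optional[str]   # data extracted from that image
    sensor_id: str
    camera_id: str
    processing_unit_id: str
    location: str
    temperature_c: Optional[float] = None
    timestamp: float = field(default_factory=time.time)
```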
In some circumstances, it may not be possible to extract an identifying feature from a captured image. For example, if there is glare from the sun or a person walking by, the image of a license plate of a vehicle 12 may be obstructed or obscured. In such cases, the processing unit 40 and/or control unit 50, upon detecting that an identifying feature is not extractable from the captured image, may continue to instruct the camera 30 to capture periodic additional images in an attempt to capture an image from which an identifying feature may be extracted such as shown in
If the vehicle 12 is detected by the sensors 20, 21 as having departed the point of interest, the camera 30 may stop capturing images until a new vehicle 12 has entered the point of interest. Further, the camera 30 will stop capturing images upon the processing unit 40 and/or control unit 50 successfully extracting an identifying feature from one of the additional captured images. In some embodiments, the system 10 may be configured such that, after a set period of time or a set number of failed extractions, the camera 30 will stop capturing images of the point of interest until the vehicle 12 has left and been replaced by a new vehicle 12.
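By way of example only, the stopping conditions described above might be combined into a re-capture policy such as the following; the retry budget and delay are assumed values. In such a sketch, still_present() would be backed by the sensors 20, 21, mirroring the departure condition described above.

```python
# Sketch of the re-capture policy: capturing stops on a successful
# extraction, on departure of the vehicle, or after a set number of
# failed attempts. MAX_ATTEMPTS and RETRY_DELAY_S are assumed values.
import time
from typing import Callable, Optional

MAX_ATTEMPTS = 10
RETRY_DELAY_S = 5.0

def capture_until_extracted(capture: Callable[[], bytes],
                            extract: Callable[[bytes], Optional[str]],
                            still_present: Callable[[], bool]) -> Optional[str]:
    for _ in range(MAX_ATTEMPTS):
        if not still_present():
            return None            # vehicle departed: stop capturing
        plate = extract(capture())
        if plate is not None:
            return plate           # success: stop capturing
        time.sleep(RETRY_DELAY_S)  # obscured or glare: wait and try again
    return None                    # give up; the space remains marked occupied
```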
In cases in which an identifying feature such as a license plate of a vehicle 12 is not recognized, the point of interest will still be considered as “occupied” by the processing unit 40 and/or control unit 50 until such time as the sensors 20, 21 detect the departure of the vehicle 12. Further, any applications involving parking guidance (such as directing vehicles 12 to a specific parking space 14) or space availability notifications will still be accurate. However, in situations in which a parking space 14 is reserved, the system 10 may trigger an alarm indicating that the processing unit 40 and/or control unit 50 is unable to verify that the vehicle 12 occupying the parking space 14 is authorized. In such instances, the system 10 may, for example, direct an attendant to manually confirm the identity of the vehicle 12 positioned in the parking space 14.
Continuing to reference
While the example embodiments in
The detection of multiple points of interest having multiple objects with identifying features to be extracted utilizes a similar method as with a single point of interest. However, some additional steps may need to be performed to ensure reliability where multiple objects such as vehicles 12 may be within the field of view of the cameras 30.
As can be seen in
As an example,
One method for distinguishing valid license plate images from invalid license plate images is shown in
In another embodiment, the criteria may be based on the location and orientation of the camera 30 and point of interest. In such an embodiment, the field of view of the camera 30 may be partitioned internally between the points of interest being monitored by the camera 30. For example, a camera 30 covering a first parking space 14 with a first half of its field of view and a second parking space 14 with a second half of its field of view would associate a license plate with the point of interest based on the location of the license plate within the camera's 30 field of view. Any license plates outside of the expected partitions of the field of view may be rejected.
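By way of illustration, such a partition might be expressed as follows for the two-space example above; the 50/50 split is an assumed layout.

```python
# Sketch of partitioning a shared field of view between two parking spaces:
# a plate is assigned by which half of the frame its center falls in, and a
# plate outside the frame is rejected. The 50/50 split is assumed.
from typing import Optional

def space_for_plate(plate_center_x: float, frame_width: float) -> Optional[str]:
    if not 0 <= plate_center_x <= frame_width:
        return None  # outside the expected partitions: reject
    return "first_space" if plate_center_x < frame_width / 2 else "second_space"
```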
Continuing to reference
As shown in
Such information may be communicated to the control unit 50 in a number of manners, including but not limited to communication by the camera 30, the sensors 20, 21, both cameras 30 and sensors 20, 21, or the processing unit 40. The transfer of this information may be direct or via an appropriate communication gateway, such as the Internet. In some embodiments, all data, information, and details of all license plates that were recognized within a captured image may be sent to the control unit 50 for further processing and analysis, even if the total number of license plates recognized exceeds the number of associated parking spaces 14. In this manner, the control unit 50 may be relied upon for quality control and verification.
Subsequently, when the vehicle 12 identified as D arrives and parks next to the vehicle 12 identified as C, the sensor 21 will detect the newly-arrived vehicle 12 and the camera 30 will capture an image. In this case, both of the vehicles 12 identified as C and D would be within the field of view of the camera 30, and thus both license plates would be present in the captured image. Upon receipt of the new captured image, the control unit 50 will compare the two license plates extracted from the captured image with the database of license plates stored in memory.
The control unit 50 will recognize that the vehicle 12 identified as C was already in the memory, and thus the control unit 50 can eliminate the license plate data associated with the vehicle 12 identified as C as it has already been taken and associated with the parking space that vehicle 12 is occupying. Therefore, the other license plate from the captured image, which is representative of the vehicle 12 identified as D, will be saved to memory and associated with the appropriate parking space 14. In this manner, the system 10 will correctly identify that there are two unique vehicles 12 within the two parking spaces 14 without duplication.
These steps may be performed repeatedly. Whenever a new license plate is extracted from a captured image, that license plate will be compared to other license plates in memory so as to eliminate duplicate entries. In circumstances in which the system 10 is unable to verify the positioning of the vehicles 12 for one reason or another, the system 10 may simply record both license plates as being “in the area” without identifying a particular parking space 14. Such a configuration may still be useful for monitoring the number of parking spaces 14 available in an area, since the specific occupied spaces 14 need not be known for a simple count of available spaces 14. In other embodiments, the system 10 may simply randomly associate the license plates with a particular parking space 14.
At a later point in time, if the vehicle 12 identified as D were to leave and be replaced by another vehicle 12, another captured image will be transferred to the control unit 50 for processing. The control unit 50 will recognize that the second parking space 14 has been occupied by a new vehicle 12, with the vehicle 12 identified as C remaining in the first parking space 14. The control unit 50 may then correct the association as necessary. The vehicle 12 identified as C is still present, but the system 10 recognizes that license plate as already being in memory, and thus is able to associate the newly-arrived vehicle 12 with the appropriate parking space 14.
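By way of example only, the duplicate-elimination steps walked through above might be sketched as follows; the mapping structure and the “in the area” fallback are illustrative.

```python
# Sketch of the duplicate-elimination walkthrough above: a plate already
# in memory (vehicle C) is skipped, and a remaining plate (vehicle D) is
# associated with the newly occupied space, or merely counted as being
# "in the area" when the space cannot be resolved.
from typing import Dict, List, Optional

def register_plates(extracted: List[str],
                    known: Dict[str, str],
                    new_space: Optional[str]) -> Dict[str, str]:
    """known maps plate number -> associated space; returns the updated mapping."""
    for plate in extracted:
        if plate in known:
            continue                           # already recorded: eliminate duplicate
        known[plate] = new_space or "in_area"  # the newly arrived vehicle
    return known
```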
In a further embodiment, positional information of an extracted license plate within a captured image may be communicated to the control unit 50 for further processing and analysis. Continuing to reference the vehicles 12 identified as C and D in
When only a single vehicle 12 is communicated for occupation of a singular space 14, the system 10 may record this as a valid license plate position for that space 14. Similarly, when a second vehicle 12 arrives subsequently and then only two license plate details are communicated, the second plate position can also be recorded as a valid position for the second parking space 14. In this manner, a position map may be formed of valid positions for license plates for a given parking space 14. This creates a learned region for the license plate position of a parking space 14 as opposed to a manually-defined region configured upon installation. In other words, through use, the system 10 will eventually learn to partition fields of views between parking spaces 14. This information can be used in times of uncertainty. For example, if there is a singular license plate within a learned region (or close to such a region), that license plate may be associated with confidence to that region's parking space 14.
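By way of illustration, the learned-region approach might be sketched as follows; the pixel radius and the nearest-point matching rule are assumptions.

```python
# Sketch of the learned-region idea: each unambiguous observation adds a
# plate position to a space's learned region, and a later ambiguous plate
# is assigned to the space with the nearest learned position. The radius
# and nearest-point rule are assumptions.
import math
from collections import defaultdict
from typing import Dict, List, Optional, Tuple

learned: Dict[str, List[Tuple[float, float]]] = defaultdict(list)

def record_valid_position(space_id: str, xy: Tuple[float, float]) -> None:
    learned[space_id].append(xy)  # grow the learned region for this space

def nearest_space(xy: Tuple[float, float], radius_px: float = 50.0) -> Optional[str]:
    best, best_d = None, radius_px
    for space_id, points in learned.items():
        for px, py in points:
            d = math.hypot(xy[0] - px, xy[1] - py)
            if d < best_d:
                best, best_d = space_id, d
    return best  # None when no learned region is close enough
```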
In some embodiments, data related to a large number of points of interest such as parking spaces 14 may be stored and processed on a control unit 50. For example, in a large parking garage, the control unit 50 may process data from the cameras 30 and sensors 20, 21 to create a dynamic display which shows the occupancy or vacancy of each of the parking spaces 14 in the parking garage. This display may be visible by an attendant to manage the parking garage. The display may include identifying information of each vehicle 12 parked in a parking space 14, such as a license plate number, or may indicate a lack of such information when there has been a failure in extraction or processing.
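By way of example only, such a display might be driven by a live occupancy map such as the following; the map structure and the sentinel for failed extractions are illustrative.

```python
# Sketch of the availability display: a live occupancy map keyed by space,
# with None for vacant and an assumed sentinel for a failed extraction.
from typing import Dict, Optional

occupancy: Dict[str, Optional[str]] = {
    "1A": "ABC1234",  # occupied, plate extracted
    "1B": None,       # vacant
    "1C": "UNKNOWN",  # occupied, but extraction or processing failed
}

def available_count(spaces: Dict[str, Optional[str]]) -> int:
    return sum(1 for plate in spaces.values() if plate is None)
```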
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar to or equivalent to those described herein can be used in the practice or testing of the vehicle identification system, suitable methods and materials are described above. All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety to the extent allowed by applicable law and regulations. The vehicle identification system may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it is therefore desired that the present embodiment be considered in all respects as illustrative and not restrictive. Any headings utilized within the description are for convenience only and have no legal or limiting effect.