Method and apparatus for verifying an object image in a captured optical image

Information

  • Patent Grant
  • Patent Number
    10,885,371
  • Date Filed
    Tuesday, November 13, 2018
  • Date Issued
    Tuesday, January 5, 2021
Abstract
A mobile apparatus is provided that includes an image sensor for converting an optical image into an electrical signal. The optical image includes an image of a vehicle license plate. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image. Upon capture of video that includes the image, a device operation instructor dynamically determines the highest of the object image scores assigned for each frame of the video and generates an operation adjustment control if the determined highest score is less than a predetermined score threshold; the control is in turn dynamically displayed during continuous capture of the video by the image sensor.
Description
CLAIM OF BENEFIT TO PRIOR APPLICATIONS

This application claims the benefit of priority from each of U.S. patent application Ser. No. 15/466,634, filed Mar. 22, 2017, U.S. patent application Ser. No. 15/681,798, filed Aug. 21, 2017, U.S. patent application Ser. No. 15/713,413, filed Sep. 22, 2017, U.S. patent application Ser. No. 15/713,458, filed Sep. 22, 2017, U.S. patent application Ser. No. 15/880,361, filed Jan. 25, 2018, U.S. patent application Ser. No. 15/681,682, filed Aug. 21, 2017, U.S. patent application Ser. No. 15/451,399, filed Mar. 6, 2017, U.S. patent application Ser. No. 15/455,482, filed Mar. 10, 2017, U.S. patent application Ser. No. 15/427,001, filed Feb. 7, 2017, U.S. patent application Ser. No. 15/419,846 filed Jan. 30, 2017, and U.S. patent application Ser. No. 15/451,393, filed Mar. 6, 2017, which claims the benefit of priority from U.S. patent application Ser. No. 14/716,754, filed on May 19, 2015, and now issued as U.S. Pat. No. 9,589,202, which claims the benefit of priority from U.S. patent application Ser. No. 14/318,397, filed on Jun. 27, 2014, now abandoned, U.S. patent application Ser. No. 14/455,841, filed on Aug. 8, 2014, now abandoned, and U.S. patent application Ser. No. 14/613,323, filed on Feb. 3, 2015.


BACKGROUND
Field

The present disclosure relates generally to a method and apparatus for detecting license plate information from an image of a license plate and, more specifically, to detecting license plate information from an optical image, captured by a mobile apparatus, that includes a license plate image and several other object images.


Background

In recent years, collecting still images of license plates has become a common tool used by authorities to catch the drivers of vehicles that may engage in improper or unlawful activity. For example, law enforcement authorities have set up stationary traffic cameras to photograph the license plates of vehicles that may be traveling above a posted speed limit at a specific portion of a road or vehicles that drive through red lights. Toll booth operators also commonly use such stationary cameras to photograph vehicles that may pass through a toll booth without paying the required toll. However, all of these scenarios have a common thread: the camera must be manually installed and configured such that it will always photograph the vehicle's license plate at a specific angle and when the vehicle is in a specific location. Any unexpected modification, such as a shift in the angle or location of the camera, would render the camera incapable of properly collecting license plate images.


Additionally, camera equipped mobile apparatuses (e.g., smartphones) have become increasingly prevalent in today's society. Mobile apparatuses are frequently used to capture optical images and for many users serve as a replacement for a simple digital camera because the camera equipped mobile apparatus provides an image that is often as good as those produced by simple digital cameras and can easily be transmitted (shared) over a network.


The positioning constraints placed on traffic cameras make it difficult to photograph license plates from different angles and distances and still achieve an accurate reading. It would therefore be difficult to scale the license plate image capture process used by law enforcement authorities to mobile apparatuses. In other words, it is difficult to derive license plate information from an image taken by a mobile image capture apparatus across a variety of angles, distances, ambient conditions, and apparatus motion, and when other object images appear in the same image. This difficulty hinders a user's ability to easily gather valuable information about specific vehicles when engaging in vehicle-related activities such as buying and selling vehicles, insuring vehicles, and obtaining financing for vehicles.


SUMMARY

Several aspects of the present invention will be described more fully hereinafter with reference to various methods and apparatuses.


Some aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The optical image includes an image of a vehicle license plate. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive an insurance quote for a vehicle corresponding to the vehicle license plate in response to the transmission.


Other aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The optical image includes several object images. One of the object images includes a vehicle license plate image. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image from a portion of the electrical signal corresponding to said one of the object images. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive an insurance quote for a vehicle corresponding to the vehicle license plate in response to the transmission.


Other aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The mobile apparatus includes a display. The mobile apparatus includes a rendering module configured to render the optical image to the display. The mobile apparatus includes an image filter configured to apply one or more filter parameters to the electrical signal based on at least one of color temperature of the image, ambient light, and motion of the apparatus. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image. The rendering module is further configured to overlay a detection indicator on the displayed image, in response to a signal from the image filter, to assist the user in positioning the apparatus with respect to the optical image. The rendering module is further configured to provide an alert to the display when the license plate detector fails to recover the vehicle license plate information. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive an insurance quote for a vehicle corresponding to the vehicle license plate in response to the transmission.


Other aspects of the invention relate to a computer program product for a mobile apparatus having an image sensor configured to convert an optical image into an electrical signal. The optical image includes several object images. One of the object images includes an image of a vehicle license plate. The computer program product includes a machine readable medium including code to process the electrical signal to select said one of the object images. The machine readable medium includes code to process a portion of the electrical signal corresponding to the selected said one of the object images to recover information from the vehicle license plate image. The machine readable medium includes code to transmit the vehicle license plate information to a remote apparatus. The machine readable medium includes code to receive an insurance quote for a vehicle corresponding to the vehicle license plate in response to the transmission.


Other aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The optical image includes an image of a vehicle license plate. The mobile apparatus includes a timing circuit configured to sample the electrical signal at a frame rate. The mobile apparatus includes a license plate detector configured to process the sampled electrical signal to recover information from the vehicle license plate image. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive an insurance quote for a vehicle corresponding to the vehicle license plate in response to the transmission.


It is understood that other aspects of methods and apparatuses will become readily apparent to those skilled in the art from the following detailed description, wherein various aspects of apparatuses and methods are shown and described by way of illustration. As understood by one of ordinary skill in the art, these aspects may be implemented in other and different forms and their several details are capable of modification in various other respects. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.





BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects of processes and apparatuses will now be presented in the detailed description by way of example, and not by way of limitation, with reference to the accompanying drawings, wherein:



FIG. 1 conceptually illustrates an exemplary embodiment of an apparatus that is capable of capturing an optical image and detecting a license plate image from the optical image.



FIG. 2 illustrates an exemplary embodiment of transmitting license plate information derived from an optical image to an external server.



FIG. 3 illustrates an exemplary embodiment of an apparatus for displaying an insurance quote from a license plate image.



FIG. 4 conceptually illustrates an exemplary embodiment of a process of obtaining an insurance quote from an optical image.



FIGS. 5a and 5b conceptually illustrate an exemplary embodiment of a process of obtaining an insurance quote from a video.



FIG. 6 illustrates an exemplary embodiment of a system architecture of a license plate detection apparatus.



FIG. 7 illustrates an exemplary embodiment of a diagram of the format converter.



FIG. 8 illustrates an exemplary embodiment of a diagram of the image filter.



FIG. 9 illustrates an exemplary embodiment of a diagram of a license plate detector.



FIG. 10 illustrates an exemplary embodiment of an object image with a convex hull fit around the object image.



FIG. 11 illustrates an exemplary embodiment of a method for forming a quadrilateral from a convex hull.



FIG. 12 illustrates an exemplary embodiment of an object image enclosed in a quadrilateral.



FIG. 13 illustrates an exemplary embodiment of a diagram of the rendering module.



FIG. 14 illustrates an exemplary embodiment of a scene that may be captured by a license plate detection apparatus.



FIG. 15 provides a high level illustration of an exemplary embodiment of how an image may be rendered on a mobile apparatus by the license plate detection apparatus and of the transmission of a detected license plate image to a server.



FIG. 16 conceptually illustrates an exemplary embodiment of a more detailed process for processing an electrical signal to recover license plate information.



FIG. 17 illustrates an exemplary embodiment of an object image comprising a license plate image within a rectangle.



FIG. 18 illustrates an exemplary embodiment of an object image comprising a license plate image within a quadrilateral.



FIG. 19 is an illustration of an exemplary embodiment of the dewarping process being performed on a license plate image.



FIG. 20 conceptually illustrates an exemplary embodiment of a process for processing an optical image comprising a license plate image.



FIG. 21 illustrates an exemplary embodiment of a diagram for determining whether a patch is an actual license plate image.



FIG. 22 conceptually illustrates an exemplary embodiment of a process for processing a patch comprising a candidate license plate image.



FIG. 23 illustrates an exemplary embodiment of an insurance underwriting data flow.



FIG. 24 illustrates an exemplary embodiment of an operating environment for communication between a gateway and client apparatuses.



FIG. 25 illustrates an exemplary embodiment of data flow between a gateway and various other modules.



FIG. 26 conceptually illustrates an exemplary embodiment of a process for transmitting an insurance quote from a license plate image.



FIG. 27 illustrates an exemplary embodiment of an electronic system that may implement the license plate detection apparatus.





DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various exemplary embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention. Acronyms and other descriptive terminology may be used merely for convenience and clarity and are not intended to limit the scope of the invention.


The word “exemplary” or “embodiment” is used herein to mean serving as an example, instance, or illustration. Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term “embodiment” of an apparatus, method or article of manufacture does not require that all embodiments of the invention include the described components, structure, features, functionality, processes, advantages, benefits, or modes of operation.


It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “and/or” includes any and all combinations of one or more of the associated listed items.


Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by a person having ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.


In the following detailed description, various aspects of the present invention will be presented in the context of apparatuses and methods for recovering vehicle license plate information from an image. However, as those skilled in the art will appreciate, these aspects may be extended to recovering other information from an image. Accordingly, any reference to an apparatus or method for recovering vehicle license plate information is intended only to illustrate the various aspects of the present invention, with the understanding that such aspects may have a wide range of applications.



FIG. 1 conceptually illustrates an exemplary embodiment of an apparatus 130 that is capable of capturing an optical image and detecting a license plate 120 from the optical image. The apparatus 130 may be a mobile phone, personal digital assistant (PDA), smart phone, laptop computer, palm-sized computer, tablet computer, game console, media player, digital camera, or any other suitable apparatus. FIG. 1 includes a vehicle 110, the license plate 120 registered to the vehicle 110, the apparatus 130, touch screen 140, and a user 150. The apparatus 130, of some embodiments, may be a wireless handheld device with built-in image capture capabilities such as the smart phone, tablet, or personal digital assistant (PDA) described above. However, in some aspects of the service, the apparatus 130 may be a digital camera capable of processing or transferring data derived from the captured image to a personal computer. The information may then be uploaded from the personal computer to the license plate detection apparatus discussed below.


In an exemplary embodiment of the apparatus, a customized application is installed on the apparatus 130. The customized application may interface with the apparatus' image capture device to capture an optical image, convert the optical image to an electrical signal, process the electrical signal to detect the presence of a license plate image, and derive license plate information from a portion of the electrical signal that is associated with the license plate image. The license plate information may be transmitted wirelessly to a server for further processing or decoding such as optical character recognition (OCR) of the license plate image. Alternatively, the OCR process may be carried out on the mobile apparatus 130.


As shown in FIG. 1, the apparatus 130 may receive an interaction from the user 150 to capture an optical image that includes an object image of the license plate 120. The interaction may occur at the touch screen 140. The touch screen 140 shows an exemplary rendering of the optical image, including a rendering of the license plate image that may be captured by the apparatus 130. As illustrated on the touch screen 140, the image of the license plate 120 may include a number and a state. OCR software may be used to convert the state and number portions of the license plate image to text, which may be stored as strings to be used later for various functions. Once a suitable image including a license plate image is captured by the apparatus 130, the license plate data may be recovered and transmitted to a server for further processing. Additionally, the server and/or the apparatus application may provide error checking capability to ensure that the captured image is clear enough to accurately detect and decode a license plate image. When the server or apparatus determines that a suitable image has not been captured, the apparatus 130 may display an alert in the display area of the touch screen 140, which may guide the user in acquiring a suitable image.


Alternatively, some aspects of the apparatus may provide the capability to bypass the image capture process to instead provide a user interface with text fields. For example, the user interface may provide text fields that allow for entry of the license plate number and state. The entered information may be provided as text strings to the license plate detection apparatus without going through the detection process discussed above.



FIG. 2 illustrates an exemplary embodiment of transmitting license plate information derived from an optical image to an external server 230. In some aspects of the apparatus, a license plate image may be transmitted after recovering the license plate information from an image 220. As shown, FIG. 2 includes an apparatus 210, the image 220, the server 230, and the Internet 240. The apparatus 210 may be a mobile or wireless apparatus. The image 220 may be the same as the image rendered on the display area 140 as illustrated in FIG. 1.


A license plate image recovered from the image 220 may be transmitted over the Internet 240 to the server 230, where it is processed for the purpose of detecting whether the license plate image is suitable for deriving license plate data and/or for performing OCR on the license plate image to derive license plate information such as the state of origin and the license plate number.


Once the license plate image (or image file) is transmitted to the server 230, the apparatus 210 may receive and display a confirmation message for confirming that the derived license plate information (e.g., state and license plate number) is correct. In some aspects of the apparatus, the apparatus 210 may also display information about the vehicle to help the user determine whether the derived license plate information is correct. This may be useful in cases such as when the apparatus 210 captures a license plate image of a moving vehicle. The vehicle license plate may no longer be in sight. However, it may be possible to determine with some degree of accuracy whether the derived license plate information is correct based on the vehicle information that is displayed on the mobile apparatus.



FIG. 3 illustrates an exemplary embodiment of an apparatus 300 for displaying an insurance quote from a license plate image. The apparatus 300 may be a handheld wireless device. The apparatus 300 includes a display area 310. The display area 310 includes license plate information 320, selectable user interface (UI) objects 330, 335, 350, and 355, vehicle descriptors 340, representative vehicle image 370, licensed driver information 360, and insurance quote 345. FIG. 3 illustrates two stages 301-302 of a user's interaction with the apparatus 300.


In the first stage 301, the apparatus 300 may have transmitted a license plate image to the server 230 for further processing. Such processing will be described in the following figures. The apparatus 300 may display the license plate information 320 and the licensed driver information 360. The licensed driver information may be captured from a second image of a driver's license. For instance, after receiving the license plate image, the apparatus may request that the user provide an image of a driver's license. The apparatus may recognize information from the driver's license such as state, number, name, and address. Such information may be used in the insurance underwriting process. Additionally, the recognized information may be converted to text by OCR software. In this exemplary illustration, the name recognized from the driver's license may be displayed in the display area 310 as licensed driver information 360 for verification by the user. Once the user has verified that the information in the display area 310 matches up with the vehicle associated with the license plate image and driver's license information, the apparatus 300 may receive a selection of the selectable UI object 335. However, in some aspects of the apparatus, if the displayed information is not accurate, the apparatus 300 may receive a selection of the selectable UI object 330. In such aspects, the apparatus 300 may prompt the user to retake the license plate image and/or the driver's license image. In some embodiments of the apparatus, each selectable UI object may be a selectable button that is enabled when a user performs a gestural interaction with the touch screen of the apparatus 300.


In the second stage 302, the apparatus 300 may have received a selection of the selectable UI object 335. In response, the display area 310 may present the vehicle description 340, the representative vehicle image 370, the insurance quote 345 and the selectable UI objects 350 and 355. The insurance quote 345 may be based on default coverage information. However, in some aspects of the apparatus, the coverage information may be input by the user before receiving the quote. For instance, the display area 310 may display several slidable objects for setting the various coverage criteria, such as the amount of comprehensive coverage, collision coverage, uninsured motorist coverage and/or deductibles. The apparatus 300 may then transmit this information to the server 230, which then returns the insurance quote 345 based on the coverage information provided by the user.


Additionally, the apparatus may have the capability of receiving several different insurance quotes from several different agencies. The insurance quote 345 may represent only the best price received by the apparatus. However, if the user wishes to see all of the insurance quotes, the apparatus 300 may receive a selection of the selectable UI object 350. Upon selection of the selectable UI object 350, a list of insurance quotes and providers may be displayed in the display area 310. The user may have the option to select from the available listed quotes in lieu of the displayed insurance quote 345.


Additionally, the user may wish to adjust the coverage information based on the insurance quote 345. For instance, the insurance quote 345 may inspire the user to request more or less coverage. In such instances, the apparatus 300 may receive a selection of the selectable UI object 355. Upon receiving a selection of the selectable UI object 355, the display area 310 may display a dialog or the slidable objects discussed above, which may receive interaction from the user to adjust the insurance coverage. Upon setting the new insurance coverage, the apparatus 300 may then display a new insurance quote 345. If the insurance quote 345 is agreeable to the user, then the user may be prompted to enter any additional information needed to finalize the insurance underwriting process and obtain an insurance policy. Although not shown, the display area 310 may prompt the user, by a selectable UI object or any other suitable means, to accept the insurance quote 345. Once accepted, the insurance policy may be generated from information obtained based on the license plate image and the driver's license image.


Providing the interface described in FIGS. 1-3 provides an easy and efficient way for buyers of vehicles to make informed decisions. The interface provides buyers with accurate information so that the buyer can feel comfortable with the cost of insuring a particular vehicle. Additionally, the information is gathered by simply capturing an image of a vehicle license plate and driver's license and providing no, or minimal, further interaction. Thus, the interface provides a trustworthy source for accurate information and an efficient way to obtain insurance coverage with minimal effort.



FIG. 4 conceptually illustrates an exemplary embodiment of a process 400 of obtaining an insurance quote from an optical image. The process 400 may be performed by a mobile apparatus such as the apparatus 130 described with respect to FIG. 1. The process 400 may begin after an image capture capability or an application is initiated on the mobile apparatus. In some aspects of the process, the application may enable the image capture feature on the mobile apparatus.


As shown, the process 400 captures (at 410) an optical image that includes a vehicle license plate image. As will be discussed with respect to the following figures, some aspects of the apparatus may process a video. A frame may then be extracted and converted to an image file.


At 420, the process 400 converts the optical image into an electrical signal. The process 400 then processes (at 430) the electrical signal to recover license plate information. The process 400 determines (at 440) whether the license plate information was successfully recovered. When the license plate information was successfully recovered, the process 400 transmits (at 460) the license plate information to a remote server. The process 400 then receives (at 470) an insurance quote for a vehicle corresponding to the vehicle license plate. The process 400 then ends. In some aspects of the process, several different insurance quotes may be received. Such quotes may be available for display at the mobile apparatus for the user of the mobile apparatus to interact with and view.


Returning to 440, when the process 400 determines that the license plate information was not successfully recovered, the process 400 displays (at 450) an alert that the license plate information was not recovered. In some aspects of the process, a message guiding the user to position the mobile apparatus to achieve greater chances of recovering the license plate information may be provided with the displayed alert. The process then ends. However, in some aspects of the process, rather than end, the process may optionally return to capture (at 410) another optical image and repeat the entire process 400.



FIGS. 5a and 5b conceptually illustrate an exemplary embodiment of a process 500 of obtaining an insurance quote from a video. The process 500 may be performed by a mobile apparatus such as the apparatus 130 described with respect to FIG. 1. The process 500 may begin after an image and/or video capture capability or an application is initiated on the mobile apparatus. The application may enable the image and/or video capture feature on the mobile apparatus.


As shown, the process 500 converts (at 505) an optical image into an electrical signal to be sampled at n frames per second (fps). In some aspects of the process, the process may sample the electrical signal at rates such as 24 fps or any other suitable rate for capturing video according to the apparatus' capabilities. Each sample of the electrical signal represents a frame of a video image presented on a display. The process 500 samples (at 510) a first portion of the electrical signal representing a first frame of the video image presented on the display. The process then determines (at 515) whether any object image(s) are detected within the frame. At least one of the detected object image(s) may comprise a license plate image. When the process 500 determines that at least one object image exists within the frame, the process 500 assigns (at 520) a score based on the detected object image. The score may be based on the likelihood that at least one of the object images is a license plate image and is discussed in greater detail below with respect to FIGS. 21 and 22. The score may be applied to each object image and/or aggregated for each object image detected in the frame. The process may then store (at 525) the score and associated object image and frame information in a data structure. In some aspects of the process, the process 500 may store the aggregated object image score and/or the process 500 may store the highest scoring object image in the frame.
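
By way of illustration only, the following Python sketch shows one way the sampling and scoring loop of the process 500 might be structured. The detect and score callables, the helper name select_best_frame, and the threshold value are assumptions for the sketch rather than part of the disclosed apparatus.

    from typing import Callable, Iterable, Optional, Tuple

    def select_best_frame(
        frames: Iterable,            # decoded frames sampled from the electrical signal
        detect: Callable,            # returns the object images found in a frame
        score: Callable,             # returns a likelihood score for one object image
        threshold: float = 0.9,      # predetermined score threshold (assumed value)
    ) -> Tuple[Optional[int], float]:
        """Sample frames, score detected object images, and keep the best frame."""
        best_index, best_score = None, 0.0
        for index, frame in enumerate(frames):
            objects = detect(frame)  # (at 515) detect object image(s) in the frame
            if not objects:
                continue             # (at 530) feedback would be displayed here
            top = max(score(obj) for obj in objects)
            if top > best_score:     # (at 525) keep the best frame seen so far
                best_index, best_score = index, top
            if top >= threshold:     # stop sampling once a frame scores high enough
                break
        return best_index, best_score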


When the process 500 determines (at 515) that no object image exists within the frame, or after the process 500 stores the score (at 525), the process 500 displays (at 530) feedback to a user based on the object image detected (or not detected). For instance, when no object image is detected in the frame, the process 500 may display a message guiding the user on how to collect a better optical image. However, when at least one object image is detected in the frame, the process 500 may provide feedback by overlaying rectangles around the detected object image(s). Alternatively or conjunctively, the process 500 may overlay a rectangle that provides a visual cue, such as a distinct color, indicating which object image is determined to most likely be a license plate image or has a higher score than other object images within the frame. In some aspects, the visual cue may be provided when a particular object image receives a score above a threshold value.
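
A minimal sketch of this overlay feedback, using OpenCV drawing calls; the specific colors and the threshold value are illustrative assumptions.

    import cv2

    def draw_feedback(frame, boxes, scores, threshold=0.9):
        """Overlay a rectangle on each detected object image (boxes are (x, y, w, h)).

        A green rectangle serves as the distinct-color visual cue for an object
        image scoring above the threshold; other candidates are drawn in white.
        """
        for (x, y, w, h), s in zip(boxes, scores):
            color = (0, 255, 0) if s >= threshold else (255, 255, 255)
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        return frame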


The process 500 optionally determines (at 535) whether user input has been received to stop the video. Such user input may include a gestural interaction with the mobile apparatus, which deactivates the camera shutter on the mobile apparatus. When the process 500 determines (at 535) that user input to stop the video capture is received, the process 500 selects (at 545) the highest scoring frame according to the stored frame information. When the process 500 determines (at 535) that user input to stop the video capture has not been received, the process 500 determines (at 540) whether to sample additional portions of the electrical signal. In some aspects of the process, such a determination may be based on a predetermined number of samples. For instance, the mobile apparatus may have a built-in and/or configurable setting for the number of samples to process before a best frame is selected. In other aspects of the process, such a determination may be based on achieving a score for a frame or object image in a frame that is above a predetermined threshold value. In such aspects, the frame, or the frame comprising the object image, whose score is above the threshold will be selected (at 545). When the process 500 determines that there are more portions of the electrical signal to be sampled, the process 500 samples (at 550) the next portion of the electrical signal representing the next frame of the video image presented on the display. The process 500 then returns to detect (at 515) object image(s) within the next frame. In some aspects of the process, the process may receive user input to stop the video capture at any point while the process 500 is running. Specifically, the process is not confined to receiving user input to halt video capture after the feedback is displayed (at 530); the user input may be received at any time while the process 500 is running. In such aspects, if at least one object image has been scored, then the process 500 will still select (at 545) the highest scoring object image. However, if no object images were scored, then the process will simply end.


In some aspects of the process, the process 500 may optionally use the object image(s) detected in the previous sample to estimate the locations of the object images in the current sample. Using this approach optimizes processing time when the process can determine that the mobile apparatus is relatively stable. For instance, the mobile apparatus may concurrently store gyroscope and accelerometer data. The process 500 may then use gyroscope and accelerometer data retrieved from the mobile apparatus to determine whether the mobile apparatus has remained stable, in which case there is a greater likelihood that the object image(s) will be in similar locations. Thus, when the process 500 can determine that the mobile apparatus is relatively stable, the processing time for license plate detection may be reduced because less of the portion of the electrical signal that represents the video image would need to be searched for the license plate image.
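
One possible stability test is sketched below under assumed sensor units and limits; the function name, sample layout, and limit values are all hypothetical, not part of the disclosure.

    def is_stable(gyro_samples, accel_samples, gyro_limit=0.05, accel_limit=0.2):
        """Decide whether the apparatus stayed steady between two sampled frames.

        gyro_samples holds (x, y, z) angular velocities in rad/s and accel_samples
        holds gravity-compensated (x, y, z) accelerations in m/s^2; both the
        units and the limits are illustrative assumptions.
        """
        max_rotation = max(abs(v) for sample in gyro_samples for v in sample)
        max_motion = max(abs(v) for sample in accel_samples for v in sample)
        return max_rotation < gyro_limit and max_motion < accel_limit

    # When is_stable(...) returns True, the detector could search only a margin
    # around the previous frame's object images instead of the whole frame.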


Alternatively or conjunctively, the process 500 may not use information about object image(s) from the previous frame as a predictor. Instead, the process 500 may undergo the same detection and scoring process discussed above. Then, for each object image that overlaps an object image detected in a previous frame (e.g., the object images share similar pixels by area and/or location in the frames), the object image receives a higher score. Information about the overlapping object image(s) may be maintained for optimized processing later on. Additionally, in some aspects of the apparatus, the license plate detection apparatus may maintain a table of matching object image(s) for the sampled portions of the electrical signal representing frames of video images over time. In such aspects, some object image(s) may exist in only one or a few of the frames, while others may exist in many or all frames and accordingly receive higher scores. In such instances, all of the overlapping object images may be processed as discussed in greater detail in the following sections and provided to the server for OCR or identification. This would lead to greater accuracy in actual license plate detection and OCR results.
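
The overlap test described above could be implemented with an intersection-over-union measure. The sketch below, with assumed bonus and overlap values, raises the score of object images that persist across frames.

    def iou(a, b):
        """Intersection over union of two (x, y, w, h) boxes."""
        ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union else 0.0

    def boost_overlapping(current_boxes, previous_boxes, scores,
                          bonus=0.1, min_iou=0.5):
        """Add a bonus to object images that overlap a prior-frame detection."""
        for i, box in enumerate(current_boxes):
            if any(iou(box, prev) >= min_iou for prev in previous_boxes):
                scores[i] += bonus
        return scores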


Returning now to FIGS. 5a and 5b, after selecting (at 545) the highest scoring frame, the process 500 processes (at 555) the electrical signal based on the information associated with the selected frame to recover license plate information. The process 500 then determines (at 560) whether license plate information was recovered from the electrical signal. When the process 500 determines (at 560) that license plate information has been recovered, the process 500 transmits (at 570) the license plate information to a remote server. The process 500 then receives (at 575) an insurance quote for a vehicle corresponding to the vehicle license plate. The process 500 then ends. In some aspects of the process, several different insurance quotes may be received that have similar features to the vehicle associated with the license plate information. Such quotes may be displayed at the mobile apparatus for the user of the mobile apparatus to interact with and view.


Returning to 560, when the process 500 determines that license plate information was not detected, the process 500 displays (at 565) an alert that license plate information cannot be recovered. Such an alert may guide the user in acquiring better video that is more likely to produce a readable license plate image. For instance, the alert may guide the user in adjusting the mobile apparatus's position or angle. The process 500 may then return to collect additional video.



FIG. 6 illustrates an exemplary embodiment of a system architecture of a license plate detection apparatus. The plate detection apparatus may be a mobile apparatus such as the apparatus 130 described with respect to FIG. 1 or any other suitable mobile apparatus that has image capture and processing capabilities.


The license plate detection apparatus includes an image capture apparatus 605, an imager 610, a keypad 615, a strobe circuit 685, a frame buffer 690, a format converter 620, an image filter 625, a license plate detector 630, a network 635, network interfaces 640 and 697, a gateway 645, a rendering module 650, and a display 655. The license plate detection apparatus may communicate with a server having OCR Module 660, and an OCR analytics storage 670. However, in some aspects of the apparatus, the OCR module and/or OCR analytics storage may be part of the mobile apparatus. The license plate detection apparatus illustrated in FIG. 6 generates license plate information 675, which may be processed by various modules communicatively coupled to the gateway. The processed information may then be used by the insurance quote generator 695. The insurance quote generator may be used to obtain an insurance quote from a license plate image. In some aspects of the apparatus, the insurance quote generator 695 may be tied to an insurance underwriting service accessible via an API. In such aspects, when a device receives all the requisite criteria for obtaining insurance for a vehicle based on a license plate image, the insurance quote generator may communicate with the insurance underwriting service to underwrite the user for insurance in near real time based on the license plate image.


As shown, the image capture apparatus 605 communicates an optical image to the imager 610. The image capture apparatus 605 may comprise a camera lens and/or a camera that is built into a mobile apparatus. The imager 610 may comprise a CMOS array, NMOS, CCD, or any other suitable image sensor that converts an optical image into an electrical signal (e.g., raw image data). The electrical signal comprises pixel data associated with the captured image. The amount of pixel data is dependent on the resolution of the captured image. The pixel data is stored as numerical values associated with each pixel and the numerical values indicate characteristics of the pixel such as color and brightness. Thus, the electrical signal comprises a stream of raw data describing the exact details of each pixel derived from the optical image. During the image capture process, the imager 610 may produce a digital view as seen through the image capture apparatus for rendering at the display 655.


In some aspects of the apparatus, the image capture apparatus 605 may be configured to capture video. In such aspects, a timing circuit, such as the strobe circuit 685, may communicate with the imager 610. The strobe circuit 685 may sample (or clock) the imager 610 to produce a sampled electrical signal at some periodicity such as 24-30 fps. The sampled electrical signal may be representative of a frame of video presented on the display 655. The electrical signal may be provided to the frame buffer 690. However, the imager 610 may communicate the electrical signal directly to the format converter 620 when a single optical image is captured. When video is captured, the frame buffer 690 may communicate the sample of the electrical signal representative of the frame of video to the format converter 620. However, in some aspects of the apparatus, the frame buffer 690 may be bypassed such that the sampled electrical signal is communicated directly to the format converter 620.


The format converter 620 generates or compresses the raw image pixel data provided in the electrical signal to a standard, space efficient image format. However, in some aspects of the apparatus, the frame buffer 690 and format converter 620 may be reversed such that the sampled electrical signals are converted to a compressed standard video format before buffering. The standard image and/or video format can be read by the following modules of the license plate detection apparatus. However, the following description will assume that the sampled electrical signals are buffered before any such format conversion. The format converter 620 will be described in greater detail in FIG. 7.


The format converter 620 communicates the standard image file (or image) to the image filter 625. The image filter 625 performs a variety of operations on the image to provide the optimal conditions to detect a license plate image within the image. Such operations will be described in greater detail in FIG. 8. However, if the image filter 625 determines that the image is too distorted, noisy, or otherwise in a condition that is unreadable to the point that any filtering of the image will not result in a viable image for plate detection, the image filter 625 will signal to the rendering module 650 to display an alert on the display 655 that the image is not readable. Otherwise, once the image is filtered, the image should ideally be in a state that is optimal for accurate license plate detection. The image filter 625 will then communicate the filtered image to the license plate detector 630.


The plate detector 630 is an integral module of the license plate detection apparatus. The plate detector 630 will process the image to detect the presence of a license plate image by implementing several processes which will be described in greater detail in FIG. 9. The license plate detector may generate overlays such as rectangular boxes around object images it detects as potential or candidate license plate images. The overlays may be transmitted as signals from the license plate detector 630 to the rendering module 650. The rendering module may instruct the display 655 to display the overlays over the image received from the imager 610 so that the user of the mobile apparatus can receive visual guidance relating to what object images the license plate detection apparatus detects as candidate license plate images. Such information is useful in guiding the user to capture optical images that include the license plate image and provide a higher likelihood of accurate license plate information recovery.


The license plate detector 630 will determine which portion of the image (or electrical signal) is most likely a license plate image. The license plate detector 630 will then transmit only the license plate image portion of the image to the network 635 by way of the network interface 697. Alternatively, a user may skip the entire image conversion process and, using the keypad 615, key in the license plate information, which is then transmitted over the network 635 by way of the network interface 697. The network 635 then transmits the license plate image information (or image file) or keyed information to the network interface 640, which transmits signals to the gateway 645.


The gateway 645 may transmit the license plate image data to the OCR module 660. The OCR module 660 derives the license plate information such as state and number information from the license plate image. The OCR module 660 may use several different third party and/or proprietary OCR applications to derive the license plate information. The OCR module 660 may use information retrieved from the OCR analytics storage 670 to determine which OCR application has the greatest likelihood of accuracy in the event that different OCR applications detected different characters. For instance, the OCR analytics storage 670 may maintain statistics collected from the user input received at the apparatus 300 described with respect to FIG. 3. The OCR module 660 may then select the license plate information that is statistically most likely to be accurate using information retrieved from the analytics storage 670. The OCR module 660 may then transmit the license plate information 675 as a text string or strings to the gateway 645, which provides the license plate information 675 to the rendering module 650 through the network 635. The rendering module 650 may then instruct the display 655 to display the license plate information 675. The display 655 may then display a message similar to the one described with respect to FIG. 3.
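
A hedged sketch of how the OCR module 660 might weigh readings from several OCR applications against historical accuracy retrieved from the OCR analytics storage 670; the engine interface, the read_plate helper, and the weighting scheme are assumptions for illustration.

    from collections import Counter

    def read_plate(plate_image, engines, accuracy_by_engine):
        """Return the (state, number) reading favored by accuracy-weighted voting.

        engines maps an engine name to a callable returning a (state, number)
        tuple; accuracy_by_engine maps the same names to historical accuracy
        rates retrieved from the analytics storage.
        """
        votes = Counter()
        for name, run in engines.items():
            votes[run(plate_image)] += accuracy_by_engine.get(name, 0.5)
        reading, _ = votes.most_common(1)[0]
        return reading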


Additionally, the license plate information 675 may be transmitted through the gateway 645 and processed by various modules communicatively coupled to the gateway 645. The gateway 645 may transmit the processed information to the insurance quote generator 695. The insurance quote generator 695 may communicate with at least one third party service by way of an API to receive at least one insurance quote based on the license plate information. The insurance quote may then be transmitted back to the gateway 645 for further processing. Alternatively, or in addition, the gateway 645 may transmit the insurance quote to the rendering module 650 through the network 635. The rendering module 650 may then instruct the display 655 to display the insurance quote along with any other information to assist the user of the mobile apparatus.


In the event that the OCR module 660 or the license plate detector 630 is unable to detect a license plate image or identify any license plate information, the OCR module 660 and/or the license plate detector 630 will signal an alert to the rendering module 650, which will be rendered on the display 655.


In some aspects of the apparatus, the OCR module 660 may be located on an apparatus separate from an external server. For instance, the OCR module 660 may be located on the mobile apparatus 130 similar to the license plate detection apparatus. Additionally, in some aspects of the apparatus, the format converter 620, image filter 625, and license plate detector 630 may be located on an external server and the electrical signal recovered from the optical image may be transmitted directly to the network 635 for processing by the modules on the external server.


The license plate detection apparatus provides several advantages in that it is not confined to still images. As discussed above, buffered or unbuffered video may be used by the license plate detection apparatus to determine the frame with the highest likelihood of containing a license plate image. It also enables optical images to be taken while a mobile apparatus is moving and accounts for object images recovered from any angle and/or distance. Additionally, the license plate detection apparatus provides the added benefit of alerting the user when a license plate image cannot be accurately detected, in addition to guidance relating to how to get a better image that is more likely to produce license plate information. Such guidance may include directional guidance, such as adjusting the viewing angle or distance, as well as guidance to adjust lighting conditions, if possible. Thus, the license plate detection apparatus provides a solution to the complicated problem of how to derive license plate information captured from moving object images and from virtually any viewing angle. The license plate information may then be used to derive different information associated with the license plate information, such as an estimated value for a vehicle.



FIG. 7 illustrates an exemplary embodiment of a diagram of the format converter 620. The format converter 620 receives the input of an electrical signal that defines an image 720 or an electrical signal that defines a sequence of sampled images such as video frames 725. The format converter 620 outputs an image file 730 in a standard format such as the formats discussed above with respect to FIG. 1. The format converter 620 includes a frame analyzer 715 and a conversion engine 710. When an electrical signal defining an image 720 is received at the format converter 620, the electrical signal will be read by the conversion engine 710. The conversion engine 710 translates the pixel data from the electrical signal into a standard, compressed image format file 730. Such standard formats may include .jpeg, .png, .gif, .tiff or any other suitable image format similar to those discussed with respect to FIG. 1. In the exemplary instance where the format converter 620 converts video to a standard format, the standard format may include .mpeg, .mp4, .avi, or any other suitable standard video format. Since the electrical signal received at the format converter 620 is raw data, which can make for a very large file, the conversion engine may compress the raw data into a format that requires less space and is more efficient for information recovery.
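
As a rough illustration of the conversion engine 710, the snippet below compresses raw pixel data (represented here by a NumPy array standing in for the electrical signal) into a standard JPEG file; the resolution, file names, and quality setting are placeholders.

    import cv2
    import numpy as np

    raw_pixels = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for raw pixel data
    ok, jpeg = cv2.imencode(".jpg", raw_pixels, [cv2.IMWRITE_JPEG_QUALITY, 90])
    if ok:
        with open("frame.jpg", "wb") as f:  # space-efficient standard format file
            f.write(jpeg.tobytes())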


The format converter 620 may also receive several sampled electrical signals, each representing frames of video images, such as frame data 725. The video data frames may be received at the frame analyzer 715 in the format converter 620. The frame analyzer 715 may perform a number of different functions. For instance, the frame analyzer 715 may perform a function of analyzing each frame and discarding any frames that are blurry, noisy, or generally bad candidates for license plate detection based on some detection process such as the process 500 described in FIGS. 5a and 5b. Those frames that are suitable candidates for license plate detection may be transmitted to the conversion engine 710 and converted to the standard format image 730, similar to how the image 720 was converted.
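
The disclosure does not fix a particular sharpness test, but one common choice the frame analyzer 715 could use to discard blurry frames is the variance of the Laplacian, sketched below with an assumed threshold.

    import cv2

    def is_sharp_enough(gray_frame, threshold=100.0):
        """Flag blurry frames: low Laplacian variance means little fine detail."""
        return cv2.Laplacian(gray_frame, cv2.CV_64F).var() >= threshold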



FIG. 8 illustrates an exemplary embodiment of a diagram of the image filter 625. The image filter 625 receives a formatted image file that it is configured to read. The image filter 625 outputs a filtered image 840, which may be optimized for more reliable license plate recognition. Alternatively, if the image filter 625 determines that the image is unreadable, the image filter 625 may signal an alert 845, indicating to the user that the image is unreadable and/or guiding the user to capture a better image.


The image filter 625 includes a filter processor 805, a grayscale filter 810, and a parameters storage 835. When the image filter 625 receives the formatted image file 830, the filter processor 805 will retrieve parameters from the parameters storage 835, which indicate how the filter processor 805 should optimally filter the image. For instance, if the received image was taken in cloudy conditions, the filter processor 805 may adjust the white balance of the image based on the parameters retrieved from the parameters storage 835. If the image was taken in the dark, the filter processor 805 may use a de-noise function based on the parameters retrieved from the parameters storage 835 to remove excess noise from the image. In some aspects of the apparatus, the filter processor 805 also has the ability to learn, based on the success of previously derived license plate images, what parameters work best or better in different conditions such as those described above. In such aspects, the filter processor 805 may take the learned data and update the parameters in the parameters storage 835 for future use.
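
A hedged sketch of how stored parameters might drive the filter processor 805; the parameter names, gain values, and choice of denoising call are illustrative assumptions, not the disclosed implementation.

    import cv2

    def apply_filter_parameters(image, params):
        """Apply white-balance gains and denoising drawn from a parameters store."""
        if "white_balance_gain" in params:      # e.g., (1.1, 1.0, 0.9) for cloudy light
            b, g, r = cv2.split(image)
            gb, gg, gr = params["white_balance_gain"]
            image = cv2.merge((cv2.convertScaleAbs(b, alpha=gb),
                               cv2.convertScaleAbs(g, alpha=gg),
                               cv2.convertScaleAbs(r, alpha=gr)))
        if "denoise_strength" in params:        # e.g., 10 for images taken in the dark
            h = params["denoise_strength"]
            image = cv2.fastNlMeansDenoisingColored(image, None, h, h)
        return image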


The filter processor 805 also has logic to determine if an image will be readable by the license plate detector 630. When the filter processor 805 determines that the image will not be readable by the license plate detector 630, the filter processor 805 may signal an alert 845 to the rendering module 650. However, when the filter processor 805 determines that sufficient filtering will generate a readable image for reliable license plate detection, the filter processor 805 communicates the image, post filtering, to the grayscale filter 810.


Additionally, in some aspects of the apparatus, the image filter 625 may receive several images in rapid succession. Such instances may be frames of a video that may be captured while a mobile apparatus is moving. In such instances, the filter processor 805 may continuously adjust the filter parameters to account for each video frame it receives. The same alerts may be signaled in real time in the event that a video frame is deemed unreadable by the filter processor 805.


The grayscale filter 810 will convert the received image file to grayscale. More specifically, the grayscale filter will convert the pixel values for each pixel in the received image file 830 to new values that correspond to appropriate grayscale levels. In some aspects of the filter, the pixel values may be between 0 and 255 (i.e., 256, or 2^8, values). In other aspects of the filter, the pixel values may be between 0 and any other value that is a power of 2 minus 1, such as 1023. The image is converted to grayscale to simplify and/or speed up the license plate detection process. For instance, by reducing the number of colors in the image, which could be in the millions, to shades of gray, the license plate image search time may be reduced.
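
For concreteness, a standard luminance-weighted conversion the grayscale filter 810 could perform is shown below; the weights are the common ITU-R BT.601 values, not values fixed by the disclosure, and the input file name is a placeholder.

    import cv2
    import numpy as np

    color_image = cv2.imread("scene.jpg")       # BGR pixel data, values 0-255
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)

    # The equivalent per-pixel computation, Y = 0.299 R + 0.587 G + 0.114 B:
    b, g, r = color_image[..., 0], color_image[..., 1], color_image[..., 2]
    gray_manual = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)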


In the grayscale image, regions with higher intensity values (e.g., brighter regions) will appear brighter than regions with lower intensity values. The grayscale filter 810 ultimately produces the filtered image 840. However, one skilled in the art should recognize that the ordering of the modules is not confined to the order illustrated in FIG. 8. Rather, the image filter may first convert the image to grayscale using the grayscale filter 810 and then filter the grayscale image at the filter processor 805. In that case, the filter processor 805 outputs the filtered image 840. Additionally, it should be noted that the image filter 625 and the format converter 620 may be interchangeable. Specifically, the order in which the image is processed by these two modules may be swapped in some aspects of the apparatus.



FIG. 9 illustrates an exemplary embodiment of a diagram of the license plate detector 630. The license plate detector 630 receives a filtered image 930 and processes the image to determine license plate information 935, which may be a cropped image of at least one license plate image. The license plate detector 630 comprises an object detector 905, a quad processor 910, a quad filter 915, a region(s) of interest detector 920, and a patch processor 925. The license plate detector 630 provides the integral function of detecting a license plate image in an image at virtually any viewing angle and under a multitude of conditions, and converting it to an image that can be accurately read by at least one OCR application.


The license plate detector 630 receives the filtered image 930 at the object detector 905. As discussed above, the filtered image 930 has been converted to a grayscale image. The object detector 905 may use a mathematical method, such as a Maximally Stable Extremal Regions (MSER) method, for detecting regions in a digital image that differ in properties, such as brightness or color, compared to areas surrounding those regions. Simply stated, the detected regions of the digital image have some properties that are constant or vary within a prescribed range of values; all the points (or pixels) in the region can be considered in some sense to be similar to each other. This method of object detection may provide greater accuracy in the license plate detection process than other processes such as edge and/or corner detection. However, in some instances, the object detector 905 may use edge and/or corner detection methods to detect object images in an image that could be candidate license plate images.


Typically, the object images detected by the object detector 905 will have a substantially uniform intensity across adjacent pixels. Adjacent pixels with a different intensity would be considered background rather than part of the object image. In order to distinguish the object images from the background regions of the filtered image 930, the object detector 905 will apply a series of thresholds to the image. Grayscale images may have intensity values between 0 and 255, 0 being black and 255 being white. However, in some aspects of the apparatus, these values may be reversed, with 0 being white and 255 being black. An initial threshold is set to be somewhere between 0 and 255. Variations in the object images are measured over a pre-determined range of threshold values. A delta parameter indicates through how many different gray levels a region needs to be stable to be considered a potential detected object image. The object images within the image that remain unchanged, or have little variation, over the applied delta thresholds are selected as likely candidate license plate images. In some aspects of the detector, small variations in the object image may be acceptable. The acceptable level of variation in an object image may be programmatically set for successful object image detection. Conversely or conjunctively, the number of pixels (or area of the image) that must be stable for object image detection may also be defined. For instance, a stable region that has less than a threshold number of pixels would not be selected as an object image, while a stable region with at least the threshold number of pixels would be selected as an object image. The number of pixels may be determined based on known values relating to the expected pixel size of a license plate image or any other suitable calculation such as a height to width ratio.
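As one illustration of the stability and pixel-count parameters described above, the sketch below uses OpenCV's MSER implementation; the delta and area values are assumed for illustration and are not specified by the patent.

```python
import cv2

gray = cv2.imread("captured_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Positional arguments: delta, min_area, max_area. Delta is the number of
# gray levels over which a region must remain stable; the area bounds
# enforce the pixel-count requirement discussed above. Values are assumed.
mser = cv2.MSER_create(5, 500, 50000)
regions, bboxes = mser.detectRegions(gray)  # candidate object images
print(len(regions), "stable regions detected")
```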


In addition, the object detector 905 may recognize certain pre-determined textures in an image, as well as the presence of informative features, that provide a greater likelihood that the detected object image may be a license plate image. Such textures may be recognized by using local binary pattern (LBP) cascade classifiers. LBP is especially useful in real-time image processing settings, such as when images are being captured as a mobile apparatus moves around an area. Although commonly used in the art for facial recognition, LBP cascade classifiers may be modified such that the method is optimized for the detection of candidate license plate images.


In an LBP cascade classification, positive samples of an object image are created and stored on the license plate detection apparatus. For instance, a sample of a license plate image may be used. In some instances, multiple samples may be needed for more accurate object image detection, considering that license plates may vary from state to state or country to country. The apparatus will then use the sample object images to train the object detector 905 to recognize license plate images based on the features and textures found in the sample object images. LBP cascade classifiers may be used in addition to the operations discussed above to provide improved detection of candidate license plate images.
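A sketch of applying a trained LBP cascade follows; the cascade file name and detection parameters are hypothetical, standing in for a cascade trained on the positive license plate samples described above.

```python
import cv2

# Hypothetical cascade trained on positive license plate samples
# (e.g., with OpenCV's cascade training tools).
cascade = cv2.CascadeClassifier("lbp_license_plate_cascade.xml")
gray = cv2.imread("captured_frame.jpg", cv2.IMREAD_GRAYSCALE)
candidates = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,    # assumed image-pyramid step
    minNeighbors=4,     # assumed merging threshold for detections
    minSize=(60, 30))   # assumed minimum plate size in pixels
for (x, y, w, h) in candidates:
    print("candidate license plate region:", x, y, w, h)
```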


Once the object detector 905 has detected at least one object as a candidate license plate image, the object detector 905 will pass information relating to the detected object images to the quad processor 910 and/or the quad filter 915. In some aspects of the detector, the object images may not be of a uniform shape such as a rectangle or oval. The quad processor 910 will then fit a rectangle around each detected object image based on the object image information provided by the object detector 905. Rectangles are ideal due to the rectangular nature of license plates. As will be described below, information about the rectangles may be used to overlay rectangles on object images that are displayed for the user's view on a mobile apparatus.


The rectangle will be sized such that it fits minimally around each object image, with all areas of the object image within the rectangle and no more background space than is necessary to fit the object image. However, due to various factors such as the angle at which the optical image was taken, the license plate image may not be perfectly rectangular. Therefore, the quad processor 910 will perform a process on each object image using the rectangle to form a quadrilateral from a convex hull formed around each object image.


The quad processor 910 will use an algorithm that fits a quadrilateral as closely as possible to the detected object images in the image. For instance, the quad processor 910 will form a convex hull around the object image. A convex hull is a polygon that fits around the detected object image as closely as possible; it comprises edges and vertices and may have many vertices. The quad processor 910 will take the convex hull and reduce it to exactly four vertices (or points) that fit closely to the object image.
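One possible implementation of this vertex-reduction step is sketched below, assuming NumPy; the helper names are hypothetical, and the rule of choosing the point Z that adds minimal area follows the description above and in FIGS. 10-12.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4)."""
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:          # parallel edges: no usable point Z
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

def hull_to_quad(hull):
    """Compress a convex hull (ordered N x 2 array, N >= 4) to 4 vertices.

    Each iteration considers removing one vertex pair (B, C): the
    neighboring edges AB and DC are extended to their intersection Z, and
    the pair whose Z adds the least area (triangle B-Z-C) is replaced,
    mirroring the minimal-increase rule described above.
    """
    pts = [np.asarray(p, dtype=float) for p in hull]
    while len(pts) > 4:
        n = len(pts)
        best = None
        for i in range(n):
            a, b = pts[(i - 1) % n], pts[i]
            c, d = pts[(i + 1) % n], pts[(i + 2) % n]
            z = line_intersection(a, b, d, c)   # extend edges AB and DC
            if z is None:
                continue
            # Area added by replacing B and C with Z is triangle B-Z-C.
            added = 0.5 * abs((z[0] - b[0]) * (c[1] - b[1])
                              - (z[1] - b[1]) * (c[0] - b[0]))
            if best is None or added < best[0]:
                best = (added, i, z)
        _, i, z = best
        j = (i + 1) % n
        pts = [p for k, p in enumerate(pts) if k not in (i, j)]
        pts.insert(i if j != 0 else len(pts), z)  # keep cyclic order
    return np.array(pts)
```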



FIGS. 10-12 illustrate the functionality of the quad processor 910. As shown, FIG. 10 illustrates an exemplary embodiment of an object image 1005 with a convex hull 1010 fit around the object image 1005, and a blown up region 1015. The convex hull 1010 comprises several edges and vertices, including vertices A-D. In order to fit the object image 1005 into a quadrilateral, the convex hull 1010 may be modified such that only four vertices are used. For instance, as illustrated in FIG. 11, for each set of four consecutive vertices such as A-D in the convex hull 1010, the quad processor 910 will find a new point Z that maintains convexity and enclosure of the object image when B and C are removed. Point Z is chosen as a point that provides a minimal increase to the hull area and does not go outside of the originally drawn rectangle (not shown). Thus, FIG. 11 illustrates an exemplary embodiment of a method for forming a quadrilateral (shown in FIG. 12) from the convex hull 1010. The process repeats for each set of four consecutive vertices until the convex hull 1010 is compressed to only four vertices, as illustrated in FIG. 12.



FIG. 12 illustrates an exemplary embodiment of the object image 1005 enclosed in a quadrilateral 1210. As shown in FIG. 12, the quadrilateral 1210 fits as closely to the object image 1005 as possible without any edge overlapping the object image 1005. Fitting a quadrilateral closely to an object image as illustrated by FIGS. 10-12 provides the benefit of greater efficiency in the license plate detection process. As will be described below, the license plate detection apparatus will only search the portions of the image within the quadrilaterals for the presence of a license plate image.


Referring back to FIG. 9, now that the quad processor 910 has drawn efficient quadrilaterals around each of the detected object images, the coordinates of the quadrilaterals are passed to the quad filter 915 and/or the region(s) of interest detector 920. As discussed above, the license plate detection apparatus first overlays rectangles around each detected object image. The quad filter 915 may use the rectangle information (rather than the quadrilateral information) received from the object detector 905, such as the pixel coordinates of the rectangles in the image, and look for rectangles that are similar in size and that overlap. The quad filter 915 will then discard the smaller rectangle(s), while maintaining the largest. If at least two rectangles are of an identical size and overlap, the quad filter 915 will use a mechanism to determine which rectangle is more likely to contain a full license plate image and discard the less likely image within the other rectangle(s). Such mechanisms may involve textures and intensity values as determined by the object detector 905. In some aspects of the filter, rather than searching only rectangles, the quad filter 915 may alternatively or additionally search the quadrilaterals generated by the quad processor 910 for duplicates and perform a similar discarding process. By filtering out the duplicates, only unique object images within the rectangles will remain, with the likelihood that at least one of those object images is a license plate image. Thus, at this point, the license plate detection apparatus will only need to search the areas within the rectangles or quadrilaterals for the license plate image.


The region(s) of interest detector 920 will then determine which of the object images have proportions (e.g., height and width) similar to the proportions that would be expected for a license plate image. For instance, a license plate is typically rectangular in shape. However, depending on several factors, such as the angle at which the license plate image was captured, the object image may appear more like a parallelogram or trapezoid. There is a limit, however, to how much skew or keystone (trapezoidal shape) a license plate image can undergo before it becomes unreadable. Therefore, it is necessary to compute a skew factor and/or keystone to determine whether the object image may be a readable license plate image. Specifically, object images with a skew factor and/or keystone outside a threshold range are likely object images that do not have the proportions expected for a license plate image or would likely be unreadable. Since a license plate has an expected proportion, a threshold skew factor and/or keystone may be set, and any detected object image with a skew factor and/or keystone indicating that the object image is not a readable license plate image will be discarded. For instance, license plate images with a high skew and/or high keystone may be discarded.


In some aspects of the apparatus, the skew and keystone thresholds may be determined by digitally distorting known license plate images with varying amounts of pitch and yaw to see where the identification process and/or OCR fails. The threshold may also be dependent on the size of the object image or quadrilateral/trapezoid. Thus, quadrilaterals or trapezoids must cover enough pixel space to be identified and read by the OCR software. Those that do not cover a large enough pixel space, or that have skew factors or keystones that are too high, would then be discarded as either being unlikely candidates for license plate images or unreadable license plate images.


The skew factor may be computed by finding the distances between opposing vertices of the quadrilateral, taking the ratio of the shorter distance to the longer distance, and subtracting that ratio from one, so that the skew factor lies between 0 and 1. Rectangles and certain parallelograms that are likely candidate license plate images will have a skew factor that is close to 0, while skewed parallelograms will have a high skew factor. Additionally, trapezoids that are likely candidate license plate images will have a keystone that is close to 0, while trapezoids that are unlikely candidate license plate images will have a high keystone. Therefore, object images with a high skew factor are discarded, while parallelograms with a lower skew factor and trapezoids with a lower keystone are maintained. In some aspects of the apparatus, a threshold skew and a threshold keystone may be defined. In such aspects, parallelograms having a skew factor below the threshold are maintained while those above the threshold are discarded. Similarly, in such aspects, trapezoids having a keystone below the threshold are maintained while those above the threshold are discarded. When the value is equal to the threshold, the parallelogram or trapezoid may be maintained or discarded depending on the design of the apparatus.
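A compact sketch of this computation follows, assuming the convention above in which 0 denotes an unskewed quadrilateral.

```python
import numpy as np

def skew_factor(quad):
    """Skew factor of a quadrilateral given as four (x, y) vertices in
    order. Computed as one minus the ratio of the shorter diagonal to the
    longer diagonal: a rectangle (equal diagonals) scores 0, while a
    heavily skewed parallelogram approaches 1."""
    q = np.asarray(quad, dtype=float)
    d1 = np.linalg.norm(q[0] - q[2])  # one diagonal
    d2 = np.linalg.norm(q[1] - q[3])  # the opposing diagonal
    return 1.0 - min(d1, d2) / max(d1, d2)

# Example: a unit square has equal diagonals, so its skew factor is 0.
print(skew_factor([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 0.0
```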


The remaining parallelograms and trapezoids are then dewarped. The dewarping process is particularly important for the trapezoids because it is used to convert the trapezoid into a rectangular image. The dewarping process uses the four vertices of the quadrilateral and the four vertices of an un-rotated rectangle with an aspect ratio of 2:1 (width:height), or any other suitable license plate aspect ratio, to compute a perspective transform. The aspect ratio may be pixel width:pixel height of the image. The perspective transform is applied on the region around the quadrilateral and the 2:1 aspect ratio object image is cropped out. The cropped object image, or patch, is an object image comprising a candidate license plate image.
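Using OpenCV, the perspective transform described above might look like the following sketch; the 240x120 output size is an assumed instance of the 2:1 aspect ratio, and the vertex ordering is an assumption for illustration.

```python
import cv2
import numpy as np

def dewarp_patch(image, quad, width=240, height=120):
    """Map a quadrilateral (four vertices ordered top-left, top-right,
    bottom-right, bottom-left) onto an un-rotated 2:1 rectangle and crop
    the result out of the image."""
    src = np.asarray(quad, dtype=np.float32)
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    m = cv2.getPerspectiveTransform(src, dst)   # perspective transform
    return cv2.warpPerspective(image, m, (width, height))  # the patch
```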


The patch is then provided to the patch processor 925, which will search for alpha-numeric characters in the patch, find new object images within the patch, fit rectangles around those object images, and compute a score from the fit rectangles. The score may be based on a virtual line that is drawn across the detected object images. If a line exists that has a minimal slope, the object images on that line may receive a score that indicates the object image is highly likely to be a license plate image. If no line with a minimal slope is detected, then an alert may be returned to the rendering module that a license plate image was not detected in the image. Scores may be calculated for several different patches from the same image, and it follows that more than one license plate image may be detected in the same image. Once the presence of a license plate image is detected, the license plate information 935 may be transmitted to a server for OCR and further processing. In some aspects of the apparatus, the license plate information is an image file comprising the license plate image. Additionally, the process for scoring the patch will be described in more detail with respect to FIG. 21.



FIG. 13 illustrates an exemplary embodiment of a diagram of the rendering module 650. The rendering module 650 may receive as input alert information 1335 from the image filter, or information 1330 about detected object images from the license plate detector. The rendering module will then communicate rendering instructions 1340 to the display 655. The rendering module 650 includes an overlay processor 1305, a detection failure engine 1310, and an image renderer 1315.


The overlay processor 1305 receives the information 1330 about the detected object images from the license plate detector 630. As discussed above, such information may include coordinates of detected object images and rectangles determined to fit around those object images. The rectangle information is then provided to the detection failure engine 1310, which will determine that object images have been detected by the license plate detector 630. The detection failure engine 1310 may then forward the information about the rectangles to the image renderer 1315, which will provide rendering instructions 1340 to the display for how and where to display the rectangles around the image received from the imager 610. Such information may include pixel coordinates associated with the size and location of the rectangle and color information. For instance, if the license plate detector 630 determines that a detected object image is more likely to be an actual license plate image than the other detected object images, the rendering module 650 may instruct the display 655 to display the rectangle around the more likely object image in a way that is visually distinct from the other rectangles. For instance, the rectangle around the object image more likely to be a license plate image may be displayed in a different color than the other rectangles in the display.
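As an illustration of these rendering instructions, the sketch below draws the candidate rectangles with OpenCV; the specific colors and thicknesses are assumptions, since the description only requires that the most likely candidate be visually distinct.

```python
import cv2

def draw_overlays(frame, rects, best_index):
    """Draw a rectangle around each detected object image; the rectangle
    around the most likely license plate candidate (best_index) is drawn
    in a distinct color and thickness."""
    for i, (x, y, w, h) in enumerate(rects):
        color = (0, 255, 0) if i == best_index else (0, 0, 255)  # BGR, assumed
        thickness = 3 if i == best_index else 1
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, thickness)
    return frame
```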


However, in some instances, the license plate detector 630 may not detect any object images. In such instances, the overlay processor will not forward any rectangle information to the detection failure engine 1310. The detection failure engine 1310 will then determine there has been an object image detection failure and signal an alert to the image renderer 1315. The image renderer 1315 will then communicate the display rendering instructions 1340 for the alert to the display 655. The license plate detection alerts have been described in greater detail above.


Additionally, the image filter 625 may provide information to the image renderer 1315 indicating an alert that the captured image cannot be processed for some reason such as darkness, noise, blur, or any other reason that may cause the image to be otherwise unreadable. The alert information from the image filter 625 is provided to the image renderer 1315, which then provides the rendering display instructions 1340 to the display 655 indicating how the alert will be displayed. The image filter alerts have been discussed in detail above.


The following FIGS. 14-22 provide exemplary illustrations and processes detailing the functionality of the license plate detection module 630. FIGS. 14-22 are devised to illustrate how the license plate detection apparatus goes from an optical image comprising many object images to detecting a license plate image among the object images.



FIG. 14 illustrates an exemplary embodiment of a scene 1400 that may be captured by a mobile apparatus 1410. The mobile apparatus 1410 may be similar to the mobile apparatus 130 described with respect to FIG. 1. The scene 1400 includes a structure 1425, a road 1430, a vehicle 1420, and a license plate 1405. The mobile apparatus 1410 includes a display area 1415.


As illustrated, the mobile apparatus 1410 has activated the image capture functionality of the mobile apparatus 1410. The image capture functionality may be an application that controls a camera lens and imager built into the apparatus 1410 that is capable of taking digital images. In some aspects of the apparatus, the image capture functionality may be activated by enabling an application which activates the license plate detection apparatus capabilities described in FIG. 6. In this example, the mobile apparatus 1410 may be capturing a still image, several images in burst mode, or video, in real-time for processing by the license plate detection apparatus. For instance, the vehicle 1420 may be moving while the image capture process occurs, and/or the mobile apparatus may not be in a stationary position. In such instances, the license plate detection apparatus may determine the best video frame taken from the video.



FIG. 15 provides a high level illustration of an exemplary embodiment of how an image may be rendered on an apparatus 1500 by the license plate detection apparatus and transmission of a detected license plate image to a server 1550. As shown, the apparatus 1500 includes a display area 1515 and an exploded view 1555 of the image that is rendered in display area 1515. The exploded view 1555 includes object images 1525, rectangles 1520 that surround the object images, overlapping rectangles 1530, candidate license plate image 1505, and a rectangle 1510 that surrounds the candidate license plate image 1505. In the event that a license plate image is detected and captured in the display area 1515, the apparatus 1500 may wirelessly transmit license plate image data 1535 over the Internet 1540 to a server 1550 for further processing. In some aspects of the apparatus, the license plate image data may be an image file comprising a license plate image.


As shown in exploded view 1555, the object detector 905 of the license plate detection apparatus has detected several object images 1525, as well as a candidate license plate image 1505. As shown, the rendering module 650 has used information communicated from the license plate detector 630 to overlay rectangles around detected object images 1525 including the candidate license plate image 1505. The rendering module 650 has also overlaid rectangles that differ in appearance around object images that are less likely to be license plate images. For instance, rectangles 1520 appear as dashed lines, while rectangle 1510 appears as a solid line. However, as those skilled in the art will appreciate, the visual appearance of the rectangles is not limited to only those illustrated in exploded view 1555. In fact, the visual appearance of the rectangles may differ by color, texture, thickness, or any other suitable way of indicating to a user that at least one rectangle is overlaid around an object image that is more likely to be a license plate image than the other object images in which rectangles are overlaid.


Exploded view 1555 also illustrates overlapping rectangles 1530. As discussed above, the quad filter 915 of the license plate detector 630 may recognize the overlapping rectangles 1530 and discard some of the rectangles, and detected object images within those discarded rectangles, as appropriate.


As is also illustrated by FIG. 15, the license plate detection apparatus has detected the presence of a candidate license plate image 1505 in the image. As a result, the mobile apparatus 1500 will transmit the license plate image data 1535 associated with the license plate image over the internet 1540 to the server 1550 for further processing. Such further processing may include OCR and using the license plate information derived from the OCR process to perform a number of different tasks that may be transmitted back to the mobile apparatus 1500 for rendering on the display area 1515. Alternatively, in some aspects of the apparatus, the OCR capability may be located on the mobile apparatus 1500 itself. In such aspects, the mobile apparatus may wirelessly transmit the license plate information such as state and number data rather than information about the license plate image itself.



FIG. 16 conceptually illustrates an exemplary embodiment of a more detailed process 1600 for processing an electrical signal to recover license plate information as discussed at a high level in process 400. The process may also be applied to detecting license plate information in a sample of an electrical signal representing a frame of a video image presented on a display as described in process 500 of FIG. 5. The process 1600 may be performed by the license plate detection apparatus. The process 1600 may begin after the mobile apparatus has activated an application, which enables the image capture feature of a mobile apparatus.


As shown, the process 1600 converts (at 1610) the captured image to grayscale. As discussed above, converting the image to grayscale makes for greater efficiency in distinguishing object images from background according to the level of contrast between adjacent pixels. Several filtering processes may also be performed on the image during the grayscale conversion process. The process 1600 then detects (at 1615) object image(s) from the grayscale image. Such object images may be the object images 1505 and 1525 as illustrated in FIG. 15. The process 1600 processes (at 1620) a first object image. The processing of object images will be described in greater detail below.


At 1625, the process 1600 determines whether an object image fits the criteria for a license plate image. When the object image fits the criteria for a license plate image, the process 1600 transmits (at 1630) the license plate image (or image data) to a server such as the server 1550. In some aspects of the process, an object image fits the criteria for a license plate when a score of the object image is above a threshold value. Such a score may be determined by a process which will be discussed below. The process 1600 then determines (at 1635) whether there are more object images detected in the image and/or whether the object image being processed does not exceed a threshold score.


When the process 1600 determines (at 1625) that an object image does not fit the criteria for a license plate image, the process 1600 does not transmit any data and determines (at 1635) whether more object images were detected in the image and/or whether the object image being processed did not exceed a threshold score. When the process 1600 determines that more object images were detected in the image and/or the object image being processed did not exceed a threshold score, the process 1600 processes (at 1640) the next object image. The process then returns to 1625 to determine if the object image fits the criteria of a license plate image.


When the process 1600 determines (at 1635) that no more object images were detected in the image and/or the object image being processed exceeds a threshold score, the process 1600 determines (at 1645) whether at least one license plate image was detected in the process 1600. When a license plate image was detected, the process ends. When a license plate image was not detected, an alert is generated (at 1650) and the rendering module 650 sends instructions to display a detection failure message at the display 655. In some aspects of the process, the detection failure alert may provide guidance to the user for capturing a better image. For instance, the alert may guide the user to move the mobile apparatus in a particular direction such as up or down and/or adjust the tilt of the mobile apparatus. Other alerts may guide the user to find a location with better lighting or any other suitable message that may assist the user such that the license plate detection apparatus has a greater likelihood of detecting at least one license plate image in an image.


The process 1600 may be performed in real-time. For instance, the process 1600 may be performed successively as more images are captured, either by capturing several frames of video as the mobile apparatus or the object images in the scene move and/or are tracked, or by using an image capture device's burst mode. The process 1600 provides the advantage of being able to detect and read a license plate image in an image at virtually any viewing angle and under a variety of ambient conditions. Additionally, the criteria for determining a license plate image are based on the operations performed by the license plate detector. These operations will be further illustrated in the following figures as well.



FIGS. 17-19 illustrate the operations performed by the quad processor 910. For instance, FIG. 17 illustrates an exemplary embodiment of an image 1700 comprising a license plate image 1710. Although not shown, the license plate may be affixed to a vehicle, which would also be part of the image. The image 1700 includes the license plate image 1710 and a rectangle 1705. As shown in the exemplary image 1700, an object image has been detected by the object detector 905 of the license plate detector 630. The object image in this example is the license plate image 1710. After the object image was detected, the quad processor 910 fit a rectangle 1705 around the license plate image 1710. Information associated with the rectangle may be provided to the rendering module 650 for overlaying a rectangle around the detected license plate image in the image displayed on the display 655.



FIG. 18 illustrates an exemplary embodiment of a portion of an image 1800 comprising the same license plate image 1710 illustrated in FIG. 17. Image portion 1800 includes the license plate image 1810 and a quadrilateral 1805. As discussed above with respect to FIGS. 10-12, the quad processor 910 of the license plate detector 630 performs several functions to derive a quadrilateral that closely fits the detected object image. Once the quadrilateral has been derived, the quad processor 910 then computes the skew factor and/or keystone discussed above.


Once the quadrilateral is determined to have a low skew (or a skew below a threshold value) or the trapezoid has been determined to have a low keystone (or a keystone below a threshold value), the region(s) of interest detector 920 can dewarp the image to move one step closer to confirming the presence of a license plate image in the image and to also generate a patch that is easily read by OCR software. In some aspects of the apparatus, the patch is the license plate image that has been cropped out of the image.



FIG. 19 is an exemplary embodiment of the dewarping process being performed on a license plate image 1905 to arrive at license plate image 1910. FIG. 19 illustrates two stages 1901 and 1902 of the dewarping process.


As shown, the first stage 1901 illustrates the license plate image 1905 in a trapezoidal shape similar to the shape of the quadrilateral 1805 illustrated in FIG. 18. The second stage 1902 illustrates the license plate image 1910 after the dewarping process has been performed. As shown, license plate image 1910 has undergone a perspective transform and rotation. The license plate image 1910 as shown in the second stage 1902 is now in a readable rectangular shape. In some aspects of the dewarping process, the process may also correct the license plate image if it is skewed or scale the license plate image to a suitable size.


The ability to accurately dewarp quadrilaterals and especially the quadrilaterals that are license plate images taken at any angle is an integral piece of the license plate detection apparatus. The dewarping capability enables a user to capture an image of a license plate at a variety of different angles and distances. For instance, the image may be taken with any mobile apparatus at virtually any height, direction, and/or distance. Additionally, it provides the added benefit of being able to capture a moving image from any position. Once the license plate image has been dewarped, the region(s) of interest detector 920 will crop the rectangular license plate image to generate a patch. The patch will be used for further confirmation that the license plate image 1910 is, in fact, an image of a license plate.



FIG. 20 conceptually illustrates an exemplary embodiment of a process 2000 for processing an image comprising a license plate image. The process 2000 may be performed by a license plate detection apparatus. The process 2000 may start after the license plate detection apparatus has instantiated an application that enables the image capture feature of a mobile apparatus.


As shown, the process 2000 detects (at 2010) at least one object image, similar to the object image detection performed by process 1600. The following describes in greater detail the process of processing (at 1620) the image.


For instance, the process 2000 then fits (at 2015) a rectangle to each detected object image in order to reduce the search space to the detected object images. The information associated with the rectangle may also be used as an overlay to indicate to users of the license plate detection apparatus the location(s) of the detected object image(s). The process then uses the rectangles to form (at 2020) a convex hull around each object image. The convex hull, as discussed above, is a polygon of several vertices and edges that fits closely around an object image without having any edges that overlap the object image.


At 2025, the process 2000 compresses the convex hull to a quadrilateral that closely fits around the detected object image. The process of compressing the convex hull into a quadrilateral was discussed in detail with respect to FIGS. 9-12. The process 2000 then filters (at 2030) duplicate rectangles and/or quadrilaterals. In some aspects of the process, rectangles or quadrilaterals that are similar in size and overlap may be discarded based on some set criteria. For example, the smaller rectangle and/or quadrilateral may be discarded.


The process 2000 calculates (at 2035) a skew factor. The process 2000 then dewarps (at 2040) the quadrilateral. The process then crops (at 2045) the object image within the quadrilateral, which becomes the patch. The patch will be used for further processing as discussed below. In some aspects of the process, the object image is cropped at a particular ratio that is common for license plates of a particular region or type. For instance, the process may crop out a 2:1 aspect ratio patch, of the image, which is likely to contain the license plate image. Once the quadrilateral is cropped, the process 2000 then ends.



FIG. 21 illustrates an exemplary embodiment of a diagram for determining whether a patch 2100 is an actual license plate image. The patch 2100 includes a candidate license plate image 2105, alpha-numeric characters 2120 and 2140, rectangles 2115, sloped lines 2130, zero-slope line 2110, and graphic 2125.


As shown in the patch 2100, rectangles are fit around detected object images within the patch. In some aspects of the apparatus, object images may be detected using the MSER object detection method. Conjunctively or conversely, some aspects of the apparatus may use edge and/or corner detection methods to detect the object images. In this case, the detected object images are alpha-numeric characters 2120 and 2140 as well as graphic 2125. After detecting the alpha-numeric characters 2120 and 2140 as well as graphic 2125, a stroke width transform (SWT) may be performed to partition the detected object images into those that are likely from an alpha-numeric character and those that are not. For instance, the SWT may try to capture only the features that are effective for identifying alpha-numeric characters and use certain geometric signatures of alpha-numeric characters to filter out non-alpha-numeric areas, resulting in more reliable text regions. In such instances, the SWT may partition the alpha-numeric characters 2120 and 2140 from the graphic 2125. Thus, only those object images that are determined to likely be alpha-numeric characters, such as alpha-numeric characters 2120 and 2140, are later used in a scoring process to be discussed below. In some aspects of the apparatus, some object images other than alpha-numeric characters may pass through the SWT partitioning. Thus, further processing may be necessary to filter out the object images that are not alpha-numeric characters and also to determine whether the alpha-numeric characters in the license plate image fit the characteristics common for a license plate image.


Following the partitioning of alpha-numeric characters from non-alpha-numeric characters, a line is fit through the centers of each pair of rectangles. For instance, a sloped line is shown for the pair consisting of the D and object image 2125. The distances of all other rectangles to each line, such as lines 2130 and 2110, are accumulated, and the pair with the smallest summed distance is used as a text baseline. For instance, the zero-slope line 2110 has the smallest summed distance of the rectangles to the line 2110. Some aspects of the apparatus may implement a scoring process to determine the presence of a license plate image. For instance, some aspects of the scoring process may determine a score for the determined alpha-numeric characters on the zero-slope line 2110. The score may increase when the rectangle around the alpha-numeric character is not rotated beyond a threshold amount. The score may decrease if the detected alpha-numeric character is too solid. In some aspects of the scoring process, solidity may be defined as the character area divided by the rectangle area. When the calculated solidity is over a threshold amount, the detected object image may be deemed too solid and the score decreases.
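A sketch of the solidity test follows; the 0.8 cutoff is an assumed value, as the description leaves the threshold unspecified.

```python
import numpy as np

def too_solid(char_mask, rect_w, rect_h, threshold=0.8):
    """Solidity is the character area divided by the rectangle area.
    char_mask is a boolean array marking the character's pixels; when the
    ratio exceeds the (assumed) threshold, the candidate is deemed too
    solid and its score is decreased."""
    solidity = np.count_nonzero(char_mask) / float(rect_w * rect_h)
    return solidity > threshold
```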


In other aspects of the scoring process, for each rectangle 2115 in the patch 2100, the patch score increases by some scoring value if the center of the rectangle is within a distance X of the baseline, where X is the shorter of the rectangle height and width. For instance, if the scoring value is set at 1, the patch score of the patch 2100 would be 7, because the centers of the rectangles around the characters "1DDQ976" each lie within a distance of the baseline that is shorter than the width of the rectangle. Furthermore, the zero slope of the line 2110 between the alpha-numeric characters 2120 further confirms that this patch is likely a license plate image, since license plates typically have a string of characters along a same line. Sloped lines 2130 are, therefore, unlikely to provide any indication that the patch is a license plate image because the distance between characters is too great and the slope is indicative of a low likelihood of a license plate image. Accordingly, in some aspects of the process, sloped lines 2130 are discarded in the process.
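The baseline scoring described above might be sketched as follows, assuming NumPy and a per-rectangle scoring value of 1; the helper names are hypothetical.

```python
import numpy as np
from itertools import combinations

def _dists_to_line(p, q, pts):
    """Perpendicular distance of each point in pts to the line through p, q."""
    d = q - p
    norm = np.linalg.norm(d)
    if norm == 0:
        return np.full(len(pts), np.inf)
    v = pts - p
    return np.abs(d[0] * v[:, 1] - d[1] * v[:, 0]) / norm

def patch_score(rects):
    """Score a patch from character-candidate rectangles (x, y, w, h): fit
    a line through the centers of every rectangle pair, keep the pair whose
    line minimizes the summed distance of all centers (the text baseline),
    then add one point for each rectangle whose center lies within X of
    that baseline, X being the shorter of its height and width."""
    centers = np.array([(x + w / 2.0, y + h / 2.0) for x, y, w, h in rects])
    if len(centers) < 2:
        return 0
    i, j = min(combinations(range(len(centers)), 2),
               key=lambda ij: _dists_to_line(centers[ij[0]],
                                             centers[ij[1]], centers).sum())
    dists = _dists_to_line(centers[i], centers[j], centers)
    return sum(1 for (x, y, w, h), dist in zip(rects, dists)
               if dist <= min(w, h))
```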


In some aspects of the process, when the patch has a score above a threshold value, the patch is determined to be a license plate image, and the license plate detection is complete. The license plate image data is then transmitted to a server for further processing and for use in other functions computed by the server, the results of which are provided to the license plate detection apparatus.



FIG. 22 conceptually illustrates an exemplary embodiment of a process 2200 for processing a patch comprising a candidate license plate image such as patch 2100. The process may be performed by the license plate detection apparatus. The process may begin after a patch has been cropped from an image file.


As shown, the process 2200 processes (at 2205) only substantially rectangular portion(s) of the patch to locate alpha-numeric characters. The process 2200 fits (at 2210) rectangles around the located alpha-numeric characters and computes scores based on the distances between rectangle pairs as discussed above with respect to FIG. 21. The process 2200 selects (at 2215) the patch with the best score to recognize as a license plate image. Alternatively or conjunctively, the process 2200 may select all patches that have a score above a threshold level to be deemed as license plate images. In such instances, multiple patches, or instances of license plate information, would be transmitted to the server for further processing.



FIG. 23 illustrates an exemplary embodiment of an insurance underwriting data flow 2300. The data flow includes license plate image 2305, VIN 2310, vehicle configuration (e.g., VIN explosion) 2315, insurance quote 2345 and optional elements 2350. The optional elements include a driver's license image 2320, driver's license information 2335, validation module 2330, and alert 2340.


As shown, the data flow begins with the license plate image 2305. The VIN 2310 is then recovered from the license plate image 2305. Then the vehicle configuration 2315 is recovered from the VIN 2310. Based on the vehicle configuration 2315, an insurance quote 2345 may be generated for a vehicle based on the license plate image. However, this quote may be a simple estimate or include a range of values without incorporating the optional information 2350.


Optional information 2350 may be concurrently or consecutively processed by the apparatus. As shown, the data flow for the optional information begins with the driver's license image 2320. The apparatus may prompt a user to capture a driver's license image before or after capturing the license plate image. The driver's license information 2335 is then recovered from the driver's license image. Such information may be recovered by using OCR software similar to the OCR software used to recover the license plate information. The recovered driver's license information 2335 may include information such as state, number, name, address, date of birth, and any other pertinent information that may assist in providing the insurance quote as well as underwriting the requestor for insurance if the requestor is the owner of the vehicle. The driver's license information 2335 is then transmitted to the validation module 2330. The validation module determines whether the driver's license information matches the owner information associated with the VIN. If the information does not match, the validation module 2330 transmits the alert 2340. Such an alert may be displayed at a mobile apparatus. However, if the validation module determines that the driver's license information and vehicle ownership information match, then the driver's license information is transmitted along with the vehicle information to determine a more accurate insurance quote that incorporates all of the requisite driver information. At this point, if the requestor is happy with the insurance quote 2345, the requestor may interact with the mobile apparatus to initiate the generation of an insurance policy from the mobile apparatus.


In some aspects of the apparatus, once collected, the driver's license information may be maintained on the server 230 for consistently processing more accurate insurance quotes. Thus, the described data flow 2300 provides an efficient way to obtain an insurance estimate, be underwritten for insurance, and efficiently receive an insurance policy when the driver's license information matches the ownership information associated with the vehicle associated with the license plate image. It follows that an insurance quote can be generated by simply capturing an image of a license plate, and a more accurate insurance quote may be generated by simply capturing the license plate image and the driver's license image, thereby streamlining the insurance underwriting process.



FIG. 24 illustrates an exemplary embodiment of an operating environment 2400 for communication between a gateway 2495 and client apparatuses 2410, 2430, and 2470. In some aspects of the service, client apparatuses 2410, 2430, and 2470 communicate over one or more wired or wireless networks 2440 and 2460. For example, wireless network 2440, such as a cellular network, can communicate with a wide area network (WAN) 2480, such as the internet, by use of mobile network gateway 2450. A mobile network gateway in some aspects of the service provides a packet oriented mobile data service or other mobile data service allowing wireless networks to transmit data to other networks, such as the WAN 2480. Likewise, access device 2460 (e.g., IEEE 802.11b/g/n wireless access device) provides communication access to the WAN 2480. The apparatuses 2410, 2430, and 2470 can be any portable electronic computing apparatus capable of communicating with the gateway 2495. For instance, the apparatuses 2410 and 2470 may have an installed application that is configured to communicate with the gateway 2495. The apparatus 2430 may communicate with the gateway 2495 through a website having a particular URL. Alternatively, the apparatus 2430 may be a non-portable apparatus capable of accessing the internet through a web browser.


In order to process the license plate information to provide an insurance quote for a vehicle, the gateway 2495 may also communicate with third party services 2490 that provide information such as vehicle configuration and vehicle identification numbers (VIN)s. Additionally, the gateway 2495 may also communicate with various insurance underwriting services and services capable of validating vehicle ownership as part of the insurance approval process. As shown, the gateway 2495 may communicate directly with at least one third party processing service 2490 if such services are located on the same network as the gateway 2495. Alternatively, the gateway 2495 may communicate with at least one of the third party processing services 2490 over the WAN 2480 (e.g., the internet).


In some aspects of the service, the process for providing an insurance quote may incorporate the location of the vehicle and/or device. In such aspects, the service may optionally use location information acquired through a GPS satellite 2420. The apparatuses 2410 and 2470 may be configured to use a GPS service and provide location information to the gateway 2495 using the connections discussed above. The provided location information may be used by the gateway 2495 to adjust the requisite criteria for obtaining an insurance quote based on the geographic region indicated by the GPS information. For instance, certain locations may be known as higher risk areas. For such locations, the GPS information may result in a higher insurance quote than would be provided for a different location. However, the location may also be configurable by user input of a home address received at one of the apparatuses 2410 or 2470. Such user input may include deriving an address from a driver's license as discussed in the previous figure. Thus, the service described in FIG. 24 provides greater granularity and accuracy in obtaining an insurance quote, which may then be used to underwrite a user for insurance in near real time.



FIG. 25 illustrates an exemplary flow of data between a gateway 2500 and various other modules. The gateway 2500 and modules 2510-2560 may be located on a server such as the server 230. In some aspects of the apparatus, the gateway 2500 may be a request router in that it receives requests from the various modules 2510-2560 and routes the requests to at least one of the appropriate module 2510-2560. The gateway 2500 communicates with various modules 2510-2560, which may communicate with various third party services to retrieve data that enables the gateway 2500 to provide an insurance quote to a client apparatus 2570 from an image of a license plate.


As shown, the client apparatus 2570 may use a network interface 2560 to transmit at least one license plate image recovered from an optical image taken by the client apparatus 2570. Additionally, geographic information may also be transmitted from the client apparatus 2570 over the network interface 2560. The client apparatus 2570 may include an installed application providing instructions for how to communicate with the gateway 2500 through the network interface 2560. In this example, the network interface 2560 provides license plate image information or text input of a license plate, as well as geographic information, to the gateway 2500. For instance, as discussed above, the network interface 2560 may transmit text strings received as user input at the client apparatus 2570, or a license plate image processed by the client apparatus 2570, to the gateway 2500. As further shown in this example, the gateway 2500 may route the license plate image data to the OCR module 2510 to perform the OCR text extraction of the license plate information. In this example, the OCR module 2510 may have a specialized or commercial OCR software application installed that enables accurate extraction of the license plate number and state. The OCR module may be similar to the OCR module discussed in FIG. 6. In one example, the OCR module 2510 may also have the capability of determining whether a license plate image is clear enough to provide for accurate text extraction. In this example, if the license plate image does not contain a clear image or the image quality is too low, the OCR module may alert the gateway 2500 to transmit a warning to the client apparatus 2570. In an alternative example, a license plate image may be recovered and transmitted to the gateway 2500 for further processing.


Once the license plate number and state information is extracted and converted to text strings, the gateway 2500 will provide the extracted text to a translator 2520, which is capable of determining a VIN from the license plate information. The translator 2520 may communicate with third party services using functionality provided in an application programming interface (API) associated with the third party services. Such services may derive the vehicle's VIN from the license plate information. In some aspects of the server, the various modules 2510-2550 may also be configured to communicate with the third party services or apparatuses (not shown) using APIs associated with the third party services. In such aspects of the server, each module 2510-2550 may route a request through the gateway 2500 to the network interface 2560, which will communicate with the appropriate third party service (not shown).


The gateway 2500 then routes the retrieved VIN to the VIN decoder 2530 along with a request to generate a VIN explosion. The VIN decoder 2530 is capable of using the VIN to generate a VIN explosion by requesting the VIN explosion from a third party service. The VIN explosion includes all of the features, attributes, options, and configurations of the vehicle associated with the VIN (and the license plate image). In some aspects of the apparatus, the VIN explosion may be provided as an array of data, which the gateway 2500 or VIN decoder 2530 is capable of understanding, processing, and/or routing accurately. Similar to the VIN translator 2520, the VIN decoder 2530 may communicate with a third party service by using an API associated with the service. The VIN and/or vehicle data derived from the VIN explosion may be routed back through the gateway 2500 and through the network interface 2560 to the client apparatus 2570. As discussed above, the client apparatus may display the VIN and/or vehicle data.
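As a sketch of this plate-to-VIN and VIN-explosion routing, the following uses the Python requests library against hypothetical third-party endpoints; the URLs, field names, and authentication scheme are assumptions, since the description does not name a provider or API.

```python
import requests

# Hypothetical third-party endpoints; no particular provider is named.
PLATE_TO_VIN_URL = "https://api.example.com/v1/plate-to-vin"
VIN_EXPLOSION_URL = "https://api.example.com/v1/vin-explosion"

def lookup_vehicle(plate, state, api_key):
    """Translate a plate to a VIN, then expand the VIN into the vehicle
    configuration (the "VIN explosion"). Response field names are assumed."""
    headers = {"Authorization": "Bearer " + api_key}
    vin = requests.get(PLATE_TO_VIN_URL,
                       params={"plate": plate, "state": state},
                       headers=headers, timeout=10).json()["vin"]
    config = requests.get(VIN_EXPLOSION_URL, params={"vin": vin},
                          headers=headers, timeout=10).json()
    return {"vin": vin, "configuration": config}
```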


Next, the VIN explosion data may be provided by the gateway 2500 to the insurance quote generator 2550. The insurance quote generator 2550 may optionally receive and confirm that the VIN matches the information from the VIN explosion. Additionally, if a user wishes to insure a vehicle that the user owns, driver's license information may be transmitted from the apparatus 2570 over the network interface 2560 and routed by the gateway 2500 to the validation module 2540. The validation module 2540 may query a third party service to verify that the driver's license information matches information related to the ownership of the particular VIN. If such information does not match up, then an alert may be generated for display at the apparatus 2570. However, if the validation module 2540 determines that the VIN and driver's license information match up, the insurance quote generator 2550 may receive information about the driver along with the VIN explosion to underwrite a user for insurance. Once underwritten, at least one accurate insurance quote may be transmitted through the gateway 2500 to the apparatus 2570. Thus, this exemplary diagram illustrates two possible options for receiving an insurance quote. The first option may be an estimate simply based on the information derived from the vehicle license plate image. However, the second option provides an owner of a vehicle the option to be underwritten for insurance and obtain insurance instantaneously. Thus, a user may be able to make a more informed decision by having an estimated insurance cost of a vehicle. In some embodiments of the apparatus, the data flow pattern may bypass the validation module 2540 and provide a more accurate estimated insurance quote, based on the retrieved driver information as well as the VIN explosion, when the insurance quote requestor is not the owner of the vehicle.


Once generated, the insurance quote may then be routed through the gateway 2500 to the network interface 2560 for display on the apparatus 2570. In some aspects of the apparatus, a user may wish to obtain insurance coverage based on the received quote. In such aspects, an interaction with the apparatus 2570 may initiate an action by the insurance quote generator 2550 to generate an actual insurance policy based on many of the factors discussed above. However, one of ordinary skill in the art will recognize that several different factors may go into creating an insurance policy. Such factors may either be retrieved from a third party service or provided by user interaction with the apparatus 2570.


Additionally, geographic information is provided to the insurance quote generator 2550. The geographic information is used to further refine the insurance quote. Such geographic information may be received by GPS or retrieved from a driver's license image as discussed above. Additionally, the apparatus may receive user input of a location for generating the insurance quote. Providing the option to preview insurance quotes and be underwritten for an insurance policy based simply on a couple of images gives the user a simple and efficient way to estimate the cost of ownership of a particular vehicle, and once the user becomes the owner of the vehicle, the user may receive a new policy in a highly efficient manner.



FIG. 26 conceptually illustrates an exemplary embodiment of a process 2600 for transmitting an insurance quote from a license plate image. The process 2600 may be performed by a server such as the server 230. The process may begin after a mobile apparatus has recovered a suitable license plate image for transmission to the server 230 and/or a driver's license image.


As shown, the process 2600 receives (at 2610) license plate image information or text input from a mobile apparatus. The text input may be information associated with a vehicle license plate such as a state and alpha-numeric characters. The process 2600 then requests (at 2630) a VIN associated with the license plate information. The process 2600 may request the VIN by sending the request to a third party server. In some aspects of the server, the process 2600 communicates with the third party server by using an API.


At 2640, the process 2600 receives the VIN associated with the license plate information. The process 2600 then requests (at 2650) a vehicle configuration using the received VIN. The vehicle configuration may include different features and options that are equipped on the vehicle. For instance, such features and options may include different wheel sizes, interior trim, vehicle type (e.g., coupe, sedan, sport utility vehicle, convertible, truck, etc.), sound system, suspension, and any other suitable vehicle configuration or feature typically found in a vehicle. The vehicle configuration may be the VIN explosion discussed in the previous figure. The process 2600 receives (at 2660) the vehicle configuration data.


At 2680, the process 2600 may optionally receive information from a driver's license image from the mobile apparatus. Such information may be useful for determining the requestor's driving record so that a more accurate insurance quote may be generated. Additionally, location information may be recovered from the driver's license information to provide a quote based on where the requestor resides. In the case that the requestor wants to insure a particular vehicle, the process 2600 optionally requests (at 2685) validation that the driver's license information matches the vehicle owner's information. At 2686, the process determines whether the information matches. When the information does not match, the process 2600 transmits (at 2687) an alert to the mobile apparatus. However, as discussed above, steps 2680-2687 may only be performed when a vehicle owner (or alleged vehicle owner) is seeking an insurance policy for the owner's vehicle. If the requestor is simply looking for an estimate, some or all of the previously described steps may be bypassed.


At 2690, the process 2600 requests an insurance quote from at least one entity capable of insurance underwriting. Such a quote may simply be an estimate based on the vehicle configuration. However, in some aspects of the process, the insurance quote may be based on a number of factors such as location and/or driver information. The process 2600 transmits (at 2695) the insurance quote to the mobile apparatus for display at the apparatus. Then the process ends.



FIG. 27 illustrates an exemplary embodiment of a system 2700 that may implement the license plate detection apparatus. The electronic system 2700 of some embodiments may be a mobile apparatus. The electronic system includes various types of machine readable media and interfaces. The electronic system includes a bus 2705, processor(s) 2710, read only memory (ROM) 2715, input device(s) 2720, random access memory (RAM) 2725, output device(s) 2730, a network component 2735, and a permanent storage device 2740.


The bus 2705 communicatively connects the internal devices and/or components of the electronic system. For instance, the bus 2705 communicatively connects the processor(s) 2710 with the ROM 2715, the RAM 2725, and the permanent storage 2740. The processor(s) 2710 retrieve instructions from the memory units to execute processes of the invention.


The processor(s) 2710 may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Alternatively, or in addition to the one or more general-purpose and/or special-purpose processors, the processor may be implemented with dedicated hardware such as, by way of example, one or more FPGAs (Field Programmable Gate Array), PLDs (Programmable Logic Device), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits.


Many of the above-described features and applications are implemented as software processes of a computer program product. The processes are specified as a set of instructions recorded on a machine-readable storage medium (also referred to as a machine-readable medium). When these instructions are executed by one or more of the processor(s) 2710, they cause the processor(s) 2710 to perform the actions indicated in the instructions.


Furthermore, software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may be stored on or transmitted over as one or more instructions or code on a machine-readable medium. Machine-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by the processor(s) 2710. By way of example, and not limitation, such machine-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processor. Also, any connection is properly termed a machine-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects machine-readable media may comprise non-transitory machine-readable media (e.g., tangible media). In addition, for other aspects machine-readable media may comprise transitory machine-readable media (e.g., a signal). Combinations of the above should also be included within the scope of machine-readable media.


Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems 2700, define one or more specific machine implementations that execute and perform the operations of the software programs.


The ROM 2715 stores static instructions needed by the processor(s) 2710 and other components of the electronic system. The ROM may store the instructions necessary for the processor(s) 2710 to execute the processes provided by the license plate detection apparatus. The permanent storage 2740 is a non-volatile memory that stores instructions and data whether the electronic system 2700 is on or off. The permanent storage 2740 is a read/write memory device, such as a hard disk or a flash drive. Storage media may be any available media that can be accessed by a computer. By way of example, the ROM could also be EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.


The RAM 2725 is a volatile read/write memory. The RAM 2725 stores instructions needed by the processor(s) 2710 at runtime. The RAM 2725 may also store the real-time video images acquired during the license plate detection process. The bus 2705 also connects input and output devices 2720 and 2730. The input devices enable the user to communicate information to the electronic system and select commands. The input devices 2720 may be a keypad, image capture apparatus, or a touch screen display capable of receiving touch interactions. The output device(s) 2730 display images generated by the electronic system. The output devices may include printers or display devices such as monitors.
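
Because the RAM 2725 may hold the real-time video frames while detection runs, a bounded buffer is one plausible arrangement; the sketch below is an assumption for illustration, not the claimed implementation.

```python
# Minimal sketch, assuming a bounded in-RAM frame buffer, of holding the
# most recent video frames during license plate detection. The buffer
# depth is an illustrative assumption.
from collections import deque

FRAME_BUFFER_DEPTH = 30  # e.g., roughly one second at 30 frames/second

frame_buffer = deque(maxlen=FRAME_BUFFER_DEPTH)

def on_new_frame(frame):
    """Keep only the newest frames; older ones are dropped automatically."""
    frame_buffer.append(frame)
    return list(frame_buffer)  # snapshot handed to the image analyzer
```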


The bus 2705 also couples the electronic system to a network through the network component 2735. The electronic system may be part of a local area network (LAN), a wide area network (WAN), the Internet, or an intranet by using a network interface. The electronic system may also be a mobile apparatus that is connected to a mobile data network supplied by a wireless carrier. Such networks may include 3G, HSPA, EVDO, and/or LTE.


It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.


The various aspects of this disclosure are provided to enable one of ordinary skill in the art to practice the present invention. Various modifications to exemplary embodiments presented throughout this disclosure will be readily apparent to those skilled in the art, and the concepts disclosed herein may be extended to other apparatuses, devices, or processes. Thus, the claims are not intended to be limited to the various aspects of this disclosure, but are to be accorded the full scope consistent with the language of the claims. All structural and functional equivalents to the various components of the exemplary embodiments described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”

Claims
  • 1. An image detection device for providing dynamic feedback for verifying image capture of an optical image, the image detection device comprising: an image sensor configured to continuously capture a video including a plurality of frames; an image analyzer configured to analyze each frame of the captured video to determine whether each frame includes an object image with a candidate text string; a text string scoring determiner configured to score each candidate text string identified in each respective object image by assigning a respective object image score for each object image based on relative positions of each alphanumeric character therein; a device operation instructor configured to dynamically determine a highest score of the assigned object image scores assigned for the respective object images of the plurality of the frames and to generate an operation adjustment control if the determined highest score is less than a predetermined score threshold; a dynamic instruction generator configured to generate at least one visual instruction based on the operation adjustment control; a display configured to dynamically display the captured video and the generated at least one visual instruction as an overlay on the captured video during the continuous capture of the video by the image sensor, with the overlay for the generated at least one visual instruction providing a user instruction for assisting a user in positioning the image sensor to capture the video to include the object image with the candidate text string; and an object image verifier configured to verify that at least one of the plurality of frames, which is captured after the adjusting of the capture of the video based on the generated at least one visual instruction and comprises the candidate text string, includes a verified candidate text string when the assigned object image score for the at least one frame including the candidate text string is greater than the predetermined score threshold.
  • 2. The image detection device according to claim 1, wherein the text string scoring determiner is configured to score each candidate text string by applying a straight line through each alphanumeric character within each candidate text string and comparing a slope of the applied straight line to a horizontal direction, such that the score of each candidate text string is based on an angle of the applied straight line relative to the horizontal direction.
  • 3. The image detection device according to claim 2, wherein the text string scoring determiner is configured to discard each candidate text string if the respective score indicates that the angle between the applied straight line and the horizontal direction is greater than a predetermined threshold.
  • 4. The image detection device according to claim 1, wherein the dynamic instruction generator is further configured to control the display to generate the overlay as a detection indicator to provide the user instruction for assisting the user in positioning the image sensor to capture the video.
  • 5. The image detection device according to claim 4, wherein the dynamic instruction generator is further configured to generate an alert when the object image verifier fails to verify that the at least one frame of the plurality of frames includes the verified candidate text string.
  • 6. The image detection device according to claim 1, further comprising an interface configured to transmit vehicle license plate information to a remote apparatus upon the object image verifier verifying that the at least one frame includes the verified candidate text string and to receive a vehicle financing offer from the remote apparatus.
  • 7. The image detection device according to claim 1, further comprising an image filter configured to: apply a set of filter parameters to the captured video that includes the plurality of frames; and dynamically adjust the filter parameters based on at least one of color temperature, ambient light, object image location, and the location of the image detection device relative to an object associated with the captured object image.
  • 8. A device for providing dynamic feedback for verifying image capture of an optical image that includes a candidate license plate, the device comprising: an image analyzer configured to analyze a plurality of frames of a captured video to determine whether each frame includes an object image with a candidate text string for the candidate license plate; a text string scoring determiner configured to score each candidate text string identified in each respective object image by assigning a respective object image score for each object image based on relative positions of each alphanumeric character therein; a device operation instructor configured to dynamically determine a highest score of the assigned object image scores assigned for the respective object images of the plurality of frames of the captured video and to generate an operation adjustment control if the determined highest score is less than a predetermined score threshold; a dynamic instruction generator configured to generate at least one visual instruction based on the operation adjustment control; a display configured to dynamically display the captured video and the generated at least one visual instruction as an overlay on the captured video during continuous capture of the video, with the overlay for the generated at least one visual instruction providing a user instruction for assisting a user in capturing the video to include the object image with the candidate text string; and an object image verifier configured to verify that at least one frame of the plurality of frames, which is captured after the adjusting of the video capture based on the generated at least one visual instruction and comprises the candidate text string, includes a verified license plate number when the assigned object image score for the at least one frame including the candidate text string is greater than the predetermined score threshold.
  • 9. The device according to claim 8, further comprising an image sensor configured to continuously capture the video of the candidate license plate that includes the plurality of frames.
  • 10. The device according to claim 8, wherein the text string scoring determiner is configured to score each candidate text string by applying a straight line through each alphanumeric character within each candidate text string and comparing a slope of the applied straight line to a horizontal direction, such that the score of each candidate text string is based on an angle of the applied straight line relative to the horizontal direction.
  • 11. The device according to claim 10, wherein the text string scoring determiner is configured to discard each candidate text string if the respective score indicates that the angle between the applied straight line and the horizontal direction is greater than a predetermined threshold.
  • 12. The device according to claim 9, wherein the dynamic instruction generator is further configured to control the display to generate the overlay as a detection indicator to provide the user instruction for assisting the user in positioning the image sensor to capture the video.
  • 13. The device according to claim 12, wherein the dynamic instruction generator is further configured to generate an alert when the object image verifier fails to verify that the at least one frame of the plurality of frames includes the verified license plate.
  • 14. The device according to claim 8, further comprising an interface configured to transmit vehicle license plate information to a remote apparatus upon the object image verifier verifying that the at least one frame includes the verified license plate and to receive a vehicle financing offer from the remote apparatus that is associated with the verified license plate.
  • 15. The device according to claim 8, further comprising an image filter configured to: apply a set of filter parameters to the captured video that includes the plurality of frames; and dynamically adjust the filter parameters based on at least one of color temperature, ambient light, object image location, and the location of the image detection device relative to an object associated with the captured object image.
  • 16. A method for providing dynamic feedback for verifying image capture of an optical image that includes a candidate license plate, the method comprising: analyzing, by a processor, a plurality of frames of a captured video to determine whether each frame includes an object image with a candidate text string for the candidate license plate; scoring, by the processor, each candidate text string identified in each respective object image by assigning a respective object image score for each object image based on relative positions of each alphanumeric character therein; dynamically determining, by the processor, a highest score of the assigned object image scores assigned for the respective object images of a plurality of frames of the captured video; generating, by the processor, an operation adjustment control if the determined highest score is less than a predetermined score threshold; generating, by the processor, at least one visual instruction based on the operation adjustment control; dynamically displaying, by a display screen controlled by the processor, the captured video and the generated at least one visual instruction as an overlay on the captured video during continuous capture of the video, with the overlay for the generated at least one visual instruction providing a user instruction for assisting a user in capturing the video to include the object image with the candidate text string; and verifying, by the processor, that at least one frame of the plurality of frames, which is captured after the adjusting of the video capture based on the generated at least one visual instruction and comprises the candidate text string, includes a verified license plate number when the assigned object image score for the at least one frame including the candidate text string is greater than the predetermined score threshold.
  • 17. The method according to claim 16, further comprising continuously capturing, by an image sensor, the video of the candidate license plate that includes the plurality of frames.
  • 18. The method according to claim 16, further comprising scoring each candidate text string by applying a straight line through each alphanumeric character within each candidate text string and comparing a slope of the applied straight line to a horizontal direction, such that the score of each candidate text string is based on an angle of the applied straight line relative to the horizontal direction.
  • 19. The method according to claim 18, further comprising discarding, by the processor, each candidate text string if the respective score indicates that the angle between the applied straight line and the horizontal direction is greater than a predetermined threshold.
  • 20. The method according to claim 17, further comprising controlling, by the processor, the display to generate the overlay as a detection indicator to provide the user instruction for assisting the user in positioning the image sensor to capture the video including the candidate license plate.
  • 21. The method according to claim 20, further comprising generating an alert when the object image verifier fails to verify that the at least one frame of the plurality of frames includes the verified license plate.
  • 22. The method according to claim 16, further comprising: transmitting vehicle license plate information to a remote apparatus upon the object image verifier verifying that the at least one frame includes the verified license plate; and receiving a vehicle financing offer from the remote apparatus that is associated with the verified license plate.
  • 23. The method according to claim 16, further comprising: applying a set of filter parameters to the captured video that includes the plurality of frames; and dynamically adjusting the filter parameters based on at least one of color temperature, ambient light, object image location, and the location of the image detection device relative to an object associated with the captured object image.
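
A minimal sketch, assuming character centroids have already been extracted from a frame, of the slope-based scoring recited in claims 2, 10, and 18, the discard rule of claims 3, 11, and 19, and the highest-score check driving the operation adjustment control of claims 1, 8, and 16 follows; the thresholds and the score formula are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch: fit a straight line through the (x, y) centroids
# of the alphanumeric characters in a candidate text string and score
# the candidate by the line's angle relative to horizontal. Thresholds
# and the linear score formula are assumptions for illustration.
import math

ANGLE_DISCARD_DEG = 15.0  # assumed angular discard threshold
SCORE_THRESHOLD = 0.9     # assumed predetermined score threshold

def score_candidate(char_centers):
    """Return a score in [0, 1], or None to discard the candidate."""
    if len(char_centers) < 2:
        return None
    n = len(char_centers)
    mean_x = sum(x for x, _ in char_centers) / n
    mean_y = sum(y for _, y in char_centers) / n
    # Least-squares slope of the line through the character centroids.
    denom = sum((x - mean_x) ** 2 for x, _ in char_centers) or 1e-9
    slope = sum((x - mean_x) * (y - mean_y) for x, y in char_centers) / denom
    angle = abs(math.degrees(math.atan(slope)))
    if angle > ANGLE_DISCARD_DEG:
        return None  # discard: line deviates too far from horizontal
    return 1.0 - angle / ANGLE_DISCARD_DEG  # 1.0 when perfectly level

def needs_adjustment(frame_scores):
    """Track the highest per-frame score and report whether an operation
    adjustment control (visual instruction overlay) should be generated."""
    scored = [s for s in frame_scores if s is not None]
    highest = max(scored) if scored else 0.0
    return highest, highest < SCORE_THRESHOLD
```

In this sketch a perfectly horizontal string scores 1.0 and the score decays linearly to zero at the discard angle; the claims are not limited to any particular scoring function.
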
Related Publications (1)
Number Date Country
20190102644 A1 Apr 2019 US
Continuations (12)
Number Date Country
Parent 15466634 Mar 2017 US
Child 16190004 US
Parent 15681798 Aug 2017 US
Child 15466634 US
Parent 15713413 Sep 2017 US
Child 15681798 US
Parent 15713458 Sep 2017 US
Child 15713413 US
Parent 15880361 Jan 2018 US
Child 15713458 US
Parent 15681682 Aug 2017 US
Child 15880361 US
Parent 15451399 Mar 2017 US
Child 15681682 US
Parent 15455482 Mar 2017 US
Child 15451399 US
Parent 15427001 Feb 2017 US
Child 15455482 US
Parent 15419846 Jan 2017 US
Child 15427001 US
Parent 15451393 Mar 2017 US
Child 15419846 US
Parent 14716754 May 2015 US
Child 15451393 US
Continuation in Parts (3)
Number Date Country
Parent 14613323 Feb 2015 US
Child 14716754 US
Parent 14455841 Aug 2014 US
Child 14613323 US
Parent 14318397 Jun 2014 US
Child 14455841 US