The present disclosure relates generally to a method and apparatus for detecting license plate information from an image of a license plate and more specifically, detecting license plate information from an optical image, captured by a mobile apparatus, that includes a license plate image and several other object images.
In recent years, collecting still images of license plates has become a common tool used by authorities to catch the drivers of vehicles that may engage in improper or unlawful activity. For example, law enforcement authorities have set up stationary traffic cameras to photograph the license plates of vehicles that may be traveling above a posted speed limit at a specific portion of a road or vehicles that drive through red lights. Toll booth operators also commonly use such stationary cameras to photograph vehicles that may pass through a toll booth without paying the required toll. However, all of these scenarios have a common thread: the camera must be manually installed and configured such that it will always photograph the vehicle's license plate at a specific angle and when the vehicle is in a specific location. Any unexpected modifications, such as a shift in the angle or location of the camera, would render the camera incapable of properly collecting license plate images.
Additionally, camera equipped mobile apparatuses (e.g., smartphones) have become increasingly prevalent in today's society. Mobile apparatuses are frequently used to capture optical images and for many users serve as a replacement for a simple digital camera because the camera equipped mobile apparatus provides an image that is often as good as those produced by simple digital cameras and can easily be transmitted (shared) over a network.
The positioning constraints placed on traffic cameras make it difficult to take images of license plates from different angles and distances and still achieve an accurate reading. Therefore, it would be difficult to scale the license plate image capture process performed by law enforcement authorities to mobile apparatuses. In other words, it is difficult to derive license plate information from an image of a license plate taken by a mobile image capture apparatus at a variety of angles and distances, under varying ambient conditions and mobile apparatus motion, and when other object images are also in the image. This difficulty hinders a user's ability to easily gather valuable information about specific vehicles when engaging in a number of different vehicle related activities such as buying and selling vehicles, insuring vehicles, and obtaining financing for vehicles.
Several aspects of the present invention will be described more fully hereinafter with reference to various methods and apparatuses.
Some aspects of the invention relate to a computer-implemented method for automatically identifying vehicle information and facilitating a transaction related to a vehicle. In an exemplary aspect, the method includes capturing, by an image sensor of a first computing apparatus, an optical image of a vehicle, with the optical image including at least one distinguishing feature of the vehicle; and transmitting, by an interface of the first computing apparatus, the captured optical image. Moreover, the method further includes receiving, by a remote server, the transmitted and captured optical image; automatically scanning, by the remote server, the captured optical image to identify the at least one distinguishing feature of the vehicle; automatically comparing, by the remote server, the identified at least one distinguishing feature with a unique feature database that includes respective vehicle identification information associated with each of a plurality of unique vehicle features; automatically identifying, by the remote server, the respective vehicle identification information that corresponds to the vehicle upon determining a match between the identified at least one distinguishing feature of the vehicle and the respective one of the plurality of unique vehicle features; automatically identifying, by the remote server, corresponding vehicle configuration information based on the identified vehicle identification information; and automatically transmitting, by the remote server, the identified vehicle configuration information to the first computing apparatus to be displayed thereon.
Another aspect of the invention relates to a system for automatically identifying vehicle information and facilitating a transaction related to a vehicle. In this aspect, the system includes a first computing apparatus having an image sensor configured to capture an optical image of a vehicle, with the optical image including at least one distinguishing feature of the vehicle, and an interface configured to transmit the captured optical image. Moreover, the system further includes a remote server having a computer processor and configured to receive the transmitted and captured optical image, automatically scan the captured optical image to identify the at least one distinguishing feature of the vehicle, automatically compare the identified at least one distinguishing feature with a unique feature database that includes respective vehicle identification information associated with each of a plurality of unique vehicle features, automatically identify the respective vehicle identification information that corresponds to the vehicle upon determining a match between the identified at least one distinguishing feature of the vehicle and the respective one of the plurality of unique vehicle features, automatically identify corresponding vehicle configuration information based on the identified vehicle identification information, and automatically transmit the identified vehicle configuration information to the first computing apparatus to be displayed thereon.
It is understood that other aspects of methods and apparatuses will become readily apparent to those skilled in the art from the following detailed description, wherein various aspects of apparatuses and methods are shown and described by way of illustration. As understood by one of ordinary skill in the art, these aspects may be implemented in other and different forms and their several details are capable of modification in various other respects. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
Various aspects of processes and apparatuses will now be presented in the detailed description by way of example, and not by way of limitation, with reference to the accompanying drawings, wherein:
The detailed description set forth below in connection with the appended drawings is intended as a description of various exemplary embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention. Acronyms and other descriptive terminology may be used merely for convenience and clarity and are not intended to limit the scope of the invention.
The word “exemplary” or “embodiment” is used herein to mean serving as an example, instance, or illustration. Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term “embodiment” of an apparatus, method or article of manufacture does not require that all embodiments of the invention include the described components, structure, features, functionality, processes, advantages, benefits, or modes of operation.
It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “and/or” includes any and all combinations of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by a person having ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the following detailed description, various aspects of the present invention will be presented in the context of apparatuses and methods for recovering vehicle license plate information from an image. However, as those skilled in the art will appreciate, these aspects may be extended to recovering other information from an image. Accordingly, any reference to an apparatus or method for recovering vehicle license plate information is intended only to illustrate the various aspects of the present invention, with the understanding that such aspects may have a wide range of applications.
In an exemplary embodiment of the apparatus, a customized application is installed on the apparatus 130. The customized application may interface with the apparatus' image capture device to capture an optical image, convert the optical image to an electrical signal, process the electrical signal to detect the presence of a license plate image, and derive license plate information from a portion of the electrical signal that is associated with the license plate image. The license plate information may be transmitted wirelessly to a server for further processing or decoding such as optical character recognition (OCR) of the license plate image. Alternatively, the OCR process may be carried out on the mobile apparatus 130.
It should be appreciated that the customized software application can be downloaded from a remote server (e.g., server 230 discussed below) and/or from an “Apps Store” that has been provided with a license to download the software application to mobile device users. Moreover, the customized software application enables each mobile device (of the seller and buyer, for example) to communicate required information to the server 230 to facilitate the transaction of the vehicle, including loan information and the like. Thus, the server is capable of receiving such information from each mobile device and performing the processes described herein.
As shown in
Alternatively, some aspects of the apparatus may provide the capability to bypass the image capture process to instead provide a user interface with text fields. For example, the user interface may provide text fields that allow for entry of the license plate number and state. The entered information may be provided as text strings to the license plate detection apparatus without going through the detection process discussed above.
A license plate image recovered from the image 220 may be transmitted over the internet 240 to the server 230, where it is processed for the purpose of detecting whether the license plate image is suitable for deriving license plate data and/or for performing OCR on the license plate image to derive license plate information such as the state of origin and the license plate number. It should be appreciated that while the exemplary embodiment transmits a full image 220 of a license plate, in some aspects the server 230 can be configured to identify the vehicle using a partial plate. For example, if only a portion of the alphanumeric characters can be identified, but the state of the license plate is identifiable, the server 230 may be configured to identify the vehicle based on this partial match.
Once the license plate image (or image file) is transmitted to the server 230, the apparatus 210 may receive and display a confirmation message for confirming that the derived license plate information (e.g., state and license plate number) is correct. In some aspects of the apparatus, the apparatus 210 may also display information about the vehicle to help the user determine whether the derived license plate information is correct. This may be useful in cases such as when the apparatus 210 captures a license plate image of a moving vehicle. The vehicle license plate may no longer be in sight. However, it may be possible to determine with some degree of accuracy whether the derived license plate information is correct based on the vehicle information that is displayed on the mobile apparatus.
In the first stage 301, the apparatus 300 may have transmitted a license plate image to the server 230 for further processing. Such processing will be described with respect to the following figures. The apparatus 300 may display the license plate information 320. Additionally, the display area 310 may provide the configuration information 345 about the vehicle to assist the user in further confirming that the recovered license plate information is accurate. Moreover, the configuration information may be used later if the user wishes to post the vehicle for sale at an external website. Once the user has verified that the information in the display area 310 matches the vehicle associated with the license plate image and driver's license information, the apparatus 300 may receive a selection of the selectable UI object 370 to confirm the information.
In the second stage 302, the apparatus 300 may have received a selection of the selectable UI object 370. In response, the display area 310 may present the vehicle data 350. The vehicle data 350 may be pre-populated based on the known configuration information, or the vehicle data 350 may be received by input from a user. Alternatively, or in addition, the vehicle data 350 may be editable by user input. The user may wish to adjust some of the configuration information because, for instance, the vehicle may have some aftermarket parts installed that were not part of the vehicle configuration information 345. The information may be edited by selecting the “show vehicle options” object of the vehicle data 350. Additionally, if the user wishes to post the vehicle information to a website, which may list the vehicle for sale, the apparatus may receive user input of the vehicle mileage, a price, and condition (not shown). Once all of the information has been received at the apparatus, the apparatus may receive a user interaction with the selectable UI object 330 to post the vehicle for sale.
Providing the interface described in
The web browser 380 in this exemplary illustration is displaying the vehicle described in
As shown, the web browser 380 also includes price information, mileage, and options/features that are included in the vehicle as vehicle configuration information and data 397. Additional information about the vehicle may be displayed lower in the web browser 380 window. In such instances, the scrollable object 385 may be used to scroll up and down based on received user input to view all of the vehicle features. Thus the interface discussed in
As shown, the process 400 captures (at 410) an optical image that includes a vehicle license plate image. As will be discussed with respect to the following figure, some aspects of the apparatus may process a video. A frame may then be extracted and converted to an image file.
At 420, the process 400 converts the optical image into an electrical signal. The process 400 then processes (at 430) the electrical signal to recover license plate information. The process 400 determines (at 440) whether the license plate information was successfully recovered. When the license plate information was successfully recovered, the process 400 transmits (at 460) the license plate information to a remote server. The process 400 then receives (at 470) vehicle configuration information corresponding to the vehicle license plate. The process 400 then receives (at 480) user input to post the vehicle configuration information to a website. The process 400 then ends.
Returning to 440, when the process 400 determines that the license plate information was not successfully recovered, the process 400 displays (at 450) an alert that the license plate information was not recovered. In some aspects of the process, a message guiding the user to position the mobile apparatus to achieve greater chances of recovering the license plate information may be provided with the displayed alert. The process then ends. However, in some aspects of the process, rather than end, the process may optionally return to capture (at 410) another optical image and repeat the entire process 400.
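By way of illustration only, the following Python sketch outlines the client-side flow of the process 400 described above. The capture source, the recover_plate() helper, the server endpoint, and the payload format are hypothetical placeholders rather than part of the disclosed apparatus; the actual recovery logic corresponds to the detection processes described below.

```python
# Illustrative sketch of process 400; names, endpoint, and payload are hypothetical.
import cv2
import requests

def recover_plate(frame):
    """Placeholder for the license plate detection described in process 500 below."""
    return None  # a real implementation might return {"state": "CA", "number": "..."}

def process_400(server_url="https://example.com/api"):
    cam = cv2.VideoCapture(0)                     # 410: capture an optical image
    ok, frame = cam.read()                        # 420: frame holds the digitized signal
    cam.release()
    if not ok:
        return None
    plate_info = recover_plate(frame)             # 430: attempt to recover plate info
    if plate_info is None:                        # 440/450: recovery failed, alert the user
        print("Alert: license plate information was not recovered")
        return None
    resp = requests.post(f"{server_url}/plates", json=plate_info)   # 460: transmit to server
    return resp.json()                            # 470: vehicle configuration information
```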
As shown, the process 500 converts (at 505) an optical image into an electrical signal for sampling the electrical signal at n frames/second (fps). In some aspects of the process, the process may sample the electrical signal at rates such as 24 fps or any other suitable rate for capturing video according to the apparatus' capabilities. Each sample of the electrical signal represents a frame of a video image presented on a display. The process 500 samples (at 510) a first portion of the electrical signal representing a first frame of the video image presented on the display. The process then determines (at 515) whether any object image(s) are detected within the frame. At least one of the detected object image(s) may comprise a license plate image. When the process 500 determines that at least one object image exists within the frame, the process 500 assigns (at 520) a score based on the detected object image. The score may be based on the likelihood that at least one of the object images is a license plate image and is discussed in greater detail below with respect to
When the process 500 determines (at 515) that no object image exists within the frame or after the process 500 stores the score (at 525), the process 500 displays (at 530) feedback to a user based on the object image detected (or not detected). For instance, when no object image is detected in the frame, the process 500 may display a message guiding the user on how to collect a better optical image. However, when at least one object image is detected in the frame, the process 500 may provide feedback by overlaying rectangles around the detected object image(s). Alternatively or conjunctively, the process 500 may overlay a rectangle that provides a visual cue, such as a distinct color, indicating which object image is determined to most likely be a license plate image or has a higher score than other object images within the frame. In some aspects, the visual cue may be provided when a particular object image receives a score above a threshold value.
The process 500 optionally determines (at 535) whether user input has been received to stop the video. Such user input may include a gestural interaction with the mobile apparatus, which deactivates the camera shutter on the mobile apparatus. When the process 500 determines (at 535) that user input to stop the video capture is received, the process 500 selects (at 545) the highest scoring frame according to the stored frame information. When the process 500 determines (at 535) that user input to stop the video capture has not been received, the process 500 determines (at 540) whether to sample additional portions of the electrical signal. In some aspects of the process, such a determination may be based on a predetermined number of samples. For instance, the mobile apparatus may have a built-in and/or configurable setting for the number of samples to process before a best frame is selected. In other aspects of the process, such a determination may be based on achieving a score for a frame, or for an object image in a frame, that is above a predetermined threshold value. In such aspects, the frame comprising the object image that is above the threshold score will be selected (at 545). When the process 500 determines that there are more portions of the electrical signal to be sampled, the process 500 samples (at 550) the next portion of the electrical signal representing the next frame of the video image presented on the display. The process 500 then returns to detect (at 515) object image(s) within the next frame. In some aspects of the process, the process may receive user input to stop the video capture at any point while the process 500 is running. Specifically, the process is not confined to receiving user input to halt video capture after the feedback is displayed (at 530); the user input may be received at any time while the process 500 is running. In such aspects, if at least one object image has been scored, then the process 500 will still select (at 545) the highest scoring object image. However, if no object images were scored, then the process will simply end.
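By way of illustration only, the following Python sketch shows one way the sampling-and-scoring loop of the process 500 could be structured. It assumes an OpenCV capture source and a caller-supplied score_frame() function that stands in for the object detection and scoring operations described above; the sample limit and score threshold are illustrative values.

```python
import cv2

def select_best_frame(score_frame, max_samples=120, score_threshold=0.9):
    """Sample frames, score each, and keep the best one (in the spirit of process 500)."""
    cap = cv2.VideoCapture(0)              # frames sampled by the device at e.g. 24-30 fps
    best_frame, best_score = None, float("-inf")
    for _ in range(max_samples):           # stop after a configurable number of samples
        ok, frame = cap.read()
        if not ok:
            break
        score = score_frame(frame)         # likelihood the frame contains a license plate image
        if score > best_score:
            best_frame, best_score = frame, score
        if best_score >= score_threshold:  # or stop early once a frame clears the threshold
            break
    cap.release()
    return best_frame, best_score
```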
In some aspects of the process, the process 500 may optionally use the object image(s) detected in the previous sample to estimate the locations of the object images in the current sample. Using this approach optimizes processing time when the process can determine that the mobile apparatus is relatively stable. For instance, the mobile apparatus may concurrently store gyroscope and/or accelerometer data. The process 500 may then use the gyroscope and/or accelerometer data retrieved from the mobile apparatus to determine whether the mobile apparatus has remained stable, in which case there is a greater likelihood that the object image(s) will be in similar locations. Thus, when the process 500 can determine that the mobile apparatus is relatively stable, the processing time for license plate detection may be reduced because less of the portion of the electrical signal that represents the video image would need to be searched for the license plate image.
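As one non-limiting illustration, the following Python sketch restricts the search space to padded regions around the previous detections when the stored gyroscope readings indicate the mobile apparatus is stable; the bounding-box format, rate threshold, and padding are assumptions made for the example.

```python
def candidate_regions(frame_shape, prev_boxes, gyro_rates, stable_rate=0.05, pad=20):
    """Restrict the search to padded previous detections when the apparatus is stable."""
    h, w = frame_shape[:2]
    if prev_boxes and all(abs(r) < stable_rate for r in gyro_rates):
        # The apparatus barely moved, so object images are likely near their previous locations.
        return [(max(x - pad, 0), max(y - pad, 0),
                 min(x + bw + pad, w), min(y + bh + pad, h))
                for (x, y, bw, bh) in prev_boxes]
    return [(0, 0, w, h)]  # otherwise, search the whole frame
```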
Alternatively or conjunctively, the process 500 may not use information about object image(s) from the previous frame as a predictor. Instead, the process 500 may undergo the same detection and scoring process discussed above. Then, for each object image that overlaps an object image detected in a previous frame (e.g., the object images share similar pixels in size and/or location across the frames), the overlapping object image receives a higher score. Information about the overlapping object image(s) may be maintained for optimized processing later on. Additionally, in some aspects of the apparatus, the license plate detection apparatus may maintain a table of matching object image(s) for the sampled portions of the electrical signal representing frames of video images over time. In such aspects, some object image(s) may exist in only one or a few of the frames, while others may exist in many or all frames and accordingly receive higher scores. In such instances, all of the overlapping object images may be processed as discussed in greater detail in the following sections and provided to the server for OCR or identification. This would lead to greater accuracy in actual license plate detection and OCR results.
Returning now to
Returning to 560, when the process 500 determines that license plate information was not detected, the process 500 displays (at 565) an alert that license plate information cannot be recovered. Such an alert may guide the user toward acquiring better video that is more likely to produce a readable license plate image. For instance, the alert may guide the user to adjust the position or angle of the mobile device. The process 500 may then return to collect additional video.
The license plate detection apparatus includes an image capture apparatus 605, an imager 610, a keypad 615, a strobe circuit 685, a frame buffer 690, a format converter 620, an image filter 625, a license plate detector 630, a network 635, network interfaces 640 and 697, a gateway 645, a rendering module 650, and a display 655. The license plate detection apparatus may communicate with a server having an OCR module 660 and an OCR analytics storage 670. However, in some aspects of the apparatus, the OCR module and/or the OCR analytics storage may be part of the mobile apparatus. The license plate detection apparatus illustrated in
As shown, the image capture apparatus 605 communicates an optical image to the imager 610. The image capture apparatus 605 may comprise a camera lens and/or a camera that is built into a mobile apparatus. The imager 610 may comprise a CMOS array, NMOS, CCD, or any other suitable image sensor that converts an optical image into an electrical signal (e.g., raw image data). The electrical signal comprises pixel data associated with the captured image. The amount of pixel data is dependent on the resolution of the captured image. The pixel data is stored as numerical values associated with each pixel and the numerical values indicate characteristics of the pixel such as color and brightness. Thus, the electrical signal comprises a stream of raw data describing the exact details of each pixel derived from the optical image. During the image capture process, the imager 610 may produce a digital view as seen through the image capture apparatus for rendering at the display 655.
In some aspects of the apparatus, the image capture apparatus 605 may be configured to capture video. In such aspects, a timing circuit, such as the strobe circuit 685, may communicate with the imager 610. The strobe circuit 685 may sample (or clock) the imager 610 to produce a sampled electrical signal at some periodicity such as 24-30 fps. The sampled electrical signal may be representative of a frame of video presented on the display 655. The electrical signal may be provided to the frame buffer 690. However, the imager 610 may communicate the electrical signal directly to the format converter 620 when a single optical image is captured. When video is captured, the frame buffer may communicate the sample of the electrical signal representative of the frame of video from the frame buffer to the format converter 620. However, in some aspects of the apparatus, the frame buffer 690 may be bypassed such that the sampled electrical signal is communicated directly to the format converter 620.
The format converter 620 converts and/or compresses the raw image pixel data provided in the electrical signal into a standard, space-efficient image format. However, in some aspects of the apparatus, the frame buffer 690 and format converter 620 may be reversed such that the sampled electrical signals are converted to a compressed standard video format before buffering. The standard image and/or video format can be read by the following modules of the license plate detection apparatus. However, the following description will assume that the sampled electrical signals are buffered before any such format conversion. The format converter 620 will be described in greater detail in
The format converter 620 communicates the standard image file (or image) to the image filter 625. The image filter 625 performs a variety of operations on the image to provide the optimal conditions to detect a license plate image within the image. Such operations will be described in greater detail in
The license plate detector 630 is an integral module of the license plate detection apparatus. The license plate detector 630 will process the image to detect the presence of a license plate image by implementing several processes, which will be described in greater detail in
The license plate detector 630 will determine which portion of the image (or electrical signal) is most likely a license plate image. The license plate detector 630 will then transmit only the license plate image portion of the image to the network 635 by way of the network interface 697. Alternatively, a user may skip the entire image conversion process and, using the keypad 615, key in the license plate information, which is then transmitted over the network 635 by way of the network interface 697. The network 635 then transmits the license plate image information (or image file) or keyed information to the network interface 640, which transmits signals to the gateway 645.
The gateway 645 may transmit the license plate image data to the OCR module 660. The OCR module 660 derives the license plate information such as state and number information from the license plate image. The OCR module 660 may use several different third party and/or proprietary OCR applications to derive the license plate information. The OCR module 660 may use information retrieved from the OCR analytics storage 670 to determine which OCR application has the greatest likelihood of accuracy in the event that different OCR applications detected different characters. For instance, the OCR analytics storage 670 may maintain statistics collected from the user input received at the apparatus 300 described with respect to
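By way of illustration only, the following Python sketch shows one way the OCR analytics could be used to choose among disagreeing OCR applications. The weighted-vote approach, the engine names, and the accuracy weights are assumptions made for the example rather than part of the disclosed apparatus.

```python
from collections import defaultdict

def pick_plate_reading(ocr_results, accuracy_by_engine):
    """Weighted vote across OCR engines when their readings disagree (sketch).

    ocr_results: e.g. {"engine_a": "7ABC123", "engine_b": "7A8C123"}
    accuracy_by_engine: historical per-engine accuracy from the OCR analytics storage.
    """
    votes = defaultdict(float)
    for engine, reading in ocr_results.items():
        votes[reading] += accuracy_by_engine.get(engine, 0.5)  # default weight if no history
    return max(votes, key=votes.get) if votes else None
```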
Additionally, the license plate information 675 may be transmitted through the gateway 645 and processed by various modules communicatively coupled to the gateway 645. The gateway 645 may then transmit the processed information to the VIN decoder 695. The VIN decoder 695 may communicate with at least one third party service by way of an API to receive vehicle configuration information and to post the configuration information. The configuration information may be transmitted back to the gateway 645 for further processing, which may include transmitting the vehicle configuration to the vehicle posting engine 699 to post the vehicle configuration information to a website. Alternatively, or in addition, the gateway 645 may transmit the vehicle configuration information to the rendering module 650 through the network 635. The rendering module 650 may then instruct the display 655 to display the vehicle configuration information along with any other information to assist the user of the mobile apparatus. Additionally, the vehicle posting engine 699 may communicate with various internet services that specialize in used car sales.
In the event that the OCR module 660 or the license plate detector 630 is unable to detect a license plate image or identify any license plate information, the OCR module 660 and/or the license plate detector 630 will signal an alert to the rendering module 650, which will be rendered on the display 655.
In some aspects of the apparatus, the OCR module 660 may be located on an apparatus other than an external server. For instance, the OCR module 660 may be located on the mobile apparatus 130, similar to the license plate detection apparatus. Additionally, in some aspects of the apparatus, the format converter 620, image filter 625, and license plate detector 630 may be located on an external server, and the electrical signal recovered from the optical image may be transmitted directly to the network 635 for processing by the modules on the external server.
The license plate detection apparatus provides several advantages in that it is not confined to still images. As discussed above, buffered or unbuffered video may be used by the license plate detection apparatus to determine the frame with the highest likelihood of containing a license plate image. It also enables optical images to be taken while a mobile apparatus is moving and accounts for object images recovered from any angle and/or distance. Additionally, the license plate detection apparatus provides the added benefit of alerting the user when a license plate image cannot be accurately detected, in addition to guidance relating to how to get a better image that is more likely to produce license plate information. Such guidance may include directional guidance, such as adjusting the viewing angle or distance, as well as guidance to adjust lighting conditions, if possible. Thus, the license plate detection apparatus provides a solution to the complicated problem of how to derive license plate information captured from moving object images and from virtually any viewing angle. The license plate information may then be used to derive different information associated with the license plate information, such as an estimated value for a vehicle.
The format converter 620 may also receive several sampled electrical signals, each representing a frame of a video image, such as frame data 725. The video data frames may be received at the frame analyzer 715 in the format converter 620. The frame analyzer 715 may perform a number of different functions. For instance, the frame analyzer 715 may perform a function of analyzing each frame and discarding any frames that are blurry, noisy, or generally bad candidates for license plate detection based on some detection process such as the process 500 described in
The image filter 625 includes a filter processor 805, a grayscale filter 810, and a parameters storage 835. When the image filter 625 receives the formatted image file 830, the filter processor 805 will retrieve parameters from the parameters storage 835, which assist the filter processor 805 in determining how to optimally filter the image. For instance, if the received image was taken in cloudy conditions, the filter processor 805 may adjust the white balance of the image based on the parameters retrieved from the parameters storage 835. If the image was taken in the dark, the filter processor 805 may use a de-noise function based on the parameters retrieved from the parameters storage 835 to remove excess noise from the image. In some aspects of the apparatus, the filter processor 805 also has the ability to learn, based on the success of previously derived license plate images, which parameters work best in different conditions such as those described above. In such aspects, the filter processor 805 may take the learned data and update the parameters in the parameters storage 835 for future use.
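By way of illustration only, the following Python sketch applies condition-dependent filtering of the kind described above, using OpenCV's non-local means de-noising for dark captures and a simple gray-world style white balance adjustment for overcast light; the condition labels and parameter values are assumptions made for the example and stand in for the contents of the parameters storage 835.

```python
import cv2
import numpy as np

def apply_condition_filters(image_bgr, condition, params=None):
    """Apply white-balance / de-noise settings chosen from stored parameters (sketch)."""
    params = params or {"dark": {"h": 12}, "cloudy": {"gain": 1.1}}  # stand-in for storage 835
    out = image_bgr
    if condition == "dark":
        # De-noise dark captures; the strength h would come from the parameters storage.
        h = params["dark"]["h"]
        out = cv2.fastNlMeansDenoisingColored(out, None, h, h, 7, 21)
    elif condition == "cloudy":
        # Simple gray-world style white balance adjustment for overcast light.
        means = out.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / np.maximum(means, 1e-6) * params["cloudy"]["gain"]
        out = np.clip(out * gains, 0, 255).astype(np.uint8)
    return out
```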
The filter processor 805 also has logic to determine if an image will be readable by the license plate detector 630. When the filter processor 805 determines that the image will not be readable by the license plate detector 630, the filter processor 805 may signal an alert 845 to the rendering module 650. However, when the filter processor 805 determines that sufficient filtering will generate a readable image for reliable license plate detection, the filter processor 805 communicates the image, post filtering, to the grayscale filter 810.
Additionally, in some aspects of the apparatus, the image filter 625 may receive several images in rapid succession. Such instances may be frames of a video that may be captured while a mobile apparatus is moving. In such instances, the filter processor 805 may continuously adjust the filter parameters to account for each video frame it receives. The same alerts may be signaled in real-time in the event that a video frame is deemed unreadable by the filter processor 805.
The grayscale filter 810 will convert the received image file to grayscale. More specifically, the grayscale filter will convert the pixel values for each pixel in the received image file 830 to new values that correspond to appropriate grayscale levels. In some aspects of the filter, the pixel values may be between 0 and 255 (e.g., 256 or 2^8 values). In other aspects of the filter, the pixel values may be between 0 and any other value that is a power of 2 minus 1, such as 1023. The image is converted to grayscale to simplify and/or speed up the license plate detection process. For instance, by reducing the number of colors in the image, which could be in the millions, to shades of gray, the license plate image search time may be reduced.
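By way of illustration only, the following Python sketch shows a standard 8-bit grayscale conversion of the kind the grayscale filter 810 could perform, using OpenCV's converter and an equivalent manual luminance weighting; the specific weighting (ITU-R BT.601) is one common choice rather than a requirement of the apparatus.

```python
import cv2
import numpy as np

def to_grayscale(image_bgr):
    """Map each pixel's color values to a single 0-255 intensity level."""
    return cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

def to_grayscale_manual(image_bgr):
    """Same idea without the library call: ITU-R BT.601 luminance weighting per pixel."""
    b, g, r = image_bgr[..., 0], image_bgr[..., 1], image_bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)
```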
In the grayscale image, regions with higher intensity values (e.g., brighter regions) of the image will appear brighter than regions of the image with lower intensity values. The grayscale filter 810 ultimately produces the filtered image 840. However, one skilled in the art should recognize that the ordering of the modules is not confined to the order illustrated in
The license plate detector 630 receives the filtered image 930 at the object detector 905. As discussed above, the filtered image 930 has been converted to a grayscale image. The object detector 905 may use a mathematical method, such as a Maximally Stable Extremal Regions (MSER) method, for detecting regions in a digital image that differ in properties, such as brightness or color, compared to areas surrounding those regions. Simply stated, the detected regions of the digital image have some properties that are constant or vary within a prescribed range of values; all the points (or pixels) in the region can be considered in some sense to be similar to each other. This method of object detection may provide greater accuracy in the license plate detection process than other processes such as edge and/or corner detection. However, in some instances, the object detector 905 may use edge and/or corner detection methods to detect object images in an image that could be candidate license plate images.
Typically, the object images detected by the object detector 905 will have a uniform intensity throughout each adjacent pixel. Those adjacent pixels with a different intensity would be considered background rather than part of the object image. In order to determine the object images and background regions of the filtered image 930, the object detector 905 applies a series of thresholds to the image. Grayscale images may have intensity values between 0 and 255, 0 being black and 255 being white. However, in some aspects of the apparatus, these values may be reversed with 0 being white and 255 being black. An initial threshold is set to be somewhere between 0 and 255. Variations in the object images are measured over a pre-determined range of threshold values. A delta parameter indicates through how many different gray levels a region needs to be stable to be considered a potential detected object image. The object images within the image that remain unchanged, or have little variation, over the applied delta thresholds are selected as likely candidate license plate images. In some aspects of the detector, small variations in the object image may be acceptable. The acceptable level of variation in an object image may be programmatically set for successful object image detection. Alternatively or conjunctively, the number of pixels (or area of the image) that must be stable for object image detection may also be defined. For instance, a stable region that has less than a threshold number of pixels would not be selected as an object image, while a stable region with at least the threshold number of pixels would be selected as an object image. The number of pixels may be determined based on known values relating to the expected pixel size of a license plate image or any other suitable calculation such as a height to width ratio.
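By way of illustration only, the following Python sketch runs OpenCV's MSER detector over the grayscale image, where the delta argument corresponds to the stability requirement and the area limits correspond to the pixel-count constraints described above; the specific values are illustrative.

```python
import cv2

def detect_stable_regions(gray, delta=5, min_area=400, max_area=20000):
    """Find regions that remain stable across `delta` gray-level thresholds (MSER)."""
    # Arguments passed positionally: delta, min_area, max_area.
    mser = cv2.MSER_create(delta, min_area, max_area)
    regions, bboxes = mser.detectRegions(gray)
    return regions, list(bboxes)   # candidate object images and their bounding boxes
```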
In addition, the object detector 905 may recognize certain pre-determined textures in an image as well as the presence of informative features that provide a greater likelihood that the detected object image may be a license plate image. Such textures may be recognized by using local binary patterns (LBP) cascade classifiers. LBP is especially useful in real-time image processing settings, such as when images are being captured as a mobile apparatus moves around an area. Although commonly used in the art for facial recognition, LBP cascade classifiers may be modified such that the method is optimized for the detection of candidate license plate images.
In an LBP cascade classification, positive samples of an object image are created and stored on the license plate detection apparatus. For instance, a sample of a license plate image may be used. In some instances multiple samples may be needed for more accurate object image detection considering that license plates may vary from state to state or country to country. The apparatus will then use the sample object images to train the object detector 905 to recognize license plate images based on the features and textures found in the sample object images. LBP cascade classifiers may be used in addition to the operations discussed above to provide improved detection of candidate license plate images.
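By way of illustration only, the following Python sketch applies a trained LBP cascade to the grayscale frame using OpenCV's CascadeClassifier; the cascade file name is a hypothetical placeholder for a classifier trained offline on license plate samples, and the scan parameters are illustrative.

```python
import cv2

def detect_plate_candidates_lbp(gray, cascade_path="lbp_plate_cascade.xml"):
    """Scan the grayscale frame with an LBP cascade trained on license plate samples."""
    # The cascade file is a placeholder for a classifier trained offline from sample plates.
    cascade = cv2.CascadeClassifier(cascade_path)
    # Plates are wide and short, so a coarse multi-scale scan with a wide minimum size is used.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(60, 30))
```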
Once the object detector 905 has detected at least one object as a candidate license plate image, the object detector 905 will pass information relating to the detected object images to the quad processor 910 and/or the quad filter 915. In some aspects of the detector, the object images may not be of a uniform shape such as a rectangle or oval. The quad processor 910 will then fit a rectangle around each detected object image based on the object image information provided by the object detector 905. Rectangles are ideal due to the rectangular nature of license plates. As will be described in the following, information about the rectangles may be used to overlay rectangles on object images that are displayed for the user's view on a mobile apparatus.
The rectangle will be sized such that it fits minimally around each object image, with all areas of the object image within the rectangle and without more background space than is necessary to fit the object image. However, due to various factors such as the angle at which the optical image was taken, the license plate image may not be perfectly rectangular. Therefore, the quad processor 910 will perform a process on each object image using the rectangle to form a quadrilateral from a convex hull formed around each object image.
The quad processor 910 will use a process that fits a quadrilateral as closely as possible to the detected object images in the image. For instance, the quad processor 910 will form a convex hull around the object image. A convex hull is a polygon that fits around the detected object image as closely as possible. The convex hull comprises edges and vertices. The convex hull may have several vertices. The quad processor 910 will take the convex hull and break it down to exactly four vertices (or points) that fit closely to the object image.
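By way of illustration only, the following Python sketch forms a convex hull around a detected region and then reduces the hull to exactly four vertices by relaxing the tolerance of a polygon approximation; using approxPolyDP in this way is one possible realization of the compression step, not the only one.

```python
import cv2
import numpy as np

def fit_quadrilateral(region_points):
    """Convex hull around the object image, then compress it to exactly four vertices."""
    pts = np.asarray(region_points, dtype=np.float32).reshape(-1, 1, 2)
    hull = cv2.convexHull(pts)
    eps = 0.01 * cv2.arcLength(hull, True)
    quad = cv2.approxPolyDP(hull, eps, True)
    while len(quad) > 4:          # relax the tolerance until only four points remain
        eps *= 1.5
        quad = cv2.approxPolyDP(hull, eps, True)
    return quad.reshape(-1, 2) if len(quad) == 4 else None
```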
Referring back to
The region(s) of interest detector 920 will then determine which of the detected object images actually have proportions (e.g., height and width) similar to the proportions that would be expected for a license plate image. For instance, a license plate is typically rectangular in shape. However, depending on several factors such as the angle at which the license plate image was captured, the object image may appear more like a parallelogram or trapezoid. There is, however, a limit to how much skew or keystone (trapezoidal shape) a license plate image can undergo before it becomes unreadable. Therefore, it is necessary to compute a skew factor and/or keystone to determine whether the object image may be a readable license plate image. Specifically, object images that have a skew factor and/or keystone beyond a threshold value are likely object images that do not have the proportions expected for a license plate image or would likely be unreadable. Since a license plate has an expected proportion, a threshold skew factor and/or keystone may be set, and any detected object image that has a skew factor and/or keystone indicating that the object image is not a readable license plate image will be discarded. For instance, license plate images with a high skew and/or high keystone may be discarded.
In some aspects of the apparatus, the skew and keystone thresholds may be determined by digitally distorting known license plate images with varying amounts of pitch and yaw to see where the identification process and/or OCR fails. The threshold may also be dependent on the size of the object image or quadrilateral/trapezoid. Thus, quadrilaterals or trapezoids must cover enough pixel space to be identified and read by the OCR software. Those that do not have a large enough pixel space, skew factors that are too high, and/or keystones that are too high would then be discarded as either being unlikely candidates for license plate images or unreadable license plate images.
The skew factor is computed by finding the distances between opposing vertices of the quadrilateral (its diagonals) and taking one minus the ratio of the shorter distance to the longer distance, so that the skew factor lies between 0 and 1. Rectangles and near-rectangular parallelograms that are likely candidate license plate images will have a skew factor that is close to 0, while heavily skewed parallelograms will have a high skew factor. Additionally, trapezoids that are likely candidate license plate images will have a keystone that is close to 0, while trapezoids that are unlikely candidate license plate images will have a high keystone. Therefore, object images with a high skew factor are discarded, while parallelograms with a lower skew factor and trapezoids with a lower keystone are maintained. In some aspects of the apparatus, a threshold skew and a threshold keystone may be defined. In such aspects, parallelograms having a skew factor below the threshold are maintained while those above the threshold are discarded. Similarly, in such aspects, trapezoids having a keystone below the threshold are maintained while those above the threshold are discarded. When the value is equal to the threshold, the parallelogram or trapezoid may be maintained or discarded depending on the design of the apparatus.
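By way of illustration only, the following Python sketch computes the skew factor from the diagonals as just described and computes a keystone from the two roughly horizontal edges; the keystone formula and the vertex ordering (top-left, top-right, bottom-right, bottom-left) are assumptions made for the example, and the thresholds are illustrative.

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def skew_factor(quad):
    """1 - (shorter diagonal / longer diagonal); near 0 for rectangles, higher when skewed."""
    d1, d2 = _dist(quad[0], quad[2]), _dist(quad[1], quad[3])
    return 1.0 - min(d1, d2) / max(d1, d2)

def keystone(quad):
    """1 - (shorter horizontal edge / longer horizontal edge); near 0 for near-rectangles."""
    top, bottom = _dist(quad[0], quad[1]), _dist(quad[3], quad[2])
    return 1.0 - min(top, bottom) / max(top, bottom)

def is_plausible_plate(quad, skew_max=0.35, keystone_max=0.35):
    """Discard quadrilaterals whose skew factor or keystone exceeds the thresholds."""
    return skew_factor(quad) <= skew_max and keystone(quad) <= keystone_max
```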
The remaining parallelograms and trapezoids are then dewarped. The dewarping process is particularly important for the trapezoids because it is used to convert the trapezoid into a rectangular image. The dewarping process uses the four vertices of the quadrilateral and the four vertices of an un-rotated rectangle with an aspect ratio of 2:1 (width:height), or any other suitable license plate aspect ratio, to compute a perspective transform. The aspect ratio may be expressed as the pixel width:pixel height of the image. The perspective transform is applied to the region around the quadrilateral, and the 2:1 aspect ratio object image is cropped out. The cropped object image, or patch, is an object image comprising a candidate license plate image.
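By way of illustration only, the following Python sketch computes the perspective transform between the quadrilateral's four vertices and an un-rotated 2:1 rectangle and crops out the resulting patch; the output size and the assumed vertex ordering (top-left, top-right, bottom-right, bottom-left) are illustrative choices.

```python
import cv2
import numpy as np

def dewarp_patch(image, quad, width=240, height=120):
    """Perspective-transform the quadrilateral onto a 2:1 rectangle and crop the patch."""
    src = np.asarray(quad, dtype=np.float32)   # top-left, top-right, bottom-right, bottom-left
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    transform = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, transform, (width, height))
```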
The patch is then provided to the patch processor 925, which will search for alphanumeric characters in the patch, find new object images within the patch, fit rectangles around those object images, and compute a score from the fitted rectangles. The score may be based on a virtual line that is drawn across the detected object images. If a line exists that has a minimal slope, the object images on that line may receive a score that indicates the object image is highly likely to be a license plate image. If no line with a minimal slope is detected, then an alert may be returned to the rendering module that a license plate image was not detected in the image. Scores may be calculated for several different patches from the same image, and it follows that more than one license plate image may be detected in the same image. Once the presence of a license plate image is detected, the license plate information 935 may be transmitted to a server for OCR and further processing. In some aspects of the apparatus, the license plate information is an image file comprising the license plate image. Additionally, the process for scoring the patch will be described in more detail with respect to
The overlay processor 1305 receives information about the detected object images 1330 from the license plate detector 630. As discussed above, such information may include coordinates of detected object images and rectangles determined to fit around those object images. The rectangle information is then provided to the detection failure engine 1310, which will determine that object images have been detected by the license plate detector 630. The detection failure engine 1310 may then forward the information about the rectangles to the image renderer 1315, which will provide rendering instructions 1340 to the display for how and where to display the rectangles around the image received from the imager 610. Such information may include pixel coordinates associated with the size and location of the rectangle and color information. For instance, if the license plate detector 630 determines that a detected object image is more likely to be an actual license plate image than the other detected object images, the rendering module 650 may instruct the display 655 to display the rectangle around the more likely object image in a way that is visually distinct from other rectangles. For instance, the rectangle around the object image more likely to be a license plate image may be displayed in a different color than the other rectangles in the display.
However, in some instances, the license plate detector 630 may not detect any object images. In such instances, the overlay processor will not forward any rectangle information to the detection failure engine 1310. The detection failure engine 1310 will then determine there has been an object image detection failure and signal an alert to the image renderer 1315. The image renderer 1315 will then communicate the display rendering instructions 1340 for the alert to the display 655. The license plate detection alerts have been described in greater detail above.
Additionally, the image filter 625 may provide information to the image renderer 1315 indicating an alert that the captured image cannot be processed for some reason such as darkness, noise, blur, or any other reason that may cause the image to be otherwise unreadable. The alert information from the image filter 625 is provided to the image renderer 1315, which then provides the rendering display instructions 1340 to the display 655 indicating how the alert will be displayed. The image filter alerts have been discussed in detail above.
The following
As illustrated, the mobile apparatus 1410 has activated the image capture functionality of the mobile apparatus 1410. The image capture functionality may be an application that controls a camera lens and imager built into the apparatus 1410 that is capable of taking digital images. In some aspects of the apparatus, the image capture functionality may be activated by enabling an application which activates the license plate detection apparatus capabilities described in
As shown in exploded view 1555, the object detector 905 of the license plate detection apparatus has detected several object images 1525, as well as a candidate license plate image 1505. As shown, the rendering module 650 has used information communicated from the license plate detector 630 to overlay rectangles around detected object images 1525 including the candidate license plate image 1505. The rendering module 650 has also overlaid rectangles that differ in appearance around object images that are less likely to be license plate images. For instance, rectangles 1520 appear as dashed lines, while rectangle 1510 appears as a solid line. However, as those skilled in the art will appreciate, the visual appearance of the rectangles is not limited to only those illustrated in exploded view 1555. In fact, the visual appearance of the rectangles may differ by color, texture, thickness, or any other suitable way of indicating to a user that at least one rectangle is overlaid around an object image that is more likely to be a license plate image than the other object images in which rectangles are overlaid.
Exploded view 1555 also illustrates overlapping rectangles 1530. As discussed above, the quad filter 915 of the license plate detector 630 may recognize the overlapping rectangles 1530 and discard some of the rectangles, and detected object images within those discarded rectangles, as appropriate.
As is also illustrated by
As shown, the process 1600 converts (at 1610) the captured image to grayscale. As discussed above, converting the image to grayscale makes for greater efficiency in distinguishing object images from background according to the level of contrast between adjacent pixels. Several filtering processes may also be performed on the image during the grayscale conversion process. The process 1600 then detects (at 1615) object image(s) from the grayscale image. Such object images may be the object images 1505 and 1525 as illustrated in
At 1625, the process 1600 determines whether an object image fits the criteria for a license plate image. When the object image fits the criteria for a license plate image, the process 1600 transmits (at 1630) the license plate image (or image data) to a server such as the server 1550. In some aspects of the process, an object image fits the criteria for a license plate when a score of the object image is above a threshold value. Such a score may be determined by a process which will be discussed in the following description. The process 1600 then determines (at 1635) whether there are more object images detected in the image and/or whether the object image being processed does not exceed a threshold score.
When the process 1600 determines (at 1625) that an object image does not fit the criteria for a license plate image, the process 1600 does not transmit any data and determines (at 1635) whether more object images were detected in the image and/or whether the object image being processed did not exceed a threshold score. When the process 1600 determines that more object images were detected in the image and/or the object image being processed did not exceed a threshold score, the process 1600 processes (at 1640) the next object image. The process then returns to 1625 to determine if the object image fits the criteria of a license plate image.
When the process 1600 determines (at 1635) that no more object images were detected in the image and/or the object image being processed exceeds a threshold score, the process 1600 determines (at 1645) whether at least one license plate image was detected in the process 1600. When a license plate image was detected, the process ends. When a license plate image was not detected, an alert is generated (at 1650) and the rendering module 650 sends instructions to display a detection failure message at the display 655. In some aspects of the process, the detection failure alert may provide guidance to the user for capturing a better image. For instance, the alert may guide the user to move the mobile apparatus in a particular direction such as up or down and/or adjust the tilt of the mobile apparatus. Other alerts may guide the user to find a location with better lighting or any other suitable message that may assist the user such that the license plate detection apparatus has a greater likelihood of detecting at least one license plate image in an image.
The process 1600 may be performed in real-time. For instance, the process 1600 may be performed successively as more images are captured either by capturing several frames of video as the mobile apparatus or object images in the scene move and/or are tracked or by using an image capture device's burst mode. The process 1600 provides the advantage of being able to detect and read a license plate image in an image at virtually any viewing angle and under a variety of ambient conditions. Additionally, the criteria for determining a license plate image is determined based on the operations performed by the license plate detector. These operations will be further illustrated in the following figures as well.
Once the quadrilateral is determined to have a low skew (or a skew below a threshold value) or the trapezoid has been determined to have a low keystone (or a keystone below a threshold value), the region(s) of interest detector 920 can dewarp the image to move one step closer to confirming the presence of a license plate image in the image and to also generate a patch that is easily read by OCR software. In some aspects of the apparatus, the patch is the license plate image that has been cropped out of the image.
As shown, the first stage 1901 illustrates the license plate image 1905 in a trapezoidal shape similar to the shape of the quadrilateral 1805 illustrated in
The ability to accurately dewarp quadrilaterals and especially the quadrilaterals that are license plate images taken at any angle is an integral piece of the license plate detection apparatus. The dewarping capability enables a user to capture an image of a license plate at a variety of different angles and distances. For instance, the image may be taken with any mobile apparatus at virtually any height, direction, and/or distance. Additionally, it provides the added benefit of being able to capture a moving image from any position. Once the license plate image has been dewarped, the region(s) of interest detector 920 will crop the rectangular license plate image to generate a patch. The patch will be used for further confirmation that the license plate image 1910 is, in fact, an image of a license plate.
As shown, the process 2000 detects (at 2010) at least one object image, similar to the object image detection performed by process 1600. The following describes in greater detail the process of processing (at 1620) the image.
For instance, the process 2000 then fits (at 2015) a rectangle to each detected object image in order to reduce the search space to the detected object images. The information associated with the rectangle may also be used as an overlay to indicate to users of the license plate detection apparatus the location(s) of the detected object image(s). The process then uses the rectangles to form (at 2020) a convex hull around each object image. The convex hull, as discussed above, is a polygon of several vertices and edges that fits closely around an object image without having any edges that overlap the object image.
At 2025, the process 2000 compresses the convex hull to a quadrilateral that closely fits around the detected object image. The process of compressing the convex hull into a quadrilateral was discussed in detail with respect to
The process 2000 calculates (at 2035) a skew factor. The process 2000 then dewarps (at 2040) the quadrilateral. The process then crops (at 2045) the object image within the quadrilateral, which becomes the patch. The patch will be used for further processing as discussed below. In some aspects of the process, the object image is cropped at a particular ratio that is common for license plates of a particular region or type. For instance, the process may crop out a 2:1 aspect ratio patch of the image, which is likely to contain the license plate image. Once the quadrilateral is cropped, the process 2000 then ends.
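The geometric reduction in process 2000 can be sketched as follows, assuming OpenCV: fit a bounding rectangle to the detected object image, form its convex hull, and compress the hull to a four-vertex quadrilateral that can then be dewarped (for example, with the perspective-transform sketch above) and cropped to a 2:1 patch. The iterative epsilon search used to reach exactly four vertices is an assumed detail.

```python
import cv2
import numpy as np

def contour_to_quad(contour):
    rect = cv2.boundingRect(contour)               # rectangle fit (at 2015), usable as an overlay
    hull = cv2.convexHull(contour)                 # convex hull (at 2020)
    for eps_frac in np.linspace(0.01, 0.15, 15):   # compress the hull toward 4 vertices (at 2025)
        eps = eps_frac * cv2.arcLength(hull, True)
        approx = cv2.approxPolyDP(hull, eps, True)
        if len(approx) == 4:
            return rect, approx.reshape(4, 2).astype(np.float32)
    return rect, None                              # no quadrilateral found for this object image
```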
As shown in the patch 2100, rectangles are fit around detected object images within the patch. In some aspects of the apparatus, object images may be detected using the MSER object detection method. Conjunctively or alternatively, some aspects of the apparatus may use edge and/or corner detection methods to detect the object images. In this case, the detected object images are alpha-numeric characters 2120 and 2140 as well as graphic 2125. After detecting the alpha-numeric characters 2120 and 2140 as well as graphic 2125, a stroke width transform (SWT) may be performed to partition the detected object images into those that are likely from an alpha-numeric character and those that are not. For instance, the SWT may capture only the effective alpha-numeric features and use certain geometric signatures of alpha-numeric characters to filter out non-alpha-numeric areas, resulting in more reliable text regions. In such instances, the SWT may partition the alpha-numeric characters 2120 and 2140 from the graphic 2125. Thus, only those object images that are determined to likely be alpha-numeric characters, such as alpha-numeric characters 2120 and 2140, are later used in a scoring process to be discussed below. In some aspects of the apparatus, some object images other than alpha-numeric characters may pass through the SWT partitioning. Thus, further processing may be necessary to filter out the object images that are not alpha-numeric characters and also to determine whether the alpha-numeric characters in the license plate image fit the characteristics common for a license plate image.
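A minimal sketch of the character candidate detection, assuming OpenCV, is shown below: detect candidate regions with MSER and partition them with simple geometric signatures (aspect ratio and size). A true stroke width transform is more involved; the filter here is only an assumed, simplified stand-in for the SWT partitioning described above, and its numeric limits are illustrative.

```python
import cv2

def candidate_character_boxes(patch_gray):
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(patch_gray)      # MSER object image detection
    characters = []
    for (x, y, w, h) in boxes:
        aspect = h / float(w)
        # assumed geometric signature of an alpha-numeric character
        if 1.0 <= aspect <= 4.0 and h >= 0.3 * patch_gray.shape[0]:
            characters.append((x, y, w, h))
    return characters
```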
Following the partitioning of alpha-numeric characters from non-alpha-numeric characters, a line is fit to the centers of each pair of rectangles. For instance, a sloped line is shown for the pair formed by the “D” and object image 2125. The distances of all other rectangles to the lines 2130 and 2110 are accumulated, and the pair with the smallest summed distance is used as a text baseline. For instance, the zero-slope line 2110 has the smallest summed distance of the rectangles to the line 2110. Some aspects of the apparatus may implement a scoring process to determine the presence of a license plate image. For instance, some aspects of the scoring process may determine a score for the determined alpha-numeric characters on the zero-slope line 2110. The score may increase when the rectangle around the alpha-numeric character is not rotated beyond a threshold amount. The score may decrease if the detected alpha-numeric character is too solid. In some aspects of the scoring process, solidity may be defined as the character area divided by the rectangle area. When the calculated ratio exceeds a threshold amount, the detected object image may be deemed too solid and the score decreases.
In other aspects of the scoring process, for each rectangle 2115 in the patch 2100, the patch score increases by some scoring value if the center of the rectangle is within a particular distance X of the baseline, where X is the shorter of the rectangle height and width. For instance, if the scoring value is set at 1, the patch score of the patch 2100 would be 7 because the rectangles around the characters “1DDQ976” each have centers within that distance of the baseline. Furthermore, the zero slope of the line 2110 between the alpha-numeric characters 2120 further confirms that this patch is likely a license plate image, since license plates typically have a string of characters along a same line. Sloped lines 2130 are, therefore, unlikely to provide any indication that the patch is a license plate image because the distance between characters is too great and the slope is indicative of a low likelihood of a license plate image. Accordingly, in some aspects of the process, sloped lines 2130 are discarded.
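The baseline-and-scoring idea can be sketched as follows: for every pair of character rectangles, fit a line through their centers, accumulate the distances of all other centers to that line, keep the line with the smallest sum as the text baseline, and score rectangles whose centers lie within X (the shorter of the rectangle height and width) of that baseline. This is a minimal sketch; the exact distance measure and scoring value are assumptions.

```python
import itertools
import math

def point_line_distance(p, a, b):
    # distance from point p to the line through points a and b
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den if den else math.hypot(px - ax, py - ay)

def score_patch(rects):
    centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in rects]
    best = None
    for i, j in itertools.combinations(range(len(centers)), 2):
        total = sum(point_line_distance(c, centers[i], centers[j])
                    for k, c in enumerate(centers) if k not in (i, j))
        if best is None or total < best[0]:
            best = (total, centers[i], centers[j])   # candidate text baseline
    if best is None:
        return 0
    _, a, b = best
    score = 0
    for (x, y, w, h), c in zip(rects, centers):
        if point_line_distance(c, a, b) <= min(w, h):
            score += 1                               # center within X of the baseline
    return score
```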
In some aspects of the process, when the patch has a score above a threshold value, the patch is determined to be a license plate image, and the license plate detection is complete. The license plate image data is then transmitted to a server for further processing and for use in other functions computed by the server, the results of which are provided to the license plate detection apparatus.
As shown, the process 2200 processes (at 2205) only substantially rectangular portion(s) of the patch to locate alpha-numeric characters. The process 2200 fits (at 2210) rectangles around the located alpha-numeric characters and computes scores based on the distances between rectangle pairs as discussed above with respect to
In order to process the license plate information to provide vehicle configuration information, the gateway 2395 may also communicate with third party services that provide vehicle configuration information from license plate information and/or additional vehicle related information. Such additional information may be a vehicle identification number (VIN) or vehicle configuration information. As shown, the gateway 2395 may communicate directly with at least one third party processing service 2390 if the service is located on the same network as the gateway 2395. Alternatively, the gateway 2395 may communicate with at least one of the third party processing services 2390 over the WAN 2380 (e.g., the Internet). Additionally, the vehicle configuration information may be posted to a website by communicating with the WAN 2380.
In some aspects of the service, the process of posting vehicle configuration information to a website may incorporate the location of the vehicle. For instance, the vehicle configuration information may be posted to a website that is used for listing a car for sale. In such instances, the location information may be used by the listing website in searches performed by users of the website. The apparatus may acquire location information through a Global Positioning System (GPS) satellite 2320. The apparatuses 2310 and 2370 may be configured to use a GPS service and provide location information to the gateway 2395 using the connections discussed above. The provided location information may be used by the gateway 2395 and provided to additional modules, discussed in the following figure, as necessary. Thus, the service described in
As shown, the client apparatus 2470 may use a network interface 2460 to transmit at least one license plate image recovered from an optical image taken by the client apparatus 2470. The client apparatus 2470 may include an installed application providing instructions for how to communicate with the gateway 2400 through the network interface 2460. In this example, the network interface 2460 provides license plate image information or text input of a license plate to the gateway 2400. For instance, as discussed above, the network interface 2460 may transmit text strings received as user input at the client apparatus 2470 or a license plate image processed by the client apparatus 2470 to the gateway 2400. As further shown in this example, the gateway 2400 may route the license plate image data to the OCR module 2410 to perform the OCR text extraction of the license plate information. In this example, the OCR module 2410 may have a specialized or commercial OCR software application installed that enables accurate extraction of the license plate number and state. The OCR module may be similar to the OCR module discussed in
Once the license plate number and state information is extracted and converted to text strings, the gateway 2400 will provide the extracted text to a translator 2420, which is capable of determining a VIN from the license plate information. The translator 2420 may communicate with third party services using functionality provided in an application programming interface (API) associated with the third party services. Such services may retrieve VINs from license plate information. In some aspects of the service, the various modules 2410-2450 may also be configured to communicate with the third party services or apparatuses (not shown) using APIs associated with the third party services. In such aspects of the service, each module 2410-2450 may route a request through the gateway 2400 to the network interface 2460, which will communicate with the appropriate third party service (not shown).
The gateway 2400 then routes the retrieved VIN to the VIN decoder 2430 along with a request to generate a VIN explosion. The VIN decoder 2430 is capable of using the VIN to generate a VIN explosion by requesting the VIN explosion from a third party service. The VIN explosion includes all of the features, attributes, options, and configurations of the vehicle associated with the VIN (and the license plate image). In some aspects of the apparatus, the VIN explosion may be provided as an array of data, which the gateway 2400 or VIN decoder 2430 is capable of understanding, processing, and/or routing accurately. Similar to the VIN translator 2420, the VIN decoder 2430 may communicate with a third party service by using an API associated with the service. The VIN and/or vehicle data derived from the VIN explosion may be routed back through the gateway 2400 and through the network interface 2460 to the client apparatus 2470. As discussed above, the client apparatus may display the vehicle configuration data derived from the VIN explosion.
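The plate-to-VIN-to-configuration flow handled by the gateway can be sketched with a generic HTTP client. The endpoint URLs, parameter names, and response fields below are hypothetical placeholders and do not correspond to any real third party service.

```python
import requests

PLATE_TO_VIN_URL = "https://example.com/api/plate-to-vin"    # hypothetical endpoint
VIN_DECODE_URL = "https://example.com/api/vin-explosion"     # hypothetical endpoint

def lookup_vehicle(plate_text, state):
    vin_resp = requests.get(PLATE_TO_VIN_URL,
                            params={"plate": plate_text, "state": state}, timeout=10)
    vin = vin_resp.json().get("vin")                 # translator step (module 2420)
    if not vin:
        return None
    config_resp = requests.get(VIN_DECODE_URL, params={"vin": vin}, timeout=10)
    return config_resp.json()                        # VIN explosion (module 2430)
```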
The vehicle configuration information from the VIN explosion may also be routed from the gateway to the vehicle posting engine 2450 with a request to post the vehicle configuration information to a website. The vehicle posting engine 2450 may then communicate with at least one web host via an API to post the vehicle configuration to the website. Such websites may include services that list vehicles that are for sale. Additionally, the apparatus may provide an estimated value of the vehicle acquired from a value estimation service. The estimated value may be adjusted as additional information about the vehicle is received at the apparatus 2470 via user input. Such information may include vehicle mileage, condition, features and/or any other pertinent vehicle information. The vehicle posting engine 2450 may also use geographic location information provided by the apparatus 2470 in posting to the website. Thus, the data flow described in
In this instance, the process may begin after the mobile apparatus 2310 has recovered a suitable license plate image for transmission to the server 230. As shown, the process 2500 receives (at 2510) license plate image information or text input from a mobile apparatus. The text input may be information associated with a vehicle license plate such as a state and alpha-numeric characters. Upon receiving the vehicle license plate information (including the state), the process 2500 automatically requests (at 2530) a VIN associated with the license plate information. Specifically, server 230 is configured to automatically transmit a request for a VIN corresponding to the license plate information by sending the request to a third party processing service. In some aspects of the system, the process communicates with the third party service (e.g., service 2390) by using an API.
In one aspect, the third party service can identify the vehicle and VIN using the license plate image information and confirm whether the vehicle is in fact recognized by the service. For example, at 2540, the process 2500 receives the VIN associated with the license plate information. The process 2500 then requests (at 2550) a vehicle configuration using the received VIN. The vehicle configuration may include different features and options that are equipped on the vehicle. For instance, such features and options may include different wheel sizes, interior trim, vehicle type (e.g., coupe, sedan, sport utility vehicle, convertible, truck, etc.), sound system, suspension, and any other type of vehicle configuration or feature. In other aspects, the vehicle configuration can also include estimated pricing information from a third-party website, for example. The process 2500 receives (at 2570) the vehicle configuration data. If the service is unable to identify the vehicle, the server may transmit an error message back to the mobile device requesting another image of the license plate, as described above.
Otherwise, at 2580, the process 2500 transmits the vehicle configuration data to the mobile apparatus 2310. The mobile apparatus 2310 may display the configuration information for the user of the apparatus to view. As noted above, in one aspect, the vehicle configuration information can include estimated value information from a third party service that provides an estimated price of the vehicle using the VIN and any identified vehicle history, including accidents, repairs, and the like. The price information can include a variation of prices including trade-in value, estimated sales prices (e.g., obtained from third party pricing services based on the VIN), and a dealership price. Thus, according to an exemplary aspect, the server 230 is configured to automatically transmit the year, make, model and other information (e.g., mileage) to a third-party valuation service that automatically returns an estimated value of the vehicle. In an alternative aspect, the server 230 is configured to maintain its own database of vehicle sales based on year, make, model, geographical region, time of year, and the like, all of which can be provided with actual historical sales prices. Server 230 is configured to automatically adjust the recommended price received from the third-party valuation service based on estimated odometer, estimated condition, estimated vehicle configuration package, and the like. Then, upon receiving the prospective seller's vehicle information, the server 230 is configured to reference this database and identify one or more closest matches to generate an estimated sales price. In either case, this information is transmitted to the seller at step 2580.
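The price adjustment described above can be sketched minimally: start from the third-party recommended value and adjust it by estimated odometer and condition. The adjustment factors below are illustrative assumptions, not values from the disclosure.

```python
def adjust_estimated_value(base_value, mileage, condition):
    value = base_value
    if mileage > 100_000:
        value *= 0.90            # assumed discount for high mileage
    elif mileage < 30_000:
        value *= 1.05            # assumed premium for low mileage
    condition_factor = {"excellent": 1.05, "good": 1.00, "fair": 0.92, "poor": 0.80}
    return round(value * condition_factor.get(condition, 1.00), 2)
```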
The user may then wish to post the configuration information to a website such as a sales listing website. Thus, the process 2500 receives (at 2585) a selection to post the vehicle information to the website. The vehicle information posted may include vehicle information that was included as part of the received vehicle configuration information as well as additional configuration details which may have been added or adjusted at the mobile apparatus by user input. The process 2500 then communicates (at 2590) the vehicle information and instructions to post the information to an external website. As discussed above, this may be handled by communicating using a website's API.
In one aspect, the customized software application downloaded on the mobile apparatus facilitates the posting and sale of the user's vehicle through the Internet. For example, in this aspect, the software application can include a listing of vehicles that the user is attempting to sell (i.e., a “garage” of vehicles). In order to set up a garage and post vehicle information through the software application, the user of the mobile device is required to create and/or access a user account associated with a software application running on the mobile device. To do so, the user can create a user account by verifying and accepting conventional legal terms and conditions as well as privacy policies.
Once a user account is created and the vehicle configuration data is received by the software application (at steps 2570 and 2580, for example) the user can select to post the vehicle at step 2585 and related information, such as vehicle configuration information (make, model, mileage, etc.) and requested price on the website and/or via the software application. For example, the software application can provide a website and/or marketplace that lists all vehicles that have been posted by users of the software application for sale. Preferably, gateway 2395 communicates with a plurality of third party retail sites (e.g., Craigslist®) to post the vehicle for sale. Moreover, the initial listing of the vehicle includes a total price and suggested monthly payments for a prospective buyer. In one aspect, the retail website further provides user modifiable filters that enables prospective buyers to filter by price, year, mileage, make, style, color, physical location of the vehicle, and the like. It should be appreciated that other social networks can also facilitate the posting and sale of the user's vehicles (e.g., step 2590).
According to a further exemplary aspect, the software application can require the user to verify ownership of the vehicle before the vehicle information is posted for sale and/or before an actual sale is negotiated and consummated with a prospective buyer.
At 2640, the process 2600 receives the VIN associated with the license plate information. The process 2600 then requests (at 2650) vehicle ownership information using the received VIN. As described above, this request may be prompted by the software application on the mobile device in advance of the user's request to post the vehicle configuration information to a website and/or the software application for sale. The requested vehicle ownership information can include one or more of the user's personal information, such as at least one of a first and last name, full name, address, and driver's license number. Additional information can include the vehicle's title (e.g., a PDF or JPEG image of the title), loan payment information, registration information, and the like.
As further shown, the process 2600 receives this vehicle ownership information at step 2660. At 2680, the process 2600 receives information from a driver's license image from the mobile apparatus and requests (at 2685) validation that the driver's license information matches the vehicle owner's information. At 2686, the process determines if the information matches. When the information does not match, the process 2600 transmits (at 2687) an alert to the mobile apparatus indicating that the registration information does not match the driver's license information. In this case, the process ends and the user can repeat the vehicle ownership process. Alternatively, when the vehicle ownership information does match, the process 2600 transmits (at 2695) confirmation of vehicle ownership to the apparatus. At this stage, the software application enables the user to post the vehicle for sale as described above according to the exemplary aspect.
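A minimal sketch of the ownership check at 2685/2686 is shown below: normalize the name and address fields read from the driver's license and compare them with the registered owner's information. The field names are hypothetical.

```python
def normalize(s):
    return " ".join(s.lower().split())

def ownership_matches(license_info, registration_info):
    same_name = normalize(license_info["name"]) == normalize(registration_info["owner_name"])
    same_address = normalize(license_info["address"]) == normalize(registration_info["address"])
    return same_name and same_address   # a mismatch triggers the alert at 2687
```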
Initially, at step 2705, the seller can post the vehicle and related information to a website or the like using the software application on the seller's mobile device, for example, to formally present the vehicle for sale. This process is described above with respect to steps 2580-2590 of
Next, at step 2710, the prospective buyer can visit the website and/or a “marketplace” provided on the downloadable software application to view a number of available vehicles for sale. For example, if the prospective buyer is browsing vehicles on a third party service, such as Craigslist, and identifies the seller's vehicle, the listing may include a prompt for the buyer to download the customized software application on his or her mobile device 2370. The additional vehicle information will also be available to the buyer, including car history, location, etc., as described above. An example of a user interface for the presentation is discussed below with respect to
According to the exemplary aspect, each presented vehicle includes a button, link or the like (e.g., “Message Seller”) that enables the prospective buyer to initiate a confidential and private communication with the seller. Thus, if the buyer identifies a car that he or she is interested in purchasing, the buyer can select the user input entry at step 2715 to initiate the private communication. At step 2720 a private messaging platform is created by the software application installed on both the seller's mobile device and the buyer's mobile device to facilitate a sales negotiation between the two parties. It should be appreciated that this message center enables the buyer to ask any questions relating to the vehicle, the vehicle's condition and location, and the like, as well as an ability to negotiate the purchase price.
An optional step is then provided at step 2725 in which the seller and buyer can agree on a time and place for the buyer to perform a test drive. For example, in this aspect, each mobile apparatus of the seller and buyer can transmit GPS or other device location information to the server 230. In turn, the server 230 can access a map database to identify one or a few meeting locations (e.g., local coffee shop, school parking lot, etc.) that are mutually convenient for each of the buyer and seller. For example, the identified location can be at an approximately equal length of travel from each of the buyer and seller. In another aspect, the test drive location can be predefined. In either instance, the server 230 can then transmit the suggested test drive location, as well as one or more times, to the mobile apparatuses of each of the buyer and seller to facilitate the scheduling of a test drive.
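One way to pick a roughly equidistant meeting place can be sketched as follows: take the geographic midpoint of the buyer's and seller's reported positions and choose the closest candidate location. The candidate list and the simple equirectangular distance approximation are assumptions for illustration; the disclosure itself only requires a map database lookup.

```python
import math

def midpoint(lat1, lon1, lat2, lon2):
    return ((lat1 + lat2) / 2.0, (lon1 + lon2) / 2.0)

def approx_distance_km(a, b):
    # equirectangular approximation, adequate for nearby points
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371.0

def suggest_meeting_place(buyer_pos, seller_pos, candidate_places):
    """candidate_places: list of (name, (lat, lon)) tuples, e.g. coffee shops."""
    mid = midpoint(*buyer_pos, *seller_pos)
    return min(candidate_places, key=lambda p: approx_distance_km(p[1], mid))
```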
In either case, at step 2730, if the buyer and seller agree to the vehicle's sale, using the software application, either or both parties can transmit a signal to the gateway 2395 and/or server 230 that indicates that the parties have agreed on a sale and that the purchasing steps should now be initiated on the buyer's side. In one aspect, this communication can further prompt the seller to indicate whether the vehicle should be removed from the website and/or marketplace of the software application. Alternatively, the step of removing the vehicle from these one or more platforms may be performed automatically.
Conventional vehicle loans require significant paperwork and place undue burden on the individuals and computers necessary to process the application. For example, the loan applicant is required to enter data in numerous fields and/or fill out paperwork that must be scanned into a computer and transmitted to a remote loan processing facility. Moreover, in some cases, the receiving facilities are required to perform optical character recognition on such paperwork in order to identify relevant applicant information. Moreover, there are often numerous communication steps between the loan applicant and finance institution to ensure all required information has been submitted.
Accordingly, in the exemplary embodiment, the prospective buyer is prompted to scan an image of his or her license (or other user identification), including the barcode of the identification, and to also indicate a current annual income. Thus, as shown in
In turn, the server 230 can then access one or more third-party credit evaluation services to execute a credit report on the buyer. Based on the credit report, the server 230 can automatically define financing terms for the buyer, which may include, for example, monthly payment, interest rates of the loan, down payment (if any) and the like. These payment terms are defined by server 230 at step 2830 and then presented to the user for review at step 2835. For example, the defined financing terms can be displayed to the user at step 2835 using the software application on the buyer's mobile device.
In one aspect, the buyer is presented with a number of customizable loan options. For example, if the user elects to pay a larger down payment, the interest rate of the loan may decrease. Moreover, the buyer can easily define the term of the loan, and the like. It should be appreciated that the variation of these options will be dependent on the user's credit, annual income, and the like, and preferably set by the server 230, automatically. In one aspect, the dynamic rate pricing can be adjusted by the buyer using one or more sliders presented on the user interface of the mobile apparatus. Thus, in one example, as the buyer increases the down payment using an interface slider, the interest rate may decrease proportionately.
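The dynamic rate idea can be sketched minimally: as the buyer raises the down payment, the quoted interest rate falls proportionately within a band. The base rate, floor, and sensitivity values are illustrative assumptions tied to no real lender.

```python
def quoted_rate(base_rate, floor_rate, down_payment, vehicle_price, sensitivity=0.05):
    down_fraction = min(max(down_payment / vehicle_price, 0.0), 1.0)
    rate = base_rate - sensitivity * down_fraction   # proportional reduction
    return max(rate, floor_rate)

# Example: a 20% down payment on a $20,000 car with a 7% base rate and 3% floor
# quoted_rate(0.07, 0.03, 4000, 20000) -> 0.06
```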
Once the buyer accepts the finance terms, the buyer selects such acceptance on the software application, which, in turn, communicates the acceptance to server 230. In response thereto, server 230 automatically sends a formal offer to the seller, such that the seller is presented with this offer on the seller's mobile apparatus by the software application.
The software application on the seller's mobile apparatus 2310 requests acceptance of the buyer's offer at step 2840. If the seller rejects the buyer's offer, the process will either end or return to step 2815. In one aspect, the seller can then be provided with an option to define a new cash offer, defining a new/increased sales price, for example. This process can be continually repeated where the buyer can then accept and submit a new offer (again at step 2835) for the seller's acceptance. Once the seller accepts the buyer's offer at step 2840 or vice versa, the server 230 creates, approves, and executes a loan agreement (if required) for the buyer.
More particularly,
If the buyer is not approved, the method proceeds to step 2860 where the transaction is canceled and both the buyer and the seller are notified of the canceled transaction by an alert sent to each user's respective mobile apparatus, where the software application displays the cancellation notifications. In a further aspect, the software application includes an indication explaining the reasoning for the transaction cancellation (e.g., buyer loan failed). If the buyer is approved, the method proceeds to step 2865 where the loan is presented to the buyer. In one optional aspect, the method can also proceed to step 2870, where the server 230 can again access third party services, such as credit services, to confirm the buyer's credit. In either case, at step 2875, the system (i.e., at server 230) is presented with an option of a “hard pull,” meaning the system can automatically, or at the instruction of a system administrator, pull the financing offer. If this option is executed, the method again proceeds to step 2860 where each party is notified of the canceled transaction as described above. Alternatively, the method proceeds to step 2880 where the system confirms to both the seller and buyer on the respective mobile software applications that the loan has been approved and proceeds with the steps needed to execute the loan agreement at step 2880. It should be appreciated that such steps mainly include presenting a formal loan offer to the buyer that includes the financial terms of the loan and all relevant conditions, and requesting the user to confirm acceptance of the loan agreement. If the user accepts the loan, acceptance of which can be by electronic signature, the method proceeds to finalize the transaction and transfer title, as will be discussed as follows.
Moreover, according to one further aspect, the buyer can be prompted with the option to purchase a warranty for the vehicle at step 2930. If the buyer declines the warranty, the process ends. Otherwise, the server 230 receives the request and purchases the warranty from the vehicle manufacturer, for example, on behalf of the buyer. Conventional systems require the purchaser to contact the vehicle manufacturer or dealer directly to purchase the warranty. The exemplary method performs this task on behalf of the buyer at step 2935. It should be appreciated that according to an alternative aspect, the option to purchase a warranty can be presented during the loan application process, the price of which will be included in the proposed loan plan.
As shown, the electronic system includes various types of machine readable media and interfaces. The electronic system includes a bus 3005, processor(s) 3010, read only memory (ROM) 3015, input device(s) 3020, random access memory (RAM) 3025, output device(s) 3030, a network component 3035, and a permanent storage device 3040.
The bus 3005 communicatively connects the internal devices and/or components of the electronic system. For instance, the bus 3005 communicatively connects the processor(s) 3010 with the ROM 3015, the RAM 3025, and the permanent storage 3040. The processor(s) 3010 retrieve instructions from the memory units to execute processes of the invention.
The processor(s) 3010 may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Alternatively, or in addition to the one or more general-purpose and/or special-purpose processors, the processor may be implemented with dedicated hardware such as, by way of example, one or more FPGAs (Field Programmable Gate Array), PLDs (Programmable Logic Device), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits.
Many of the above-described features and applications are implemented as software processes of a computer program product. The processes are specified as a set of instructions recorded on a machine readable storage medium (also referred to as machine readable medium). When these instructions are executed by one or more of the processor(s) 3010, they cause the processor(s) 3010 to perform the actions indicated in the instructions.
Furthermore, software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may be stored on, or transmitted over, a machine-readable medium as one or more instructions or code. Machine-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by the processor(s) 3010. By way of example, and not limitation, such machine-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processor. Also, any connection is properly termed a machine-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects machine-readable media may comprise non-transitory machine-readable media (e.g., tangible media). In addition, for other aspects machine-readable media may comprise transitory machine-readable media (e.g., a signal). Combinations of the above should also be included within the scope of machine-readable media.
Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems 3000, define one or more specific machine implementations that execute and perform the operations of the software programs.
The ROM 3015 stores static instructions needed by the processor(s) 3010 and other components of the electronic system. The ROM may store the instructions necessary for the processor(s) 3010 to execute the processes provided by the license plate detection apparatus. The permanent storage 3040 is a non-volatile memory that stores instructions and data when the electronic system 3000 is on or off. The permanent storage 3040 is a read/write memory device, such as a hard disk or a flash drive. Storage media may be any available media that can be accessed by a computer. By way of example, the ROM could also be EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The RAM 3025 is a volatile read/write memory. The RAM 3025 stores instructions needed by the processor(s) 3010 at runtime. The RAM 3025 may also store the real-time video images acquired during the license plate detection process. The bus 3005 also connects input and output devices 3020 and 3030. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 3020 may be a keypad, image capture apparatus, or a touch screen display capable of receiving touch interactions. The output device(s) 3030 display images generated by the electronic system. The output devices may include printers or display devices such as monitors.
The bus 3005 also couples the electronic system to a network 3035. The electronic system may be part of a local area network (LAN), a wide area network (WAN), the Internet, or an Intranet by using a network interface. The electronic system may also be a mobile apparatus that is connected to a mobile data network supplied by a wireless carrier. Such networks may include 3G, HSPA, EVDO, and/or LTE.
In one exemplary aspect, at step 3104, the server 230 is configured to perform an automatic determination of whether the buyer is within a predetermined age range (i.e., 18 to 100). For example, an image capture of the buyer's driver's license (e.g., the barcode from the license as discussed above) or other identification results in the creation of data records corresponding to the buyer's identity, from which the server can subsequently perform automatic processing steps without manual intervention. Specifically, at step 3104, server 230 identifies the age and automatically compares it with an age range that is predetermined by an administrator, for example. In other words, the server can automatically determine whether the buyer is within the predetermined age range by processing the buyer's identification, determining the buyer's age, and then applying a numerical calculation to confirm the buyer is over age 18 but under 100, for example. If the buyer is not within this range, the buyer is automatically declined at step 3106.
Otherwise, the server 230 proceeds to step 3108 and automatically performs a soft credit pull of the buyer. Specifically, in this aspect, the server 230 can access one or more third-party credit evaluation services to obtain a credit report (e.g., a credit score) of the buyer. Using this credit score, the server is configured to automatically assign the buyer a “buying power” at step 3110, which can be communicated to the buyer in one exemplary aspect. The details of the buying power calculation will be discussed in more detail below. However, according to the exemplary aspect, server 230 is configured to automatically generate the buying power tier by numerically manipulating the buyer's credit score and placing it within one of a number of calculated tiers (e.g., five tiers). For example, any credit score over 750 can be assigned to Tier 1, any score between 675 and 750 can be assigned to Tier 2, and so forth. These numbers are simply provided as an example, but can be varied by the system administrator. Thus, in this aspect, one or more algorithms or equations may be applied to assign a unique buying power tier for the buyer, which may also take into consideration other data points which are retrieved for the buyer. For example, in addition to the credit score data, other factors may be applied within the algorithms/equations that may alter the exact tier in which the buyer is placed. These other factors could include a debt to income ratio, the geographical location where the buyer is found, the employment status of the buyer, annual income, among others. Therefore, it should be understood that the buying power tier that is assigned is not only an automatic aspect of the invention, but may take into account a number of different attributes of the buyer that go beyond the mere evaluation of the buyer's credit score.
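The tier assignment can be sketched as follows: map the credit score to a tier (score over 750 to Tier 1, 675 to 750 to Tier 2, and so forth), optionally nudged by other factors. The lower cut-offs and the debt-to-income adjustment are assumptions; only the first two thresholds appear in the description above.

```python
def buying_power_tier(credit_score, debt_to_income=None):
    if credit_score > 750:
        tier = 1
    elif credit_score >= 675:
        tier = 2
    elif credit_score >= 600:      # assumed cut-off
        tier = 3
    elif credit_score >= 525:      # assumed cut-off
        tier = 4
    else:
        tier = 5
    if debt_to_income is not None and debt_to_income > 0.5:
        tier = min(tier + 1, 5)    # assumed: a high debt load drops the buyer one tier
    return tier
```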
Based on the automatically calculated buying power, the server 230 is able to use the determined buying power for the buyer to establish three limits for the buyer's financing plan: (1) maximum monthly payment; (2) maximum total loan amount; and (3) ratio of maximum loan to vehicle value. This information is stored in a database at server 230 and is associated with the prospective buyer accordingly. With respect to automatically determining acceptable ranges of finance terms for the buyer that meet the value of a selected car along with maximum monthly payments, maximum total amount, and maximum loan to vehicle value, it should also be understood that this step in the method comprises one or more algorithms which automatically determine the acceptable finance terms. Variables within the algorithms include ranges of values which enable finance terms to be determined without manual intervention or evaluation.
Next, at step 3112, the buyer uses his application to select a vehicle for potential purchase. This process is described in detail above and will not be repeated herein. At step 3114, the server 230 calculates finance plan limits for the selected vehicle. In particular, the vehicle is assigned with a selling price and/or an estimated price as described above. Based on the calculated buying power of the buyer, the server 230 is configured to automatically determine acceptable ranges of finance terms for the buyer that meet the value of the specific car with the maximum monthly payment, maximum total loan amount, and ratio of maximum loan to vehicle value identified above.
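A proposed plan can be checked against the three buying-power limits named above (maximum monthly payment, maximum total loan amount, and maximum loan-to-value ratio) with a minimal sketch like the following. The amortization formula is the standard one; the per-tier limit values are placeholders supplied by the caller, not disclosed figures.

```python
def monthly_payment(principal, annual_rate, term_months):
    r = annual_rate / 12.0
    if r == 0:
        return principal / term_months
    return principal * r / (1.0 - (1.0 + r) ** -term_months)

def plan_within_limits(vehicle_price, down_payment, annual_rate, term_months, limits):
    """limits: dict with 'max_monthly', 'max_loan', 'max_ltv' for the buyer's tier."""
    loan = vehicle_price - down_payment
    ltv = loan / vehicle_price
    payment = monthly_payment(loan, annual_rate, term_months)
    return (payment <= limits["max_monthly"]
            and loan <= limits["max_loan"]
            and ltv <= limits["max_ltv"])
```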
Next, at step 3116, the buyer is able to use the software application on his or her mobile device to select user configurable financing and purchase terms.
Referring back to
Otherwise, if the seller accepts the buyer's offer at step 3122, the method proceeds to step 3124 where the buyer is prompted to agree to the offered financing terms and must also agree to a hard credit pull. Upon receiving authorization, at step 3126, the server 230 again accesses one or more third-party credit evaluation services to obtain a full credit report including a credit score of the buyer. For example, referring to
Otherwise, the method proceeds to step 3150 where the server 230 is configured to cross-reference a number of conditional rules to determine whether the buyer has correctly been assigned the current tier (e.g., Tier 1). For example, one conditional rule may be whether the buyer has defaulted on any vehicle loan payment for more than 60 days within the past year. Based on the buyer's credit report previously obtained, the server 230 is configured to automatically determine this fact and, if so, the method will proceed to step 3156 where the buyer will be reduced to a lower tier of buying power. For example, if the above noted condition is satisfied, the server 230 will include a condition that the buyer is reduced by one tier. Step 3152 then determines whether the buying power has actually been reduced by one or more tiers. If the buying power has not been reduced, the method proceeds to step 3154 where the transactional process continues at its current step as shown in
According to yet another exemplary embodiment, the disclosed system and method is configured to initiate the automatic transaction process based on image recognition of distinct vehicle features obtained from digital images of the client device. It should be appreciated that the exemplary embodiment can be executed using a system, such as that described above with respect to
Each of these features can then be compared at step 3320 to the unique vehicle feature database to try and identify a match that indicates the make and preferably the model number of the vehicle. For example, the identified unique features obtained from the captured image 3400 (e.g., unique features 3410A and 3410B), which correspond to the rear tail lights, can be identified as tail lights and compared with all vehicle tail lights in the database, for example. If no match is identified, the system may send the user an error message requesting additional images of the vehicle. Alternatively or in addition, upon determination of a match (or a certain percentage of a believed match), the method will proceed to step 3325.
It is noted that in the exemplary aspect, the remote server is configured to identify the specific features in the captured image (e.g., the rear tail light, rear window shape, trunk outline, etc.) and compare these features against the unique vehicle feature database. In an alternative aspect, the application on the mobile apparatus may include software capable of identifying one or more of these features (similar to the license plate capture and determination described above) before these identified features are transmitted to the gateway/request router 2400 for processing as described herein.
In yet a refinement of the exemplary aspect, the unique vehicle feature database can be built as part of a neural network of the disclosed system. In other words, each time a user uploads one or more captured images and indicates the corresponding make and model (and possibly other information such as vehicle year), each vehicle component or feature can be identified as a sub-image (e.g., an image of the taillight) and stored in the database with the associated metadata (e.g., make, model and/or year). Then, as other images of similar vehicles having the same make and model number are received, the database can be expanded accordingly so that the system has a more confident understanding that each particular feature (e.g., the taillights) is indeed unique to that make and model. Thus, as this network builds, the system will in turn be able to identify the particular vehicle make and model number with more confidence based on the received images.
In yet a further refinement of the exemplary aspect, the unique feature can include a logo or model number found in the captured image. For example, in the captured image 3400, both the model number “C 300” and the Mercedes logo can be seen and identified (using OCR techniques described above, for example). Thus, these identified unique features can also be compared in the unique vehicle feature database, according to an exemplary aspect, to help further identify the make and model number of the vehicle.
As further shown in
At step 3330, the calculated confidence level is compared with a predetermined percentage threshold, which can also be set by a system administrator, for example. If the calculated confidence level does not meet the threshold (e.g., 80%), the system may be configured to transmit a message back to the user on the mobile device 2470 (i.e., step 3335) to display an error message, such as “we could not detect a car, please try again.” The user can then be prompted to resubmit one or more additional captured images of the vehicle.
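A minimal sketch of this confidence check treats the confidence level as the fraction of identified features that match a single make/model entry in the unique feature database and compares it with an administrator-set threshold. The matching function and the 80% default are assumptions consistent with the description above.

```python
def match_confidence(identified_features, database_features_for_model):
    if not identified_features:
        return 0.0
    matched = sum(1 for f in identified_features if f in database_features_for_model)
    return matched / len(identified_features)

def passes_threshold(confidence, threshold=0.80):
    return confidence >= threshold   # below the threshold -> "we could not detect a car"
```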
Otherwise, if the calculated confidence level satisfies or exceeds the predetermined threshold at step 3330, the method proceeds to step 3340. In this instance, the disclosed system can identify the corresponding make and model number (and possibly year) of the vehicle based on the unique features. In this example, the make and model is a Mercedes C 300 as described above. This information can then be provided to VIN decoder 2430 for example, which may be further configured to identify the portion of the VIN that corresponds to the identified make, model and possibly vehicle year. This information can be collected from a known third-party database, for example. After the partial VIN number is identified at step 3340, this number can be transmitted back to the mobile device 2470, where the user is further prompted to complete the remainder of the VIN corresponding to trim level and serial number.
In yet a further refinement of the exemplary aspects described above, the disclosed system and method is configured to provide a measure of redundancy (i.e., cross checking) to determine whether the state prediction and/or text OCR was incorrect. That is, the vehicle features identified using the feature comparison described above may identify a 2014 Mercedes Benz C 300, but this information can be used as a redundancy check in an exemplary aspect. More particularly, in this example, the license and registration, which can also be transmitted as described above, may be for a different type of vehicle (e.g., the registration may be registered to a licensed driver who is registered to own a Honda Accord). Therefore, if the system identifies the inconsistency between the registered vehicle and the captured image, the user may be notified of this inconsistency or error via the user interface of the mobile device. In this aspect, the user may be further prompted to confirm that the above license plate information is correct. In other words, when the camera button of the mobile device 2470 is clicked to begin the process, the license plate is decoded using the exemplary aspects described above, but if the vehicle registration details do not match the prediction from the image, a confirmation form is displayed to prompt the user to correct the license plate state or text. An exemplary screen shot of this correcting prompt 3400C for a user interface is disclosed in
It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
The various aspects of this disclosure are provided to enable one of ordinary skill in the art to practice the present invention. Various modifications to exemplary embodiments presented throughout this disclosure will be readily apparent to those skilled in the art, and the concepts disclosed herein may be extended to other apparatuses, devices, or processes. Thus, the claims are not intended to be limited to the various aspects of this disclosure, but are to be accorded the full scope consistent with the language of the claims. All structural and functional equivalents to the various components of the exemplary embodiments described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
This application is a continuation of U.S. patent application Ser. No. 15/785,741, which is a continuation-in-part application of U.S. application Ser. No. 15/363,960, filed Nov. 29, 2016, now issued as U.S. Pat. No. 9,818,154. In turn, U.S. application Ser. No. 15/363,960, is a continuation-in-part application of each of U.S. patent application Ser. No. 14/716,808, filed May 19, 2015; U.S. patent application Ser. No. 14/716,738, filed May 19, 2015; U.S. patent application Ser. No. 14/716,743, filed May 19, 2015; U.S. patent application Ser. No. 14/716,793, filed May 19, 2015; U.S. patent application Ser. No. 14/716,651, filed May 19, 2015; and U.S. patent application Ser. No. 14/716,445, filed May 19, 2015. Each of these prior applications is a continuation-in-part of U.S. patent application Ser. No. 14/613,323, filed on Feb. 3, 2015, which is a continuation-in-part of each of U.S. patent application Ser. No. 14/318,397, filed on Jun. 27, 2014, and U.S. patent application Ser. No. 14/455,841, filed on Aug. 8, 2014. This application claims the benefit of priority to each of these applications and hereby incorporates by reference the contents of each of these applications.
9008370 | Burry et al. | Apr 2015 | B2 |
9008958 | Rubin et al. | Apr 2015 | B2 |
9014429 | Badawy et al. | Apr 2015 | B2 |
9014432 | Fan et al. | Apr 2015 | B2 |
9014908 | Chen et al. | Apr 2015 | B2 |
9019092 | Brandmaier et al. | Apr 2015 | B1 |
9020657 | Uhler | Apr 2015 | B2 |
9020837 | Oakes, III et al. | Apr 2015 | B1 |
9021384 | Beard et al. | Apr 2015 | B1 |
9031858 | Angell et al. | May 2015 | B2 |
9031948 | Smith | May 2015 | B1 |
9035755 | Rennie et al. | May 2015 | B2 |
9058515 | Amtrup et al. | Jun 2015 | B1 |
9058580 | Amtrup et al. | Jun 2015 | B1 |
9092808 | Angell et al. | Jul 2015 | B2 |
9092979 | Burry et al. | Jul 2015 | B2 |
9105066 | Gay et al. | Aug 2015 | B2 |
9111331 | Parikh et al. | Aug 2015 | B2 |
9118872 | Goodman et al. | Aug 2015 | B1 |
9123034 | Rydbeck et al. | Sep 2015 | B2 |
9129159 | Cardoso et al. | Sep 2015 | B2 |
9129289 | Vaughn et al. | Sep 2015 | B2 |
9137417 | Macciola et al. | Sep 2015 | B2 |
9141112 | Loo et al. | Sep 2015 | B1 |
9141503 | Chen | Sep 2015 | B1 |
9141926 | Kilby et al. | Sep 2015 | B2 |
9158967 | Shustorovich et al. | Oct 2015 | B2 |
9165187 | Macciola et al. | Oct 2015 | B2 |
9165188 | Thrasher et al. | Oct 2015 | B2 |
9177211 | Lehning | Nov 2015 | B2 |
9208536 | Macciola et al. | Dec 2015 | B2 |
9223769 | Tsibulevskiy et al. | Dec 2015 | B2 |
9223893 | Rodriguez | Dec 2015 | B2 |
9235599 | Smith | Jan 2016 | B1 |
9253349 | Amtrup et al. | Feb 2016 | B2 |
9311531 | Amtrup et al. | Apr 2016 | B2 |
9365188 | Penilla | Jun 2016 | B1 |
9384423 | Rodriguez-Serrano | Jul 2016 | B2 |
9589202 | Wilbert et al. | Mar 2017 | B1 |
9607236 | Wilbert et al. | Mar 2017 | B1 |
9966065 | Gruber et al. | May 2018 | B2 |
10027662 | Mutagi et al. | Jul 2018 | B1 |
20010032149 | Fujiwara | Oct 2001 | A1 |
20010034768 | Bain et al. | Oct 2001 | A1 |
20020000920 | Kavner | Jan 2002 | A1 |
20020106135 | Iwane | Aug 2002 | A1 |
20020140577 | Kavner | Oct 2002 | A1 |
20030019931 | Tsikos et al. | Jan 2003 | A1 |
20030042303 | Tsikos et al. | Mar 2003 | A1 |
20030095688 | Kirmuss | May 2003 | A1 |
20030146839 | Ehlers et al. | Aug 2003 | A1 |
20040039690 | Brown et al. | Feb 2004 | A1 |
20050238252 | Prakash et al. | Oct 2005 | A1 |
20060015394 | Sorensen | Jan 2006 | A1 |
20060025897 | Shostak et al. | Feb 2006 | A1 |
20060056658 | Kavner | Mar 2006 | A1 |
20060059229 | Bain et al. | Mar 2006 | A1 |
20060069749 | Hertz et al. | Mar 2006 | A1 |
20060078215 | Gallagher | Apr 2006 | A1 |
20060095301 | Gay | May 2006 | A1 |
20060098874 | Lev | May 2006 | A1 |
20060109104 | Kevaler | May 2006 | A1 |
20060120607 | Lev | Jun 2006 | A1 |
20060159345 | Clary et al. | Jul 2006 | A1 |
20060215882 | Ando et al. | Sep 2006 | A1 |
20060222244 | Haupt et al. | Oct 2006 | A1 |
20060269104 | Ciolli | Nov 2006 | A1 |
20060269105 | Langlinais | Nov 2006 | A1 |
20060278705 | Hedley et al. | Dec 2006 | A1 |
20060287872 | Simrell | Dec 2006 | A1 |
20070008179 | Hedley et al. | Jan 2007 | A1 |
20070009136 | Pawlenko et al. | Jan 2007 | A1 |
20070016539 | Groft et al. | Jan 2007 | A1 |
20070058856 | Boregowda et al. | Mar 2007 | A1 |
20070058863 | Boregowda et al. | Mar 2007 | A1 |
20070061173 | Gay | Mar 2007 | A1 |
20070085704 | Long | Apr 2007 | A1 |
20070088624 | Vaughn et al. | Apr 2007 | A1 |
20070106539 | Gay | May 2007 | A1 |
20070124198 | Robinson et al. | May 2007 | A1 |
20070130016 | Walker et al. | Jun 2007 | A1 |
20070156468 | Gay et al. | Jul 2007 | A1 |
20070183688 | Hollfelder | Aug 2007 | A1 |
20070192177 | Robinson et al. | Aug 2007 | A1 |
20070208681 | Bucholz | Sep 2007 | A1 |
20070252724 | Donaghey et al. | Nov 2007 | A1 |
20070265872 | Robinson et al. | Nov 2007 | A1 |
20070288270 | Gay et al. | Dec 2007 | A1 |
20070294147 | Dawson et al. | Dec 2007 | A1 |
20070299700 | Gay et al. | Dec 2007 | A1 |
20080021786 | Stenberg | Jan 2008 | A1 |
20080031522 | Axemo et al. | Feb 2008 | A1 |
20080036623 | Rosen | Feb 2008 | A1 |
20080040210 | Hedley | Feb 2008 | A1 |
20080040259 | Snow et al. | Feb 2008 | A1 |
20080063280 | Hofman et al. | Mar 2008 | A1 |
20080077312 | Mrotek | Mar 2008 | A1 |
20080120172 | Robinson et al. | May 2008 | A1 |
20080120392 | Dillon | May 2008 | A1 |
20080137910 | Suzuki et al. | Jun 2008 | A1 |
20080166018 | Li et al. | Jul 2008 | A1 |
20080175438 | Alves | Jul 2008 | A1 |
20080175479 | Sefton et al. | Jul 2008 | A1 |
20080212837 | Matsumoto et al. | Sep 2008 | A1 |
20080221916 | Reeves et al. | Sep 2008 | A1 |
20080249857 | Angell et al. | Oct 2008 | A1 |
20080253616 | Mizuno et al. | Oct 2008 | A1 |
20080277468 | Mitschele | Nov 2008 | A1 |
20080285803 | Madsen | Nov 2008 | A1 |
20080285804 | Sefton | Nov 2008 | A1 |
20080310850 | Pederson et al. | Dec 2008 | A1 |
20080319837 | Mitschele | Dec 2008 | A1 |
20090005650 | Angell et al. | Jan 2009 | A1 |
20090006125 | Angell et al. | Jan 2009 | A1 |
20090018721 | Mian et al. | Jan 2009 | A1 |
20090018902 | Miller et al. | Jan 2009 | A1 |
20090024493 | Huang et al. | Jan 2009 | A1 |
20090070156 | Cleland-Pottie | Mar 2009 | A1 |
20090070163 | Angell et al. | Mar 2009 | A1 |
20090083121 | Angell et al. | Mar 2009 | A1 |
20090083122 | Angell et al. | Mar 2009 | A1 |
20090089107 | Angell et al. | Apr 2009 | A1 |
20090089108 | Angell et al. | Apr 2009 | A1 |
20090110300 | Kihara et al. | Apr 2009 | A1 |
20090136141 | Badawy et al. | May 2009 | A1 |
20090138344 | Dawson et al. | May 2009 | A1 |
20090138345 | Dawson et al. | May 2009 | A1 |
20090016819 | Vu et al. | Jun 2009 | A1 |
20090161913 | Son | Jun 2009 | A1 |
20090167865 | Jones, Jr. | Jul 2009 | A1 |
20090174575 | Allen et al. | Jul 2009 | A1 |
20090174777 | Smith | Jul 2009 | A1 |
20090198587 | Wagner et al. | Aug 2009 | A1 |
20090202105 | Castro Abrantes et al. | Aug 2009 | A1 |
20090208060 | Wang et al. | Aug 2009 | A1 |
20090226100 | Gao et al. | Sep 2009 | A1 |
20090232357 | Angell et al. | Sep 2009 | A1 |
20090292597 | Schwartz et al. | Nov 2009 | A1 |
20090307158 | Kim et al. | Dec 2009 | A1 |
20100054546 | Choi | Mar 2010 | A1 |
20100064305 | Schumann et al. | Mar 2010 | A1 |
20100080461 | Ferman | Apr 2010 | A1 |
20100082180 | Wright et al. | Apr 2010 | A1 |
20100085173 | Yang et al. | Apr 2010 | A1 |
20100088123 | McCall et al. | Apr 2010 | A1 |
20100111365 | Dixon et al. | May 2010 | A1 |
20100128127 | Ciolli | May 2010 | A1 |
20100150457 | Angell et al. | Jun 2010 | A1 |
20100153146 | Angell et al. | Jun 2010 | A1 |
20100153147 | Angell et al. | Jun 2010 | A1 |
20100153180 | Angell et al. | Jun 2010 | A1 |
20100153279 | Zahn | Jun 2010 | A1 |
20100153353 | Angell et al. | Jun 2010 | A1 |
20100179878 | Dawson et al. | Jul 2010 | A1 |
20100189364 | Tsai et al. | Jul 2010 | A1 |
20100191584 | Fraser et al. | Jul 2010 | A1 |
20100228607 | Hedley | Sep 2010 | A1 |
20100228608 | Hedley et al. | Sep 2010 | A1 |
20100229247 | Phipps | Sep 2010 | A1 |
20100232680 | Kleihorst | Sep 2010 | A1 |
20100246890 | Ofek et al. | Sep 2010 | A1 |
20100272317 | Riesco Prieto et al. | Oct 2010 | A1 |
20100272364 | Lee et al. | Oct 2010 | A1 |
20100274641 | Allen et al. | Oct 2010 | A1 |
20100278389 | Tsai et al. | Nov 2010 | A1 |
20100278436 | Tsai et al. | Nov 2010 | A1 |
20100299021 | Jalili | Nov 2010 | A1 |
20100302362 | Birchbauer et al. | Dec 2010 | A1 |
20110047009 | Deitiker et al. | Feb 2011 | A1 |
20110096991 | Lee et al. | Apr 2011 | A1 |
20110115917 | Lee et al. | May 2011 | A1 |
20110116686 | Gravelle | May 2011 | A1 |
20110118967 | Tsuda | May 2011 | A1 |
20110145053 | Hashim-Waris | Jun 2011 | A1 |
20110161140 | Polt et al. | Jun 2011 | A1 |
20110169953 | Sandler et al. | Jul 2011 | A1 |
20110191117 | Hashim-Waris | Aug 2011 | A1 |
20110194733 | Wilson | Aug 2011 | A1 |
20110208568 | Deitiker et al. | Aug 2011 | A1 |
20110218896 | Tonnon et al. | Sep 2011 | A1 |
20110224865 | Gordon et al. | Sep 2011 | A1 |
20110234749 | Alon | Sep 2011 | A1 |
20110235864 | Shimizu | Sep 2011 | A1 |
20110238290 | Feng et al. | Sep 2011 | A1 |
20110261200 | Kanning et al. | Oct 2011 | A1 |
20110288909 | Hedley et al. | Nov 2011 | A1 |
20120007983 | Welch | Jan 2012 | A1 |
20120033123 | Inoue et al. | Feb 2012 | A1 |
20120057756 | Yoon et al. | Mar 2012 | A1 |
20120069183 | Aoki et al. | Mar 2012 | A1 |
20120070086 | Miyamoto | Mar 2012 | A1 |
20120078686 | Bashani | Mar 2012 | A1 |
20120089462 | Hot | Apr 2012 | A1 |
20120089675 | Thrower, III et al. | Apr 2012 | A1 |
20120106781 | Kozitsky et al. | May 2012 | A1 |
20120106801 | Jackson | May 2012 | A1 |
20120106802 | Hsieh et al. | May 2012 | A1 |
20120116661 | Mizrachi | May 2012 | A1 |
20120128205 | Lee et al. | May 2012 | A1 |
20120130777 | Kaufman | May 2012 | A1 |
20120130872 | Baughman et al. | May 2012 | A1 |
20120140067 | Crossen | Jun 2012 | A1 |
20120143657 | Silberberg | Jun 2012 | A1 |
20120148092 | Ni et al. | Jun 2012 | A1 |
20120148105 | Burry et al. | Jun 2012 | A1 |
20120158466 | John | Jun 2012 | A1 |
20120170814 | Tseng | Jul 2012 | A1 |
20120195470 | Fleming et al. | Aug 2012 | A1 |
20120215594 | Gravelle | Aug 2012 | A1 |
20120223134 | Smith et al. | Sep 2012 | A1 |
20120246007 | Williams et al. | Sep 2012 | A1 |
20120256770 | Mitchell | Oct 2012 | A1 |
20120258731 | Smith et al. | Oct 2012 | A1 |
20120263352 | Fan et al. | Oct 2012 | A1 |
20120265574 | Olding et al. | Oct 2012 | A1 |
20120275653 | Hsieh et al. | Nov 2012 | A1 |
20120310712 | Baughman et al. | Dec 2012 | A1 |
20130004024 | Challa | Jan 2013 | A1 |
20130010116 | Breed | Jan 2013 | A1 |
20130018705 | Heath et al. | Jan 2013 | A1 |
20130039542 | Guzik | Feb 2013 | A1 |
20130041961 | Thrower, III et al. | Feb 2013 | A1 |
20130046587 | Fraser et al. | Feb 2013 | A1 |
20130050493 | Mitic | Feb 2013 | A1 |
20130058523 | Wu et al. | Mar 2013 | A1 |
20130058531 | Hedley et al. | Mar 2013 | A1 |
20130060786 | Serrano et al. | Mar 2013 | A1 |
20130066667 | Gulec et al. | Mar 2013 | A1 |
20130066757 | Lovelace et al. | Mar 2013 | A1 |
20130073347 | Bogaard et al. | Mar 2013 | A1 |
20130077888 | Meyers et al. | Mar 2013 | A1 |
20130080345 | Rassi | Mar 2013 | A1 |
20130084010 | Ross et al. | Apr 2013 | A1 |
20130097630 | Rodriguez | Apr 2013 | A1 |
20130100286 | Lao | Apr 2013 | A1 |
20130108114 | Aviad et al. | May 2013 | A1 |
20130113936 | Cohen et al. | May 2013 | A1 |
20130121581 | Wei et al. | May 2013 | A1 |
20130129152 | Rodriguez Serrano et al. | May 2013 | A1 |
20130132166 | Wu et al. | May 2013 | A1 |
20130136310 | Hofman et al. | May 2013 | A1 |
20130144492 | Takano et al. | Jun 2013 | A1 |
20130148846 | Maeda et al. | Jun 2013 | A1 |
20130148858 | Wiegenfeld et al. | Jun 2013 | A1 |
20130158777 | Brauer et al. | Jun 2013 | A1 |
20130162817 | Bernal | Jun 2013 | A1 |
20130163822 | Chigos et al. | Jun 2013 | A1 |
20130163823 | Chigos et al. | Jun 2013 | A1 |
20130166325 | Ganapathy et al. | Jun 2013 | A1 |
20130170711 | Chigos et al. | Jul 2013 | A1 |
20130173481 | Hirtenstein et al. | Jul 2013 | A1 |
20130182910 | Burry et al. | Jul 2013 | A1 |
20130204719 | Burry et al. | Aug 2013 | A1 |
20130216101 | Wu et al. | Aug 2013 | A1 |
20130216102 | Ryan et al. | Aug 2013 | A1 |
20130229517 | Kozitsky | Sep 2013 | A1 |
20130238167 | Stanfield et al. | Sep 2013 | A1 |
20130238441 | Panelli | Sep 2013 | A1 |
20130242123 | Norman et al. | Sep 2013 | A1 |
20130243334 | Meyers et al. | Sep 2013 | A1 |
20130246132 | Buie | Sep 2013 | A1 |
20130253997 | Robinson et al. | Sep 2013 | A1 |
20130262194 | Hedley | Oct 2013 | A1 |
20130265414 | Yoon et al. | Oct 2013 | A1 |
20130266190 | Wang et al. | Oct 2013 | A1 |
20130268155 | Mian et al. | Oct 2013 | A1 |
20130272579 | Burry et al. | Oct 2013 | A1 |
20130272580 | Karel et al. | Oct 2013 | A1 |
20130278761 | Wu | Oct 2013 | A1 |
20130278768 | Paul et al. | Oct 2013 | A1 |
20130279748 | Hastings | Oct 2013 | A1 |
20130279758 | Burry et al. | Oct 2013 | A1 |
20130279759 | Kagarlitsky et al. | Oct 2013 | A1 |
20130282271 | Rubin et al. | Oct 2013 | A1 |
20130290201 | Rodriguez Carrillo | Oct 2013 | A1 |
20130294643 | Fan et al. | Nov 2013 | A1 |
20130294653 | Burry et al. | Nov 2013 | A1 |
20130297353 | Strange et al. | Nov 2013 | A1 |
20130317693 | Jefferies et al. | Nov 2013 | A1 |
20130325629 | Harrison | Dec 2013 | A1 |
20130329943 | Christopulos et al. | Dec 2013 | A1 |
20130329961 | Fan et al. | Dec 2013 | A1 |
20130336538 | Skaff et al. | Dec 2013 | A1 |
20140003712 | Eid et al. | Jan 2014 | A1 |
20140025444 | Willis | Jan 2014 | A1 |
20140029839 | Mensink et al. | Jan 2014 | A1 |
20140029850 | Meyers et al. | Jan 2014 | A1 |
20140037142 | Bhanu et al. | Feb 2014 | A1 |
20140039987 | Nerayoff et al. | Feb 2014 | A1 |
20140046800 | Chen | Feb 2014 | A1 |
20140056483 | Angell et al. | Feb 2014 | A1 |
20140056520 | Rodriguez Serrano | Feb 2014 | A1 |
20140064564 | Hofman et al. | Mar 2014 | A1 |
20140067631 | Dhuse et al. | Mar 2014 | A1 |
20140072178 | Carbonell et al. | Mar 2014 | A1 |
20140074566 | McCoy et al. | Mar 2014 | A1 |
20140074567 | Hedley et al. | Mar 2014 | A1 |
20140078304 | Othmer | Mar 2014 | A1 |
20140079315 | Kozitsky et al. | Mar 2014 | A1 |
20140081858 | Block et al. | Mar 2014 | A1 |
20140085475 | Bhanu et al. | Mar 2014 | A1 |
20140119651 | Meyers et al. | May 2014 | A1 |
20140126779 | Duda | May 2014 | A1 |
20140129440 | Smith et al. | May 2014 | A1 |
20140136047 | Mian et al. | May 2014 | A1 |
20140140578 | Ziola et al. | May 2014 | A1 |
20140149190 | Robinson et al. | May 2014 | A1 |
20140168436 | Pedicino | Jun 2014 | A1 |
20140169633 | Seyfried et al. | Jun 2014 | A1 |
20140169634 | Prakash et al. | Jun 2014 | A1 |
20140172519 | Nerayoff et al. | Jun 2014 | A1 |
20140172520 | Nerayoff et al. | Jun 2014 | A1 |
20140188579 | Regan et al. | Jul 2014 | A1 |
20140188580 | Nerayoff et al. | Jul 2014 | A1 |
20140195099 | Chen | Jul 2014 | A1 |
20140195138 | Stelzig et al. | Jul 2014 | A1 |
20140195313 | Nerayoff et al. | Jul 2014 | A1 |
20140200970 | Nerayoff et al. | Jul 2014 | A1 |
20140201064 | Jackson et al. | Jul 2014 | A1 |
20140201213 | Jackson et al. | Jul 2014 | A1 |
20140201266 | Jackson et al. | Jul 2014 | A1 |
20140207541 | Nerayoff et al. | Jul 2014 | A1 |
20140214499 | Hudson et al. | Jul 2014 | A1 |
20140214500 | Hudson et al. | Jul 2014 | A1 |
20140219563 | Rodriguez-Serrano et al. | Aug 2014 | A1 |
20140236786 | Nerayoff et al. | Aug 2014 | A1 |
20140241578 | Nonaka et al. | Aug 2014 | A1 |
20140241579 | Nonaka | Aug 2014 | A1 |
20140244366 | Nerayoff et al. | Aug 2014 | A1 |
20140247347 | McNeill et al. | Sep 2014 | A1 |
20140247372 | Byren | Sep 2014 | A1 |
20140249896 | Nerayoff et al. | Sep 2014 | A1 |
20140254866 | Jankowski et al. | Sep 2014 | A1 |
20140254877 | Jankowski et al. | Sep 2014 | A1 |
20140254878 | Jankowski et al. | Sep 2014 | A1 |
20140254879 | Smith | Sep 2014 | A1 |
20140257942 | Nerayoff et al. | Sep 2014 | A1 |
20140257943 | Nerayoff et al. | Sep 2014 | A1 |
20140270350 | Rodriguez-Serrano et al. | Sep 2014 | A1 |
20140270383 | Pederson | Sep 2014 | A1 |
20140270386 | Leihs et al. | Sep 2014 | A1 |
20140278839 | Am et al. | Sep 2014 | A1 |
20140278841 | Natinsky | Sep 2014 | A1 |
20140289024 | Robinson et al. | Sep 2014 | A1 |
20140294257 | Tussy | Oct 2014 | A1 |
20140301606 | Paul et al. | Oct 2014 | A1 |
20140307923 | Johansson | Oct 2014 | A1 |
20140307924 | Fillion et al. | Oct 2014 | A1 |
20140309842 | Jefferies et al. | Oct 2014 | A1 |
20140310028 | Christensen et al. | Oct 2014 | A1 |
20140314275 | Edmondson et al. | Oct 2014 | A1 |
20140316841 | Kilby et al. | Oct 2014 | A1 |
20140324247 | Jun | Oct 2014 | A1 |
20140328518 | Kozitsky et al. | Nov 2014 | A1 |
20140334668 | Saund | Nov 2014 | A1 |
20140336848 | Saund et al. | Nov 2014 | A1 |
20140337066 | Kephart | Nov 2014 | A1 |
20140337319 | Chen | Nov 2014 | A1 |
20140337756 | Thrower et al. | Nov 2014 | A1 |
20140340570 | Meyers et al. | Nov 2014 | A1 |
20140348391 | Schweid et al. | Nov 2014 | A1 |
20140348392 | Burry et al. | Nov 2014 | A1 |
20140355835 | Rodriguez-Serrano et al. | Dec 2014 | A1 |
20140355836 | Kozitsky et al. | Dec 2014 | A1 |
20140355837 | Hedley et al. | Dec 2014 | A1 |
20140363051 | Burry et al. | Dec 2014 | A1 |
20140363052 | Kozitsky et al. | Dec 2014 | A1 |
20140369566 | Chigos et al. | Dec 2014 | A1 |
20140369567 | Chigos et al. | Dec 2014 | A1 |
20140376778 | Muetzel et al. | Dec 2014 | A1 |
20140379384 | Duncan et al. | Dec 2014 | A1 |
20140379385 | Duncan et al. | Dec 2014 | A1 |
20140379442 | Dutta et al. | Dec 2014 | A1 |
20150012309 | Buchheim et al. | Jan 2015 | A1 |
20150019533 | Moody et al. | Jan 2015 | A1 |
20150025932 | Ross et al. | Jan 2015 | A1 |
20150032580 | Altermatt et al. | Jan 2015 | A1 |
20150041536 | Matsur | Feb 2015 | A1 |
20150049914 | Alves | Feb 2015 | A1 |
20150051822 | Joglekar | Feb 2015 | A1 |
20150051823 | Joglekar | Feb 2015 | A1 |
20150052022 | Christy et al. | Feb 2015 | A1 |
20150054950 | Van Wiemeersch | Feb 2015 | A1 |
20150058210 | Johnson et al. | Feb 2015 | A1 |
20150066349 | Chan et al. | Mar 2015 | A1 |
20150066605 | Balachandran et al. | Mar 2015 | A1 |
20150081362 | Chadwick et al. | Mar 2015 | A1 |
20150095251 | Alazraki et al. | Apr 2015 | A1 |
20150100448 | Binion et al. | Apr 2015 | A1 |
20150100504 | Binion et al. | Apr 2015 | A1 |
20150100505 | Binion et al. | Apr 2015 | A1 |
20150100506 | Binion et al. | Apr 2015 | A1 |
20150104073 | Rodriguez-Serrano et al. | Apr 2015 | A1 |
20150112504 | Binion et al. | Apr 2015 | A1 |
20150112543 | Binion et al. | Apr 2015 | A1 |
20150112545 | Binion et al. | Apr 2015 | A1 |
20150112730 | Binion et al. | Apr 2015 | A1 |
20150112731 | Binion et al. | Apr 2015 | A1 |
20150112800 | Binion et al. | Apr 2015 | A1 |
20150120334 | Jones | Apr 2015 | A1 |
20150125041 | Burry et al. | May 2015 | A1 |
20150127730 | Aviv | May 2015 | A1 |
20150138001 | Davies et al. | May 2015 | A1 |
20150149221 | Tremblay | May 2015 | A1 |
20150154578 | Aggarwal et al. | Jun 2015 | A1 |
20150205760 | Hershey et al. | Jul 2015 | A1 |
20150206357 | Chen et al. | Jul 2015 | A1 |
20150221041 | Hanson et al. | Aug 2015 | A1 |
20150222573 | Bain et al. | Aug 2015 | A1 |
20150249635 | Thrower, III et al. | Sep 2015 | A1 |
20150254781 | Binion et al. | Sep 2015 | A1 |
20150269433 | Amtrup et al. | Sep 2015 | A1 |
20150310293 | Dehart | Oct 2015 | A1 |
20150324924 | Wilson et al. | Nov 2015 | A1 |
20150332407 | Wilson, II et al. | Nov 2015 | A1 |
20160036899 | Moody et al. | Feb 2016 | A1 |
20160180428 | Cain | Jun 2016 | A1 |
20160358297 | Alon | Dec 2016 | A1 |
Number | Date | Country |
---|---|---|
103985256 | Aug 2014 | CN |
204303027 | Apr 2015 | CN |
0302998 | Dec 2003 | HU |
10134219 | Nov 1996 | JP |
4243411 | Mar 2009 | JP |
0169569 | Sep 2001 | WO |
02059852 | Jan 2002 | WO |
02059838 | Aug 2002 | WO |
02059852 | Aug 2002 | WO |
2013138186 | Mar 2013 | WO |
2014158291 | Oct 2014 | WO |
2014160426 | Oct 2014 | WO |
Entry |
---|
US 7,970,635 B1, 06/2011, Medina et al. (withdrawn) |
Vimeo, LLC, online presentation for Five Focal's engineering service offering titled "Test and Validation," website location https://vimeo.com/85556043, site last visited Aug. 24, 2015. |
Narayanswamy, Ramkumar; Johnson, Gregory E.; Silveira, Paulo E. X.; Wach, Hans B., article titled "Extending the Imaging Volume for Biometric Iris Recognition," published Feb. 2005 in Applied Optics IP, vol. 44, Issue 5, pp. 701-712, website location http://adsabs.harvard.edu/abs/2005ApOpt..44..701N. |
Alshahrani, M. A. A., Real Time Vehicle License Plate Recognition on Mobile Devices (Thesis, Master of Science (MSc)), Mar. 2013, University of Waikato, Hamilton, New Zealand. |
Anagnostopoulos, Christos-Nikolaos E., et al., "License plate recognition from still images and video sequences: A survey," IEEE Transactions on Intelligent Transportation Systems 9.3 (2008): 377-391. |
Chan, Wen-Hsin, and Ching-Twu Youe, "Video CCD based portable digital still camera," IEEE Transactions on Consumer Electronics 41.3 (1995): 455-459. |
Charge-Coupled Device, Wikipedia, the free encyclopedia, Version: Mar. 4, 2013, http://en.wikipedia.org/w/index.php?title=Charge-coupled_device&oldid=542042079. |
CARFAX, Inc., “Find Used Cars for Sale,” iTunes App, updated as of Feb. 18, 2016. |
CARFAX, Inc., “Vehicle History Report,” Mobile App, CARFAX Blog dated Aug. 27, 2012. |
Brandon Turkus, re: DiDi Plate App Report dated Jun. 13, 2014. |
Jason Hahn, “Scan License Plates So You Can Text Flirty Messages to Cute Drivers with GM's New App,” digitaltrends.com (http://www.digitaltrends.com/cars/scan-license-plate-text-drivers-gm-didi-plate-app/) dated Jun. 21, 2014. |
Progressive, “Progressive's Image Capture technology saves users time, helps drivers quote and buy auto insurance using their smartphone camera,” Mayfield Village, Ohio—Feb. 2, 2012. |
Don Jergler, "There's an App for That: Mobile Phone Quoting," Insurance Journal, http://www.insurancejournal.com/news/national/2012/02/21/236521.htm, dated Feb. 21, 2012. |
Number | Date | Country | |
---|---|---|---|
Parent | 15785741 | Oct 2017 | US |
Child | 17110219 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15363960 | Nov 2016 | US |
Child | 15785741 | US | |
Parent | 14716808 | May 2015 | US |
Child | 15363960 | US | |
Parent | 14716738 | May 2015 | US |
Child | 14716808 | US | |
Parent | 14716743 | May 2015 | US |
Child | 14716738 | US | |
Parent | 14716793 | May 2015 | US |
Child | 14716743 | US | |
Parent | 14716651 | May 2015 | US |
Child | 14716793 | US | |
Parent | 14716445 | May 2015 | US |
Child | 14716651 | US | |
Parent | 14613323 | Feb 2015 | US |
Child | 14716808 | US | |
Parent | 14613323 | Feb 2015 | US |
Child | 14716738 | US | |
Parent | 14613323 | Feb 2015 | US |
Child | 14716743 | US | |
Parent | 14613323 | Feb 2015 | US |
Child | 14716793 | US | |
Parent | 14613323 | Feb 2015 | US |
Child | 14716651 | US | |
Parent | 14613323 | Feb 2015 | US |
Child | 14716445 | US | |
Parent | 14455841 | Aug 2014 | US |
Child | 14613323 | US | |
Parent | 14318397 | Jun 2014 | US |
Child | 14455841 | US |