Systems and methods for using machine learning for vehicle damage detection and repair cost estimation

Information

  • Patent Grant
  • Patent Number
    12,020,217
  • Date Filed
    Wednesday, November 11, 2020
  • Date Issued
    Tuesday, June 25, 2024
Abstract
Systems and methods for estimating the repair cost of one or more instances of vehicle damage pictured in a digital image are disclosed herein. These systems and methods may first use a damage detection neural network (NN) model to determine location(s), type(s), intensit(ies), and corresponding repair part(s) for pictured damage. Then, a repair cost estimation NN model may be given a damage type, a damage intensity, and the repair part(s) needed to determine a repair cost estimation. The training of each of the damage detection NN model and the repair cost estimation NN model is described. The manner of outputting results data corresponding to the systems and methods disclosed herein is also described.
Description
TECHNICAL FIELD

This application relates generally to systems for detecting vehicle damage and estimating repair costs of such damage.


BACKGROUND

Vehicles used for transporting people and goods (such as cars, trucks, vans, etc.) can become damaged (e.g., due to collisions, vandalism, acts of nature, etc.). In such cases, it may be important to understand the estimated cost of repairing such damage to the vehicle. For example, an insurance company may need to understand this estimated cost in order to determine whether to repair the damage to the vehicle or whether to total the vehicle out. In other cases, this information may be useful to a layperson with little to no experience with vehicle damage repair cost estimation to help them understand what an expected cost of repair should be (so that they can make informed decisions about having repairs done, such as, for example, verifying that a quoted cost for making a repair under consideration is reasonable).


Current methods may rely on manual and/or in-person evaluation by a person in order to come up with such repair estimates. This may involve the services of a person who is skilled in making such evaluations. However, such persons may not be readily available at all times, and/or may not be readily available to all entities who might be interested in such information in all situations (e.g., in the case where an owner of the damaged vehicle has not involved, e.g., an insurance adjuster to analyze the damage). Accordingly, it is of value to develop ways of estimating such repair costs in a manner that does not require such manual and/or in-person evaluation.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.



FIG. 1 illustrates an image preparation system, according to some embodiments.



FIG. 2 illustrates a method of preparing one or more digital images prior to training a damage detection neural network (NN) model, according to an embodiment.



FIG. 3 illustrates a vehicle damage detection training system, according to some embodiments.



FIG. 4 illustrates a method of training a damage detection NN model, according to an embodiment.



FIG. 5 illustrates a repair cost estimation training system, according to an embodiment.



FIG. 6 illustrates a method of training a repair cost estimation NN model to determine an estimated cost to repair damage, according to an embodiment.



FIG. 7 illustrates a repair cost estimation system, according to an embodiment.



FIG. 8A and FIG. 8B together illustrate a method of making a repair cost estimation, according to an embodiment.



FIG. 9A illustrates a user provided digital image picturing vehicle damage, according to an embodiment.



FIG. 9B illustrates a result of processing on a user provided digital image, according to an embodiment.



FIG. 10 illustrates a user provided digital image of a damaged vehicle that illustrates the overlay of a first vehicle damage segmentation, a second vehicle damage segmentation, and a third vehicle damage segmentation on the user provided digital image, according to an embodiment.



FIG. 11 illustrates a user provided digital image of a damaged vehicle that illustrates the overlay of a first vehicle damage segmentation, a second vehicle damage segmentation, a third vehicle damage segmentation, and a fourth vehicle damage segmentation on the user provided digital image, according to an embodiment.



FIG. 12 illustrates a user device displaying results data, according to an embodiment.





DETAILED DESCRIPTION

Machine learning methods may be developed which can augment or replace the use of manual and/or in-person evaluations of vehicle damage repair costs. These machine learning methods may train one or more neural network (NN) models in order to generate a model corresponding to the neural network that is fit to perform a given task.


One example of such a NN model may be a damage detection NN model trained to receive one or more preprocessed digital images of vehicle damage. Such preprocessed digital images may be generated from, for example, a user provided digital image (including stills from a digital video) that might be taken for purposes of use with a system including the damage detection NN model. In some cases, these user provided digital images may be taken by a layperson using a camera that is found on a smartphone (or other device). In other cases, these user provided digital images may be taken by, for example, a camera operated by a service center computer system. Other cases are contemplated. In whatever case, these user provided digital images may be unannotated (meaning that they have not been manually marked or otherwise indicated to have one or more parameters of interest preparatory to using the image to train a damage detection NN model—as may likely be the case when receiving a user provided digital image for processing at an already-trained damage detection NN model). The damage detection NN model may be trained to process such a user provided digital image (after preprocessing) such that one or more parameters of interest regarding vehicle damage within the user provided digital image (such as a location of the vehicle damage, one or more repair parts corresponding to the vehicle damage, a type of the vehicle damage, and/or an intensity of the vehicle damage) are identified.


Another such example of a NN model may be a repair cost estimation NN model that is trained to receive one or more parameters of interest regarding vehicle damage, such as one or more repair parts corresponding to the vehicle damage, a type of vehicle damage, and an intensity of vehicle damage. This repair cost estimation NN model may be trained to process this data such that an estimation of a cost to repair such damage is made.


The use of two such trained NN models together may allow a user of a system implementing or using such models to provide an (unannotated) user provided digital image picturing vehicle damage, and to ultimately receive an estimated cost to repair such damage (also described herein as a “repair cost estimation”) and/or a view of segmentation data overlaid on the user provided digital image that indicates the location of the corresponding damage.


Embodiments herein discuss systems and methods for training and/or using such NN models.


It is anticipated that the application of systems leveraging these NN models may be useful in many different contexts. For example, service inspection and/or repair providers may be able to more quickly determine the nature of any damage to a vehicle and/or the cost of fixing such damage, making these service providers more efficient. As a further example, rental car companies may be able to determine the nature of damage to a vehicle both before it is rented and after it is returned in a more efficient and remote manner, allowing them to more easily integrate, for example, contactless vehicle rental methods into their business models. As a further example, vehicle insurers may be able to remotely and more quickly review damage to a vehicle and determine the cost of a claim and/or review a vehicle's current damage prior to insuring it. As a further example, laypeople that are untrained in damage analysis may use such systems to determine a reasonable expectation of the cost of making a repair under consideration.


It should be understood that such systems might provide vehicle damage and/or repair cost information via an application programming interface (API) to any consuming software system. Examples of such consuming software systems may include, for example, a vehicle repair order system (in order to capture damage of the vehicle and related estimated cost for the repair order), or a vehicle rental system (in order to, e.g., validate no damage has been done during the rental period), or any other consuming software system that the data might be useful to. These consuming software systems may be examples of “user devices” as described herein.
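
By way of illustration only, a consuming software system might retrieve such vehicle damage and repair cost information over an API along the following lines. The endpoint URL, field names, and values in this sketch are assumptions made for illustration and are not prescribed by the embodiments described herein.

    # Hypothetical sketch of a consuming software system (e.g., a vehicle repair
    # order system) calling a damage/repair-cost API; the endpoint URL and the
    # response field names are illustrative assumptions only.
    import requests

    with open("damaged_vehicle.jpg", "rb") as image_file:
        response = requests.post(
            "https://example.com/api/v1/damage-estimates",  # hypothetical endpoint
            files={"image": image_file},
        )

    results = response.json()
    # Expected response shape (illustrative):
    # {"damages": [{"type": "dent", "intensity": "medium",
    #               "repair_parts": ["front bumper"],
    #               "estimated_repair_cost": 850.0,
    #               "segmentation": "..."}]}
    for damage in results["damages"]:
        print(damage["type"], damage["estimated_repair_cost"])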



FIG. 1 illustrates a digital image preparation system 102, according to some embodiments. The digital image preparation system 102 may be used to annotate one or more digital images preparatory to training a damage detection NN model useful for, for example, identifying one or more parameters of interest regarding vehicle damage within a preprocessed user provided digital image (such as a location of the vehicle damage, one or more repair parts corresponding to the vehicle damage, a damage type of the vehicle damage, and/or an intensity of the vehicle damage). The digital image preparation system 102 may make further modifications (beyond annotations) to the one or more such digital images (whether before or after they are annotated) that are to be used to train the damage detection NN model, as will be described in additional detail below.


It is contemplated that the digital image preparation system 102 could be part of a larger computer-implemented system. Such a larger computer-implemented system could include, in addition to the digital image preparation system 102, one or more of, for example, a vehicle damage detection training system 302, a repair cost estimation training system 502, and/or a repair cost estimation system 702 as those systems are described herein.


The digital image preparation system 102 may include a memory 104, one or more processor(s) 106, one or more I/O device(s) 108, and a network interface 110. These elements may be connected by a data bus 112.


The I/O device(s) 108 may include devices connected to the digital image preparation system 102 that allow a user of the digital image preparation system 102 to provide input to the digital image preparation system 102 and receive output from the digital image preparation system 102. For example, these devices may include a mouse, a keyboard, speakers, a monitor, external storage, etc.


The network interface 110 may connect to a network 114 to allow the digital image preparation system 102 to communicate with outside entities, such as, for example, a digital image database 116, an annotated digital image database 118, and/or a vehicle damage detection system 120. Such communications may be performed via one or more APIs used by the digital image preparation system 102.


The memory 104 may contain digital image preparation instructions 122, which may be used by the processor(s) 106 to operate a digital image preparation engine 128. The memory 104 may further include (not yet annotated) digital images 124 (which may have been sourced from the digital image database 116) which are processed by the digital image preparation system 102, after which they become annotated digital images 126 (which may ultimately be stored at the annotated digital image database 118).



FIG. 2 illustrates a method 200 of preparing one or more digital images prior to training a damage detection NN model, according to an embodiment. The method 200 could be performed by, for example, the digital image preparation engine 128 of the digital image preparation system 102 of FIG. 1 (as instructed by the digital image preparation instructions 122 of FIG. 1). For convenience, the digital images being discussed in relation to the method of FIG. 2 may be referred to in the aggregate as a “set” of digital images.


The method 200 includes receiving 202 digital images picturing vehicle damage from a data source. The data source could be a memory on a device implementing the method 200, or the data source could be some data source accessible (e.g., via a network) to the device implementing the method 200 (such as, for example, the digital image database 116 of FIG. 1).


The method 200 further includes converting 204 the digital images to grey scale. This may be done prior to using the digital images in the training process so as to reduce the complexity of the analysis, in that black and white images can be represented using a single matrix, whereas color images may instead require multiple matrices to be represented. This may increase the training speed as compared to using color images. Converting 204 the digital images to grey scale may occur prior to annotating the digital images (see below). This may increase the accuracy of a damage detection NN model that takes as input grey scaled images (such as at least some embodiments of the damage detection NN model 320 to be described). Converting 204 the digital images to grey scale prior to training may be optional, but may be beneficial in that it leads to a decrease in the time and/or computational complexity of training the damage detection NN model.


The method 200 further includes determining 206 whether a damage detection NN model for identifying one or more parameters of interest (e.g., that are the subject of the annotation process) in an (unannotated) digital image is (already) available to the system implementing the method 200. For example, the digital image preparation engine 128 of FIG. 1 may communicate with the vehicle damage detection system 120 via the network interface 110 in order to determine whether the vehicle damage detection system 120 has an already trained damage detection NN model for identifying one or more of the parameters of interest (or, in the case where the digital image preparation system 102 includes the vehicle damage detection system 120, the digital image preparation system 102 may check its memory 104 for such a trained damage detection NN model). If not, the method 200 proceeds on to labeling 212 the digital images with a damage location. If so, the method 200 proceeds on to providing 208 the digital images to the model.


Labeling 212 the digital images with a damage location may be done manually (e.g., by a user of the digital image preparation system 102 using the I/O device(s) 108). The damage location may be annotated directly on the digital image in a graphical manner (e.g., using user-placed markings relative to the digital image). It is contemplated that at least some digital images may depict damage in more than one location on the vehicle. In these cases, some or all of these damage locations may be so labeled. The output of this process may be saved in annotation data corresponding to the digital image.


The method 200 then proceeds to labeling 214 the digital images with a damage type. This may be done manually (e.g., by a user of the digital image preparation system 102 using the I/O device(s) 108). The damage type used in the labeling 214 may be one of a group of pre-determined damage types for which the user is interested in training a damage detection NN model to recognize. These damage types may include (but are not limited to) a dent damage type, a scratch damage type, a smash damage type, a crack damage type, a lamp broken damage type, and a break damage type. It is contemplated that other damage types could also be used, should the user be interested in training the damage detection NN model to identify such damage types. The damage type may correspond to a labeled damage location within a digital image. In cases where digital images have more than one labeled damage location, a (potentially different) damage type may correspond to each labeled damage location. The output of this process may be saved in annotation data corresponding to the digital image.


The method 200 then proceeds to labeling 216 the digital images with a damage intensity. This may be done manually (e.g., by a user of the digital image preparation system 102 using the I/O device(s) 108). The damage intensity used in the labeling 216 may be one of a group of pre-determined damage intensities for which the user is interested in training a damage detection NN model to recognize. These damage intensities may include (but are not limited to) a minor intensity, a medium intensity, and a severe intensity. It is contemplated that other damage intensities could also be used, should the user be interested in training the damage detection NN model to identify such damage intensities. The damage intensity may correspond to a labeled damage location within a digital image. In cases where digital images have more than one labeled damage location, a (potentially different) damage intensity may correspond to each labeled damage location. The output of this process may be saved in annotation data corresponding to the digital image.


The method 200 then proceeds to labeling 218 the digital images with a repair part. This may be done manually (e.g., by a user of the digital image preparation system 102 using the I/O device(s) 108). This repair part may be a part used to effect a repair of the damaged location. In cases where digital images have more than one labeled damage location, a (potentially different) repair part may correspond to each labeled damage location. It is also anticipated that a user could specify that multiple repair parts correspond to the damage, in cases where multiple repair parts would be needed to repair such damage. The output of this process may be saved in annotation data corresponding to the digital image. The method 200 then proceeds to cleaning 220 irrelevant digital images.


Each of the labeling 212 the digital images with a damage location, the labeling 214 the digital images with a damage type, the labeling 216 the digital images with a damage intensity, and/or the labeling 218 the digital images with a repair part may be considered a form of annotation of the respective digital image. In other words, each digital image now has a set of corresponding annotation data that is to be used during the training of a damage detection NN model using the digital image. Accordingly, at this point in the method 200, it may be said that the digital images are now annotated digital images (digital images for which corresponding annotation data has been generated).
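
As a concrete illustration of the annotation data that the labeling 212, labeling 214, labeling 216, and labeling 218 might produce for a single digital image, consider the following sketch. The field names and the polygon representation of the damage location are assumptions made for illustration; an implementation might equally use another annotation format.

    # Illustrative sketch of annotation data for one digital image; field names
    # and the polygon format for damage locations are assumptions.
    annotation = {
        "image_file": "vehicle_0001.jpg",
        "damages": [
            {
                # Damage location as user-placed (x, y) pixel coordinates
                "location": [(412, 220), (518, 214), (530, 301), (405, 295)],
                "type": "dent",          # one of the pre-determined damage types
                "intensity": "medium",   # minor / medium / severe
                "repair_parts": ["front door panel"],
            },
            {
                "location": [(120, 340), (180, 335), (175, 390), (118, 388)],
                "type": "scratch",
                "intensity": "minor",
                "repair_parts": ["front bumper"],
            },
        ],
    }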


If, at determining 206, it is instead determined that a damage detection NN model for identifying one or more parameters of interest in a digital image is available, the method 200 proceeds to providing 208 the digital images to the damage detection NN model. The damage detection NN model may implement its current training to label damage location(s) and corresponding damage type(s), intensit(ies), and repair part(s) corresponding to each digital image. These labels may then be placed in annotation data corresponding to each digital image for further use during training of a damage detection NN model using each respective digital image, which may be considered a form of annotation of such digital images. Accordingly, at this point in the method 200, it may be said that the digital images are now annotated digital images.


The method 200 then proceeds to the manually verifying and/or correcting 210 of these model-annotated digital images. This may be done by a user of the digital image preparation system 102 using the I/O device(s) 108. The manually verifying and/or correcting 210 may correct any mistakes made in the annotations made by the damage detection NN model used so that the eventual training process using such annotated digital images is more accurate. Further, if the damage detection NN model to be trained is the same as the damage detection NN model used in providing 208 the digital images to the damage detection NN model, modifications to such annotations may be made during the process of manually verifying and/or correcting 210 (or else the eventual training will simply result in the same damage detection NN model that these digital images were provided to). The method 200 then proceeds to cleaning 220 irrelevant digital images.


When cleaning 220 the irrelevant digital images, any digital image with corresponding annotation data that lacks, for example, one or more of a damage location, a damage type, a damage intensity, and/or a repair part may be removed from the set of digital images.


The method 200 further includes normalizing 222 the digital images. The normalization process may account for the fact that different digital images may have different overall ranges of values (e.g., one grey scaled digital image may have darker blacks or lighter whites than another digital image within the set). The normalization process may remove this variance from the set of digital images by causing each of the values of each matrix representing the (perhaps grey scaled, as discussed above) digital image to be normalized according to a defined range (e.g., from −1 to 1). This may increase the eventual accuracy of a damage detection NN model that takes as input normalized images (such as at least some embodiments of the damage detection NN model 320 to be described).


The processes of converting 204 a digital image to grey scale and normalizing 222 such a digital image (whether one or both of these processes are applied) may either individually or jointly be considered a “preprocessing” (as that term is understood herein) of the digital images. Accordingly, in embodiments of the method 200 where the digital images have been normalized (and possibly grey scaled prior to normalization) at this stage, they may be considered “preprocessed digital images.” In this specification, when both a grey scaling and normalization of a digital image are discussed as being performed on a (singular) “digital image” to generate a “preprocessed digital image,” it may be understood that such processes are applied in step-wise fashion starting with the digital image, with an intervening image existing prior to completing the preprocessed digital image (but not explicitly discussed).
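
For illustration, a minimal sketch of such preprocessing is given below, assuming the digital images are handled as NumPy arrays. The luminance weights used for grey scaling and the target range of −1 to 1 are illustrative choices rather than requirements.

    import numpy as np

    def preprocess(image: np.ndarray) -> np.ndarray:
        """Grey scale an RGB digital image and normalize it to the range -1 to 1.

        `image` is an H x W x 3 array of values in [0, 255]; the result is a
        single H x W matrix, per the converting 204 and normalizing 222 steps.
        """
        # Grey scaling using standard luminance weights (one matrix instead of three).
        grey = image[..., 0] * 0.299 + image[..., 1] * 0.587 + image[..., 2] * 0.114
        # Normalization: map this image's own darkest/lightest values into the
        # defined range of -1 to 1, removing per-image variance.
        lo, hi = grey.min(), grey.max()
        if hi == lo:
            return np.zeros_like(grey)
        return (grey - lo) / (hi - lo) * 2.0 - 1.0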


The method 200 further includes augmenting 224 the set of digital images. The set of digital images may be augmented by taking one or more of the digital images and modifying it such that the modified digital image is also represented within the set. A digital image may be so modified by rotation, scaling, inversion, mirroring, cropping, etc. (with corresponding changes made to its corresponding annotation data). In this manner, a more robust data set for training may be generated.
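
A simple sketch of such augmentation is shown below, again assuming NumPy arrays. Only mirroring, inversion, and rotation are illustrated; as noted above, the corresponding annotation data would need matching geometric changes, which are omitted here.

    import numpy as np

    def augment(image: np.ndarray) -> list:
        """Return simple modified variants of a preprocessed digital image."""
        return [
            np.fliplr(image),      # horizontal mirroring
            np.flipud(image),      # vertical inversion
            np.rot90(image, k=1),  # 90-degree rotation
            np.rot90(image, k=2),  # 180-degree rotation
        ]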


During each of the cleaning 220 the irrelevant digital images, the normalizing 222 the digital images, and the augmenting 224 the set of digital images, the corresponding annotation data for each image may be accordingly preserved and/or correspondingly modified. This may in some cases be handled by, for example, the digital image preparation system 102 of FIG. 1 without explicit input from the user.


The method 200 further includes saving 226 the annotated digital images. In other words, the preprocessed digital images along with their corresponding annotation data are saved as annotated digital images. These annotated digital images may be saved as, for example, the annotated digital images 126 in the memory 104 of the digital image preparation system 102 of FIG. 1. Alternatively or additionally, these annotated digital images may be saved in, for example, the annotated digital image database 118 of FIG. 1.



FIG. 3 illustrates a vehicle damage detection training system 302, according to some embodiments. The vehicle damage detection training system 302 may be used to train a damage detection NN model for determining a location of vehicle damage pictured in a user provided digital image, determining one or more repair parts corresponding to the vehicle damage, determining a type of the vehicle damage, and determining an intensity of the vehicle damage.


It is contemplated that the vehicle damage detection training system 302 could be part of a larger computer-implemented system. Such a larger computer-implemented system could include, in addition to the vehicle damage detection training system 302, one or more of, for example, the digital image preparation system 102, the repair cost estimation training system 502, and/or the repair cost estimation system 702 as those systems are described herein.


The vehicle damage detection training system 302 may include a memory 304, one or more processor(s) 306, one or more I/O device(s) 308, and a network interface 310. These elements may be connected by a data bus 312.


The I/O device(s) 308 may include devices connected to the vehicle damage detection training system 302 that allow a user of the vehicle damage detection training system 302 to provide input to the vehicle damage detection training system 302 and receive output from the vehicle damage detection training system 302. For example, these devices may include a mouse, a keyboard, speakers, a monitor, external storage, etc.


The network interface 310 may connect to a network 314 to allow the vehicle damage detection training system 302 to communicate with outside entities, such as, for example, the annotated digital image database 118. Such communications may be performed using one or more APIs used by the vehicle damage detection training system 302.


The memory 304 may contain damage detection NN model training instructions 316, which may be used by the processor(s) 306 to operate a damage detection NN model training engine 318. The memory 304 may further include a damage detection NN model 320, which may be a NN model that can be used to (and/or that is being trained to) determine a location of vehicle damage pictured in a user provided digital image, one or more repair parts corresponding to such vehicle damage, a type of the vehicle damage, and an intensity of the vehicle damage. The memory 304 may further include annotated digital images 322 that are being used by the vehicle damage detection training system 302 to train the damage detection NN model 320. In some embodiments, the annotated digital images 322 may have been sourced from, for example, the annotated digital image database 118.



FIG. 4 illustrates a method 400 of training a damage detection NN model, according to an embodiment. The method 400 may be performed by, for example, the damage detection NN model training engine 318 of the vehicle damage detection training system 302 (as instructed by the damage detection NN model training instructions 316 of FIG. 3). For convenience, the digital images being discussed in relation to the method of FIG. 4 may be referred to in the aggregate as a “set” of digital images.


The method 400 includes receiving 402 annotated digital images. The set of annotated digital images received may be provided from an outside source, such as, for example, the annotated digital image database 118. Annotated digital images so received could also be sourced from, for example, an image preparation system (such as the digital image preparation system 102 of FIG. 1). The set of annotated digital images may be digital images that have been normalized (and possibly grey scaled), as discussed above. The set of annotated digital images may be digital images that are annotated with labels for damage location(s) and corresponding damage type(s), intensit(ies), and repair part(s), in the manner described above.


The method 400 further includes bifurcating 404 the set of annotated digital images into a training subset and a validation subset. The training subset may be used to train a damage detection NN model, and the validation subset may be used to check against the damage detection NN model so trained in order to make a determination regarding the accuracy of the damage detection NN model.
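
For example, the bifurcating 404 might be realized as follows; the 80/20 split ratio and the fixed random seed are illustrative assumptions.

    import random

    def bifurcate(annotated_images: list, train_fraction: float = 0.8, seed: int = 0):
        """Split a set of annotated digital images into training and validation subsets."""
        rng = random.Random(seed)
        shuffled = list(annotated_images)
        rng.shuffle(shuffled)
        split = int(len(shuffled) * train_fraction)
        return shuffled[:split], shuffled[split:]

    # training_subset, validation_subset = bifurcate(annotated_digital_images)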


The method 400 further includes defining 406 a learning rate for the training process. The learning rate controls the amount of adjustment that is made for one or more neurons of the damage detection NN model in relation to a calculated loss between the output of the NN model as currently arranged and the ideal result (e.g., how accurately the damage detection NN model as currently arranged can properly identify damage location(s) and corresponding damage type(s), intensit(ies), and repair part(s)). These losses may be determinable by the system by running an (annotated) digital image through the damage detection NN model as currently arranged and comparing the result to the annotation data corresponding to the digital image.


The method 400 further includes defining 408 an epoch for use with the training data, which indicates how many runs through the training subset are performed during the training process.


The method 400 further includes training 410 the damage detection NN model as a mask region-based convolutional neural network (mask RCNN) using the training subset according to the learning rate and the epoch. Training the damage detection NN model as a mask RCNN may have certain advantages in embodiments herein. It may be that mask RCNNs are particularly suited for detecting one or more types of objects within a digital image. During training as a mask RCNN, the damage detection NN model may be trained to recognize both the location of and one or more classifications of an object found in a digital image using the training subset of the set of annotated digital images, which contains annotated digital images showing vehicle damage and having corresponding annotation data for each of the classification variables. For instance, a damage detection NN model trained as a mask RCNN according to embodiments herein may be capable of identifying the location of vehicle damage. This information may be returned as a mask of the vehicle damage and/or a bounding box surrounding the vehicle damage (as will be illustrated in embodiments below). Further, training the damage detection NN model as a mask RCNN may allow the damage detection NN model to classify such recognized damage according to a type (e.g., scratch, smash, dent, crack, lamp broken, etc.); according to an intensity (e.g., minor, medium, severe, etc.); and according to the corresponding repair part(s) needed to repair the pictured damage.


Transfer learning may be leveraged during the training process. For example, a known NN model may already have various trained layers that can identify points, then lines, then shapes, etc., but may not have one or more final layers that can identify damage locations, types, intensities, and/or corresponding repair parts. In these cases, the layers of the (separate) known NN model for, for example, points, lines, shapes, etc., may be re-used for the current damage detection NN model, and the training process may then train these one or more final layers for making the ultimate identifications desired (e.g., locations, types, intensities, and/or corresponding repair parts). Transfer learning leveraged in this manner may save training time.
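
One possible realization of such transfer learning, sketched with the torchvision library purely for illustration, is to start from a Mask R-CNN pre-trained on a generic data set and to replace only its final prediction heads before training on the annotated digital images. The class count below covers damage types only; a full implementation would also need outputs for intensity and repair part(s) (e.g., additional heads or a combined label scheme), and nothing in the embodiments herein requires this particular library.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    # Start from a Mask R-CNN whose earlier layers already detect points,
    # lines, shapes, etc., and retrain only the final prediction heads.
    num_classes = 1 + 6  # background + dent, scratch, smash, crack, lamp broken, break

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box/classification head.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask head used to predict damage-location masks.
    mask_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_channels, 256, num_classes)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.005)  # the defined learning rate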


The trained damage detection NN model (e.g., the damage detection NN model 320) may accordingly receive as input a preprocessed user provided digital image (e.g., a version of a user provided digital image that has been, for example, normalized (possibly after being grey scaled, where the preprocessed image may be grey scaled in the case where the damage detection NN model was trained on grey scaled images)). The damage detection NN model then may process such an image in order to identify the location(s) of vehicle damage in the user provided digital image. The damage detection NN model may represent and/or store these locations as segmentation data that is configured to be overlaid on the user provided digital image (e.g., configured to be overlaid on the version of the user provided digital image that is not grey scaled and/or normalized). Further, the damage detection NN model may further process the preprocessed user provided digital image to identify damage type(s) corresponding to each damage location, intensit(ies) of damage corresponding to each damage location, and/or repair part(s) needed at each damage location to effectuate a repair of the identified damage.
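
By way of illustration, if the damage detection NN model were implemented as in the sketch above, running it on a preprocessed user provided digital image might look like the following; the three-channel repetition of the grey scaled matrix and the returned dictionary keys are assumptions tied to that particular implementation.

    import numpy as np
    import torch

    def detect_damage(model, preprocessed_image: np.ndarray):
        """Run a trained damage detection NN model on one preprocessed digital image."""
        model.eval()
        # The detector expects a C x H x W float tensor; here the single grey
        # scaled matrix is repeated across three channels (an illustrative choice).
        tensor = torch.as_tensor(preprocessed_image, dtype=torch.float32)
        tensor = tensor.unsqueeze(0).repeat(3, 1, 1)
        with torch.no_grad():
            prediction = model([tensor])[0]
        return (
            prediction["masks"],   # segmentation data configured to be overlaid
            prediction["boxes"],   # bounding boxes surrounding each damage location
            prediction["labels"],  # encoded classification(s) for each detection
            prediction["scores"],  # model confidence for each detection
        )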


It is anticipated that, to the extent that annotations involving multiple repair parts for a single instance of vehicle damage are available in the digital images in the training data used to train the damage detection NN model, the damage detection NN model may be able to identify multiple repair parts that correspond to a single instance of vehicle damage.


Further, the damage detection NN model may be trained on sufficient digital images such that an appropriate accuracy is reached for determining these locations, types, intensities, and repair parts without the use of a reference image of an undamaged vehicle. In other words, it is anticipated that no reference image of an undamaged vehicle is necessary to perform methods disclosed herein using a trained damage detection NN model.


The method 400 further includes determining 412 the accuracy of the model using the validation subset. Once all epochs of training data have been run through, the damage detection NN model is run with the validation subset, and an overall loss (or average loss) based on losses corresponding to each annotated digital image of the validation subset as applied to the damage detection NN model is calculated. For example, these losses may be calculated based on how accurately the damage detection NN model as currently arranged can properly identify damage location(s) and corresponding damage type(s), intensit(ies), and repair part(s) for the digital images in the validation subset, as compared to annotation data corresponding to each of the annotated digital images in the validation subset. In some embodiments, a multi-task loss function is used. For example:

Loss = Lcls + Lbox + Lmask, where:

    • Lcls is the loss according to the classification(s) (e.g., damage type, intensity, and repair parts);
    • Lbox is the loss for predicting the bounding box(es) (e.g., corresponding to the damage location(s)); and
    • Lmask is the loss for predicting the mask(s) (e.g., corresponding to the damage location(s)).
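
By way of a small worked example, the overall loss may simply be the sum of the individual loss terms; the specific term names and values below are illustrative only (an implementation such as torchvision's Mask R-CNN also reports additional region-proposal losses that would be included in the sum).

    def multi_task_loss(loss_terms: dict) -> float:
        """Combine the per-task losses (e.g., Lcls, Lbox, Lmask) into the overall loss."""
        return sum(loss_terms.values())

    # Hypothetical per-task losses for one validation image:
    example = {"Lcls": 0.5, "Lbox": 0.25, "Lmask": 0.25}
    print(multi_task_loss(example))  # 1.0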


The method 400 further includes determining 414 whether the accuracy is acceptable. If the overall loss (or average loss) calculated in relation to the validation subset is less than (or less than or equal to) a given value, the method 400 may deem that the damage detection NN model that has been trained is sufficiently accurate, and may proceed to saving 416 the trained damage detection NN model for later use.


However, if the overall loss (or average loss) calculated in relation to the validation subset is greater than (or greater than or equal to) the given value, the method 400 instead proceeds to determining 418 whether hypertuning is needed. Hypertuning may refer to changing one or both of a learning rate and/or an epoch used by the method 400 in order to generate a different resulting trained NN model. For example, in FIG. 4, if it is determined that such hypertuning is needed, the method 400 may return to defining 406 a learning rate so that one or more of the learning rate and/or the epoch can be modified and a new damage detection NN model can be trained.


If it is determined that hypertuning is not needed, the method 400 then concludes that additional (or new) training using additional annotated digital images may be needed in order to generate an accurate damage detection NN model. In this case, the method 400 proceeds to receiving 402 an additional and/or different set of annotated digital images such that the (now different) set may be used in conjunction with the method 400, with the hope of being able to generate a more accurate damage detection NN model with the help of this additional data.


It is contemplated that determining 418 whether hypertuning is needed and/or determining 420 that additional annotated digital images are needed to help generate a more accurate damage detection NN model could happen in any order (as both actions may be independently capable of changing/improving the output of the damage detection NN model trained by the method 400). Further, in some embodiments, it is also contemplated that perhaps only one or the other of the determining 418 whether hypertuning is needed and/or the determining 420 that additional annotated digital images are needed to help generate a more accurate damage detection NN model is performed during an iteration of the method 400.



FIG. 5 illustrates a repair cost estimation training system 502, according to an embodiment. The repair cost estimation training system 502 may be used to train a repair cost estimation NN model that can be used to determine an estimated cost to repair damage based on a received damage type, a received damage intensity, and one or more received repair parts to make a repair.


It is contemplated that the repair cost estimation training system 502 could be part of a larger computer-implemented system. Such a larger computer-implemented system could include, in addition to the repair cost estimation training system 502, one or more of, for example, the digital image preparation system 102, the vehicle damage detection training system 302, and/or the repair cost estimation system 702 as those systems are described herein.


The repair cost estimation training system 502 may include a memory 504, one or more processor(s) 506, one or more I/O device(s) 508, and a network interface 510. These elements may be connected by a data bus 512.


The I/O device(s) 508 may include devices connected to the repair cost estimation training system 502 that allow a user of the repair cost estimation training system 502 to provide input to the repair cost estimation training system 502 and receive output from the repair cost estimation training system 502. For example, these devices may include a mouse, a keyboard, speakers, a monitor, external storage, etc.


The network interface 510 may connect to a network 514 to allow the repair cost estimation training system 502 to communicate with outside entities, such as, for example, a historical repair cost time series database 516. Such communications may be performed using one or more APIs used by the repair cost estimation training system 502.


The memory 504 may contain repair cost estimation NN model training instructions 518, which may be used by the processor(s) 506 to operate a repair cost estimation NN model training engine 520. The memory 504 may further include a repair cost estimation NN model 522, which may be a NN model that can be used to (e.g., that is being trained to) determine an estimated cost to repair damage based on a received damage type, a received damage intensity, and one or more received repair parts. The memory 504 may further include a historical repair cost time series 524, which in some embodiments may have been sourced from, for example, the historical repair cost time series database 516.



FIG. 6 illustrates a method 600 of training a repair cost estimation NN model to determine an estimated cost to repair damage, according to an embodiment. The method 600 may be performed by, for example, the repair cost estimation NN model training engine 520 of the repair cost estimation training system 502 (as instructed by the repair cost estimation NN model training instructions 518 of FIG. 5).


The method 600 includes initializing 602 the neuron weights within the repair cost estimation NN model that is to be trained. In transfer learning cases, these weights may be partially transferred in from another NN model.


The method 600 includes collecting 604 historical time series data for repair cost according to repair part(s), damage type, and damage intensity. The historical time series data collected may be gathered from an outside source, such as, for example, the historical repair cost time series database 516 of FIG. 5. The historical time series data may be data points, each corresponding to a repair that was made to a damaged vehicle, that are labeled and/or ordered according to time. Each of these time-associated data points may record multiple items per data point, such as the point in time the repair was made, a damage type, a damage intensity, the repair part(s) used to repair the damage, and the cost of making the repair. The relevant time with which each data point is associated may be, for example, a date, a year, or some other record of where in an overall timeline a data point may fit.


The method 600 further includes standardizing 606 the historical time series data. Because the historical time series data points may be associated with items that do not have similar ranges (for example, an item corresponding to the cost of making a repair may have a range from 0 to 50,000 (dollars), whereas an item corresponding to the damage intensity may be a discrete classification system (minor, medium, severe)), it may be beneficial to standardize or normalize all of the items relative to each other, by removing the mean and scaling to unit variance. For example, each item may be translated (as necessary) to represent a numerical value (e.g., in the case of a damage intensity that uses the discrete classification system discussed above, minor may be translated to 1, medium may be translated to 2, and severe may be translated to 3). Then, the values corresponding to each item may be brought into a known range relative to the possible ranges for that item. For example, each data point for each item may be brought into the range of −1 to 1, while keeping the relative distance between each separate data point intact. This known range (whether it be −1 to 1 or some other known range) may be the same range used for each item, which may act to simplify the training of the repair cost estimation NN model because differing ranges do not have to be accounted for within the training process.
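
A minimal sketch of such standardization is given below, showing the translation of a discrete damage intensity to a numerical value and the scaling of each item into the range of −1 to 1 while preserving relative distances. The mapping values and the use of simple min-max scaling (rather than mean removal and unit variance) are illustrative choices.

    import numpy as np

    INTENSITY_TO_NUMBER = {"minor": 1, "medium": 2, "severe": 3}

    def scale_item(values: np.ndarray) -> np.ndarray:
        """Scale one item (column) of the historical time series into [-1, 1],
        keeping the relative distance between data points intact."""
        lo, hi = values.min(), values.max()
        if hi == lo:
            return np.zeros_like(values, dtype=float)
        return (values - lo) / (hi - lo) * 2.0 - 1.0

    intensities = np.array([INTENSITY_TO_NUMBER[i] for i in ("minor", "severe", "medium")])
    costs = np.array([450.0, 12800.0, 2300.0])
    print(scale_item(intensities))  # [-1.  1.  0.]
    print(scale_item(costs))        # costs mapped into [-1, 1]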


The method 600 further includes bifurcating 608 the historical time series data into a training subset and a validation subset. The training subset may be used to train a repair cost estimation NN model, and the validation subset may be used to check against the repair cost estimation NN model so trained in order to make a determination regarding the accuracy of the repair cost estimation NN model.


The method 600 further includes defining 610 a learning rate for the training process. The learning rate controls the amount of adjustment that is made for one or more neurons of the repair cost estimation NN model in relation to a calculated loss between the output of the repair cost estimation NN model as currently arranged and the ideal result (e.g., how accurately the repair cost estimation NN model as currently arranged can properly estimate a cost of a repair). These losses may be determinable by the system by running a data point containing repair part(s), a damage type, and a damage intensity through the repair cost estimation NN model as currently arranged and comparing the resulting repair cost output to the actual repair cost associated with the data point.


The method 600 further includes defining 612 an epoch for use with the training data, which indicates how many runs through the training subset are performed during the training process.


The method 600 further includes using 614 a long short-term memory (LSTM) recurrent neural network (RNN) to train the repair cost estimation NN model using the training subset, according to the learning rate and the epoch.


Training the repair cost estimation NN model as an LSTM RNN may have certain advantages in embodiments herein. It may be that LSTM RNNs are particularly suited for predicting a result based on an input set of data with a view of analogous data collected over a previous time period. LSTM RNNs may be able to make predictions of results that account for, at least in part, variations over time within such data (and, e.g., in response to trends over time). Accordingly, during training as an LSTM RNN, the repair cost estimation NN model may be trained to make a repair cost estimation based on repair part(s) corresponding to vehicle damage, a type of the vehicle damage, and an intensity of the vehicle damage, and in view of pricing trends over time, using the training subset of the historical time series data, which contains this type of data for each timestamped data point. This information may be returned as textual information (as illustrated below).
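
A minimal sketch of such a repair cost estimation NN model, written with PyTorch purely for illustration, is shown below. The layer sizes, the three-feature encoding (damage type, intensity, repair part), and the use of the final time step's output are assumptions; the embodiments herein do not prescribe a particular framework or architecture beyond the LSTM RNN approach.

    import torch
    from torch import nn

    class RepairCostEstimator(nn.Module):
        """LSTM-based sketch mapping a time-ordered sequence of standardized
        (damage type, intensity, repair part) feature vectors to an estimated
        repair cost at the most recent time step."""

        def __init__(self, num_features: int = 3, hidden_size: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)  # single output: estimated cost

        def forward(self, sequence: torch.Tensor) -> torch.Tensor:
            # sequence shape: (batch, time steps, num_features)
            output, _ = self.lstm(sequence)
            return self.head(output[:, -1, :])

    model = RepairCostEstimator()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # the defined learning rate
    loss_fn = nn.MSELoss()  # loss between estimated and actual repair cost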


In some embodiments, it is contemplated that a repair cost estimation NN model (e.g., the repair cost estimation NN model 522) can be trained to receive other types of inputs. For example, the data points of the training subset of the historical time series data may also include a geographic indication (such as city, zip code, country, county, etc.) where the repair took place. The training process may then configure the repair cost estimation NN model 522 to account for geographical location (as well) when making a repair cost estimation.


The trained repair cost estimation NN model (e.g., the repair cost estimation NN model 522) may accordingly receive as input a damage type of vehicle damage, a damage intensity of the vehicle damage, and repair part(s) used to repair the vehicle damage. The repair cost estimation NN model then may process this information and identify an estimated cost to repair the vehicle damage at the current time.


In some embodiments, it is contemplated that a repair cost estimation NN model (e.g., the repair cost estimation NN model 522) can give output corresponding to additional types of inputs. For example, the repair cost estimation NN model 522 may be trained to receive a geographic indication (as described above) such that it can further determine the estimated cost to repair damage based on a received geographic indication.


The method 600 further includes determining 616 the accuracy of the model using the validation subset. Once all epochs of training data have been run through, the repair cost estimation NN model is run with the validation subset, and an overall loss (or average loss) based on losses corresponding to each data point of the validation subset as applied to the repair cost estimation NN model is calculated. For example, these losses may be calculated based on how accurately the repair cost estimation NN model as currently arranged can properly estimate a cost of repair at the timestamped time for each data point in the validation subset using the repair part(s) corresponding to the vehicle damage, the type of the vehicle damage, and the intensity of the vehicle damage from each respective data point, as compared to the actual repair cost at that same time that is known for that data point.


The method 600 further includes determining 618 whether the accuracy is acceptable. If the overall loss (or average loss) calculated in relation to the validation subset is less than (or less than or equal to) a given value, the method 600 may deem that the repair cost estimation NN model that has been trained is sufficiently accurate, and may proceed to saving the trained repair cost estimation NN model for later use.


However, if the overall loss (or average loss) calculated in relation to the validation subset is greater than (or greater than or equal to) the given value, the method 600 instead proceeds to determining 620 whether hypertuning is needed. Hypertuning may refer to changing one or both of a learning rate and/or an epoch used by the method 600 in order to generate a different resulting trained repair cost estimation NN model. For example, in FIG. 6, if it is determined that such hypertuning is needed, the method 600 may return to defining 610 a learning rate so that one or more of the learning rate and/or the epoch can be modified and a new repair cost estimation NN model can be trained.


If it is determined that hypertuning is not needed, the method 600 then concludes that additional or different historical time series data may be needed in order to generate an accurate model. In this case, the method 600 proceeds to collecting 604 (additional) historical time series data such that a larger and/or different set of such data may be used in conjunction with the method 600, with the hope of being able to generate a more accurate repair cost estimation NN model with the help of this additional data.


It is contemplated that determining 620 whether hypertuning is needed and/or determining 622 that additional historical time series data are needed to help generate a more accurate repair cost estimation NN model could happen in any order (as both actions may be independently capable of changing/improving the output of the repair cost estimation NN model trained by the method 600). Further, in some embodiments, it is also contemplated that perhaps only one or the other of determining 620 whether hypertuning is needed and/or determining 622 that additional historical time series data is needed to help generate a more accurate repair cost estimation NN model is performed during an iteration of the method 600.



FIG. 7 illustrates a repair cost estimation system 702, according to an embodiment. The repair cost estimation system 702 may be used to make a repair cost estimation corresponding to damage in a user provided digital image received at the repair cost estimation system 702. Further, the repair cost estimation system 702 may be capable of overlaying segmentation data on the user provided digital image that illustrates the location of the damage in the image for which the repair cost estimation has been made.


It is contemplated that the repair cost estimation system 702 could be part of a larger computer-implemented system. Such a larger computer-implemented system could include, in addition to the repair cost estimation system 702, one or more of, for example, the digital image preparation system 102, the vehicle damage detection training system 302, and/or the repair cost estimation training system 502 as those systems are described herein.


The repair cost estimation system 702 may include a memory 704, one or more processor(s) 706, one or more I/O device(s) 708, and a network interface 710. These elements may be connected by a data bus 712.


The I/O device(s) 708 may include devices connected to the repair cost estimation system 702 that allow a user of the repair cost estimation system 702 to provide input to the repair cost estimation system 702 and receive output from the repair cost estimation system 702. For example, these devices may include a mouse, a keyboard, speakers, a monitor, external storage, etc.


The network interface 710 may connect to a network 714 to allow the repair cost estimation system 702 to communicate with outside entities, such as, for example, a user device 716, a repair parts pricing database 718, the vehicle damage detection training system 302, and/or the repair cost estimation training system 502. The repair parts pricing database 718 may be a database containing current information regarding the price(s) of one or more repair parts. The vehicle damage detection training system 302 may be, e.g., the vehicle damage detection training system 302 of FIG. 3, and the repair cost estimation training system 502 may be, e.g., the repair cost estimation training system 502 of FIG. 5. Such communications may be performed using one or more APIs used by the repair cost estimation system 702.


The memory 704 may contain repair cost estimation instructions 720, which may be used by the processor(s) 706 to operate a repair cost estimation engine 722. The memory 704 may further include the damage detection NN model 320, which may be a NN model that has been trained to identify a location of, type of, intensity of, and repair part(s) corresponding to one or more instances of pictured vehicle damage in a preprocessed user provided digital image, as described above. The damage detection NN model 320 may have been received at the repair cost estimation system 702 from the vehicle damage detection training system 302 after training at the vehicle damage detection training system 302, in the manner described above. The memory 704 may further include the repair cost estimation NN model 522 which may be a NN model that has been trained to determine an estimated cost to repair damage based on a received damage type, a received damage intensity, and received repair part(s) to make a repair, in the manner described above. This repair cost estimation NN model 522 may have been received at the repair cost estimation system 702 from the repair cost estimation training system 502 after training at the repair cost estimation training system 502, in the manner described above. The memory 704 may further include a user provided digital image 724. The user provided digital image 724 may have been received from the user device 716. The user provided digital image 724 may depict a damaged vehicle that has one or more damage locations, types, and/or intensities, in the manner described herein. The memory 704 may further include a preprocessed user provided digital image 726 that was generated by the repair cost estimation system 702 from the user provided digital image 724 using the processor(s) 706. The memory 704 may further include results data 728 that contains the results of processing at the processor(s) 706 of the preprocessed user provided digital image 726 with the damage detection NN model 320. This results data 728 may include one or more of segmentation data to be overlaid on the user provided digital image 724 indicating the location of the vehicle damage, an estimated cost to repair the vehicle damage, the type of the vehicle damage, the intensity of the vehicle damage, and/or the one or more repair part(s) corresponding to the vehicle damage.



FIG. 8A and FIG. 8B together illustrate a method 800 of making a repair cost estimation, according to an embodiment. The method 800 may be performed by, for example, the repair cost estimation engine 722 of the repair cost estimation system 702 (as instructed by the repair cost estimation instructions 720 of FIG. 7).


The method 800 includes receiving 802 a user provided digital image containing unannotated vehicle damage. For example, the repair cost estimation system 702 may receive such user provided digital image as the user provided digital image 724 over the network 714 from the user device 716.


The method 800 further includes generating 804 a preprocessed user provided digital image. This may be done by applying normalization to the user provided digital image. This may further be accomplished in some cases by applying grey scaling to the user provided digital image, which may allow compatibility with the damage detection NN model 320, in the case the damage detection NN model 320 was trained using grey scaled digital images, in the manner described above. This generating 804 may be performed by the processor(s) 706 using the repair cost estimation engine 722. This generating 804 may generate the preprocessed user provided digital image 726.


The method 800 further includes providing 806, to a damage detection NN model, the preprocessed user provided digital image. For example, the repair cost estimation system 702 may provide the preprocessed user provided digital image 726 to the damage detection NN model 320 for processing at the repair cost estimation system 702 (using the one or more processor(s) 706). Alternatively, the preprocessed user provided digital image 726 may be provided over the network 714 to a damage detection NN model hosted at the vehicle damage detection training system 302, which then processes the preprocessed user provided digital image 726 using the hosted damage detection NN model 320 using one or more processor(s) of the vehicle damage detection training system 302.


The method 800 further includes receiving 808, from the damage detection NN model, a location of the damage, corresponding repair part(s), a type of the damage, and an intensity of the damage. In the case of processing that occurs on the vehicle damage detection training system 302, the vehicle damage detection training system 302 returns these results to the repair cost estimation system 702. These results may represent the location of damage in the user provided digital image 724 as segmentation data that is configured to be overlaid on the user provided digital image 724. Further, these results may identify a damage type corresponding to the damage location, an intensity of the damage corresponding to the damage location, and/or one or more repair parts needed at the damage location to effectuate a repair of the identified damage. These results may be stored in the results data 728.


The method 800 further includes providing 810, to a repair cost estimation NN model, the corresponding repair part(s), the type of the damage, and the intensity of the damage. For example, the repair cost estimation system 702 may provide this information to the repair cost estimation NN model 522 for processing at the repair cost estimation system 702 (using the processor(s) 706). Alternatively, this information may be provided over the network 714 to a repair cost estimation NN model hosted at the repair cost estimation training system 502, which then processes this data with the hosted repair cost estimation NN model 522 using one or more processor(s) of the repair cost estimation training system 502. This information may be sourced from the results data 728 (as previously stored there).
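One possible way to encode the inputs for the providing 810 step is a fixed-length feature vector; the category vocabularies below are hypothetical examples and would, in practice, match the categories used during training of the repair cost estimation NN model 522.

```python
import numpy as np

# Hypothetical vocabularies; a deployed system would reuse the categories seen during training.
DAMAGE_TYPES = ["dent", "scratch", "smash", "crack", "lamp broken"]
INTENSITIES = ["minor", "medium", "severe"]
REPAIR_PARTS = ["none", "front bumper", "front passenger door", "rear passenger door"]

def encode_cost_model_input(damage_type, intensity, parts):
    """Build a fixed-length feature vector from a detection's type, intensity, and part(s)."""
    type_vec = np.array([float(damage_type == t) for t in DAMAGE_TYPES])
    intensity_vec = np.array([float(intensity == i) for i in INTENSITIES])
    parts_vec = np.array([float(p in parts) for p in REPAIR_PARTS])  # multi-hot over parts
    return np.concatenate([type_vec, intensity_vec, parts_vec])
```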


The method 800 further includes receiving 812, from the repair cost estimation NN model, an estimated repair cost. This result may represent the expected or estimated total cost of repairing the unannotated vehicle damage from the user provided digital image at the present time. This result may be stored in the results data 728.


The method 800 further optionally includes determining 814 a current cost of the corresponding repair part(s). The current cost of the corresponding repair part(s) may be determined in real time by communicating with, for example, the repair parts pricing database 718.
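The determining 814 step could, for example, be a lookup against a local pricing table standing in for the repair parts pricing database 718; the table schema and file name below are assumptions for illustration only.

```python
import sqlite3

def current_part_costs(part_names, db_path="parts_pricing.db"):
    """Fetch the most recent price for each repair part from a hypothetical pricing table.

    Assumes a table part_prices(part_name TEXT, price REAL, updated TEXT).
    """
    conn = sqlite3.connect(db_path)
    try:
        costs = {}
        for name in part_names:
            row = conn.execute(
                "SELECT price FROM part_prices WHERE part_name = ? "
                "ORDER BY updated DESC LIMIT 1",
                (name,),
            ).fetchone()
            if row is not None:
                costs[name] = row[0]
        return costs
    finally:
        conn.close()
```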


The method 800 further optionally includes updating 816 the estimated repair cost based on the current cost of the corresponding repair part(s). This may involve adjusting the estimated repair cost based on how closely the current cost of the corresponding repair part(s) aligns with the repair cost estimation. For example, if the current cost of (any one of) the corresponding repair part(s) is determined to have recently become much higher or lower (e.g., as discovered by referring again to the repair parts pricing database 718), the repair cost estimation system 702 may determine that the repair part costs that were used to train the repair cost estimation NN model may have been somewhat out of date, and may adjust the estimated repair cost upward or downward accordingly.
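A very simple form of the updating 816 adjustment is sketched below: the estimate is shifted by the total drift between part prices assumed to have been retained from training time and the current prices. This is an illustrative assumption; other adjustment schemes could equally be used.

```python
def adjust_estimate(estimated_cost, training_part_costs, current_part_costs):
    """Shift the model's estimate by the total drift in part prices since training.

    Both arguments map part names to prices; parts missing from the training-era
    data contribute no drift. Real systems might apply a more nuanced correction.
    """
    drift = sum(
        current_price - training_part_costs.get(name, current_price)
        for name, current_price in current_part_costs.items()
    )
    return estimated_cost + drift
```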


The method 800 further includes providing 818, to a user device, results data comprising the estimated repair cost and segmentation data overlaid on the user provided digital image indicating the location of the unannotated vehicle damage. For example, the estimated repair cost may be an estimated repair cost predicted by the repair cost estimation NN model 522 and stored at the results data 728. The segmentation data may also be sourced from the results data 728. Then, such segmentation data is overlaid by the repair cost estimation system 702 on the user provided digital image 724. The estimated repair cost and the user provided digital image 724 overlaid with the segmentation data are then delivered to the user device 716 over the network 714.
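For the providing 818 step, the segmentation data could be rendered onto the user provided digital image before delivery, for instance as bounding boxes with short text labels; the dictionary keys and drawing choices below are illustrative assumptions.

```python
from PIL import Image, ImageDraw

def overlay_results(image_path, damages, out_path="results_overlay.png"):
    """Draw a bounding box and a short text label for each detected damage onto the photo.

    `damages` is a list of dicts with "box" as (left, top, right, bottom),
    "damage_type", and "estimated_repair_cost"; the key names are illustrative.
    """
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for damage in damages:
        left, top, right, bottom = damage["box"]
        draw.rectangle((left, top, right, bottom), outline="red", width=3)
        label = f'{damage["damage_type"]}: ${damage["estimated_repair_cost"]:.0f}'
        draw.text((left, max(top - 12, 0)), label, fill="red")
    image.save(out_path)
    return out_path
```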



FIG. 9A illustrates a user provided digital image 902 picturing vehicle damage, according to an embodiment. The user provided digital image 902 may be an image sent to, for example, the repair cost estimation system 702 for processing, according to an embodiment. The user provided digital image 902 as shown in FIG. 9A includes the damaged vehicle 904, with a visibly damaged front end.



FIG. 9B illustrates a result of processing on the user provided digital image 902, according to an embodiment. The user provided digital image 902 may have been processed by, for example, the repair cost estimation system 702 using the processor(s) 706, in the manner described above. As a result, the damage detection NN model 320 of the repair cost estimation system 702 has identified a first vehicle damage segmentation 906 identifying the location of first vehicle damage 912, a second vehicle damage segmentation 908 identifying the location of second vehicle damage 914, and a third vehicle damage segmentation 910 identifying the location of third vehicle damage 916 for the damaged vehicle 904 of the user provided digital image 902. Each of these “vehicle damage segmentations” may be a graphical representation of “segmentation data” as described above. Further, the processing using the damage detection NN model 320 may have also determined a type, a location, and corresponding repair part(s) for each of the first vehicle damage 912 (represented by the first vehicle damage segmentation 906), the second vehicle damage 914 (represented by the second vehicle damage segmentation 908), and the third vehicle damage 916 (represented by the third vehicle damage segmentation 910). For each of the first vehicle damage 912, the second vehicle damage 914, and the third vehicle damage 916, the segmentation data indicating the location of the damage, the damage type, the damage intensity, and the corresponding repair part(s) may be saved in, for example, the memory 704 of the repair cost estimation system 702 (e.g., as the results data 728).



FIG. 10 illustrates a user provided digital image 1002 of a damaged vehicle 1004 that illustrates the overlay of a first vehicle damage segmentation 1006, a second vehicle damage segmentation 1008, and a third vehicle damage segmentation 1010 on the user provided digital image 1002, according to an embodiment. The user provided digital image 1002 may have been provided by the user to the repair cost estimation system 702 as showing first vehicle damage 1012, second vehicle damage 1014, and third vehicle damage 1016, but did not include the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and/or the third vehicle damage segmentation 1010. As illustrated, it is not necessarily a requirement that a user provided digital image such as the user provided digital image 1002 provided to the repair cost estimation system 702 include the entire vehicle. The user provided digital image 1002 with the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and/or the third vehicle damage segmentation 1010 may be one form of results data generated by a repair cost estimation system 702 and provided to (and displayed on) a user device 716 in response to receiving the user provided digital image 1002 from the user device 716. Each of these "vehicle damage segmentations" may be a graphical representation of "segmentation data" as described above.


As shown, the first vehicle damage segmentation 1006 may illustrate the location of the first vehicle damage 1012, the second vehicle damage segmentation 1008 may illustrate the location of the second vehicle damage 1014, and the third vehicle damage segmentation 1010 may illustrate the location of the third vehicle damage 1016. These illustrations may take the form of the depicted bounding boxes and/or masks indicating the location of the respective identified damage. Further, as illustrated, each of these bounding boxes may include for display a textual representation of a damage type corresponding to the damage pictured. Additional and/or alternative textual representations may be similarly made, as discussed in further detail below.


Further, the first vehicle damage 1012, the second vehicle damage 1014, and the third vehicle damage 1016 may have each been independently identified by the repair cost estimation system 702 as being of one of various possible damage types. For example, the first vehicle damage 1012 may have been identified by the repair cost estimation system 702 as a scratch damage type, the second vehicle damage 1014 may have been identified by the repair cost estimation system 702 as a dent damage type, and the third vehicle damage 1016 may have been identified by the repair cost estimation system 702 as a smash damage type. As illustrated, these types may respectively be displayed textually in, for example, the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and the third vehicle damage segmentation 1010.


Further, the first vehicle damage 1012, the second vehicle damage 1014, and the third vehicle damage 1016 may have each been independently identified by the repair cost estimation system 702 as being of various possible damage intensities. For example, the first vehicle damage 1012 may have been identified as of a severe damage intensity, the second vehicle damage 1014 may have been identified by the repair cost estimation system 702 as of a minor damage intensity, and the third vehicle damage 1016 may have been identified by the repair cost estimation system 702 as of a severe damage intensity. While not illustrated, it is contemplated that these intensities may respectively be displayed textually in, for example, the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and the third vehicle damage segmentation 1010.


Various (and potentially different) repair parts may have been identified by the repair cost estimation system 702 as corresponding to the first vehicle damage 1012, the second vehicle damage 1014, and the third vehicle damage 1016. For example, a “rear passenger door” repair part may have been identified to correspond to the first vehicle damage 1012, a “none” repair part may have been identified to correspond to the second vehicle damage 1014 (meaning that the repair can be done without a repair part), and a “front passenger door” repair part may have been identified to correspond to the third vehicle damage 1016. It is further anticipated that, to the extent that data involving multiple repair parts for a single instance of vehicle damage is available in the training data used to train the damage detection NN model 320, the repair cost estimation system 702 may be able to identify multiple repair parts that correspond to one of the first vehicle damage 1012, the second vehicle damage 1014, and/or the third vehicle damage 1016. For example, in addition to the “front passenger door” part, a “front passenger window mechanism” may also be identified as corresponding to the third vehicle damage 1016. While not illustrated, it is contemplated that these repair parts may respectively be displayed textually in, for example, the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and the third vehicle damage segmentation 1010.


Once identified, the damage type, the damage intensity, and the corresponding repair part(s) for one or more of the first vehicle damage 1012, the second vehicle damage 1014, and/or the third vehicle damage 1016 may be processed with the repair cost estimation NN model 522 in order to generate a repair cost estimation for one (or more) of these. This repair cost estimation may also be an example of results data that is sent to and displayed on the user device 716. While not illustrated, it is contemplated that the repair cost estimations may respectively be displayed textually in, for example, the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and the third vehicle damage segmentation 1010.


It is further contemplated that the damage type, the damage intensity, and the corresponding repair part(s) for each of these could also be sent to (and displayed on) the user device 716 as results data.



FIG. 11 illustrates a user provided digital image 1102 of a damaged vehicle 1104 that illustrates the overlay of a first vehicle damage segmentation 1106, a second vehicle damage segmentation 1108, a third vehicle damage segmentation 1110, and a fourth vehicle damage segmentation 1112 on the user provided digital image 1102, according to an embodiment. The user provided digital image 1102 may have been provided by the user to the repair cost estimation system 702 as showing first vehicle damage 1114, second vehicle damage 1116, third vehicle damage 1118, and fourth vehicle damage 1120, but did not include the first vehicle damage segmentation 1106, the second vehicle damage segmentation 1108, the third vehicle damage segmentation 1110, or the fourth vehicle damage segmentation 1112. As illustrated, it is not necessarily a requirement that a user provided digital image such as the user provided digital image 1102 provided to the repair cost estimation system 702 include the entire vehicle. The user provided digital image 1102 with the first vehicle damage segmentation 1106, the second vehicle damage segmentation 1108, the third vehicle damage segmentation 1110, and/or the fourth vehicle damage segmentation 1112 may be one form of results data generated by a repair cost estimation system 702 and provided to (and displayed on) a user device 716 in response to receiving the user provided digital image 1102 from the user device 716. Each of these "vehicle damage segmentations" may be a graphical representation of "segmentation data" as described above.


As shown, the first vehicle damage segmentation 1106 may illustrate the location of the first vehicle damage 1114, the second vehicle damage segmentation 1108 may illustrate the location of the second vehicle damage 1116, the third vehicle damage segmentation 1110 may illustrate the location of the third vehicle damage 1118, and the fourth vehicle damage segmentation 1112 may illustrate the location of the fourth vehicle damage 1120. These illustrations may take the form of the depicted bounding boxes and/or masks indicating the location of the respective identified damage. Further, as illustrated, each of these bounding boxes may include for display a textual representation of a damage type corresponding to the damage pictured. Additional and/or alternative textual representations may be similarly made, as discussed in further detail below.


Further, the first vehicle damage 1114, the second vehicle damage 1116, the third vehicle damage 1118, and the fourth vehicle damage 1120 may have each been independently identified by the repair cost estimation system 702 as being of one of various possible damage types. For example, the first vehicle damage 1114 may have been identified by the repair cost estimation system 702 as a lamp broken damage type, the second vehicle damage 1116 may have been identified by the repair cost estimation system 702 as a crack damage type, the third vehicle damage 1118 may have been identified by the repair cost estimation system 702 as a smash damage type, and the fourth vehicle damage 1120 may have been identified by the repair cost estimation system 702 as a scratch damage type. As illustrated, these types may respectively be displayed textually in, for example, the first vehicle damage segmentation 1106, the second vehicle damage segmentation 1108, the third vehicle damage segmentation 1110, and the fourth vehicle damage segmentation 1112.


Further, the first vehicle damage 1114, the second vehicle damage 1116, the third vehicle damage 1118, and the fourth vehicle damage 1120 may have each been independently identified by the repair cost estimation system 702 as being of various possible damage intensities. For example, the first vehicle damage 1114 may have been identified as of a medium damage intensity, the second vehicle damage 1116 may have been identified by the repair cost estimation system 702 as of a severe damage intensity, the third vehicle damage 1118 may have been identified by the repair cost estimation system 702 as of a severe damage intensity, and the fourth vehicle damage 1120 may have been identified by the repair cost estimation system 702 as of a medium damage intensity. While not illustrated, it is contemplated that these intensities may respectively be displayed textually in, for example, the first vehicle damage segmentation 1106, the second vehicle damage segmentation 1108, the third vehicle damage segmentation 1110, and the fourth vehicle damage segmentation 1112.


Various (and potentially different) repair parts may have been identified by the repair cost estimation system 702 as corresponding to the first vehicle damage 1114, the second vehicle damage 1116, the third vehicle damage 1118, and the fourth vehicle damage 1120. For example, a "front driver headlamp" repair part may have been identified to correspond to the first vehicle damage 1114, a "front bumper" repair part may have been identified to correspond to the second vehicle damage 1116, a "front driver side panel" repair part may have been identified to correspond to the third vehicle damage 1118, and a "front driver side panel" repair part may have been identified to correspond to the fourth vehicle damage 1120. It is further anticipated that, to the extent that data involving multiple repair parts for a single instance of vehicle damage is available in the training data used to train the damage detection NN model 320, the repair cost estimation system 702 may be able to identify multiple repair parts that correspond to one of the first vehicle damage 1114, the second vehicle damage 1116, the third vehicle damage 1118, and/or the fourth vehicle damage 1120. For example, in addition to the identification of the "front bumper" repair part, a "radiator" repair part may also be identified as corresponding to the second vehicle damage 1116. While not illustrated, it is contemplated that these repair parts may respectively be displayed textually in, for example, the first vehicle damage segmentation 1106, the second vehicle damage segmentation 1108, the third vehicle damage segmentation 1110, and the fourth vehicle damage segmentation 1112.


Once identified, the damage type, the damage intensity, and the corresponding repair part(s) for one or more of the first vehicle damage 1114, the second vehicle damage 1116, the third vehicle damage 1118, and/or the fourth vehicle damage 1120 may be processed with the repair cost estimation NN model 522 in order to generate a repair cost estimation for one (or more) of these. This repair cost estimation may also be an example of results data that is sent to and displayed on the user device 716. It is further contemplated that the damage type, the damage intensity, and the corresponding repair part(s) for each of these could also be sent to (and displayed on) the user device 716 as results data.



FIG. 12 illustrates a user device 716 displaying results data, according to an embodiment. While the user device 716 of FIG. 12 has been illustrated as a smartphone, it is contemplated that any other types of suitable user devices such as, for example, personal computers, tablet computers, etc., may be used as “user devices” as described herein.


The user device 716 may display some of the results data in a display region 1202. The results data shown in the display region 1202 may include the user provided digital image 1002 that was provided to the repair cost estimation system 702 as described in relation to FIG. 10, including the first vehicle damage 1012, the second vehicle damage 1014, and the third vehicle damage 1016. The results data shown in the display region 1202 may further include the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and the third vehicle damage segmentation 1010 (including any textual matter as described above therein, as illustrated (but not legible) in the first vehicle damage segmentation 1006, the second vehicle damage segmentation 1008, and the third vehicle damage segmentation 1010 reproduced in the display region 1202) overlaid on the user provided digital image 1002, in the manner described above.


The user device 716 may display some of the results data in a textual information region 1204. In the example of FIG. 12, the user has made a selection 1206 of textual data corresponding to the first vehicle damage 1012, causing an indicator 1208 to appear on the screen to indicate the first vehicle damage 1012. The results data in the textual information region 1204 includes an estimated cost to repair the first vehicle damage 1012. Further, as illustrated, the results data in the textual information region 1204 also includes the intensity of the first vehicle damage 1012, the type of the first vehicle damage 1012, and the corresponding repair part for the first vehicle damage 1012.


Textual information region 1204 may also include textual information for the second vehicle damage 1014 and the third vehicle damage 1016, and a user may cause an indicator similar to the indicator 1208 to appear in relation to either of these within the display region 1202 by making a selection of the associated corresponding textual information.


The foregoing specification has been described with reference to various embodiments, including the best mode. However, those skilled in the art appreciate that various modifications and changes can be made without departing from the scope of the present disclosure and the underlying principles of the present disclosure. Accordingly, this disclosure is to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope thereof. Likewise, benefits, other advantages, and solutions to problems have been described above with regard to various embodiments. However, benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element.


As used herein, the terms “comprises,” “comprising,” or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.


Embodiments herein may include various engines, which may be embodied in machine-executable instructions to be executed by a general-purpose or special-purpose computer (or other electronic device). Alternatively, the engine functionality may be performed by hardware components that include specific logic for performing the function(s) of the engines, or by a combination of hardware, software, and/or firmware.


Principles of the present disclosure may be reflected in a computer program product on a tangible computer-readable storage medium having instructions stored thereon that may be used to program a computer (or other electronic device) to perform processes described herein. Any suitable computer-readable storage medium may be utilized, including magnetic storage devices (hard disks, floppy disks, and the like), optical storage devices (CD-ROMs, DVDs, Blu-Ray discs, and the like), flash memory, and/or other types of medium/machine readable medium suitable for storing electronic instructions. These instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified. These instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified. The instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified.


Principles of the present disclosure may be reflected in a computer program implemented as one or more software modules or components. As used herein, a software module or component may include any type of computer instruction or computer-executable code located within a memory device and/or computer-readable storage medium. A software module may, for instance, comprise one or more physical or logical blocks of computer instructions, which may be organized as a routine, a program, an object, a component, a data structure, etc., that perform one or more tasks or implement particular data types.


In certain embodiments, a particular software module may comprise disparate instructions stored in different locations of a memory device, which together implement the described functionality of the module. Indeed, a module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices. Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network. In a distributed computing environment, software modules may be located in local and/or remote memory storage devices. In addition, data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.


Suitable software to assist in implementing the disclosed embodiments is readily provided by those of skill in the pertinent art(s) using the teachings presented here and programming languages and tools, such as Java, JavaScript, Pascal, C++, C, database languages, APIs, SDKs, assembly, firmware, microcode, and/or other languages and tools.


Embodiments as disclosed herein may be computer-implemented in whole or in part on a digital computer. The digital computer includes a processor performing the required computations. The computer further includes a memory in electronic communication with the processor to store a computer operating system. The computer operating system may include, but is not limited to, MS-DOS, Windows, Linux, Unix, AIX, CLIX, QNX, OS/2, and MacOS. Alternatively, it is expected that future embodiments will be adapted to execute on other future operating systems.


It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the disclosed embodiments.

Claims
  • 1. A computer-implemented method of training a vehicle damage detection neural network model, comprising:
    collecting a set of annotated digital images from a database, each annotated digital image comprising one or more annotations each corresponding to vehicle damage pictured in the annotated digital image, each annotation comprising:
      a location of the vehicle damage pictured in the annotated digital image;
      a repair part corresponding to the vehicle damage pictured in the annotated digital image;
      a type of the vehicle damage pictured in the annotated digital image; and
      an intensity of the vehicle damage pictured in the annotated digital image;
    converting the set of annotated digital images to annotated greyscale digital images;
    bifurcating the set of annotated greyscale digital images into a training subset and a validation subset;
    training the vehicle damage detection neural network model using the training subset, wherein after said training the vehicle damage detection neural network model is trained to, without the use of a reference image of an undamaged vehicle, for a preprocessed user provided digital image picturing vehicle damage:
      determine a location of the vehicle damage pictured in the user provided digital image;
      determine a repair part corresponding to the vehicle damage pictured in the user provided digital image;
      determine a type of the vehicle damage pictured in the user provided digital image; and
      determine an intensity of the vehicle damage pictured in the user provided digital image; and
    determining an accuracy of the vehicle damage detection neural network model using the validation subset based, at least in part, on losses determined based on the one or more annotations.
  • 2. The computer-implemented method of claim 1, wherein the type of the vehicle damage pictured in the annotated digital image comprises one or more of a dent, a scratch, a smash, and a break.
  • 3. The computer-implemented method of claim 1, wherein the intensity of the vehicle damage pictured in the annotated digital image comprises one or more of minor and severe.
  • 4. The computer-implemented method of claim 1, further comprising normalizing each annotated digital image of the set of annotated digital images prior to bifurcating the set of annotated digital images.
  • 5. The computer-implemented method of claim 1, further comprising augmenting the set of annotated greyscale digital images by modifying one or more of the annotated greyscale digital images and adding the one or more modified annotated greyscale digital images to the set of annotated greyscale digital images.
  • 6. The computer-implemented method of claim 1, wherein determining the accuracy of the vehicle damage detection neural network model using the validation subset comprises determining an overall loss based on the losses corresponding to each annotated greyscale digital image of the validation subset applied to the damage detection neural network model.
  • 7. The computer-implemented method of claim 1, wherein the vehicle damage detection neural network model comprises a mask RCNN.
  • 8. A computer-implemented method of training a repair cost estimation neural network model, comprising:
    collecting a set of historical data for damaged vehicles, each data point in the set comprising a damage type, a damage intensity, a used repair part, and a repair cost;
    bifurcating the set of historical data into a training subset and a validation subset;
    converting images picturing vehicle damage to greyscale digital images, each data point in the set corresponding to a respective greyscale digital image;
    training the repair cost estimation neural network model using the training subset and the grayscale digital images, wherein after said training the repair cost estimation neural network model is trained to determine an estimated cost to repair vehicle damage based on a received digital image picturing the vehicle damage or on a received damage type, a received damage intensity, and a received repair part to make a repair; and
    determining an accuracy of the repair cost estimation neural network model using the validation subset based, at least in part, on losses determined based on the one or more annotations.
  • 9. The computer-implemented method of claim 8, wherein the damage type of each data point in the set of historical data comprises one or more of a dent, a scratch, a smash, and a break.
  • 10. The computer-implemented method of claim 8, wherein the damage intensity of each data point in the set of historical data comprises one or more of minor and severe.
  • 11. The computer-implemented method of claim 8, further comprising standardizing the set of historical data by scaling to unit variance.
  • 12. The computer-implemented method of claim 8, wherein the repair cost estimation neural network model comprises an LSTM RNN.
  • 13. The computer-implemented method of claim 8, further comprising defining a learning rate for training the repair cost estimation neural network model.
  • 14. The computer-implemented method of claim 8, wherein each item in the set of historical data further comprises a geographic indication; and wherein after said training the repair cost estimation neural network model can further determine the estimated cost to repair damage based on a received geographic indication.
  • 15. A computer-implemented method for providing vehicle repair information to a user device, comprising:
    receiving, from the user device, a user provided digital image of a vehicle that pictures vehicle damage of the vehicle;
    generating a preprocessed user provided digital image from the user provided digital image;
    providing, to a damage detection neural network model trained with a training subset of grayscale images picturing vehicle damage and verified for accuracy with a validation subset of the grayscale images picturing the vehicle damage based on losses determined based, at least partially, on one or more annotations corresponding to the grayscale images, the preprocessed user provided digital image, wherein the damage detection neural network model is trained to, without a use of a reference image of an undamaged vehicle, determine a location of the vehicle damage, a repair part corresponding to the vehicle damage, a type of the vehicle damage, and an intensity of the vehicle damage using the preprocessed user provided digital image;
    receiving, from the damage detection neural network model, the location of the vehicle damage, the repair part corresponding to the vehicle damage, the type of the vehicle damage, the intensity of the vehicle damage;
    providing, to a repair cost estimation neural network model, the repair part corresponding to the vehicle damage, the type of the vehicle damage, and the intensity of the vehicle damage, wherein the repair cost estimation neural network model is trained to determine an estimated cost to repair the vehicle damage based on the repair part corresponding to the vehicle damage, the type of the vehicle damage, and the intensity of the vehicle damage;
    receiving, from the repair cost estimation neural network model, the estimated cost to repair the vehicle damage; and
    providing, to the user device, results data comprising an estimated cost to repair the vehicle damage and segmentation data overlaid on the user provided digital image indicating the location of the vehicle damage.
  • 16. The computer-implemented method of claim 15, wherein generating a preprocessed user provided digital image from the user provided digital image comprises normalizing the user provided digital image.
  • 17. The computer-implemented method of claim 16, wherein generating a preprocessed user provided digital image from the user provided digital image further comprises grey scaling the user provided digital image.
  • 18. The computer-implemented method of claim 15, wherein the results data further comprises the type of the vehicle damage.
  • 19. The computer-implemented method of claim 15, wherein the results data further comprises the intensity of the vehicle damage.
  • 20. The computer-implemented method of claim 15, wherein the results data further comprises the repair part associated with the vehicle damage.
US Referenced Citations (602)
Number Name Date Kind
3792445 Bucks et al. Feb 1974 A
4258421 Juhasz et al. Mar 1981 A
4992940 Dworkin Feb 1991 A
5003476 Abe Mar 1991 A
5034889 Abe Jul 1991 A
5058044 Stewart et al. Oct 1991 A
5421015 Khoyi et al. May 1995 A
5442553 Parrillo Aug 1995 A
5452446 Johnson Sep 1995 A
5521815 Rose, Jr. May 1996 A
5649186 Ferguson Jul 1997 A
5694595 Jacobs et al. Dec 1997 A
5729452 Smith et al. Mar 1998 A
5764943 Wechsler Jun 1998 A
5787177 Leppek Jul 1998 A
5790785 Klug et al. Aug 1998 A
5835712 Dufresne Nov 1998 A
5845299 Arora et al. Dec 1998 A
5862346 Kley et al. Jan 1999 A
5911145 Arora et al. Jun 1999 A
5956720 Fernandez et al. Sep 1999 A
5974149 Leppek Oct 1999 A
5974418 Blinn et al. Oct 1999 A
5974428 Gerard et al. Oct 1999 A
5978776 Seretti et al. Nov 1999 A
5987506 Carter et al. Nov 1999 A
6003635 Bantz et al. Dec 1999 A
6006201 Berent et al. Dec 1999 A
6009410 Lemole et al. Dec 1999 A
6018748 Smith Jan 2000 A
6021416 Dauerer et al. Feb 2000 A
6021426 Douglis et al. Feb 2000 A
6026433 D'Arlach et al. Feb 2000 A
6041310 Green et al. Mar 2000 A
6041344 Bodamer et al. Mar 2000 A
6055541 Solecki et al. Apr 2000 A
6061698 Chadha et al. May 2000 A
6067559 Allard et al. May 2000 A
6070164 Vagnozzi May 2000 A
6134532 Lazarus et al. Oct 2000 A
6151609 Truong Nov 2000 A
6178432 Cook et al. Jan 2001 B1
6181994 Colson et al. Jan 2001 B1
6185614 Cuomo et al. Feb 2001 B1
6189104 Leppek Feb 2001 B1
6216129 Eldering Apr 2001 B1
6219667 Lu et al. Apr 2001 B1
6236994 Schwartz et al. May 2001 B1
6240365 Bunn May 2001 B1
6263268 Nathanson Jul 2001 B1
6285932 De Belledeuille et al. Sep 2001 B1
6289382 Bowman-Amuah Sep 2001 B1
6295061 Park et al. Sep 2001 B1
6330499 Chou et al. Dec 2001 B1
6343302 Graham Jan 2002 B1
6353824 Boguraev et al. Mar 2002 B1
6356822 Diaz et al. Mar 2002 B1
6374241 Lamburt et al. Apr 2002 B1
6397226 Sage May 2002 B1
6397336 Leppek May 2002 B2
6401103 Ho et al. Jun 2002 B1
6421733 Tso et al. Jul 2002 B1
6473849 Keller et al. Oct 2002 B1
6496855 Hunt et al. Dec 2002 B1
6505106 Lawrence et al. Jan 2003 B1
6505205 Kothuri et al. Jan 2003 B1
6519617 Wanderski et al. Feb 2003 B1
6529948 Bowman-Amuah Mar 2003 B1
6535879 Behera Mar 2003 B1
6539370 Chang et al. Mar 2003 B1
6546216 Mizoguchi et al. Apr 2003 B2
6553373 Boguraev et al. Apr 2003 B2
6556904 Larson et al. Apr 2003 B1
6564216 Waters May 2003 B2
6571253 Thompson et al. May 2003 B1
6581061 Graham Jun 2003 B2
6583794 Wattenberg Jun 2003 B1
6594664 Estrada et al. Jul 2003 B1
6606525 Muthuswamy et al. Aug 2003 B1
6629148 Ahmed et al. Sep 2003 B1
6640244 Bowman-Amuah et al. Oct 2003 B1
6643663 Dabney et al. Nov 2003 B1
6654726 Hanzek Nov 2003 B1
6674805 Kovacevic et al. Jan 2004 B1
6678706 Fishel Jan 2004 B1
6697825 Underwood et al. Feb 2004 B1
6701232 Yamaki Mar 2004 B2
6721747 Lipkin Apr 2004 B2
6728685 Ahluwalia Apr 2004 B1
6738750 Stone et al. May 2004 B2
6744735 Nakaguro Jun 2004 B1
6748305 Klausner et al. Jun 2004 B1
6785864 Te et al. Aug 2004 B1
6795819 Wheeler et al. Sep 2004 B2
6823258 Ukai et al. Nov 2004 B2
6823359 Heidingsfeld Nov 2004 B1
6826594 Pettersen Nov 2004 B1
6847988 Toyouchi et al. Jan 2005 B2
6850823 Eun et al. Feb 2005 B2
6871216 Miller et al. Mar 2005 B2
6901430 Smith Mar 2005 B1
6894601 Grunden et al. May 2005 B1
6917941 Wight et al. Jul 2005 B2
6922674 Nelson Jul 2005 B1
6941203 Chen Sep 2005 B2
6944677 Zhao Sep 2005 B1
6954731 Montague et al. Oct 2005 B1
6963854 Boyd et al. Nov 2005 B1
6965806 Eryurek et al. Nov 2005 B2
6965968 Touboul Nov 2005 B1
6978273 Bonneau et al. Dec 2005 B1
6981028 Rawat et al. Dec 2005 B1
6990629 Heaney et al. Jan 2006 B1
6993421 Pillar Jan 2006 B2
7000184 Matveyenko et al. Feb 2006 B2
7003476 Samra et al. Feb 2006 B1
7010495 Samra et al. Mar 2006 B1
7028072 Kliger et al. Apr 2006 B1
7031554 Iwane Apr 2006 B2
7039704 Davis et al. May 2006 B2
7047318 Svedloff May 2006 B1
7062343 Ogushi et al. Jun 2006 B2
7062506 Taylor et al. Jun 2006 B2
7072943 Landesmann Jul 2006 B2
7092803 Kapolka et al. Aug 2006 B2
7107268 Zawadzki et al. Sep 2006 B1
7124116 Huyler Oct 2006 B2
7152207 Underwood et al. Dec 2006 B1
7155491 Schultz et al. Dec 2006 B1
7171418 Blessin Jan 2007 B2
7184866 Squires et al. Feb 2007 B2
7197764 Cichowlas Mar 2007 B2
7219234 Ashland et al. May 2007 B1
7240125 Fleming Jul 2007 B2
7246263 Skingle Jul 2007 B2
7281029 Rawat Oct 2007 B2
7287000 Boyd et al. Oct 2007 B2
7322007 Schowtka et al. Jan 2008 B2
7386786 Davis et al. Jun 2008 B2
7401289 Lachhwani et al. Jul 2008 B2
7406429 Salonen Jul 2008 B2
7433891 Haber et al. Oct 2008 B2
7457693 Olsen et al. Nov 2008 B2
7477968 Lowrey Jan 2009 B1
7480551 Lowrey et al. Jan 2009 B1
7496543 Bamford et al. Feb 2009 B1
7502672 Kolls Mar 2009 B1
7536641 Rosenstein et al. May 2009 B2
7548985 Guigui Jun 2009 B2
7587504 Adams et al. Sep 2009 B2
7590476 Shumate Sep 2009 B2
7593925 Cadiz et al. Sep 2009 B2
7593999 Nathanson Sep 2009 B2
7613627 Doyle et al. Nov 2009 B2
7620484 Chen Nov 2009 B1
7624342 Matveyenko et al. Nov 2009 B2
7657594 Banga et al. Feb 2010 B2
7664667 Ruppelt et al. Feb 2010 B1
7739007 Logsdon Jun 2010 B2
7747680 Ravikumar et al. Jun 2010 B2
7778841 Bayer et al. Aug 2010 B1
7801945 Geddes et al. Sep 2010 B1
7818380 Tamura et al. Oct 2010 B2
7861309 Spearman et al. Dec 2010 B2
7865409 Monaghan Jan 2011 B1
7870253 Muilenburg et al. Jan 2011 B2
7899701 Odom Mar 2011 B1
7908051 Oesterling Mar 2011 B2
7979506 Cole Jul 2011 B2
8010423 Bodin et al. Aug 2011 B2
8019501 Breed Sep 2011 B2
8036788 Breed Oct 2011 B2
8051159 Muilenburg et al. Nov 2011 B2
8055544 Ullman et al. Nov 2011 B2
8060274 Boss et al. Nov 2011 B2
8095403 Price Jan 2012 B2
8099308 Uyeki Jan 2012 B2
8135804 Uyeki Mar 2012 B2
8145379 Schwinke Mar 2012 B2
8190322 Lin et al. May 2012 B2
8209259 Graham, Jr. et al. Jun 2012 B2
8212667 Petite et al. Jul 2012 B2
8271473 Berg Sep 2012 B2
8271547 Taylor et al. Sep 2012 B2
8275717 Ullman et al. Sep 2012 B2
8285439 Hodges Oct 2012 B2
8296007 Swaminathan et al. Oct 2012 B2
8311905 Campbell et al. Nov 2012 B1
8355950 Colson et al. Jan 2013 B2
8407664 Moosmann et al. Mar 2013 B2
8428815 Van Engelshoven et al. Apr 2013 B2
8438310 Muilenburg et al. May 2013 B2
8448057 Sugnet May 2013 B1
8521654 Ford et al. Aug 2013 B2
8538894 Ullman et al. Sep 2013 B2
8645193 Swinson et al. Feb 2014 B2
8676638 Blair et al. Mar 2014 B1
8725341 Ogasawara May 2014 B2
8745641 Coker Jun 2014 B1
8849689 Jagannathan et al. Sep 2014 B1
8886389 Edwards et al. Nov 2014 B2
8924071 Stanek et al. Dec 2014 B2
8954222 Costantino Feb 2015 B2
8996230 Lorenz et al. Mar 2015 B2
8996235 Singh et al. Mar 2015 B2
9014908 Chen et al. Apr 2015 B2
9015059 Sims et al. Apr 2015 B2
9026304 Olsen, III et al. May 2015 B2
9047722 Kurnik et al. Jun 2015 B2
9122716 Naganathan et al. Sep 2015 B1
9165413 Jones et al. Oct 2015 B2
9183681 Fish Nov 2015 B2
9325650 Yalavarty et al. Apr 2016 B2
9349223 Palmer May 2016 B1
9384597 Koch et al. Jul 2016 B2
9455969 Cabrera et al. Sep 2016 B1
9477936 Lawson et al. Oct 2016 B2
9577866 Rogers et al. Feb 2017 B2
9596287 Rybak et al. Mar 2017 B2
9619945 Adderly et al. Apr 2017 B2
9659495 Modica et al. May 2017 B2
9706008 Rajan et al. Jul 2017 B2
9715665 Schondorf et al. Jul 2017 B2
9754304 Taira et al. Sep 2017 B2
9778045 Bang Oct 2017 B2
9836714 Lander et al. Dec 2017 B2
9983982 Kumar et al. Mar 2018 B1
10032139 Adderly et al. Jul 2018 B2
10083411 Kinsey et al. Sep 2018 B2
10169607 Sheth et al. Jan 2019 B1
10229394 Davis et al. Mar 2019 B1
10448120 Bursztyn et al. Oct 2019 B1
10475256 Chowdhury et al. Nov 2019 B2
10509696 Gilderman et al. Dec 2019 B1
10541938 Timmerman et al. Jan 2020 B1
10552871 Chadwick Feb 2020 B1
10657707 Leise May 2020 B1
11080105 Birkett et al. Aug 2021 B1
11117253 Oleynik Sep 2021 B2
11190608 Amar et al. Nov 2021 B2
11282041 Sanderford et al. Mar 2022 B2
11322247 Bullington et al. May 2022 B2
11392855 Murakonda Jul 2022 B1
11443275 Prakash et al. Sep 2022 B1
11468089 Bales et al. Oct 2022 B1
11507892 Henckel et al. Nov 2022 B1
20010005831 Lewin et al. Jun 2001 A1
20010014868 Herz et al. Aug 2001 A1
20010037332 Miller et al. Nov 2001 A1
20010039594 Park et al. Nov 2001 A1
20010054049 Maeda et al. Dec 2001 A1
20020023111 Arora et al. Feb 2002 A1
20020024537 Jones et al. Feb 2002 A1
20020026359 Long et al. Feb 2002 A1
20020032626 Dewolf et al. Mar 2002 A1
20020032701 Gao et al. Mar 2002 A1
20020042738 Srinivasan et al. Apr 2002 A1
20020046245 Hillar et al. Apr 2002 A1
20020049831 Platner et al. Apr 2002 A1
20020052778 Murphy et al. May 2002 A1
20020059260 Jas May 2002 A1
20020065698 Schick et al. May 2002 A1
20020065739 Florance et al. May 2002 A1
20020069110 Sonnenberg Jun 2002 A1
20020073080 Lipkin Jun 2002 A1
20020082978 Ghouri et al. Jun 2002 A1
20020091755 Narin Jul 2002 A1
20020107739 Schlee Aug 2002 A1
20020111727 Vanstory et al. Aug 2002 A1
20020111844 Vanstory et al. Aug 2002 A1
20020116197 Erten Aug 2002 A1
20020116418 Lachhwani et al. Aug 2002 A1
20020123359 Wei et al. Sep 2002 A1
20020124053 Adams et al. Sep 2002 A1
20020128728 Murakami et al. Sep 2002 A1
20020129054 Ferguson et al. Sep 2002 A1
20020133273 Lowrey et al. Sep 2002 A1
20020138331 Hosea et al. Sep 2002 A1
20020143646 Boyden et al. Oct 2002 A1
20020154146 Rodriquez et al. Oct 2002 A1
20020169851 Weathersby et al. Nov 2002 A1
20020173885 Lowrey et al. Nov 2002 A1
20020188869 Patrick Dec 2002 A1
20020196273 Krause Dec 2002 A1
20020198761 Ryan et al. Dec 2002 A1
20020198878 Baxter et al. Dec 2002 A1
20030014443 Bernstein et al. Jan 2003 A1
20030023632 Ries et al. Jan 2003 A1
20030033378 Needham et al. Feb 2003 A1
20030036832 Kokes et al. Feb 2003 A1
20030036964 Boyden et al. Feb 2003 A1
20030037263 Kamat et al. Feb 2003 A1
20030046179 Anabtawi et al. Mar 2003 A1
20030051022 Sogabe et al. Mar 2003 A1
20030055666 Roddy et al. Mar 2003 A1
20030061263 Riddle Mar 2003 A1
20030065532 Takaoka Apr 2003 A1
20030065583 Takaoka Apr 2003 A1
20030069785 Lohse Apr 2003 A1
20030069790 Kane Apr 2003 A1
20030074392 Campbell et al. Apr 2003 A1
20030095038 Dix May 2003 A1
20030101262 Godwin May 2003 A1
20030115292 Griffin et al. Jun 2003 A1
20030120502 Robb et al. Jun 2003 A1
20030145310 Thames et al. Jul 2003 A1
20030177050 Crampton et al. Sep 2003 A1
20030177175 Worley et al. Sep 2003 A1
20030225853 Wang et al. Dec 2003 A1
20030229623 Chang et al. Dec 2003 A1
20030233246 Snapp et al. Dec 2003 A1
20040012631 Skorski Jan 2004 A1
20040039646 Hacker Feb 2004 A1
20040041818 White et al. Mar 2004 A1
20040073546 Forster et al. Apr 2004 A1
20040073564 Haber et al. Apr 2004 A1
20040088228 Mercer et al. May 2004 A1
20040093243 Bodin et al. May 2004 A1
20040117046 Colle et al. Jun 2004 A1
20040122735 Meshkin et al. Jun 2004 A1
20040128320 Grove et al. Jul 2004 A1
20040139203 Graham, Jr. et al. Jul 2004 A1
20040148342 Cotte Jul 2004 A1
20040156020 Edwards Aug 2004 A1
20040163047 Nagahara et al. Aug 2004 A1
20040181464 Vanker et al. Sep 2004 A1
20040199413 Hauser et al. Oct 2004 A1
20040220863 Porter et al. Nov 2004 A1
20040225664 Casement Nov 2004 A1
20040230897 Latzel Nov 2004 A1
20040255233 Croney et al. Dec 2004 A1
20040267263 May Dec 2004 A1
20040268225 Walsh et al. Dec 2004 A1
20040268232 Tunning Dec 2004 A1
20050015491 Koeppel Jan 2005 A1
20050021197 Zimmerman et al. Jan 2005 A1
20050027611 Wharton Feb 2005 A1
20050043614 Huizenga Feb 2005 A1
20050065804 Worsham et al. Mar 2005 A1
20050096963 Myr et al. May 2005 A1
20050108112 Ellenson et al. May 2005 A1
20050114270 Hind et al. May 2005 A1
20050114764 Gudenkauf et al. May 2005 A1
20050108637 Sahota et al. Jun 2005 A1
20050149398 McKay Jul 2005 A1
20050171836 Leacy Aug 2005 A1
20050176482 Raisinghani et al. Aug 2005 A1
20050187834 Painter et al. Aug 2005 A1
20050198121 Daniels et al. Sep 2005 A1
20050228736 Norman et al. Oct 2005 A1
20050256755 Chand et al. Nov 2005 A1
20050267774 Merritt et al. Dec 2005 A1
20050268282 Laird Dec 2005 A1
20050289020 Bruns et al. Dec 2005 A1
20050289599 Matsuura et al. Dec 2005 A1
20060004725 Abraido-Fandino Jan 2006 A1
20060031811 Ernst et al. Feb 2006 A1
20060059253 Goodman et al. Mar 2006 A1
20060064637 Rechterman et al. Mar 2006 A1
20060123330 Horiuchi et al. Jun 2006 A1
20060129423 Sheinson et al. Jun 2006 A1
20060129982 Doyle Jun 2006 A1
20060136105 Larson Jun 2006 A1
20060161841 Horiuchi et al. Jul 2006 A1
20060200751 Underwood et al. Sep 2006 A1
20060224447 Koningstein Oct 2006 A1
20060248205 Randle et al. Nov 2006 A1
20060248442 Rosenstein et al. Nov 2006 A1
20060265355 Taylor Nov 2006 A1
20060271844 Suklikar Nov 2006 A1
20060277588 Harrington et al. Dec 2006 A1
20060282328 Gerace et al. Dec 2006 A1
20060282547 Hasha et al. Dec 2006 A1
20070005446 Fusz et al. Jan 2007 A1
20070016486 Stone et al. Jan 2007 A1
20070027754 Collins et al. Feb 2007 A1
20070033087 Combs et al. Feb 2007 A1
20070033520 Kimzey et al. Feb 2007 A1
20070053513 Hoffberg Mar 2007 A1
20070100519 Engel May 2007 A1
20070150368 Arora et al. Jun 2007 A1
20070209011 Padmanabhuni et al. Sep 2007 A1
20070226540 Konieczny Sep 2007 A1
20070250229 Wu Oct 2007 A1
20070250327 Hedy Oct 2007 A1
20070250840 Coker et al. Oct 2007 A1
20070271154 Broudy et al. Nov 2007 A1
20070271330 Mattox et al. Nov 2007 A1
20070271389 Joshi et al. Nov 2007 A1
20070282711 Ullman et al. Dec 2007 A1
20070282712 Ullman et al. Dec 2007 A1
20070282713 Ullman et al. Dec 2007 A1
20070288413 Mizuno et al. Dec 2007 A1
20070294192 Tellefsen Dec 2007 A1
20070299940 Gbadegesin et al. Dec 2007 A1
20080010561 Bay et al. Jan 2008 A1
20080015921 Libman Jan 2008 A1
20080015929 Koeppel et al. Jan 2008 A1
20080027827 Eglen et al. Jan 2008 A1
20080119983 Inbarajan et al. May 2008 A1
20080172632 Stambaugh Jul 2008 A1
20080189143 Wurster Aug 2008 A1
20080195435 Bentley et al. Aug 2008 A1
20080195932 Oikawa et al. Aug 2008 A1
20080201163 Barker et al. Aug 2008 A1
20080255925 Vailaya et al. Oct 2008 A1
20090012887 Taub et al. Jan 2009 A1
20090024918 Silverbrook et al. Jan 2009 A1
20090043780 Hentrich, Jr. et al. Feb 2009 A1
20090070435 Abhyanker Mar 2009 A1
20090089134 Uyeki Apr 2009 A1
20090106036 Tamura et al. Apr 2009 A1
20090112687 Blair et al. Apr 2009 A1
20090138329 Wanker May 2009 A1
20090182232 Zhang et al. Jul 2009 A1
20090187513 Noy et al. Jul 2009 A1
20090187939 Lajoie Jul 2009 A1
20090198507 Rhodus Aug 2009 A1
20090204454 Lagudi Aug 2009 A1
20090204655 Wendelberger Aug 2009 A1
20090222532 Finlaw Sep 2009 A1
20090265607 Raz et al. Oct 2009 A1
20090313035 Esser et al. Dec 2009 A1
20100011415 Cortes et al. Jan 2010 A1
20100023393 Costy et al. Jan 2010 A1
20100070343 Taira et al. Mar 2010 A1
20100082778 Muilenburg et al. Apr 2010 A1
20100082780 Muilenburg et al. Apr 2010 A1
20100088158 Pollack Apr 2010 A1
20100100259 Geiter Apr 2010 A1
20100100506 Marot Apr 2010 A1
20100131363 Sievert et al. May 2010 A1
20100235219 Merrick et al. Sep 2010 A1
20100235231 Jewer Sep 2010 A1
20100293030 Wu Nov 2010 A1
20100312608 Shan et al. Dec 2010 A1
20100318408 Sankaran et al. Dec 2010 A1
20100324777 Tominaga et al. Dec 2010 A1
20110010432 Uyeki Jan 2011 A1
20110015989 Tidwell et al. Jan 2011 A1
20110022525 Swinson et al. Jan 2011 A1
20110082804 Swinson et al. Apr 2011 A1
20110145064 Anderson et al. Jun 2011 A1
20110161167 Jallapuram Jun 2011 A1
20110191264 Inghelbrecht et al. Aug 2011 A1
20110196762 Dupont Aug 2011 A1
20110224864 Gellatly et al. Sep 2011 A1
20110231055 Knight et al. Sep 2011 A1
20110288937 Manoogian, III Nov 2011 A1
20110307296 Hall et al. Dec 2011 A1
20110307411 Bolivar et al. Dec 2011 A1
20120066010 Williams et al. Mar 2012 A1
20120089474 Xiao et al. Apr 2012 A1
20120095804 Calabrese et al. Apr 2012 A1
20120116868 Chin et al. May 2012 A1
20120158211 Chen et al. Jun 2012 A1
20120209714 Douglas et al. Aug 2012 A1
20120221125 Bell Aug 2012 A1
20120265648 Jerome et al. Oct 2012 A1
20120268294 Michaelis et al. Oct 2012 A1
20120278886 Luna Nov 2012 A1
20120284113 Pollak Nov 2012 A1
20120316981 Hoover et al. Dec 2012 A1
20130046432 Edwards et al. Feb 2013 A1
20130080196 Schroeder et al. Mar 2013 A1
20130080305 Virag et al. Mar 2013 A1
20130151334 Berkhin et al. Jun 2013 A1
20130151468 Wu et al. Jun 2013 A1
20130191445 Gayman et al. Jul 2013 A1
20130204484 Ricci Aug 2013 A1
20130226699 Long Aug 2013 A1
20130317864 Tofte Nov 2013 A1
20130325541 Capriotti et al. Dec 2013 A1
20130332023 Bertosa et al. Dec 2013 A1
20140012659 Yan Jan 2014 A1
20140026037 Garb et al. Jan 2014 A1
20140052327 Hosein et al. Feb 2014 A1
20140081675 Ives Mar 2014 A1
20140088866 Knapp et al. Mar 2014 A1
20140094992 Lambert et al. Apr 2014 A1
20140122178 Knight May 2014 A1
20140136278 Carvalho May 2014 A1
20140229207 Swamy Aug 2014 A1
20140229391 East et al. Aug 2014 A1
20140244110 Tharaldson et al. Aug 2014 A1
20140277906 Lowrey et al. Sep 2014 A1
20140278805 Thompson Sep 2014 A1
20140316825 Van Dijk et al. Oct 2014 A1
20140324275 Stanek et al. Oct 2014 A1
20140324536 Cotton Oct 2014 A1
20140331301 Subramani et al. Nov 2014 A1
20140337163 Whisnant Nov 2014 A1
20140337825 Challa et al. Nov 2014 A1
20140379530 Kim et al. Dec 2014 A1
20140379817 Logue et al. Dec 2014 A1
20150032546 Calman et al. Jan 2015 A1
20150057875 McGinnis et al. Feb 2015 A1
20150058151 Sims et al. Feb 2015 A1
20150066781 Johnson et al. Mar 2015 A1
20150066933 Kolodziej et al. Mar 2015 A1
20150100199 Kurnik et al. Apr 2015 A1
20150142256 Jones May 2015 A1
20150142535 Payne et al. May 2015 A1
20150207701 Faaborg et al. Jul 2015 A1
20150227894 Mapes, Jr. et al. Aug 2015 A1
20150242819 Moses et al. Aug 2015 A1
20150248761 Dong Sep 2015 A1
20150254591 Raskind Sep 2015 A1
20150268059 Borghesani et al. Sep 2015 A1
20150268975 Du et al. Sep 2015 A1
20150278886 Fusz Oct 2015 A1
20150286475 Vangelov et al. Oct 2015 A1
20150286979 Ming et al. Oct 2015 A1
20150290795 Oleynik Oct 2015 A1
20150334165 Arling et al. Nov 2015 A1
20160004516 Ivanov et al. Jan 2016 A1
20160059412 Oleynik Mar 2016 A1
20160071054 Kakarala et al. Mar 2016 A1
20160092944 Taylor et al. Mar 2016 A1
20160132935 Shen et al. May 2016 A1
20160140609 Demir May 2016 A1
20160140620 Pinkowish et al. May 2016 A1
20160140622 Wang et al. May 2016 A1
20160148439 Akselrod et al. May 2016 A1
20160162817 Grimaldi et al. Jun 2016 A1
20160179968 Ormseth et al. Jun 2016 A1
20160180358 Battista Jun 2016 A1
20160180378 Toshida et al. Jun 2016 A1
20160180418 Jaeger Jun 2016 A1
20160267503 Zakai-Or et al. Sep 2016 A1
20160275533 Smith et al. Sep 2016 A1
20160277510 Du et al. Sep 2016 A1
20160307174 Marcelle et al. Oct 2016 A1
20160335727 Jimenez Nov 2016 A1
20160337278 Peruri et al. Nov 2016 A1
20160357599 Glatfelter Dec 2016 A1
20160371641 Wilson et al. Dec 2016 A1
20170034547 Jain et al. Feb 2017 A1
20170039785 Richter et al. Feb 2017 A1
20170053460 Hauser et al. Feb 2017 A1
20170060929 Chesla et al. Mar 2017 A1
20170064038 Chen Mar 2017 A1
20170093700 Gilley et al. Mar 2017 A1
20170124525 Johnson et al. May 2017 A1
20170126848 George et al. May 2017 A1
20170206465 Jin Jul 2017 A1
20170262894 Kirti et al. Sep 2017 A1
20170293894 Taliwal et al. Oct 2017 A1
20170308844 Kelley Oct 2017 A1
20170308864 Kelley Oct 2017 A1
20170308865 Kelley Oct 2017 A1
20170316459 Strauss et al. Nov 2017 A1
20170337573 Toprak Nov 2017 A1
20170352054 Ma et al. Dec 2017 A1
20170359216 Naiden et al. Dec 2017 A1
20170364733 Estrada Dec 2017 A1
20180067932 Paterson et al. Mar 2018 A1
20180074864 Chen et al. Mar 2018 A1
20180095733 Torman et al. Apr 2018 A1
20180173806 Forstmann et al. Jun 2018 A1
20180204281 Painter et al. Jul 2018 A1
20180225710 Kar et al. Aug 2018 A1
20180232749 Moore, Jr. et al. Aug 2018 A1
20180285901 Zackrone Oct 2018 A1
20180285925 Zackrone Oct 2018 A1
20180300124 Malladi et al. Oct 2018 A1
20190028360 Douglas et al. Jan 2019 A1
20190073641 Utke Mar 2019 A1
20190114330 Xu et al. Apr 2019 A1
20190213426 Chen Jul 2019 A1
20190294878 Endras Sep 2019 A1
20190297162 Amar et al. Sep 2019 A1
20190334884 Ross et al. Oct 2019 A1
20200019388 Jaeger et al. Jan 2020 A1
20200038363 Kim Feb 2020 A1
20200050879 Zaman Feb 2020 A1
20200066067 Herman Feb 2020 A1
20200118365 Wang Apr 2020 A1
20200177476 Agarwal et al. Jun 2020 A1
20200327371 Sharma et al. Oct 2020 A1
20210072976 Chintagunta et al. Mar 2021 A1
20210090694 Colley et al. Mar 2021 A1
20210157562 Sethi et al. May 2021 A1
20210184780 Yang et al. Jun 2021 A1
20210224975 Ranca Jul 2021 A1
20210240657 Kumar et al. Aug 2021 A1
20210256616 Hayward Aug 2021 A1
20210287106 Jerram Sep 2021 A1
20210303644 Shear Sep 2021 A1
20210350334 Ave et al. Nov 2021 A1
20210359940 Shen et al. Nov 2021 A1
20220020086 Kuchenbecker et al. Jan 2022 A1
20220028928 Seo et al. Jan 2022 A1
20220046105 Amar et al. Feb 2022 A1
20220172723 Tendolkar et al. Jun 2022 A1
20220191663 Karpoor et al. Jun 2022 A1
20220208319 Ansari et al. Jun 2022 A1
20220237084 Bhagi et al. Jul 2022 A1
20220237171 Bailey et al. Jul 2022 A1
20220293107 Leaman et al. Sep 2022 A1
20220300735 Kelly et al. Sep 2022 A1
20230214892 Christian et al. Jul 2023 A1
Foreign Referenced Citations (3)
Number Date Country
2494350 May 2004 CA
0461888 Mar 1995 EP
2007002759 Jan 2007 WO
Non-Patent Literature Citations (185)
Entry
http://web.archive.org/web/20010718130244/http://chromedata.com/maing2/about/index.asp, 1 pg.
http://web.archive.org/web/20050305055408/http://www.dealerclick.com/, 1 pg.
http://web.archive.org/web/20050528073821/http://www.kbb.com/, 1 pg.
http://web.archive.org/web/20050531000823/http://www.carfax.com/, 1 pg.
Internet Archive Dan Gillmor Sep. 1, 1996.
Internet Archive Wayback Machine, archive of LDAP Browser.com—FAQ. Archived Dec. 11, 2000. Available at <http://web.archive.org/web/200012110152/http://www.ldapbrowser.com/faq/faq.php3?sID=fe4ae66f023d86909f35e974f3a1ce>.
Internet Archive Wayback Machine, archive of LDAP Browser.com—Product Info. Archived Dec. 11, 2000. Available at <http://web.archive.org/web/200012110541/http://www.ldapbrowser.com/prodinfo/prodinfo.php3?sID=fe4ae66f2fo23d86909f35e974f3a1ce>.
Internet Archive: Audio Archive, http://www.archive.org/audio/audio-searchresults.php?search=@start=0&limit=100&sort=ad, printed May 12, 2004, 12 pgs.
Internet Archive: Democracy Now, http://www.archive.org/audio/collection.php?collection=democracy_now, printed May 12, 2004, 2 pgs.
Java 2 Platform, Enterprise Edition (J2EE) Overview, printed Mar. 6, 2010, 3 pgs.
Java version history—Wikipedia, the free encyclopedia, printed Mar. 6, 2010, 9 pgs.
Permissions in the Java™ 2 SDK, printed Mar. 6, 2010, 45 pgs.
Trademark Application, Serial No. 76375405. 13 pages of advertising material and other application papers enclosed. Available from Trademark Document Retrieval system at.
Trademark Electronic Search System record for Serial No. 76375405, Word Mark “NITRA”.
“An Appointment with Destiny—The Time for Web-Enabled Scheduling has Arrived”, Link Fall, 2007, 2 pages.
“How a Solution found a Problem of Scheduling Service Appointments”, Automotive News, 2016, 4 pages.
“IBM Tivoli Access Manager Base Administration Guide”, Version 5.1. International Business Machines Corporation. Entire book enclosed and cited., 2003, 402 pgs.
“NetFormx Offers Advanced Network Discovery Software”, PR Newswire. Retrieved from <http://www.highbeam.com/doc/1G1-54102907.html>, Mar. 15, 1999.
“Openbay Announces First-of-its-Kind Connected Car Repair Service”, openbay.com, Mar. 31, 2015, 14 pages.
“Service Advisor”, Automotive Dealership Institute, 2007, 26 pages.
“xTime.com Web Pages”, Jan. 8, 2015, 1 page.
“xTimes Newsletter”, vol. 7, 2013, 4 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, Dec. 26, 2008, 13 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, Feb. 6, 2006, 11 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, Jul. 22, 2009, 22 pages.
U.S. Appl. No. 10/350,795 , Final Office Action, Jul. 6, 2011, 26 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, Jun. 29, 2006, 11 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, Mar. 12, 2007, 10 pages.
U.S. Appl. No. 10/350,795 , Final Office Action, Mar. 3, 2010, 24 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, May 29, 2008, 10 pages.
U.S. Appl. No. 10/350,795 , Notice of Allowance, May 7, 2012, 15 pages.
U.S. Appl. No. 10/350,795 , Non-Final Office Action, Nov. 1, 2010, 19 pages.
U.S. Appl. No. 10/350,796 , Notice of Allowance, Feb. 1, 2006, 5 pages.
U.S. Appl. No. 10/350,796 , Non-Final Office Action, May 19, 2005, 7 pages.
U.S. Appl. No. 10/350,810 , Notice of Allowance, Apr. 14, 2008, 6 pages.
U.S. Appl. No. 10/350,810 , Non-Final Office Action, Apr. 17, 2007, 12 pages.
U.S. Appl. No. 10/350,810 , Final Office Action, Apr. 5, 2005, 12 pages.
U.S. Appl. No. 10/350,810 , Notice of Non-compliant Amendment, Dec. 12, 2006.
U.S. Appl. No. 10/350,810 , Non-Final Office Action, Dec. 9, 2005, 14 pages.
U.S. Appl. No. 10/350,810 , Final Office Action, May 18, 2006, 15 pages.
U.S. Appl. No. 10/350,810 , Final Office Action, Nov. 14, 2007, 13 pages.
U.S. Appl. No. 10/350,810 , Non-Final Office Action, Sep. 22, 2004, 10 pages.
U.S. Appl. No. 10/351,465 , Non-Final Office Action, Jul. 27, 2004, 9 pages.
U.S. Appl. No. 10/351,465 , Final Office Action, May 5, 2005, 8 pages.
U.S. Appl. No. 10/351,465 , Notice of Allowance, Sep. 21, 2005, 4 pages.
U.S. Appl. No. 10/351,606 , Notice of Allowance, Apr. 4, 2006, 12 pages.
U.S. Appl. No. 10/351,606 , Non-final Office Action, May 17, 2004, 5 pages.
U.S. Appl. No. 10/351,606 , Non-final Office Action, Dec. 19, 2005, 7 pages.
U.S. Appl. No. 10/665,899 , Non-Final Office Action, Sep. 17, 2007, 11 pages.
U.S. Appl. No. 10/665,899 , Non-Final Office Action, Aug. 30, 2010, 23 pages.
U.S. Appl. No. 16/951,833 , Notice of Allowance, Jun. 16, 2021, 14 pages.
Hu, Bo , “A Platform based Distributed Service Framework for Large-scale Cloud Ecosystem Development”, IEEE Computer Society, 2015, 8 pages.
U.S. Appl. No. 15/478,042, Non-Final Office Action, Nov. 19, 2021, 45 pages.
U.S. Appl. No. 16/911,154 , Final Office Action, Mar. 28, 2022, 17 pages.
U.S. Appl. No. 17/156,254, Non-Final Office Action, Feb. 25, 2022, 18 pages.
U.S. Appl. No. 13/025,019 , Non-Final Office Action, Oct. 6, 2017.
U.S. Appl. No. 13/025,019 , Final Office Action, Sep. 12, 2013, 13 pages.
U.S. Appl. No. 13/025,019 , Non-Final Office Action, Sep. 18, 2014, 15 pages.
U.S. Appl. No. 13/025,019 , Notice of Allowance, Sep. 26, 2019, 9 pages.
U.S. Appl. No. 14/208,042 , Final Office Action, Apr. 16, 2018.
U.S. Appl. No. 14/208,042 , Non-Final Office Action, Aug. 21, 2020, 13 pages.
U.S. Appl. No. 14/208,042 , Final Office Action, Dec. 6, 2016, 26 pages.
Chen, Deren , "Business to Business Standard and Supply Chain System Framework in Virtual Enterprises", Computer Supported Cooperative Work in Design, The Sixth International Conference on, 2001, pp. 472-476.
U.S. Appl. No. 14/208,042 , Final Office Action, Jan. 11, 2019, 16 pages.
U.S. Appl. No. 14/208,042 , Advisory Action, Jul. 12, 2018.
U.S. Appl. No. 14/208,042 , Non-Final Office Action, Jun. 30, 2016, 23 pages.
U.S. Appl. No. 14/208,042 , Notice of Allowance, May 6, 2021, 13 pages.
U.S. Appl. No. 14/208,042 , Non-Final Office Action, Sep. 20, 2017.
U.S. Appl. No. 14/208,042 , Non-Final Office Action, Sep. 21, 2018.
U.S. Appl. No. 15/134,779 , Final Office Action, Feb. 27, 2020, 18 pages.
U.S. Appl. No. 15/134,779 , Non-Final Office Action, Jan. 30, 2019, 26 pages.
U.S. Appl. No. 15/134,779 , Advisory Action, Jul. 29, 2019, 6 pages.
U.S. Appl. No. 15/134,779 , Final Office Action, May 17, 2019, 25 pages.
U.S. Appl. No. 15/134,779 , Non-Final Office Action, Nov. 19, 2019, 27 pages.
U.S. Appl. No. 15/134,779 , Notice of Allowance, Sep. 9, 2020, 12 pages.
U.S. Appl. No. 15/134,793 , Non-Final Office Action, Jan. 30, 2019, 26 pages.
U.S. Appl. No. 15/134,793 , Advisory Action, Jul. 29, 2019, 6 pages.
U.S. Appl. No. 15/134,793 , Final Office Action, Mar. 27, 2020, 22 pages.
U.S. Appl. No. 15/134,793 , Final Office Action, May 13, 2019, 26 pages.
U.S. Appl. No. 15/134,793 , Non-Final Office Action, Nov. 19, 2019, 31 pages.
U.S. Appl. No. 15/134,793 , Notice of Allowance, Nov. 2, 2020, 13 pages.
U.S. Appl. No. 15/134,820 , Non-Final Office Action, Feb. 23, 2018.
U.S. Appl. No. 15/134,820 , Notice of Allowance, Jan. 28, 2019, 7 pages.
U.S. Appl. No. 15/134,820 , Final Office Action, Sep. 21, 2018.
U.S. Appl. No. 15/478,042 , Non-Final Office Action, Aug. 4, 2020, 42 pages.
U.S. Appl. No. 15/478,042 , Final Office Action, Mar. 19, 2020, 35 pages.
U.S. Appl. No. 15/478,042 , Final Office Action, May 5, 2021, 38 pages.
U.S. Appl. No. 15/478,042 , Non-Final Office Action, Oct. 10, 2019, 26 pages.
U.S. Appl. No. 15/478,048 , Final Office Action, Apr. 9, 2020, 42 pages.
U.S. Appl. No. 15/478,048 , Non-Final Office Action, Mar. 8, 2021, 69 pages.
U.S. Appl. No. 15/478,048 , Non-Final Office Action, Sep. 30, 2019, 30 pages.
U.S. Appl. No. 15/602,999 , Notice of Allowance, Apr. 18, 2019, 6 pages.
U.S. Appl. No. 15/602,999 , Advisory Action, Jan. 31, 2019, 3 pages.
U.S. Appl. No. 15/602,999 , Non-Final Office Action, May 3, 2018.
U.S. Appl. No. 15/602,999 , Final Office Action, Nov. 21, 2018.
U.S. Appl. No. 16/041,552 , Final Office Action, Apr. 27, 2021, 23 pages.
U.S. Appl. No. 16/041,552 , Non-Final Office Action, Dec. 27, 2019, 13 pages.
U.S. Appl. No. 16/041,552 , Final Office Action, May 29, 2020, 18 pages.
U.S. Appl. No. 16/041,552 , Non-Final Office Action, Sep. 17, 2020, 16 pages.
U.S. Appl. No. 16/951,833 , Non-Final Office Action, Feb. 4, 2021, 10 pages.
Aloisio, Giovanni , et al., “Web-based access to the Grid using the Grid Resource Broker portal”, Google, 2002, pp. 1145-1160.
Anonymous , “Software ready for prime time”, Automotive News. Detroit, vol. 76, Issue 5996, Nov. 5, 2001, p. 28.
Bedell, Doug , Dallas Morning News, “I Know Someone Who Knows Kevin Bacon”. Oct. 27, 1998. 4 pgs.
Chadwick, D.W. , “Understanding X.500—The Directory”, Available at <http://sec.cs.kent.ac.uk/x500book/>. Entire work cited., 1996.
Chatterjee, Pallab , et al., “On-board diagnostics not just for racing anymore”, EDN.com, May 6, 2013, 7 pages.
Grelck, Clemens , "A Multithreaded Compiler Backend for High-Level Array Programming", 2003.
CNY Business Journal , “Frank La Voila named Southern Tier Small-Business Person of 1999”, Jun. 11, 1999, 2 pages.
Croswell, Wayne , "Service Shop Optimization", Modern Tire Retailer, May 21, 2013, 7 pages.
Davis, Peter T., et al., “Sams Teach Yourself Microsoft Windows NT Server 4 in 21 Days”, Sams® Publishing, ISBN: 0-672-31555-6, 1999, printed Dec. 21, 2008, 15 pages.
Derfler, Frank J., et al., “How Networks Work: Millennium Edition”, Que, A Division of Macmillan Computer Publishing, ISBN: 0-7897-2445-6, 2000, 9 pages.
Drawbaugh, Ben , "Automatic Link Review: an expensive way to learn better driving habits", Engadget.com, Nov. 26, 2013, 14 pages.
Emmanuel, Daniel , “Basics to Creating an Appointment System for Automotive Service Customers”, Automotiveservicemanagement.com, 2006, 9 pages.
Hogue , et al., “Thresher: Automating the Unwrapping of Semantic Content from the World Wide Web”, ACM, 2005, pp. 86-95.
Housel, Barron C., et al., “WebExpress: A client/intercept based system for optimizing Web browsing in a wireless environment”, Google, 1998, pp. 419-431.
Interconnection , In Roget's II The New Thesaurus. Boston, MA: Houghton Mifflin http://www.credoreference.com/entry/hmrogets/interconnection, 2003, Retrieved Jul. 16, 2009, 1 page.
Jenkins, Will , "Real-time vehicle performance monitoring with data integrity", A Thesis Submitted to the Faculty of Mississippi State University, Oct. 2006, 57 pages.
Johns, Pamela , et al., “Competitive intelligence in service marketing”, Marketing Intelligence & Planning, vol. 28, No. 5, 2010, pp. 551-570.
Lavrinc, Damon , “First Android-powered infotainment system coming to 2012 Saab 9-3”, Autoblog.com, Mar. 2, 2011, 8 pages.
Lee, Adam J., et al., “Searching for Open Windows and Unlocked Doors: Port Scanning in Large-Scale Commodity Clusters”, Cluster Computing and the Grid, 2005. IEEE International Symposium on vol. 1, 2005, pp. 146-151.
Michener, J.R. , et al., “Managing System and Active-Content Integrity”, Computer; vol. 33, Issue: 7, 2000, pp. 108-110.
Milic-Frayling, Natasa , et al., “SmartView: Enhanced Document Viewer for Mobile Devices”, Google, Nov. 15, 2002, 11 pages.
Needham, Charlie , “Google Now Taking Appointments for Auto Repair Shops”, Autoshopsolutions.com, Aug. 25, 2015, 6 pages.
Open Bank Project , https://www.openbankproject.com/, retrieved Nov. 23, 2020, 10 pages.
openbay.com Web Pages , Openbay.com, retrieved from archive.org May 14, 2019, Apr. 2015, 6 pages.
openbay.com Web Pages , Openbay.com, retrieved from archive.org on May 14, 2019, Feb. 2014, 2 pages.
openbay.com Web Pages , Openbay.com, retrieved from archive.org, May 14, 2019, Mar. 2015, 11 pages.
Phelan, Mark , “Smart phone app aims to automate car repairs”, Detroit Free Press Auto Critic, Mar. 31, 2015, 2 pages.
Pubnub Staff , “Streaming Vehicle Data in Realtime with Automatic (Pt 1)”, Pubnub.com, Aug. 17, 2015, 13 pages.
Standards for Technology in Auto , https://www.starstandard.org/, retrieved Nov. 23, 2020, 4 pages.
Strebe, Matthew , et al., MCSE: NT Server 4 Study Guide, Third Edition. Sybex Inc. Front matter, 2000, pp. 284-293, and 308-347.
Warren, Tamara , “This Device Determines What Ails Your Car and Finds a Repair Shop—Automatically”, CarAndDriver.com, Apr. 8, 2015, 7 pages.
You, Song , et al., "Overview of Remote Diagnosis and Maintenance for Automotive Systems", 2005 SAE World Congress, Apr. 11-14, 2005, 10 pages.
U.S. Appl. No. 10/665,899 , Final Office Action, Feb. 24, 2010, 22 pages.
U.S. Appl. No. 10/665,899 , Final Office Action, Jul. 7, 2008, 11 pages.
U.S. Appl. No. 10/665,899 , Final Office Action, Mar. 8, 2011, 21 pages.
U.S. Appl. No. 10/665,899 , Final Office Action, May 11, 2009, 14 pages.
U.S. Appl. No. 10/665,899 , Non-Final Office Action, Nov. 13, 2008, 11 pages.
U.S. Appl. No. 10/665,899 , Non-Final Office Action, Sep. 14, 2009, 14 pages.
U.S. Appl. No. 11/149,909 , Final Office Action, Feb. 4, 2009, 14 pages.
U.S. Appl. No. 11/149,909 , Non-Final Office Action, May 13, 2008, 14 pages.
U.S. Appl. No. 11/149,909 , Non-Final Office Action, May 6, 2009, 6 pages.
U.S. Appl. No. 11/149,909 , Notice of Allowance, Sep. 16, 2009, 7 pages.
U.S. Appl. No. 11/414,939 , Non-Final Office Action, Jul. 19, 2010, 7 pages.
U.S. Appl. No. 11/414,939 , Non-Final Office Action, Mar. 9, 2010, 11 pages.
U.S. Appl. No. 11/414,939 , Notice of Allowance, Nov. 2, 2010.
U.S. Appl. No. 11/442,821 , Final Office Action, Apr. 7, 2009, 19 pages.
U.S. Appl. No. 11/442,821 , Notice of Allowance, Jul. 30, 2012, 6 pages.
U.S. Appl. No. 11/442,821 , Non-Final Office Action, Jun. 1, 2011, 23 pages.
U.S. Appl. No. 11/442,821 , Final Office Action, May 21, 2010, 28 pages.
U.S. Appl. No. 11/442,821 , Non-Final Office Action, Nov. 12, 2009, 19 pages.
U.S. Appl. No. 11/442,821 , Final Office Action, Nov. 29, 2011, 26 pages.
U.S. Appl. No. 11/442,821 , Non-Final Office Action, Sep. 3, 2008, 14 pages.
U.S. Appl. No. 11/446,011 , Notice of Allowance, Aug. 9, 2011, 10 pages.
U.S. Appl. No. 11/446,011 , Final Office Action, Jun. 8, 2010, 12 pages.
U.S. Appl. No. 11/446,011 , Non-Final Office Action, Mar. 1, 2011, 15 pages.
U.S. Appl. No. 11/446,011 , Non-Final Office Action, Nov. 27, 2009, 14 pages.
U.S. Appl. No. 11/524,602 , Notice of Allowance, Aug. 6, 2013, 22 pages.
U.S. Appl. No. 11/524,602 , Non-Final Office Action, Dec. 11, 2009, 20 pages.
U.S. Appl. No. 11/524,602 , Final Office Action, Jul. 27, 2010, 13 pages.
U.S. Appl. No. 11/524,602 , Final Office Action, Jun. 26, 2012, 11 pages.
U.S. Appl. No. 11/524,602 , Non-Final Office Action, Nov. 14, 2011, 19 pages.
U.S. Appl. No. 11/525,009 , Non-Final Office Action, Aug. 10, 2011, 18 pages.
U.S. Appl. No. 11/525,009 , Final Office Action, Aug. 3, 2010, 16 pages.
U.S. Appl. No. 11/525,009 , Non-Final Office Action, Dec. 16, 2009, 20 pages.
U.S. Appl. No. 11/525,009 , Notice of Allowance, Jul. 23, 2012, 19 pages.
U.S. Appl. No. 12/243,852 , Restriction Requirement, Dec. 7, 2010.
U.S. Appl. No. 12/243,852 , Notice of Allowance, Feb. 27, 2013, 6 pages.
U.S. Appl. No. 12/243,852 , Non-Final Office Action, Jan. 16, 2013, 5 pages.
U.S. Appl. No. 12/243,852 , Non-Final Office Action, Mar. 17, 2011, 8 pages.
U.S. Appl. No. 12/243,852 , Supplemental Notice of Allowability, Mar. 19, 2013, 3 pages.
U.S. Appl. No. 12/243,852 , Final Office Action, Oct. 24, 2011, 13 pages.
U.S. Appl. No. 12/243,855 , Notice of Allowance, Nov. 22, 2010, 10 pages.
U.S. Appl. No. 12/243,855 , Non-Final Office Action, Oct. 14, 2010, 6 pages.
U.S. Appl. No. 12/243,855 , Notice of Allowance, Oct. 28, 2010, 5 pages.
U.S. Appl. No. 12/243,861 , Final Office Action, Jun. 22, 2011, 5 pages.
U.S. Appl. No. 12/243,861 , Non-Final Office Action, Nov. 8, 2010, 8 pgs.
U.S. Appl. No. 12/243,861 , Notice of Allowance, Sep. 6, 2011, 10 pgs.
U.S. Appl. No. 13/025,019 , Non-Final Office Action, Apr. 22, 2016, 16 pages.
U.S. Appl. No. 13/025,019 , Non-Final Office Action, Apr. 5, 2013, 15 pages.
U.S. Appl. No. 13/025,019 , Final Office Action, Aug. 28, 2015, 25 pages.
U.S. Appl. No. 13/025,019 , Final Office Action, Dec. 20, 2016, 16 pages.
U.S. Appl. No. 13/025,019 , Final Office Action, Jul. 13, 2018, 11 pages.
U.S. Appl. No. 15/478,048 , Final Office Action, Sep. 17, 2021, 32 pages.
U.S. Appl. No. 16/041,552 , Notice of Allowance, Sep. 30, 2021, 17 pages.
U.S. Appl. No. 16/911,154 , Non-Final Office Action, Sep. 16, 2021, 15 pages.
Related Publications (1)
Number Date Country
20220148050 A1 May 2022 US