System and method for assessing device usage

Information

  • Patent Grant
  • Patent Number
    11,775,010
  • Date Filed
    Monday, December 2, 2019
  • Date Issued
    Tuesday, October 3, 2023
Abstract
A method for monitoring device usage includes: receiving user device information; based on the user device information, determining a state associated with the user device; and based on the state of the user device, determining a set of tap parameters. Additionally or alternatively, the method can include any or all of: determining a risk score based at least on the set of tap parameters; determining an output based on the risk score; and/or any other suitable processes performed in any suitable order.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. application Ser. No. 14/566,408, filed 10 Dec. 2014, and U.S. application Ser. No. 15/835,284, filed 7 Dec. 2017, each of which is incorporated in its entirety by this reference.


TECHNICAL FIELD

This invention relates generally to the vehicular activity monitoring field, and more specifically to a new and useful system and method for monitoring device usage by a user in the vehicular activity monitoring field.


BACKGROUND

A user's driving behavior, such as the riskiness of their driving behavior, is dependent upon many factors. A large factor in recent years has been device usage while driving, such as texting, calling, engaging an application, and various other behaviors. While some conventional systems and methods can approximate that a mobile device is in a vehicle, there are various different ways in which the user can interact with the device, each of which can be associated with vastly different levels of risk.


Thus, there is a need in the vehicle monitoring field to create a new and useful system and method for assessing and properly classifying mobile device usage associated with a driver of a vehicle.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 is a schematic representation of a method 100 for assessing device usage.



FIG. 2 is a schematic representation of a system 200 for assessing device usage.



FIGS. 3A-3C depict a variation of detecting user interaction with a secondary client application.



FIGS. 4A-4C depict a variation of different types of device usage.



FIG. 5 depicts a variation of a method 100 for assessing device usage.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.


1. Overview.


As shown in FIG. 1, a method 100 for monitoring device usage includes: receiving user device information S110; based on the user device information, determining a state associated with the user device S120; and based on the state of the user device, determining a set of tap parameters and/or a device classification S130. Additionally or alternatively, the method 100 can include any or all of: determining a risk score S140; determining an output based on the risk score S150; and/or any other suitable processes performed in any suitable order.


As shown in FIG. 2, a system 200 for monitoring device usage includes: a client application executing on a user device. Additionally or alternatively, the system 200 can include and/or be configured to interface with any or all of: a set of sensors, a user device, a processing system, a computing system (e.g., remote computing system), a vehicle, and/or any other suitable component(s).


2. Benefits.


Variations of the system and method can confer various benefits over existing systems and methods.


First, in some variations, the system and/or method confers the benefit of distinguishing between various types of device usage by a driver, each of which can be associated with different levels of risk. In specific examples, the method confers the benefit of enabling a determination to be made of whether the user is engaging with a mobile device while the mobile device is being held by a user or while the mobile device is in a mount/holder (e.g., mounted to a dashboard of the vehicle). This information can be used by an insurance company to appropriately assess the riskiness of a driver. In other specific examples, additionally or alternatively, the system and/or method can be used to provide feedback (e.g., notification of time spent in a state of distracted driving, risk score, etc.) and/or another output (e.g., suspension of device usage while driving) to the driver and/or another suitable user.


Second, in some variations, the system and/or method confers the benefit of minimizing a number of false positives associated with the driver engaging in device usage behavior. In specific examples, for instance, the system and/or method, in assessing device usage, takes into account a central processing unit (CPU) spiking for reasons (e.g., downloading data, transitioning between songs in an audio application, etc.) unrelated to device usage by the user.


Third, in some variations, the system and/or method confers the benefit of assessing device usage associated with a mobile device based only on data collected at the mobile device. In specific examples, for instance, any or all of motion data, processing data, and proximity/contact data can be used to determine device usage.


Fourth, in some variations (e.g., as shown in FIGS. 3A-3C), the system and/or method confers the benefit of assessing a user's interaction at a secondary client application with a primary client application, wherein information is not directly received at the primary client application from the secondary client application. In specific examples, for instance, a telematics client application can indirectly assess usage of a secondary client application through the detection and processing of any or all of: motion information of the user device (e.g., from an accelerometer, from a gyroscope, etc.), orientation information of the user device (e.g., from a gyroscope, from an orientation sensor, etc.), proximity information (e.g., from a proximity sensor of the user device), processing activity of the user device (e.g., CPU activity, network transfer information, available free memory, etc.), and/or any other suitable sensor information.


Additionally or alternatively, the system and/or method can confer any other suitable benefit(s).


3. System.


The system 200 functions to provide information with which to assess a device usage by a user. Additionally or alternatively, the system 200 can function to perform at least a portion of the processing required to determine a device usage, transfer information to another entity (e.g., remote computing system) at which the processing is performed, produce an output, and/or perform any other suitable function.


The system 200 can include and/or be configured to interface with a user device, wherein the device usage is associated with the user device. Examples of the user device include a tablet, smartphone, mobile phone, laptop, watch, wearable device (e.g., glasses), or any other suitable user device. The user device can include power storage (e.g., a battery), processing systems (e.g., CPU, GPU, memory, etc.), user outputs (e.g., display, speaker, vibration mechanism, etc.), user inputs (e.g., a keyboard, touchscreen, microphone, etc.), a location system (e.g., a GPS system), sensors (e.g., optical sensors, such as light sensors and cameras, orientation sensors, such as accelerometers, gyroscopes, and altimeters, audio sensors, such as microphones, magnetometers, etc.), data communication system (e.g., a WiFi transceiver(s), Bluetooth transceiver(s), cellular transceiver(s), etc.), or any other suitable component.


The system 200 can additionally include and/or be configured to interface with a set of sensors (e.g., as part of user device, separate from user device, etc.), which can include any or all of: motion sensors (e.g., accelerometer, gyroscope, magnetometer, orientation sensor, etc.), which can function to detect any or all of: user device movement, user device orientation, vehicle movement, vehicle orientation, position of user device within the vehicle, and/or any other suitable information; proximity sensors (e.g., optical sensors, capacitive sensors, etc.), which can function to detect and/or classify a user's handling of a user device; location sensors (e.g., GPS); any or all of the sensors described above; any or all of the sensors described below; and/or any other suitable sensors.


In addition to or alternative to the information described below in the method 100, the sensor system can collect any or all of the following user activity data and vehicle data. User activity data (e.g., user activity datasets) can include any one or more of: device datasets (e.g., describing devices associated with the user; historical pairings and/or communications with other devices, such as vehicles and/or other user devices via Bluetooth or other communication protocols; etc.); behavioral datasets (e.g., behavior data upon which user activity models can be based or generated for users and/or user accounts; social media datasets derived from social media actions taken by the user and/or related to the user, which can indicate user activity states corresponding to social media actions; physical activity datasets associated with movement patterns of the user; etc.); demographic datasets (e.g., nationality, ethnicity, age, gender, etc.); device event datasets (e.g., associated with text messaging; phone calling; device idling; device charging; application usage; sensor usage; intelligent personal assistants; outputs such as media casting; inputs; etc.), and/or any other suitable datasets associated with a user. 
Vehicle data can include any one or more of: proximity sensor data (e.g., radar, electromagnetic sensor data, ultrasonic sensor data, light detection and ranging, light amplification for detection and ranging, line laser scanner, laser detection and ranging, airborne laser swath mapping, laser altimetry, sonar, etc.), vehicle camera data (e.g., in-car cameras, exterior cameras, back-up cameras, dashboard cameras, front-view cameras, side-view cameras, image recognition data, infrared camera, 3D stereo camera, monocular camera, etc.), car speed, RPM data, odometer, altimeter, location sensor data (e.g., GPS data, compass data, etc.), motion sensor data (e.g., from an accelerometer, gyroscope, etc.), environmental data (e.g., pressure, temperature, etc.), light sensor data (e.g., from infrared sensors, ambient light sensors, etc.), fuel level (e.g., percentile-based, distance-based, low warning, etc.), fuel system status, oxygen sensor data, throttle position, gear data (e.g., drive, neutral, park, reverse, gear number, etc.), HVAC data (e.g., current temperature, target temperature, etc.), driving status (e.g., restricted features such as user inputs into control panel, unrestricted features, etc.), connectedness status (e.g., near-field communication pairing between the vehicle and a user's mobile device), and/or any other suitable vehicle data. For example, vehicle data can be used in comparisons with mobile device-derived movement data in training, updating, and/or otherwise processing computational models described herein and/or other suitable models. However, supplementary data types can be configured in any suitable manner.


The system 200 can include one or more client applications executing on the user device, which can function to collect information with which to determine a device usage and/or perform any or all of the processing required to determine the device usage.


The client applications can include a primary client application, which is preferably a telematics application, which functions to receive information from one or more sensors (e.g., as described in S110). Additionally or alternatively, the primary client application can perform any or all of the processing required to determine a device usage.


The client applications can optionally include one or more secondary client applications (e.g., messaging applications, native user device applications, social media applications, viewer applications, Facebook, YouTube, etc.), wherein the primary client application indirectly detects usage of (e.g., without directly receiving data from) the secondary client applications (e.g., as shown in FIGS. 3A-3C).


Additionally or alternatively, the client applications can be otherwise arranged or described.


The system 200 preferably interfaces with a vehicle, wherein the device usage is determined for a user driving the vehicle and to which the user device (equivalently referred to herein as a mobile device) is optionally removably coupled (e.g., at a mount).


The vehicle to which the mobile electronic device is removably coupled (e.g., present within, mounted to an interior of, etc.), and in the context of which variations of the method 100 can be implemented, can be an automotive vehicle (e.g., a car, truck, SUV, etc.), a light electric vehicle (e.g., an electric bike, an electric scooter, an electric skateboard, any other suitable electric vehicle that is configured to carry at least one human, etc.), an aerial vehicle (e.g., a fixed wing aircraft, a rotary wing aircraft, a drone, a hovercraft, etc.), a mass transit land vehicle (e.g., a bus, a locomotive train, a light-rail train, etc.), an aquatic vehicle (e.g., a pleasure craft, a ferry, a hydrofoil craft, etc.), and/or any other suitable vehicle.


Additionally or alternatively, the system 200 can include any other suitable components and/or be otherwise configured.


4. Method.


The method 100 functions to assess the usage of a mobile device by a user driving a vehicle. Additionally or alternatively, the method 100 can function to distinguish between multiple types of device usage (e.g., phone call vs. texting vs. interacting with a phone in a mount); quantify a device usage (e.g., duration, severity, etc.); determine a risk score for a user associated with the riskiness of their driving; produce an output based on the device usage and/or risk score (e.g., report to an insurance company, limit device usage, provide awareness to the user, etc.), and/or perform any other suitable function.


The method 100 is preferably performed with a system as described above, but can additionally or alternatively be performed with any other suitable system.


4.1 Method: Receiving User Device Information S110


The method 100 includes receiving user device information S110, which functions to receive information with which to determine a device usage associated with a mobile device by the user. Additionally or alternatively, S110 can function to eliminate data from being used when determining a device usage (e.g., to reduce false positives, to remove noise, etc.) and/or perform any other suitable function(s).


The user device information, equivalently referred to herein as user device data, is preferably generated and/or sampled at the user device, such as at any or all of: one or more sensors of the sensor system, at a processing system (e.g., CPU) of the user device, at a computing system of the user device, at a client application executing on the user device, at an SDK associated with a client application, and/or at any other suitable component(s) of the user device. Information received from a user device is preferably received from a single user device, but can additionally or alternatively be received from multiple user devices.


Additionally or alternatively, information can be received from any or all of: an external sensor system; a vehicle (e.g., speedometer); a data storage source; a remote computing system; and/or any other suitable data source.


The information is preferably received at a user device (e.g., the same user device as where it's generated, a separate user device, etc.), further preferably at a client application (e.g., primary client application, secondary client application, etc.) executing on the user device, but can additionally or alternatively be received at any or all of: a processing system of the user device, a computing system of the user device, an SDK associated with the client application, and/or at any other suitable component or module of the user device. Further additionally or alternatively, information can be received at a remote computing system and/or any other suitable component(s).


The information collected in S110 (and/or supplementary datasets) is preferably sampled by components arranged at a mobile device, but can additionally or alternatively be sampled by components associated with (e.g., arranged at, positioned within, mounted to, physically connected to, etc.) any suitable device and/or vehicle. The user device information (and/or supplementary datasets) is preferably associated with one or more users (e.g., collected at a mobile device of the user; collected at a mobile device physically proximal a user; stored in association with user accounts for one or more users; movement datasets describing user movement, such as pre-, during, and/or post-driving session; etc.). Further, the user device information is preferably associated with one or more driving sessions, which can be characterized by characteristics and/or patterns in movement data (e.g., movement data indicating movement that is characteristic of a vehicle) and/or supplementary data (e.g., engine sounds as sensed by a smartphone microphone), and where driving sessions can include movement of, stoppage of, operation control related to, and/or other suitable aspects associated with vehicles. Additionally or alternatively, the user device information can be associated or independent of any suitable entities.


Any or all of the information in S110 can be collected continuously from and/or at mobile devices in a vehicle (e.g., during vehicle operation), such as in real-time, but can additionally or alternatively be collected periodically (e.g., at vehicle stoppage events, at predetermined time intervals such as every minute, etc.) and/or in temporal relation to (e.g., in response to) trigger conditions (e.g., movement-related thresholds; satisfaction or failure of device association conditions; detection of user's interaction with the user device; conditions associated with supplementary data; etc.); however, collecting information can be performed at any suitable time and/or frequency. Any or all of the information can be associated with one or more temporal indicators (e.g., a time period, a time point, etc.) describing when the information was collected (e.g., sampled at the corresponding sensor, etc.), determined, and/or otherwise processed. Additionally or alternatively, associated temporal indicators can describe temporal context surrounding the information, but can further additionally or alternatively correspond to any suitable temporal indicator or be time-independent. However, collecting information can be performed in any suitable manner.


S110 can be performed by any or all of: an application (e.g., a native application executing in the background, a client, etc.); the operating system of the mobile device (e.g., iOS™, Android™); a remote computing system associated with the operating system; or by any suitable system. In variants where S110 is performed by the operating system or remote system associated with the operating system, the method can reduce additional resource consumption (e.g., battery consumption) because the underlying data is being collected and used for other processes; the application itself is minimizing or not consuming any additional resources. In one variation, S110 includes setting a wake condition with the mobile device operating system, wherein the mobile device operating system delivers an event (e.g., notification) to the application, launches the application, progresses to S120, or otherwise triggers further application operation in response to the condition being met.


The user device information preferably includes movement information (equivalently referred to herein as motion information), which is preferably associated with motion of the user device (e.g., with respect to the vehicle), but can additionally or alternatively be associated with the vehicle (e.g., speed of vehicle), the user, and/or any other suitable component(s).


The movement information can include any or all of: movement data (e.g., position data; velocity data; acceleration data; a position, velocity, acceleration (PVA) dataset; a motion dataset, motion data, movement data, etc.); orientation data (e.g., orientation of the user device, change in orientation direction at which a screen of the user device is facing, gravity deviation, rate of gravity deviation, rate of gravity deviation around a projected centroid, etc.); and/or any other suitable information.


The movement information is preferably sampled at one or more movement sensors indicative of motion and/or orientation, such as any or all of: multi-axis accelerometers, single-axis accelerometers, gyroscopes, and/or any other motion or orientation sensors (e.g., virtual sensor utilizing accelerometer and gyroscope data; virtual sensor utilizing accelerometer, gyroscope, and magnetometer data; etc.). Additionally or alternatively, motion information and/or any other information collected in S110 can be collected from and/or derived from any or all of: location sensors (e.g., GPS data collection components, magnetometer, compass, altimeter, etc.), optical sensors, audio sensors, electromagnetic (EM)-related sensors (e.g., radar, lidar, sonar, ultrasound, infrared radiation, magnetic positioning, etc.), environmental sensors (e.g., temperature sensors, altitude sensors, pressure sensors, etc.), biometric sensors, and/or any other suitable sensors.


In some variations, the motion information includes acceleration data collected from an accelerometer of the user device and orientation data (e.g., rate of gravity deviation) collected from a gyroscope and/or an orientation sensor (e.g., virtual sensor) of the user device. The motion information is preferably collected continuously while the user is driving (e.g., as detected by the accelerometer, as detected by a GPS application, etc.), but can additionally or alternatively be collected in response to another trigger, collected intermittently, and/or otherwise collected.


Further additionally or alternatively, S110 can be performed in absence of collecting motion information.


The user device information further preferably includes system performance information, which functions to determine whether or not the user is engaging with the user device. The system performance information can additionally or alternatively function to determine if one or more secondary client applications are being used, determine which client applications are being used, quantify (e.g., in duration of time, number of interactions, etc.) the user's engagement with the user device, and/or perform any other suitable function.


The system performance information preferably includes processing information associated with the user device, such as activity information collected at the CPU of the user device, which can be associated with any or all of: network transfer activities, downloading activities, and/or any other suitable activities associated with one or more processing systems of the user device.


The system performance information can additionally or alternatively include any other suitable information associated with the user device, such as an amount of free memory.


In some variations, the system performance information enables a primary client application to detect that a user is interacting with a secondary client application, despite not having direct access to the information being collected at the secondary client application. In specific examples, this is enabled by collecting CPU activity information, network transfer information, and an amount of free memory (e.g., which can be compared with a previous value to determine and/or predict a user's activity on the device) associated with the user device.
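As an illustrative sketch only (the patent does not specify an algorithm or thresholds), a heuristic of this kind might require a CPU spike to coincide with a drop in free memory before flagging usage, so that a spike alone (e.g., a background download or a song transition) is not counted. All field names and threshold values below are assumptions:

```python
# Hypothetical sketch of the indirect-detection heuristic described above.
# Field names and thresholds are illustrative assumptions, not values
# from the patent.

from dataclasses import dataclass

@dataclass
class PerfSample:
    cpu_load: float        # fraction of CPU in use, 0.0-1.0
    net_bytes: int         # bytes transferred since the previous sample
    free_memory_mb: float  # free memory reported by the OS, in MB

def likely_user_interaction(prev: PerfSample, curr: PerfSample,
                            cpu_spike: float = 0.3,
                            mem_drop_mb: float = 50.0) -> bool:
    """Flag a sample pair as probable foreground app usage.

    A CPU spike alone can be a false positive (e.g., a background
    download, or an audio app transitioning between songs), so this
    sketch also requires a drop in free memory, suggesting a foreground
    app allocating UI resources.
    """
    cpu_spiked = curr.cpu_load - prev.cpu_load >= cpu_spike
    memory_dropped = prev.free_memory_mb - curr.free_memory_mb >= mem_drop_mb
    return cpu_spiked and memory_dropped
```

A real implementation would tune these thresholds against labeled driving sessions and could also weigh the network-transfer signal.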


Additionally or alternatively, any other suitable information can be collected and/or S110 can be performed in absence of collecting system performance information.


The device information further preferably includes device handling information, which functions to determine if and how the user is physically interacting with (e.g., holding in hand for texting, holding close to ear for calling, etc.) the user device. Collecting device handling information can additionally or alternatively function to determine if the phone is not being held by the user (e.g., is held in a mount, is otherwise coupled to an interior of the vehicle, is resting on a seat of the vehicle, etc.), determine a degree to which the user is interacting with the user device, determine and/or predict an activity (e.g., texting, calling, viewing a map and minimally interacting, etc.) associated with the interaction, and/or perform any other suitable function(s).


The handling information preferably includes proximity and/or contact information, which can be collected at any or all of: proximity sensors (e.g., infrared sensors), optical sensors (e.g., light sensor, ambient light sensors, etc.), contact sensors (e.g., electrical conductance sensor), heat sensors, and/or any other suitable sensors configured to determine if the user device is proximal to the user (e.g., touching, proximal, proximal to the user's face, etc.). The handling information is preferably collected by one or more sensors onboard the user device (e.g., a proximity sensor proximal to a speaker of the user device, an ambient light sensor proximal to a speaker of the user device, etc.), but can additionally or alternatively be collected at any other suitable sensors onboard or external to the user device.


Data from the proximity sensor preferably includes return signals and/or a return field resulting from the emission of an electromagnetic field and/or a beam of electromagnetic radiation (e.g., infrared radiation). This can be used, for instance, to determine if the user device is being held to the user's ear (e.g., non-stationary and proximity sensor indicates user is close, during a phone call, etc.) versus being held in a user's hand (e.g., non-stationary and proximity sensor does not indicate that a user is close, while texting, etc.). Data from another sensor, such as an optical sensor (e.g., ambient light sensor), can additionally or alternatively be used (e.g., to provide a higher confidence determination of how the user is interacting with the phone, to detect a user speaking on speaker phone and holding the phone backward, to distinguish between a user holding the phone and the phone resting face down on a seat, etc.). The second sensor can function, for instance, to distinguish between different types of handling of the user device by the user. Additionally or alternatively, any other suitable sensors can be used.


The sensors can be arranged proximal to each other (e.g., on the same surface of the user device), on different surfaces of the user device (e.g., to accurately characterize a user's handling of the phone, etc.), all onboard the user device, all offboard the user device, and/or any combination of these.


In some variations, the handling information is collected at a proximity sensor arranged onboard the user device (e.g., proximal to a speaker, proximal to a screen, etc.) and at an ambient light sensor arranged onboard the user device (e.g., proximal to a speaker, proximal to a screen, etc.), which are collectively used to determine and/or predict how the user is handling a non-stationary phone (e.g., holding up to ear, holding in hand, etc.).
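A minimal sketch of such a combined decision rule, assuming a binary proximity reading and an ambient-light level (the labels, inputs, and threshold are illustrative assumptions, not the patent's classifier):

```python
# Illustrative decision rule combining device state, proximity, and
# ambient light; all names and thresholds are assumptions.

def classify_handling(is_stationary: bool, proximity_near: bool,
                      ambient_lux: float, dark_lux: float = 5.0) -> str:
    """Predict how the user is handling the device."""
    if is_stationary:
        # Stationary with respect to the vehicle: likely mounted or resting.
        return "mounted_or_resting"
    if proximity_near:
        # Non-stationary with something close to the sensor: likely a call.
        return "held_to_ear"
    if ambient_lux < dark_lux:
        # Non-stationary but dark: e.g., face down on a seat or in a pocket.
        return "face_down_or_covered"
    # Non-stationary, nothing near the sensor, screen exposed: e.g., texting.
    return "held_in_hand"
```

A production classifier would more likely be learned from labeled sensor traces than hand-coded, but the inputs would be of this kind.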


Additionally or alternatively, any other suitable information can be collected and/or S110 can be performed in absence of collecting handling information.


The method 100 can additionally include collecting location information S112, which functions to determine one or more characteristics of the vehicle. The location information can additionally or alternatively function to characterize and/or eliminate user device information based on one or more parameters (e.g., speed) associated with the vehicle, and/or perform any other suitable function.


S112 can include collecting one or more location datasets (e.g., collecting location data), which can include any one or more of: GPS data (e.g., position coordinates, associated time stamps, etc.), indoor positioning system data, local positioning system data, multilateration data, GSM localization data, self-reported positioning, control plane locating data, compass data, magnetometer data, and/or any other suitable location-related data. In an example, GPS data can be leveraged for complete PVA solutions, but can additionally or alternatively include any movement data, such as retrieved using GNSS data (e.g., via GLONASS, Galileo, BeiDou). In another example, proximity sensors associated with mobile phones (e.g., for capacitive proximity sensing; IR-based proximity sensing; etc.) can be used in determining the location of objects (e.g., users within a vehicle). In another example, S112 can include collecting a micro-location dataset (e.g., sampled using beacons within and/or mounted to a vehicle). In another example, S112 can include collecting a location dataset sampled by a mobile device and indicating the location of the mobile device within the vehicle (e.g., proximal front of vehicle; back of vehicle; etc.). However, collecting location datasets can be performed in any suitable manner.


In some variations, S112 includes continuously collecting location data of the vehicle, which can be used to determine any or all of: a speed of the vehicle, a road being traveled by the vehicle, one or more turns of the vehicle, and/or any other suitable information.
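As one concrete example of deriving a vehicle parameter from the location data, speed can be estimated from successive GPS fixes via great-circle distance over elapsed time; the fix layout below is an assumption for illustration:

```python
# Sketch: estimating vehicle speed from two timestamped GPS fixes.
# The (lat, lon, unix_time) tuple layout is an illustrative assumption.

import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in meters."""
    R = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def speed_mps(fix_a: tuple, fix_b: tuple) -> float:
    """Average speed between two fixes of the form (lat, lon, unix_time)."""
    lat1, lon1, t1 = fix_a
    lat2, lon2, t2 = fix_b
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("fixes must be time-ordered")
    return haversine_m(lat1, lon1, lat2, lon2) / dt
```

In practice GPS velocity is usually taken from the receiver's own PVA solution and filtered (e.g., to reject jitter while stopped), but the positional estimate above is a useful cross-check.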


In one variation, S110 includes continuously collecting (e.g., while the user device is in a vehicle, while the user device is in a vehicle as determined by location information, etc.) motion information (e.g., from an accelerometer, from a gyroscope, from an orientation sensor, etc.) associated with the user device, performance information (e.g., processing activity, CPU activity, network transfer information, amount of free memory, etc.) associated with the user device, and optionally handling information (e.g., from a proximity sensor, from an optical sensor, etc.) associated with the user device, which is used to assess a user's interaction with the user device in subsequent processes of the method. Additionally, location information can be collected and used to determine one or more parameters of the vehicle (e.g., speed, location in space, etc.), which can also be used in subsequent processes of the method (e.g., to eliminate data, to appropriately determine a risk score, etc.).


Additionally or alternatively, any other suitable information can be collected in S110.


4.2 Method: Based on the User Device Information, Determining a State of the User Device S120


The method 100 includes determining a state of the user device S120, which functions to classify whether or not the user device is stationary with respect to the vehicle. Additionally or alternatively, S120 can function to determine any other characterization of the user device, such as any or all of: the user device is being handled (e.g., held, moved around, etc.) by the user, the user device is coupled to the vehicle such as in a mount, and/or any other suitable characterization.


The state preferably indicates whether or not the user device is stationary (e.g., with respect to the vehicle) based on a stationary metric, which is preferably determined, at least in part, based on any or all of the motion information (e.g., acceleration information, gyroscope information, orientation information, etc.) described above. Additionally, the state can be determined based on any or all of the other information described above (e.g., performance information, handling information, location information, etc.), as well as any other supplementary data (e.g., environmental data). The stationary metric can include a binary indication of whether or not the device is stationary; a classification of the state as any or all of: stationary, non-stationary, in transition between stationary and non-stationary; a parameter along a spectrum of how stationary the device is; and/or any other suitable metric(s). The stationary metric can be determined based on any or all of: one or more algorithms, a deep learning model, comparison with a threshold, and/or determined in any other suitable way.
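The patent leaves the stationary metric open-ended; as one minimal sketch, a binary form of the metric could threshold the variance of accelerometer magnitude over a short window (the window, threshold, and labels are assumptions):

```python
# Sketch of a binary stationary metric: low variance in accelerometer
# magnitude over a window suggests the device is stationary with respect
# to the vehicle. Threshold and labels are illustrative assumptions.

import statistics

def stationary_state(accel_magnitudes: list, threshold: float = 0.05) -> str:
    """Classify a window of accelerometer magnitude samples (in g).

    Returns 'stationary' when the population variance over the window is
    below the threshold, 'non_stationary' otherwise. A fuller version
    would also use gyroscope/orientation data and could emit a
    transition class or a continuous score, as described above.
    """
    if len(accel_magnitudes) < 2:
        raise ValueError("need at least two samples")
    var = statistics.pvariance(accel_magnitudes)
    return "stationary" if var < threshold else "non_stationary"
```

Magnitude (rather than per-axis values) keeps the sketch insensitive to how the device is oriented in a mount.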


The state of the user device preferably serves as a trigger for one or more sub-processes of S130 (e.g., determining whether phone is handled by a user or in a mount, etc.), but can additionally or alternatively be used in any suitable way(s) throughout the method 100.


The state of the device can be determined for each of a set of sections of data, equivalently referred to herein as segments of data, wherein the sections of data are separated by a transition (e.g., a crossing of a motion threshold, a performance threshold, a device handling threshold, etc.) and/or any other suitable boundary.


In one variation, S120 includes determining, based on any or all of the information collected in S110 and an algorithm (e.g., of a machine learning model), a classification of at least whether the device is in a stationary state or a non-stationary state, which functions to trigger one or more actions in S130.


4.3 Method: Based on the State of the User Device, Determining a Set of Tap Parameters and/or a Device Classification S130


The method 100 includes, based on the state of the user device, determining a set of tap parameters and/or a device classification S130, which functions to determine (e.g., quantify, assess, etc.) a user's (e.g., driver's) interaction with the user device.


Taps herein refer to any actions or gestures associated with a user's interaction with a user device, further preferably with a touch-sensitive surface of the user device, such as, but not limited to: a tap or touch, a swiping motion, a pinching motion, a zooming motion (e.g., zooming in, zooming out, etc.), a typing action, and/or any other suitable interaction. Additionally or alternatively, taps can refer to interactions with other components of the user device, such as a pressing of a button (e.g., power, volume, etc.), a use of one or more sensors (e.g., an audio sensor to provide a command, etc.), and/or any other suitable interactions.


The tap parameters can include and/or be determined based on any or all of: a number of taps, a frequency of taps, a duration of taps, a time in between taps, a set of taps (e.g., closely-spaced taps), a tap region (e.g., an aggregated set of taps), a tap event (e.g., which remains after comparisons with a predetermined set of thresholds), and/or any other suitable parameters.


In an event that the device is in a stationary state (e.g., during a section of data collected in S110), S130 preferably includes determining a first set of tap parameters S132.


Determining that the user device is in a stationary state preferably triggers a second state to be determined, which determines whether or not the user device is in a mount. The second state can be determined based on any or all of: an algorithm, a deep learning model, comparison with a threshold, sensor data (e.g., visual data from a camera, etc.), any of the information collected in S110, knowing the location of the user device within the vehicle (e.g., consistently at the front of the vehicle, on a dashboard, etc.) and/or any other suitable information.


In an event that it is determined that the user device is in a mount, S132 preferably includes determining (e.g., computing) a set of tap parameters, wherein the set of tap parameters are determined, at least in part based on motion information collected in S110, such as data collected from any or all of: an accelerometer, a gyroscope, and an orientation sensor. Additionally or alternatively, the set of tap parameters can be based on any other suitable sensor information and/or any other information.


The set of tap parameters preferably includes a set of tap regions, which can be determined based on any or all of the following processes. Additionally or alternatively, the set of tap parameters can include any or all of the intermediate parameters described below, any or all of the parameters described in S134, and/or any other suitable parameters.


In a preferred variation, the set of tap parameters is determined based on a gravity deviation parameter, which can reflect a user's interaction with (e.g., perturbation of) the user device (e.g., tapping the screen, asymmetrically tapping the screen, causing perturbations to the phone while it is mounted, etc.) while the device is mounted. The gravity deviation parameter is preferably calculated about a projected centroid (e.g., medoid) of the user device, but can additionally or alternatively be calculated about any suitable region, point, axis, plane, or other feature (physical or virtual) of the user device. In specific examples, the gravity deviation parameter includes a rate of gravity deviation. Additionally or alternatively, any other gravity deviation parameter can be calculated and/or any other suitable parameter can be calculated.
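One way to sketch a "rate of gravity deviation" signal, under stated assumptions (a low-pass-filtered gravity vector per sample, a fixed baseline vector for the mounted pose, and a uniform sample period; none of these specifics are given in the text):

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def gravity_deviation_rates(gravity_samples, baseline, dt=0.02):
    """Per-sample rate of change (rad/s) of the deviation of the device's
    gravity vector from a baseline (e.g., its resting mounted pose).

    gravity_samples: list of (x, y, z) gravity vectors, one per sample.
    baseline: the (x, y, z) gravity vector of the undisturbed mounted pose.
    dt: assumed sample period in seconds (hypothetical value).
    """
    angles = [angle_between(g, baseline) for g in gravity_samples]
    return [abs(a2 - a1) / dt for a1, a2 in zip(angles, angles[1:])]
```

A tap on a mounted phone briefly perturbs its orientation, so the deviation angle spikes and the rate signal peaks at the tap; an undisturbed mount yields rates near zero.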


S132 can optionally include classifying at least a subset of the gravity deviation parameters as potential tap events, which functions to eliminate tap events which are likely not associated with any or all of: the user interacting with the user device (e.g., and are instead noise), the user being distracted by his or her interaction with the user device (e.g., for brief or rare taps), and/or can perform any other suitable function. In some variations, a predetermined percentage (e.g., 10%, less than 10%, greater than 10%, between 2% and 25%, etc.) of highest-value (e.g., greatest rate of gravity deviation) tap parameters are classified as potential tap events. The predetermined percentage can be determined in any suitable way. Additionally or alternatively, tap parameters which exceed a predetermined threshold (e.g., a minimum rate of gravity deviation threshold) can be classified as potential tap events, a random set of tap parameters can be classified as potential tap events, all tap parameters can be classified as potential tap events, and/or potential tap events can be determined in any other suitable way(s).
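The top-percentage selection above can be sketched as follows; the 10% fraction is the example value from the text, and the (timestamp, rate) pair format is an assumption of this sketch.

```python
def select_potential_tap_events(samples, top_fraction=0.10):
    """Keep the highest-valued fraction of samples as potential tap events.

    samples: list of (timestamp, rate_of_gravity_deviation) pairs.
    Returns the top `top_fraction` by rate, restored to chronological order.
    """
    if not samples:
        return []
    k = max(1, int(len(samples) * top_fraction))
    ranked = sorted(samples, key=lambda s: s[1], reverse=True)
    return sorted(ranked[:k])  # tuples sort by timestamp first
```

A fixed minimum-rate threshold (the alternative mentioned above) would simply replace the ranking step with a filter.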


S132 can optionally include aggregating the set of potential tap events into tap regions, which functions to take into account one or more temporal parameters (e.g., duration of tap event, duration of a set of tap events, time between subsequent tap events, etc.) associated with the user's interaction with the user device. Additionally, this can function to eliminate events not associated with a user interaction (e.g., a false positive), not significant in causing user distraction, and/or any other suitable events. The temporal parameter preferably includes a time between adjacent tap events, which is compared with a maximum time threshold (e.g., 10 seconds, less than 10 seconds, greater than 10 seconds, etc.), wherein potential tap events which have a time spacing below the time threshold are aggregated into a tap region. Additionally or alternatively, a deep learning model and/or algorithm can be used to determine a tap region, a minimum threshold can be used to determine a tap region, a set of multiple thresholds can be used, and/or any other suitable parameters can be used to determine a tap region. Further additionally or alternatively, S132 can be performed in absence of aggregating potential tap events into a tap region. In some variations, a set of potential tap events which have a time between events within 10 seconds form a tap region.


S132 can optionally include discarding one or more tap regions based on a predetermined threshold, such as a temporal threshold (e.g., duration of tap region, number of potential tap events in the tap region, etc.). In some variations, S132 includes discarding tap regions (e.g., removing from further consideration) having a duration (e.g., from the beginning of the first potential tap event to the end of the last potential tap event) of less than a minimum threshold (e.g., 3 seconds, less than 3 seconds, greater than 3 seconds, etc.).
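The two steps above — aggregating potential tap events separated by less than a maximum gap into tap regions, then discarding regions shorter than a minimum duration — can be sketched as one function. The 10-second gap and 3-second minimum are the example values from the text.

```python
def build_tap_regions(event_times, max_gap=10.0, min_duration=3.0):
    """Aggregate tap-event timestamps into tap regions and prune short ones.

    event_times: sorted timestamps (seconds) of potential tap events.
    Returns a list of (start, end) regions at least `min_duration` long.
    """
    regions = []
    for t in event_times:
        if regions and t - regions[-1][1] <= max_gap:
            regions[-1][1] = t           # close enough: extend current region
        else:
            regions.append([t, t])       # too far apart: start a new region
    return [(s, e) for s, e in regions if e - s >= min_duration]
```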


S132 can optionally include analyzing the tap regions, which functions to determine one or more features associated with the tap region. These features can be used to eliminate tap regions from further consideration, rank a riskiness of tap regions, and/or perform any other suitable function. The tap regions are preferably analyzed with respect to system performance information, further preferably with respect to processing activity (e.g., CPU usage) of the user device associated with the tap region. This can function, for instance, to further narrow down the tap regions which are most likely associated with user interaction with the user device. In preferred variations, for instance, S132 includes analyzing each of the set of tap regions for an associated CPU usage occurring contemporaneously (e.g., at the same time, substantially at the same time, etc.) with the tap region, wherein tap regions associated with a CPU usage below one or more predetermined thresholds are discarded (e.g., removed, eliminated from further consideration, etc.). In specific examples, a CPU curve (e.g., scaled, normalized, smoothed, scaled and smoothed, etc.) associated with each tap region is analyzed, and if the CPU curve has a predetermined number of points (e.g., 2, 1, greater than 2, etc.) above a predetermined threshold value (e.g., 0.75, 0.5, less than 0.75, greater than 0.75, etc.), it moves forward to further processing; otherwise it is discarded. Additionally or alternatively, one or more thresholds can be dynamically determined (e.g., based on the collective set of CPU curves associated with the tap region), a different number of thresholds can be implemented, an algorithm and/or a deep learning model can be implemented, and/or the tap regions can be otherwise analyzed and/or processed.
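The CPU-curve check described above reduces to a point count; the 0.75 threshold and 2-point minimum are the example values from the text, and the curve is assumed to be already normalized.

```python
def region_has_cpu_activity(cpu_curve, threshold=0.75, min_points=2):
    """Decide whether a tap region shows enough contemporaneous CPU activity.

    cpu_curve: normalized CPU-usage samples (0..1) for the region's interval.
    Returns True when at least `min_points` samples exceed `threshold`,
    i.e., the region moves forward to further processing.
    """
    return sum(1 for v in cpu_curve if v > threshold) >= min_points
```

Real taps tend to wake the touch pipeline and the foreground app, so a genuine interaction region usually shows a CPU burst that pure mechanical vibration (e.g., road noise) does not.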


S132 can optionally include discarding and/or chopping one or more tap regions based on a vehicle parameter (e.g., as determined based on information collected in S110, as determined based on location information, etc.), such as a vehicle speed, which can function to discard tap regions associated with low vehicle speeds (e.g., less risky behavior) and/or no speed. In some variations, for instance, S132 includes discarding and/or chopping (e.g., reducing a duration of) tap regions associated with a vehicle speed below a predetermined threshold (e.g., 5 meters per second, less than 5 meters per second, greater than 5 meters per second, 11 miles per hour, less than 11 miles per hour, greater than 11 miles per hour, etc.). Additionally or alternatively, the tap regions can be otherwise analyzed.
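A simplified sketch of the speed-based pruning above, assuming speed is available as (timestamp, speed) samples; the 5 m/s threshold is the example value from the text, and returning the envelope of qualifying samples is a simplification of "chopping."

```python
def chop_region_by_speed(region, speed_samples, min_speed=5.0):
    """Trim a tap region to the span where the vehicle was moving fast enough.

    region: (start, end) timestamps of a tap region (seconds).
    speed_samples: list of (timestamp, speed_m_per_s) pairs.
    Returns the (start, end) envelope of in-region samples at or above
    `min_speed`, or None when the whole region is below threshold
    (i.e., the region is discarded).
    """
    start, end = region
    kept = [t for t, v in speed_samples if start <= t <= end and v >= min_speed]
    return (min(kept), max(kept)) if kept else None
```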


In an event that the device is not in a stationary state and is rather in a mobile state, S130 preferably includes determining a classification of the user device S134, wherein the classification is determined, at least in part, based on movement information (e.g., orientation information, motion information, etc.) collected in S110, such as data collected from any or all of: an accelerometer, a gyroscope, and an orientation sensor. Additionally or alternatively, the classification can be based on any other suitable sensor information and/or any other information. Further additionally or alternatively, S134 can include determining a second set of tap parameters (e.g., similar to those determined in S132, identical to those determined in S132, differently from those determined in S132, based on movement data, based on processing activity data, etc.), and/or any other suitable parameters. The second set of tap parameters can include a set of tap sections, which can be determined based on any or all of the following processes. Additionally or alternatively, the second set of tap parameters can include any or all of the intermediate parameters described below, any or all of the parameters described in S132, and/or any other suitable parameters.


S134 preferably includes splitting any or all of the user interaction data (e.g., any or all of the data collected in S110) into sections based on a set of transitions, wherein the set of transitions preferably separate stationary events from mobile events (e.g., based on motion information), but can additionally or alternatively separate different types or severities (e.g., in terms of risk) of mobile events, and/or otherwise determine subsections of data. Further additionally or alternatively, S134 can be performed based on a collective set of sections.
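The splitting step above — cutting the data stream at transitions between states — can be sketched as grouping contiguous runs of the same state label into sections. The (timestamp, state) input format is an assumption of this sketch.

```python
def split_into_sections(labeled_samples):
    """Split a labeled data stream into sections at state transitions.

    labeled_samples: chronological list of (timestamp, state) pairs,
    e.g., state in {"stationary", "mobile"}.
    Returns a list of (state, start_time, end_time) sections; each section
    is one contiguous run of a single state.
    """
    sections = []
    for t, state in labeled_samples:
        if sections and sections[-1][0] == state:
            sections[-1][2] = t          # same state: extend current section
        else:
            sections.append([state, t, t])  # transition: open a new section
    return [tuple(s) for s in sections]
```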


S134 further preferably includes classifying each section of user interaction data, which can function to determine and/or approximate how the user is interacting with the user device (e.g., as shown in FIGS. 4A-4C). The sections are preferably classified, at least in part, based on orientation information (e.g., from a gyroscope of the user device, from an orientation sensor of the user device, from an accelerometer of the user device) associated with the user device, wherein the orientation information can be used to determine and/or approximate if the user is engaging in a phone call; if the user is interacting with a screen of the user device (e.g., reading, texting, interacting with a client application, providing inputs to a client application, receiving outputs from a client application, etc.); and/or any other suitable classification (e.g., unknown interaction, no interaction, etc.). In some variations, for instance, an orientation corresponding to the screen facing upward or nearly upward corresponds to a user interacting with the screen of the user device. An orientation corresponding to a screen facing to the side or substantially to the side (e.g., against a user's ear) can correspond to a user making a phone call. Additionally or alternatively, the orientation information can be otherwise used in producing any or all of the classifications.
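An illustrative classifier for the section labels above, reducing orientation to a single angle between the screen normal and vertical; the specific angle cutoffs are assumptions of this sketch, not values from the text.

```python
def classify_section(screen_normal_angle_deg):
    """Classify a section from the angle (degrees) between the device's
    screen normal and vertical, as approximated from orientation data.

    Small angle  -> screen facing up   -> user reading / touching the screen.
    Large angle  -> screen facing side -> device held to the ear (a call).
    In between   -> ambiguous          -> unknown interaction.
    Cutoffs (35 and 60 degrees) are illustrative placeholders.
    """
    if screen_normal_angle_deg < 35:
        return "reading"
    if screen_normal_angle_deg > 60:
        return "calling"
    return "unknown"
```

Proximity data (next paragraph) can break ties: a device flagged "calling" by orientation but far from the user's body is more plausibly being held and read.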


Additionally or alternatively, any or all of the proximity information can be used in classifying each section of user interaction data. In some variations, for instance, the proximity information can indicate whether a user device is being held against the user (e.g., against the user's ear), or if the user device is not arranged proximal to the user (e.g., being held in a user's hand with the screen facing up, not being held by the user, etc.).


Further additionally or alternatively, any other information (e.g., collected in S110, outside of S110, etc.) can be used.


S134 can optionally further include reporting and/or recording (e.g., tallying, saving, etc.) one or more sections based on the classification. In some variations, sections having a classification indicating that the user is interacting with a screen of the user device (e.g., reading) carry on to further processes of the method, such as determining a risk score in S140. Prior to this, any or all of the classifications can optionally be assessed to predict whether or not the user is actively engaging with the user device during the section (e.g., based on motion data, based on CPU data, etc.). In examples, CPU activity must meet a minimum threshold (e.g., as described above) in order to be marked as active. Additionally or alternatively, the classifications can be otherwise analyzed or not analyzed.


The sections can further optionally be analyzed in relation to a parameter of the vehicle (e.g., speed, as described in S132, etc.), such as prior to being reported or recorded. Additionally or alternatively, S134 can be performed in absence of determining a vehicle parameter and analyzing the sections in light of it.


Additionally or alternatively, the set of tap parameters can be determined in the same way in S132 and S134, and/or be determined in any other suitable way(s).


S130 is preferably performed multiple times throughout the method 100, such as throughout the duration of a trip, wherein one of S132 and S134 is implemented at any time point that the user is driving. Additionally or alternatively, S130 can be performed once, intermittently, in response to a trigger, and/or at any other suitable times.


In a first variation, S130 includes, in response to detecting that the device is in a stationary state: determining a set of tap events occurring in a set of data (e.g., data stream) corresponding to the stationary state, wherein the set of data can include any or all of the information collected in S110; determining a set of tap regions based on the set of tap events, wherein determining the set of tap regions includes comparing each of the set of tap events with a maximum temporal spacing threshold (e.g., 10 seconds) and aggregating tap events with a temporal spacing less than the threshold into a tap region; determining a duration of the tap region; eliminating tap regions from further processing which have a duration less than a predetermined threshold (e.g., 3 seconds); optionally determining a processing activity associated with the tap region (e.g., a normalized CPU curve) and moving tap regions to further processing which have a processing parameter above a predetermined threshold (e.g., at least a predetermined number of points above a predetermined threshold value); determining a parameter associated with the vehicle (e.g., speed); and eliminating tap regions from further processing which are associated with a vehicle parameter below a predetermined threshold (e.g., 11 mph). Additionally or alternatively, S132 can include any other suitable processes.


In a specific example of the first variation, determining the set of tap events includes determining a set of rates of gravity deviation and determining a subset (e.g., top 10%) of the rates of gravity deviation having the highest value; aggregating the subset of tap events into tap regions which have a spacing of less than or equal to 10 seconds between adjacent tap events; discarding tap regions from further processing which have a duration of less than 3 seconds; discarding tap regions from further processing which are associated with a CPU activity below a predetermined threshold; and discarding tap regions from further processing which are associated with (e.g., occurring contemporaneously with) a vehicle speed less than or equal to 11 miles per hour.


In a second variation, additional or alternative to the first variation, S130 includes, in response to detecting that the device is in a non-stationary state: splitting data (e.g., a data stream), wherein the set of data can include any or all of the information collected in S110, corresponding to the non-stationary state into a set of sections based on a set of transitions (e.g., motion transitions, orientation transitions, etc.) occurring in the data; classifying each section (e.g., as a call, as a user interacting with/touching a screen of the user device, as unknown, as shown in FIGS. 4A-4C, etc.) based on orientation information associated with the user device; and optionally reporting and/or otherwise marking classifications associated with the riskiest behavior (e.g., user interacting with/touching a screen). Additionally, S134 can include discarding sections associated with a vehicle parameter outside of a set of thresholds (e.g., below a predetermined speed threshold), and/or otherwise analyzing the data.


4.4 Method: Determining a Risk Score Based on the Set of Tap Parameters S140


The method 100 can optionally include determining a risk score based on the set of tap parameters S140, which functions to quantify the riskiness of the user's interactions with his or her user device.


The risk score is preferably determined based on a phone tapping risk score, which is determined based on the percentage of a trip in which the user was tapping on the user device. The risk score can additionally or alternatively be determined based on a phone handling risk score, which is determined based on the percentage of a trip in which the user was handling the user device. The events corresponding to the user handling the user device preferably include the duration in time in which the device is mobile. Additionally or alternatively, the handling events can include events wherein the user device is mobile but the user is not interacting with the screen (e.g., when the user is on a call, when the user is not on a call and not interacting with the screen, when the user is holding the user device, etc.), and/or any other suitable events.


In variations wherein the risk score takes into account both the phone tapping and phone handling scores, the phone tapping scores and phone handling scores (e.g., all events wherein the user device is mobile) can be directly added together without taking into account overlapping events. This can function to signify that a user device handling event and a phone tapping event occurring simultaneously is as risky, or nearly as risky, as the events occurring separately. Additionally or alternatively, overlapping events can be taken into account, the risk scores can be weighted prior to combining, a deep learning model and/or one or more algorithms can be used to determine the risk score, and/or the risk score can be otherwise determined.
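The direct-addition combination above can be sketched as follows; the per-trip durations as inputs and the function names are illustrative assumptions.

```python
def combine_risk_score(tap_seconds, handling_seconds, trip_seconds):
    """Combine tapping and handling risk into one per-trip score.

    Each component score is the fraction of the trip spent in that
    behavior; the combined score is their direct sum, deliberately not
    de-duplicating overlap (simultaneous tapping-while-handling counts
    in both terms, as described above).
    Returns (tapping_score, handling_score, combined_score).
    """
    tapping = tap_seconds / trip_seconds
    handling = handling_seconds / trip_seconds
    return tapping, handling, tapping + handling
```

For example, 60 s of tapping and 120 s of handling over a 600 s trip yields component scores of 0.1 and 0.2 and a combined score of 0.3; a weighted variant would simply scale each term before summing.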


The risk score can be determined on a per trip basis (e.g., based on vehicle location data, based on vehicle motion data, etc.), aggregated over multiple trips, and/or be otherwise determined and/or associated with at least a portion of a trip and/or multiple trips.


Additionally or alternatively, any or all of the methods, processes, and embodiments described in U.S. application Ser. No. 15/835,284, filed 7 Dec. 2017, which is incorporated herein in its entirety by this reference, can be implemented in S140.


4.5 Method: Determining an Output Based on the Risk Score S150


The method 100 can optionally include determining an output based on the risk score S150, which can function to update a driver assessment, produce an actionable output, and/or perform any other suitable function.


The output can optionally include determining and/or updating a report, such as a report sent to any or all of: an insurance company (e.g., in an event that an accident occurs, to reward a user for safe driving, to determine a user's insurance cost, etc.), the user (e.g., to facilitate safe driving, to alert the user to potentially risky behavior, etc.), a supervisory agency (e.g., a probation officer), another user (e.g., a family member, a passenger, etc.), and/or any other suitable user. Additionally or alternatively, the output can include any or all of: updating a user profile (e.g., at the primary client application), updating an aggregated risk score (e.g., associated with the user profile), transmitting a notification (e.g., to any of the individuals described above, in response to the risk score exceeding a predetermined threshold, etc.), restricting access to any or all of the user device, and/or producing any suitable output.


The method 100 can optionally further include, at any time during the method 100, classifying (e.g., verifying) a user as the driver, or alternatively a passenger. This is preferably performed based on any or all of the methods, processes, and embodiments described in U.S. application Ser. No. 16/022,120, filed 28 Jun. 2018, which is incorporated herein in its entirety by this reference, but can additionally or alternatively be determined in any other suitable way.


Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.


4.6 Method: Variations


In a first variation of the method 100 (e.g., as shown in FIG. 5), the method 100 includes any or all of: continuously collecting (e.g., while the user device is in a vehicle, while the user device is in a vehicle as determined by location information, etc.) motion information (e.g., from an accelerometer, from a gyroscope, from an orientation sensor, etc.) associated with the user device, performance information (e.g., processing activity, CPU activity, network transfer information, amount of free memory, etc.) associated with the user device, and optionally handling information (e.g., from a proximity sensor, from an optical sensor, etc.) associated with the user device, which is used to assess a user's interaction with the user device in subsequent processes of the method; collecting location information to determine one or more parameters of the vehicle (e.g., speed, location in space, etc.), which can be used in subsequent processes of the method (e.g., to eliminate data, to appropriately determine a risk score, etc.); classifying, based on any or all of the information collected in S110 and an algorithm (e.g., of a machine learning model, a phone movement algorithm, etc.), a classification of at least whether the device is in a stationary state or a non-stationary state, which functions to trigger one or more actions in S130; in an event that the device is in a stationary state: determining a set of tap events occurring in a set of data (e.g., data stream) corresponding to the stationary state, wherein the set of data can include any or all of the information collected in S110; determining a set of tap regions based on the set of tap events, wherein determining the set of tap regions includes comparing each of the set of tap events with a maximum temporal spacing threshold (e.g., 10 seconds) and aggregating tap events with a temporal spacing less than the threshold into a tap region; determining a duration of the tap region; eliminating tap regions from further processing which have a duration less than a predetermined threshold (e.g., 3 seconds); optionally determining a processing activity associated with the tap region (e.g., a normalized CPU curve) and moving tap regions to further processing which have a processing parameter above a predetermined threshold (e.g., at least a predetermined number of points above a predetermined threshold value); determining a parameter associated with the vehicle (e.g., speed); and eliminating tap regions from further processing which are associated with a vehicle parameter below a predetermined threshold (e.g., 11 mph), wherein S132 can additionally or alternatively include any other suitable processes; in an event that the device is in a non-stationary state: splitting data (e.g., a data stream), wherein the set of data can include any or all of the information collected in S110, corresponding to the non-stationary state into a set of sections based on a set of transitions (e.g., motion transitions, orientation transitions, etc.) occurring in the data; classifying each section (e.g., as a call, as a user interacting with/touching a screen of the user device, as unknown, etc.) based on orientation information associated with the user device; and optionally reporting and/or otherwise marking classifications associated with the riskiest behavior (e.g., user interacting with/touching a screen), wherein S134 can additionally include discarding sections associated with a vehicle parameter outside of a set of thresholds (e.g., below a predetermined speed threshold) and/or otherwise analyzing the data; determining a phone tapping score based on one or both of S132 and S134; determining a phone handling score based on S134; determining a risk score by combining (e.g., directly adding, adding in a weighted fashion, combining according to an algorithm, etc.) the phone tapping score and the phone handling score; and optionally determining an output (e.g., updating a user profile, updating an insurance model/report, etc.) based on the risk score.


Additionally or alternatively, the method 100 can include any other suitable processes performed in any suitable combination.


As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.

Claims
  • 1. A method for determining a risk score associated with a user's interaction with a user device while the user is driving a vehicle, the method comprising: receiving, at a client application executing on the user device, user device information, the user device information comprising: movement information, the movement information comprising: acceleration information from an accelerometer of the user device; orientation information from a gyroscope of the user device; and system performance information, the system performance information comprising: processing activity information from a processing subsystem of the user device; based, at least in part, on the user device information, determining a state of a set of states of the user device, wherein the set of states comprises a stationary state of the user device relative to the vehicle and a non-stationary state; in an event that the state is a stationary state, determining a set of tap parameters associated with a touch sensitive surface of the user device, wherein determining the set of tap parameters comprises determining a set of orientation parameters associated with the user device during the stationary state; in an event that the state is a non-stationary state, determining a classification of a set of classifications of the user device, wherein the set of classifications comprises a calling classification and a reading classification; collecting location information at the client application; determining a speed of the vehicle based on the location information; determining a subset of tap parameters based on the speed, and eliminating the subset of tap parameters from the set of tap parameters; determining a phone tapping score based on at least one of the set of tap parameters; based on at least one calling classification and at least one reading classification of the user device, determining a phone handling score associated with the non-stationary state, wherein the phone handling score is determined separately from the phone tapping score; determining a risk score based on both the phone tapping score and the phone handling score.
  • 2. The method of claim 1, wherein in response to determining that the user device is in a stationary state, the method further comprises determining that the user device is coupled to a mount.
  • 3. The method of claim 1, wherein determining the classification comprises splitting the user device information into sections of user device information based on a set of transitions, wherein the set of transitions are determined based on the movement information.
  • 4. The method of claim 1, wherein determining the risk score comprises adding the phone tapping score and the phone handling score.
  • 5. The method of claim 1, wherein the system performance information further comprises network transfer information.
  • 6. The method of claim 5, wherein the system performance information further comprises an amount of free memory associated with the user device.
  • 7. The method of claim 1, wherein determining the subset of tap parameters based on the speed, and eliminating the subset of tap parameters from the set of tap parameters comprises comparing the speed with a minimum speed threshold, wherein the subset of tap parameters are associated with a speed below the minimum speed threshold.
  • 8. The method of claim 7, wherein the minimum speed threshold is between 5 and 20 miles per hour.
  • 9. The method of claim 1, wherein the set of orientation parameters comprises a rate of deviation associated with the user device, the rate of deviation determined relative to gravity.
  • 10. The method of claim 9, further comprising determining a set of tap regions based on the set of tap parameters, wherein each of the set of tap regions comprises a subset of tap parameters having a duration of time between adjacent tap parameters below a predetermined threshold.
  • 11. The method of claim 10, wherein the predetermined threshold is between 5 and 15 seconds.
  • 12. The method of claim 10, further comprising eliminating a subset of tap regions from further processing, wherein each of the subset of tap regions is associated with a processing activity below a predetermined threshold.
  • 13. A method for determining a risk score associated with a user's interaction with a user device while the user is driving a vehicle, the method comprising: receiving, at a client application executing on the user device, user device information, the user device information comprising: movement information; and processing activity information from a processing subsystem of the user device; based, at least in part, on the user device information, determining a state of a set of states of the user device, wherein the set of states comprises a stationary state of the user device relative to the vehicle and a non-stationary state; in an event that the state is a stationary state, determining a set of tap parameters associated with a touch sensitive surface of the user device; based on the set of tap parameters, determining a plurality of tap regions comprising a set of temporal parameters; determining a phone tapping score based on the plurality of tap regions; in an event that the state is a non-stationary state, determining a classification of a set of classifications of the user device, wherein the set of classifications comprises a calling classification and a reading classification; based on at least one calling classification and at least one reading classification of the user device, determining a phone handling score associated with the non-stationary state, wherein the phone handling score is determined separately from the phone tapping score; and determining a risk score based on both the phone tapping score and the phone handling score.
  • 14. The method of claim 13, wherein the movement information comprises at least: acceleration information; andorientation information.
  • 15. The method of claim 13, wherein determining the plurality of tap regions from the set of tap parameters comprises eliminating at least a first subset of tap parameters, wherein the first subset of tap parameters is eliminated based on at least one of: a threshold of time between adjacent tap parameters; a speed threshold associated with the vehicle; and a processing activity occurring contemporaneously with the set of tap parameters.
  • 16. The method of claim 13, further comprising: collecting location information at the client application; determining a speed of the vehicle based on the location information; and eliminating a second subset of tap parameters from the tap regions based on the speed.
  • 17. The method of claim 13, further comprising: providing the risk score to a remote system.
  • 18. The method of claim 1, wherein the risk score satisfies a threshold value, wherein the method further comprises: providing the risk score to a remote system.
  • 19. The method of claim 1, wherein the tap score is associated with user tap interactions at the touch sensitive surface of the user device.
  • 20. The method of claim 13, wherein determining the set of tap parameters comprises determining a set of orientation parameters associated with the user device during the stationary state, the set of orientation parameters determined relative to gravity and comprising a deviation parameter.
  • 21. The method of claim 20, wherein the deviation parameter comprises a rate of orientation deviation associated with the user device.
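The claimed scoring flow can be illustrated with a short sketch. This is not the patent's implementation: the function and class names are hypothetical, and the numeric thresholds are arbitrary choices within the claimed ranges (a minimum speed threshold between 5 and 20 miles per hour per claim 8, and an inter-tap gap threshold between 5 and 15 seconds per claim 11); the additive combination of the two scores follows claim 4.

```python
# Illustrative sketch of the claimed flow: filter taps by vehicle speed
# (claim 7), group the survivors into tap regions by inter-tap gap
# (claim 10), and combine separately computed scores by addition (claim 4).
# All names and threshold values here are assumptions, not the patent's.

from dataclasses import dataclass
from typing import List

@dataclass
class Tap:
    t: float          # tap timestamp, in seconds
    speed_mph: float  # vehicle speed at the time of the tap

MIN_SPEED_MPH = 10.0    # chosen within the claimed 5-20 mph range (claim 8)
GAP_THRESHOLD_S = 10.0  # chosen within the claimed 5-15 s range (claim 11)

def filter_taps(taps: List[Tap]) -> List[Tap]:
    """Eliminate taps associated with a speed below the minimum threshold."""
    return [tap for tap in taps if tap.speed_mph >= MIN_SPEED_MPH]

def group_tap_regions(taps: List[Tap]) -> List[List[Tap]]:
    """Group taps into regions where the duration between adjacent taps
    is below the gap threshold; a larger gap starts a new region."""
    regions: List[List[Tap]] = []
    for tap in sorted(taps, key=lambda tap: tap.t):
        if regions and tap.t - regions[-1][-1].t < GAP_THRESHOLD_S:
            regions[-1].append(tap)
        else:
            regions.append([tap])
    return regions

def risk_score(phone_tapping_score: float, phone_handling_score: float) -> float:
    """Combine the two independently determined scores by addition."""
    return phone_tapping_score + phone_handling_score
```

For example, taps at t = 0 s and t = 3 s fall into one region (3 s gap < 10 s), while a tap at t = 30 s starts a second region; a tap recorded at 2 mph is dropped before grouping.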
US Referenced Citations (270)
Number Name Date Kind
5673039 Pietzsch et al. Sep 1997 A
5864305 Rosenquist Jan 1999 A
6049778 Walker et al. Apr 2000 A
6055477 McBurney et al. Apr 2000 A
6064318 Kirchner et al. May 2000 A
6178374 Moehlenkamp et al. Jan 2001 B1
6240364 Kerner et al. May 2001 B1
6826477 Ladetto et al. Nov 2004 B2
6941222 Yano et al. Sep 2005 B2
7065449 Brewster et al. Jun 2006 B2
7532196 Hinckley May 2009 B2
7668931 Parupudi et al. Feb 2010 B2
7801675 Currie et al. Sep 2010 B2
7881868 Greene et al. Feb 2011 B2
8054168 Mccormick et al. Nov 2011 B2
8264375 Devries Sep 2012 B2
8290480 Abramson et al. Oct 2012 B2
8326257 Shiu et al. Dec 2012 B2
8352189 Scott et al. Jan 2013 B2
8369876 Bachmann et al. Feb 2013 B2
8395542 Scherzinger et al. Mar 2013 B2
8489330 Ellanti et al. Jul 2013 B2
8498610 Staehlin Jul 2013 B2
8504035 Shin et al. Aug 2013 B2
8521193 Paek et al. Aug 2013 B2
8577703 Mcclellan et al. Nov 2013 B2
8634822 Silver et al. Jan 2014 B2
8731530 Breed et al. May 2014 B1
8738523 Sanchez et al. May 2014 B1
8754766 Oesterling et al. Jun 2014 B2
8912103 Heo et al. Dec 2014 B2
8971927 Zhou et al. Mar 2015 B2
8972103 Elwart et al. Mar 2015 B2
8989914 Nemat-Nasser et al. Mar 2015 B1
8996234 Tamari et al. Mar 2015 B1
9064412 Baur Jun 2015 B2
9121940 Psiaki et al. Sep 2015 B2
9141974 Jones et al. Sep 2015 B2
9185526 Guba et al. Nov 2015 B2
9188451 Magnusson et al. Nov 2015 B2
9221428 Kote et al. Dec 2015 B2
9222798 Curtis et al. Dec 2015 B2
9224293 Taylor Dec 2015 B2
9250090 Hille et al. Feb 2016 B2
9311211 Chatterjee et al. Apr 2016 B2
9311271 Wright Apr 2016 B2
9360323 Grokop Jun 2016 B2
9368027 Jang et al. Jun 2016 B2
9390625 Green et al. Jul 2016 B2
9414221 Simon et al. Aug 2016 B1
9423318 Liu et al. Aug 2016 B2
9449495 Call et al. Sep 2016 B1
9457754 Christensen et al. Oct 2016 B1
9467515 Penilla et al. Oct 2016 B1
9495601 Hansen Nov 2016 B2
9536428 Wasserman Jan 2017 B1
9558520 Peak et al. Jan 2017 B2
9564047 Wu Feb 2017 B2
9566981 Rebhan et al. Feb 2017 B2
9587952 Slusar Mar 2017 B1
9628975 Watkins et al. Apr 2017 B1
9632507 Korn Apr 2017 B1
9633487 Wright Apr 2017 B2
9645970 Boesch et al. May 2017 B2
9650007 Snyder et al. May 2017 B1
9674370 Kim et al. Jun 2017 B2
9689698 Wesselius et al. Jun 2017 B2
9716978 Sankaran Jul 2017 B2
9731713 Horii Aug 2017 B2
9773281 Hanson Sep 2017 B1
9794729 Meyers et al. Oct 2017 B2
9800716 Abramson et al. Oct 2017 B2
9801027 Levy et al. Oct 2017 B2
9805601 Fields et al. Oct 2017 B1
9818239 Pal et al. Nov 2017 B2
9842120 Siris et al. Dec 2017 B1
9852475 Konrardy et al. Dec 2017 B1
9854396 Himmelreich et al. Dec 2017 B2
9868394 Fields et al. Jan 2018 B1
9870649 Fields et al. Jan 2018 B1
9888392 Snyder et al. Feb 2018 B1
9900747 Park Feb 2018 B1
9932033 Slusar et al. Apr 2018 B2
9994218 Pal et al. Jun 2018 B2
10137889 Pal et al. Nov 2018 B2
10154382 Matus Dec 2018 B2
10157423 Fields et al. Dec 2018 B1
10176524 Brandmaier et al. Jan 2019 B1
10304329 Matus et al. May 2019 B2
10319037 Chan Jun 2019 B1
10324463 Konrardy et al. Jun 2019 B1
10386192 Konrardy et al. Aug 2019 B1
10510123 Konrardy et al. Dec 2019 B1
10533870 Slusar Jan 2020 B1
10559196 Matus et al. Feb 2020 B2
10631147 Matus Apr 2020 B2
10759441 Balakrishnan Sep 2020 B1
10824145 Konrardy et al. Nov 2020 B1
10848913 Pal et al. Nov 2020 B2
10872525 Xu et al. Dec 2020 B2
10885592 Hsu-Hoffman et al. Jan 2021 B2
10983523 Sim Apr 2021 B2
11069157 Matus Jul 2021 B2
11170446 Thurber Nov 2021 B1
11205232 Chan Dec 2021 B1
20020161517 Yano et al. Oct 2002 A1
20020161518 Petzold et al. Oct 2002 A1
20030018430 Ladetto et al. Jan 2003 A1
20040046335 Knox et al. Mar 2004 A1
20040082311 Shiu et al. Apr 2004 A1
20050080555 Parupudi et al. Apr 2005 A1
20050093868 Hinckley May 2005 A1
20050197773 Brewster et al. Sep 2005 A1
20050288856 Ohki et al. Dec 2005 A1
20060153198 Chadha Jul 2006 A1
20070005228 Sutardja Jan 2007 A1
20070208494 Chapman et al. Sep 2007 A1
20070208501 Downs et al. Sep 2007 A1
20080033776 Marchese Feb 2008 A1
20080103907 Maislos et al. May 2008 A1
20080243439 Runkle et al. Oct 2008 A1
20080312832 Greene et al. Dec 2008 A1
20090024419 McClellan et al. Jan 2009 A1
20100030582 Rippel et al. Feb 2010 A1
20100056175 Bachmann et al. Mar 2010 A1
20100100398 Auker et al. Apr 2010 A1
20100106406 Hille et al. Apr 2010 A1
20100131304 Collopy et al. May 2010 A1
20100198517 Scott et al. Aug 2010 A1
20100219944 Mc et al. Sep 2010 A1
20100273508 Parata et al. Oct 2010 A1
20100299021 Jalili Nov 2010 A1
20100332131 Horvitz et al. Dec 2010 A1
20110077028 Wilkes et al. Mar 2011 A1
20110124311 Staehlin May 2011 A1
20110161116 Peak et al. Jun 2011 A1
20110224898 Scofield et al. Sep 2011 A1
20110246156 Zecha et al. Oct 2011 A1
20110294520 Zhou et al. Dec 2011 A1
20120050095 Scherzinger et al. Mar 2012 A1
20120065871 Deshpande et al. Mar 2012 A1
20120066053 Agarwal Mar 2012 A1
20120071151 Abramson et al. Mar 2012 A1
20120089328 Ellanti et al. Apr 2012 A1
20120109517 Watanabe May 2012 A1
20120129545 Hodis et al. May 2012 A1
20120136529 Curtis et al. May 2012 A1
20120136567 Wang et al. May 2012 A1
20120149400 Paek et al. Jun 2012 A1
20120158820 Bai et al. Jun 2012 A1
20120197587 Luk et al. Aug 2012 A1
20120226421 Kote et al. Sep 2012 A1
20120245963 Peak et al. Sep 2012 A1
20130006469 Green et al. Jan 2013 A1
20130041521 Basir et al. Feb 2013 A1
20130052614 Mollicone et al. Feb 2013 A1
20130069802 Foghel et al. Mar 2013 A1
20130096731 Tamari et al. Apr 2013 A1
20130124074 Horvitz et al. May 2013 A1
20130130639 Oesterling et al. May 2013 A1
20130144461 Ricci Jun 2013 A1
20130204515 Emura Aug 2013 A1
20130211618 Iachini Aug 2013 A1
20130282264 Bastiaensen et al. Oct 2013 A1
20130289819 Hassib et al. Oct 2013 A1
20130302758 Wright Nov 2013 A1
20130316737 Guba et al. Nov 2013 A1
20130317860 Schumann Nov 2013 A1
20130325517 Berg Dec 2013 A1
20130332357 Green et al. Dec 2013 A1
20130344856 Silver et al. Dec 2013 A1
20140038640 Wesselius et al. Feb 2014 A1
20140046896 Potter Feb 2014 A1
20140074402 Hassib et al. Mar 2014 A1
20140081670 Lim et al. Mar 2014 A1
20140187219 Yang et al. Jul 2014 A1
20140188638 Jones et al. Jul 2014 A1
20140197967 Modica et al. Jul 2014 A1
20140207497 Collins et al. Jul 2014 A1
20140232592 Psiaki et al. Aug 2014 A1
20140244150 Boesch et al. Aug 2014 A1
20140244156 Magnusson et al. Aug 2014 A1
20140288765 Elwart et al. Sep 2014 A1
20140288828 Werner et al. Sep 2014 A1
20140358321 Ibrahim Dec 2014 A1
20140358394 Picciotti Dec 2014 A1
20150025917 Stempora Jan 2015 A1
20150046197 Peng et al. Feb 2015 A1
20150084757 Annibale et al. Mar 2015 A1
20150087264 Goyal Mar 2015 A1
20150097703 Baur Apr 2015 A1
20150112731 Binion et al. Apr 2015 A1
20150187146 Chen et al. Jul 2015 A1
20150229666 Foster et al. Aug 2015 A1
20150233718 Grokop Aug 2015 A1
20150246654 Tadic et al. Sep 2015 A1
20150327034 Abramson et al. Nov 2015 A1
20150329121 Lim et al. Nov 2015 A1
20150332407 Wilson et al. Nov 2015 A1
20150334545 Maier et al. Nov 2015 A1
20160021238 Abramson et al. Jan 2016 A1
20160033366 Liu et al. Feb 2016 A1
20160042767 Araya et al. Feb 2016 A1
20160048399 Shaw Feb 2016 A1
20160059855 Rebhan et al. Mar 2016 A1
20160068156 Horii Mar 2016 A1
20160086285 Jordan et al. Mar 2016 A1
20160129913 Boesen May 2016 A1
20160133130 Grimm et al. May 2016 A1
20160150070 Goren et al. May 2016 A1
20160171521 Ramirez et al. Jun 2016 A1
20160174049 Levy et al. Jun 2016 A1
20160189303 Fuchs Jun 2016 A1
20160189442 Wright Jun 2016 A1
20160225263 Salentiny et al. Aug 2016 A1
20160232785 Wang Aug 2016 A1
20160269852 Meyers et al. Sep 2016 A1
20160272140 Kim et al. Sep 2016 A1
20160282156 Ott et al. Sep 2016 A1
20160325756 Cordova et al. Nov 2016 A1
20160328893 Cordova et al. Nov 2016 A1
20160339910 Jonasson et al. Nov 2016 A1
20160358315 Zhou et al. Dec 2016 A1
20160364983 Downs et al. Dec 2016 A1
20160375908 Biemer Dec 2016 A1
20160379310 Madigan et al. Dec 2016 A1
20160379485 Anastassov et al. Dec 2016 A1
20160381505 Sankaran Dec 2016 A1
20170034656 Wang et al. Feb 2017 A1
20170053461 Pal Feb 2017 A1
20170057518 Finegold Mar 2017 A1
20170097243 Ricci Apr 2017 A1
20170103342 Rajani et al. Apr 2017 A1
20170103588 Rajani et al. Apr 2017 A1
20170105098 Cordova Apr 2017 A1
20170115125 Outwater et al. Apr 2017 A1
20170116792 Jelinek et al. Apr 2017 A1
20170124660 Srivastava May 2017 A1
20170126810 Kentley et al. May 2017 A1
20170138737 Cordova et al. May 2017 A1
20170164158 Watkins et al. Jun 2017 A1
20170178416 Barreira Avegliano et al. Jun 2017 A1
20170178422 Wright Jun 2017 A1
20170178424 Wright Jun 2017 A1
20170210323 Cordova et al. Jul 2017 A1
20170211939 Cordova et al. Jul 2017 A1
20170232963 Pal et al. Aug 2017 A1
20170234689 Gibson et al. Aug 2017 A1
20170241791 Madigan et al. Aug 2017 A1
20170279947 Rajakarunanayake et al. Sep 2017 A1
20170289754 Anderson et al. Oct 2017 A1
20170369055 Saigusa et al. Dec 2017 A1
20170371608 Wasserman Dec 2017 A1
20180061230 Madigan et al. Mar 2018 A1
20180075309 Sathyanarayana et al. Mar 2018 A1
20180090001 Fletcher Mar 2018 A1
20180154908 Chen Jun 2018 A1
20180158329 Benhammou et al. Jun 2018 A1
20180165531 Sathyanarayana et al. Jun 2018 A1
20180174446 Wang Jun 2018 A1
20180276485 Heck et al. Sep 2018 A1
20180308128 Deluca et al. Oct 2018 A1
20190007511 Rodriguez et al. Jan 2019 A1
20190035266 Riess et al. Jan 2019 A1
20190281416 Watkins et al. Sep 2019 A1
20190295133 Hirtenstein et al. Sep 2019 A1
20200216078 Katz Jul 2020 A1
20210309261 Rosales et al. Oct 2021 A1
20220005291 Konrardy et al. Jan 2022 A1
20220180448 Konrardy et al. Jun 2022 A1
Foreign Referenced Citations (22)
Number Date Country
108269406 Jul 2018 CN
108431839 Aug 2018 CN
108819841 Nov 2018 CN
3439000 Apr 1986 DE
102008008555 Aug 2008 DE
102017221643 Jul 2018 DE
0534892 Mar 1993 EP
3022705 May 2016 EP
3638542 Jan 2022 EP
2492369 Apr 2014 GB
2000009482 Jan 2000 JP
2002215236 Jul 2002 JP
2005098904 Apr 2005 JP
2007212265 Aug 2007 JP
2009133702 Jun 2009 JP
2011133240 Jul 2011 JP
2013195143 Sep 2013 JP
2013200223 Oct 2013 JP
20130106106 Sep 2013 KR
2004085220 Oct 2004 WO
2006000166 Jan 2006 WO
2015122729 Aug 2015 WO
Non-Patent Literature Citations (11)
Entry
Mohan, Prashanth, et al., "Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones", SenSys '08, Nov. 5-7, 2008, Raleigh, North Carolina.
“Credit Karma launches Karma Drive, a simple way to qualify for an auto insurance discount in as few as 30 days”, Dec. 17, 2020, Credit Karma Press Room, https://www.creditkarma.com/about/releases/credit-karma-launches-karma-drive-a-simple-way-to-qualify-for-an-auto-insurance-discount-in-as-few-as-30-days.
Chu, Hon Lung, "In-Vehicle Driver Detection Using Mobile Phone Sensors", https://ece.duke.edu/sites/ece.duke.edu/files/GWDD2011_Chu.pdf, 2011 (Year: 2011).
Giuseppe, Guido, et al., "Using Smartphones As a Tool To Capture Road Traffic Attributes", University of Calabria, Department of Civil Engineering, via P. Bucci, 87036 Rende (CS), Italy, Applied Mechanics and Materials, vol. 432 (2013), Trans Tech Publications, Switzerland, pp. 513-519.
Jiangqiang, et al., "Driver Pre-Accident Operation Mode Study Based on Vehicle-Vehicle Traffic Accidents", 2011, Publisher: IEEE.
Kalra, Nidhi, "Analyzing Driving and Road Events via Smartphone", International Journal of Computer Applications (0975-8887), vol. 98, No. 12, Jul. 2014, pp. 5-9.
Liang-Bi, et al., "An Implementation of Deep Learning based IoV System for Traffic Accident Collisions Detection with an Emergency Alert Mechanism", 2018, Publisher: IEEE.
Pattara-Atikom, W., et al., “Estimating Road Traffic Congestion using Vehicle Velocity”, 2006, Publisher: IEEE.
Short, Jeffrey , et al., “Identifying Autonomous Vehicle Technology Impacts on the Trucking Industry”, http://atri-online.org/wp-content/uploads/2016/11/ATRI-Autonomous-Vehicle-Impacts-11-2016.pdf (Year: 2016).
Tathagata, Das, et al., "PRISM: Platform for Remote Sensing using Smartphones", In Proc. MobiSys '10, Jun. 15-18, 2010, San Francisco, USA, pp. 63-76.
Walter, D., et al., “Novel Environmental Features for Robust Multisensor Navigation”, Proceedings of the 26th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2013) Sep. 16-20, 2013 Nashville Convention Center, Nashville, Tennessee.
Related Publications (1)
Number Date Country
20210163016 A1 Jun 2021 US