Not Applicable.
This disclosure relates generally to systems and methods for collecting data related to a driver, analyzing the collected data to determine the driver's driving behavior, and triggering actions in response to that behavior. More particularly, but not by way of limitation, this disclosure relates to using sensors and cameras to capture real-world image data, making determinations in real-time pertaining to the captured data, and triggering actions based upon such determinations.
Law enforcement officers (“LEOs”) have a higher risk of injury or death than workers in most other occupations. Among the more dangerous activities LEOs engage in is driving. For example, according to the United States Bureau of Labor Statistics, 14% of all nonfatal injuries and 41% of all fatal injuries to LEOs in 2014 were caused by transportation incidents. See Bureau of Labor Statistics, https://www.bls.gov/iif/oshwc/cfoi/police-officers-2014-chart5-data.htm. Vehicle accidents involving LEOs can occur for the same reasons as accidents involving other drivers, such as distracted driving, speeding, reckless or aggressive driving, weather, running red lights and stop signs, fatigue, or carelessness.
Additionally, several causes of vehicle accidents are unique to the functions of LEOs, such as high-speed pursuits, or responding to emergencies. LEOs have various means of technology at their disposal to perform their tasks. However, while technology has provided law enforcement officers powerful tools to perform their jobs, it has also added a level of complexity for officers on patrol. There are many distractions competing for an officer's attention. “Workload” refers to the tasks which an LEO must perform in a short amount of time. For example, an LEO may be driving, observing driving behavior of others, listening to a dispatch radio, talking on the radio to other LEOs, reviewing a Be on Look Out (“BOLO”) alert, and manipulating certain auxiliary vehicle controls such as a light bar or spotlight. LEOs frequently also have vehicle-mounted computers which they may view or be required to type upon.
High-speed pursuits introduce the highest risk for death or injury to LEOs, the suspect(s) and the public. High-speed pursuits present a unique and complicated set of risks which must be balanced against the objectives of the pursuit (e.g. suspect apprehension). Specifically, high-speed pursuits place the LEO, suspects, victims, bystanders, and the community at greater risk of harm from vehicle accidents. LEOs must process factors including, but not limited to, the LEO's own driving skills, traffic, weather, terrain, road conditions, proximity to other vehicles and bystanders, proximity of other LEOs available for backup, avenues for escape by the suspect, and the potential for apprehension (i.e. the officer must consider the likelihood of apprehension in the decision to continue a chase). The LEO must weigh these factors to determine whether a suspect can be pursued without excessive risk. Many law enforcement agencies have guidelines, training and officer education intended to address these risks.
Many high-speed pursuits are initiated in response to routine traffic stops. For example, an LEO may observe a minor traffic offense such as running a red light. When the LEO turns on the police car's emergency lights or siren to signal to the suspect to pull over, the suspect may flee. Alternatively, the suspect may initially pull over and wait for the officer to exit his or her vehicle before fleeing. In either case, the LEO is frequently unaware of the suspect's identity or the level of risk the suspect poses to the public. For example, a suspect wanted for a violent crime may present a greater risk to the public if he is not apprehended. In such a case, the interests of law enforcement (i.e. suspect apprehension) might militate in favor of pursuing the suspect. Where the suspect is not suspected of prior criminal conduct, and the suspect cannot be pursued safely, some law enforcement policies may advise the pursuing LEO to “give up” with the hope that the suspect will cease driving recklessly once he or she recognizes the pursuit has terminated. Many law enforcement agencies have guidelines, training and officer education intended to address these risks and provide guidance on when, or if, an LEO should cease a pursuit.
High-speed pursuits are times of intense workload for an LEO. Furthermore, during a pursuit, a person's stress response (i.e. fight-or-flight response) causes the adrenal gland to release adrenaline, the pupils to dilate, and blood vessels to constrict. This physiological change causes heightened focus on the fleeing suspect, but reduces the LEO's peripheral awareness of the environment around the LEO. This physiological condition is referred to as “tunnel vision.” The dramatic increase in the LEO's workload simultaneously occurring with the LEO's tunnel vision can cause the LEO to overlook certain risk factors associated with a pursuit and result in excessive risk to the LEO or the public.
Therefore, systems, apparatuses, and methods are discussed to monitor an LEO's driving behavior, monitor the driving behavior of other drivers (including a suspect), and trigger various events based upon the behavior of each. Also discussed are systems, apparatuses, and methods of a tiered analysis of the LEO's driving behavior that considers the dichotomy between the interests of law enforcement and the interests of public safety and the safety of the LEO.
In view of the aforementioned problems and trends, embodiments of the present invention provide systems and methods for detecting, monitoring, evaluating and triggering actions in response to the driving behavior of a driver and/or a third party.
According to an aspect of the invention, a system includes a vehicle, the vehicle including at least one camera device configured to capture image data, at least one sensor configured to capture vehicle telemetry data, and a microprocessor linked to the at least one camera device and the at least one sensor, wherein the microprocessor is configured with instructions to detect the occurrence of at least one designated event.
According to another aspect of the invention, a method includes capturing image data using a camera device mounted to a vehicle, sampling telemetry data from at least one sensor mounted to the vehicle, and detecting the occurrence of at least one designated event from the image data and/or the telemetry data.
According to another aspect of the invention, a method includes capturing image data using a camera device, wherein the camera device is mounted to a first vehicle; sampling telemetry data from at least one sensor, wherein the at least one sensor is mounted to the first vehicle; determining an applicable speed limit; determining a first speed of the first vehicle using at least one of (i) the image data captured by the camera mounted to the first vehicle, and (ii) the telemetry data sampled from the at least one sensor mounted on the first vehicle; calculating a relative speed of a second vehicle to the first vehicle using the image data captured by the camera device mounted to the first vehicle; calculating a second speed of the second vehicle by adding together (i) the relative speed of the second vehicle to the first vehicle and (ii) the first speed of the first vehicle; and comparing the second speed of the second vehicle with the applicable speed limit.
According to another aspect of the invention, a system includes a first vehicle, the first vehicle including at least one camera device configured to capture image data, at least one sensor configured to capture vehicle telemetry data, and a microprocessor linked to the at least one camera device and the at least one sensor, wherein the microprocessor is configured with instructions to: determine an applicable speed limit; determine a first speed of the first vehicle using at least one of (i) image data captured by the camera device, and (ii) vehicle telemetry data captured by the at least one sensor; calculate a relative speed of a second vehicle to the first vehicle using the image data captured by the camera device; calculate a second speed of the second vehicle by adding together (i) the relative speed of the second vehicle to the first vehicle and (ii) the first speed of the first vehicle; and compare the second speed of the second vehicle with the applicable speed limit.
Other aspects of the embodiments described herein will become apparent from the following description and the accompanying drawings, illustrating the principles of the embodiments by way of example only.
The following figures form part of the present specification and are included to further demonstrate certain aspects of the present claimed subject matter, and should not be used to limit or define the present claimed subject matter. The present claimed subject matter may be better understood by reference to one or more of these drawings in combination with the description of embodiments presented herein. Consequently, a more complete understanding of the present embodiments and further features and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numerals may identify like elements, wherein:
Certain terms are used throughout the following description and claims to refer to particular system components and configurations. As one skilled in the art will appreciate, the same component may be referred to by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” (and the like) and “comprising” (and the like) are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple,” “coupled,” or “linked” is intended to mean either an indirect or direct electrical, mechanical, or wireless connection. Thus, if a first device couples to or is linked to a second device, that connection may be through a direct electrical, mechanical, or wireless connection, or through an indirect electrical, mechanical, or wireless connection via other devices and connections.
As used throughout this disclosure the term “computer” encompasses special purpose microprocessor-based devices such as a digital video surveillance system primarily configured for executing a limited number of applications, and general purpose computers such as laptops, workstations, or servers which may be configured by a user to run any number of off the shelf or specially designed software applications. Computer systems and computer devices will generally interact in the same way with elements and aspects of disclosed embodiments. This disclosure also refers to memory or storage devices and storage drives interchangeably. In general, memory or a storage device/drive represents a medium accessible by a computer (via wired or wireless connection) to store data and computer program instructions. It will also be appreciated that use of the term “microprocessor” in this disclosure encompasses one or more processors.
The terms “video data” and “visual data” refer to still image data, moving image data, or both still and moving image data, as traditionally understood. Further, the terms “video data” and “visual data” refer to such image data alone, i.e., without audio data and without metadata. The term “image data” (in contrast to “still image data” and “moving image data”) encompasses not only video or visual data but also audio data and/or metadata. That is, image data may include visual or video data, audio data, metadata, or any combination of these three. This image data may be compressed using industry standard compression technology (e.g., Motion Picture Expert Group (MPEG) standards, Audio Video Interleave (AVI), etc.) or another proprietary compression or storage format. The terms “camera,” “camera device,” and the like are understood to encompass devices configured to record or capture visual/video data or image data. Such devices may also be referred to as video recording devices, image capture devices, or the like. Metadata may be included in the files containing the video (or audio and video) data or in separate, associated data files, that may be configured in a structured text format such as eXtensible Markup Language (XML).
The term “metadata” refers to information associated with the recording of video (or audio and video) data, or information included in the recording of image data, and metadata may contain information describing attributes associated with one or more acts of actual recording of video data, audio and video data, or image data. That is, the metadata may describe who (e.g., Officer ID) or what (e.g., automatic trigger) initiated or performed the recording. The metadata may also describe where the recording was made. Metadata may also include telemetry or other types of data. For example, location may be obtained using global positioning system (GPS) information or other telemetry information. The metadata may also describe why the recording was made (e.g., event tag describing the nature of the subject matter recorded). The metadata may also describe when the recording was made, using timestamp information obtained in association with GPS information or from an internal clock, for example. Metadata may also include information relating to the device(s) used to capture or process information (e.g. a unit serial number). From these types of metadata, circumstances that prompted the recording may be inferred and may provide additional information about the recorded information. This metadata may include useful information to correlate recordings from multiple distinct recording systems as disclosed herein. This type of correlation information may assist in many different functions (e.g., query, data retention, chain of custody, and so on). The metadata may also include additional information as described herein, such as: location and size of an object of interest on screen, object's color and confidence level, vehicle make and confidence level, vehicle type and confidence level, license plate number/state (e.g., which of the 50 US states) and confidence level, and number of pedestrians. The terms “license plate number,” “license plate character,” and the like are all understood to encompass both numbers and other characters on a license plate.
The terms “cloud” and “cloud storage” are used interchangeably in this disclosure to describe that data is stored in an area generally accessible across a communication network (which may or may not be the Internet). A “cloud” may refer to a public cloud, private cloud, or combination of a public and private cloud (e.g., hybrid cloud). The term “public cloud” generally refers to a cloud storage area that is maintained by an unrelated third party but still has certain security measures in place to ensure that access is only allowed to authorized users. The term “private cloud” generally refers to a cloud storage area that is maintained by a related entity or that is maintained on physical computer resources that are separate from any unrelated users.
The term “global” refers to worldwide and the term “global access” refers to being available or accessible from anywhere in the world via conventional communication means (e.g. the communication network described herein).
The term “telemetry data” refers to the data sampled from sensors which measure the parameters of a vehicle or its system and components.
The term “vehicle dynamics” refers to how the performance of the vehicle is controlled to accelerate, brake, or steer the vehicle.
The term “aggressive driving” refers to the behavior of a driver which tends to endanger the driver, other persons, or property.
The foregoing description of the figures is provided for the convenience of the reader. It should be understood, however, that the embodiments are not limited to the precise arrangements and configurations shown in the figures. Also, the figures are not necessarily drawn to scale, and certain features may be shown exaggerated in scale or in generalized or schematic form, in the interest of clarity and conciseness. The same or similar parts may be marked with the same or similar reference numerals.
While various embodiments are described herein, it should be appreciated that the present invention encompasses many inventive concepts that may be embodied in a wide variety of contexts. The following detailed description of exemplary embodiments, read in conjunction with the accompanying drawings, is merely illustrative and is not to be taken as limiting the scope of the invention, as it would be impossible or impractical to include all of the possible embodiments and contexts of the invention in this disclosure. Upon reading this disclosure, many alternative embodiments of the present invention will be apparent to persons of ordinary skill in the art. The scope of the invention is defined by the appended claims and equivalents thereof.
Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are necessarily described for each embodiment disclosed in this specification. In the development of any such actual embodiment, numerous implementation-specific decisions may need to be made to achieve the design-specific goals, which may vary from one implementation to another. It will be appreciated that such a development effort, while possibly complex and time-consuming, would nevertheless be a routine undertaking for persons of ordinary skill in the art having the benefit of this disclosure. It will also be appreciated that the parts and component dimensions of the embodiments disclosed herein may not be drawn to scale.
Although the content of this disclosure is presented largely in the context of law enforcement, it should be understood that this content is also applicable outside of that context. For example, this content is also applicable in the consumer and commercial context, e.g., detecting driver behavior with regard to consumer and commercial vehicles.
The vehicle 10 computer 12 is configured to access one or more databases (onboard the vehicle 10 or remote via the communication network 18) containing a repository with detailed information and data regarding existing vehicles, structures, objects, people, etc. For example, an accessible database may be populated with data regarding parameters, shapes, and other information relating to particular individuals, states and cities, vehicle identification parameters/characteristics (makes, models, colors, etc.), weapons data, etc. The database(s) can be updated as often as necessary. It will be appreciated that for law enforcement applications, the computer 12 may have access to databases and data repositories that are not available to the general public. In some embodiments, the police station 14 memory storage bank 17 houses the database accessed by the vehicle 10 computer 12.
In addition to receiving regular communications via the receiver 13, the vehicle computer 12 microprocessor is configured with specific instructions to be carried out upon receipt of certain communications, such as Amber alerts, Silver alerts, etc. (via the communication network 18) from the police station 14 or other designated agencies or systems, such as the FBI, DEA, ATF, etc. For example, law enforcement agencies often issue Be on Look Out (“BOLO”) alerts to bring to the attention of law enforcement officers key information regarding an occurrence or activity of high importance. Such alerts typically include a description with some known details and facts relating to a suspect or an item or event of interest. The officer who receives the BOLO alert is expected to keep an eye out for the suspect or item of interest by continually or periodically scanning his environment for the particular descriptive details of the suspect/item identified in the alert.
The present disclosure provides the officer the means to leverage technology to perform this continual monitoring task. Upon receipt of such alerts, the computer 12 microprocessor activates the camera device 16 (if not already activated) to start collecting information and processing the captured image data to determine whether the specific content identified in the alert is present in the captured image data. The computer 12 microprocessor is configured to search the captured image data for the presence of the designated content according to the received alert or communication. For example, the designated content may include information such as: a geographical parameter (e.g. GPS coordinate), location data (street designation, historic site, monument, etc.), vehicle type (SUV, truck, sedan, motorcycle, etc.), license plate number(s), particular objects (traffic lights, street signs, etc.), particular shapes (human, animal, etc.), or a person, e.g., with particular characteristics.
When an object enters the scene, the computer 12 microprocessor performs analytics on the captured image data using an analytics engine that references the accessed database(s), and the analytics include creating snapshots, character scanning, optical character recognition (OCR), pixel scanning, and shape/pattern recognition to analyze and search the captured data for the presence of images matching the designated content. The analytics software may also analyze a scene, tracking identified objects of interest, for example, a police officer's movements. For example, if an officer falls and becomes horizontal for a predetermined amount of time, the microprocessor can send an alert to police dispatch through the communication network 18 so that dispatch can call via radio or cell phone to check on the fallen officer. If there is no response from the fallen officer in a predetermined amount of time, dispatch can send support to assist in case of a serious issue. The shape/pattern detection analytics may also be used to detect objects already in or coming into the scene, such as a person walking or running, and also to detect the direction of travel of such objects. The analytics may also be used to detect objects or people approaching the officer based on changes in the measured distance between the officer and the person/object, and based on this analysis, the microprocessor can send an alert to the officer on the scene (e.g., via radio, 3G/4G wireless networks, or Body Worn Camera (BWC) speaker over Wi-Fi or Bluetooth®). Additional features that may be provided by the analytics engine include automatically marking image data if a crash was detected in the background of the scene, such as a vehicle rolling or flipping. Yet another aspect of the shape/pattern detection features provided by the analytics engine is the determination of a weapon threat. The scene can be scanned for the detection of objects such as potential weapon types like guns, knives, etc., being held in a person's hand, or for various threatening stances by a potential adversary, such as when the adversary is standing, squatting sideways, running, etc.
The detection/analytics capabilities of the disclosed embodiments also include the ability to scan the entire or a specified area of a scene for any movement. For example, if an officer is parked somewhere filling out a report and looking down, and the system detects movement, an alert sound or a message on a display (e.g. the vehicle display) can notify the officer to be aware. With multiple viewing angles, the alerts can also notify the officer which direction the movement came from by using distinct sounds for each direction such as front, rear, right side or left side, voice notification of the direction and/or notification messages on the display. The system can also notify the officer whether it is a vehicle, person, or an unknown object and whether the object is moving fast or in a threatening manner. Such embodiments may incorporate the camera/microphone unit 16 described below with respect to
In some embodiments, once the analytics engine detects a match or near match of the designated content in the captured image data, the analytics engine proceeds to another step of further analyzing the data containing the designated content to detect the presence of one or more designated details or attributes of or associated with the designated content. For example, a communication may be received by the receiver 13 (such as a BOLO, Amber, or Silver alert), designating the content to search for as a car, and the attributes as a silver Audi A6 sedan. In this case, the analytics engine will scan and search the captured image data for a match of the descriptor, i.e., the car. If the analytics engine detects the presence of a car in the captured image data, the data is then further analyzed to determine if the designated attributes (i.e., vehicle make—Audi, vehicle model—A6, color—silver, vehicle type—sedan) are present in the data. Other possible designated attributes that may be provided in a communication or alert include, for example: state identifiers (e.g., license plate numbers, characters, emblems, mottos, etc.). In some embodiments, the computer 12 microprocessor continually writes all metadata/attribute information associated with the detected designated content to a text or XML file. It will be appreciated that the designated content descriptors and associated designated attributes may comprise an unlimited variety of items and descriptors, as exist in the real world. The embodiments of this disclosure are not to be limited to any specific content or attribute of such content.
In some embodiments, the analysis further includes the determination of a confidence level or criterion for the designated attribute(s). Modern processors provide the ability for high-speed analysis of vast amounts of data. Physical dimensions and parameters of real-world objects represent factual data that can be mathematically measured, analyzed, and compared. For example, the length, width, and height of a vehicle of a given make and model represent factual data. In some embodiments, the analytics engine analysis of the collected data entails a breakdown of the captured images into data points or pixels that are then analyzed to determine respective spacing and dimensions, which can then be compared to the real-world parameters in the database library of existing items. For instance, continuing with the silver Audi A6 example, once the analytics engine detects a vehicle in the image data, it then performs further analysis to detect the color silver based on a pixel hue analysis, and it may then continue the analysis to mathematically define the dimensions of the detected vehicle for comparison against the actual Audi A6's dimension parameters stored in the database. If a match or near match is found between the dimensions of the detected car and one of the A6 models in the library, the engine then calculates a probability factor representing a confidence level for the match and compares that to a criterion for equivalence or matching of the detected object and the object stored in the database. If, for example, the criterion for equivalence has been set (e.g., by a user via the software) at 95% or greater for vehicle data matching parameters and the calculated probability factor equaled or exceeded 95%, the analytics engine would determine a positive result and proceed with triggering an action as described for the disclosed embodiments.
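By way of non-limiting illustration, the following sketch shows one way the dimensional comparison and criterion of equivalence described above could be implemented. The library entry, the approximate Audi A6 dimensions, the function names, and the error-based confidence calculation are hypothetical choices made for illustration only and do not represent any specific embodiment.

```python
# Hypothetical library entry with approximate dimensions (millimeters).
VEHICLE_LIBRARY = {
    "audi_a6_sedan": {"length_mm": 4939, "width_mm": 1886, "height_mm": 1457},
}

def match_confidence(measured, reference):
    """Return a 0-1 confidence factor based on average relative dimensional error."""
    errors = [
        abs(measured[key] - reference[key]) / reference[key]
        for key in ("length_mm", "width_mm", "height_mm")
    ]
    return max(0.0, 1.0 - sum(errors) / len(errors))

def meets_criterion(measured, model_key, criterion=0.95):
    """Compare the confidence factor against the configured criterion of equivalence."""
    confidence = match_confidence(measured, VEHICLE_LIBRARY[model_key])
    return confidence >= criterion, confidence

# Dimensions estimated from the captured image data (hypothetical values).
detected = {"length_mm": 4902, "width_mm": 1901, "height_mm": 1449}
matched, confidence = meets_criterion(detected, "audi_a6_sedan", criterion=0.95)
if matched:
    print("positive result: trigger an action", round(confidence, 3))
```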
Different criteria for equivalence can be set for different items. For example, the criterion of equivalence for an affirmative match result for a license plate number may be set at 55% or better, to allow for instances when only a partial plate number is decipherable from the captured image. In the case of attributes for which there are no standard items (for comparison against the detected item for purposes of determining equivalence) stored in the database, the analytics engine can bypass this database query and perform a character-recognition analysis. However, for law enforcement applications, the database available to officers will likely contain all available information relating to data such as a license plate number. In some embodiments, the criterion of equivalence for an affirmative match result may be based on a probability factor from a combination of analyzed attributes.
In some embodiments, the analytics to determine a confidence level or criterion for the designated attribute(s) are based on a deep learning algorithm. The computer 12 may be configured with software providing a deep learning analytics engine. Defined shapes and movement rules, multiple images of vehicle types, make, model, etc., can be input and stored in the deep learning engine at different viewing angles, distances, various lighting conditions, etc. The captured image data can be compared against the engine contents to provide a data output with a percentage of confidence of accuracy for its attributes to trigger an action as described herein. The analytics and rules can be applied to any object (e.g., pedestrians, animals, street signs, etc.).
In some embodiments, the analytics for recognition and detection of the designated content is distributed among the vehicle 10 computer 12 and one or more remote computers (e.g. the server 15 in the police station 14). In such embodiments, the server 15 may be configured to generate a neural net object model for the vehicle 10 computer 12. The vehicle 10 computer 12 can also be configured to use a separate neural network to instantly achieve multiple object recognition as described herein. The vehicle 10 computer 12 and the remote computer(s) can communicate and exchange data via the communication network 18. In yet other embodiments, the vehicle 10 computer 12 and/or the remote computer(s) (e.g. server 15) may be configured with artificial intelligence (AI) software providing the system the ability to learn, to further increase the accuracy of object recognition. In some embodiments, the analytics engine is configured to detect unknown objects (e.g. a modified vehicle). This data can be locally stored for later upload or immediately transmitted to another location (e.g. to server 15) for verification and/or classification to aid in the training of detection of objects by the detection engine. With AI implementations, this type of classification can be done in or near real-time on the edge device such as an in-car video unit or a wearable device such as a body worn camera. In this description, an “edge” device generally refers to a device used or located at a point of interest. Thus, for the disclosed embodiments, an edge device is considered an on-scene device. It will be appreciated by those skilled in the art that embodiments of this disclosure may be implemented using conventional software platforms and coding configured to perform the techniques as disclosed herein.
Once the analytics engine determines that the designated attribute(s) is/are present in the captured image data, the microprocessor triggers an action. The triggered action may include:
A benefit of the functionality provided by the disclosed embodiments is that the camera device and detection/analytics engine may find an object or person of interest that a police officer did not notice. For example, a police officer may be driving down the street when a BOLO is issued for the silver Audi sedan. The officer may be focusing on driving or performing some other activity/task and may not see the item of interest; in this case, the disclosed systems can alert multiple officers to be aware of the potential object of interest and thereby improve the chances for detection. This can also increase safety and efficiency for the officer. Officer efficiency may also be improved with embodiments wherein the camera device and detection/analytics engine are configured to detect expired vehicle tags. Once the analytics engine makes such a determination, the microprocessor can trigger an action as described above (e.g., flash an alert on the vehicle display, issue a notice to the police station 14, record the information as metadata, etc.). Moreover, the disclosed embodiments provide the means to perform the described detection and analytics techniques in real-time, as image data is being captured.
Turning to
In some embodiments, the vehicle 10 computer 12 microprocessor may also be configured with instructions to send out a communication (via the communication network 18) to activate the camera devices 16 in other law enforcement vehicles (e.g., in-car video (ICV) units 28), and the BWCs 29 worn by officers, within a set range or perimeter of where the object of interest (corresponding to the designated content) was detected, as depicted by the arrows in
As previously mentioned, BWCs can be used with implementations of the embodiments of this disclosure. Suitable BWCs include the devices commercially available from COBAN Technologies Inc., in Houston, Tex. (http//www.cobantech.com). The BWCs are worn by officers on patrol. The BWC can be conveniently clipped to the officer's uniform or body gear as desired. BWCs may also be configured with a microphone to collect audio data. The collected audio data may be transmitted together with the captured image/video and/or metadata to another device (e.g., located in a police car, at a police station, on another police officer, or in the cloud) as described herein. It will be appreciated by those skilled in the art that various conventional BWC devices and storage units may be used to implement embodiments of this disclosure. Similarly, various wireless technologies may also be used to implement the embodiments as known in the art. It will also be appreciated that as technology improves, smaller and lower power camera and transmission devices may become available which may further improve performance and run time. Such devices may easily be integrated into the embodiments of this disclosure.
In some embodiments, the vehicle 10 computer 12 may be configured to perform wireless networked or distributed analytics processing. As previously described, in some embodiments the vehicle 10 computer 12 is configured to access an onboard database and perform the disclosed analytics processing as a stand-alone unit. In other embodiments, the vehicle 10 computer 12 may be configured to communicate via the communication network 18 (e.g. using the cloud) with other computers (e.g. remote ICV units 28 and BWCs 29) to perform a distributed and shared image data analysis. With reference to
In some embodiments, the ICV 28 is configured to detect and take snapshots, or receive snapshots from a wearable device (e.g. BWC 29), of a person's face to run facial recognition locally or by transmitting the data to a remote server (e.g. server 15) for further analytics. This further enhances the BOLO capabilities. For example, a BOLO may include an alert to look for a white male, wearing a black jacket, having an age in the mid-twenties, etc. The detection of attributes is also enhanced, such as detection of approximate age, gender, and race. The use of AI software and other advanced software applications may provide additional benefits. Some embodiments may also be configured to receive video data via transmission such as Real Time Streaming Protocol (RTSP) streaming for detection and analytics of attributes and facial recognition. Some embodiments of this disclosure provide for selective search and export of the captured information. In one such embodiment, an authorized user linked to the computer 12 microprocessor via the communication network 18 (e.g., using a smart phone, laptop computer, tablet, etc.) can analyze the information according to specific criteria established by the user. For example, a user can select or draw an area on a map to display vehicles in a given region, along with their associated data such as specific location data/time/number of recorded events/event type/duration, license plate data, vehicle type, shape, color etc. If an event or specific data is of interest, the user can select an option to send a request to any or all vehicle computers 12 to scan their storage drives, that are continuously recording, for the desired information and send back a response with the search results or to retrieve the designated data with time markers of start and stop points to export video, snapshots, or metadata. This embodiment can be implemented for a local or global application.
As discussed, computer 312 may be configured to store the telemetry data in internal memory 313. The telemetry data may also be transferred to a cloud server. The telemetry data can be multiplexed and synchronized with the captured image data (or other data) for later analysis, evidence gathering, evaluation, and training. The telemetry data may also be stored as metadata for the captured image data of camera device 316. Preferably, the process of sampling, storing, and analyzing the telemetry data would be initiated automatically to reduce the workload on the LEO. For example, computer 312 may begin sampling, storing, and analyzing the telemetry data when vehicle 310 is started if the law enforcement agency intends to monitor the driving behavior of the LEO at all times. Alternatively, the computer 312 may begin sampling, storing, and analyzing the telemetry data when the emergency light bar 307 is activated if the law enforcement agency intends to monitor the driving behavior of the LEO when the LEO is responding to a call or engaged in a pursuit. Alternatively, the computer 312 may begin sampling, storing, and analyzing the telemetry data when motion of the vehicle is detected or the vehicle reaches a particular speed. Additionally, the computer 312 can bookmark the activation of the emergency light bar 307 within the metadata for the captured image data.
According to one aspect of this disclosure, system 300 may be configured to detect the speed of vehicle 310. The computer 312 may determine the speed of vehicle 310 in several different ways. For example, the computer 312 may sample the speed sensor 304. Speed sensor 304 may be located on the wheel, within the wheel hubs, on brake components, within the transmission, within the torque converter, within the differential, on the flywheel or other rotating components of vehicle 310. The speed sensor can be of a magnetic type (e.g. Hall Effect, or Reed Switch) or an optical speed sensor. The computer 312 may also sample the telemetry data from GPS device 305 at a known interval and calculate the change in position of the vehicle 310 over a known time interval using data obtained from the GPS device 305. The computer 312 may also sample the telemetry data from accelerometer 303 at a known interval. The computer 312 may then calculate the vehicle's speed based upon its acceleration over a period of time.
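As a non-limiting illustration of the GPS-based approach, the following sketch computes an average speed from two successive GPS fixes sampled at a known interval. The coordinates, the one-second sample interval, and the function names are hypothetical; a practical implementation would typically filter noisy fixes before differencing them.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def speed_from_gps(fix_a, fix_b, sample_interval_s):
    """Average speed (m/s) between two fixes sampled a known interval apart."""
    return haversine_m(*fix_a, *fix_b) / sample_interval_s

# Hypothetical fixes sampled one second apart.
fix_a = (29.7604, -95.3698)
fix_b = (29.7606, -95.3698)
print(speed_from_gps(fix_a, fix_b, 1.0))  # ~22 m/s (roughly 50 mph)
```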
The computer 312 may also employ the above-referenced analytics engine to analyze the image data captured by the camera device 316. As discussed, above, the captured image data can be analyzed to determine physical dimensions and parameters of observed objects, which can then be compared to known dimensions and parameters in the database library. For example, the analytics engine can recognize known objects (i.e. with known dimensions) in the captured image data. The image data can be sampled at a known rate. The computer 312 can calculate the change in size of the known objects relative to the entire frame of the image. In other words, the rate at which an object of known size appears to increase (or decrease) can be used to calculate the relative speed of the vehicle 310 to the known object. Thus, this is one way in which the computer 312 can determine the speed of vehicle 310 without use of sensors (speed sensor, accelerometer, GPS device, etc.).
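The following sketch illustrates, under a simple pinhole-camera assumption, how the apparent size of an object of known width could be converted to a range estimate and how two such estimates could yield a relative speed. The focal length (in pixels), the sampled pixel widths, and the nominal stop-sign width are hypothetical values chosen for illustration only.

```python
def range_from_width_m(known_width_m, pixel_width, focal_length_px):
    """Pinhole-camera range estimate: range = focal length * real width / pixel width."""
    return focal_length_px * known_width_m / pixel_width

def relative_speed_mps(known_width_m, pixel_width_t0, pixel_width_t1,
                       sample_interval_s, focal_length_px=1000.0):
    """Closing speed (m/s) toward an object of known width between two sampled frames."""
    range_t0 = range_from_width_m(known_width_m, pixel_width_t0, focal_length_px)
    range_t1 = range_from_width_m(known_width_m, pixel_width_t1, focal_length_px)
    return (range_t0 - range_t1) / sample_interval_s  # positive when closing

# A standard stop sign face is roughly 0.75 m across (hypothetical pixel widths).
print(relative_speed_mps(0.75, pixel_width_t0=20, pixel_width_t1=25,
                         sample_interval_s=0.5))  # 15.0 m/s closing speed
```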
The computer 312 can also determine the applicable speed limit. The computer 312 can determine the applicable speed limit from one or more sources. For example, the internal storage for the computer 312 can store a database which relates location to the applicable speed limit. Sampling the data from the GPS device 305, computer 312 can look up the corresponding applicable speed limit from the database in its internal storage. In some cases, the GPS device 305 may receive explicit local speed limit data. As depicted in
System 300 may be configured to monitor different designated events indicative of disfavored driving behaviors. For example, the system can detect when vehicle 310 fails to stop at a stop sign.
The computer 312 may sample the telemetry data from the vehicle 310 and/or analyze the image data captured by the camera 316 to determine whether the vehicle 310 stopped at the stop sign 331. For example, the computer 312 may sample the captured image data at a known rate. Having identified stop sign 331 represented by bounding box 330 in the video frame 320, the computer 312 would calculate the dimensions X and Y for each sampling of the video frame 320. Where two or more samplings of the video frame 320 are equivalent, the computer 312 could determine that the vehicle 310 was not moving relative to the stop sign 331 (which is stationary) for the period of time between the video samples. In other words, two or more samplings of the video frame 320 with constant dimensions X and Y are indicative of a stationary vehicle. In contrast, where dimensions X and Y continuously increase until the stop sign 331 disappears out of the video frame 320 (i.e. X and Y are not constant for two or more samplings of the video frame 320), the computer 312 may determine that the vehicle 310 did not come to a complete stop. Alternatively, the computer 312 can calculate the change in size of known objects (e.g. the stop sign 331) relative to the entire frame of the image to calculate the relative speed of the vehicle 310 to the known object. Analyzing this data, the computer 312 can determine if vehicle 310 has come to a complete stop (i.e. speed=0) within a predetermined proximity of a stop sign. The computer 312 may also sample the telemetry from the vehicle's speed sensor 304, accelerometer 303, and/or GPS device 305 to determine the vehicle's speed.
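A minimal sketch of the stop determination described above follows. It treats the vehicle as stationary relative to the stop sign when the bounding-box dimensions X and Y are effectively constant for two or more consecutive samplings; the pixel tolerance and the sample values are hypothetical.

```python
def came_to_complete_stop(bbox_samples, tolerance_px=1.0, min_constant_samples=2):
    """
    'bbox_samples' is a time-ordered list of (X, Y) bounding-box dimensions for
    a stationary object such as stop sign 331. Returns True when the dimensions
    stay effectively constant for at least 'min_constant_samples' consecutive
    samples, i.e. the vehicle was not moving relative to the sign.
    """
    constant_run = 1
    for (x0, y0), (x1, y1) in zip(bbox_samples, bbox_samples[1:]):
        if abs(x1 - x0) <= tolerance_px and abs(y1 - y0) <= tolerance_px:
            constant_run += 1
            if constant_run >= min_constant_samples:
                return True
        else:
            constant_run = 1
    return False

# Hypothetical samplings of the stop-sign bounding box (pixels).
rolling_stop = [(40, 40), (46, 46), (53, 53), (61, 61)]   # dimensions keep growing
full_stop = [(40, 40), (46, 46), (46, 46), (46, 46)]      # constant for a period
print(came_to_complete_stop(rolling_stop))  # False -> designated event detected
print(came_to_complete_stop(full_stop))     # True
```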
Similarly, the system 300 may be configured to detect when the vehicle 310 fails to stop at a stop light.
The computer 312 may sample the telemetry data from the vehicle 310 and/or analyze the image data captured by the camera 316 to determine whether the vehicle 310 stopped at the traffic light 341. For example, the computer 312 may sample the captured image data at a known rate. Having identified traffic light 341 represented by bounding box 340 in video frame 320, the computer 312 would calculate the dimensions X and Y for each sampling of the video frame 320. Where two or more samplings of the video frame 320 are equivalent, the computer 312 could determine that the vehicle 310 was not moving relative to the traffic light 341 (which is stationary) for the period of time between the video samples. In other words, two or more samplings of the video frame 320 with constant dimensions X and Y are indicative of a stationary vehicle. In contrast, where (i) the computer 312 recognizes the traffic light 341 as red, and (ii) dimensions X and Y continuously increase until the traffic light 341 disappears out of video frame 320 (i.e. X and Y are not constant for two or more samplings of the video frame 320), the computer 312 may determine that the vehicle 310 did not come to a complete stop. Alternatively, the computer 312 can calculate the change in size of the known objects (e.g. the traffic light) relative to the entire frame of the image to calculate the relative speed of the vehicle 310 to the known object. Analyzing this data, the computer 312 can determine if the vehicle 310 has come to a complete stop (i.e. speed=0) within a predetermined proximity of the traffic light 341. The computer 312 may also sample the telemetry from the vehicle's speed sensor 304, accelerometer 303, and/or GPS device 305 to determine the vehicle's speed.
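Similarly, the red-light determination can be illustrated by combining the recognized light state with the bounding-box analysis. In this hypothetical sketch, the event is flagged when the light is recognized as red and the bounding box keeps growing (i.e. the dimensions are never constant for two consecutive samples) before the light leaves the frame; all names and values are illustrative.

```python
def ran_red_light(light_states, bbox_samples, tolerance_px=1.0):
    """
    'light_states' holds the recognized color of traffic light 341 in each
    sampled frame; 'bbox_samples' holds its (X, Y) bounding-box dimensions in
    the same frames. The event is flagged when the light was recognized as red
    and the dimensions were never constant for two consecutive samples, i.e.
    the vehicle never stopped relative to the (stationary) light.
    """
    was_red = any(state == "red" for state in light_states)
    ever_stationary = any(
        abs(x1 - x0) <= tolerance_px and abs(y1 - y0) <= tolerance_px
        for (x0, y0), (x1, y1) in zip(bbox_samples, bbox_samples[1:])
    )
    return was_red and not ever_stationary

# Hypothetical samples: the light stays red while its bounding box keeps growing.
states = ["red", "red", "red", "red"]
bboxes = [(30, 60), (36, 72), (44, 88), (55, 110)]
print(ran_red_light(states, bboxes))  # True -> designated event detected
```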
In another embodiment the computer 312 samples telemetry from the GPS device 305. The computer 312 may determine the vehicle's position at various times determined by the sample rate. The computer 312 can calculate the change in position relative to the sample period to determine the vehicle's speed. Similarly, the computer 312 can calculate the acceleration (i.e. change in speed over time). In another embodiment, the analytics engine can recognize known fixed (i.e. stationary) objects (e.g. signage) in the captured image data. The image data can be sampled at a known rate. The computer 312 can calculate the change in size of the known objects relative to the entire video frame as discussed above. In other words, the rate at which fixed objects of known size appear to increase (or decrease) can be used to calculate the relative speed of the vehicle 310 to the known fixed objects. With this data, the computer 312 can calculate the acceleration of the vehicle 310. This calculated acceleration may also be used to determine whether the driver is driving aggressively.
In another aspect of the disclosure, the system 300 can detect a near collision with another vehicle, pedestrian or object. As discussed, above, physical dimensions and parameters of real-world objects can be analyzed to determine respective spacing and dimensions, which can then be compared to the real-world parameters in the database library of known items and dimensions. The analytics engine can recognize known objects (i.e. with known dimensions) in the captured image data. The image data can be sampled at a known rate. The computer 312 can calculate the change in size of the known objects relative to the entire video frame of the captured image. The rate at which an object of known size appears to increase in the image frame can be used to calculate the relative closing speed of the vehicle 310 to the known object. Where the known object is fixed, the analytics engine can calculate the absolute speed and acceleration of the vehicle using the techniques disclosed above.
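As a non-limiting illustration, the closing-speed analysis can be expressed as a time-to-collision estimate computed from two successive range estimates. The 1.5-second threshold and the sample values below are hypothetical; an agency could configure any appropriate value.

```python
def time_to_collision_s(range_t0_m, range_t1_m, sample_interval_s):
    """Estimated seconds until impact, from two successive range estimates."""
    closing_speed_mps = (range_t0_m - range_t1_m) / sample_interval_s
    if closing_speed_mps <= 0:
        return float("inf")  # not closing on the object
    return range_t1_m / closing_speed_mps

def near_collision(range_t0_m, range_t1_m, sample_interval_s, ttc_threshold_s=1.5):
    """Flag a designated event when the time to collision falls below the threshold."""
    return time_to_collision_s(range_t0_m, range_t1_m, sample_interval_s) < ttc_threshold_s

# Ranges derived from the apparent size of a known object (hypothetical values):
# closing at 12 m/s with roughly 1.2 s to impact.
print(near_collision(20.0, 14.0, 0.5))  # True -> designated event detected
```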
System 300 can sample the vehicle telemetry and store the telemetry data on internal storage and/or upload the telemetry data to a cloud server. The computer 312 may store separate telemetry files from each sensor, or it may multiplex the telemetry data from multiple sensors into a single file. The telemetry data may be stored in a separate file from the video images captured by the camera device 316 or the telemetry data may be stored in metadata for the video file of the video images captured by the camera device 316. The computer 312 may transmit any of the captured image data, telemetry data and metadata via the communication network for viewing on a smart phone, tablet, PC, or at a remote location. In some embodiments, the vehicle 310 computer 312 may also be configured to mark start and end points in the captured image data, telemetry data and/or metadata. In other embodiments, the vehicle 310 computer 312 may also be configured to isolate images or produce snapshots or video clips associated with the metadata. The computer 312 may also be configured to isolate portions of the telemetry data associated with the metadata. All of this data can be stored locally for a predetermined time or on a FIFO basis; it can also be uploaded to a remote location continually or in configurable intervals and packet sizes.
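The following sketch shows one hypothetical way of synchronizing telemetry samples with a video file by writing them, together with event bookmarks such as light-bar activations, to a sidecar metadata file. A JSON file is used here only for brevity; a structured text format such as XML, as described above, could be used equally, and all file names and field names are illustrative.

```python
import json
import time

def telemetry_sample(speed_mph, accel_g, lat, lon):
    """One timestamped telemetry sample to be synchronized with the captured video."""
    return {"timestamp": time.time(), "speed_mph": speed_mph,
            "accel_g": accel_g, "gps": [lat, lon]}

def write_sidecar_metadata(video_file, samples, light_bar_activations):
    """Write telemetry and event bookmarks to a sidecar file associated with the video."""
    sidecar = {
        "video_file": video_file,
        "telemetry": samples,                # synchronized to the video by timestamp
        "bookmarks": light_bar_activations,  # e.g. light-bar activation times
    }
    with open(video_file + ".meta.json", "w") as f:
        json.dump(sidecar, f, indent=2)

samples = [telemetry_sample(42.0, 0.31, 29.7604, -95.3698)]
write_sidecar_metadata("capture_0001.mp4", samples, [time.time()])
```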
Other driving behaviors can be monitored as well. The computer 312 may be configured to monitor telemetry data from the accelerometer 303.
Having sampled the vehicle telemetry and captured the image data, the computer 312 can also be configured with software instructions to determine if the driver's behavior falls within or outside one or more sets of predetermined parameters. For example, the computer 312 can be configured with software instructions to determine if the values of accelerometer 303 (in any direction) exceed one or more predetermined thresholds. While this example is offered in connection with the sampling telemetry data from the accelerometer 303, the computer 312 can also be configured with software instructions to determine if other telemetry data (e.g. speed, etc.) falls within or outside one or more sets of predetermined parameters.
Further in regard to acceleration, the computer 312 may, for example, be configured to monitor longitudinal acceleration and deceleration (i.e. the vehicle's acceleration or deceleration in the forward-backward direction) by sampling the Y-values of accelerometer 303. Additionally, the computer 312 can be configured with software instructions to determine if the vehicle's acceleration or deceleration is excessive—i.e. above one or more predetermined thresholds. Additionally, the computer 312 can monitor lateral acceleration and deceleration (i.e. the vehicle's acceleration in the left or right direction) by sampling the X-values of accelerometer 303. Excessive lateral acceleration is indicative of swift lane changes or swerving. Excessive acceleration in either the X or Y values may indicate disfavored driving behavior such as aggressive driving, distracted driving, fatigue, or a driver under the influence. Significant deceleration would be indicative of a collision.
For example, a high performance vehicle may experience maximum longitudinal acceleration of 0.8 Gs for the Y-values in
A law enforcement agency may determine that safe, attentive driving would be consistent with: (i) maximum longitudinal acceleration of 0.5 Gs for the Y-values when accelerating, (ii) maximum longitudinal deceleration of −0.8 Gs when decelerating, and/or (iii) maximum lateral acceleration of +/−0.7 Gs for the X-values. As discussed, below, computer 312 could trigger one or more actions when these values are above these thresholds.
In another embodiment, a law enforcement agency may also determine the interests of law enforcement (e.g. suspect apprehension or responding to an emergency call) might militate in favor of more aggressive driving. Therefore, the computer 312 can be configured with software instructions to determine if the vehicle's longitudinal or lateral acceleration exceed a second predetermined set of threshold values. This may be so because while the interests of law enforcement may justify more aggressive driving by an LEO, the law enforcement agency may also wish to trigger one or more events based upon this second predetermined set of threshold values for driving. For example, a law enforcement agency may determine that safely responding to an emergency call would be consistent with: (i) maximum longitudinal acceleration of 0.8 Gs for the Y-values when accelerating, (ii) maximum longitudinal deceleration of −1.0 G when decelerating, and/or (iii) maximum lateral acceleration of +/−0.9 Gs for the X-values. It should be noted that the values discussed in connection with the threshold values are provided as non-limiting examples. Law enforcement agencies may employ different values based upon their own judgments.
The computer 312 may also be configured with software instructions to determine whether to bypass the determination of whether vehicle 310 is exceeding the first predetermined threshold values. This determination may be based on one or more inputs such as the activation of the emergency light bar 307 of vehicle 310, a radio communication, a separate control operated by the LEO, or the vehicle's telemetry data. As a non-limiting example, the computer 312 could be configured with software instructions to determine whether emergency light bar 307 of the vehicle 310 has been activated. The activation of the emergency light bar 307 of the vehicle 310 could be used to bypass the determination of whether the vehicle 310 is exceeding the first predetermined threshold values. This may be useful where the LEO is responding to an emergency call and it is expected that the LEO's driving behavior will exceed the first predetermined threshold values.
In another embodiment, the computer 312 may trigger the activation of the emergency light bar 307 of the vehicle 310 if the first predetermined threshold values have been exceeded. This may be useful to reduce the LEO's workload in the case of an emergency call or pursuit. If there is no emergency call or pursuit then it could serve as an alert to the LEO that the LEO has exceeded the first predetermined threshold values.
In yet another embodiment, a law enforcement agency may determine that the interests of law enforcement justify some level of heightened risk from the LEO's more aggressive driving. The law enforcement agency may still seek to limit the risk to the LEO and the public. Therefore, the computer 312 can be configured with software instructions to determine if the vehicle's longitudinal or lateral acceleration exceed a third predetermined set of threshold values. While the interests of law enforcement may justify more aggressive driving by an LEO, the law enforcement agency may also wish to trigger one or more events based upon this third predetermined set of threshold values for driving. For example, a law enforcement agency may determine that the acceptable level of danger to the LEO and the public is consistent with: (i) maximum longitudinal acceleration of 0.8 Gs for the Y-values when accelerating, (ii) maximum longitudinal deceleration of −1.1 G when decelerating, and/or (iii) maximum lateral acceleration of +/−1.1 Gs for the X-values.
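The tiered evaluation described in the preceding paragraphs can be illustrated with the following sketch, which checks a single accelerometer sample against the first, second, and third threshold sets using the non-limiting example values given above, and which supports bypassing the first tier (e.g. when the emergency light bar 307 is active). The function names and the grouping of values into a table are hypothetical.

```python
# Threshold sets taken from the non-limiting example values above
# (Y = longitudinal Gs, X = lateral Gs).
THRESHOLD_TIERS = [
    {"name": "first", "max_accel_g": 0.5, "max_decel_g": -0.8, "max_lateral_g": 0.7},
    {"name": "second", "max_accel_g": 0.8, "max_decel_g": -1.0, "max_lateral_g": 0.9},
    {"name": "third", "max_accel_g": 0.8, "max_decel_g": -1.1, "max_lateral_g": 1.1},
]

def exceeded_tiers(x_g, y_g, bypass_first_tier=False):
    """Return the names of the threshold tiers exceeded by one accelerometer sample."""
    exceeded = []
    for tier in THRESHOLD_TIERS:
        if bypass_first_tier and tier["name"] == "first":
            continue  # e.g. emergency light bar 307 is active
        if (y_g > tier["max_accel_g"]
                or y_g < tier["max_decel_g"]
                or abs(x_g) > tier["max_lateral_g"]):
            exceeded.append(tier["name"])
    return exceeded

# Hard lateral maneuver while the light bar is active: the first tier is bypassed,
# the second tier is exceeded, and the third tier is not.
print(exceeded_tiers(x_g=1.0, y_g=0.3, bypass_first_tier=True))  # ['second']
```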
A similar system of tiered threshold values can also be applied to other types of data. As another non-limiting example, the computer 312 may be configured with software instructions to determine the speed of the vehicle 310. Rather than comparing the vehicle's speed against a first predetermined threshold value, the computer 312 could compare the vehicle's speed against a variable value—e.g. the speed limit. Alternatively, the computer 312 could compare the vehicle's speed against a predetermined range of values around the speed limit—e.g. between (i) the speed limit−10 mph and (ii) the speed limit+5 mph. Alternatively, the computer 312 could compare the vehicle's speed against a predetermined range bounded by the speed limit—e.g. between the speed limit and 10 mph less than the speed limit. Once again, the values provided are for illustrative purposes only and could be determined based on the judgment of each law enforcement agency. Collectively, this disclosure refers to these thresholds (limits of the ranges) as “speed thresholds.” Conceptually, an LEO speeding may be a disfavored practice by the law enforcement agency, and driving too slowly may be indicative of another disfavored driving practice such as distracted driving, driving while fatigued, or intoxication. Where the LEO is responding to an emergency call, however, it may be expected that the LEO's driving behavior will exceed the posted speed limit. A law enforcement agency may determine that the interests of law enforcement justify some level of heightened risk from the LEO's more aggressive driving, but the law enforcement agency may still seek to limit the risk to the LEO and the public. Therefore, computer 312 can be configured with software instructions to determine if the vehicle's speed is outside a second speed threshold. Where the LEO's driving behavior exceeds this second speed threshold, computer 312 may trigger one or more actions. Non-limiting examples of such actions are discussed, below, and may include providing an alert to the driver, sending an alert to dispatch, or activating emergency light bar 307 or siren. It may be desirable to take additional actions where the LEO's speed exceeds a third speed threshold. Non-limiting examples of such actions are discussed, below, and may include intervening into the vehicle's dynamics (e.g. to apply the brakes), limit the LEO's throttle demand, or alert the LEO to terminate a pursuit of a suspect vehicle. This tiered approach is consistent with the dichotomy discussed, above, where the interests of law enforcement conflict with the interests of public safety and the safety of the LEO. The tiered approach may reflect a law enforcement agency's judgment that the LEO should be provided discretion to respond to an emergency, while placing boundaries on that discretion to protect the public and the LEO.
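A corresponding sketch for the speed thresholds follows. It classifies the vehicle's speed against ranges defined relative to the applicable speed limit; the ranges mirror the illustrative values above and are hypothetical, as is the mapping of each tier to a triggered action.

```python
def speed_threshold_tier(speed_mph, limit_mph):
    """
    Classify the vehicle's speed against speed thresholds defined relative to
    the applicable speed limit (illustrative, non-limiting ranges).
    """
    if (limit_mph - 10) <= speed_mph <= (limit_mph + 5):
        return None      # within the first speed threshold; no action triggered
    if speed_mph < (limit_mph - 10):
        return "second"  # unusually slow: e.g. alert for possible distraction or fatigue
    if speed_mph <= (limit_mph + 20):
        return "second"  # moderately over the limit: e.g. alert the driver or dispatch
    return "third"       # well over the limit: e.g. intervene or advise terminating the pursuit

limit_mph = 45
for speed_mph in (40, 62, 70):
    print(speed_mph, speed_threshold_tier(speed_mph, limit_mph))
# 40 None, 62 second, 70 third
```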
Collisions and rollovers can also be detected with system 300 by sampling telemetry data from the accelerometer 303. During a collision the magnitude of the telemetry data from accelerometer 303 is much higher than the magnitude of the telemetry data from the accelerometer 303 discussed in connection with aggressive driving. For example, where the maximum longitudinal deceleration experienced by a vehicle may be approximately −1.3 Gs, a vehicle involved in a collision would experience much greater values of deceleration. For example, a vehicle stopping from a speed of 30 miles per hour in 1 foot would experience approximately −30 Gs. This type of rapid deceleration is indicative of a collision. Similarly, a vehicle's vertical acceleration (depicted as +/−Z values in
Still other driving behaviors may also be detected. As one example, it may be determined whether or not a driver of a given vehicle is maintaining a safe distance from a vehicle in front of him (put in other words, whether or not the driver is driving dangerously close to a vehicle in front of him). Similarly, it may be determined whether or not a driver of a given vehicle is maintaining a safe distance from a vehicle to his side (put in other words, whether or not the driver is passing dangerously close to a vehicle on his side), e.g., a moving or parked vehicle. For the purposes of this disclosure, failure to maintain a safe distance is deemed an instance of aggressive or reckless driving. The determinations as to whether a safe distance is being maintained may be made based on captured image data of the two vehicles and/or data from sensors, such as GPS data (e.g., pertaining to the positions of the two vehicles), speed data, and acceleration data. The determinations may be made based on preassigned values of safe distances, which values have been inputted into the analytics engine for use in making the determinations. For example, a preassigned value of a safe distance between two vehicles may be 2 seconds (based on the speed of the vehicles) or a certain number of feet or vehicle lengths. One of ordinary skill in the art will understand how the analytics engine may make such determinations, in view of the description herein of other determinations made by the analytics engine.
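A minimal sketch of such a safe-distance check, assuming a gap measurement is available (from image analysis or a ranging sensor) along with preassigned safe-distance values; the 2-second and 5-meter defaults below stand in for agency-assigned values and are not prescribed by this disclosure.

```python
def is_following_too_closely(gap_m: float, own_speed_mps: float,
                             min_gap_seconds: float = 2.0, min_gap_m: float = 5.0) -> bool:
    """True if the measured gap to the lead (or adjacent) vehicle violates the preassigned
    safe distance, expressed either as a time gap or as an absolute distance."""
    time_gap = gap_m / own_speed_mps if own_speed_mps > 0 else float("inf")
    return time_gap < min_gap_seconds or gap_m < min_gap_m
```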
Another driving behavior that may be detected is (repeated) hard braking of a vehicle. For the purposes of this disclosure, (repeated) hard braking of a vehicle is deemed an instance of aggressive or reckless driving. One of ordinary skill in the art will understand how the analytics engine may detect this behavior, e.g., based on acceleration (deceleration) and/or speed data in view of the description herein of other determinations made by the analytics engine. Another driving behavior that may be detected is improper crossing of a line marked on the roadway, e.g., crossing a double yellow line (which it is forbidden to cross), or repeated crossing of a white lane line (which is allowed, but repeated crossing of it may indicate aggressive, reckless, or intoxicated driving). For the purposes of this disclosure, improper crossing of a line marked on the roadway is deemed an instance of aggressive or reckless driving. One of ordinary skill in the art will understand how the analytics engine may detect this behavior, e.g., based on captured image data and/or sensor data, whether from a GPS device or other sensor, in view of the description herein of other determinations made by the analytics engine.
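One possible way to detect repeated hard braking from accelerometer telemetry is sketched below. The deceleration threshold, event count, and rolling window are hypothetical values, and the class name is illustrative rather than part of any described embodiment; lane-line crossings would be counted analogously from image-based detections.

```python
from collections import deque

class HardBrakingDetector:
    """Flag repeated hard braking: N braking events below a deceleration threshold
    within a rolling time window (all parameter defaults are placeholders)."""

    def __init__(self, threshold_g: float = -0.7, min_events: int = 3, window_s: float = 60.0):
        self.threshold_g = threshold_g
        self.min_events = min_events
        self.window_s = window_s
        self._events = deque()   # timestamps of detected hard-braking events
        self._braking = False    # debounce: currently inside a braking event?

    def update(self, timestamp_s: float, longitudinal_accel_g: float) -> bool:
        """Feed one telemetry sample; return True when repeated hard braking is detected."""
        hard = longitudinal_accel_g <= self.threshold_g
        if hard and not self._braking:          # count only the transition into hard braking
            self._events.append(timestamp_s)
        self._braking = hard
        while self._events and timestamp_s - self._events[0] > self.window_s:
            self._events.popleft()              # drop events outside the rolling window
        return len(self._events) >= self.min_events
```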
Having discussed the detection of the occurrence of various designated events, which the analytics engine is capable of performing, we turn to examples of various actions which may be triggered in response to such events. Once the analytics engine determines that one of the above-referenced events has occurred, the computer 312 may trigger one or more actions, such as those discussed above. For example, but not by way of limitation, the triggered action may include:
With reference again to
In another non-limiting example, the computer 312 of system 300 can also detect whether the second vehicle 350 is speeding. Using methods similar to those discussed, above, the analytics engine can determine the relative speed of the second vehicle 350 to the first vehicle 310 (i.e. the speed of the second vehicle 350 relative to the speed of the first vehicle 310). For example, the analytics engine can recognize the second vehicle 350 in the captured image data from video frame 320. The image data can be sampled at a known rate. The computer 312 can calculate the change in size of the known objects relative to the entire video frame as discussed above. In other words, the rate at which the second vehicle 350 appears to increase (or decrease) in size can be used to calculate the speed of the second vehicle 350 relative to the first vehicle 310. Also, using the methods discussed, above, system 300 can sample the telemetry from the speed sensor 304. Adding the speed of the first vehicle 310 to the relative speed of the second vehicle 350 would yield the speed of the second vehicle 350. Computer 312 may also determine the speed of second vehicle 350, and hence determine if vehicle 350 is speeding, in other ways. For example, video data of the second vehicle 350 could be captured continually for a period of time. As the video progresses frame by frame, the second vehicle 350 appearing in the video display moves across the display. The distance the second vehicle 350 moves across the display from an initially measured/recorded position in a given frame to a subsequently measured/recorded position in a given subsequent frame can be measured. These positions and this distance in the display can be correlated with actual physical positions and distance. In addition, the frame rate of the video (e.g., 30 frames per second) is known. Accordingly, it may be determined how far the second vehicle 350 moves in a given period of time. From this, the speed of second vehicle 350 can be determined. As described here, the computer 312 can thus in various ways determine the speed of second vehicle 350 without use of sensors (speed sensor, accelerometer, GPS device, etc.). Computer 312 may use multiple different methods to determine the speed of second vehicle 350; for example, one or more methods may be used to corroborate results obtained by another one or more methods. System 300 could store the speed of the second vehicle 350 as a separate file in internal memory 313 or store the telemetry data as metadata with the video data captured from the camera device 316. This method could be useful to law enforcement agencies for detecting speeding without emitting the electromagnetic radiation or light that is detectable by radar or laser detectors.
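The frame-rate-based estimate described above might be sketched as follows, assuming a hypothetical pixels-to-meters calibration that correlates display positions with physical road positions; the function name, parameters, and example numbers are illustrative only and are not taken from this disclosure.

```python
MPS_TO_MPH = 2.23694  # meters per second -> miles per hour

def speed_from_frames(pos_px_start: float, pos_px_end: float, frames_apart: int,
                      frame_rate_hz: float, meters_per_pixel: float,
                      own_speed_mph: float = 0.0) -> float:
    """Estimate another vehicle's speed from its displacement across video frames.

    meters_per_pixel is a hypothetical calibration mapping display distance to road
    distance (in practice derived from camera geometry); own_speed_mph is added when
    the measured displacement is relative to a moving camera vehicle that the observed
    vehicle is pulling away from.
    """
    elapsed_s = frames_apart / frame_rate_hz
    relative_mps = abs(pos_px_end - pos_px_start) * meters_per_pixel / elapsed_s
    return own_speed_mph + relative_mps * MPS_TO_MPH

# e.g., 30 px of displacement over 30 frames at 30 fps with a 0.2 m/px calibration,
# observed from a patrol vehicle traveling 40 mph:
print(round(speed_from_frames(100, 130, 30, 30.0, 0.2, own_speed_mph=40.0), 1))  # ~53.4 mph
```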
According to another embodiment, a vehicle may be provided with a display (e.g., a dash cam) configured to display detected events. In some cases, the events displayed may be events of the vehicle itself, e.g., the display of vehicle 310 displays the fact that vehicle 310 is speeding. In some cases, the events displayed may be events of another vehicle, e.g., the display of vehicle 310 displays the fact that vehicle 350 is speeding. In some cases, the events displayed may be events of the vehicle itself and events of another vehicle. When an event of a given vehicle (e.g., the vehicle is speeding) is detected, the device (e.g., computer 312) detecting the event may send a notification of the detected event to a remote (e.g., cloud-based) server. Upon receipt of the notification, the server may send out a corresponding notification of the detected event to others (e.g., vehicles/drivers) to alert them of the detected event that occurred with the given vehicle. Notifications may be sent to others within a designated radius (distance) of the detected event or within a designated radius (distance) of the vehicle with respect to which the event was detected. This arrangement could be used to alert the police, other citizens/drivers, or even pedestrians nearby (e.g., by flashing the alert on a road sign). These notifications could serve, e.g., to alert the police and innocent citizens of a danger such as a reckless driver in the vicinity. Citizens would then be better able to protect themselves from such dangers (such as being hit by the reckless driver), by being on guard when driving or walking near, or crossing, a road in the vicinity of the vehicle in question. As for the content of the notification, it may include identification of the vehicle with respect to which the event was detected (e.g., the speeding vehicle), such as license plate information, vehicle make/model/color/year, etc., and identification of the nature of the event (e.g., reckless driving, speeding, going through a red light, etc.) and the location of the event (e.g., street address, distance from the recipient of the notification, etc.). The notification may also include still images or video clips of the vehicle (or another vehicle of the same make/model/color/year, etc.) and/or the detected event. This content may be displayed, as appropriate, as text (displayed and/or audible) and/or image/video, on the display of the vehicle receiving the notification. Citizens or drivers may register with a service or with the police in order to participate, that is, either to have their vehicles be monitored for events and/or to receive notifications of detected events of other vehicles. In some embodiments, self-driving vehicles would participate as monitored vehicles and/or notification recipients. In a network of self-driving vehicles, all the self-driving vehicles may be monitored and receive notifications (at least from other self-driving vehicles in the network). The travel vectors or trajectories of the self-driving vehicles may be known/obtained by a service, and the service may send notifications to recipient parties projected to be soon in the vicinity of the vehicle with respect to which an event was detected. Also, the notifications may include (in addition to content noted above) the travel vector/trajectory of the vehicle with respect to which an event was detected.
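A minimal sketch of the notification flow described above, assuming a hypothetical payload structure and a server-side radius filter based on great-circle distance; the field names, the 2 km default radius, and the recipient tuples are placeholders, not elements prescribed by this disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class EventNotification:
    """Hypothetical payload for a detected-event notification."""
    plate: str
    make_model: str
    event_type: str          # e.g. "speeding", "red_light"
    lat: float
    lon: float

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recipients_within_radius(event: EventNotification, registered, radius_km: float = 2.0):
    """Select registered parties, given as (id, lat, lon) tuples, within the designated
    radius of the location at which the event was detected."""
    return [rid for rid, lat, lon in registered
            if haversine_km(event.lat, event.lon, lat, lon) <= radius_km]
```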
With regard to the various determinations that computer 312 or the analytics engine may make, it will be understood that in some cases more than one method may be used to determine a given piece of data (e.g., speed, acceleration, etc.) for purposes of corroboration, verification, validation, triangulation (i.e., cross-checking multiple data sources and collection procedures to evaluate the extent to which all evidence converges), etc.
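By way of illustration, such corroboration might amount to checking that independent estimates of the same quantity (e.g., speed from video analysis, GPS, and the speed sensor) agree within a tolerance; the tolerance and the aggregation by simple mean below are assumptions for the sketch.

```python
def corroborate(estimates, tolerance=5.0):
    """Cross-check independent estimates of the same quantity. Returns the mean estimate
    and whether all estimates agree within the given tolerance (placeholder values)."""
    consensus = sum(estimates) / len(estimates)
    agrees = (max(estimates) - min(estimates)) <= tolerance
    return consensus, agrees

# e.g., corroborate([62.0, 64.5, 61.0]) -> (62.5, True)
```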
In light of the principles and example embodiments described and depicted herein, it will be recognized that the example embodiments can be modified in arrangement and detail without departing from such principles. Also, the foregoing discussion has focused on particular embodiments, but other configurations are also contemplated. In particular, even though expressions such as “in one embodiment,” “in another embodiment,” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments. As a rule, any embodiment referenced herein is freely combinable with any one or more of the other embodiments referenced herein, and any number of features of different embodiments are combinable with one another, unless indicated otherwise.
Similarly, although example processes have been described with regard to particular operations performed in a particular sequence, numerous modifications could be applied to those processes to derive numerous alternative embodiments of the present invention. For example, alternative embodiments may include processes that use fewer than all of the disclosed operations, processes that use additional operations, and processes in which the individual operations disclosed herein are combined, subdivided, rearranged, or otherwise altered. This disclosure describes one or more embodiments wherein various operations are performed by certain systems, applications, modules, components, etc. In alternative embodiments, however, those operations could be performed by different components. Also, items such as applications, modules, components, etc., may be implemented as software constructs stored in a machine accessible storage medium, such as an optical disk, a hard disk drive, etc., and those constructs may take the form of applications, programs, subroutines, instructions, objects, methods, classes, or any other suitable form of control logic; such items may also be implemented as firmware or hardware, or as any combination of software, firmware and hardware, or any combination of any two of software, firmware and hardware.
This disclosure may include descriptions of various benefits and advantages that may be provided by various embodiments. One, some, all, or different benefits or advantages may be provided by different embodiments.
In view of the wide variety of useful permutations that may be readily derived from the example embodiments described herein, this detailed description is intended to be illustrative only, and should not be taken as limiting the scope of the invention. What is claimed as the invention, therefore, are all implementations that come within the scope of the following claims, and all equivalents to such implementations.
This application is a Continuation-in-Part of, and claims priority to, U.S. patent application Ser. No. 15/413,205, entitled “Systems, Apparatuses and Methods for Triggering Actions Based on Data Capture and Characterization,” filed on Jan. 23, 2017, which in turn claims the benefit of U.S. Provisional Patent Application No. 62/333,818, entitled “Systems, Apparatuses and Methods for Creating, Identifying, Enhancing, and Distributing Evidentiary Data,” filed on May 9, 2016. U.S. patent application Ser. No. 15/413,205 and U.S. Provisional Patent Application No. 62/333,818 are hereby incorporated herein by reference in their entirety.