Highways are the original network; the Internet came later. Numerous technologies are available to manage congestion and packet routing across the Internet. Numerous technologies also exist to improve Internet safety via content filtering, malware detection, etc. In contrast, decades-old roadway problems persist today: traffic jams, delayed arrivals, and road safety issues remain commonplace. Other than in-dash navigation, entertainment, and Bluetooth calling, consumer-facing technology in automobiles has changed slowly.
The present application describes systems and methods of incorporating artificial intelligence (AI) and machine learning technology into the automobile experience. As a first example, a road sense system is configured to provide near-real-time environmental updates including road conditions, temporary hazards, micro weather, and more. As a second example, a predictive maintenance system is configured to uncover problems before they happen, leveraging automatically curated maintenance records and seamless integration with car dealers and service providers. As a third example, the conventional key for an automobile is replaced with a smart key: a blockchain-enabled ID that unlocks access to AI services, serves as a natural-language-capable AI avatar in a key fob, and provides a secure digital identity for accessing user preferences. As a fourth example, a visual search system enables natural language querying and computer vision processing based on past or current conditions, so that a user can get answers to questions such as "was a newspaper delivery waiting on the front lawn as I was leaving in the morning?" As a fifth example, a smart route system provides a platform for intelligent traffic management based on information received from multiple vehicles that were recently on the road, are currently on the road, and/or will be on the road.
Particular aspects of the present disclosure are described below with reference to the drawings. In the description, common features are designated by common reference numbers throughout the drawings. As used herein, various terminology is used for the purpose of describing particular implementations only and is not intended to be limiting. For example, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It may be further understood that the terms “comprise,” “comprises,” and “comprising” may be used interchangeably with “include,” “includes,” or “including.” Additionally, it will be understood that the term “wherein” may be used interchangeably with “where.” As used herein, “exemplary” may indicate an example, an implementation, and/or an aspect, and should not be construed as limiting or as indicating a preference or a preferred implementation. As used herein, an ordinal term (e.g., “first,” “second,” “third,” etc.) used to modify an element, such as a structure, a component, an operation, etc., does not by itself indicate any priority or order of the element with respect to another element, but rather merely distinguishes the element from another element having a same name (but for use of the ordinal term). As used herein, the term “set” refers to a grouping of one or more elements, and the term “plurality” refers to multiple elements.
In the present disclosure, terms such as “determining,” “calculating,” “estimating,” “shifting,” “adjusting,” etc. may be used to describe how one or more operations are performed. It should be noted that such terms are not to be construed as limiting and other techniques may be utilized to perform similar operations. Additionally, as referred to herein, “generating,” “calculating,” “estimating,” “using,” “selecting,” “accessing,” and “determining” may be used interchangeably. For example, “generating,” “calculating,” “estimating,” or “determining” a parameter (or a signal) may refer to actively generating, estimating, calculating, or determining the parameter (or the signal) or may refer to using, selecting, or accessing the parameter (or signal) that is already generated, such as by another component or device.
As used herein, “coupled” may include “communicatively coupled,” “electrically coupled,” or “physically coupled,” and may also (or alternatively) include any combinations thereof. Two devices (or components) may be coupled (e.g., communicatively coupled, electrically coupled, or physically coupled) directly or indirectly via one or more other devices, components, wires, buses, networks (e.g., a wired network, a wireless network, or a combination thereof), etc. Two devices (or components) that are electrically coupled may be included in the same device or in different devices and may be connected via electronics, one or more connectors, or inductive coupling, as illustrative, non-limiting examples. In some implementations, two devices (or components) that are communicatively coupled, such as in electrical communication, may send and receive electrical signals (digital signals or analog signals) directly or indirectly, such as via one or more wires, buses, networks, etc. As used herein, “directly coupled” may include two devices that are coupled (e.g., communicatively coupled, electrically coupled, or physically coupled) without intervening components.
Certain operations are described herein as being performed by a network-accessible server. However, it is to be understood that such operations may be performed by multiple servers, such as in a cloud computing environment, or by node(s) of a decentralized peer-to-peer system. Certain operations are also described herein as being performed by a computer in a vehicle. In alternative implementations, such operations may be performed by a different computer, such as a user's mobile phone or a smart key device (see below).
Maps and routing apps are great for estimates and a rough sense of what the environment looks like, but they're hardly ever up-to-date with the most current data, imagery, and road information. It would be advantageous if a global positioning system (GPS) navigation app warned a user about an upcoming pothole, or that workers are occupying the right-most lane two miles ahead and the user should probably switch to a different (e.g., the left) lane. The disclosed road sense system enables this type of near-real-time information, and much more.
When in autonomous mode, the described road sense system enables a vehicle to become smarter, safer, and more aware. The road sense system may provide a smoother experience by virtue of having access not only to its own sensor data but also to what is and/or was perceived by (sensors of) an entire network of vehicles.
In one example, the road sense system utilizes communication between both local components in a vehicle and remote components accessible to the vehicle via one or more networks. To illustrate, each of a plurality of vehicles (e.g., automobiles, such as cars or trucks) may have on-board sensors, such as temperature, vibration, speed, direction, motion, and fluid-level sensors, visual/infrared cameras with views around the vehicle, GPS transceivers, etc. The vehicles may also have navigation software that is executed on a computer in the vehicles. The software on a particular vehicle may be configured to display maps and provide turn-by-turn navigation directions. The software may also update a network server with the particular vehicle's GPS location, a route that has been completed/is in-progress/is planned for the future, etc. The software may also be configured to download from the network server information regarding road conditions. The network server may aggregate information from each of the vehicles, execute artificial intelligence algorithms based on the received information, and provide notifications to selected vehicles.
For example, on-board sensors on Car 1 may detect a road condition. To illustrate, the on-board sensors may detect a pothole because Car 1 drove over the pothole, resulting in relevant sensor data, or because a computer vision algorithm executing at Car 1 or the network server detected the pothole based on image(s) from camera(s) on Car 1. A notification may be provided to Car 2 that a road condition is in a particular location on the road. In this example, the notification to Car 2 may be provided by the network server or by Car 1. To illustrate, the network server may know that Car 2 will be traveling where the road condition is located based on the fact that Car 1's software has informed the network server of its in-progress route (e.g., a position and a velocity of Car 1) and based on the fact that Car 2's software has informed the network server of its in-progress route (e.g., a position and a velocity of Car 2). Thus, the server may provide the notification based on a determination that Car 2 is approaching the position at which Car 1 encountered the road condition. As another example, Car 1 may broadcast a message that is received by Car 2 either directly or via relay by one or more other vehicles and/or fixed communication relays. When a different car detects that the road condition has been alleviated, the notification may be cancelled so that drivers of other cars are not needlessly warned. In this fashion, near-real-time updates regarding road conditions can be provided to multiple vehicles. To illustrate, until the road condition is addressed, multiple vehicles that may encounter the road condition may be notified so that their drivers can be warned. In some examples, a vehicle operating in self-driving mode may take evasive action to avoid the road condition, such as by automatically rerouting or traveling in a different lane to avoid a predicted position of the road condition based on an instruction from the network server.
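In code, the server-side notification flow could be sketched roughly as follows. This is a minimal sketch assuming a simple in-memory hazard registry; the function names, the notification radius, and the haversine helper are illustrative, not part of the disclosure:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Active road conditions reported by vehicles, keyed by a condition id.
active_conditions = {}

def report_condition(cond_id, lat, lon, kind):
    active_conditions[cond_id] = {"lat": lat, "lon": lon, "kind": kind}

def clear_condition(cond_id):
    # Called when a later vehicle confirms the condition is alleviated,
    # so other drivers are not needlessly warned.
    active_conditions.pop(cond_id, None)

def on_position_update(car_id, lat, lon, notify, radius_m=3200.0):
    """Warn an approaching vehicle about nearby active conditions
    (roughly two miles out, per the lane-closure example)."""
    for cond_id, c in active_conditions.items():
        if haversine_m(lat, lon, c["lat"], c["lon"]) <= radius_m:
            notify(car_id, f"{c['kind']} ahead near ({c['lat']:.5f}, {c['lon']:.5f})")
```

In a full implementation the check would consider the vehicle's in-progress route rather than raw proximity, but the cancel-on-alleviation behavior is the same.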
It is to be understood that the specific use cases described herein, such as the pothole use case above, are for illustration only and are not to be considered limiting. Other use cases may also apply to the described techniques. For example, vehicles may be notified if a particular lane is closed a mile or two away, so drivers (or the self-driving logic) have ample time to change lanes or take an alternative route (which may be recommended by the intelligent navigation system in the car or by the network server), which may serve to alleviate bottlenecks related to lane closures.
Whenever a new or used vehicle is purchased, it is natural for the consumer to want to be certain that every service performed on the vehicle, and every replacement part used, meets quality standards. The disclosed predictive maintenance system is a vehicle health platform that uses blockchain-powered digital records and predictive maintenance technology so that vehicles stay in excellent shape. Using data gathered from advanced on-board sensors, AI algorithms within the vehicles and/or at network servers predict maintenance needs and failures before they occur. These predictions are integrated with secure blockchain records that establish provenance and with automated service tickets (such as with a consumer's preferred service provider).
For example, aggregate historical data from multiple vehicles and maintenance service providers may include information regarding what service was performed on a vehicle and when, as well as dozens or even hundreds of data points from various sensors during time periods preceding each of the service needs. These data points can include data from sensors in the vehicles as well as sensors outside the vehicles (e.g., on roadways, street signs, etc.). Using automated model building techniques, it may be determined which of the data points are best at predicting, with a sufficient amount of lead time (e.g., a week, a month, etc.), that a particular type of service is going to be needed for a vehicle. Examples of such automated model building techniques are described with reference to
A model can be used to predict when a particular user's vehicle has a high likelihood of needing a particular maintenance service in the near future. The model may be executed at a network server and/or on the vehicle's on-board computer. As an illustrative non-limiting example, the model may determine based on a combination of sensors/metrics (e.g., temperature reading, vibration reading, fluid viscosity reading, fuel efficiency reading, tire pressure reading, etc.) that a specific engine problem (e.g., oil pump failure, spark knock, coolant leakage, radiator clog, spark plug wear, loosening gas cap, etc.) is ongoing or will occur sometime in a particular period of time (e.g., the next two weeks). In response, a notification may be displayed in the vehicle, sent to the user's smart key (see below), sent to the user via text/email, etc. A preferred maintenance service provider of the user may also be notified, and in some cases a service appointment may be automatically calendared for the user while respecting other obligations already marked on the user's calendar and other appointments that are already present on the maintenance service provider's schedule.
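As a rough illustration of the prediction and notification step, consider the sketch below. The sensor fields, alerting threshold, and `predict_proba` interface are hypothetical assumptions; the disclosed system may use any model produced by the automated model building techniques described later:

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    engine_temp_c: float
    vibration_rms: float
    oil_viscosity_cst: float
    fuel_efficiency_mpg: float
    tire_pressure_psi: float

def predict_service_need(snapshot, model, threshold=0.75):
    """Run a trained model over current sensor metrics and return
    (service_type, probability) pairs above an alerting threshold."""
    features = [
        snapshot.engine_temp_c,
        snapshot.vibration_rms,
        snapshot.oil_viscosity_cst,
        snapshot.fuel_efficiency_mpg,
        snapshot.tire_pressure_psi,
    ]
    # `model.predict_proba` is assumed to return a probability per
    # service type, e.g., {"oil_pump_failure": 0.83, ...}.
    probabilities = model.predict_proba(features)
    return [(svc, p) for svc, p in probabilities.items() if p >= threshold]

def notify_user(vehicle, alerts):
    # Hypothetical vehicle/smart-key interfaces for illustration only.
    for service_type, p in alerts:
        msg = f"{service_type} likely within 2 weeks (p={p:.2f})"
        vehicle.display(msg)         # in-vehicle notification
        vehicle.smart_key.push(msg)  # smart key e-paper display
```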
In accordance with the described techniques, each vehicle may come with one or more unique, digitally signed key fobs referred to herein as smart keys. A smart key may be (or may include) an embedded, wireless computer that enables a user to maintain constant connectivity with digital services. An always-available AI system within the smart key supports anytime voice conversation with the smart key. An integrated e-paper display provides notifications and prompts from the cognitive platform. The smart key can also unlock additional benefits, including, but not limited to, integration with "pervasive preferences." For example, as soon as the person in possession of a particular smart key enters a vehicle and/or uses their smart key to activate the vehicle, various vehicle persona preferences may be fetched from a network server (or from a memory of the smart key itself) and may be applied to the vehicle. It is to be understood that such preferences need not be vehicle-specific. Rather, the preferences may be applied whether the car is owned by the user or is a rental car, or even if the user is a passenger and the driver of the car allows the preference to be applied (e.g., the user is in the back seat of a vehicle while using a ride-hailing service and the user's preferred radio station is tuned in response to the user's smart key).
Illustrative, non-limiting examples of “pervasive preferences” that can be triggered by a smart key include automatic seat adjustment, steering settings, climate control settings, mirror and camera settings, lighting settings, entertainment settings (including downloading particular apps, music, podcasts, etc.), and vehicle performance profiles.
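A minimal sketch of fetching and applying such preferences when a smart key is detected is shown below; the preference schema and the vehicle/server interfaces are assumptions for illustration:

```python
def apply_pervasive_preferences(smart_key_id, vehicle, server):
    # Fetch the persona tied to this smart key; fall back to a copy
    # cached in the key's own memory if the network is unavailable.
    prefs = server.get_preferences(smart_key_id) or vehicle.key_cache(smart_key_id)
    if prefs is None:
        return
    vehicle.seats.adjust(prefs["seat_position"])           # seat adjustment
    vehicle.climate.set(prefs["climate"])                  # climate control
    vehicle.mirrors.set(prefs["mirrors"])                  # mirror/camera settings
    vehicle.entertainment.tune(prefs["radio_station"])     # entertainment
    vehicle.performance.set_profile(prefs["performance"])  # performance profile
```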
In various examples, the smart key includes physical buttons and/or touch buttons integrated with or surrounding a display, such as an e-paper or LCD display. The buttons may control functions such as lock/unlock, panic, trunk open/close, etc. The display may show weather information, battery status, messages received from the vehicle, the network server, or another user, calendar information, estimated travel time, etc. The smart key may also be used to access/interact with other systems described herein. For example, the smart key may display notifications from the road sense system. As another example, the smart key may display notifications from the predictive maintenance system. As another example, the smart key may be used to provide voice input to initiate a search by a visual search system (see below) and display results of the search. As yet another example, a user may use their smart key to provide voice input regarding a planned route to a smart route system (see below). A particular illustrative example of a smart key is shown in
In accordance with the described techniques, a user's vehicle provides the appearance of a near-perfect photographic memory. As examples, the user can ask their car to remind them where exactly they saw that wonderful gelateria with the beautiful red door, whether there was a package by the front door that they failed to notice as they were driving to work in the morning, etc. With the visual search system, a vehicle is capable of seeing, perceiving, and remembering, as well as responding to questions expressed in natural language. The visual search system may be accessed from a smart key, a mobile phone app, and/or within the vehicle itself.
In some examples, the visual search system stores images/videos captured by some or all of a vehicle's cameras. Such data may be stored at the vehicle, at network-accessible storage, or a combination thereof. The images/videos may be stored in compressed fashion, or computer vision features extracted by feature extraction algorithms may be stored instead of the raw images/videos.
Artificial intelligence algorithms such as object detection, object recognition, etc. may operate on the stored data based on input from a natural language processing system and potentially in conjunction with other systems. For example, in the "gelateria with the beautiful red door" example described above, the natural language processing system may determine that the user is looking for a dessert shop that the user drove past, where the dessert shop (or a shop near it) had a door that was painted red (or a color close to red) and may have had decoration on the door. Using this input, the visual search system may conduct a search of historical camera data from the user's vehicle, GPS/trip information regarding previous travel by the user (whether in the user's car or in another car while the user had his/her smart key), and navigation places-of-interest information to find candidates for the dessert shop in question. A list of the search results can be displayed to the user via the smart key, a mobile app, or on a display screen in the vehicle the user is in. Establishments that serve gelato or have red doors may be elevated in the list of search results, and a photo of such a red door (or the establishment in general) may be displayed, if available.
A more targeted search can be conducted for the “did I fail to notice a package this morning” example. In this example, the visual search system may simply determine which camera(s) were pointed at the door/yard of the user's home when the user's car was parked overnight, and may scan through the images/video from such cameras to determine if a package was present or a delivery was made during the timeframe in question.
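This targeted search could reduce to a scan along the following lines. This is a sketch; the camera-log API and object detector shown are hypothetical placeholders for whatever storage and computer vision components the system uses:

```python
def was_package_present(camera_log, detector, start, end, home_location):
    """Scan overnight frames from cameras aimed at the user's door/yard
    for a 'package' detection within the [start, end] time window."""
    # Only cameras that faced the home while the car was parked matter.
    cameras = camera_log.cameras_facing(home_location, while_parked=True)
    for cam in cameras:
        for frame in camera_log.frames(cam, start, end):
            # `detector.detect` is assumed to return labeled detections.
            detections = detector.detect(frame.image)
            if any(d.label == "package" for d in detections):
                return True, frame.timestamp
    return False, None

# Example query: "did I fail to notice a package this morning?"
# found, when = was_package_present(log, detector, overnight_start,
#                                   departure_time, home_location)
```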
Other automatic/manually-initiated searches are also possible using the visual search system: "What's that Thai place I love?", "Where's that ice cream shop? I know there was a park with a white fence around it.", "Where is the soccer tournament James took Tommy to this morning?" (where James and Tommy are family members and at least one of them has their own smart key or other GPS-enabled device), "Have I seen a blue SUV with a license plate number ending in 677?" The last of these may even be performed automatically in response to an Amber/Silver/Gold/Blue Alert. Some examples of search queries, including visual search queries, are shown in
A smart route system in accordance with the present disclosure may utilize predictive algorithms that monitor expected arrival times reported by various vehicles/user devices. The smart route system may also utilize an AI-powered reservation system that supports "booking" of roadway (e.g., highway) capacity by piloted and autonomous vehicles. For example, various vehicles that will be traveling on a commonly-used roadway may "book" the roadway. "Booking" a roadway may simply mean notifying a network server of the intended route/time of travel, or may actually involve receiving confirmation of booking, from a network server associated with a transit/toll authority, to travel on the road. The confirmation of booking may identify a particular time or time period that the vehicle has booked. Such "bookings" may be incentivized, for example by lower toll fees or by virtue of fines or higher tolls being levied against un-booked vehicles.
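A booking exchange with a transit/toll authority server might be modeled as follows; the message fields, the capacity check, and the discount value are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BookingRequest:
    vehicle_id: str        # tied to the user's smart key identity
    roadway_id: str        # e.g., a tolled highway segment
    window_start: datetime
    window_end: datetime

@dataclass
class BookingConfirmation:
    request: BookingRequest
    confirmed_slot: datetime  # the particular time the vehicle has booked
    toll_discount: float      # incentive for booking in advance

def book_roadway(server, request):
    """Send an intended route/time to the transit/toll authority.
    Returns a confirmation, or None if capacity is exhausted."""
    if server.capacity_remaining(request.roadway_id, request.window_start) <= 0:
        return None
    slot = server.reserve(request.roadway_id, request.window_start)
    return BookingConfirmation(request, slot, toll_discount=0.25)
```

A simpler deployment could omit the confirmation entirely and treat the request itself as the notification of intended travel, per the first "booking" variant above.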
The smart route system may be simple to use. A user may start by associating an account with their smart key. Next, the user may specify their home, office, and other frequent destinations. AI can do the rest. As the user begins to drive their vehicle, the smart route system detects common trips and schedules. Via the smart key (or a mobile app), the smart route system may ask the user whether they would like to make advance reservations for roadways and may provide information on a successful booking (e.g., the time that the reservation was made). The smart route system may integrate with the user's calendar to propose advance route reservations for any identified destination.
To illustrate, as more vehicles include the smart route system and more users engage with it, more accurate predictions regarding current route delays can be made, and more advance knowledge of the origins and destinations of vehicles becomes available. The smart route system may use this data to project future roadway capacity constraints. In some examples, the smart route system may re-route a vehicle, notify a driver of departure time changes, and list optional travel windows with expected arrival times based on intended routes of other vehicles, the user's calendar, the current location of the vehicle, a destination of the vehicle, or a combination thereof.
In some cases, the smart route system rewards responsible drivers who follow recommended instructions/road reservations. The smart route system may also recommend a driving speed, because in some cases reducing speed may actually help a user reach their destination faster. Similarly, the smart route system may notify the user that they are better off leaving earlier or later than planned in view of expected traffic. If a user has a flexible schedule, the smart route system may incentivize delayed departures and give route priority to drivers that are on a tighter schedule.
In
The AI system tier 120 includes automated model building, models (some of which may be artificial neural networks), computer vision algorithms, intelligent routing algorithms, and natural language processing engines. Examples of such AI system components are further described with reference to
The output category 130 includes road sense notifications, predictive maintenance notifications, smart key output, visual search results, and smart route recommendations. It is to be understood that in alternative implementations, the input category 110, the AI system tier 120, and/or the output category 130 may have different components than those shown in
In some examples, the described techniques may enable a vehicle to operate as an autonomous agent device. Unless otherwise clear from the context, the term “autonomous agent device” refers to both fully autonomous devices and semi-autonomous devices while such semi-autonomous devices are operating independently. A fully autonomous device is a device that operates as an independent agent, e.g., without external supervision or control. A semi-autonomous device is a device that operates at least part of the time as an independent agent, e.g., autonomously within some prescribed limits or autonomously but with supervision. An example of a semi-autonomous agent device is a self-driving vehicle in which a human driver is present to supervise operation of the vehicle and can take over control of the vehicle if desired. In this example, the self-driving vehicle may operate autonomously after the human driver initiates a self-driving system and may continue to operate autonomously until the human driver takes over control. As a contrast to this example, an example of a fully autonomous agent device is a fully self-driving car in which no driver is present (although passengers may be).
In some examples, such as for the predictive maintenance system, a public, tamper-evident ledger may be used. The public, tamper-evident ledger includes a blockchain of a shared blockchain data structure, instances of which may be stored in local memories of vehicles and/or at network servers.
As described further below, the agent devices 402-408 of
Although
In some implementations, the agent devices 402-408 include diverse types of devices. For example, the agent device 402 may differ in type and functionality (e.g., expected behavior) from the agent device 408. To illustrate, the agent device 402 may include an autonomous aircraft, and the agent device 408 may include an infrastructure device at an airport. Likewise, the other agent devices 404, 406 may be of the same type as one another or may be of different types. While only the features of the agent device 402 are shown in detail in
In
The sensors 422 can include a wide variety of types of sensors configured to sense an environment around the agent device 402. The sensors 422 can include active sensors that transmit a signal (e.g., an optical, acoustic, or electromagnetic signal) and generate sensed data based on a return signal, passive sensors that generate sensed data based on signals from other devices (e.g., other agent devices, etc.) or based on environmental changes, or a combination thereof. Generally, the sensors 422 can include any combination of or set of sensors that enable the agent device 402 to perform its core functionality and that further enable the agent device 402 to detect the presence of other agent devices 404-408 in proximity to the agent device 402. In some implementations, the sensors 422 further enable the agent device 402 to determine an action that is being performed by an agent device that is detected in proximity to the agent device 402. In this implementation, the specific type or types of the sensors 422 can be selected based on actions that are to be detected. For example, if the agent device 402 is to determine whether one of the other agent devices 404-408 is driving erratically, the agent device 402 may include an acoustic sensor that is capable of isolating sounds associated with erratic driving (e.g., tire squeals, engine noise variations, etc.). Alternatively, or in addition, the agent device 402 may include an optical sensor that is capable of detecting erratic movement of a vehicle.
The behavior actuators 426 include any combination of actuators (and associated linkages, joints, etc.) that enable the agent device 402 to perform its core functions. The behavior actuators 426 can include one or more electrical actuators, one or more magnetic actuators, one or more hydraulic actuators, one or more pneumatic actuators, one or more other actuators, or a combination thereof. The specific arrangement and type of behavior actuators 426 depends on the core functionality of the agent device 402. For example, if the agent device 402 is an automobile, the behavior actuators 426 may include one or more steering actuators, one or more acceleration actuators, one or more braking actuators, etc. In another example, if the agent device 402 is a household cleaning robot, the behavior actuators 426 may include one or more movement actuators, one or more cleaning actuators, etc. Thus, the complexity and types of the behavior actuators 426 can vary greatly from agent device to agent device depending on the purpose or core functions of each agent device.
The processor 420 is configured to execute instructions 436 from the memory 434 to perform various operations. For example, the instructions 436 include behavior instructions 438 which include programming or code that enables the agent device 402 to perform processing associated with one or more useful functions of the agent device 402. To illustrate, the behavior instructions 438 may include artificial intelligence instructions that enable the agent device 402 to autonomously (or semi-autonomously) determine a set of actions to perform. The behavior instructions 438 are executed by the processor 420 to perform core functionality of the agent device 402 (e.g., to perform the main task or tasks for which the agent device 402 was designed or programmed). As a specific example, if the agent device 402 is a self-driving vehicle, the behavior instructions 438 include instructions for controlling the vehicle's speed, steering the vehicle, processing sensor data to identify hazards, avoiding hazards, and so forth.
The instructions 436 also include blockchain manager instructions 444. The blockchain manager instructions 444 are configured to generate and maintain the blockchain. As explained above, the blockchain data structure 450 is an instance of, or an instance of at least a portion of, the shared blockchain data structure 410. The shared blockchain data structure 410 is shared in a distributed manner across a plurality of the agent devices 402-408 or across all of the agent devices 402-408. In a particular implementation, each of the agent devices 402-408 stores an instance of the shared blockchain data structure 410 in local memory of the respective agent device. In other implementations, each of the agent devices 402-408 stores a portion of the shared blockchain data structure 410, and each portion is replicated across multiple of the agent devices 402-408 in a manner that maintains the shared blockchain data structure 410 as a public (i.e., available to other agent devices) and incorruptible (or tamper-evident) ledger.
The shared blockchain data structure 410 stores, among other things, data determined based on observation reports from the agent devices 402-408. An observation report for a particular time period includes data descriptive of a sensed environment around one of the agent devices 402-408 during the particular time period. To illustrate, when a first agent device senses the presence or actions of a second agent device, the first agent device may generate an observation including data reporting the location and/or actions of the second agent device and may include the observation (possibly with one or more other observations) in an observation report. Each agent device 402-408 sends its observation reports to the other agent devices 402-408. For example, the agent device 402 may broadcast an observation report 480 to the other agent devices 404-408. In another example, the agent device 402 may transmit an observation report 480 to another agent device (e.g., the agent device 404) and the other agent device may forward the observation report 480 using a message forwarding functionality or a mesh networking communication functionality. Likewise, the other agent devices 404-408 transmit observation reports 482-486 that are received by the agent device 402. In some examples when the distributed agents include vehicles, observation reports may include information regarding conditions (e.g., travel speed, traffic conditions, weather conditions, potholes, etc.) detected by the vehicles, trip/booking information, etc.
The observation reports 480-486 are used to generate blocks of the shared blockchain data structure 410. For example,
The block data of each block includes information that identifies the block (e.g., a block id.) and enables the agent devices 402-408 to confirm the integrity of the blockchain of the shared blockchain data structure 410. For example, the block id. of the sample block 418 may include or correspond to a result of a hash function (e.g., a SHA256 hash function, a RIPEMD hash function, etc.) based on the observation data in the sample block 418 and based on a block id. from the prior block of the blockchain. For example, in
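For illustration, the hash chaining described here might look like the following sketch, assuming JSON-serializable observation data; SHA-256 is one of the hash functions named above:

```python
import hashlib
import json

def compute_block_id(observations, prior_block_id):
    """Block id = hash over the block's observation data plus the prior
    block's id, so altering any historical block breaks the chain."""
    payload = json.dumps({"obs": observations, "prev": prior_block_id},
                         sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def verify_chain(blocks):
    """Each block stores {'obs', 'prev', 'id'}; recompute ids to confirm
    the integrity of the blockchain."""
    for i, blk in enumerate(blocks):
        expected_prev = blocks[i - 1]["id"] if i > 0 else "genesis"
        if blk["prev"] != expected_prev:
            return False
        if compute_block_id(blk["obs"], blk["prev"]) != blk["id"]:
            return False
    return True
```

Each agent device can run a check like `verify_chain` over blocks received from peers before accepting them into its local instance.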
Each of the observation reports 480-486 may include a self-reported location and/or action of the agent device that sent the observation report, a sensed location and/or action of another agent device, sensed locations and/or observations of several other agent devices, other information regarding "smart" vehicle functions described with reference to
In some implementations, the blockchain manager instructions 442 are configured to determine whether an observation in the observation buffer 448 is confirmed by one or more other observations. For example, after the observation report 482 is received from the agent device 404, data from the observation report 482 (e.g., one or more observations) is stored in the observation buffer 448. Subsequently, the sensors 422 of the agent device 402 may generate sensed data that confirms the data. Alternatively, or in addition, another of the agent devices 406-408 may send an observation report 484, 486 that confirms the data. In this example, the blockchain manager instructions 442 may indicate that the data from the observation report 482 stored in the observation buffer 448 is confirmed. For example, the blockchain manager instructions 442 may mark or tag the data as confirmed (e.g., using a confirmed bit, a pointer, or a counter indicating a number of confirmations). As another example, the blockchain manager instructions 442 may move the data to a location of the observation buffer 448 in the memory 434 that is associated with confirmed observations. In some implementations, data that is not confirmed is eventually removed from the observation buffer 448. For example, each observation or each observation report 480-486 may be associated with a time stamp, and the blockchain manager instructions 442 may remove an observation from the observation buffer 448 if the observation is not confirmed within a particular time period following the time stamp. As another example, the blockchain manager instructions 442 may remove an observation from the observation buffer 448 if at least one block that includes observations within a time period corresponding to the time stamp has been added to the blockchain.
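The buffer's confirm-and-expire behavior might be sketched as follows; the counter-based confirmation tag and the timeout value are assumptions consistent with the alternatives listed above:

```python
import time

class ObservationBuffer:
    def __init__(self, confirm_threshold=1, max_age_s=300.0):
        self._entries = {}  # observation key -> {"count", "ts", "obs"}
        self.confirm_threshold = confirm_threshold
        self.max_age_s = max_age_s

    def add(self, key, observation):
        """Store an observation from a received report (or own sensors);
        repeated sightings of the same key count as confirmations."""
        entry = self._entries.setdefault(
            key, {"count": 0, "ts": time.time(), "obs": observation})
        entry["count"] += 1

    def confirmed(self):
        """Observations seen more times than the confirmation threshold."""
        return [e["obs"] for e in self._entries.values()
                if e["count"] > self.confirm_threshold]

    def expire_unconfirmed(self):
        """Drop observations not confirmed within the allowed window."""
        now = time.time()
        for key in list(self._entries):
            e = self._entries[key]
            if (e["count"] <= self.confirm_threshold
                    and now - e["ts"] > self.max_age_s):
                del self._entries[key]
```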
The blockchain manager instructions 442 are also configured to determine when a block forming trigger satisfies a block forming condition. The block forming trigger may include or correspond to a count of observations in the observation buffer 448, a count of confirmed observations in the observation buffer 448, a count of observation reports received since the last block was added to the blockchain, a time interval since the last block was added to the blockchain, another criterion, or a combination thereof. If the block forming trigger corresponds to a count (e.g., of observations, of confirmed observations, or of observation reports), the block forming condition corresponds to a threshold value for the count, which may be based on a number of agent devices in the group. For example, the threshold value may correspond to a simple majority of the agent devices in the group or to a specified fraction of the agent devices in the group.
In a particular implementation, when the block forming condition is satisfied, the blockchain manager instructions 444 form a block using confirmed data from the observation buffer 448. The blockchain manager instructions 444 then cause the block to be transmitted to the other agent devices, e.g., as block Bk_n+1 490 in
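Tying these pieces together, a count-based block forming check might look like this self-contained sketch; the simple-majority threshold is one of the disclosed options, and the field names mirror the hash-chaining sketch above:

```python
import hashlib
import json

def maybe_form_block(confirmed_observations, chain, group_size, broadcast):
    """Form and broadcast a new block once the count of confirmed
    observations exceeds a simple majority of the group."""
    if len(confirmed_observations) <= group_size // 2:
        return None  # block forming condition not yet satisfied
    prev_id = chain[-1]["id"] if chain else "genesis"
    payload = json.dumps({"obs": confirmed_observations, "prev": prev_id},
                         sort_keys=True).encode("utf-8")
    block = {"obs": confirmed_observations, "prev": prev_id,
             "id": hashlib.sha256(payload).hexdigest()}
    chain.append(block)
    broadcast(block)  # e.g., transmitted to the other agent devices
    return block
```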
The memory 434 also includes behavior evaluation instructions 446, which are executable by the processor 420 to determine a behavior of another agent and to determine whether the behavior conforms to a behavior criterion associated with the other agent device. The behavior can be determined based on observation data from the blockchain, from confirmed observations in the observation buffer 448, or a combination thereof. Some behaviors may be determined based on a single confirmed observation. For example, if a device is observed swerving to avoid an obstacle on the road and the observation is confirmed, the confirmed observation corresponds to the behavior “avoiding obstacle”. Other behaviors may be determined based on two or more confirmed observations. For example, a first confirmed observation may indicate that the agent device is at a first location at a first time, and a second confirmed observation may indicate that the agent device is at a second location at a second time. These two confirmed observations can be used to determine a behavior indicating an average direction (i.e., from the first location toward the second location) and an average speed of movement of the agent device (based on the first time, the second time, and a distance between the first location and the second location). Such information may be utilized by the road sense system and/or the smart route system described with reference to
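Deriving average speed and heading from two confirmed position observations is straightforward; the sketch below uses the standard haversine distance and initial great-circle bearing formulas:

```python
import math

def average_motion(lat1, lon1, t1, lat2, lon2, t2):
    """Average speed (m/s) and initial bearing (degrees) between two
    confirmed position observations taken at times t1 < t2 (seconds)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    # Great-circle distance between the two observed locations.
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    distance_m = 2 * r * math.asin(math.sqrt(a))
    speed = distance_m / (t2 - t1)
    # Initial bearing from the first observation toward the second.
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return speed, bearing
```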
The particular behavior or set of behaviors determined for each agent device may depend on behavior criteria associated with each agent device. For example, if behavior criteria associated with the agent device 404 specify a boundary beyond which the agent device 404 is not allowed to carry passengers, the behavior evaluation instructions 446 may evaluate each confirmed observation of the agent device 404 to determine whether the agent device 404 is performing a behavior corresponding to carrying passengers, and a location of the agent device 404 for each observation in which the agent device 404 is carrying passengers. In another example, a behavior criterion associated with the agent device 406 may specify that the agent device 406 should always move at a speed less than a speed limit value. In this example, the behavior evaluation instructions 446 do not determine whether the agent device 406 is performing the behavior corresponding to carrying passengers; however, the behavior evaluation instructions 446 may determine a behavior corresponding to an average speed of movement of the agent device 406. The behavior criteria for any particular agent device 402-408 may identify behaviors that are required (e.g., always stop at stop signs), behaviors that are prohibited (e.g., never exceed a speed limit), behaviors that are conditionally required (e.g., maintain an altitude of greater than 4000 meters while operating within 2 kilometers of a naval vessel), behaviors that are conditionally prohibited (e.g., never arm weapons while operating within 2 kilometers of a naval vessel), or a combination thereof. Based on the confirmed observations, each agent device 402-408 determines corresponding behavior of each other agent device based on the behavior criteria for the other agent device.
After determining a behavior for a particular agent device, the behavior evaluation instructions 446 compare the behavior to the corresponding behavior criterion to determine whether the particular agent device is conforming to the behavior criterion. In some implementations, the behavior criterion is satisfied if the behavior is allowed (e.g., is whitelisted), required, or conditionally required and the condition is satisfied. In other implementations, the behavior criterion is satisfied if the behavior is not disallowed (e.g., is not blacklisted), is not prohibited, is not conditionally prohibited and the condition is satisfied, or is conditionally prohibited but the condition is not satisfied. In yet other examples, criteria representing events of interest (e.g., avoiding road obstacles, slowing down due to traffic congestion, exiting to a roadway that is not listed in a previously filed (e.g., in the blockchain) travel plan, etc.) may be established and checked.
In some implementations, the behavior criteria for each of the agent devices 402-408 are stored in the shared blockchain data structure 410. In other implementations, the behavior criteria for each of the agent devices 402-408 are stored in the memory of each agent device 402-408. In other implementations, the behavior criteria are accessed from a trusted public source, such as a trusted repository, based on the identity or type of agent device associated with the behavior criteria. In yet another implementation, an agent device may transmit data indicating behavior criteria for the agent device to other agent devices of the group when the agent device joins the group. In this implementation, the data may include or be accompanied by information that enables the other agent devices to confirm the authenticity of the behavior criteria. For example, the data (or the behavior criteria) may be encrypted by a trusted source (e.g., using a private key of the trusted source) before being stored on the agent device. To illustrate, when the agent device 402 receives data indicating behavior criteria for the agent device 406, the agent device 402 can confirm that the behavior criteria came from the trusted source by decrypting the data using a public key associated with the trusted source. Thus, the agent device 406 is not able to transmit fake behavior criteria to avoid appropriate scrutiny of its behavior.
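The described private-key encryption with public-key verification behaves like a digital signature; below is a sketch using Ed25519 signatures from the `cryptography` package, with an assumed criteria format (in practice the private key would never leave the trusted source):

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Trusted source signs the behavior criteria before they are stored
# on the agent device (key generation shown here for demonstration).
trusted_key = Ed25519PrivateKey.generate()
criteria = {"max_speed_mps": 31.0, "always_stop_at_stop_signs": True}
payload = json.dumps(criteria, sort_keys=True).encode("utf-8")
signature = trusted_key.sign(payload)

# A receiving agent verifies with the trusted source's public key, so a
# device cannot transmit fake criteria to avoid scrutiny of its behavior.
public_key = trusted_key.public_key()
try:
    public_key.verify(signature, payload)
    accepted = json.loads(payload)
except InvalidSignature:
    accepted = None  # reject: criteria not from the trusted source
```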
In some implementations, if a first agent device determines that a second agent device is violating a criterion for expected behavior associated with the second agent device, the first agent device may execute response instructions 440. The response instructions 440 are executable to initiate and perform a response action. For example, each agent device 402-408 may include a response system, such as a response system 430 of the agent device 402. Depending on implementation and the nature of the agent devices, the response system 430 may initiate various actions.
In the case of autonomous military aircraft, the actions may be configured to stop the second agent device or to limit effects of the second agent device's non-conforming behavior. For example, the first agent device may attempt to secure, constrain, or confine the second agent device. To illustrate, such actions may include causing the agent device 402 to move toward the agent device 404 to block a path of the agent device 404, using a restraint mechanism (e.g., a tether) that the agent device 402 can attach to the agent device 404 to stop or limit the non-conforming behavior of the agent device 404, etc.
In the case of autonomous road vehicles (e.g., passenger cars, trucks, and SUVs), the response actions may include communicating and/or using observations regarding other agents. For example, if a first vehicle observes a second vehicle in a neighboring lane swerve to avoid a road obstacle, both the first vehicle and the second vehicle may provide corresponding observations and data (e.g., sensor readings, camera photos of the obstacle, etc.) to the road sense system, which may in turn respond to the verified observation of the road obstacle by pushing an alert to other vehicles that will encounter the obstacle. When confirmed observation(s) are received that the obstacle has been cleared, the road sense system may clear the notification.
Referring to
In particular aspects, the genetic algorithm 510 is executed on a different device, processor (e.g., central processing unit (CPU), graphics processing unit (GPU), or other type of processor), processor core, and/or thread (e.g., hardware or software thread) than the backpropagation trainer 580. The genetic algorithm 510 and the backpropagation trainer 580 may cooperate to automatically generate a neural network model of a particular data set, such as an illustrative input data set 502. In particular aspects, the system 500 includes a pre-processor 504 that is communicatively coupled to the genetic algorithm 510. Although
As further described herein, the system 500 may provide an automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set. Additionally, the system 500 may simplify the neural network model to avoid overfitting and to reduce the computing resources required to run the model.
The genetic algorithm 510 includes or is otherwise associated with a fitness function 540, a stagnation criterion 550, a crossover operation 560, and a mutation operation 570. As described above, the genetic algorithm 510 may represent a recursive search process. Consequently, each iteration of the search process (also called an epoch or generation of the genetic algorithm) may have an input set (or population) 520 and an output set (or population) 530. The input set 520 of an initial epoch of the genetic algorithm 510 may be randomly or pseudo-randomly generated. After that, the output set 530 of one epoch may be the input set 520 of the next (non-initial) epoch, as further described herein.
The input set 520 and the output set 530 may each include a plurality of models, where each model includes data representative of a neural network. For example, each model may specify a neural network by at least a neural network topology, a series of activation functions, and connection weights. The topology of a neural network may include a configuration of nodes of the neural network and connections between such nodes. The models may also be specified to include other parameters, including but not limited to bias values/functions and aggregation functions.
Additional examples of neural network models are further described with reference to
The connection data 620 for each connection in a neural network may include at least one of a node pair or a connection weight. For example, if a neural network includes a connection from node N1 to node N2, then the connection data 620 for that connection may include the node pair <N1, N2>. The connection weight may be a numerical quantity that influences if and/or how the output of N1 is modified before being input at N2. In the example of a recurrent network, a node may have a connection to itself (e.g., the connection data 620 may include the node pair <N1, N1>).
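A model of this kind might be represented as follows; this is a hypothetical encoding consistent with the node and connection data described for the model 600, not a mandated data structure:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    node_id: str
    activation: str = "sigmoid"   # e.g., "sigmoid", "tanh"
    aggregation: str = "sum"
    bias: float = 0.0

@dataclass
class Connection:
    src: str       # node pair <N1, N2>: output of src feeds into dst
    dst: str
    weight: float  # influences how src's output is modified before input at dst

@dataclass
class Model:
    nodes: Dict[str, Node] = field(default_factory=dict)
    connections: List[Connection] = field(default_factory=list)
    species_id: Optional[int] = None   # cf. species ID 630
    fitness: Optional[float] = None    # cf. fitness data 640

# A recurrent connection is simply the node pair <N1, N1>:
recurrent = Connection("N1", "N1", weight=0.5)
```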
The model 600 may also include a species identifier (ID) 630 and fitness data 640. The species ID 630 may indicate which of a plurality of species the model 600 is classified in, as further described with reference to
Returning to
In a particular aspect, fitness evaluation of models may be performed in parallel. To illustrate, the system 500 may include additional devices, processors, cores, and/or threads 590 to those that execute the genetic algorithm 510 and the backpropagation trainer 580. These additional devices, processors, cores, and/or threads 590 may test model fitness in parallel based on the input data set 502 and may provide the resulting fitness values to the genetic algorithm 510.
In a particular aspect, the genetic algorithm 510 may be configured to perform speciation. For example, the genetic algorithm 510 may be configured to cluster the models of the input set 520 into species based on “genetic distance” between the models. Because each model represents a neural network, the genetic distance between two models may be based on differences in nodes, activation functions, aggregation functions, connections, connection weights, etc. of the two models. In an illustrative example, the genetic algorithm 510 may be configured to serialize a model into a bit string. In this example, the genetic distance between models may be represented by the number of differing bits in the bit strings corresponding to the models. The bit strings corresponding to models may be referred to as “encodings” of the models. Speciation is further described with reference to
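With bit-string encodings, genetic distance reduces to a Hamming distance, and speciation to clustering on that distance. In the sketch below, the greedy clustering scheme and the threshold are illustrative assumptions:

```python
def genetic_distance(encoding_a: str, encoding_b: str) -> int:
    """Number of differing bits between two models' bit-string encodings."""
    assert len(encoding_a) == len(encoding_b)
    return sum(1 for a, b in zip(encoding_a, encoding_b) if a != b)

def speciate(encodings, threshold):
    """Greedy clustering: a model joins the first species whose
    representative is within `threshold` bits, else founds a new one."""
    species = []  # list of (representative index, [member indices])
    for i in range(len(encodings)):
        for rep, members in species:
            if genetic_distance(encodings[i], encodings[rep]) <= threshold:
                members.append(i)
                break
        else:
            species.append((i, [i]))
    return species
```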
Because the genetic algorithm 510 is configured to mimic biological evolution and principles of natural selection, it may be possible for a species of models to become “extinct.” The stagnation criterion 550 may be used to determine when a species should become extinct, e.g., when the models in the species are to be removed from the genetic algorithm 510. Stagnation is further described with reference to
The crossover operation 560 and the mutation operation 570 are highly stochastic under certain constraints and a defined set of probabilities optimized for model building, and they provide reproduction operations that can be used to generate the output set 530, or at least a portion thereof, from the input set 520. In a particular aspect, the genetic algorithm 510 utilizes intra-species reproduction but not inter-species reproduction in generating the output set 530. Including intra-species reproduction and excluding inter-species reproduction may be based on the assumption that, because they share more genetic traits, the models of a species are more likely to cooperate and will therefore more quickly converge on a sufficiently accurate neural network. In some examples, inter-species reproduction may be used in addition to or instead of intra-species reproduction to generate the output set 530. Crossover and mutation are further described with reference to
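As a hedged illustration of what such reproduction operations can look like over bit-string encodings, the sketch below uses single-point crossover and a per-bit mutation rate, which are common choices rather than disclosed values:

```python
import random

def crossover(parent_a: str, parent_b: str) -> str:
    """Single-point crossover: the child takes a prefix from one parent
    and the suffix from the other."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(encoding: str, rate: float = 0.01) -> str:
    """Flip each bit independently with probability `rate`."""
    return "".join(("1" if bit == "0" else "0") if random.random() < rate
                   else bit for bit in encoding)

def reproduce(species_members, rate=0.01):
    """Intra-species reproduction: both parents drawn from one species."""
    a, b = random.sample(species_members, 2)
    return mutate(crossover(a, b), rate)
```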
Left alone and given time to execute enough epochs, the genetic algorithm 510 may be capable of generating a model (and by extension, a neural network) that meets desired accuracy requirements. However, because genetic algorithms utilize randomized selection, it may be overly time-consuming for a genetic algorithm to arrive at an acceptable neural network. In accordance with the present disclosure, to “help” the genetic algorithm 510 arrive at a solution faster, a model may occasionally be sent from the genetic algorithm 510 to the backpropagation trainer 580 for training. This model is referred to herein as a trainable model 522. In particular, the trainable model 522 may be based on crossing over and/or mutating the fittest models of the input set 520, as further described with reference to
The backpropagation trainer 580 may utilize a portion, but not all, of the input data set 502 to train the connection weights of the trainable model 522, thereby generating a trained model 582. For example, the portion of the input data set 502 may be input into the trainable model 522, which may in turn generate output data. The portion of the input data set 502 and the output data may be used to determine an error value, and the error value may be used to modify connection weights of the model, such as by using gradient descent or another function.
The backpropagation trainer 580 may train using a portion rather than all of the input data set 502 to mitigate overfit concerns and/or to shorten training time. The backpropagation trainer 580 may leave aspects of the trainable model 522 other than connection weights (e.g., neural network topology, activation functions, etc.) unchanged. Backpropagating a portion of the input data set 502 through the trainable model 522 may serve to positively reinforce “genetic traits” of the fittest models in the input set 520 that were used to generate the trainable model 522. Because the backpropagation trainer 580 may be executed on a different device, processor, core, and/or thread than the genetic algorithm 510, the genetic algorithm 510 may continue executing additional epoch(s) while the connection weights of the trainable model 522 are being trained. When training is complete, the trained model 582 may be input back into (a subsequent epoch of) the genetic algorithm 510, so that the positively reinforced “genetic traits” of the trained model 582 are available to be inherited by other models in the genetic algorithm 510.
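The weight-only training step amounts to ordinary gradient descent over the held-out portion of the data. Below is a minimal single-layer sketch; the sigmoid activation and squared-error loss are illustrative assumptions, and, as described, topology and activation functions stay fixed:

```python
import numpy as np

def train_connection_weights(weights, x_batch, y_batch, lr=0.01, epochs=100):
    """Gradient-descent training of connection weights only (topology and
    activation functions are left unchanged).
    x_batch: (n, d) portion of the input data set; y_batch: (n,) targets."""
    w = weights.copy()
    for _ in range(epochs):
        z = x_batch @ w
        pred = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
        err = pred - y_batch              # error value vs. targets
        # Gradient of 0.5 * mean squared error w.r.t. the weights.
        grad = x_batch.T @ (err * pred * (1.0 - pred)) / len(y_batch)
        w -= lr * grad                    # descend the gradient
    return w

# The trained weights are copied back into the trainable model 522 to
# produce the trained model 582, which re-enters the genetic algorithm.
```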
Operation of the system 500 is now described with reference to
During a configuration stage of operation, a user may specify data sources from which the pre-processor 504 is to determine the input data set 502. The user may also specify a particular data field or a set of data fields in the input data set 502 to be modeled. The pre-processor 504 may determine the input data set 502, determine a machine learning problem type to be solved, and initialize the automated model building (AMB) engine (e.g., the genetic algorithm 510 and/or the backpropagation trainer 580) based on the input data set 502 and the machine learning problem type. As an illustrative non-limiting example, the pre-processor 504 may determine that the data field(s) to be modeled corresponds to output nodes of a neural network that is to be generated by the system 500. For example, if a user indicates that the value of a particular data field is to be modeled (e.g., to predict the value based on other data of the data set), the model may be generated by the system 500 to include an output node that generates an output value corresponding to a modeled value of the particular data field. In particular implementations, the user can also configure other aspects of the model. For example, the user may provide input to indicate a particular data field of the data set that is to be included in the model or a particular data field of the data set that is to be omitted from the model. As another example, the user may provide input to constrain allowed model topologies. To illustrate, the model may be constrained to include no more than a specified number of input nodes, no more than a specified number of hidden layers, or no recurrent loops.
Further, in particular implementations, the user can configure aspects of the genetic algorithm 510, such as via input to the pre-processor 504 or graphical user interfaces (GUIs) generated by the pre-processor 504. For example, the user may provide input to limit a number of epochs that will be executed by the genetic algorithm 510. Alternatively, the user may specify a time limit indicating an amount of time that the genetic algorithm 510 has to generate the model, and the genetic algorithm 510 may determine a number of epochs that will be executed based on the specified time limit. To illustrate, an initial epoch of the genetic algorithm 510 may be timed (e.g., using a hardware or software timer at the computing device executing the genetic algorithm 510), and a total number of epochs that are to be executed within the specified time limit may be determined accordingly. As another example, the user may constrain a number of models evaluated in each epoch, for example by constraining the size of the input set 520 and/or the output set 530. As yet another example, the user can define a number of trainable models 522 to be trained by the backpropagation trainer 580 and fed back into the genetic algorithm 510 as trained models 582.
In particular aspects, configuration of the genetic algorithm 510 by the pre-processor 504 includes performing other pre-processing steps. For example, the pre-processor 504 may determine whether a neural network is to be generated for a regression problem, a classification problem, a reinforcement learning problem, etc. As another example, the input data set 502 may be “cleaned” to remove obvious errors, fill in data “blanks,” etc. in the data source(s) from which the input data set 502 is generated. As another example, values in the input data set 502 may be scaled (e.g., to values between 0 and 1) relative to values in the data source(s). As yet another example, non-numerical data (e.g., categorical classification data or Boolean data) in the data source(s) may be converted into numerical data or some other form of data that is compatible for ingestion and processing by a neural network. Thus, the pre-processor 504 may serve as a “front end” that enables the same AMB engine to be driven by input data sources for multiple types of computing problems, including but not limited to classification problems, regression problems, and reinforcement learning problems.
During automated model building, the genetic algorithm 510 may automatically generate an initial set of models based on the input data set 502, received user input indicating (or usable to determine) the type of problem to be solved, etc. (e.g., the initial set of models is data-driven). As illustrated in
The initial set of models may be input into an initial epoch of the genetic algorithm 510 as the input set 520, and at the end of the initial epoch, the output set 530 generated during the initial epoch may become the input set 520 of the next epoch of the genetic algorithm 510. In some examples, the input set 520 may have a specific number of models. For example, as shown in a first stage 700 of operation in
For the initial epoch of the genetic algorithm 510, the topologies of the models in the input set 520 may be randomly or pseudo-randomly generated within constraints specified by any previously input configuration settings. Accordingly, the input set 520 may include models with multiple distinct topologies. For example, a first model may have a first topology, including a first number of input nodes associated with a first set of data parameters, a first number of hidden layers including a first number and arrangement of hidden nodes, one or more output nodes, and a first set of interconnections between the nodes. In this example, a second model of the epoch may have a second topology, including a second number of input nodes associated with a second set of data parameters, a second number of hidden layers including a second number and arrangement of hidden nodes, one or more output nodes, and a second set of interconnections between the nodes. Since the first model and the second model are both attempting to model the same data field(s), the first and second models have the same output nodes.
The genetic algorithm 510 may automatically assign an activation function, an aggregation function, a bias, connection weights, etc. to each model of the input set 520 for the initial epoch. In some aspects, the connection weights are assigned randomly or pseudo-randomly. In some implementations, a single activation function is used for each node of a particular model. For example, a sigmoid function may be used as the activation function of each node of the particular model. The single activation function may be selected based on configuration data. For example, the configuration data may indicate that a hyperbolic tangent activation function is to be used or that a sigmoid activation function is to be used. Alternatively, the activation function may be randomly or pseudo-randomly selected from a set of allowed activation functions, and different nodes of a model may have different types of activation functions. In other implementations, the activation function assigned to each node may be randomly or pseudo-randomly selected (from the set of allowed activation functions) for each node of the particular model. Aggregation functions may similarly be randomly or pseudo-randomly assigned for the models in the input set 520 of the initial epoch. Thus, the models of the input set 520 of the initial epoch may have different topologies (which may include different input nodes corresponding to different input data fields if the data set includes many data fields) and different connection weights. Further, the models of the input set 520 of the initial epoch may include nodes having different activation functions, aggregation functions, and/or bias values/functions.
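One possible, non-limiting realization of such random initialization is sketched below; the flat dictionary representation of a model and the particular allowed function sets are assumptions made for illustration only.

```python
import random

ACTIVATIONS = ["sigmoid", "tanh"]      # an assumed set of allowed activation functions
AGGREGATIONS = ["sum", "mean", "max"]  # an assumed set of allowed aggregation functions

def random_model(n_inputs, n_outputs, max_hidden=8):
    """Randomly assign topology, activations, aggregations, biases, and weights."""
    n_hidden = random.randint(1, max_hidden)
    n_nodes = n_hidden + n_outputs
    return {
        "layers": [n_inputs, n_hidden, n_outputs],
        "activations": [random.choice(ACTIVATIONS) for _ in range(n_nodes)],
        "aggregations": [random.choice(AGGREGATIONS) for _ in range(n_nodes)],
        "biases": [random.uniform(-1.0, 1.0) for _ in range(n_nodes)],
        "weights": [random.uniform(-1.0, 1.0)
                    for _ in range(n_inputs * n_hidden + n_hidden * n_outputs)],
    }
```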
Continuing to a second stage 750 of operation, each model of the input set 520 may be tested based on the input data set 502 to determine model fitness. For example, the input data set 502 may be provided as input data to each model, which processes the input data set (according to the network topology, connection weights, activation function, etc., of the respective model) to generate output data. The output data of each model may be evaluated using the fitness function 540 to determine how well the model modeled the input data set 502. For example, in the case of a regression problem, the output data may be evaluated by comparing a prediction value in the output data to an actual value in the input data set 502. As another example, in the case of a classification problem, a classifier result indicated by the output data may be compared to a classification associated with the input data set 502 to determine if the classifier result matches the classification in the input data set 502. As yet another example, in the case of a reinforcement learning problem, a reward may be determined (e.g., calculated) based on evaluation of an environment, which may include one or more variables, functions, etc. In a reinforcement learning problem, the fitness function 540 may be the same as or may be based on the reward function(s). Fitness of a model may be evaluated based on performance (e.g., accuracy) of the model, complexity (or sparsity) of the model, or a combination thereof. As a simple example, in the case of a regression problem or reinforcement learning problem, a fitness value may be assigned to a particular model based on an error value associated with the output data of that model or based on the value of the reward function, respectively. As another example, in the case of a classification problem, the fitness value may be assigned based on whether a classification determined by a particular model is a correct classification, or how many correct or incorrect classifications were determined by the model.
In a more complex example, the fitness value may be assigned to a particular model based on both prediction/classification accuracy or reward optimization as well as complexity (or sparsity) of the model. As an illustrative example, a first model may model the data set well (e.g., may generate output data or an output classification with a relatively small error, or may generate a large positive reward function value) using five input nodes (corresponding to five input data fields), whereas a second potential model may also model the data set well using two input nodes (corresponding to two input data fields). In this illustrative example, the second model may be sparser (depending on the configuration of hidden nodes of each network model) and therefore may be assigned a higher fitness value than the first model.
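The following non-limiting sketch illustrates fitness functions of the kinds described above; the sparsity penalty weight is an arbitrary assumed constant, not a value from the present disclosure.

```python
import numpy as np

def regression_fitness(y_true, y_pred, n_input_nodes, sparsity_weight=0.01):
    """Higher fitness for lower mean squared error and for sparser models."""
    error = float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return -error - sparsity_weight * n_input_nodes

def classification_fitness(labels, predictions):
    """Fitness as the fraction of correct classifications."""
    return float(np.mean(np.asarray(labels) == np.asarray(predictions)))
```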
As shown in
Continuing to
In a particular aspect, the genetic algorithm 510 uses species fitness to determine if a species has become stagnant and is therefore to become extinct. As an illustrative non-limiting example, the stagnation criterion 550 may indicate that a species has become stagnant if the fitness of that species remains within a particular range (e.g., +/−6%) for a particular number (e.g., 6) of epochs. If a species satisfies the stagnation criterion 550, the species and all underlying models may be removed from the genetic algorithm 510. In the illustrated example, species 760 of
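A minimal sketch of such a stagnation check follows, assuming the range is measured relative to the fitness at the start of the window (the text above does not fix this reference point).

```python
def is_stagnant(fitness_history, window=6, tolerance=0.06):
    """True if species fitness stayed within +/-6% for the last 6 epochs."""
    if len(fitness_history) < window:
        return False
    recent = fitness_history[-window:]
    base = recent[0]
    return all(abs(f - base) <= tolerance * abs(base) for f in recent)
```

For example, `is_stagnant([1.0, 1.01, 0.99, 1.02, 1.0, 1.01])` returns True under the default window and tolerance.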
Proceeding to the fourth stage 850, the fittest models of each “elite species” may be identified. The fittest models overall may also be identified. In the illustrated example, the three fittest models of each “elite species” are denoted “elite members” and shown using a hatch pattern. Thus, model 870 is an “elite member” of the “elite species” 820. The three fittest models overall are denoted “overall elites” and are shown using black circles. Thus, models 860, 862, and 864 are the “overall elites” in the illustrated example. As shown in
Referring now to
Continuing to
The rest of the output set 530 may be filled out by random intra-species reproduction using the crossover operation 560 and/or the mutation operation 570. In the illustrated example, the output set 530 includes 10 “overall elite” and “elite member” models, so the remaining 590 models may be randomly generated based on intra-species reproduction using the crossover operation 560 and/or the mutation operation 570. After the output set 530 is generated, the output set 530 may be provided as the input set 520 for the next epoch of the genetic algorithm 510.
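As a non-limiting illustration, the output set may be assembled as sketched below; the species dictionaries and the `crossover`/`mutate` callables are hypothetical, and each species is assumed to have at least two members.

```python
import random

def build_output_set(species_list, overall_elites, population_size,
                     crossover, mutate):
    """Carry over elites, then fill the rest by intra-species reproduction."""
    output = list(overall_elites)
    for species in species_list:
        output.extend(species["elite_members"])
    while len(output) < population_size:
        species = random.choice(species_list)          # intra-species reproduction
        parent_a, parent_b = random.sample(species["members"], 2)
        output.append(mutate(crossover(parent_a, parent_b)))
    return output
```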
During the crossover operation 560, a portion of one model may be combined with a portion of another model, where the size of the respective portions may or may not be equal. To illustrate with reference to the model “encodings” described with respect to
Thus, the crossover operation 560 may be a random or pseudo-random biological operator that generates a model of the output set 530 by combining aspects of a first model of the input set 520 with aspects of one or more other models of the input set 520. For example, the crossover operation 560 may retain a topology of hidden nodes of a first model of the input set 520 but connect input nodes of a second model of the input set to the hidden nodes. As another example, the crossover operation 560 may retain the topology of the first model of the input set 520 but use one or more activation functions of the second model of the input set 520. In some aspects, rather than operating on models of the input set 520, the crossover operation 560 may be performed on a model (or models) generated by mutation of one or more models of the input set 520. For example, the mutation operation 570 may be performed on a first model of the input set 520 to generate an intermediate model and the crossover operation 560 may be performed to combine aspects of the intermediate model with aspects of a second model of the input set 520 to generate a model of the output set 530.
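For illustration only, a single-point crossover over flat list "encodings" is sketched below; the present disclosure does not limit the crossover operation 560 to this form.

```python
import random

def crossover(parent_a, parent_b):
    """Combine a portion of one encoding with a portion of another."""
    point = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return parent_a[:point] + parent_b[point:]   # portions need not be equal
```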
During the mutation operation 570, a portion of a model may be randomly modified. The frequency of mutations may be based on a mutation probability metric, which may be user-defined or randomly selected/adjusted. To illustrate with reference to the model “encodings” described with respect to
The mutation operation 570 may thus be a random or pseudo-random biological operator that generates or contributes to a model of the output set 530 by mutating any aspect of a model of the input set 520. For example, the mutation operation 570 may cause the topology of a particular model of the input set to be modified by addition or omission of one or more input nodes, by addition or omission of one or more connections, by addition or omission of one or more hidden nodes, or a combination thereof. As another example, the mutation operation 570 may cause one or more activation functions, aggregation functions, bias values/functions, and/or connection weights to be modified. In some aspects, rather than operating on a model of the input set, the mutation operation 570 may be performed on a model generated by the crossover operation 560. For example, the crossover operation 560 may combine aspects of two models of the input set 520 to generate an intermediate model and the mutation operation 570 may be performed on the intermediate model to generate a model of the output set 530.
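A minimal sketch of such a mutation operator follows, assuming numeric genes in a flat encoding and Gaussian perturbation (an illustrative choice, not the only possibility).

```python
import random

def mutate(encoding, mutation_probability=0.05):
    """Randomly modify each gene with probability `mutation_probability`."""
    mutated = list(encoding)
    for i in range(len(mutated)):
        if random.random() < mutation_probability:
            mutated[i] += random.gauss(0.0, 0.1)
    return mutated
```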
The genetic algorithm 510 may continue in the manner described above through multiple epochs. When the genetic algorithm 510 receives the trained model 582, the trained model 582 may be provided as part of the input set 520 of the next epoch, as shown in a seventh stage 5400 of
In the example of
Operation at the system 500 may continue iteratively until a specified termination criterion, such as a time limit, a number of epochs, or a threshold fitness value (of an overall fittest model), is satisfied. When the termination criterion is satisfied, an overall fittest model of the last executed epoch may be selected and output as representing a neural network that best models the input data set 502. In some examples, the overall fittest model may undergo a final training operation (e.g., by the backpropagation trainer 580) before being output.
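The overall control loop may be sketched as follows (non-limiting); `run_epoch` and `fittest` are hypothetical callables standing in for one epoch of the genetic algorithm 510 and for fitness ranking, respectively.

```python
import time

def automated_model_building(population, run_epoch, fittest,
                             max_epochs=None, time_limit=None, target_fitness=None):
    """Iterate epochs until a termination criterion is satisfied."""
    start, epoch = time.monotonic(), 0
    while True:
        population = run_epoch(population)
        epoch += 1
        best_model, best_fitness = fittest(population)
        if max_epochs is not None and epoch >= max_epochs:
            break
        if time_limit is not None and time.monotonic() - start >= time_limit:
            break
        if target_fitness is not None and best_fitness >= target_fitness:
            break
    return best_model   # may optionally undergo a final training operation
```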
Although various aspects are described with reference to backpropagation training, it is to be understood that in alternate implementations different types of training may also be used in the system 500. For example, models may be trained using a genetic algorithm training process. In this example, genetic operations similar to those described above are performed while all aspects of a model, except for the connection weights, are held constant.
Performing genetic operations may be less resource intensive than evaluating fitness of models and training of models using backpropagation. For example, both evaluating the fitness of a model and training a model include providing the input data set 502, or at least a portion thereof, to the model, calculating results of nodes and connections of a neural network to generate output data, and comparing the output data to the input data set 502 to determine the presence and/or magnitude of an error. In contrast, genetic operations do not operate on the input data set 502, but rather merely modify characteristics of one or more models. However, as described above, one iteration of the genetic algorithm 510 may include both genetic operations and evaluating the fitness of every model and species. Training trainable models generated by breeding the fittest models of an epoch may improve fitness of the trained models without requiring training of every model of an epoch. Further, the fitness of models of subsequent epochs may benefit from the improved fitness of the trained models due to genetic operations based on the trained models. Accordingly, training the fittest models enables generating a model with a particular error rate in fewer epochs than using genetic operations alone. As a result, fewer processing resources may be utilized in building highly accurate models based on a specified input data set 502.
The system 500 of
Referring to
It is to be understood that operations described herein as being performed by the first neural network 1210, the second neural network(s) 1220, the third neural network 1270, or the calculator/detector 1230 may be performed by a device executing software configured to execute the calculator/detector 1230 and to train and/or evaluate the neural networks 1210, 1220, 1270. The neural networks 1210, 1220, 1270 may be represented as data structures stored in a memory, where the data structures specify nodes, links, node properties (e.g., activation function), and link properties (e.g., link weight). The neural networks 1210, 1220, 1270 may be trained and/or evaluated on the same or on different devices, processors (e.g., central processing unit (CPU), graphics processing unit (GPU), or other type of processor), processor cores, and/or threads (e.g., hardware or software thread). Moreover, execution of certain operations associated with the first neural network 1210, the second neural network(s) 1220, the third neural network 1270, or the calculator/detector 1230 may be parallelized.
The system 100 may generally operate in two modes of operation: training mode and use mode.
Turning now to
The first neural network 1210 may include an input layer, an output layer, and zero or more hidden layers. The input layer of the first neural network 1210 may include n nodes, each of which receives one of the n first features 1202 as input. The output layer of the first neural network 1210 may include k nodes, where k is an integer greater than zero, and where each of the k nodes represents a unique cluster possibility. In a particular aspect, in response to the first input data 1201 being input to the first neural network 1210, the neural network 1210 generates first output data 1203 having k numerical values (one for each of the k output nodes), where each of the numerical values indicates a probability that the first input data 1201 is part of (e.g., classified in) a corresponding one of the k clusters, and where the sum of the numerical values is one. In the example of
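The k probabilities summing to one can be produced by a softmax over the k output nodes, as in the following non-limiting sketch.

```python
import numpy as np

def cluster_probabilities(logits):
    """Softmax: k non-negative cluster probabilities that sum to one."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()              # subtract max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()
```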
A “pseudo-input” may be automatically generated and provided to the third neural network 1270. In the example of
In a particular aspect, the second neural network(s) 1220 include a variational autoencoder (VAE). The second neural network(s) 1220 may receive second input data 1204 as input. In a particular aspect, the second input data 1204 is generated by a data augmentation process 1280 based on a combination of the first input data 1201 and the third input data 1292. For example, the second input data 1204 may include the n first features 1202 and may include k second features 1205, where the k second features 1205 are based on the third input data 1292, as shown in
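As a non-limiting illustration of the data augmentation process 1280, the sketch below pairs the n first features with each of k one-hot cluster selectors, yielding k entries of n+k values each; the one-hot form is an assumption consistent with the cluster encoding described with respect to Equation 10 below.

```python
import numpy as np

def augment(first_features, k):
    """Build k entries; entry i joins the n first features with one-hot cluster i."""
    x = np.asarray(first_features, dtype=float)        # shape (n,)
    entries = []
    for i in range(k):
        selector = np.zeros(k)
        selector[i] = 1.0                              # the k second features
        entries.append(np.concatenate([x, selector]))  # shape (n + k,)
    return np.stack(entries)                           # shape (k, n + k)
```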
The second neural network(s) 1220 generates second output data 1206 based on the second input data 1204. In a particular aspect, the second output data 1206 includes k entries 1206₁-1206ₖ, each of which is generated based on the corresponding entry 1204₁-1204ₖ of the second input data 1204. Each entry of the second output data 1206 may include at least third features 1207 and variance values 1208 for the third features 1207. Although not shown in
Referring to FIG. 12, the second neural network(s) 1220 may include an encoder network 1310 and a decoder network 1320. The encoder network 1310 may include an input layer 1301 including an input node for each of the n first features 1202 and an input node for each of the k second features 1205. The encoder network 1310 may also include one or more hidden layers 1302 that have progressively fewer nodes. A “latent” layer 1303 serves as an output layer of the encoder network 1310 and an input layer of the decoder network 1320. The latent layer 1303 corresponds to a dimensionally reduced latent space. The latent space is said to be “dimensionally reduced” because there are fewer nodes in the latent layer 1303 than there are in the input layer 1301. The input layer 1301 includes (n+k) nodes, and in some aspects the latent layer 1303 includes no more than half as many nodes, i.e., no more than (n+k)/2 nodes. By constraining the latent layer 1303 to fewer nodes than the input layer, the encoder network 1310 is forced to represent input data (e.g., the second input data 1204) in “compressed” fashion. Thus, the encoder network 1310 is configured to encode data from a feature space to the dimensionally reduced latent space. In a particular aspect, the encoder network 1310 generates values μe, Σe, which are data vectors having mean and variance values for each of the latent space features. The resulting distribution is sampled to generate the values (denoted “z”) in the “latent” layer 1303. The “e” subscript is used here to indicate that the values are generated by the encoder network 1310 of the VAE. The latent layer 1303 may therefore represent cluster identification and latent space location along with the input features in a “compressed” fashion. Because each of the clusters has its own Gaussian distribution, the VAE may be considered a Gaussian Mixture Model (GMM) VAE.
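Sampling the latent layer 1303 from μe, Σe may follow the standard VAE reparameterization, as in this non-limiting sketch (a log-variance parameterization is assumed for numerical convenience).

```python
import numpy as np

def sample_latent(mu_e, logvar_e, rng=None):
    """z = mu + sigma * eps, with eps drawn from a standard normal."""
    if rng is None:
        rng = np.random.default_rng()
    mu = np.asarray(mu_e, dtype=float)
    sigma = np.exp(0.5 * np.asarray(logvar_e, dtype=float))
    return mu + sigma * rng.standard_normal(mu.shape)
```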
The decoder network 1320 may approximately reverse the process performed by the encoder network 1310 with respect to the n features. Thus, the decoder network 1320 may include one or more hidden layers 1304 and an output layer 1305. The output layer 1305 outputs a reconstruction of each of the n input features and a variance (σ²) value for each of the reconstructed features. Therefore, the output layer 1305 includes n+n=2n nodes.
Returning to
In a particular aspect, the reconstruction loss function LR_confeature for a continuous feature is represented by Gaussian loss in accordance with Equation 1:

LR_confeature = ln(1/(σ√(2π))) − (x − x′)²/(2σ²) Equation 1,

where ln is the natural logarithm function, σ² is the variance, x′ is the output/reconstruction value, and x is the input value. It will be appreciated that Equation 1 corresponds to the natural logarithm of the Gaussian probability of x given x′ and σ². To illustrate, if the feature A of the second input data 1204 is a continuous feature, the reconstruction loss for feature A may be determined in accordance with Equation 2:

LR_confeature(A) = ln(1/(σA√(2π))) − (A − A′)²/(2σA²) Equation 2.
In a particular aspect, the reconstruction loss function LR_catfeature for a binary categorical feature is represented by binomial loss in accordance with Equation 3:
LR_catfeature = xtrue ln x′ + (1 − xtrue) ln(1 − x′) Equation 3,
where ln is the natural logarithm function, xtrue is one if the value of the feature is true, xtrue is zero if the value of the feature is false, and x′ is the output/reconstruction value (which will be a number between zero and one). It will be appreciated that Equation 3 corresponds to the natural logarithm of the Bernoulli probability of x′ given xtrue, which can also be written as ln P(x′|xtrue).
As an example, if the feature N of the second input data 1204 is a binary categorical feature, the reconstruction loss for feature N may be determined in accordance with Equation 4:

LR_catfeature(N) = Ntrue ln N′ + (1 − Ntrue) ln(1 − N′) Equation 4.
The total reconstruction loss LR for an entry may be a sum of each of the per-feature losses determined based on Equation 1 for continuous features and based on Equation 3 for categorical features:
LR = Σ LR_confeature + Σ LR_catfeature Equation 5.
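The per-feature losses of Equations 1, 3, and 5 may be computed as in the following non-limiting sketch (each term is a log-likelihood, so larger values indicate better reconstruction).

```python
import numpy as np

def gaussian_recon_loss(x, x_prime, var):
    """Equation 1: Gaussian loss for a continuous feature."""
    return -0.5 * np.log(2 * np.pi * var) - (x - x_prime) ** 2 / (2 * var)

def binomial_recon_loss(x_true, x_prime):
    """Equation 3: binomial loss, i.e., ln P(x' | x_true)."""
    return x_true * np.log(x_prime) + (1 - x_true) * np.log(1 - x_prime)

def total_recon_loss(continuous_terms, categorical_terms):
    """Equation 5: sum of the per-feature losses for one entry."""
    return (sum(gaussian_recon_loss(x, xp, v) for x, xp, v in continuous_terms)
            + sum(binomial_recon_loss(xt, xp) for xt, xp in categorical_terms))
```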
It is noted that Equations 1-5 deal with reconstruction loss. However, because the system 100 also performs clustering, the aggregate loss may further be based on Kullback-Leibler (KL) divergences, two of which are described below.
The first KL divergence, KL1, is represented by Equation 6 below and represents the deviation of μe, Σe from μp, Σp:
KL1=KL(μe,Σe∥μp,Σp) Equation 6,
where μe, Σe are the clustering parameters generated at the VAE (i.e., the second neural network(s) 1220) and μp, Σp are the values shown at 1272 being output by the latent space cluster mapping network (i.e., the third neural network 1270).
The second KL divergence, KL2, is based on the deviation of a uniform distribution from the cluster probabilities being output by the latent space cluster mapping network (i.e., the third neural network 1270). KL2 is represented by Equation 7 below:
KL2=KL(P∥PUniform) Equation 7,
where P is the cluster probability vector represented by the first output data 1203.
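For diagonal covariances, Equations 6 and 7 admit the closed forms sketched below (non-limiting); the diagonal-covariance assumption is made for illustration.

```python
import numpy as np

def kl_gaussians(mu_e, var_e, mu_p, var_p):
    """Equation 6: KL(N(mu_e, var_e) || N(mu_p, var_p)), diagonal covariances."""
    mu_e, var_e, mu_p, var_p = (np.asarray(a, dtype=float)
                                for a in (mu_e, var_e, mu_p, var_p))
    return 0.5 * float(np.sum(np.log(var_p / var_e)
                              + (var_e + (mu_e - mu_p) ** 2) / var_p - 1.0))

def kl_from_uniform(p):
    """Equation 7: KL(P || Uniform(k)) = sum_i p_i * ln(k * p_i)."""
    p = np.asarray(p, dtype=float)
    k = p.size
    nz = p > 0                       # 0 * ln(0) is taken as 0
    return float(np.sum(p[nz] * np.log(k * p[nz])))
```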
The calculator/detector 1230 may determine an aggregate loss L for each training sample (e.g., the first input data 1201) in accordance with Equation 8 below:

L = KL2 + Σk p(k)(LR(k) + KL1(k)) Equation 8,

where KL2 is from Equation 7, p(k) are the cluster probabilities in the first output data 1203 (which are used as weighting factors), LR(k) is the reconstruction loss of Equation 5 for the kth entry, and KL1(k) is the divergence of Equation 6 for the kth cluster. It will be appreciated that the aggregate loss L of Equation 8 is a single quantity that is based on both reconstruction loss as well as cluster distance, where the reconstruction loss function differs for different types of data.
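Under the reconstruction of Equation 8 given above (an interpretation inferred from the stated weighting role of p(k)), the aggregate loss may be computed as follows.

```python
def aggregate_loss(kl2, cluster_probs, recon_losses, kl1_terms):
    """Equation 8 (as reconstructed): KL2 plus the p(k)-weighted sum of
    per-cluster reconstruction loss and KL1."""
    return kl2 + sum(p * (lr + kl1)
                     for p, lr, kl1 in zip(cluster_probs, recon_losses, kl1_terms))
```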
The calculator/detector 1230 may initiate adjustment at one or more of the first neural network 1210, the second neural network(s) 1220, or the third neural network 1270, based on the aggregate loss L. For example, link weights, bias functions, bias values, etc. may be modified via backpropagation to minimize the aggregate loss L using stochastic gradient descent. In some aspects, the amount of adjustment performed during each iteration of backpropagation is based on learning rate. In one example, the learning rate, lr, is initially based on the following heuristic:
where Ndata is the number of features and Nparams is the number of parameters being adjusted in the system 100 (e.g., link weights, bias functions, bias values, etc. across the neural networks 1210, 1220, 1270). In some examples, the learning rate, lr, is determined based on this heuristic but is subjected to floor and ceiling functions so that lr is always between 5×10⁻⁶ and 10⁻³.
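The floor and ceiling may be applied as sketched below; the initial heuristic value itself is not reproduced here because its exact formula is not recoverable from the text above.

```python
def clamp_learning_rate(lr, floor=5e-6, ceiling=1e-3):
    """Keep the learning rate within [5e-6, 1e-3]."""
    return min(max(lr, floor), ceiling)
```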
The calculator/detector 1230 may also be configured to output anomaly likelihood 1260, as shown in
As described above, the system 100 may generally operate in two modes of operation: training mode and use mode. During operation in the training mode, the first neural network 1210, the second neural network(s) 1220, and the third neural network 1270 may be iteratively adjusted (e.g., via backpropagation) to reduce the aggregate loss L, as described above.
After training is completed, the system 100 enters use mode (alternatively referred to as “evaluation mode”). During operation in the use mode, the anomaly likelihood 1260 for a new data sample may be determined based on an anomaly score computed in accordance with Equation 10:
AnomalyScore=LR(i)×N(μe|μp,Σp) Equation 10,
where i is the cluster identified by the cluster ID 1250, LR(i) is the reconstruction loss for the ith entry of the second input data (which includes the one-hot encoding for cluster i), and the second term corresponds to the Gaussian probability of μe given μp and Σp. The anomaly likelihood 1260 indicates the likelihood that the first input data 1201 corresponds to an anomaly. The anomaly likelihood 1260 increases in value with reconstruction loss and when the most likely cluster for the new data sample is far away from where the new data sample was expected to be mapped.
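Equation 10 may be computed as in the following non-limiting sketch, where the Gaussian probability is taken as a product of per-dimension densities (a diagonal-covariance assumption).

```python
import numpy as np

def gaussian_probability(mu_e, mu_p, var_p):
    """N(mu_e | mu_p, var_p) with a diagonal covariance."""
    mu_e, mu_p, var_p = (np.asarray(a, dtype=float) for a in (mu_e, mu_p, var_p))
    densities = np.exp(-(mu_e - mu_p) ** 2 / (2 * var_p)) / np.sqrt(2 * np.pi * var_p)
    return float(np.prod(densities))

def anomaly_score(recon_loss_i, mu_e, mu_p, var_p):
    """Equation 10: reconstruction loss for cluster i times N(mu_e | mu_p, var_p)."""
    return recon_loss_i * gaussian_probability(mu_e, mu_p, var_p)
```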
The system 100 of
Moreover, it will be appreciated that the system 100 may be applied in various technological settings. As a first illustrative non-limiting example, each of multiple machines, industrial equipment, turbines, engines, etc. may have one or more sensors. The sensors may be on-board or may be coupled to or otherwise associated with the machines. Each sensor may provide periodic empirical measurements to a network server. Measurements may include temperature, vibration, sound, movement in one or more dimensions, movement along one or more axes of rotation, etc. When a new data sample (e.g., readings from multiple sensors) is received, the new data sample may be passed through the clustering and anomaly detection system. The cluster ID 1250 for the data sample may correspond to a state of operation of the machine. Some cluster IDs may frequently precede failures and rarely occur otherwise, and such cluster IDs may be used as failure prognosticators. The anomaly likelihood 1260 may also be used as a failure prognosticator. The cluster ID 1250 and/or the anomaly likelihood 1260 may be used to trigger operational alarms, notifications to personnel (e.g., e-mail, text message, telephone call, etc.), automatic parts shutdown (and initiation of fault-tolerance or redundancy measures), repair scheduling, etc.
As another example, the system 100 may be used to monitor for rare anomalous occurrences in situations where “normal” operations or behaviors can fall into different categories. To illustrate, the system 100 may be used to monitor for credit card fraud based on real-time or near-real-time observation of credit card transactions. In this example, clusters may represent different types of credit users. For example, a first cluster may represent people who generally use their credit cards a lot and place a large amount of money on the credit card each month, a second cluster may represent people who only use their credit card when they are out of cash, a third cluster may represent people who use their credit card very rarely, a fourth cluster may represent travelers who use their credit card a lot and in various cities/states/countries, etc. In this example, the cluster ID 1250 and the anomaly likelihood 1260 may be used to trigger account freezes, automated communication to the credit card holder, notifications to credit card/bank personnel, etc. By automatically determining such trained clusters during unsupervised learning (each of which can have its own Gaussian distribution), the combined clustering/anomaly detection system described herein may generate fewer false positives and fewer false negatives than a conventional VAE (which would assume all credit card users should be on a single Gaussian distribution).
In some examples, the system 100 may include a driving feature detector (not shown) that is configured to compare the feature distribution within a particular cluster to the feature distributions of other clusters and of the input data set as a whole. By doing so, the driving feature detector may identify features that most “drive” the classification of a data sample into the particular cluster. Automated alarms/operations may additionally or alternatively be set up based on examining such driving features, which in some cases may lead to faster notification of a possible anomaly than with the system 100 of
In particular aspects, topologies of the neural networks 1210, 1220, 1270 may be determined prior to training the neural networks 1210, 1220, 1270. In a first example, a neural network topology is determined based on performing principal component analysis (PCA) on an input data set. To illustrate, the PCA may indicate that although the input data set includes X features, the data can be represented with sufficient reconstructability using Y features, where X and Y are integers and Y is generally less than or equal to X/2. It will be appreciated that in this example, Y may be the number of nodes present in the latent layer 1303. After determining Y, the number of hidden layers 1302, 1304 and the number of nodes in the hidden layers 1302, 1304 may be determined. For example, each of the hidden layers may progressively halve the number of nodes from X to Y.
As another example, the topology of a neural network may be determined heuristically, such as based on an upper bound. For example, the topology of the first neural network 1210 may be determined by setting the value of k to an arbitrarily high number (e.g., 20, 50, 100, 500, or some other value). This value corresponds to the number of nodes in the output layer of the first neural network 1210, and the number of nodes in the input layer of the first neural network 1210 may be set to n, i.e., the number of first features 1202 (though in a different example, the number of input nodes may be less than n and may be determined using a feature selection heuristic/algorithm). Once the numbers of input and output nodes are determined for the first neural network 1210, the number of hidden layers and number of nodes in each hidden layer may be determined (e.g., heuristically).
As yet another example, a combination of PCA and hierarchical density-based spatial clustering of applications with noise (HDBSCAN) may be used to determine neural network topologies. As an illustrative non-limiting example, the input feature set may include one hundred features (i.e., n=100) and performing the PCA results in a determination that a subset of fifteen specific features (i.e., p=15) is sufficient to represent the data while maintaining at least a threshold variance (e.g., 90%). Running a HDBSCAN algorithm on the fifteen principal components results in a determination that there are eight clusters in the PCA data set. The number of clusters identified by the HDBSCAN algorithm may be adjusted by a programmable constant, such as +2, to determine a value of k. In this example, k=8+2=10. The number of input features (n=100), the number of clusters from HDBSCAN (k=10) and the number of principal components (p=15) may be used to determine neural network topologies (below, a hidden layer is assumed to have twice as many nodes as the layer it outputs to).
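A non-limiting sketch of this PCA-plus-HDBSCAN sizing step follows, using scikit-learn (the HDBSCAN class requires scikit-learn 1.3 or later; the standalone hdbscan package is an alternative).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import HDBSCAN   # scikit-learn >= 1.3

def estimate_topology_inputs(data, variance_threshold=0.90, k_adjust=2):
    """Return (n, p, k): input features, principal components, cluster count."""
    data = np.asarray(data, dtype=float)
    pca = PCA(n_components=variance_threshold, svd_solver="full")
    projected = pca.fit_transform(data)               # keeps >= 90% variance
    labels = HDBSCAN().fit_predict(projected)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # drop noise label
    return data.shape[1], projected.shape[1], n_clusters + k_adjust
```

With n=100, p=15, and eight detected clusters, this returns (100, 15, 10), matching the example above.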
In a particular example, the hidden layer topology of the clustering network and the encoder network of the VAE may be the same. To illustrate, the VAE may have the topology shown in Table 1 above and the clustering network may have the topology shown in Table 4 below.
It is to be understood that the division and ordering of steps of various methods described herein is for illustrative purposes only and is not to be considered limiting. In alternative implementations, certain steps or methods may be combined, and other steps or methods may be subdivided into multiple steps or methods. Moreover, the ordering of steps within a method may change.
In a particular aspect, a method includes receiving, at a server, first sensor data from a first vehicle. The method includes receiving, at the server, second sensor data from a second vehicle. The second sensor data includes condition data indicating a road condition, engine data indicating an engine problem, booking data indicating an intended route, or a combination thereof. The method includes aggregating, at the server, a plurality of sensor readings to generate aggregated sensor data. The plurality of sensor readings include the first sensor data and the second sensor data. The method further includes transmitting a first message based on the aggregated sensor data to the first vehicle, wherein the first message causes the first vehicle to perform a first action, the first action comprising avoiding the road condition, displaying an indicator corresponding to the engine problem, displaying a booked route, or a combination thereof.
In another particular aspect, a server includes a processor and a memory. The memory stores instructions executable by the processor to perform operations including receiving first sensor data from a first vehicle. The operations include receiving, at the server, second sensor data from a second vehicle. The second sensor data includes condition data indicating a road condition, engine data indicating an engine problem, booking data indicating an intended route, or a combination thereof. The operations include aggregating, at the server, a plurality of sensor readings to generate aggregated sensor data. The plurality of sensor readings include the first sensor data and the second sensor data. The operations further include transmitting a first message based on the aggregated sensor data to the first vehicle, wherein the first message causes the first vehicle to perform a first action, the first action comprising avoiding the road condition, displaying an indicator corresponding to the engine problem, displaying a booked route, or a combination thereof.
In another particular aspect, a computer-readable storage device stores instructions that, when executed by a processor, cause the processor to perform operations including receiving first sensor data from a first vehicle. The operations include receiving, at the server, second sensor data from a second vehicle. The second sensor data includes condition data indicating a road condition, engine data indicating an engine problem, booking data indicating an intended route, or a combination thereof. The operations include aggregating, at the server, a plurality of sensor readings to generate aggregated sensor data. The plurality of sensor readings include the first sensor data and the second sensor data. The operations further include transmitting a first message based on the aggregated sensor data to the first vehicle, wherein the first message causes the first vehicle to perform a first action, the first action comprising avoiding the road condition, displaying an indicator corresponding to the engine problem, displaying a booked route, or a combination thereof.
The systems and methods illustrated herein may be described in terms of functional block components, screen shots, optional selections and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, the software elements of the system may be implemented with any programming or scripting language such as C, C++, C#, Java, JavaScript, VBScript, Macromedia Cold Fusion, COBOL, Microsoft Active Server Pages, assembly, PERL, PHP, AWK, Python, Visual Basic, SQL Stored Procedures, PL/SQL, any UNIX shell script, and extensible markup language (XML) with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements. Further, it should be noted that the system may employ any number of techniques for data transmission, signaling, data processing, network control, and the like.
The systems and methods of the present disclosure may take the form of or include a computer program product on a computer-readable storage medium or device having computer-readable program code (e.g., instructions) embodied or stored in the storage medium or device. Any suitable computer-readable storage medium or device may be utilized, including hard disks, CD-ROM, optical storage devices, magnetic storage devices, and/or other storage media. As used herein, a “computer-readable storage medium” or “computer-readable storage device” is not a signal.
Systems and methods may be described herein with reference to block diagrams and flowchart illustrations of methods, apparatuses (e.g., systems), and computer media according to various aspects. It will be understood that each functional block of the block diagrams and flowchart illustrations, and combinations of functional blocks in block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions.
Computer program instructions may be loaded onto a computer or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory or device that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, functional blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each functional block of the block diagrams and flowchart illustrations, and combinations of functional blocks in the block diagrams and flowchart illustrations, can be implemented by either special purpose hardware-based computer systems which perform the specified functions or steps, or suitable combinations of special purpose hardware and computer instructions.
Although the disclosure may include a method, it is contemplated that it may be embodied as computer program instructions on a tangible computer-readable medium, such as a magnetic or optical memory or a magnetic or optical disk/disc. All structural, chemical, and functional equivalents to the elements of the above-described exemplary embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present disclosure, for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. As used herein, the terms “comprises”, “comprising”, or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Changes and modifications may be made to the disclosed embodiments without departing from the scope of the present disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure, as expressed in the following claims.
The present application claims priority from U.S. Provisional Application No. 62/702,232, filed Jul. 23, 2018, which is incorporated by reference herein in its entirety.