This disclosure generally relates to systems and methods for assisting people with cognitive disabilities and, more particularly, to vehicle-based multi-modal trip planning for people with cognitive disabilities.
Autonomous vehicles are increasingly being used. Some passengers of autonomous vehicles may experience cognitive disabilities that disorient them inside and outside of a vehicle. An autonomous vehicle may drive a passenger to a location, but the passenger may experience disorientation even after exiting the vehicle.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein like reference numerals are used to designate like parts in the accompanying description.
Passengers of vehicles may benefit from being driven to a destination location, but may experience disorientation once they exit the vehicle, even at the destination location. For example, a passenger may be driven to a grocery store in an autonomous vehicle, but after arriving at the grocery store, may forget why they are there, where to go, what to do, and/or how to return to the vehicle. Alternatively, a vehicle driver may become lost while driving a vehicle.
Embodiments described herein detect when a vehicle passenger experiences a disorientation event while outside of a vehicle, and present to the passenger and/or others (e.g., family, friends, medical professionals, and the like) instructions regarding where the passenger is, where the passenger is supposed to go, what tasks the passenger is to complete, during what timeframe the passenger is supposed to be at a destination, how to get from the vehicle to a physical location (e.g., a physical structure such as a building for a store, office, doctor's office, residence, etc.), how to navigate within a physical structure (e.g., directions inside of a building), and/or how to return to the vehicle from the physical structure. The instructions also may include updates to another party (e.g., family, friends, medical professionals, and the like).
In some embodiments, a disorientation event of a person may include memory loss, inability to navigate to a location, being at a location for longer than an expected time, being stationary or within a small area for longer than a threshold time, moving at a speed lower than a threshold, being outside of a specified location/boundary (e.g., based on geographical coordinates, geo-fencing, etc.), having vital signs that are above or below respective thresholds (e.g., indicative of fatigue, stress, etc.), taking too long to complete a task or set of tasks, and the like.
In some embodiments, detection of a disorientation event may use a combination of image data, device location data, device motion data, and/or biometric sensor data. For example, with user consent and in accordance with relevant laws, a user's location may be monitored using device location data (e.g., global navigation satellite system data, Wi-Fi data, Bluetooth data, etc.), a user's state of being may be monitored using images and/or sensor data (e.g., images used for analysis to detect facial expressions, items in a person's possession, attire, gait, injuries, and the like, and biometric sensor data such as body temperature, heartrate, breathing rate, pulse, etc.), and a user's activity may be monitored using device motion data (e.g., accelerometer data indicative of a person falling down or moving in a manner that is unexpected or indicative of stress). A system associated with a vehicle may detect disorientation events.
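By way of a non-limiting illustration of how such signals may be combined, the following Python sketch shows one rule-based check over hypothetical location, motion, and biometric inputs; the signal names, threshold values, and helper types are assumptions made for explanation only and are not required by any embodiment.

    from dataclasses import dataclass

    @dataclass
    class PassengerSignals:
        # Hypothetical snapshot of monitored data (collected with user consent).
        minutes_stationary: float      # derived from device location/motion data
        distance_from_route_m: float   # deviation from the expected walking route
        heart_rate_bpm: float          # biometric sensor data
        fall_detected: bool            # derived from accelerometer data

    # Hypothetical per-user thresholds; in practice these may come from a profile.
    THRESHOLDS = {
        "max_minutes_stationary": 10.0,
        "max_route_deviation_m": 75.0,
        "max_heart_rate_bpm": 120.0,
    }

    def detect_disorientation(signals, thresholds=THRESHOLDS):
        """Return the list of triggered indicators; an empty list means no event."""
        indicators = []
        if signals.fall_detected:
            indicators.append("possible_fall")
        if signals.minutes_stationary > thresholds["max_minutes_stationary"]:
            indicators.append("stationary_too_long")
        if signals.distance_from_route_m > thresholds["max_route_deviation_m"]:
            indicators.append("outside_expected_area")
        if signals.heart_rate_bpm > thresholds["max_heart_rate_bpm"]:
            indicators.append("elevated_heart_rate")
        return indicators

    # Example: a passenger who has been stationary for 12 minutes near the store entrance.
    event = detect_disorientation(PassengerSignals(12.0, 20.0, 95.0, False))
    print(event)  # ['stationary_too_long']

In such a sketch, any non-empty list of indicators may be treated as a detected disorientation event and used to trigger the instruction generation described below.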
In some embodiments, based on the detection of a disorientation event, a system associated with a vehicle may generate instructions such as maps, directions to and from multiple locations, lists of tasks to complete and/or items to purchase or possess, expected time durations for a person to be at respective locations and/or whether those time durations have expired or allow for more time at a location, and user queries to confirm (e.g., with a return response or absence of a return response) whether a user is okay and/or has any queries regarding location, tasks, time duration, etc. The instructions also may be provided to other parties to inform them of a passenger's status and/or the maps, directions to and from multiple locations, lists of tasks to complete and/or items to purchase or possess, expected time durations for a person to be at respective locations and/or whether those time durations have expired or allow for more time at a location, and user queries to confirm whether a user is okay and/or has any queries regarding location, tasks, time duration, etc. The instructions may include any combination of audio and/or visual data (e.g., audio and/or video instructions, etc.).
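As a non-limiting example of how such instructions might be packaged for presentation to the passenger and to other parties, the following Python sketch defines a hypothetical instruction structure; the field names, values, and notify list are illustrative assumptions rather than a required format.

    from dataclasses import dataclass, field

    @dataclass
    class InstructionPackage:
        # Hypothetical structure for instructions presented as audio and/or visual data.
        map_url: str                       # link or handle to a rendered map
        directions: list                   # step-by-step directions between locations
        tasks: list                        # tasks to complete / items to purchase
        minutes_allowed: int               # expected time duration at the location
        confirmation_query: str            # "are you okay?"-style query to the passenger
        notify: list = field(default_factory=list)  # other parties to inform

    package = InstructionPackage(
        map_url="map://store-entrance",
        directions=["Exit the vehicle", "Walk to the main entrance", "Go to aisle 4"],
        tasks=["Buy milk", "Pick up prescription"],
        minutes_allowed=30,
        confirmation_query="Do you need directions back to the vehicle?",
        notify=["family_contact"],
    )
    print(package.tasks)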
In some embodiments, the instructions may be generated based on factors such as task completion rate (e.g., historical data indicative of how often a person performs a task or arrives at a location within an amount of time) and/or the type and/or severity of a disorientation event (e.g., a cardiac event indicated by biometric sensor data may require sending an emergency medical team to a person, whereas a person who may need help remembering an event or directions to a location may need a reminder, map, directions, etc.). The instructions may break up trips (e.g., from one destination to at least one other destination) by adding rest time (e.g., elongating time periods for tasks/locations), adding tasks (e.g., food or bathroom breaks, etc.), adding destinations (e.g., for breaks, meals, etc.), or reducing destinations and/or tasks.
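One possible way to break up a trip in this manner is sketched below in Python, assuming a hypothetical list of stops with per-stop time windows; the adjustment rules, multipliers, and field names are illustrative assumptions only.

    def adjust_trip(stops, severity, completion_rate):
        """Hypothetical sketch: lengthen time windows and insert rest stops for a
        severe disorientation event or a low historical task-completion rate."""
        adjusted = []
        for stop in stops:
            stop = dict(stop)
            if completion_rate < 0.5:
                stop["minutes_allowed"] = int(stop["minutes_allowed"] * 1.5)  # add rest time
            adjusted.append(stop)
            if severity == "high":
                adjusted.append({"name": "rest break", "minutes_allowed": 15})
        if severity == "high" and len(adjusted) > 4:
            adjusted = adjusted[:4]  # reduce the number of destinations/tasks
        return adjusted

    trip = [{"name": "grocery store", "minutes_allowed": 30},
            {"name": "pharmacy", "minutes_allowed": 20}]
    print(adjust_trip(trip, severity="high", completion_rate=0.4))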
In some embodiments, the instructions and/or criteria used to detect a disorientation event may vary based on factors such as a time of day, the disorientation condition (e.g., profiles based on a condition such as Alzheimer's or autism and including preset and/or adjusted criteria, such as thresholds for respective types of data), and/or environmental conditions (e.g., lighting, temperature, crowded or sparse areas, etc.). For example, at night, disorientation may be more severe, and disorientation may be more likely in crowded areas or certain types of venues (e.g., a grocery or department store) than other venues (e.g., based on venue type or size). In this manner, time of day and/or environmental conditions may alter threshold times, geo-fencing, and the like, with which to detect disorientation events, and also may alter instructions (e.g., directions to and from locations may avoid certain areas that are crowded, darker, or the like, in favor of less crowded areas, areas with better lighting, etc.).
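A minimal Python sketch of such profile-, time-, and environment-dependent criteria is shown below; the profile name, base thresholds, and adjustment factors are hypothetical and serve only to illustrate how detection criteria might be varied.

    def adjusted_thresholds(base, profile, hour, crowded):
        """Hypothetical sketch of tightening detection criteria at night, in crowded
        venues, or for a condition profile with preset adjustments."""
        t = dict(base)
        if hour >= 20 or hour < 6:          # night: disorientation may be more severe
            t["max_minutes_stationary"] *= 0.5
        if crowded:                          # crowded venue: tighter geo-fence
            t["geofence_radius_m"] *= 0.75
        if profile == "alzheimers":          # preset profile adjustment
            t["max_minutes_stationary"] = min(t["max_minutes_stationary"], 5.0)
        return t

    base = {"max_minutes_stationary": 10.0, "geofence_radius_m": 200.0}
    print(adjusted_thresholds(base, profile="alzheimers", hour=21, crowded=True))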
In some embodiments, the system associated with a vehicle may detect a root cause of a detected disorientation event (e.g., to update a profile to use in detecting a disorientation event and/or to generate instructions in response to a disorientation event). The system may identify conditions that may have caused the disorientation event, such as location, time of day, biometric data, health conditions, environmental conditions, length of time to complete a task, and the like. The instructions generated by the system may indicate the root cause conditions to the appropriate parties. The system also may avoid the root cause conditions in future situations. For example, the system may set threshold amounts of time to complete tasks to reduce the risk of a person becoming disoriented during a time that is too long, may avoid generating directions using certain locations and/or environments (e.g., low-lighting environments, crowded environments, etc.), and the like. When instructions for a user require the root cause conditions (e.g., a longer trip, a crowded location, etc.), the system may generate instructions indicating that the user should travel with a companion to avoid or assist with a disorientation event.
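As a non-limiting illustration of identifying root cause conditions, the following Python sketch counts which conditions have co-occurred with past disorientation events in a hypothetical event log; the condition labels and the minimum-count rule are assumptions made for clarity.

    from collections import Counter

    def likely_root_causes(event_log, min_count=2):
        """Hypothetical sketch: count which conditions co-occur with past
        disorientation events and flag the most frequent ones to avoid later."""
        counts = Counter()
        for event in event_log:
            counts.update(event.get("conditions", []))
        return [cond for cond, n in counts.most_common() if n >= min_count]

    log = [
        {"conditions": ["low_lighting", "crowded"]},
        {"conditions": ["crowded", "long_task_duration"]},
        {"conditions": ["crowded"]},
    ]
    print(likely_root_causes(log))  # ['crowded']

Conditions flagged in this way could then be avoided when generating future directions, or could trigger the companion recommendation described above.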
In some embodiments, the vehicle may not be autonomous, and the person being monitored for a disorientation event may be the driver. The image data, location data, and/or sensor data of the driver may be used similarly to detect whether the driver is confused, is not using an expected route, is not within a threshold distance of an expected location given an expected route, or has biometric data indicating that the person is having an event (e.g., cardiac arrest, fatigue, etc.). The instructions may be generated and presented using an in-vehicle system while the person is driving.
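By way of a non-limiting example of such driver-focused checks, the following Python sketch flags an unexpected route and out-of-range vital signs; the threshold values and parameter names are hypothetical.

    def check_driver(route_deviation_m, heart_rate_bpm,
                     max_deviation_m=500.0, hr_low=45.0, hr_high=130.0):
        """Hypothetical sketch of in-vehicle checks for a non-autonomous vehicle:
        flag an unexpected route and out-of-range vital signs for the driver."""
        alerts = []
        if route_deviation_m > max_deviation_m:
            alerts.append("off_expected_route")
        if not (hr_low <= heart_rate_bpm <= hr_high):
            alerts.append("vital_signs_out_of_range")
        return alerts

    print(check_driver(route_deviation_m=800.0, heart_rate_bpm=150.0))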
In some embodiments, the system associated with a vehicle may include processing, communication, and sensor devices for detecting disorientation events, receiving user inputs, generating maps and directions, presenting maps and directions, generating and presenting instructions, and sending instructions to be presented by one or more devices. For example, the vehicle's hardware and software may perform the detection, processing, sending, and presentation of data, and/or a remote system (e.g., a server-based system) may be in communication with the vehicle to receive data from the vehicle and/or user devices, to analyze the data to detect events, and to generate instructions to be sent to the vehicle and/or other devices for presentation.
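The following Python sketch illustrates, in a non-limiting way, the division of processing described above, in which raw data may be forwarded to a remote (server-based) system when it is reachable and otherwise analyzed in-vehicle; the function names and stub callbacks are assumptions used only for illustration.

    import json

    def handle_sensor_data(payload, remote_available, send_remote, analyze_local):
        """Hypothetical sketch of the split described above: forward raw data to a
        remote (server-based) system when reachable, otherwise analyze in-vehicle."""
        if remote_available:
            return send_remote(json.dumps(payload))   # remote system detects events
        return analyze_local(payload)                 # vehicle detects events locally

    # Stub callbacks used only for illustration.
    result = handle_sensor_data(
        {"heart_rate_bpm": 110},
        remote_available=False,
        send_remote=lambda msg: "sent",
        analyze_local=lambda data: "analyzed_on_vehicle",
    )
    print(result)  # analyzed_on_vehicle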
Referring to
Still referring to
Still referring to
In some embodiments, a disorientation event of the passenger 106 may include memory loss, inability to navigate to a location (e.g., the physical location 118 and/or any of the locations 119), being at a location (e.g., the physical location 118 and/or any of the locations 119) for longer than a threshold time, being stationary or within a small area for longer than a threshold time (e.g., based on location data of the device 112 and/or device motion data, such as accelerometer data, of the device 112), moving at a speed lower than a threshold, being outside of a specified location/boundary (e.g., based on geographical coordinates of the device 112, geo-fencing, etc.), having vital signs that are above or below respective thresholds (e.g., indicative of fatigue, stress, etc. as indicated by sensor data of the device 112 or other devices, such as shown in
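As a non-limiting illustration of the geo-fencing criterion noted above, the following Python sketch approximates whether the device of the passenger 106 is outside a circular geo-fence around an expected location; the coordinates, radius, and approximation method are hypothetical assumptions.

    import math

    def outside_geofence(lat, lon, center_lat, center_lon, radius_m):
        """Hypothetical sketch: approximate check of whether the device of the
        passenger is outside a circular geo-fence around the expected location."""
        # Equirectangular approximation is adequate for small distances.
        meters_per_deg = 111_320.0
        dx = (lon - center_lon) * meters_per_deg * math.cos(math.radians(center_lat))
        dy = (lat - center_lat) * meters_per_deg
        return math.hypot(dx, dy) > radius_m

    # Passenger roughly 300 m north of the store entrance, fence radius 200 m.
    print(outside_geofence(37.7776, -122.4194, 37.7749, -122.4194, 200.0))  # True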
In some embodiments, detection of a disorientation event may use a combination of image data (e.g., from the images 111), device location data, device motion data, and/or biometric sensor data. For example, with user consent and in accordance with relevant laws, the passenger's location may be monitored using device location data (e.g., global navigation satellite system data, Wi-Fi data, Bluetooth data, etc.), a user's state of being may be monitored using the images 111 and/or sensor data (e.g., images used for analysis to detect facial expressions, items in the passenger's possession, attire, gait, injuries, and the like, and biometric sensor data such as body temperature, heartrate, breathing rate, pulse, etc.), and the passenger's activity may be monitored using device motion data (e.g., accelerometer data indicative of the passenger 106 falling down or moving in a manner that is unexpected or indicative of stress).
In some embodiments, based on the detection of a disorientation event, the system associated with the vehicle 104 may generate instructions such as maps, directions to and from multiple locations, lists of tasks to complete and/or items to purchase or possess, expected time durations for a person to be at respective locations and/or whether those time durations have expired or allow for more time at a location, and user queries to confirm (e.g., with a return response or absence of a return response) whether a user is okay and/or has any queries regarding location, tasks, time duration, etc. The instructions also may be provided to other parties to inform them of the passenger's status and/or the maps, directions to and from multiple locations, lists of tasks to complete and/or items to purchase or possess, expected time durations for the passenger to be at respective locations and/or whether those time durations have expired or allow for more time at a location, and user queries to confirm whether the passenger is okay and/or has any queries regarding location, tasks, time duration, etc. The instructions may include any combination of audio and/or visual data (e.g., audio and/or video instructions, etc.).
In some embodiments, the instructions may be generated based on factors such as task completion rate (e.g., historical data indicative of how often a person, such as the passenger 106 or another person or group of persons, performs a task or arrives at a location within an amount of time) and/or the type and/or severity of a disorientation event (e.g., a cardiac event indicated by biometric sensor data may require sending an emergency medical team to a person, whereas a person who may need help remembering an event or directions to a location may need a reminder, map, directions, etc.). The instructions may break up trips (e.g., from one destination to at least one other destination) by adding rest time (e.g., elongating time periods for tasks/locations), adding tasks (e.g., food or bathroom breaks, etc.), adding destinations (e.g., for breaks, meals, etc.), or reducing destinations and/or tasks.
In some embodiments, the instructions and/or criteria used to detect a disorientation event may vary based on factors such as a time of day and/or environmental conditions (e.g., lighting, temperature, crowded or sparse areas, etc.). For example, at night, disorientation may be more severe, and disorientation may be more likely in crowded areas or certain types of venues (e.g., a grocery or department store) than other venues (e.g., based on venue type or size). In this manner, time of day and/or environmental conditions may alter threshold times, geo-fencing, and the like, with which to detect disorientation events, and also may alter instructions (e.g., directions to and from locations may avoid certain areas that are crowded, darker, or the like, in favor of less crowded areas, areas with better lighting, etc.).
Referring to
In some embodiments, the system associated with an autonomous vehicle may include processing, communication, and sensor devices for detecting disorientation events, receiving user inputs, generating maps and directions, presenting maps and directions, generating and presenting instructions, and sending instructions to be presented by one or more devices. For example, the vehicle's hardware and software may perform the detection, processing, sending, and presentation of data, and/or the remote system 204 may be in communication with the vehicle 202 to receive data from the vehicle 202 and/or the one or more devices 206, to analyze the data to detect events, and to generate instructions to be sent to the vehicle and/or other devices for presentation. The vehicle 202 and/or the remote system 204 may include components shown in
In one or more embodiments, the vehicle 202, the remote system 204, and/or the one or more devices 206 may include a personal computer (PC), a wearable wireless device (e.g., bracelet, watch, glasses, ring, etc.), a desktop computer, a mobile computer, a laptop computer, an Ultrabook™ computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, an internet of things (IoT) device, a sensor device, a PDA device, a handheld PDA device, an on-board device, an off-board device, a hybrid device (e.g., combining cellular phone functionalities with PDA device functionalities), a consumer device, a vehicular device, a non-vehicular device, a mobile or portable device, a non-mobile or non-portable device, a mobile phone, a cellular telephone, a PCS device, a PDA device which incorporates a wireless communication device, a mobile or portable GPS device, a DVB device, a relatively small computing device, a non-desktop computer, a video device, an audio device, an A/V device, a set-top-box (STB), a Blu-ray disc (BD) player, a BD recorder, a digital video disc (DVD) player, a high definition (HD) DVD player, a DVD recorder, a HD DVD recorder, a personal video recorder (PVR), a broadcast HD receiver, a video source, an audio source, a video sink, an audio sink, a stereo tuner, a broadcast radio receiver, a flat panel display, a personal media player (PMP), a digital video camera (DVC), a digital audio player, a speaker, an audio receiver, an audio amplifier, a gaming device, a data source, a data sink, a digital still camera (DSC), a media player, a smartphone, a television, a music player, or the like.
Referring to
At block 402, a device (e.g., the passenger assist device 519 of
At block 404, the device may generate directions from the vehicle (e.g., at the destination location) to a physical structure (e.g., the physical location 118 of
At block 408, the device may detect an image of the passenger outside of the vehicle at the destination location. The vehicle may capture one or more images (e.g., the images 111 of
At block 412, based on the user information identified from the image data and based on device location data of a device of the passenger (e.g., the device 112 of
In some embodiments, with reference to block 412, a disorientation event of a person may include memory loss, inability to navigate to a location, being at a location for longer than an expected time, being stationary or within a small area for longer than a threshold time, moving at a speed lower than a threshold, being outside of a specified location/boundary (e.g., based on geographical coordinates, geo-fencing, etc.), having vital signs that are above or below respective thresholds (e.g., indicative of fatigue, stress, etc.), taking too long to complete a task or set of tasks, and the like.
In some embodiments, with reference to block 412, detection of a disorientation event may use a combination of image data, device location data, device motion data, and/or biometric sensor data. For example, with user consent and in accordance with relevant laws, a user's location may be monitored using device location data (e.g., global navigation satellite system data, Wi-Fi data, Bluetooth data, etc.), a user's state of being may be monitored using images and/or sensor data (e.g., images used for analysis to detect facial expressions, items in a person's possession, attire, gait, injuries, and the like, and biometric sensor data such as body temperature, heartrate, breathing rate, pulse, etc., such as provided by the one or more devices 206 of
At block 414, the device may generate, based on the detection of the disorientation event, instructions to present to the passenger (and/or to another user as shown in
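A non-limiting end-to-end sketch of this flow, from monitoring the passenger after drop-off through generating and distributing instructions, is shown below in Python; the callback names and the stub implementations used in the usage example are hypothetical and stand in for the detection and presentation steps described above.

    def run_passenger_assist(trip, get_signals, detect, generate, present, notify):
        """Hypothetical end-to-end sketch of the flow described above: monitor the
        passenger after drop-off, and when a disorientation event is detected,
        generate instructions and present/send them."""
        signals = get_signals()
        indicators = detect(signals)
        if not indicators:
            return None
        package = generate(trip, indicators)
        present(package)   # e.g., audio/visual output on the passenger's device
        notify(package)    # e.g., update family, friends, or medical professionals
        return package

    # Usage with illustrative stubs only.
    result = run_passenger_assist(
        trip={"destination": "grocery store"},
        get_signals=lambda: {"minutes_stationary": 12},
        detect=lambda s: ["stationary_too_long"] if s["minutes_stationary"] > 10 else [],
        generate=lambda t, ind: {"directions": ["Return to the vehicle"], "reasons": ind},
        present=print,
        notify=print,
    )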
The examples presented herein are not meant to be limiting.
For example, the computing system 500 of
Processor bus 512, also known as the host bus or the front side bus, may be used to couple the processors 502-506, and a passenger assist device 519 (e.g., for facilitating any of the functions described with respect to
System interface 524 may be connected to the processor bus 512 to interface other components of the system 500 with the processor bus 512. For example, system interface 524 may include a memory controller 518 for interfacing a main memory 516 with the processor bus 512. The main memory 516 typically includes one or more memory cards and a control circuit (not shown). System interface 524 may also include an input/output (I/O) interface 520 to interface one or more I/O bridges 525 or I/O devices 530 with the processor bus 512. One or more I/O controllers and/or I/O devices may be connected with the I/O bus 526, such as I/O controller 528 and I/O device 530, as illustrated.
I/O device 530 may also include an input device (not shown), such as an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processors 502-506, and/or the passenger assist device 519. Another type of user input device includes cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processors 502-506, and for controlling cursor movement on the display device.
System 500 may include a dynamic storage device, referred to as main memory 516, or a random access memory (RAM) or other computer-readable devices coupled to the processor bus 512 for storing information and instructions to be executed by the processors 502-506 and/or the passenger assist device 519. Main memory 516 also may be used for storing temporary variables or other intermediate information during execution of instructions by the processors 502-506 and/or the passenger assist device 519. System 500 may include read-only memory (ROM) and/or other static storage device coupled to the processor bus 512 for storing static information and instructions for the processors 502-506 and/or the passenger assist device 519. The system outlined in
According to one embodiment, the above techniques may be performed by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in main memory 516. These instructions may be read into main memory 516 from another machine-readable medium, such as a storage device. Execution of the sequences of instructions contained in main memory 516 may cause processors 502-506 and/or the passenger assist device 519 to perform the process steps described herein. In alternative embodiments, circuitry may be used in place of or in combination with the software instructions. Thus, embodiments of the present disclosure may include both hardware and software components.
According to one embodiment, the processors 502-506 may implement machine learning models. For example, the processors 502-506 may support neural networks and/or other machine learning techniques used to operate the vehicle 202, the remote system 204, and/or the one or more devices 206 of
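As a non-limiting sketch of how such a model might be invoked, the following Python fragment uses a simple linear stand-in over image-derived features (e.g., facial expression and gait cues); the feature names, weights, and threshold are hypothetical and do not correspond to any particular trained model.

    def disorientation_score(features):
        """Hypothetical linear stand-in for a trained model; a real system might use
        a neural network over image and sensor data instead."""
        weights = {"facial_distress": 0.5, "gait_instability": 0.3, "unexpected_motion": 0.2}
        return sum(weights[k] * features.get(k, 0.0) for k in weights)

    features = {"facial_distress": 0.8, "gait_instability": 0.6, "unexpected_motion": 0.1}
    score = disorientation_score(features)
    print(score > 0.5)  # True -> treat as a possible disorientation event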
In one or more embodiments, the computer system 500 may perform any of the steps of the processes described with respect to
In one or more embodiments, the computer system 500 may include image devices 532 (e.g., cameras, such as to capture the images 111 of
In one or more embodiments, the computer system 500 may include an HMI 534 (e.g., corresponding to the infotainment system 302 of
Various embodiments may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable the performance of the operations described herein. The instructions may be in any suitable form, such as, but not limited to, source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory, etc.
A machine-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Such media may take the form of, but are not limited to, non-volatile media and volatile media and may include removable data storage media, non-removable data storage media, and/or external storage devices made available via a wired or wireless network architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components. Examples of removable data storage media include Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc Read-Only Memory (DVD-ROM), magneto-optical disks, flash drives, and the like. Examples of non-removable data storage media include internal magnetic hard disks, solid state devices (SSDs), and the like. The one or more memory devices (not shown) may include volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and/or non-volatile memory (e.g., read-only memory (ROM), flash memory, etc.).
Computer program products containing mechanisms to effectuate the systems and methods in accordance with the presently described technology may reside in main memory 516, which may be referred to as machine-readable media. It will be appreciated that machine-readable media may include any tangible non-transitory medium that is capable of storing or encoding instructions to perform any one or more of the operations of the present disclosure for execution by a machine or that is capable of storing or encoding data structures and/or modules utilized by or associated with such instructions. Machine-readable media may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more executable instructions or data structures.
Embodiments of the present disclosure include various steps, which are described in this specification. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware.
Various modifications and additions can be made to the exemplary embodiments discussed without departing from the scope of the present invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the described features. Accordingly, the scope of the present invention is intended to embrace all such alternatives, modifications, and variations together with all equivalents thereof.
The operations and processes described and shown above may be carried out or performed in any suitable order as desired in various implementations. Additionally, in certain implementations, at least a portion of the operations may be carried out in parallel. Furthermore, in certain implementations, fewer or more operations than those described may be performed.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
As used herein, unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object, merely indicates that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or any other manner.
It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.
Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure.
Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.