Using Vehicle Sensor Data to Monitor Pedestrian Health

Information

  • Patent Application
  • Publication Number
    20180137372
  • Date Filed
    December 01, 2016
  • Date Published
    May 17, 2018
Abstract
A system and method for monitoring pedestrians based upon information collected by a wide array of sensors already included in modern motor vehicles. Also included is a system for monitoring pedestrians by aggregating data collected by an array of vehicles.
Description
BACKGROUND

There are currently an estimated 260 million cars in the United States that annually drive a total of 3.2 trillion miles. Each modern car has upwards of 200 sensors. As a point of reference, the Sojourner Rover of the Mars Pathfinder mission had only 12 sensors, traveled a distance of just over 100 meters mapping the Martian surface, and generated 2.3 billion bits of information, including 16,500 pictures and 8.5 million measurements. There is therefore an unrealized potential to utilize the over 200 sensors on the 260 million cars to collect detailed information about our home planet.





BRIEF DESCRIPTION OF THE DRAWINGS

A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings wherein:



FIG. 1 is an example system that uses a network of vehicles to monitor pedestrians.



FIG. 2 is a communication diagram for a vehicle.



FIG. 3 is a block diagram of the vehicle computer.



FIG. 4 is a block diagram for a process of monitoring pedestrians.



FIG. 5 is an illustration of the “Bubbles of Vision” of a vehicle.



FIG. 6 is an illustration of the interaction of the “Bubbles of Vision” of two vehicles.



FIG. 7 is an illustration of normal pedestrian behavior.



FIG. 8 is an illustration of injured pedestrian behavior.



FIG. 9 is an illustration of hazardous pedestrian behavior.



FIG. 10 is an illustration of criminal pedestrian behavior.



FIG. 11 is a block diagram of the database server.



FIG. 12 is a block diagram for a process of monitoring pedestrians.



FIG. 13A is a thermal profile of a person.



FIG. 13B is a kinematic model of a person.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

A wide array of sensors is required for the operation of a modern motor vehicle. These sensors enable the vehicle to navigate, avoid collisions with other cars, and adjust the operating parameters of the drive systems. However, the data collected by these sensors is ephemeral, confined to the vehicle, and used only locally within the vehicle. The present disclosure provides a system which utilizes the data already being collected by the motor vehicle to convert the motor vehicle into a rolling laboratory for monitoring pedestrians. Further, the system aggregates the data collected from a plurality of vehicles so that differential measurements can be performed on the same pedestrian from multiple perspectives and over multiple time periods.


Advanced driver assistance systems (ADAS) automate and enhance the safety systems of a vehicle and provide a more pleasurable driving experience. Examples of ADAS currently available include Adaptive Cruise Control, Lane Departure Warning Systems, Blind Spot Detectors, and Hill Descent Control. In order to implement these systems, a wide array of sensors is required.


The present scheme includes a network of cars, each equipped with an ADAS, that constantly collect data about the environment surrounding each vehicle. This collected information is then analyzed by a vehicle computer. The vehicle computer determines if a pedestrian is displaying hazardous, criminal, injured or normal behavior. Then, based on the determined behavior, the computer may transmit data to a server and contact emergency service officials.



FIG. 1 depicts a diagram of an example system practicing the method of monitoring pedestrians. In the system, an array of vehicles 110A . . . 110B may be communicatively coupled to a database server 1100 and connected to the Internet 100 via a wireless channel 105. The wireless communication channels 105 may take the form of any wireless communication mechanism, such as LTE, 3G, WiMAX, etc.


Each vehicle in the array of vehicles 110A . . . 110B may contain a vehicle computer (VC) 300 that is communicatively coupled to a plurality of sensors 150. The sensors 150 may include thermal imagers, LIDAR, radar, ultrasonic sensors and high definition (HD) cameras. In addition, the sensors 150 may also include air quality, temperature, radiation, magnetic field and pressure sensors that are used to monitor various systems of the vehicle.


Both the array of vehicles 110A . . . 110B and the database server 1100 may communicate with emergency services providers 130 over the Internet. The emergency services providers 130 may include fire, police or medical services.


The communicative connections of the VC 300 are graphically shown in FIG. 2. The VC 300 is communicatively coupled to a user interface 230. The VC 300 may instruct the user interface 230 to display information stored in the memory 310 or storage 320 of the VC 300. In addition, the VC 300 may instruct the user interface 230 to display alert messages. The user interface 230 may include a touch screen that enables the user to input information to the VC 300. The user interface 230 may be a discrete device or integrated into an existing vehicle entertainment or navigation system.


The VC 300 may also be able to communicate with the Internet 100 via a wireless communication channel 105. A database server 1100 is also connected to the Internet 100 via communication channel 125. It should be understood that the Internet 100 may represent any network connection between respective components.


The VC 300 is also communicatively coupled to a real time communication interface 250. The real time communication interface 250 enables the VC 300 to access the Internet 100 over wireless communication channel 105. This enables the VC 300 to store and retrieve information stored in database server 1100 in real time. The real time communication interface 250 may include one or more antennas, receiving circuits, and transmitting circuits. The wireless communication channel 105 provides near real time communication between the VC 300 and the database server 1100 while the vehicle is in motion.


Additionally, the VC 300 may communicate with the Internet 100 through short range wireless interface 260 over wireless communication channel 210 via an access point 270. Wireless channel 210 may be 802.11 (WiFi), 802.15 (Bluetooth) or any similar technology. Access point 270 may be integrated in the charging unit of an electric vehicle, located at a gas refueling station, or be located in an owner's garage. The wireless channel 210 allows the VC 300 to quickly and cheaply transmit large amounts of data when the vehicle is not in motion and real time data transmission is not required.


When the VC 300 detects that the short range wireless interface 260 is connected to the Internet 100, the VC 300 transmits the data stored in storage 320 to the database server 1100 over wireless channel 210. The VC 300 may then delete the data stored in storage 320.
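
By way of illustration only, the following sketch (in Python) shows one way the channel-selection and purge logic described above might be organized. The helper names used here, such as short_range_connected() and upload(), are hypothetical placeholders and not part of the disclosed system.

    # Illustrative sketch of the channel-selection and purge logic described above.
    # The connectivity check and upload call are hypothetical placeholders.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Record:
        images: bytes
        time: float
        location: Tuple[float, float]

    @dataclass
    class VehicleComputer:
        storage: List[Record] = field(default_factory=list)

        def short_range_connected(self) -> bool:
            # Placeholder: true when access point 270 (e.g., a charging unit) is reachable.
            return False

        def upload(self, record: Record, channel: str) -> None:
            # Placeholder: transmit the record to database server 1100 over the named channel.
            print(f"uploading record captured at {record.time} via {channel}")

        def report_urgent(self, record: Record) -> None:
            # Urgent data (hazardous, criminal or injured behavior) goes out in near real
            # time over wireless channel 105 even while the vehicle is in motion.
            self.upload(record, channel="real_time_105")

        def flush_when_parked(self) -> None:
            # Bulk, non-urgent data waits for the cheap short range channel 210; the local
            # copy in storage 320 is then deleted, as described in the passage above.
            if self.short_range_connected():
                for record in self.storage:
                    self.upload(record, channel="short_range_210")
                self.storage.clear()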


The VC 300 may also be communicatively linked to a geolocating system 240. The geolocating system 240 is able to determine the location of the vehicle 110 based on a locating standard such as the Global Positioning System (GPS) or Galileo.


The VC 300 may also be communicatively linked to the plurality of sensors 150. The plurality of sensors may include one or more thermal imagers 210 and one or more high definition cameras 220. The thermal imager 210 may include any form of thermographic camera, such as a Forward Looking Infrared (FLIR) camera. The high definition cameras 220 may include any form of digital imaging device that captures images in the visible light spectrum.



FIG. 3 depicts a block diagram of the VC 300. The VC 300 includes an Input/Output interface 330. The Input/Output interface 330 may facilitate communication of data with the plurality of sensors 150, the user interface 230, the geolocating system 240, the real time communication interface 250 and the short range wireless interface 260. The VC 300 also includes a processor 330 that is communicatively linked to the Input/Output interface 330, the memory 310 and the storage 320. The storage 320 may be a hard disk drive, solid state drive or any similar technology for the nonvolatile storage and retrieval of data.



FIG. 4 depicts a method for monitoring pedestrians that may be implemented by the processor 330. A plurality of images is acquired (405) from the thermal imager 210 and the HD camera 220. For example, vehicle 110 will acquire the plurality of images of objects within Bubbles of Vision 515, 525 and 535. The Bubbles of Vision include areas directly in front of the vehicle 110, behind the vehicle and along the sides of the vehicle. The area alongside the vehicle may include other vehicle travel lanes, pedestrian sidewalks or any other area adjacent to the path of the vehicle. The acquired images are then analyzed (410) to determine if the images contain a pedestrian.


In an embodiment, pedestrians are detected based upon a comparison of thermal profiles. A human being has a unique thermal profile 1310, as shown in FIG. 13A. Methods of detecting a person based on a thermal profile are well known in the art. For instance, U.S. Pat. No. 8,355,839 for a "Vehicle vision system with night vision function", which is hereby incorporated herein by reference, teaches an example method that may be implemented. Based upon this unique thermal profile, the system is able to differentiate the movement of inanimate objects and animals from that of a person.
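
By way of illustration only, the following sketch (in Python, assuming NumPy and SciPy are available and that a calibrated thermal image is supplied as a two-dimensional array of temperatures in degrees Celsius) shows one simplified way such thermal-profile screening might be approximated. It is not the method of the incorporated patent, and the temperature band and size thresholds are illustrative assumptions.

    # Minimal sketch of thermal-profile screening: pixels in a human skin-temperature
    # band are grouped into blobs, and blobs of roughly person-like size and aspect
    # ratio are kept as pedestrian candidates.
    import numpy as np
    from scipy import ndimage

    def candidate_pedestrians(thermal_c: np.ndarray,
                              t_min: float = 28.0, t_max: float = 38.0,
                              min_pixels: int = 200) -> list:
        mask = (thermal_c >= t_min) & (thermal_c <= t_max)
        labels, n = ndimage.label(mask)
        candidates = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if ys.size < min_pixels:
                continue
            height = ys.max() - ys.min() + 1
            width = xs.max() - xs.min() + 1
            # A standing person is taller than wide; this crude ratio test separates
            # people from most vehicles, animals, and hot inanimate objects.
            if height > 1.5 * width:
                candidates.append((ys.min(), xs.min(), ys.max(), xs.max()))
        return candidates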


In another embodiment, pedestrians are determined to be present based upon the development of a kinematic model. A walking person has a unique kinematic profile 1320, as shown in FIG. 13B. Methods for detecting a pedestrian based on this unique kinematic profile are well known in the art. For instance, "Walking Pedestrian Recognition" by Curio et al. (Curio, C., J. Edelbrunner, T. Kalinke, C. Tzomakas, and W. Von Seelen. "Walking Pedestrian Recognition." IEEE Transactions on Intelligent Transportation Systems 1.3 (2000): 155-63), which is hereby incorporated herein by reference, teaches an example method of identifying a pedestrian based on a kinematic model that may be implemented by the system. The implementation of these methods enables the system to determine if a moving object is a pedestrian or some other object that does not require additional analysis.
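
By way of illustration only, the following sketch (in Python, assuming NumPy is available and that a tracked limb-angle signal has already been extracted from the image sequence) shows a much simpler periodicity check than the full kinematic models cited above: human gait exhibits a strong periodic component at roughly 1 to 2.5 Hz. The frequency band and peak ratio are illustrative assumptions.

    # Simplified gait-periodicity check on a tracked limb-angle time series.
    import numpy as np

    def looks_like_gait(leg_angle_deg: np.ndarray, fps: float,
                        f_lo: float = 1.0, f_hi: float = 2.5) -> bool:
        signal = leg_angle_deg - leg_angle_deg.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        if not band.any():
            return False
        # A dominant spectral peak inside the walking band suggests a pedestrian
        # rather than a rigidly moving object.
        return spectrum[band].max() > 3.0 * spectrum[freqs > 0].mean()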


If the images are determined to not contain a pedestrian, no further processing of the images is required (420), and the acquired images are stored in the storage 320. However, if one or more pedestrians are detected, the images are analyzed (415) to determine if the pedestrian's behavior matches a predetermined pedestrian behavior. Methods for determining pedestrian behavior that may be implemented by the system include "Framework for Real-Time Behavior Interpretation From Traffic Video" (Kumar, P., S. Ranganath, H. Weimin, and K. Sengupta. "Framework for Real-Time Behavior Interpretation From Traffic Video." IEEE Transactions on Intelligent Transportation Systems 6.1 (2005): 43-53) and "Pedestrian Protection Systems: Issues, Survey, and Challenges" (Gandhi, T., and M. M. Trivedi. "Pedestrian Protection Systems: Issues, Survey, and Challenges." IEEE Transactions on Intelligent Transportation Systems 8.3 (2007): 413-30), both of which are hereby incorporated herein by reference.
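
By way of illustration only, the following sketch (in Python) outlines the FIG. 4 decision flow described in the passages above and below: the behavior labels and the actions taken for each label follow the description, while the methods on the vc object (contains_pedestrian, classify_behavior, store, alert_driver, transmit_real_time, alert_emergency_services) are hypothetical placeholder names for the operations described in the text.

    # Sketch of the FIG. 4 decision flow; the classifier itself is a placeholder.
    from enum import Enum, auto

    class Behavior(Enum):
        NORMAL = auto()
        HAZARDOUS = auto()
        CRIMINAL = auto()
        INJURED = auto()

    def handle_frame(images, vc):
        if not vc.contains_pedestrian(images):          # step 410
            vc.store(images)                            # step 420
            return
        behavior = vc.classify_behavior(images)         # step 415
        if behavior is Behavior.NORMAL:
            vc.store(images)                            # step 460
        elif behavior is Behavior.HAZARDOUS:
            vc.alert_driver(behavior)                   # step 425
            vc.transmit_real_time(images)               # step 430
        else:  # CRIMINAL or INJURED
            vc.alert_driver(behavior)                   # steps 435 / 455
            vc.transmit_real_time(images)               # step 440
            vc.alert_emergency_services(images)         # step 450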


If the analysis of the images reveals that the pedestrian is engaged in hazardous behavior, the driver is alerted (425) via the user interface 230, and the acquired images, the time and the location of the vehicle 110 are transmitted (430) to the database server 1100 using the real time communication interface 250. Examples of hazardous behavior may include a child playing in traffic, a person jaywalking, or a person chasing after a ball.


For example, the vehicle 110 may acquire images of a small child because the small child is playing on a sidewalk or driveway adjacent to the roadway. The vehicle will acquire images of the small child because the sidewalk or driveway is located within the Bubble of Vision 515. In step 410, the small child will be identified as a pedestrian, and in step 415 the small child's behavior will be analyzed; the analysis may reveal that the child is playing with a ball. A small child playing with a ball adjacent to the traveling path of the vehicle 110 will be determined by the system to be a "Hazardous Behavior." Specifically, the system may recognize that a small child may suddenly run after a ball into the roadway. Accordingly, the system may alert the occupants of the vehicle (425) of the small child and transmit the data (430) to the database server 1100. The database server 1100 may use this information to notify other vehicles traveling in the area of the hazard of the child playing near the roadway.


If the analysis of the images (415) reveals potentially criminal behavior, the driver is alerted to the potentially criminal behavior (435) via the user interface 230. Additionally, the images, time, and location information are transmitted (440) to the database server 1100 using the real time communication interface 250. Further, emergency services 130 are alerted (450) using the real time communication interface 250. The alert to law enforcement may include the acquired images, the time and location information, as well as an identification of the suspected behavior. Potentially criminal behavior could include a physical assault, purse snatching, or the displaying of weapons. In addition, the criminal behavior may include drug dealing or prostitution.


Methods for analyzing an image to determine criminal behavior may include those described in U.S. Pat. No. 5,666,157 for an "Abnormality detection and surveillance system" and in "Crime Detection with ICA and Artificial Intelligent Approach" (Junoh, Ahmad Kadri, Muhammad Naufal Mansor, Alezar Mat Ya'acob, Farah Adibah Adnan, Syafawati Ab. Saad, and Nornadia Mohd Yazid. "Crime Detection with ICA and Artificial Intelligent Approach." AMR Advanced Materials Research 816-817 (2013): 616-22), which are hereby incorporated herein by reference.


For example, a vehicle 110 may acquire an image of a sidewalk located at a particular street corner as the vehicle is driving along a roadway. The vehicle will acquire images of the street corner because the street corner lies within the Bubble of Vision 515. The system may identify, in step 410, that a person is standing on the street corner. A person standing on a street corner is not by itself criminal behavior; therefore, a single observation of the person standing on the corner would be determined to be "Normal Behavior," no further processing would be required, and the information would be sent to the database server 1100 over the short range communication channel 290. However, if multiple vehicles observe the same person standing on the same particular street corner for an extended period of time (for example, greater than 30 minutes), the database server 1100 may identify this as criminal behavior. The system would identify this as criminal behavior because an individual standing on a street corner for an extended period of time is consistent with the person being a drug dealer. Once the potential drug dealer is identified, emergency services 130 may be contacted by the system in step 450.
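
By way of illustration only, the following sketch (in Python) shows one way the aggregation rule described in this example might be expressed on the database server: observations of the same pedestrian at the same corner are pooled across vehicles, and a span exceeding 30 minutes reported by more than one vehicle is flagged for review. The observation record fields are illustrative assumptions.

    # Sketch of the loitering rule described above.
    from collections import defaultdict

    LOITER_SECONDS = 30 * 60

    def flag_loitering(observations):
        # observations: iterable of dicts with 'person_id', 'corner_id',
        # 'vehicle_id', and 'timestamp' (seconds) keys.
        by_key = defaultdict(list)
        for obs in observations:
            by_key[(obs["person_id"], obs["corner_id"])].append(obs)
        flagged = []
        for (person, corner), group in by_key.items():
            times = sorted(o["timestamp"] for o in group)
            vehicles = {o["vehicle_id"] for o in group}
            if len(vehicles) > 1 and times[-1] - times[0] > LOITER_SECONDS:
                flagged.append((person, corner))
        return flagged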


If the analysis of the images (415) reveals potentially injured pedestrian behavior, the driver is alerted to the potentially injured individual (455) via the user interface 230. Additionally, the images, time, and location information are transmitted (440) to the database server 1100 using the real time communication interface 250. Further, emergency services 130 are alerted (450) using the real time communication interface 250. The alert to law enforcement may include the acquired images, the time and location information, as well as an identification of the suspected behavior. A potentially injured pedestrian may be identified by an individual falling, lying on the ground, or displaying a highly elevated thermal profile.


Example methods that may be implemented to determine that the pedestrian is injured may include "A Real-Time Wall Detection Method for Indoor Environments" (Moradi, Hadi, Jongmoo Choi, Eunyoung Kim, and Sukhan Lee. "A Real-Time Wall Detection Method for Indoor Environments." 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (2006)).


For example, a vehicle 110 may acquire an image of a sidewalk because it is located in the Bubble of Vision 515. A person who was previously walking on the sidewalk and who is having a heart attack would be included in the images that are acquired by the system. The system may determine that the person is having a heart attack based on their thermal profile or by detecting that the person is lying on the sidewalk. The system would determine that a person having a heart attack is displaying an "Injured Pedestrian Behavior." Accordingly, the system would alert the occupants of the vehicle (455) and send (450) an alert to emergency services 130.
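
By way of illustration only, the following sketch (in Python) expresses the two cues named above, a lying posture and an elevated thermal profile, as a crude screening function. The aspect-ratio and temperature thresholds are illustrative assumptions, not clinical values, and the bounding box and mean temperature are assumed to come from earlier detection steps.

    # Crude injured-pedestrian screening consistent with the cues named above.
    def possibly_injured(bbox, mean_temp_c: float) -> bool:
        top, left, bottom, right = bbox     # pedestrian bounding box in image pixels
        height = bottom - top
        width = right - left
        lying_down = width > 1.3 * height   # a person lying down is wider than tall
        elevated_thermal = mean_temp_c > 38.5
        return lying_down or elevated_thermal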


If the analysis of the images (415) reveals normal pedestrian behavior, no further processing is required (460), and the images are stored in the storage 320.



FIG. 5 depicts various "Bubbles of Vision" associated with the different sensors 150. For example, certain sensors have a higher resolution and a limited sensing distance 535 from the vehicle 110. Other sensors have a much longer sensing range but lower resolution 515. Yet other sensors operate at a medium sensing distance and resolution 525. Although only discrete Bubbles are shown, a person of ordinary skill would understand that any number of layers can be included. Further, the Bubbles are depicted as ovals merely for convenience, and the sensors 150 may produce sensing ranges of any shape.
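
By way of illustration only, the following sketch (in Python) pairs each sensing layer with a nominal range and resolution and selects the innermost Bubble of Vision that still covers a detected object. The numeric ranges are placeholders; actual ranges depend on the particular sensors 150 installed.

    # Sketch of selecting a sensing layer for an object at a given distance.
    BUBBLES = [
        ("inner_535",        30.0, "high resolution"),
        ("intermediate_525", 80.0, "medium resolution"),
        ("outer_515",       200.0, "low resolution"),
    ]

    def bubble_for(distance_m: float):
        for name, max_range_m, resolution in BUBBLES:
            if distance_m <= max_range_m:
                return name, resolution
        return None  # beyond all sensing layers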



FIG. 6 depicts the interaction of the "Bubbles of Vision" associated with two different vehicles 610A and 610B. Each vehicle has an associated inner Bubble of Vision 635A and 635B, outer Bubble of Vision 615A and 615B, and intermediate Bubble of Vision 625A and 625B. As a result of the overlapping Bubbles of Vision, multiple views and perspectives of an object can be measured. The multiple views and perspectives of the same object may be used to further identify the object or to calibrate the sensors on a particular vehicle relative to another vehicle.
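
By way of illustration only, the following sketch (in Python, assuming NumPy is available and that both vehicles report positions of the same objects in a shared world frame) shows the differential measurement idea in its simplest form: the mean offset between the two vehicles' position estimates of shared objects can serve as a relative calibration correction.

    # Sketch of a relative calibration offset from overlapping observations.
    import numpy as np

    def relative_bias(obs_a: np.ndarray, obs_b: np.ndarray) -> np.ndarray:
        # obs_a, obs_b: (N, 2) arrays of the same N objects' positions as estimated
        # by vehicle A and vehicle B, respectively, in a shared world frame.
        return (obs_a - obs_b).mean(axis=0)  # mean offset of B's sensors relative to A's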



FIG. 7 shows an illustration of normal pedestrian behavior. In this illustration, four vehicles 710A, 710B, 710C and 710D are depicted. Each vehicle includes an outer Bubble of Vision 715A, 715B, 715C and 715D, respectively. Pedestrian 750A is located within the Bubble of Vision 715C of vehicle 710C. Accordingly, in Step 410 the images acquired by the sensors will be determined to contain a pedestrian. In Step 415, pedestrian 750A's behavior would be analyzed and determined to be normal behavior because the pedestrian 750A is safely walking parallel to the flow of traffic. Similarly, pedestrian 750B is within the Bubble of Vision 715D of vehicle 710D. Again, since the pedestrian 750B is safely walking parallel to the flow of traffic, the pedestrian 750B would be determined to be displaying normal behavior in step 460. Since neither pedestrian 750A nor 750B is within vehicle 710A's or 710B's Bubble of Vision 715A or 715B, respectively, in step 410 both vehicles would determine that no pedestrians were contained in the images, no further processing would be required, and the images would be stored in the storage 320.


The injured pedestrian behavior is illustrated in FIG. 8. In this illustration, pedestrian 850B is within Bubble of Vision 715D. Therefore, in Step 410 the images would be determined to contain a pedestrian. When the images were analyzed, the pedestrian 850B would be determined to be displaying injured behaviors using any one or combination of the methods previously described. Specifically, the pedestrian 850B is depicted as lying on the ground. This may have been caused by a traumatic medical event such as a heart attack, a trip and fall, or an assault. As a result, vehicle 710D would alert the passengers of the vehicle (Step 455), transmit the data to the server (Step 440), and alert emergency services (step 450).


Similarly, pedestrian 850A, who is within the Bubble of Vision 715C, is depicted in the process of falling. Accordingly, vehicle 710C would detect the pedestrian (step 410), detect injured behavior (step 415) using any one or combination of the methods previously described, alert the passengers of the vehicle (Step 455), transmit the data to the server (Step 440), and alert emergency services (step 450).


In FIG. 9, examples of hazardous pedestrian behaviors are illustrated. In this illustration, pedestrian 950B is a small child playing with a balloon. The child is located within the Bubbles of Vision 715A and 715B. Accordingly, both vehicles 710A and 710B will detect a pedestrian (step 410), and hazardous pedestrian behavior would be determined (step 415) using any one or combination of the methods previously described. Pedestrian 950B would be determined to display hazardous behavior because 950B is a small child playing with a toy close to the passing vehicles. As a result, vehicles 710A and 710B would alert the passengers of the vehicles (Step 455), transmit the data to the server (Step 440), and alert emergency services (step 450).


Also shown in FIG. 9 is pedestrian 950A, who is walking perpendicular to the flow of traffic and is, in fact, walking directly in front of vehicle 710D. As a result, vehicle 710D would determine that the acquired images contained a pedestrian (step 410) and that pedestrian 950A was displaying hazardous behaviors using any one or combination of the methods previously described. Accordingly, vehicle 710D would alert the passengers of the vehicle (Step 445), transmit the data to the server (Step 440), and alert emergency services (step 450).



FIG. 10 shows an illustration of criminal pedestrian behavior. Pedestrians 1050C are located inside Bubble of Vision 715D. Therefore, vehicle 710D would determine that the acquired images contained pedestrians (step 410). The vehicle 710D would further determine that pedestrians 1050C are engaged in criminal behavior using any one or combination of the methods previously described. Specifically, the vehicle 710D would determine that pedestrians 1050C are engaged in a larceny, specifically a purse snatching. As a result, vehicle 710D would alert the passengers of the vehicle (Step 435), transmit the data to the server (Step 440), and alert emergency services (step 450).


Vehicle 710D would also detect pedestrians 1050B and would determine, using any one or combination of the methods previously described, that pedestrians 1050B's behavior is criminal. Specifically, vehicle 710D would determine that pedestrians 1050B are engaged in an assault. Accordingly, vehicle 710D would alert the passengers of the vehicle (Step 435), transmit the data to the server (Step 440), and alert emergency services (step 450).


Vehicle 710C would detect (step 410) pedestrian 1050A because the pedestrian 1050A is within Bubble of Vision 715C. The vehicle 710C would then analyze pedestrian 1050A's behavior using any one or combination of the methods previously described and determine (Step 415) that it is consistent with illegal commercial transactions. For instance, vehicle 710C may be able to identify an illegal drug sale or prostitution. As a result, vehicle 710C would alert the passengers of the vehicle (Step 435), transmit the data to the server (Step 440), and alert emergency services (step 450).



FIG. 11 depicts the components of the database server 1100. The database server 1100 may include a memory 1110, a communication interface 1130, storage 1120 and a processor 1140. The processor 1140 is able to transmit and receive information from the Internet 100 via the communication interface 1130. In addition, the processor 1140 is able to store data received via the communication interface 1130.



FIG. 12 is a block diagram for the process implemented by the database server 1100 for monitoring pedestrians based on data acquired from the array of vehicles 110a . . . 110n. Data acquired by the plurality of vehicles is received (1205) from the individual vehicles via the real time communication channel 105 and the short range communication channel 290. The data may include the raw data collected by the plurality of sensors 150, thermal images acquired by the thermal imager 210, high definition images captured by the HD camera 220, geolocation data determined by the geolocating system 240 and data indicating when the information was recorded. In addition, the data may include identifiers that identify which vehicle 110 from the array of vehicles 110a . . . 110n acquired the data.


The received data is then aggregated (1210) based on the location where the data was collected and the time when it was collected. The aggregated data is then analyzed (1215) to determine if a predetermined pedestrian behavior is detected. In the event that the analysis reveals only normal pedestrian behavior, no further action is taken (1225). If the result of the analysis 1215 is that hazardous behavior is detected, such as jaywalking pedestrians or children playing near the roadway, an alert is sent to emergency services 130. Emergency services may use this post hoc analysis to determine how to allocate policing resources to address the detected behavior.
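
By way of illustration only, the following sketch (in Python) shows one way the aggregation of step 1210 might be organized: incoming records are grouped into spatial and temporal buckets so that observations of the same scene from different vehicles are analyzed together. The record fields and the bucket sizes (roughly city-block and five-minute bins) are illustrative assumptions.

    # Sketch of step 1210: grouping records by location and time of collection.
    from collections import defaultdict

    def aggregate(records, cell_deg: float = 0.001, window_s: int = 300):
        # records: iterable of dicts with 'lat', 'lon', 'timestamp', and 'payload' keys.
        buckets = defaultdict(list)
        for rec in records:
            key = (round(rec["lat"] / cell_deg),
                   round(rec["lon"] / cell_deg),
                   int(rec["timestamp"] // window_s))
            buckets[key].append(rec["payload"])
        return buckets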


Similarly, if the analysis 1215, using any one or combination of the methods previously described, determines potentially criminal behavior, emergency services are alerted (1220). By aggregating the data over an extended period of time and from many vehicles, the database server 1100 may be able to identify criminal behaviors that an individual vehicle may miss. For instance, an individual standing on a street corner is not by itself suspicious. However, if that individual is observed standing on the same street corner by multiple vehicles over an extended period of time or on successive days, this behavior may be indicative of criminal activity.


If the analysis 1215 detects injured behavior based on the aggregated data, the database server 1100 also sends (1220) an alert to emergency services 130. The transmitted alert may be useful in determining the cause of, and potential liability for, the pedestrian's injury.



FIG. 13A depicts the thermal profile 1310 of a pedestrian. This thermal profile may be used by the processor 330 to determine if the acquired image contains a pedestrian (step 410).



FIG. 13B depicts a kinematic model 1320 of a pedestrian. This kinematic model may be used by the processor 330 to determine if the acquired image contains a pedestrian (step 410).


Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, any of the steps described above may be automatically performed by either the VC 300 or database server 1100.


Furthermore, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and non-transitory computer-readable storage media. Examples of non-transitory computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media, such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).

Claims
  • 1. An apparatus for monitoring pedestrian health comprising: one or more thermal imagers, one or more high definition imagers, a real time communication interface, a short range communication interface, and a vehicle computer communicatively coupled to the one or more thermal imagers, the one or more high definition imagers, the real time communication interface and the short range communication interface; wherein the vehicle computer: acquires a plurality of thermal images from the one or more thermal imagers, acquires a plurality of high definition images from the one or more high definition imagers, determines if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians, selectively displays an alert on a display communicatively coupled to the vehicle computer based on the plurality of thermal images and the plurality of high definition images containing one or more pedestrians, and selectively transmits via the real time interface the plurality of thermal images and the plurality of high definition images to a database server based on the plurality of thermal images and the plurality of high definition images containing one or more pedestrians.
  • 2. The apparatus of claim 1, wherein the vehicle computer further: selectively transmits via the short range communication interface the plurality of thermal images and the plurality of high definition images to the database server on a condition that the plurality of thermal images and the plurality of high definition images do not contain one or more pedestrians.
  • 3. The apparatus of claim 1, wherein the vehicle computer further: if the plurality of thermal images and the plurality of high definition images are determined to contain one or more pedestrians, analyzes the plurality of thermal images and the plurality of high definition images to determine if the one or more pedestrians is displaying a predetermined pedestrian behavior; and wherein the selectively transmits via the real time interface is further based on the one or more pedestrians displaying the predetermined behavior.
  • 4. The apparatus of claim 3, wherein the predetermined behavior is selected from the group containing hazard behavior, injured behavior, and criminal behavior.
  • 5. The apparatus of claim 1, wherein the vehicle computer further: determines if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians based on a thermal signature of the one or more pedestrians.
  • 6. The apparatus of claim 1, wherein the vehicle computer further: determines if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians based on a kinematic profile of the one or more pedestrians.
  • 7. A method for monitoring pedestrian health comprising: acquiring, by a vehicle computer, a plurality of thermal images from one or more thermal imagers; acquiring, by the vehicle computer, a plurality of high definition images from one or more high definition imagers; determining, by the vehicle computer, if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians; selectively displaying, by the vehicle computer, an alert on a display communicatively coupled to the vehicle computer based on the plurality of thermal images and the plurality of high definition images containing one or more pedestrians; and selectively transmitting, via a real time interface of the vehicle computer, the plurality of thermal images and the plurality of high definition images to a database server based on the images containing one or more pedestrians.
  • 8. The method of claim 7 further comprising: selectively transmitting via a short range communication interface of the vehicle computer the plurality of thermal images and the plurality of high definition images to the database server on a condition that the images do not contain one or more pedestrians.
  • 9. The method of claim 7 further comprising: if the plurality of thermal images and the plurality of high definition images are determined to contain one or more pedestrians, analyzing, by the vehicle computer, the plurality of thermal images and the plurality of high definition images to determine if the one or more pedestrians is displaying a predetermined pedestrian behavior; and wherein the selectively transmitting via the real time interface is further based on the one or more pedestrians displaying the predetermined behavior.
  • 10. The method of claim 9, wherein the predetermined behavior is selected from the group containing hazard behavior, injured behavior, and criminal behavior.
  • 11. The method of claim 7 further comprising: determining, by the vehicle computer, if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians based on a thermal signature of the one or more pedestrians.
  • 12. The method of claim 7 further comprising: determining, by the vehicle computer, if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians based on a kinematic profile of the one or more pedestrians.
  • 13. A system for monitoring pedestrian health comprising: a plurality of vehicles, wherein each of the plurality of vehicles includes: one or more thermal imagers, one or more high definition imagers, a real time communication interface, a short range communication interface, and a vehicle computer communicatively coupled to the one or more thermal imagers, the one or more high definition imagers, the real time communication interface and the short range communication interface; and a database server communicatively coupled to the plurality of vehicles, wherein the database server includes: a communication interface, a memory, storage, and a processor communicatively coupled to the memory, the storage and the communication interface; wherein the processor of the database server: receives, via the communication interface, a plurality of images from the plurality of vehicles, wherein the plurality of images include images acquired by the one or more thermal imagers and the one or more high definition imagers, aggregates the plurality of images based on geolocation information and temporal information provided by the plurality of vehicles to form aggregated data, analyzes the aggregated data to determine pedestrian behavior, and selectively alerts emergency service providers based on the determined pedestrian behavior.
  • 14. The system of claim 13, wherein each of the plurality of vehicles: acquires a plurality of thermal images from the one or more thermal imagers, acquires a plurality of high definition images from the one or more high definition imagers, determines if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians, and selectively transmits via the real time interface the plurality of thermal images and the plurality of high definition images to the database server based on the images containing one or more pedestrians.
  • 15. The system of claim 14, wherein each of the plurality of vehicles further: selectively transmits via the short range communication interface the plurality of thermal images and the plurality of high definition images to the database server on a condition that the images do not contain one or more pedestrians.
  • 16. The system of claim 13, wherein the determined pedestrian behavior is selected from the group containing hazard behavior, injured behavior, and criminal behavior.
  • 17. The system of claim 15, wherein the plurality of vehicles further: if the plurality of thermal images and the plurality of high definition images are determined to contain one or more pedestrians, analyze the plurality of thermal images and the plurality of high definition images to determine if the one or more pedestrians is displaying a predetermined behavior; and wherein the selectively transmit via the real time interface is further based on the one or more pedestrians displaying the predetermined behavior.
  • 18. The system of claim 17, wherein the predetermined behavior is selected from the group containing hazard behavior, injured behavior, and criminal behavior.
  • 19. The system of claim 14, wherein the plurality of vehicles determine if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians based on a thermal signature of the one or more pedestrians.
  • 20. The system of claim 14, wherein the plurality of vehicles determine if the plurality of thermal images and the plurality of high definition images contain one or more pedestrians based on a kinematic profile of the one or more pedestrians.
Parent Case Info

This application claims the benefit of U.S. Provisional Application No. 62/420,985 having a filing date of Nov. 11, 2016 which is incorporated by reference as if fully set forth.

Provisional Applications (1)
Number Date Country
62420985 Nov 2016 US