In high-stress and oftentimes hazardous work environments (including firefighting, search and rescue, oil and gas, surgery, fighter piloting, mining, special operations, and the like), one false step has critical consequences, but so do too many slow steps. Go too fast and something life-threatening may be missed; go too slow and the results could be doubly devastating. The challenges of effectively and safely performing critical work in harsh and obscured environments have always existed. These challenges combine the physical strain imposed by hazardous terrain with the mental distress placed upon the individual operating within it. Critical human performance in high-stress environments is limited by how rapidly and effectively the brain can process impoverished or jumbled sensory inputs. Until now, technology has been leveraged primarily to increase the amount of information provided to the senses, but has not been designed specifically to enhance the brain's existing (and unmatched) cognitive ability to make sense of that information.
For example, several emergency response systems are centered on the use of thermal imaging cameras (TICs) and augmented reality (AR) optics to provide a hands-free thermal display to the user. Current systems are typically carried by a crewmember who must iteratively scan, mentally process, and communicate what they perceive. Current handheld and hands-free TICs lack the computational resources and software required to unobtrusively offer advanced image processing and data visualization features to all crewmembers in real time. This capability and time gap in the visual understanding of hazardous environments has been identified as a significant causative factor in responder line-of-duty deaths. Such systems cause crewmembers, such as first responders, to operate in a Stop, Look, Process and Remember paradigm, which is cumbersome and time consuming.
Accordingly, there is a need for improved methods and systems for integrating improved components, such as a TIC, with a government certified or compliant face mask, such as a self-contained breathing apparatus (SCBA), in a manner such that the SCBA retains its certification after the integration.
The exemplary embodiment provides, in a cognitive load reducing platform, a retrofittable mount system for a mask having a mask window. A sensor is removably mounted to the mask to collect information about an environment as sensor data. The sensor is removably mounted to the mask with a first mount mechanism that does not penetrate the mask window. A processor is coupled to the sensor, wherein the processor executes one or more cognitive enhancement engines to process the sensor data from the sensor into enhanced characterization data. An output device is removably mounted to the mask with a second mount mechanism without penetrating the mask window. The output device electronically receives the enhanced characterization data from the processor and communicates the enhanced characterization data to a wearer of the mask. The enhanced characterization data is integrated into the natural senses of the wearer and optimized for the performance of a specific task of the wearer to reduce the cognitive load of the wearer.
According to the method and system disclosed herein, once the components of the cognitive load reducing platform are integrated with a government certified or compliant face mask, such as a self-contained breathing apparatus (SCBA), for example, the nature of the noninvasive integration ensures that the SCBA retains its certification.
The exemplary embodiment relates to a retrofittable mask mount system for a cognitive load reducing platform. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the exemplary embodiments and the generic principles and features described herein will be readily apparent. The exemplary embodiments are mainly described in terms of particular methods and systems provided in particular implementations. However, the methods and systems will operate effectively in other implementations. Phrases such as “exemplary embodiment”, “one embodiment” and “another embodiment” may refer to the same or different embodiments. The embodiments will be described with respect to systems and/or devices having certain components. However, the systems and/or devices may include more or fewer components than those shown, and variations in the arrangement and type of the components may be made without departing from the scope of the invention. The exemplary embodiments will also be described in the context of particular methods having certain steps. However, the methods and systems operate effectively for other methods having different and/or additional steps, and steps in different orders, that are not inconsistent with the exemplary embodiments. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
In many critical, high-stress activities, such as firefighting, specialized tools have been developed to support the challenging environments and critical objectives of crewmembers engaged in those activities. For the most part, these tools have evolved to support the crewmembers' physical needs: heat protection, airway protection, forcible entry, fire suppression, and the like. In the past 10-15 years, a greater focus has been placed on supporting the crewmembers' informational needs, including hazardous environment detection, communication, and safety alerting. For example, hearing aids, binoculars, and seismic sensors all increase the collection of information, but do not increase the crewmembers' ability to process or critically discern that extra information. Polarized glasses, gas monitors, thermal imagers, and the like all collect information, but still do not address the time and stress penalty required to absorb and interpret all that information. This “more is better” approach is both distracting and inefficient.
Unfortunately, stress is often the limiting factor in crewmembers successfully completing these critical and dangerous activities. These are, by definition, high-stress environments, and the difficulty of absorbing more and more information is made worse by stress. The health of the crewmembers is also compromised by stress, which regrettably contributes to a majority of crewmember fatalities every year.
The exemplary embodiments are directed to a retrofittable mount system for a cognitive load reducing platform that leverages the principles of neuroscience and the tools of computer vision to reduce the cognitive load of a user and elevate human performance in high-stress environments. The principles of neuroscience are used to integrate sensor data into the natural senses (e.g., visual perception) of the user in a manner that is optimized for the task at hand, e.g., search and rescue, and computer vision supplies the means in one embodiment. The cognitive load reducing platform significantly enhances the crewmembers' or user's ability to make well-informed decisions rapidly when operating in complex environments where cognitive abilities decline. A premise of the cognitive load reducing platform is that if thinking and understanding are easier for crewmembers, then crewmembers can achieve objectives more rapidly, spend less time in harsh conditions, and have potentially reduced stress levels because of the real-time assurance or reinforcement of a human sense, i.e., vision, hearing, and/or touch. Example users of the cognitive load reducing platform include, but are not limited to, firefighters, surgeons, soldiers, police officers, search and rescue personnel, and other types of first responders.
The cognitive load reducing platform 10 comprises one or more sensors 12a-12n (collectively sensors 12) that collect information about an environment as sensor data. The information collected about the environment refers primarily to sensor data that can be used for navigation and detecting hazards, but also to a user's health status. In one embodiment, the sensors are worn by the crewmembers. For example, multiple sensors may be incorporated into a sensor package that is worn by one or more crewmembers. In another embodiment, other sensors may be remote from the crewmembers, such as on a drone equipped with a camera, toxicity detector, and the like.
Example categories of sensors include situational awareness sensors and biometric sensors for health status. The situational awareness sensors collect data about the user's external environment for environmental hazard detection and navigation. Examples of situational awareness sensors for hazard detection may include, but are not limited to: cameras (e.g., a TIC, a drone camera), a spectrometer, a photosensor, magnetometer, a seismometer, an acoustic sensor, a gas detector, a chemical sensor, a radiological sensor, a voltage detector, a flow sensor, a scale, a thermometer, a pressure sensor, and the like. Examples of situational awareness sensors used for user navigation may include, but are not limited to: an inertial measurement unit (IMU), a GPS sensor, a speedometer, a pedometer, an accelerometer, an altimeter, a barometer, an attitude indicator, a depth gauge, a compass (e.g., a fluxgate compass), a gyroscope, and the like. Examples of biometric sensors that measure health conditions/status of the user may include, but are not limited to: a heart rate sensor, a blood pressure monitor, a glucose sensor, an electrocardiogram (EKG or ECG) sensor, an electroencephalogram (EEG) sensor, an electromyography (EMG) sensor, a respiration sensor, and a neurological sensor.
The platform also includes a high-speed processor complex 14 coupled to the sensors 12. The high-speed processor complex 14 includes a memory 16, a communication interface 19, and one or more processors 18, such as graphics processor units (GPUs). The processor/GPUs 18 execute one or more software-based cognitive enhancement engines 20 to process the sensor data from the sensors 12 into enhanced characterization data that incorporates contextual and physiological visual, auditory, and/or haptic cues. The cognitive load reducing platform 10 is sensor agnostic, as any type of sensor can be added to the platform as long as a corresponding cognitive enhancement engine 20 is provided to process and present that sensor data.
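By way of a hypothetical illustration only (the class and method names below are not part of the disclosure), the sensor-agnostic pairing of sensor types with cognitive enhancement engines described above could be sketched as a simple registry, where the processor complex dispatches each sensor's raw data to its matching engine:

```python
# Illustrative sketch: each sensor type registers a matching cognitive
# enhancement engine, and the processor complex dispatches raw sensor
# data to that engine. Names and thresholds are hypothetical.

class EnhancementEngine:
    """Base class: turns raw sensor data into enhanced characterization data."""
    def process(self, sensor_data):
        raise NotImplementedError

class GasEngine(EnhancementEngine):
    """Reduces a raw gas reading to a single, glanceable cue."""
    def process(self, sensor_data):
        level = "DANGER" if sensor_data["ppm"] > 50 else "OK"
        return {"cue": level, "modality": "visual"}

class ProcessorComplex:
    def __init__(self):
        self._engines = {}

    def register(self, sensor_type, engine):
        # Any sensor can be added as long as a matching engine is provided.
        self._engines[sensor_type] = engine

    def handle(self, sensor_type, sensor_data):
        return self._engines[sensor_type].process(sensor_data)

complex_ = ProcessorComplex()
complex_.register("gas", GasEngine())
print(complex_.handle("gas", {"ppm": 80}))  # {'cue': 'DANGER', 'modality': 'visual'}
```

Adding support for a new sensor in this sketch amounts to registering one more engine, which mirrors the "sensor agnostic" property claimed for the platform.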
The cognitive load reducing platform 10 further includes one or more output devices 22 coupled to the processor complex 14 to electronically communicate the enhanced characterization data to the user such that the enhanced characterization data is integrated into the natural senses of the user in a manner that is optimized for the performance of a specific task of the user, to reduce the cognitive load of the user. In one embodiment, the output devices 22 may be implemented as a visual display, headphones/earbuds, and/or a haptic device.
Prior solutions increase the amount of information provided to the user's senses without specifically enhancing the brain's existing (and unmatched) cognitive ability to make sense of that information. The cognitive load reducing platform 10, in contrast, filters, summarizes, and focuses sensor data into the enhanced characterization data comprising contextual and physiological visuals, audio and/or haptic cues to create a new category called “Assisted Perception” that significantly reduces complexity and cognitive load (and accompanying stress)—and decreases Time-To-Clarity required to save lives. The cognitive load reducing platform 10 is designed to reduce risk, improve human safety, and save lives. The platform has shown performance improvements of 267% (e.g., reducing the time to complete mission critical search and rescue tasks from 4.5 mins to 1.7 mins).
The cognitive load reducing platform supports the introduction of life-saving, Assisted Perception solutions to high-stress environments. One example use of this new category of Assisted Perception is as a firefighting vision system. In this embodiment, the cognitive load reducing platform is a real-time computer vision engine designed to aid first responders as they navigate smoke filled, hazardous environments with little or no visible light. In this embodiment, the cognitive load reducing platform increases the speed and safety of first responders in the field with a focus upon navigation and visual communication applications. The Assisted Perception of the cognitive load reducing platform dramatically enhances one's ability to make well informed decisions rapidly when operating in complex environments where cognitive abilities decline.
Several emergency response systems are based on the use of a thermal camera and AR optics to provide a hands-free imaging system to the user. The cognitive load reducing platform, however, provides a novel integrated design of these hardware and software elements in a system that efficiently integrates into natural human visual perception in a manner that decreases stress in the field. In the first responder embodiment, the platform provides a unique combination of enhanced thermal imaging, augmented reality (AR), and environment visualization and mapping capabilities.
Each of the assisted perception modules 221 comprises a modular set of components including a TIC 212, a processor complex 214 in communication with the TIC 212 for executing an edge enhancement engine 220, and a display unit 222, which is removably attached to the mask 224. In relation to
In the embodiment shown, the display unit 222 may comprise an augmented reality (AR) display unit, a virtual reality (VR) display unit, or a head-mounted projection display unit. In the AR embodiment, the AR display unit may comprise optical see through glasses that can be either binocular or monocular, or optics integrated into the mask window.
As stated above, in one embodiment, the cognitive load reducing platform is a wearable electronic system. As such, there are many placement embodiments for the components of the cognitive load reducing platform. In most embodiments, all components are located on, or otherwise carried by, a user. For example,
In some embodiments, however, the sensors 12 and/or the processor complex 14 may be located remote from the user. As an example, consider the use case where a remote gas sensor controlled by a third party sends gas data to a cognitive enhancement engine 20 executed by the processor complex 14 for processing. In one embodiment, the gas sensor data from the remote gas sensor could be pushed to the cognitive load reducing platform, where the sensor data is processed locally by the corresponding cognitive enhancement engine 20. In another embodiment, however, the processor complex 14 may be implemented as a remote server in the cloud that wirelessly receives sensor data of various types. A third party could collect and push the gas sensor data into the cognitive load reducing platform in the cloud, where the processor complex 14 converts the data into a brain-optimized visual format sent for display to the user on the output device 22.
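The cloud-hosted variant described above can be sketched as follows; this is a hypothetical illustration only (the function names and JSON-like payloads are assumptions, not from the disclosure), with simple in-process queues standing in for the third-party uplink and the wireless downlink to the output device:

```python
# Illustrative sketch of the cloud-hosted variant: a third party pushes raw
# gas readings to the remote processor complex, which converts them into a
# brain-optimized visual payload for the wearer's output device.

from queue import Queue

uplink = Queue()    # third party -> cloud processor complex
downlink = Queue()  # cloud processor complex -> wearer's output device

def push_reading(ppm):
    """Third-party gas sensor pushes a raw reading (illustrative units)."""
    uplink.put({"sensor": "gas", "ppm": ppm})

def process_once():
    """Cloud-side cognitive enhancement engine: reduce the raw reading to
    a single glanceable cue rather than a number the wearer must interpret."""
    reading = uplink.get()
    if reading["ppm"] > 50:
        cue = {"icon": "gas-warning", "color": "red"}
    else:
        cue = {"icon": "gas-ok", "color": "green"}
    downlink.put(cue)

push_reading(80)
process_once()
print(downlink.get())  # {'icon': 'gas-warning', 'color': 'red'}
```

The point of the sketch is the division of labor: the third party only collects and pushes data, while all interpretation happens in the processor complex before anything reaches the wearer's display.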
There are also many communication embodiments for the components of the cognitive load reducing platform. For example, in the embodiment shown in
In one embodiment, the display unit 222 (including digital signal processing board 260, processing board 256, and antenna 258) is mounted inside the mask 224. However, in an alternative embodiment, the display unit 222 is mounted outside the mask 224. For example, the display itself may be positioned outside the mask 224, while the digital signal processing board 260, processing board 256 and antenna 258, may be worn by the user, such as being clipped to a belt or clothing, stowed in a pouch or a pocket, or attached to a back frame of the SCBA.
According to one aspect of the disclosed embodiments, the edge enhancement engine 220 in the firefighting embodiment performs high-speed processing on the thermal images from the TIC 212 to enhance the edges or outlines of objects and obstacles and projects the enhanced outlines as an AR image on the AR glasses/monocle in the user's field of view, so the user can see and effectively navigate in obscured conditions without overwhelming the user's ability to process the displayed information. The edge enhancement engine 220 provides a stream of visual information to the field of view of the wearer that increases the luminosity and contrast of edges in the image to appear as a decluttered, enhanced cartoon image. In this embodiment, the enhanced cartoon image produced by the platform dramatically enhances the ability of a user, such as a first responder (e.g., a firefighter or search and rescue personnel), to make well-informed decisions rapidly when operating in complex environments where cognitive abilities decline.
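The edge-enhancement step described above can be illustrated with a minimal sketch. The disclosure does not specify the edge operator; a Sobel gradient is assumed here for illustration, and the function name and threshold are hypothetical:

```python
import numpy as np

def enhance_edges(thermal, threshold=0.25):
    """Hypothetical sketch: extract edges from a thermal frame and render
    them as a high-contrast, decluttered outline image. A Sobel gradient
    is assumed; the disclosure does not specify the operator."""
    img = thermal.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)  # normalize to [0, 1]
    # Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    # Keep only strong edges at full luminosity; suppress flat regions,
    # yielding the decluttered "cartoon" outline described in the text.
    return np.where(mag > threshold * mag.max(), 1.0, 0.0)
```

Run on a synthetic frame containing one hot region, the result is black everywhere except a bright outline along the region's boundary, which is the "decluttered, enhanced cartoon image" effect described above (a production engine would of course operate on real TIC frames in real time).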
The Assisted Perception provided by the cognitive load reducing platform leverages the principles of neuroscience to enhance aggregated sensor data in real time to allow first responders to do their jobs significantly faster and more safely. The closest competitors to an infrared sensor-based, extreme-environment tool are the handheld or helmet-mounted infrared camera and display systems. However, none of these systems offer any context-specific interpretive processing of the output, nor are they designed as true augmented reality interfaces that reduce the cognitive load of the user.
Referring again to
Traditional emergency response tools to aid the incident commander focus upon the incident commander's ability to integrate information unavailable to the crewmembers, and to then communicate these insights via radio channels. In contrast, the cognitive load reducing platform allows the incident commander to see the moment-to-moment visual experience of their crewmembers and to communicate back to them using visual cues displayed to crewmembers equipped with assisted perception modules 221. Consequently, the connected nature of the platform (streaming visual data from the assisted perception modules 221 to the central command display device 228) elevates the safety of the entire workspace by providing a shared operating picture between individuals in the field and leaders monitoring workers from the periphery.
Retrofittable Mask Mount System
In one embodiment, the cognitive load reducing platform 10 may be implemented as an OEM-ready system that makes use of currently available SCBAs. Accordingly, the cognitive load reducing platform 10 further comprises a retrofittable mask mount system to allow components of the cognitive load reducing platform 10 to integrate with the face mask 224 of a SCBA, for example, without penetrating the mask or otherwise compromising certifiability of the mask. The retrofittable mask mount system also enables the components to reside in and around the face mask 224 in an ergonomic and balanced manner.
As used herein, the term SCBA is intended to include any type of breathing apparatus that may be worn by rescue workers, firefighters, cave/mine explorers, divers, industrial workers, medical staff, and others to provide breathable air in an atmosphere that is immediately dangerous to life or health (IDLH). Example types of SCBAs may include, but are not limited to, a breathing apparatus (BA), a compressed air breathing apparatus (CABA), and a self-contained underwater breathing apparatus (SCUBA). The main components of a conventional SCBA typically include a face mask, an inhalation connection (mouthpiece) and regulator hose, and a high-pressure tank mounted to a back frame.
In one embodiment, the first mount mechanism 520 for mounting the sensor 12 may fit on the existing mask frame 504, rather than on the mask window, so as to not impede vision. The first mount mechanism 520 may removably mount the sensor 12 to the mask 502 without the need for tools. In one embodiment, for example, the first mount mechanism 520 removably mounts the sensor 12 using any type of mechanical fastener that joins two (or more) objects or surfaces. In one embodiment, the first mount mechanism 520 uses a latch mechanism with negative surface matching. Other embodiments for the latch mechanism may include a spring-loaded connector, a magnetic snap, a hook-and-loop fastener, a built-in flexible compliant hinge, and a clamp, for instance. In one embodiment, the latch mechanism with negative surface matching includes a combination of pins and/or wedges. Other attachment mechanisms are possible. In one embodiment, the latch mechanism may be integrated to work with built-in quick release connectors on the mask frame 504.
The sensor 12 may include a protective housing enclosure for the sensor (against impact, heat, humidity, and vibration). The sensor 12 may incorporate a switch 213 (a conductive, electromechanical, or mechanical button) in the housing in some embodiments to allow the user to toggle between different processed sensory outputs based on user experience. The switch 213 is ergonomically placed on the sensor 12 based on the use case. In one embodiment, the sensor 12 may comprise the TIC 212.
The processor complex 214 receives sensor data collected by the sensor 12 and processes the sensor data into enhanced characterization data. In one embodiment, the processor complex 214 may be implemented with a ruggedized enclosure that is preferably heat, humidity, and impact resistant. The enclosure may be carried by a wearer on an item of clothing or on/in a back frame of the SCBA. Examples of items of clothing include a belt, jacket, or pants of the user. In another embodiment, the processor complex 214 may be implemented as a server located remote from the user, such as in the cloud.
The output device 22 is removably mounted to the face mask 502 and electronically receives the enhanced characterization data from the processor complex 214 and communicates the enhanced characterization data to a wearer of the face mask 502. In one aspect of the disclosed embodiments, the output device 22 is attached to the face mask 502 using a second mount mechanism 522 that does not penetrate the mask window. The output device 22 communicates the enhanced characterization data to the wearer such that the enhanced characterization data is integrated into the natural senses of the wearer and optimized for the performance of a specific task of the wearer to reduce the cognitive load of the wearer. In the embodiment where the output device 22 is a display device 222, the enhanced characterization data comprises a stream of visual images that is ergonomically aligned to the wearer's line of sight, with edges of objects in the images having increased luminosity and contrast (over baseline thermal images) so that they appear as decluttered, enhanced line drawings.
Due to the first and second mount mechanisms 520 and 522, once the cognitive load reducing platform is integrated with a SCBA that is government certified or compliant with applicable guidelines/standards, the nature of the non-invasive integration ensures that the SCBA retains its certification. Examples of such SCBA guidelines/standards include, but are not limited to: SCBA guidelines established by the National Fire Protection Association (NFPA), e.g., NFPA Standard 1981 for firefighting; National Institute for Occupational Safety and Health (NIOSH) certification for SCBAs that are used in chemical, biological, radiological, and nuclear (CBRN) environments; and the Personal Protective Equipment Directive (89/686/EEC) for SCBAs used in Europe (see European Standard EN 137:2006).
Regardless of whether the housing of the TIC 212 comprises a single component or multiple components, the first mount mechanism 520 uses a latch mechanism with negative surface matching in one embodiment. Other embodiments for the latch mechanism may include a spring-loaded connector, a hook-and-loop fastener, a built-in flexible compliant hinge, and a clamp, for instance. Compliant hinges/mechanisms are those that do not use a multi-part hinge but rather use flexible hinge mechanisms that take advantage of material properties to form the hinge. In one embodiment, the latch mechanism with negative surface matching includes a combination of pins and/or wedges, as described below.
According to a further embodiment, the face mask includes a first built-in connector on the outside of the mask frame to receive and mate with a matching connector on the sensor 12 or the processor complex. The face mask may also include a second built-in connector on the inside of the mask frame to receive and mate with a matching connector on the display unit. The first and second built-in connectors may be coupled to one another to provide a direct connection between the TIC/processor complex and the display unit.
According to the present embodiment, to affix or mount the display unit 222 inside the face mask 502 using the second attachment mechanism 522, a user slightly folds the left frame member 802 and/or the right frame member 804 inwards about the vertical axis of the flexible bridge 800 (step 1). The slightly folded display unit 222 is then inserted into the face mask 502 wherein once released the left frame member 802 and the right frame member 804 flex back to an original shape and press against the contours of the interior of mask window 502 (step 2). The user then releases pressure on the left frame member 802 and the right frame member 804 and the display unit 222 is held in place by spring-like pressure against the mask window 502 (step 3). Once the display unit 222 is mounted inside the face mask 502, the display unit 222 is implemented such that the flexible bridge 800, the left frame member 802 and the right frame member 804 do not affect an in-mask airflow path that keeps the mask visor glass cool.
In a first embodiment shown in
A method and system for implementing a cognitive load reducing platform with a retrofittable mount system has been disclosed. The present invention has been described in accordance with the embodiments shown, and there could be variations to the embodiments, and any variations would be within the spirit and scope of the present invention. For example, the exemplary embodiment can be implemented using hardware, software, a computer readable medium containing program instructions, or a combination thereof. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
This application claims the benefit of provisional Patent Application Ser. No. 62/758,438, filed Nov. 9, 2018, assigned to the assignee of the present application, and incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5778092 | MacLeod | Jul 1998 | A |
6195467 | Asimopoulos | Feb 2001 | B1 |
6611618 | Peli | Aug 2003 | B1 |
6891966 | Chen | May 2005 | B2 |
6898559 | Saitta | May 2005 | B2 |
6909539 | Korniski | Jun 2005 | B2 |
7085401 | Averbuch | Aug 2006 | B2 |
7190832 | Frost | Mar 2007 | B2 |
7369174 | Olita | May 2008 | B2 |
7377835 | Parkulo | May 2008 | B2 |
7430303 | Sefcik | Sep 2008 | B2 |
7460304 | Epstein | Dec 2008 | B1 |
7598856 | Nick | Oct 2009 | B1 |
8054170 | Brandt | Nov 2011 | B1 |
8358307 | Shiomi | Jan 2013 | B2 |
8463006 | Prokoski | Jun 2013 | B2 |
8836793 | Kriesel | Sep 2014 | B1 |
9177204 | Tiana | Nov 2015 | B1 |
9498013 | Handshaw | Nov 2016 | B2 |
9728006 | Varga | Aug 2017 | B2 |
9729767 | Longbotham | Aug 2017 | B2 |
9875430 | Keisler | Jan 2018 | B1 |
9918023 | Simolon | Mar 2018 | B2 |
9924116 | Chahine | Mar 2018 | B2 |
9930324 | Chahine | Mar 2018 | B2 |
9995936 | Macannuco | Jun 2018 | B1 |
9998687 | Lavoie | Jun 2018 | B2 |
10033944 | Högasten | Jul 2018 | B2 |
10042164 | Kuutti | Aug 2018 | B2 |
10044946 | Strandemar | Aug 2018 | B2 |
10089547 | Shemesh | Oct 2018 | B2 |
10091439 | Högasten | Oct 2018 | B2 |
10122944 | Nussmeier | Nov 2018 | B2 |
10182195 | Kostrzewa | Jan 2019 | B2 |
10192540 | Clarke | Jan 2019 | B2 |
10230909 | Kostrzewa | Mar 2019 | B2 |
10230910 | Boulanger | Mar 2019 | B2 |
10244190 | Boulanger | Mar 2019 | B2 |
10249032 | Strandemar | Apr 2019 | B2 |
10250822 | Terre | Apr 2019 | B2 |
10338800 | Rivers | Jul 2019 | B2 |
10417497 | Cossman | Sep 2019 | B1 |
10425603 | Kostrzewa | Sep 2019 | B2 |
10436887 | Stokes | Oct 2019 | B2 |
10598550 | Christel | Mar 2020 | B2 |
10623667 | Högasten | Apr 2020 | B2 |
10803553 | Foi | Oct 2020 | B2 |
10909660 | Egiazarian | Feb 2021 | B2 |
10937140 | Janssens | Mar 2021 | B2 |
10962420 | Simolon | Mar 2021 | B2 |
10983206 | Hawker | Apr 2021 | B2 |
10986288 | Kostrzewa | Apr 2021 | B2 |
10986338 | DeMuynck | Apr 2021 | B2 |
10996542 | Kostrzewa | May 2021 | B2 |
11010878 | Högasten | May 2021 | B2 |
11012648 | Kostrzewa | May 2021 | B2 |
11029211 | Frank | Jun 2021 | B2 |
20020020652 | Martinez | Feb 2002 | A1 |
20030122958 | Olita | Jul 2003 | A1 |
20030190090 | Beeman | Oct 2003 | A1 |
20050150028 | Broersma | Jul 2005 | A1 |
20060023966 | Vining | Feb 2006 | A1 |
20060048286 | Donato | Mar 2006 | A1 |
20070257934 | Doermann | Nov 2007 | A1 |
20080092043 | Trethewey | Apr 2008 | A1 |
20080146334 | Kil | Jun 2008 | A1 |
20110135156 | Chen | Jun 2011 | A1 |
20110239354 | Celona | Oct 2011 | A1 |
20110262053 | Strandemar | Oct 2011 | A1 |
20130050432 | Perez | Feb 2013 | A1 |
20130307875 | Anderson | Nov 2013 | A1 |
20140182593 | Duffy | Jul 2014 | A1 |
20150025917 | Stempora | Jan 2015 | A1 |
20150067513 | Zambetti | Mar 2015 | A1 |
20150163345 | Cornaby | Jun 2015 | A1 |
20150172545 | Szabo | Jun 2015 | A1 |
20150202962 | Habashima | Jul 2015 | A1 |
20150244946 | Agaian | Aug 2015 | A1 |
20150302654 | Arbouzov | Oct 2015 | A1 |
20150324989 | Smith | Nov 2015 | A1 |
20150334315 | Teich | Nov 2015 | A1 |
20150338915 | Publicover | Nov 2015 | A1 |
20150339570 | Scheffler | Nov 2015 | A1 |
20160097857 | Gokay | Apr 2016 | A1 |
20160187969 | Larsen | Jun 2016 | A1 |
20160260261 | Hsu | Sep 2016 | A1 |
20160295208 | Beall | Oct 2016 | A1 |
20160350906 | Meier | Dec 2016 | A1 |
20160360382 | Gross | Dec 2016 | A1 |
20170061663 | Johnson | Mar 2017 | A1 |
20170123211 | Lavoie | May 2017 | A1 |
20170192091 | Felix | Jul 2017 | A1 |
20170208260 | Terre | Jul 2017 | A1 |
20170224990 | Goldwasser | Aug 2017 | A1 |
20170251985 | Howard | Sep 2017 | A1 |
20180012470 | Kritzler | Jan 2018 | A1 |
20180029534 | De Wind | Feb 2018 | A1 |
20180165978 | Wood | Jun 2018 | A1 |
20180189957 | Sanchez Bermudez | Jul 2018 | A1 |
20180204364 | Hoffman | Jul 2018 | A1 |
20180205893 | Simolon | Jul 2018 | A1 |
20180241929 | Bouzaraa | Aug 2018 | A1 |
20180266886 | Frank | Sep 2018 | A1 |
20180283953 | Frank | Oct 2018 | A1 |
20180330474 | Mehta | Nov 2018 | A1 |
20190141261 | Högasten | May 2019 | A1 |
20190228513 | Strandemar | Jul 2019 | A1 |
20190231261 | Tzvieli | Aug 2019 | A1 |
20190325566 | Högasten | Oct 2019 | A1 |
20190335118 | Simolon | Oct 2019 | A1 |
20190342480 | Kostrzewa | Nov 2019 | A1 |
20190359300 | Johnson | Nov 2019 | A1 |
20200005440 | Sanchez-Monge | Jan 2020 | A1 |
20200090308 | Lin | Mar 2020 | A1 |
20200141807 | Poirier | May 2020 | A1 |
20200193652 | Hoffman | Jun 2020 | A1 |
20200327646 | Xu | Oct 2020 | A1 |
20200349354 | Cossman | Nov 2020 | A1 |
20200401143 | Johnson | Dec 2020 | A1 |
20210080260 | Tremblay | Mar 2021 | A1 |
Number | Date | Country |
---|---|---|
1168033 | Jan 2002 | EP |
1659890 | Jan 2009 | EP |
2017130184 | Aug 2017 | WO |
2018167771 | Sep 2018 | WO |
Entry |
---|
Patent Cooperation Treaty: International Search Report and Written Opinion for PCT/US2020/048636 dated Nov. 24, 2020; 20 pages. |
Khan et al., “Tracking Visual and Infrared Objects using Joint Riemannian Manifold Appearance and Affine Shaping Modeling” Dept. of Signals and Systems, Chalmers University of Technology, Gothenburg, 41296, Sweden; IEEE International Conference on Computer Vision Workshop (2011); pp. 1847-1854. |
Patent Cooperation Treaty: International Search Report and Written Opinion for PCT/US2019/058635 dated Jan. 15, 2020; 14 pages. |
Bretschneider et al., “Head Mounted Displays for Fire Fighters” 3rd International Forum on Applied Wearable Computing 2006; 15 pages. |
Chen, “Reducing Cognitive Load in Mobile Learning: Activity-centered Perspectives” Published in International Conference on Networking and Digital Society; DOI: 10.1109/ICNDS.2010.5479459; pp. 504-507 (2010). |
Fan, et al., “Reducing Cognitive Overload by Meta-Learning Assisted Algorithm Selection” Published in 5th IEEE International Conference on Cognitive Informatics; DOI: 10.1109/COGINF.2006.365686; pp. 120-125 (2006). |
Gimel'Farb Part 3: Image Processing, Digital Images and Intensity Histograms; COMPSCI 373 Computer Graphics and Image Processing; University of Auckland, Auckland, NZ; Date unknown; 57 pages. |
Haciomeroglu, “C-thru smoke diving helmet” Jan. 8, 2013; 15 pages; behance.com <http://ww.behance.net/gallery/6579685/C-Thru-Smoke-Diving-Helmet>. |
Haciomeroglu, “C-thru smoke diving helmet” Jan. 8, 2013, 14 pages; coroflot.com <https://www.coroflot.com/OmerHaciomeroglu/C-Thru-smoke-Diving-Helmet>. |
McKinzie, “Fire Engineering: The Future of Artificial Intelligence in Firefighting” Oct. 25, 2018; available at <https://www.fireengineering.com/articles/2018/10/artificial-intelligence-firefighting.html>; 16 pages. |
Reis, et al., “Towards Reducing Cognitive Load and Enhancing Usability Through a Reduced Graphical User Interface for a Dynamic Geometry System: An Experimental Study” Proceedings—2012 IEEE International Symposium on Multimedia, ISM 2012. 445-450. 10.1109/ISM.2012.91; pp. 445-450 (2012). |
Thomsen-Florenus, “Thermal Vision System” Berlin, Germany; Dec. 2017; 7 pages. |
Wu et al., “Contrast-Accumulated Histogram Equalization for Image Enhancement”, IEEE SigPort, 2017. [Online]. Available at <http://sigport.org/1837>. |
Wu, “Feature-based Image Segmentation, Texture Synthesis and Hierarchical Visual Data Approximation” University of Illinois at Urbana-Champaign, Apr. 2006; 61 pages. |
Number | Date | Country | |
---|---|---|---|
20200147418 A1 | May 2020 | US |
Number | Date | Country | |
---|---|---|---|
62758438 | Nov 2018 | US |