The present disclosure generally relates to safety accessories such as helmets, and more particularly to a system and a device for active threat monitoring in a user-worn helmet and for proactively warning the user about threats in both the visible and invisible range.
Safety accessories such as helmets and other protective headgear have evolved over the years. Helmets have the primary function of protecting the head of a person from an injury that may be sustained while engaged in work, sports, and other activities. Presently, helmets are among the most common safety gear worn by cyclists, sports professionals, race car drivers, go-cart drivers, snowmobile drivers, commuters, and others to protect them from injuries that could arise out of an accident such as a collision, a fall, or an attack. It is not uncommon for individuals to wear protective headgear when they are, for example, riding bicycles, riding horses, roller-blading, playing football, playing baseball, playing hockey, skiing and skating, as well as for other general safety purposes.
These protective helmets are effective at protecting the head from sustaining the full impact of a collision. However, helmets may impede the wearer's ability to see (and perhaps even fully hear) potential danger approaching, particularly from behind.
In some solutions, helmet manufacturers have recognized that protective helmets can incorporate other safety features such as two-way and AM/FM radios, turn signals, rear-view mirrors, and other safety devices. Protective helmets with two-way communication systems are generally well known. Some of these well-known systems carry a transmitting unit within the helmet; such a unit is not a complete and self-contained system. Other known units have an external antenna, are not protected from shock, and provide earphones which may completely cover the ear. In other solutions, by integrating safety monitoring features such as navigation and communications into the helmet, the helmet provides an extra level of security in case of emergency.
Most solutions in the market today are reactive in nature. The helmets described in the art are passive, fail to respond to the user's environment, and fail to warn users about threats in both the visible and invisible range.
The present disclosure has been made in view of such considerations, and it is an object of the present disclosure to provide systems and devices for active threat monitoring in a user-worn helmet. The high degree of safety required by all of these users brings the need to create a head-worn product that combines the passive safety of injury protection with the ability to actively protect the user, help avoid a collision, fall, or attack (or minimize its impact and protect vital parts), and proactively warn the user about threats in both the visible and invisible range.
In an aspect, a threat monitoring system is disclosed. The threat monitoring system comprises a controller and an Internet of Things (IoT) cloud server. The IoT cloud server includes an IoT database, wherein the IoT database is configured to generate a virtual data feed. The system further comprises a head mounted wearable device. The head mounted wearable device comprises a plurality of physical sensors. The plurality of physical sensors is configured to sense and generate at least one of a visual feed and a radar feed. The head mounted wearable device further comprises a multimodal communication interface. The system further comprises a user database. The user database is configured to store user profile data. The system further comprises an environment monitoring unit. The environment monitoring unit is configured to map an environment based on at least one of the visual feed, the radar feed, and the virtual data feed. The system further comprises an awareness unit. The awareness unit is configured to generate an environmental context based on a position of a user. The controller is configured to receive the user profile data from the user database, the visual feed and the radar feed from the plurality of physical sensors, the mapped environment from the environment monitoring unit, the environment context from the awareness unit, and the virtual data feed from the IoT database. The controller is further configured to analyse the user profile data, the visual feed and the radar feed, the environment context, and the virtual data feed to predict an occurrence of a threat. The controller is further configured to warn the user about a threat based on the predicted occurrence of the threat via the multimodal communication interface.
In one or more embodiments, the controller, the user database, the environment monitoring unit, and the awareness unit are disposed in the head mounted wearable device.
In another embodiment, the controller, the user database, the environment monitoring unit, the awareness unit, and the multimodal communication interface are operationally configured in a user device.
In one or more embodiments, the plurality of physical sensors comprises at least one of a front camera, a rear camera, side cameras, an aerial camera, a 360° view camera, and a radar.
In one or more embodiments, the front camera is mounted in the head mounted wearable device facing towards the user for monitoring awareness, attention, and glance-based information of the user.
In one or more embodiments, the 360° view camera is configured to capture images for performing depth detection and for sending the captured images to the IoT cloud server.
In one or more embodiments, the virtual data feed at the IoT server is received from IoT devices.
In one or more embodiments, the virtual data feed comprises cloud-based data, wherein the cloud-based data comprises at least one of weather data, historical data, data on amount of light falling on the user's eyes, SOS alert information, environmental data, and data received from other users.
In one or more embodiments, the virtual data feed comprises peer to peer communication data, wherein the peer to peer communication data comprises vehicle to everything (V2X) data received from peer devices.
In one or more embodiments, the controller is further configured to process the threat based on the analysed data and transmits the processed threat data over V2X (vehicle to everything) communication to the peer devices.
In one or more embodiments, the multimodal communication interface is configured to receive multimodal states related to the user for intent detection thereof.
In one or more embodiments, the multimodal states comprise at least one of user's speech, gesture, touch, brainwave, gaze, pupil movement, head movement and body movement.
In one or more embodiments, the multimodal communication interface warns the user with directionality and intensity of the warning via multimodal communication.
In one or more embodiments, the multimodal communication comprises at least one of bone conduction communication, visual and audible communication, augmented reality projection, virtual reality projection, and stationary display.
In one or more embodiments, the controller is further configured to determine a threshold level for warning the user, wherein the user is warned if the predicted occurrence of the threat is greater than the determined threshold level.
In another aspect, a head mounted wearable device in communication with an Internet of Things (IoT) cloud server for threat monitoring is disclosed. The IoT cloud server includes an IoT database. The IoT database is configured to generate a virtual data feed. The device comprises a plurality of physical sensors. The plurality of physical sensors is configured to sense and generate at least one of a visual feed and a radar feed. The device further comprises a user database. The user database is configured to store user profile data. The device further comprises an environment monitoring unit. The environment monitoring unit is configured to map an environment based on at least one of the visual feed, the radar feed, and the virtual data feed. The device further comprises an awareness unit. The awareness unit is configured to generate an environmental context based on a position of a user. The device further comprises a multimodal communication interface. The device further comprises a controller. The controller is configured to receive the user profile data from the user database, the visual feed and the radar feed from the plurality of physical sensors, the mapped environment from the environment monitoring unit, the environment context from the awareness unit, and the virtual data feed from the IoT database. The controller is further configured to analyse the user profile data, the visual feed and the radar feed, the environment context, and the virtual data feed to predict an occurrence of a threat. The controller is further configured to warn the user about a threat based on the predicted occurrence of the threat via the multimodal communication interface.
In one or more embodiments, the physical sensors comprise at least one of a front camera, a rear camera, side cameras, an aerial camera, a 360° view camera, and a radar.
In one or more embodiments, the virtual data feed at the IoT server is received from IoT devices.
In one or more embodiments, the virtual data feed comprises cloud-based data, wherein the cloud-based data comprises at least one of weather data, historical data, data on amount of light falling on the user's eyes, SOS alert information, environmental data, and data received from other users.
In one or more embodiments, the virtual data feed comprises peer to peer communication data, wherein the peer to peer communication data comprises vehicle to everything (V2X) data received from peer devices.
In one or more embodiments, the controller is further configured to process the threat based on the analysed data and transmits the processed threat data over V2X (vehicle to everything) communication to the peer devices.
In one or more embodiments, the multimodal communication interface is configured to receive multimodal states related to the user for intent detection thereof.
In one or more embodiments, the multimodal communication interface warns the user with directionality and intensity of the warning via multimodal communication.
In one or more embodiments, the multimodal communication comprises at least one of bone conduction communication, visual and audible communication, augmented reality projection, virtual reality projection, and stationary display.
In one or more embodiments, the device is used by at least one of a biker, a mining worker, a road worker, and a sportsperson.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
For a more complete understanding of example embodiments of the present disclosure, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure is not limited to these specific details.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
Furthermore, in the following detailed description of the present disclosure, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure.
Embodiments described herein may be discussed in the general context of computer-executable instructions residing on some form of computer-readable storage medium, such as program modules, executed by one or more computers or other devices. By way of example, and not limitation, computer-readable storage media may comprise non-transitory computer-readable storage media and communication media; non-transitory computer-readable media include all computer-readable media except for a transitory, propagating signal. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments.
Some portions of the detailed description that follows are presented and discussed in terms of a process or method. Although steps and sequencing thereof are disclosed in figures herein describing the operations of this method, such steps and sequencing are exemplary. Embodiments are well suited to performing various other steps or variations of the steps recited in the flowchart of the figure herein, and in a sequence other than that depicted and described herein. Some portions of the detailed descriptions that follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those utilizing physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as transactions, bits, values, elements, symbols, characters, samples, pixels, or the like.
In some implementations, any suitable computer usable or computer readable medium (or media) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-usable, or computer-readable, storage medium (including a storage device associated with a computing device) may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a digital versatile disk (DVD), a static random access memory (SRAM), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, a media such as those supporting the internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be a suitable medium upon which the program is stored, scanned, compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of the present disclosure, a computer-usable or computer-readable, storage medium may be any tangible medium that can contain or store a program for use by or in connection with the instruction execution system, apparatus, or device.
In some implementations, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. In some implementations, such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. In some implementations, the computer readable program code may be transmitted using any appropriate medium, including but not limited to the internet, wireline, optical fibre cable, RF, etc. In some implementations, a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
In some implementations, computer program code for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java®, Smalltalk, C++ or the like. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. However, the computer program code for carrying out operations of the present disclosure may also be written in conventional procedural programming languages, such as the “C” programming language, PASCAL, or similar programming languages, as well as in scripting languages such as JavaScript, PERL, or Python. In present implementations, the language used for training may be one of Python, TensorFlow™, Bazel, C, or C++. Further, the decoder in the user device (as will be discussed) may use C, C++ or any processor specific ISA. Furthermore, assembly code inside C/C++ may be utilized for specific operations. Also, ASR (automatic speech recognition) and G2P decoders along with the entire user system can be run in embedded Linux (any distribution), Android, iOS, Windows, or the like, without any limitations. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the internet using an Internet Service Provider). In some implementations, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGAs) or other hardware accelerators, micro-controller units (MCUs), or programmable logic arrays (PLAs) may execute the computer readable program instructions/code by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
In some implementations, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus (systems), methods and computer program products according to various implementations of the present disclosure. Each block in the flowchart and/or block diagrams, and combinations of blocks in the flowchart and/or block diagrams, may represent a module, segment, or portion of code, which comprises one or more executable computer program instructions for implementing the specified logical function(s)/act(s). These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the computer program instructions, which may execute via the processor of the computer or other programmable data processing apparatus, create the ability to implement one or more of the functions/acts specified in the flowchart and/or block diagram block or blocks or combinations thereof. It should be noted that, in some implementations, the functions noted in the block(s) may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
In some implementations, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks or combinations thereof.
In some implementations, the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed (not necessarily in a particular order) on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts (not necessarily in a particular order) specified in the flowchart and/or block diagram block or blocks or combinations thereof.
Referring now to the example implementation of
In some implementations, the instruction sets and subroutines of system 100, which may be stored on a storage device, such as storage device 16, coupled to computer 12, may be executed by one or more processors (not shown) and one or more memory architectures included within computer 12. In some implementations, storage device 16 may include but is not limited to: a hard disk drive; a flash drive; a tape drive; an optical drive; a RAID array (or other array); a random-access memory (RAM); and a read-only memory (ROM).
In some implementations, network 14 may be connected to one or more secondary networks (e.g., network 18), examples of which may include but are not limited to: a local area network; a wide area network; or an intranet, for example.
In some implementations, computer 12 may include a data store, such as a database (e.g., relational database, object-oriented database, triplestore database, etc.) and may be located within any suitable memory location, such as storage device 16 coupled to computer 12. In some implementations, data, metadata, information, etc. described throughout the present disclosure may be stored in the data store. In some implementations, computer 12 may utilize any known database management system such as, but not limited to, DB2, in order to provide multi-user access to one or more databases, such as the above noted relational database. In some implementations, the data store may also be a custom database, such as, for example, a flat file database or an XML database. In some implementations, any other form(s) of a data storage structure and/or organization may also be used.
In some implementations, user device 20 may include, but is not limited to, a personal computer, a laptop computer, a smart/data-enabled cellular phone, a notebook computer, a tablet, a server, a television, a smart television, a media (e.g., video, photo, etc.) capturing device, and a dedicated network device. User device 20 may execute an operating system, examples of which may include but are not limited to, Android®, Apple® iOS®, Mac® OS X®, Red Hat® Linux®, or a custom operating system. In some implementations, user device 20 may be installed with an application (App).
In some implementations, user 22 may access computer 12 and system 100 (e.g., using user device 20) directly through network 14 or through secondary network 18. Further, computer 12 may be connected to network 14 through secondary network 18, as illustrated with phantom link line 44. System 100 may include one or more user interfaces, such as browsers and textual or graphical user interfaces, through which user 22 may access system 100. Herein, users may be cyclists, bike sports professionals, race car drivers, go-cart drivers, snowmobile drivers, commuters, and others.
In some implementations, the various user devices may be directly or indirectly coupled to a communication network, such as communication network 14 and communication network 18, hereinafter simply referred to as network 14 and network 18, respectively. User device 20 is shown wirelessly coupled to network 18 via wireless communication channel 42 established between user device 20 and cellular network/bridge 24, which is shown directly coupled to network 18.
In some implementations, helmet 34 is provided for protection of the head of a biker. Helmet 34 may be connected with computer 12 and system 100 directly through network 14 or through secondary network 18. Helmet 34 is shown wirelessly coupled to network 14 via wireless communication channel 46 established between helmet 34 and wireless access point (i.e., WAP) 26, which is shown directly coupled to network 14. WAP 26 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, Wi-Fi®, RFID, and/or Bluetooth™ (including Bluetooth™ Low Energy) device that is capable of establishing wireless communication channel 46 between helmet 34 and WAP 26. By using this arrangement, multiple bikers can be connected with computer 12 and system 100. Herein, helmet 34 is equipped with multiple sensors (not shown) and cameras (not shown). The sensors and cameras capture data such as surrounding environment data, occurrence of an emergency situation ahead, an accident that has happened ahead on the road, and the biker's fall, accident, crash, or mishap. Helmet 34 transmits the captured data with the current location (for example, using GPS) of the biker to computer 12 and system 100 for further processing.
In some implementations, a roadside worker wears helmet 36 while working on the road. Helmet 36 may be connected with computer 12 and system 100 directly through network 14 or through secondary network 18. Helmet 36 is shown wirelessly coupled to network 14 via wireless communication channel 48 established between helmet 36 and wireless access point (i.e., WAP) 28, which is shown directly coupled to network 14. WAP 28 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, Wi-Fi®, RFID, and/or Bluetooth™ (including Bluetooth™ Low Energy) device that is capable of establishing wireless communication channel 48 between helmet 36 and WAP 28. Herein, helmet 36 is equipped with multiple sensors (not shown) and cameras (not shown). The sensors and cameras capture data such as surrounding environment data and road-construction-in-progress data. Helmet 36 transmits the captured data with the current location (for example, using GPS) of the workers to computer 12 and system 100 for further processing.
In some implementations, a vehicle uses Vehicle to Everything (V2X) communication. V2X is a vehicular communication system that supports the transfer of information from a vehicle to moving parts (any entity) of the traffic system that may affect the vehicle or may be affected by the vehicle. V2X serves to organise communication and interaction between vehicle to vehicle (V2V), vehicle to infrastructure (V2I), vehicle to pedestrians (V2P), and vehicle to networks (V2N). The main purpose of V2X technology is to improve road safety, energy savings, and traffic efficiency on the roads. For example, car 38 may be connected with computer 12 and system 100 directly through network 14 or through secondary network 18. In some implementations, car 38 may be connected with a cloud server. Car 38 is shown wirelessly coupled to network 14 via wireless communication channel 50 established between car 38 and wireless access point (i.e., WAP) 30, which is shown directly coupled to network 14. WAP 30 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, Wi-Fi®, RFID, and/or Bluetooth™ (including Bluetooth™ Low Energy) device that is capable of establishing wireless communication channel 50 between car 38 and WAP 30. Herein, car 38 is equipped with multiple sensors (not shown) and cameras (not shown). The sensors and cameras capture data such as surrounding environment data, road condition (for example, potholes), traffic information (such as traffic jam information and traffic regulation information), road safety information, etc. Car 38 transmits the captured data with the current location (for example, using GPS) of the car to computer 12 and system 100 for further processing.
In some implementations, traffic signal apparatus 40 is shown wirelessly coupled to network 14 via wireless communication channel 52 established between traffic signal apparatus 40 and wireless access point (i.e., WAP) 32, which is shown directly coupled to network 14. WAP 32 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, Wi-Fi®, RFID, and/or Bluetooth™ (including Bluetooth™ Low Energy) device that is capable of establishing wireless communication channel 52 between traffic signal apparatus 40 and WAP 32. Traffic signal apparatus 40 is equipped with a transmitter (not shown). The transmitter broadcasts, on a regular basis, traffic signal data identifying its location (e.g., by broadcasting GPS coordinates associated with its location). In addition, traffic signal apparatus 40 transmits traffic signal data identifying present and future traffic signal sequences, e.g., current information regarding the status of the light (red, green, or yellow), timing data related to its cycle, and any schedule information regarding future cycles (e.g., if at a particular time of day the timing of the signal cycle changes due to changing traffic conditions, this information is also transmitted). In some implementations, traffic signal apparatus 40 is equipped with multiple sensors (not shown) and surveillance cameras (not shown).
In some implementations, the surveillance cameras of the traffic signal may be a single panoramic camera or a plurality of cameras for outgoing lanes, each camera capturing traffic flow images in the intersection and on the corresponding outgoing lane before a green light is on, and sending the traffic flow images to computer 12 and system 100 for processing. In some implementations, road sensors (not shown) of traffic signal apparatus 40 are used when recording of accurate vehicle speed is required in addition to traffic light violations. Traffic signal apparatus 40 transmits the recorded data with the current location (for example, using GPS) of traffic signal apparatus 40 to computer 12 and system 100 for further processing.
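By way of illustration only and not by way of limitation, the following is a minimal Python sketch of the kind of record that traffic signal apparatus 40 might broadcast on a regular basis (location, current phase, cycle timing, and scheduled future cycles). The field names, values, and serialization format are assumptions made purely for illustration and are not mandated by the present disclosure.

```python
from dataclasses import dataclass, field
import json
import time

@dataclass
class TrafficSignalBroadcast:
    """Hypothetical payload periodically broadcast by a traffic signal apparatus."""
    signal_id: str                 # identifier of the traffic signal apparatus
    latitude: float                # GPS coordinates of the apparatus
    longitude: float
    current_phase: str             # "red", "yellow", or "green"
    seconds_to_next_phase: float   # timing data for the current cycle
    future_cycles: list = field(default_factory=list)   # scheduled cycle changes
    timestamp: float = field(default_factory=time.time)

    def to_message(self) -> bytes:
        """Serialize the record for transmission via the WAP."""
        return json.dumps(self.__dict__).encode("utf-8")

# Example broadcast: a signal currently green, switching phase in 12 seconds
msg = TrafficSignalBroadcast(
    signal_id="TS-40", latitude=37.7749, longitude=-122.4194,
    current_phase="green", seconds_to_next_phase=12.0,
    future_cycles=[{"at": "17:00", "cycle_seconds": 90}],
).to_message()
```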
In some implementations, some or all of the IEEE 802.11x specifications may use Ethernet protocol and carrier sense multiple access with collision avoidance (i.e., CSMA/CA) for path sharing. The various 802.11x specifications may use phase-shift keying (i.e., PSK) modulation or complementary code keying (i.e., CCK) modulation, for example. Bluetooth™ (including Bluetooth™ Low Energy) is a telecommunications industry specification that allows, e.g., mobile phones, computers, smart phones, and other electronic devices to be interconnected using a short-range wireless connection. Other forms of interconnection (e.g., Near Field Communication (NFC)) may also be used.
Referring now to
As illustrated in
Further in some examples, as illustrated in
Referring back to
The IoT database 208 is configured to generate the virtual data feed based on the data received from various IoT devices. Herein, the IoT devices may be smart mobiles, smart refrigerators, smartwatches, smart fire alarms, smart door locks, smart bicycles, sensors, fitness trackers, smart security systems, actuators, gadgets, appliances, or machines, without any limitations. In one implementation, the virtual data feed is received from a plurality of virtual sensors or a plurality of remote sensors disposed on the IoT devices or coupled with the IoT devices. A virtual sensor is also known as a synthetic sensor. The terms "virtual sensor" and "synthetic sensor" are used interchangeably hereinafter without any limitations. In a non-limiting example, the synthetic sensors provide the virtual data feed to the IoT database 208. The synthetic sensor's data is a virtual sensor input from different devices (IoT devices) or systems that is analogous to the physical inputs of the physical sensors. For example, a rain sensor disposed in a remote device transmits data related to rain to the cloud server, which serves as a synthetic sensor input from the remote device. Herein, the rain sensor acts as a virtual sensor for the system 100.
The virtual data feed comprises cloud-based data. The cloud-based data are stored at the IoT database 208. Herein, the cloud-based data comprises at least one of weather data, historical data, data on amount of light falling on the user's eyes, SOS alert information, environmental data, and data received from other users.
The virtual data feed further comprises peer to peer communication data. The peer to peer communication data is stored at the IoT database 208. The peer to peer communication data is vehicle to everything (V2X) data received from peer devices. Herein, the peer devices may be a vehicle, a network, infrastructure, or a pedestrian (who is equipped with any communication device, helmet, or V2P-enabled device, and the like). Herein, the peer to peer communication data includes vehicle-to-network communication, vehicle-to-vehicle communication, vehicle-to-pedestrian communication, and vehicle-to-infrastructure communication. This peer to peer communication data is transmitted to the IoT database 208 and is later used as the virtual data feed.
The synthetic sensor fuses the received data to generate the virtual data feed to be stored in the IoT database 208. In the present disclosure, the virtual data feed could be cloud data, mathematically computed data, map data, weather data, traffic data, spatial mapping of enemies when a soldier is using the system in a war field, a police officer detected by the helmet of a rider in front, any crash information detected by the camera of a rider in front, and the like. For example, a bike rider could receive a feed of the road condition from a rider who is 1 km ahead, for which the system 100 may generate the virtual data feed. In a non-limiting example, the cloud-based data may be general environmental data, user data from other systems, sources, or platforms, data from other wearable devices or mobile units, and data from a vehicle, sports equipment, tool, etc.
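By way of illustration only and not by way of limitation, the following is a minimal Python sketch of how the IoT database 208 might aggregate reports from remote (virtual/synthetic) sensors, such as a rain sensor on a remote device or a road condition report from a rider ahead, into a single virtual data feed. The class name, fields, and age-based filtering are assumptions made purely for illustration.

```python
from typing import Any, Dict, List
import time

class SyntheticSensorFeed:
    """Hypothetical aggregator that fuses remote IoT sensor reports into one virtual data feed."""

    def __init__(self) -> None:
        self._reports: List[Dict[str, Any]] = []

    def ingest(self, source_id: str, kind: str, payload: Dict[str, Any]) -> None:
        """Store a report from a remote (virtual) sensor, e.g. a rain sensor or a peer helmet."""
        self._reports.append({
            "source": source_id,
            "kind": kind,            # e.g. "rain", "road_condition", "v2x"
            "payload": payload,
            "received_at": time.time(),
        })

    def virtual_data_feed(self, max_age_s: float = 300.0) -> List[Dict[str, Any]]:
        """Return only recent reports, fused into the feed consumed by the controller."""
        now = time.time()
        return [r for r in self._reports if now - r["received_at"] <= max_age_s]

# Example: rain reported by a remote device, plus a pothole reported by a rider 1 km ahead
feed = SyntheticSensorFeed()
feed.ingest("remote-device-7", "rain", {"intensity": "heavy"})
feed.ingest("peer-helmet-12", "road_condition", {"hazard": "pothole", "distance_m": 1000})
print(feed.virtual_data_feed())
```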
Referring to
As illustrated in
As illustrated in
Further, as illustrated in
Further, as illustrated in
Referring back to
Further, the helmet 240 includes a communication module 244. The communication module 244 may be utilized as a transceiver. Herein, a transceiver is a combination of a transmitter and a receiver in a single package. The transceiver 244 can both transmit and receive radio waves using an antenna for communication purposes. An integrated-circuit type transceiver 244 is built into the helmet 240. The communication module 244 is coupled with the control circuit 242.
As illustrated in
As illustrated in
In the present disclosure, the cameras 252, 254, 256, 258 are disposed on the helmet 240. By way of example and not by way of limitation, these cameras may be built-in cameras of the helmet or cameras attached to the helmet through any means (for example, connected through slots provided in the helmet). In general, the cameras 252, 254, 256, 258 are involved in capturing images and recording videos. For instance, the front camera 252 is mounted on the helmet 240 so as to face towards the wearer, as illustrated in the
The rear camera 254 is mounted on the helmet 240. The rear mounted camera 254 is aimed at the road behind the biker. The rear camera 254 provides a constant rear view image which is independent of the rider's head and body position. In one example, the rear camera 254 captures an image of a vehicle which is approaching the biker. The captured images can be used for further processing, which is described later.
As illustrated in
As illustrated in
In one purpose, the 360° view camera 258 is used to generate a 3D field view. The 360° view camera 258 is further configured to capture images for performing depth detection and for sending the captured images to the database 208. This happens in the helmet 240, with extended processing in the server 202. The depth calculation and object detection may be performed in the helmet 240. Further, the 360° view camera 258 is used for advanced processing such as granular object classification (e.g., to track another cyclist in the vicinity). The 360° view camera 258 is also configured to detect what could be an interesting scene to automatically capture, and the helmet 240 further transmits the interesting scene to the database 208. For example, a scenic image of a waterfall may be auto-detected based on monitoring the user's glance and the user slowing down. By way of example and not by way of limitation, the 360° view camera 258 may be a built-in 360-degree camera of the helmet 240, or the 360° view camera 258 may be attached to the helmet 240 through any means (for example, connected through a slot provided in the helmet).
Further, the helmet 240 includes a radar 260. In general, radar (radio detection and ranging) is a detection system that uses radio waves to determine the distance (ranging), angle, and radial velocity of objects relative to the site. In the present disclosure, the 360° view camera 258 is coupled with the radar 260. By using the image captured by the 360° view camera 258 and the radio waves of the radar 260, the helmet 240 can sense depth and also contentions in the field. In one example, an image of a pothole is captured by the 360° view camera 258. Thereafter, by using the captured image of the 360° view camera 258 and the radio waves of the radar 260, depth of the pothole can be sensed. In one implementation, the helmet 240 is equipped with a laser imaging, detection and ranging (LIDAR) (not shown). In general, the helmet-mounted LIDAR system may be used for diving in environments such as sewage treatment plant tanks, chemical plant waste and sludge pits and cooling tower sump basins, harbors, rivers, dams and lakes to complete tasks such as pier and dock inspections and repairs, underwater cutting and welding, ship inspections, repairs and maintenance, pipeline inspections and repairs, and major underwater construction projects.
Further, the helmet 240 includes a speaker 262. The speaker 262 is coupled with the control circuit 242 of the helmet 240. The speaker 262 is positioned and oriented so as to provide an audio signal to the helmet wearer without blocking surrounding sound and without affecting the safety aspects of the helmet 240. The audio signal may be received through the communication module 244. The audio signals may be stored at and/or emanate from the cloud server 202. The audio signals may also stream through a portable sound-signal-producing device. The portable sound-signal-producing device may be a portable media player or digital audio player. The portable media player and the digital audio player are consumer electronics devices capable of storing and playing digital media including audio, images, video, and/or documents. The digital media is stored on a hard drive, microdrive, and/or flash memory.
Further, the helmet 240 includes a microphone 246. Herein, the helmet 240 is not limited to a single microphone; the helmet 240 may have a plurality of microphones. The microphone 246 is coupled with the control circuit 242 of the helmet 240. In an embodiment of the present disclosure, the microphone 246 can be attached to any suitable position on or inside of the body of the helmet 240. Preferably, the microphone 246 may be placed in the helmet 240 proximate the rider's mouth. The microphone 246 may be configured to receive voice commands from the rider. The microphone 246 may also be included to convert voice commands into electrical signals that may be used to operate the main functionality within the helmet 240, or to communicate with emergency services, telephone services, or other Bluetooth-connected helmets. In one example, the microphone 246 may allow recording the voice commands or voice inputs of the rider while riding the bike. By way of example and not by way of limitation, the microphone 246 may be connected with a mobile phone (not shown) wirelessly or through a wired connection. In one implementation, the microphone 246 may be a built-in microphone of the helmet 240. In another implementation, the microphone 246 may be a plug-and-play type microphone which can be attached to the helmet 240 through a slot (for example, a USB slot provided in the helmet for plugging the microphone therewith).
Further, the helmet 240 includes a display device 264. The display device 264 may be any device capable of displaying visual information in response to a signal from the server 202. In a non-limiting example, the display device 264 may be integrated with various head-supported structures including goggles, eyewear, headbands, or any head-supported structure which supports and positions a display such that the visual information is projected into the rider's eyes. The display device 264 may be located inside or outside the helmet 240. For example, the display device 264 could be installed within the interior or exterior region of the helmet 240. In an embodiment of the present disclosure, the display device 264 may be a visor mounted lighting and display system which can be used to provide a visual cue of the intensity by using the colour and brightness of the visor mounted lighting and display system.
Further, the helmet 240 includes a memory device (not shown). The memory device is associated with the computing device of the helmet and may store music, save user-level settings, system-level caches, and BIT errors, and store temporary data locally. In one implementation, the memory device locally stores the images/videos captured by the cameras and, thereafter, the images/videos are transmitted to the database 208. The memory device may be a built-in memory device of the helmet 240 or a removable memory device. In a non-limiting example, the memory device may be an SD card (Secure Digital card).
Further, the helmet 240 includes a multimodal communication interface 266. In general, the multimodal communication interface 266 processes two or more combined user input modes or inputs related to the user states, such as speech, pen, touch, manual gestures, gaze, and head and body movements, in a coordinated manner with multimedia system output. In embodiments of the present disclosure, the multimodal communication interface 266 is configured to receive multimodal states/inputs related to the user for intent detection thereof. Herein, the multimodal states include at least one of the user's speech, gesture, touch, brainwave, gaze, pupil movement, head movement, and body movement. Also, the multimodal states may include, for example, but are not limited to, intent detection of the wearer, the wearer's stress load, and glance detection. The multimodal communication interface is configured to warn the user with directionality and intensity of the warning via multimodal communication. As an example of the directionality of the warning, if the threat is approaching from the rear left, this position is communicated to the user by effecting electrical signals to the back-left portion of the helmet. In addition, the helmet 240 provides a visual cue of the intensity by using the colour and brightness of the visor mounted lighting and display system 264. As an example of the intensity of the warning, in case the user ignores a warning delivered at low intensity, a higher warning intensity is used for warning the user. Herein, the multimodal communication may be, for example, but is not limited to, bone conduction communication via a bone conduction device 268, visual and audible communication via the display 264, augmented reality projection, virtual reality projection, stationary display, light, sound, electric pulses, etc.
In particular, the multimodal communication interface 266 is configured to receive multimodal states related to the user in conjunction with the front camera 252. The front camera 252 is configured to capture the user's speech (which is captured with the help of the microphone 246 of the
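By way of illustration only and not by way of limitation, the following is a minimal Python sketch of one possible way the multimodal communication interface 266 could map a threat's direction and severity to a helmet zone, a warning intensity, and a set of output channels (for example, bone conduction audio, the visor display, or electric pulses). The zone layout, thresholds, and escalation rule are assumptions made purely for illustration.

```python
def plan_warning(bearing_deg: float, severity: float, user_ignored_last: bool = False) -> dict:
    """
    Hypothetical mapping from threat bearing (0 = straight ahead, degrees clockwise)
    and severity (0..1) to a multimodal warning plan.
    """
    # Directionality: pick the helmet zone nearest to the threat bearing (60 degree sectors)
    zones = ["front", "front-right", "rear-right", "rear", "rear-left", "front-left"]
    zone = zones[int(((bearing_deg % 360) + 30) // 60) % 6]

    # Intensity: start low; escalate if severe or if the user ignored the previous warning
    if severity > 0.8 or user_ignored_last:
        intensity = "high"
        channels = ["bone_conduction_audio", "visor_display", "electric_pulse"]
    elif severity > 0.4:
        intensity = "medium"
        channels = ["bone_conduction_audio", "visor_display"]
    else:
        intensity = "low"
        channels = ["visor_display"]

    return {"zone": zone, "intensity": intensity, "channels": channels}

# Example: a threat approaching from the rear left with high severity
print(plan_warning(bearing_deg=225, severity=0.9))
```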
As illustrated in
As illustrated in
As illustrated in
As illustrated, the system 100 further includes an interesting event detection unit 214 for capturing interesting moments data from sport or work activity. The interesting event detection unit 214 may be coupled with the controller 204, the user database 206, the IoT database 208, the environment monitoring unit 210, and the awareness unit 212. The interesting event detection unit 214 receives the captured video, photos, and sensor data to capture the interesting moments. The interesting event detection unit 214 transmits the captured interesting moments data to the IoT database 208 for storing therein. The interesting moments data stored at the IoT database 208 is further utilized for other purposes, for example, the interesting moments data may be shared with friends, colleagues, social media and the like.
Referring to
As illustrated in
The controller 204 is further configured to analyse the user profile data, the visual feed and the radar feed, the environment context, and the virtual data feed. Herein, the controller 204 is in communication with the IoT server 202 to implement the artificial intelligence (AI) for processing and analysing the user profile data, the visual feed, and the radar feed, the environment context, and the virtual data feed.
The controller 204 is further configured to predict the occurrence of a threat based on the analysed data (for computing/processing the threat). The artificial intelligence/neural network is involved in predicting the occurrence of the threat. Herein, in a non-limiting example, a threat may be a fall, accident, crash, or mishap. Herein, the threat may be anticipated head-on or from the rear, from other objects or persons, in the visible and invisible range, etc. The threat prediction or threat detection is real time and time sensitive. For example, for a rider approaching a pothole, the warning has to come seconds before the rider goes down that trajectory. In another example, a golf ball is about to hit a person. The threat detection has to happen in near real time.
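By way of illustration only and not by way of limitation, the following is a minimal Python sketch of a simplified form of this analysis step, in which the user profile data, visual and radar feeds, environment context, and virtual data feed are fused into a single predicted threat score. The feature names, weights, and hand-tuned formula are assumptions made purely for illustration; in the disclosed system a trained neural network performs this prediction.

```python
from typing import Dict

def predict_threat_probability(user_profile: Dict, visual: Dict, radar: Dict,
                               environment: Dict, virtual_feed: Dict) -> float:
    """
    Hypothetical fused threat score in [0, 1]. A trained neural network would normally
    replace this hand-weighted combination; the inputs mirror the feeds the controller receives.
    """
    # Time-to-collision from radar (distance / closing speed); small values mean an imminent threat
    closing_speed = max(radar.get("closing_speed_mps", 0.0), 0.01)
    ttc = radar.get("distance_m", 1e6) / closing_speed
    ttc_risk = 1.0 if ttc < 2.0 else max(0.0, 1.0 - (ttc - 2.0) / 8.0)

    hazard_risk = 1.0 if visual.get("hazard_detected") else 0.0          # e.g. pothole in view
    remote_risk = 1.0 if virtual_feed.get("hazard_reported") else 0.0    # e.g. crash reported ahead
    env_risk = 0.3 if environment.get("low_visibility") else 0.0
    experience = 0.1 if user_profile.get("novice") else 0.0

    score = 0.45 * ttc_risk + 0.25 * hazard_risk + 0.15 * remote_risk + env_risk + experience
    return min(score, 1.0)

# Example: a car closing fast from behind while a hazard is reported ahead
p = predict_threat_probability(
    user_profile={"novice": True},
    visual={"hazard_detected": False},
    radar={"distance_m": 40.0, "closing_speed_mps": 12.0},
    environment={"low_visibility": True},
    virtual_feed={"hazard_reported": True},
)
print(f"predicted threat probability: {p:.2f}")
```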
Referring to
In one or more embodiments, the controller 204 is further configured to process the threat based on the analysed data and transmits the processed threat data over V2X (vehicle to everything) communication to the peer devices. In one example, if utility workers wear the helmet while they are working on the road, the cars around them get alerted about these users in the field even before the users come into the visible range of the cars. In another example, a bike rider wears the helmet while riding the bike. A car accident or car crash occurs on the road in front of the rider and is captured by the physical sensors of the helmet 240. In this case, the helmet 240 broadcasts the alert about the car accident or car crash to the peer devices.
In one or more embodiments, the controller 204 is further configured to determine a threshold level for warning the user, wherein the user is warned if the predicted occurrence of the threat is greater than the determined threshold level. Herein, the controller 204 implements the artificial intelligence for determining threshold level for warning the user. The determined threshold level is compared with the predicted occurrence of the threat. In particular, when the predicted occurrence of the threat is greater than the determined threshold level, the controller 204 warns the user via the helmet 240.
The controller 204 is also configured to perform fusion of sensor data from the camera, the radar, and vehicle to everything data to detect the significance of the threats. Further, the controller 204 is configured to implement machine learning models to understand what qualifies for warning a particular user. The warning levels vary considerably between users, the environment they are in, and the type of activity they are involved in. The controller 204 may identify these criteria (warning levels) and warn the user only about the threats that matter for the particular user. In a specific example for determining the threshold level, the user (a biker) is riding a bike on the road. On the same road, other cyclists are also riding, at a distance of about 5 feet behind the biker. In this case, the other cyclists are not a threat to the biker. Herein, the predicted occurrence of the threat is less than the determined threshold level and, based on the prediction, the controller 204 ignores the threat since the warning level is considerably low. However, consider the case of a car: the user (the biker) is riding a bike on the road and, on the same road, a car is approaching at a higher speed from behind. When the car comes into a 100 meter range of the user, the car is a threat for the user. Herein, the predicted occurrence of the threat is greater than the determined threshold level and, based on the prediction, the controller 204 sends a warning to the user via the helmet 240.
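By way of illustration only and not by way of limitation, the following is a minimal Python sketch of the threshold comparison described above, applied to the two example scenarios (cyclists about 5 feet behind the biker versus a car closing within 100 meters). The threshold value and the scenario scores are assumptions made purely for illustration.

```python
def should_warn(predicted_threat: float, threshold: float) -> bool:
    """Warn only when the predicted occurrence of the threat exceeds the per-user threshold."""
    return predicted_threat > threshold

# Hypothetical per-user threshold learned from the user's activity and environment
user_threshold = 0.6

# Scenario 1: other cyclists about 5 feet behind the biker -> low predicted threat, ignored
cyclists_behind = 0.2
print("cyclists behind:", "warn" if should_warn(cyclists_behind, user_threshold) else "ignore")

# Scenario 2: a car closing at high speed within 100 meters -> high predicted threat, warn
car_closing = 0.9
print("car closing:", "warn" if should_warn(car_closing, user_threshold) else "ignore")
```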
In one implementation, the system 100 involves augmenting predictions based on cloud-based processing, which helps in threat monitoring. The helmet 240 has limited compute and storage capabilities. Some portions of the data captured by the helmet 240 will be moved to the server 202 to perform threat detection. For example, suppose there is road signage about a potential hazard. The 360° view camera captures an image of the road signage. Herein, the captured image may be in the form of raw data, specifically an image of the road signage with the steepness (sharp slope) indicated thereon. Thereafter, the helmet 240 transmits the captured image of the road signage with the steepness (sharp slope) to the server 202. The server 202 calculates the steepness value based on the image received from the helmet 240. The server 202 detects that the steepness value is greater than 15 degrees. The steepness value is further used to calibrate the warnings to be provided to the user. For example, in a gradient condition, the risk of a tailgater crashing into the cyclist is much higher.
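By way of illustration only and not by way of limitation, the following is a minimal Python sketch of the server-side step in which a steepness value derived from the uploaded road signage image is used to calibrate the warning level. The extraction function is a placeholder (a real deployment would run image recognition or OCR on the raw image), and the 15 degree threshold follows the example above.

```python
def extract_steepness_degrees(signage_image: bytes) -> float:
    """
    Placeholder for server-side analysis of a road signage image uploaded by the helmet.
    A real implementation would run image recognition / OCR on the raw image data.
    """
    return 18.0  # assumed result for illustration

def calibrate_warning(signage_image: bytes, steep_threshold_deg: float = 15.0) -> dict:
    """Raise the warning level on steep gradients, where the tailgater crash risk is higher."""
    steepness = extract_steepness_degrees(signage_image)
    if steepness > steep_threshold_deg:
        return {"steepness_deg": steepness, "warning_level": "elevated",
                "reason": "gradient above threshold; tailgater crash risk is higher"}
    return {"steepness_deg": steepness, "warning_level": "normal"}

# Example: the server processes the uploaded signage image and returns calibration data
print(calibrate_warning(b"raw-image-bytes"))
```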
Further, the system 100 involves augmenting edge-based capability with cloud-based intelligence in threat monitoring. It is possible that the same detection logic explained above runs on MEC/Network EDGE, which is located within a kilometre radius, rather than going to a cloud server. This decreases the latency in detection. For example, raw images of the road are used to detect potholes in the EDGE. This is used to assess the threat of a rider running into the pothole and falling.
In one or more embodiments, the controller 204, the user database 206, the environment monitoring unit 210, and the awareness unit 212 are disposed in the head mounted wearable device 240, as illustrated in
In one or more embodiments, the controller 204, the user database 206, the environment monitoring unit 210, the awareness unit 212, and the multimodal communication interface 266 are operationally configured on a user device 280, as illustrated in
In another aspect of the present disclosure and in the preferred embodiment, the threat monitoring system 100 is implemented at the helmet 240, as illustrated in
The helmet 240 includes the plurality of physical sensors. Herein, the plurality of physical sensors is configured to sense and generate at least one of visual feed and radar feed. Herein, the plurality of sensors, visual feed, and radar feed are already explained in the above paragraphs.
As illustrated in
Referring to
As illustrated in
Further, the helmet 240 includes a controller 242. Referring to
The controller 242 is further configured to analyse the user profile data, the visual feed and the radar feed, the environment context, and the virtual data feed. The controller 242 is configured to implement the artificial intelligence (AI) for processing and analysing the user profile data, the visual feed and the radar feed, the environment context, and the virtual data feed. In one embodiment, the controller 242 is in communication with the server 202 to implement the artificial intelligence (AI) at the server 202 for processing and analysing the user profile data, the visual feed and the radar feed, the environment context, and the virtual data feed.
The controller 242 is further configured to predict the occurrence of a threat based on the analysed data (for computing/processing the threat). The artificial intelligence/neural network is involved in predicting the occurrence of the threat. Herein, in a non-limiting example, a threat may be a fall, accident, crash, or mishap. Herein, the threat may be anticipated head-on or from the rear, from other objects or persons, in the visible and invisible range, etc. The threat prediction or threat detection is real time and time sensitive.
Referring to
In one or more embodiments, the controller 242 is further configured to process the threat based on the analysed data and transmits the processed threat data over V2X (vehicle to everything) communication to the peer devices. In one example, if utility workers wear the helmet while they are working on the road, the cars around them get alerted about these users in the field even before the users come into the visible range of the cars. In another example, a bike rider wears the helmet while riding the bike, and a car accident or car crash occurs on the road in front of the rider and is captured by the physical sensors of the helmet 240; in this case, the helmet 240 broadcasts the alert about the car accident or car crash to the peer devices.
In one or more embodiments, the controller 242 is further configured to determine a threshold level for warning the user, wherein the user is warned if the predicted occurrence of the threat is greater than the determined threshold level. Herein, the controller 242 implements the artificial intelligence for determining threshold level for warning the user. The determined threshold level is compared with the predicted occurrence of the threat. In particular, when the predicted occurrence of the threat is greater than the determined threshold level, the controller 242 warns the user.
The controller 242 is also configured to perform fusion of sensor data from the cameras, the radar, and vehicle to everything data in order to detect the significance of the threats. Further, the controller 242 is configured to implement machine learning models to understand what qualifies for warning a particular user. The warning levels vary considerably between users, the environment they are in, and the type of activity they are involved in. The controller 242 may identify these criteria (warning levels) and warn the user only about the threats that matter for the particular user. If the predicted occurrence of the threat is less than the determined threshold level, the controller 242 ignores the threat since the warning level is considerably low. If the predicted occurrence of the threat is greater than the determined threshold level, the controller 242 sends a warning to the user.
In one embodiment, the controller 204, the user database 206, the environment monitoring unit 210, the awareness unit 212, and the interesting event detection unit 214 are disposed in the server 202.
In one or more embodiments, the plurality of physical sensors comprises at least one of a front camera 252, a rear camera 254, side cameras 256, an aerial camera (not shown), a 360° view camera 258, and a radar 260. Further, the helmet 240 uses additional sensors such as LIDAR along with the plurality of physical sensors mounted thereon to perform threat mapping.
In one or more embodiments, the virtual data feed at the IoT server 202 is received from IoT devices.
In one or more embodiments, the virtual data feed comprises cloud-based data, wherein the cloud-based data comprises at least one of weather data, historical data, data on amount of light falling on the user's eyes, SOS alert information, environmental data, and data received from other users.
In one or more embodiments, the virtual data feed comprises peer to peer communication data, wherein the peer to peer communication data comprises vehicle to everything (V2X) data received from peer devices.
In one or more embodiments, the controller 242 is further configured to process the threat based on the analysed data and transmit the processed threat data over V2X (vehicle to everything) communication to the peer devices.
In one or more embodiments, the multimodal communication interface 266 is configured to receive multimodal states related to the user for intent detection thereof.
In one or more embodiments, the multimodal communication interface 266 warns the user with directionality and intensity of the warning via multimodal communication.
In one or more embodiments, the multimodal communication comprises at least one of bone conduction communication, visual and audible communication, augmented reality projection, virtual reality projection, and stationary display.
In one or more embodiments, the device 240 is used by at least one of a biker, a mining worker, a road worker, and a sportsperson.
In another aspect of the present disclosure, the controller 242, the user database 206, the environment monitoring unit 210, the awareness unit 212, and the multimodal communication interface 266 are operationally configured on the user device 280. Herein, the user device 280 may be a mobile phone, a personal digital assistant, a tablet, a laptop, or a computer.
The processing logic of the helmet 240 may be shifted to the user device 280. In particular, the processing logic of the helmet 240 could run on the mobile phone 280. The physical sensor data from the helmet 240 is shared with the mobile phone 280 via WiFi or cellular 220, the processing logic runs on the mobile phone 280, and the results are sent back to the helmet 240 to broadcast an alert/warning or to take any other precaution.
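A minimal sketch of this offloading is shown below, assuming an HTTP endpoint on the phone as the WiFi transport; the address, payload fields, and response format are assumptions and not part of the disclosure.

import json
from urllib import request

PHONE_URL = "http://192.168.4.2:8080/process"  # hypothetical address of the phone

def offload_frame(sensor_summary: dict) -> dict:
    # Post a compact sensor summary to the phone and read back the warning decision.
    req = request.Request(
        PHONE_URL,
        data=json.dumps(sensor_summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=0.2) as resp:  # tight timeout: warnings are time sensitive
        return json.load(resp)

# Example usage once the phone-side service is running:
# result = offload_frame({"nearest_object_m": 20.0, "closing_speed_mps": 12.0})
# if result.get("action") == "warn_user":
#     ...trigger the multimodal warning on the helmet...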
In one implementation, the rider may use the display of the user device 280 instead of the dedicated display provided on the helmet 240. The user device 280 may provide a visual cue of the warning intensity on its display.
In one or more implementations, instead of an onboard V2X receiver of the helmet 240, data could come from an external V2X receiver (not shown). The external V2X receiver may be installed on the roadside and communicates with the helmet 240 for transmitting the V2X data. For example, the helmet may connect to a mobile phone via WiFi, BT, or cellular and receive the live sensor data feed from the V2X receiver through the mobile phone.
Referring now to the flowchart of the threat monitoring method, the following steps describe its operation.
At step 516, the predicted threat level is compared with the threshold level to check whether the predicted threat level is greater than the threshold level. If not, at step 518, the predicted threat occurrence data is used to create event statistics (for example, biometric data, stress load, physical exertion). If yes, at step 520, the user or the rider (the helmet-worn user or rider) is warned. At step 522, the warning may be a multisensory and positional (direction-sensitive) warning. In particular, the multimodal communication interface warns the user with directionality and intensity via at least one of bone conduction communication, visual and audible communication, augmented reality projection, virtual reality projection, and stationary display. At step 524, the feedback received from the user in response to the warning is analysed. At step 526, the analysed feedback data is updated to the IoT server. At step 528, additional threat monitoring is performed on the IoT server. At step 530, the analysed feedback data is used to improve the threat monitoring model. At step 532, if an accident occurs, an alert is broadcast to all peer devices (IoT devices) over V2X communication. At step 534, the SOS or emergency contact of the user is called. At step 536, all the vitals data and accident scene information captured by the helmet is transmitted.
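For readability, the branch of steps 516 to 536 can be sketched as follows; every helper function named here is a placeholder for the corresponding step and is not defined by the disclosure.

# Stubs standing in for the individual steps of the flow.
def record_event_statistics(score): print(f"step 518: log statistics, score={score}")
def warn_user_multimodal(score):    print(f"steps 520/522: directional multisensory warning, score={score}")
def collect_user_feedback():        print("step 524: analyse user feedback"); return {"dismissed": False}
def update_iot_server(feedback):    print("steps 526/528: update IoT server, server-side monitoring")
def improve_threat_model(feedback): print("step 530: refine threat monitoring model")
def broadcast_v2x_alert():          print("step 532: broadcast alert to peer devices over V2X")
def call_emergency_contact():       print("step 534: call SOS/emergency contact")
def transmit_vitals_and_scene():    print("step 536: transmit vitals and accident scene data")

def monitoring_cycle(predicted: float, threshold: float, accident: bool) -> None:
    if predicted <= threshold:              # step 516
        record_event_statistics(predicted)  # step 518
        return
    warn_user_multimodal(predicted)
    feedback = collect_user_feedback()
    update_iot_server(feedback)
    improve_threat_model(feedback)
    if accident:
        broadcast_v2x_alert()
        call_emergency_contact()
        transmit_vitals_and_scene()

monitoring_cycle(0.9, 0.5, accident=False)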
The system 100 and the device 240 as described in the embodiments of the present disclosure allow for active threat monitoring, which monitors threats and provides avoidance warnings/alerts to the user. The primary usefulness of the system and the device is to detect a threat before it materializes so that accidents are avoided. The system 100 works with a combination of sensors, that is, physical sensors, synthetic (virtual) sensors (which provide the virtual data feed), cloud-based data, and peer to peer device data. The system generally corresponds to the device 240 (that is, the helmet 240) that is worn by the user while commuting, travelling, riding, working, or doing any sport activity, and it offers proactive safety warnings and, importantly, avoidance solutions for a fall, accident, crash, or mishap. In case of such an emergency event, the system warns the user and protects the user by minimizing the impact or the injury. The system also allows an external authorized user to monitor the threat conditions and to take mitigation actions. The system 100 also enables the creation of a digital twin of the user whereby the status of the user is replicated for other appliances to utilize; for example, the rate of fall and the distance to others are continuously replicated in the cloud server 202. In one implementation, when the system 100 or the device 240 is used for a sport application, the system 100 or the device 240 tracks the performance of the user. In another implementation, when the system 100 or the device 240 is used by safety workers, the system or the device tracks alertness, stress, and the like. In another implementation, the device 240 may be used for defence applications. Specifically, the device 240 may be used to enable threat monitoring for soldiers.
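As an illustration of the digital twin described above, the helmet could periodically publish a snapshot of the user's state for the cloud server 202 to replicate; the field names and the JSON payload below are assumptions.

import json
import time

def twin_snapshot(rate_of_fall: float, distance_to_others_m: float) -> str:
    # Build one digital-twin update; in practice this payload would be pushed to the
    # IoT cloud server on every monitoring cycle for other appliances to read.
    state = {
        "user_id": "rider-001",               # hypothetical identifier
        "rate_of_fall_mps": rate_of_fall,
        "distance_to_others_m": distance_to_others_m,
        "updated_at": time.time(),
    }
    return json.dumps(state)

print(twin_snapshot(rate_of_fall=0.0, distance_to_others_m=4.2))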
The system 100 involves sharing the federated threat modelling data with peer users or peer devices to help them. In particular, the system 100 involves active sharing of curated threat models over V2X in addition to the sensor data sharing, which helps other devices increase accuracy.
The system 100 is also configured to be implemented for mapping of the environment which includes figuring out what is where, without any limitations. The system 100 is also configured to be implemented for three-dimensional mapping with location.
The system 100 and the device 240 use sensor-fused data, object detection, and an AI algorithm (AI-based threat detection) to determine whether the objects around the user could pose a threat. As used herein, AI-based threat detection is not a fixed algorithm that raises a threat alert and may instead include a neural network based algorithm.
The device 240 is designed as a head-worn device with integration of advanced sensors, including ones that can use audio and visual methods to proactively study the threat vectors around the user. The device 240 could be worn during a sport activity or by a safety worker, while threat assessment may be performed in the cloud server 202. In addition, the device 240 also relies on data and intelligence outside of itself to understand and proactively study the threat vectors in the range outside of the visible range. In short, the device continuously sees through its directly connected sensors and also, through a cloud or edge service, through the sensors which are in other systems and are sharing data.
When the device 240 is worn by a sports person (for example, a football player), the physical inputs (from the cameras) could be the same. The synthetic sensor inputs could be temperature, heat, previous history of accidents in the zone, etc. Herein, the threat patterns scanned around the sports person could be very different from those around a biker riding on the road. For example, when a person is wearing the helmet while playing a game, the system or the device could be constantly looking for flying objects, such as a ball that could crash into the user from behind.
The device 240 warns the user by using a multimodal communication method. The device 240 uses bone conduction to communicate warnings with directionality and intensity to the user. For example, if the threat is approaching from the rear left, this position is communicated to the user by effecting electrical signals at the rear left portion of the helmet. In addition, the device provides a visual cue of the intensity by using the colour and brightness of a visor-mounted lighting and display system.
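A minimal sketch of this directional, intensity-graded warning is given below; the six-sector actuator layout and the colour ramp are assumptions chosen for illustration.

def select_actuator(bearing_deg: float) -> str:
    # Bearing is relative to the user: 0 = straight ahead, 90 = right, 270 = left.
    sectors = ["front", "front-right", "rear-right", "rear", "rear-left", "front-left"]
    return sectors[int(((bearing_deg % 360) + 30) // 60) % 6]

def visor_intensity(threat_score: float) -> tuple[str, float]:
    # Map the threat score onto a colour and brightness for the visor-mounted display.
    colour = "red" if threat_score > 0.8 else "amber" if threat_score > 0.5 else "green"
    return colour, min(1.0, threat_score)   # brightness scales with the threat

print(select_actuator(215.0))   # rear-left: pulse the actuator behind the left ear
print(visor_intensity(0.87))    # ('red', 0.87)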
The device 240 also actively broadcasts the status of the user, such as the location of the user, as well as whereabouts such as angle, speed, altitude, etc. This helps other devices or vehicles understand the user's situation and receive alerts. For example, if someone is skydiving, peer divers get alerts if they come too close, and planes flying nearby get information about the position of the user. The same is also applicable on the road.
The device 240 actively learns when a threat is mitigated without danger and learns that such an event is within the threshold levels that do not require warning the user. For example, when a user wears the device while playing soccer, it learns that it is common for players from the opposing team to come very close while this is still not considered a threat. In the same scenario, however, it warns when the player is running backwards and could run into another player.
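This adaptation can be sketched as a per-context threshold that drifts with outcomes; the learning rates and the AdaptiveThreshold class below are assumptions rather than the disclosed machine learning models.

class AdaptiveThreshold:
    # Per-context warning threshold that drifts upward when predicted threats repeatedly
    # resolve without harm, and downward when a real incident occurs.
    def __init__(self, initial: float = 0.5, rate: float = 0.05):
        self.value = initial
        self.rate = rate

    def observe(self, predicted: float, caused_harm: bool) -> None:
        if caused_harm:
            # A real incident was marginal or missed: tighten the threshold.
            self.value = max(0.1, self.value - 2 * self.rate)
        elif predicted > self.value:
            # The event looked threatening but was mitigated safely: relax slightly.
            self.value = min(0.95, self.value + self.rate)

soccer = AdaptiveThreshold()
for _ in range(5):
    soccer.observe(predicted=0.6, caused_harm=False)  # opponents come close, no harm
print(round(soccer.value, 2))  # 0.6: such close approaches no longer exceed the threshold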
The device 240 also uses electrical signals around the head to understand the response of the user to a threat, in order to learn more about what matters and what is important in different contexts.
The present disclosure provides an increase in safety for users while performing activities that involve risks. This is made possible by performing active threat management or active threat monitoring on behalf of the user, thereby becoming an extended sense of the user. The present disclosure also fosters creating an overall safe environment by enabling sharing of critical parameters of the user, such as location, approach vector, speed, etc., with devices (peer devices) around. The present disclosure enables proactive remote threat management to be provided as a service to the user. The present disclosure also enables a faster reaction time for a helpline in case an emergency occurs, by using features like the digital twin.
The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the present disclosure and its practical application, to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated.