1. Field of the Invention
This invention relates to an augmented sight and sensing system. More specifically it is a system that allows improved pointing accuracy and improved detail of sensing information through a plurality of means that incorporates the superimposition of a “reticle” image upon the device's concurrent sensing of a target image in a manner that allows a composite image, with or without additional sensed information, to be detected at one or more subsequent sensory locations or devices.
2. Background
Aids to sighting systems have taken many forms to serve a variety of purposes and to incorporate new or improved materials or methods. Examples of sight systems include sights on weapons, on cameras, and on other devices that require user-pointing control for correct operation. Examples of variants among those systems include iron sights, reflex sights, and holographic sights on weapons or other devices. Currently available reflex sighting systems generally cannot be easily expanded in functionality. In addition to reflex sights, electronic sighting systems that employ technology-aided reticles typically place a small display screen directly in front of the eye and a camera module mounted at the front of the optics. On such systems the camera records all optical data and transmits it to the display, potentially with added overlays such as an electronically generated reticle; these systems may also have limited image resolution. Such systems are subject to a variety of problems, including potential incompatibilities with lens systems, interference with secondary systems during power outages, unexpected power outages at critical moments, system failure due to moisture intrusion, increased eyestrain during low-light use, and possibly even illegality of use with a firearm.
Some disclosed sighting systems include, for example, US Reg. No. H1,891 that describes an apparatus for displaying and recording an image viewed by a marksman or shooter through a weapon sight having a beam-splitter and video camera.
U.S. patent application 2009/0168057 describes a reflex sight with a cross-hair reticle image that allows the user to view a target image and a superimposed reticle that is said to be not easily perceived by the target.
U.S. patent application 2010/0251593 describes a system to automatically calibrate a firearm using a reflex sight (beam-splitter) that projects a target image to a video camera, with a computer processing element that assists in adjusting and calibrating the sight.
U.S. patent application 2010/0258000, published Oct. 14, 2010, discloses a wireless, waterproof, remotely operated weapon-mounted sighting system having a camera, an operator-borne CPU, and a "heads-up"/mounted display. This application discloses a system with a digital reticle electronically superimposed (and electronically boresighted to the weapon) on an image from a weapon-mounted camera. The image is transferred via wire or wirelessly to a "heads-up" display. The software-generated reticle has no relationship to any other sighting device or system attached to, or integral with, the weapon. It is apparently part of a family of sighting systems designated as "SmartSights". The "SmartSights" have a camera mounted to the rail of a rifle that transmits live video to a small computer worn on the user's vest. This is claimed to be a significant improvement over a cancelled Army "Land Warrior" system that proved too heavy for military use.
Most current sight or visioning systems, as those described above, are specifically designed as weapon sights, yet sighting and sensing systems have many potential uses beyond weapon sights, including use in any situation where accurate pointing and/or sighting is desirable, as, for example, where it is desirable to remotely view and optionally record a target image and associated audio. Accurate sighting may be coupled with appropriate means to manipulate various functions at or on the target.
It would be desirable to have a sighting and sensing system that provides a reticle type image or other situation-relevant data superimposed on a target image that could be concurrently viewable by a user (or image capture device) and captured by a second imaging sensor that would allow transmittal, viewing and/or recording of the same image that is observable by an on-sight or remote observer. The present invention provides such a system.
The present invention, in its broadest scope, is a sighting and sensing system comprising the following components:
a.) a beamsplitter that separates a target image into at least two images, one for each of at least two image sensors;
b.) means for producing situation-relevant information;
c.) at least two image sensors;
d.) at least one imaging optic to focus at least one of the separated images at infinity for said sensors;
arranged in relationship to each other so as to enable each image sensor to simultaneously sense a target image, or to sense a target image having situation-relevant data images superimposed upon it, to form a composite image.
The system may also comprise at least one microphone to record the audio signals which correspond to received video signals.
This system overcomes many of the limitations of prior systems and expands the functionality of reflex-style sights. It can provide a superior and lower-cost reticle image, and can include integrated night vision, thermographic imaging systems, X-ray or terahertz-wave body scanning systems, or other information sensed by the system. It is a sight and sensing system that augments the user's view by superimposition of a reticle image and other sighting aids. The system is, in one of its many applications, optionally mountable on a standardized mounting rail and may incorporate a wireless transceiver means capable of streaming live audio/video data to a device capable of receiving and displaying such data. It may also stream digital data containing measurements of sensed information and optionally incorporates functionality for recording live audio/video and still images for retrieval/playback from a storage device.
Moreover, the system allows a point-from-safety provision, natural target acquisition with aided or unaided eye, night/low-light target acquisition, aggregation and transmission of sensed data and video recording of the target images.
The present invention may be understood more readily by reference to the following detailed description of embodiments and the figures and examples. It is to be understood that this invention is not limited to specific systems or to particular embodiments, as such may, of course, vary.
In this specification and in the claims which follow, reference will be made to a number of terms which shall be defined to have the following meanings:
The singular forms “a”, “an” and “the” include plural referents unless the context clearly dictates otherwise.
“Optional” or “optionally” means that the subsequently described event or circumstances may or may not occur, and that the description includes instances where the event occurs and instances where it does not.
“Include” or “included” or “including” when used in reference to systems of the invention means that the subsequently named items or events may be a subset of a larger group of relevant items or events, and that the listing of items or events should not be construed to exclude items or events that are not named within that listing.
“Beamsplitter,” when used in reference to systems of the invention, means any of a variety of devices and materials or plurality of them that can divide an incoming light ray (beam) into two or more beams that are substantially identical. Beamsplitters may include, for example, cube beam splitters, Wollaston prisms, pellicle mirrors, dichroic mirrors or simply a flat pane of glass or plastic.
“Collimated light,” when used in reference to systems of the invention, means light whose rays are parallel and considered to be focused at infinity, and therefore will spread slowly as they propagate.
“Imaging optic,” when used in reference to systems of the invention, means a lens used to form an image, consisting of a curved mirror or lens with some type of light source and/or an image at its focus. This can be used to replicate a target at infinity without parallax. Imaging optics can be, but are not limited to, double convex lenses, plano-convex lenses, positive meniscus lenses, or off-axis parabolic mirrors.
“Engineered diffuser lens assembly,” when used in reference to systems of the invention, means any beam shaper capable of homogenizing light, shaping its intensity profile and distribution in space; thereby creating a new point source that is then collimated by an imaging optic.
“OAP,” when used in reference to systems of the invention, means off-axis paraboloidal mirror.
“Reticle,” when used in reference to systems of the invention, means an illuminated dot or shape produced by non-magnifying optical devices of various constructions (often called reflex sights, collimated reticles, or holographic reticles) that give the viewer an image of the aiming point superimposed over the field of view.
“Image sensor module,” when used in reference to systems of the invention, means a subassembly comprised of imaging optics and one or a plurality of digital image sensors (which may or may not be programmable) and supportive packaging and circuitry.
“Image sensor,” when used in reference to systems of the invention, means any type of digital image sensor without associated imaging optics; it may refer to one or a plurality of devices, which may or may not be programmable, and which are capable of receiving optical signals and converting them to electronic and/or digital signals in a defined pattern. An image sensor can also mean an observer's eye.
“Composite image,” when used in reference to systems of the invention, means the compound image formed by the superimposition of an image containing situation-relevant information, such as
a. a reticle image
b. device state information pictograms, such as battery charge indicators, wireless signal strength, on/off/standby status, current mode of operation, charging status, software update status, and any other displayable information regarding the state of the device itself;
c. situational awareness pictograms, such as 2D and 3D compasses, rally point markers, numerals for range to target and compass headings, virtual “sighting wires”, multiple target indicators, detected threat indicators, and any other displayable data that would inform the user of any information pertinent to his environment;
d. sensed information outside the visible spectrum, such as light in the ultraviolet, near-infrared, mid-infrared, or far-infrared bands, or any wavelength that can be refracted or reflected with imaging optics or beamsplitters onto a second image, such as a target image, such that the combined image may be seen by an observer or detected by a sensor.
“Processor,” when used in reference to systems of the invention, means one or a plurality of programmable devices that control and manage system functions, and which collect and manipulate data from other devices, which themselves may or may not be programmable.
“PCA,” when used in reference to systems of the invention, means printed circuit assembly, which is a printed circuit board (or a substrate providing similar function) that facilitates the use of electronics designs.
“Micro-display,” when used in reference to systems of the invention, means any substantially small electronic image generating apparatus such as those that are made from a variety of types of light emitting diodes or other display technologies that may be used to display an electronic image.
“Receiver,” when used in reference to systems of the invention, means a programmable or non-programmable device or plurality of devices that can receive analog or digital data, control signals, or one or more of these.
“Transmitter,” when used in reference to systems of the invention, means a programmable or non-programmable device or plurality of devices that can transmit analog or digital data, control signals, or one or more of these.
“Transceiver,” when used in reference to systems of the invention, means a programmable or non-programmable device or plurality of devices that can transmit and receive analog or digital data, or control signals, or one or more of these.
“Wireless receiver,” when used in reference to systems of the invention, means a receiver that does not require a physical connection to a transmitter.
“Wireless transmitter,” when used in reference to systems of the invention, means a transmitter that does not require a physical connection to a receiver.
“Communication,” when used in reference to systems of the invention, means the device or devices used to allow unidirectional, bidirectional or multidirectional exchange of information, or the act of using those devices.
“Standardized mounting rail,” when used in reference to systems of the invention, means a mounting rail of a standardized design that allows the installation and interchangeability of accessories from different sources and of different designs. These mounting rails are used on telescopes, firearms, optics, etc., and provide a standard interface to mount reflex sights, telescopic sights, lights, lasers, or any desirable accessory to various objects. Some such standard rails include the Weaver standard, the Picatinny rail (also known as a MIL-STD-1913 rail), the NATO accessory rail, the dovetail mount, the optical rail, or the optical bench.
“Audio/video,” when used in reference to systems of the invention, means the associated acoustic signals that accompany a live event and are recorded and/or transmitted. These audio signals are incorporated into the data stream using standard encoding techniques.
The present invention in its broadest scope is a sighting and sensing system comprising:
1) a beamsplitting means to divide incoming light beams from a source (target) image into two or more separate beams, one of which may be collected by a primary sensor (such as an observer's eye or an alternate sensor such as an image sensor module) and one of which is collected by one or a plurality of secondary image sensors or image sensor modules;
2) a situation-relevant image generating means by which a light source projected out at infinity is directed into a beamsplitting means and then separated into two substantially identical beams that may enter the observer's eye (or a primary sensor) and a second image sensor or module substantially simultaneously. The situation-relevant image impinging on the second sensor is the actual image of the reticle itself, not an electronically generated digital image added during post processing of the target image;
3) a means to generate and superimpose an image (such as a reticle, augmented reality information, or more broadly “situation-relevant” information) onto a target image such that a composite image is created;
4) a means of image collection and manipulation such that the images collected by a second imaging sensor or module will be substantially identical to the images perceivable by the first sensor; and optionally,
5) a means to record or transmit the composite image and optional associated audio in any of a plurality of ways such as:
a) storage of the image within the system, in a fashion such that it may be retrieved for further use within the system or external to the system;
b) transmission of the image or audio/video to any of a wide variety of locations or devices, using any of a wide variety of transmission or communication methods and associated devices, where the image may be recorded, viewed, heard, processed, or otherwise acted upon.
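The cooperation of components 1) through 4) above can be illustrated with a short model. The following Python sketch is purely illustrative and forms no part of the claimed apparatus; the function names, image representation, and pixel values are all hypothetical assumptions. It shows that, because the beamsplitter divides the already-composited beam, both sensors receive substantially identical composite images.

```python
# Illustrative model only: images are 2D lists of intensity values.
# A nonzero reticle pixel overlays the corresponding target pixel.

def superimpose(target, reticle):
    """Form the composite image: reticle pixels overlay the target image."""
    return [
        [r if r else t for t, r in zip(t_row, r_row)]
        for t_row, r_row in zip(target, reticle)
    ]

def split_beam(image):
    """A beamsplitter yields two substantially identical copies of a beam."""
    return image, [row[:] for row in image]

target = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]    # hypothetical target image
reticle = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # single-dot reticle image

composite = superimpose(target, reticle)
primary, secondary = split_beam(composite)    # eye/primary sensor and image sensor

assert primary == secondary      # both sensors "see" the same composite image
assert composite[1][1] == 9      # reticle dot present at the aim point
```

Note that the split occurs after superimposition, which is why no software-generated reticle need be added to the secondary sensor's image in post-processing.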
The system components are, in one embodiment, preferably housed in a form that is mountable on a standardized mounting rail. While the sighting system is generally described herein as a weapons sight, it is also useful for other sighting purposes such as surveillance, remote sensing, supervisory control and data acquisition (SCADA) and the like.
There are a myriad of uses that embodiments of the invention may be adapted to serve. For example, the system of the invention can also be implemented as a next-generation security system that enables manual or automatic isolation of a threat from background movement and provides information for remotely acting upon this detection.
The information obtained with the system can be provided simultaneously to a proximal user as well as to other people and devices. This enables the system to be deployed for sighting in which the goal is not (immediately) to fire a weapon, but to identify an object or location of interest and transmit visual, audible, and additional cybernetic data (including positioning, target analysis and other types) to peer devices or to a central system. For example, it is possible to pinpoint an object of interest and send an electronic message to a central recording system or a team decision maker, which can then send back information on the action to take toward that object or location, as well as to other assets that may be under its control.
In non-combative use cases, the system may be used in sighting and sensing information for industrial use in which the augmentation allows a user to point at a distant object or location from which additional sensors can gather and concurrently transmit with the audio and/or visual data a variety of details about the object/location. For example, from a helicopter moving down a pipeline, the system could allow precise pointing to objects and locations on the pipeline that trigger collection of infrared data that could determine the location of a leak, or even a potential for a leak to occur. In SCADA (Supervisory Control and Data Acquisition) applications this capability can improve the accuracy of data gathered while reducing risk. The capabilities of the system also have applicability in tactical law enforcement and in a myriad of military uses for the automation of augmented intelligence gathering.
An important aspect of the invention is that it may be small in size and lightweight. A prototype of the complete system may be contained in a package size of about 5″×2″×2.5″. While not required for suitable operation of the system of this invention, it is preferred that essential components be completely self-contained and designed to be resistant to dust damage, shock damage, and moisture contamination; to this end the system would be of a completely sealed design, protected from such environmental damage. A sealed design requires the electronic system's power source to be charged remotely by wireless methods. Wireless charging can include, but is not limited to, inductive charging systems, which remove the requirement for charging ports and/or removable battery covers and seams.
Another important aspect of the invention is a point-from-safety provision giving the user the ability to accurately target from cover or protection using composite images and augmented information. In this point-from-safety provision the video display optionally displays an image of the target with an aiming spot superimposed thereon. The video display may be located remotely from the optical sight at some distance (such as a Smartphone or HUD); thus the user may optionally view the video display to effect aiming of the firearm or other device remotely, or may place his eye or sensor in line with the aiming reticle and sight along it to effect aiming of the firearm or other device directly. Further, if used on a weapons platform as a primary aiming device, the user is assured of accurate target acquisition in a point-from-safety situation, irrespective of whether his eye or the video display is used to effect aiming, because the reticle image:
a. is the same image;
b. has an inseparable relationship to itself (unlike the “SmartSight”, which has no relationship to the primary aiming device(s));
c. is housed in a single device; and
d. is boresighted to the weapon.
The video display may display the aiming spot and an image of the target using software written specifically for the display to effect any number of novel aiming functions.
Aspects of embodiments of the invention are illustrated in the figures. Each of the embodiments will employ a hardware system that may be one of two types, a “basic” hardware system or an “advanced” hardware system, and each embodiment designates the type of hardware system used for that embodiment.
A suitable electronics system for the embodiments described as using the “basic” electronics system is illustrated in the functional block diagram in
Both the “basic” and “advanced” electronics systems employ an integrated software system that coordinates, manages and controls the operation of the system according to the user-supplied configuration settings and, where applicable, the presence of optional or auxiliary devices. It coordinates the operation of timing-sensitive functions of the system such as: digital signal processing (312), the micro-display subsystem (313), image sensors (186, 316), digital and analog sensor inputs (193, 322, 323, 325, 326, 327, 328), on-board file storage (307), the communications subsystem (308), and any ancillary processes that may be advantageous in certain applications of an embodiment. In some embodiments, the software system may also beneficially manipulate, alter, or correct information prior to its display (313) to the user, or further augment the information displayed (313), stored (307) and/or transmitted by the communications subsystem (308). The system software and hardware architecture support optional subsystems and features (described in each embodiment) when present, as well as platform configuration changes, such as alternative wired or wireless communication methods and/or updates to system software or firmware.
The software system may have several modes of operation, including: a) a live-video (with or without associated audio) streaming mode to one or more devices via the communications subsystem (308); b) a file access mode to allow access to stored files and images (307); c) a calibration mode allowing concurrent access to all sensors, data, and calibration of the display, and secondary subsystems; and d) a setup mode to allow for configuration and system maintenance. Under system software control, the communications subsystem (308) may be configured to provide remote access to various features of the system's modes of operation. Referencing
The software may be used to enable a variety of conflict support capabilities including: data encryption, intelligence mode imaging, geospatial location of casualty, real-time electronic order of battle, silent distress or alert beacon, tactical mode geospatial target location communication for fire support, concurrent audio/video and data communications with or without encryption, in ITAR mode NSA and DoD approved vendor-blind encryption, captured weapon tracking and targeting beacon, tactical peer-to-peer coordinated attack, remotely engaged secure data self destruction, telemetry based device authentication with access rejection and reporting, and other uses, integration with C4I, LVC, NCW, and other programs and systems.
In an embodiment (
Reticle beam (26) produced by light source (2) and imaging optic (3) impinges on beamsplitter (1) and is divided into beam (26b), which may be detected by image sensor module (5) after optionally passing through a band-pass wavelength filter (4) (for the same reasons stated above), and beam (26a), directed to an observer or sensor (30), where the user may effect aiming of the device. Examples of possible reticle images are illustrated in
The composite image from beams (25b) and (26b) (and any other sensed information) may be transmitted by means of wireless signal (6) to a suitable receiver (40) after processing by the PCA (14). The composite image produced from the target (20) and the reticle image (26) is simultaneously “seen” by observer or sensor (30) and image sensor module (5), and may be viewed in real-time via wireless video display device (40). The video data receiver and display device may be any suitable device known to those skilled in the art. In a preferred implementation it will be a heads-up display (HUD).
In another basic embodiment (
The simultaneous impingement of the targeting reticle image (26) generated by the imaging optical assembly (10) and the target image (25) entering through distal aperture (9) advantageously precludes the need for a software generated reticle on the wireless video display device (40) since the actual aiming reticle (as bore-sighted to a device via adjustment screws (15) co-located with the imaging optical assembly (10)) is the same light beam as seen by the observer's eye or sensor (30). A transparent display device (16) interposed between beamsplitter (1) and proximal aperture (8) receives processed data from various sensory inputs (such as a digital compass (19) and GPS receiver (18)) from the main electronics board (24), allowing an observer to view augmentation data as in
In a secondary operation mode the system allows the user to retrieve locally stored audio/video files, video files, and images residing on a storage device (17) at a later time. Advantageously, due to the nature of an optional polarizing beamsplitter (1), no light generated by the imaging optical assembly (10) or by the transparent display device (16) will exit the distal aperture (9), maintaining the covertness of the observer; such light is observable only through the proximal aperture (8).
While, in general, reticles for reflex sighting systems are simple dots as in
In yet another embodiment there is provided a ballistic drop compensator and electronic windage and elevation adjustment via a beam steering device. These beam steering devices can reposition the reticle's vertical/horizontal position on the beamsplitter cube (1) either manually from user input, or automatically, calculated using an algorithm based on a ballistic table for the specified ammunition of the user's weapon and data from an on-board range-finder (21). Additionally, the transparent display (16) could show the numerical range to target in yards/feet/meters or any suitable measurement system desired (222
A set of specific preferred embodiments is shown in
In broad scope, these embodiments comprise a double off-axis parabolic mirror system that replaces an image sensor module (5 in
As in the previous embodiments, and also based upon the “advanced” electronics system, incoming light (25) from target (20) impinges upon the beamsplitter (1), some of which (25a) passes through the beamsplitter to the eye or sensor (30), and the remainder of which (25b) is reflected by the first OAP (124), which focuses the image to the proper point on the image sensor (186). The micro-display (313) projects the situation-relevant information (26) onto the second OAP (126), which collimates it and reflects it onto the beamsplitter (1). The situation-relevant information is split into beam (26a), directed to the eye or sensor (30), and beam (26b), directed to the image sensor (186) via the first OAP (124); both beams are substantially identical as seen by both sensors, as in the prior embodiments. Although it is possible in this embodiment to use a standard, masked, or auto-iris diaphragm light source with mechanical means to boresight the device to an apparatus, that is not the preferred method. Boresighting in this embodiment is accomplished by selective illumination of pixels on the micro-display (313) to produce a reticle image (
The construction of a double-OAP beamsplitter, although novel in concept and configuration, is not complicated for those skilled in the art; the juxtaposition of the various parts is therefore a novelty of this embodiment. Another important aspect of this embodiment is that the situation-relevant information displayed by (313) can either be sent to the imaging sensor (186) through double reflection (124+126) or be digitally overlaid onto the video feed; note that in this embodiment, if doubly reflected, the eye or sensor (30) will sense the situation-relevant information, and it could be used for targeting, which was not possible in prior embodiments. Further, a filter (such as 4 in
For some applications of the system of this invention based on the “advanced” electronics system, the sighting function may optionally incorporate automatic compensation capabilities. In firearm sighting applications of the system, for example, it may employ manual and/or automatic reticle correction to accommodate ambient factors detected by the user or sensed by the system, such as range to target, wind effects, temperature, barometric pressure, position-relative inclination and elevation, or other ambient factors that affect firing accuracy.
There may be built directly into the system a ranging system (21 in
The sighting function may also employ optional synthetic aperture radar (SAR) capabilities, as illustrated in
A primary feature of the system of the invention is the ability to record video and images from the sight system, inline with the sight path. This capability provides a foundation for a range of functions that augment the utility of information collected that may be based upon the advanced electronics system.
Parallel Multi-Path Operation. The system can automatically transmit the live video feed via the communications subsystem (308,
Augmentation. Because the system processor(s) (302) act in a supervisory control capacity, directing data acquisition from multiple advanced electronics system sensors (186, 316, 325, 326, 327, 328), optionally supplemented by sensing inputs (321, 322, 323), the system may apply one or several signal processing algorithms to each of the signal streams. The simplest, but still valuable, example is high fidelity image compression and/or encryption, which can reduce the elapsed time required to transmit image data. This is particularly valuable when the use of a live stream might compromise a covert user's location.
In a similar but advanced implementation, reticle derived information is used to enhance the operation of the image processing algorithms to optimize the balance of clarity and size while minimizing signal loss. This is achieved through real-time calculation of the velocities of image movement relative to the reticle that are used to optimize the bounds of an area within the image to use as a criterion zone for signal analysis and processing. When this technique is applied to a previously stored signal stream, the method permits “backward enhancement”, wherein the optimal criterion zone bounds established later in the signal stream can be applied to the optimization of frames earlier in the stream.
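The criterion-zone idea can be sketched as follows. This Python fragment is a hypothetical illustration: the `criterion_zone` function, the velocity threshold, and the sample track are assumptions for demonstration, not the disclosed implementation.

```python
# Hypothetical sketch: per-frame target positions relative to the reticle give
# velocities; frames with slow relative motion define a tight bounding zone.

def criterion_zone(positions, max_speed=2.0):
    """Bound only those frames whose motion relative to the reticle is slow."""
    slow = [
        p2 for p1, p2 in zip(positions, positions[1:])
        if abs(p2[0] - p1[0]) + abs(p2[1] - p1[1]) <= max_speed
    ]
    xs = [p[0] for p in slow]
    ys = [p[1] for p in slow]
    return (min(xs), min(ys), max(xs), max(ys))

# Target positions (relative to the reticle) over seven frames: the first frame
# jitters far off-axis, then the track settles near the aim point.
track = [(9, 9), (1, 2), (1, 1), (0, 1), (0, 0), (1, 0), (0, 0)]
zone = criterion_zone(track)

assert zone == (0, 0, 1, 1)   # tight criterion zone near the reticle
# "Backward enhancement": these zone bounds, established late in the stream,
# may also be applied when reprocessing frame 0, which lay outside them.
```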
Advanced Augmentation. Building upon basic augmentation methods, more advanced implementations can significantly increase image utility without detracting from the live use of the system. In one such embodiment, a parallel stream of clock-synchronous subchannel data, including high-precision geospatial location and orientation information (325, 326, 327, 328), may be indexed by independent stream processes or processors and tagged to the sensor streams (321, 322, 323), which may be running at differing sample rates, such that real-time display is not slowed while providing high-clock-precision parallel post-processing and high-precision signal coherency across signal streams.
In an advanced implementation, the criterion zone bounds are applied to thermal and other non-visual data streams to establish a priority space for signal enhancement within each stream. The demarcation of features within the frame of each stream may then be used to more precisely optimize each of the other streams.
In one such implementation, automatic and/or manually triggered still-image capture while in video mode can be used to obtain high sensitivity reference points that are used to inform video signal processing algorithms. This results in a substantial improvement in the video signal-to-noise ratio. In another advanced implementation, rapid still image captures are composited using digital signal processing techniques in combination with reticle derived information to generate a composite image with greater detail than was visible with any of the individual images.
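The noise-suppression benefit of compositing rapid still captures can be sketched numerically. The following Python fragment is illustrative only: it models pixels as Gaussian-noised samples of a static scene rather than the disclosed digital signal processing and reticle-derived alignment, and all values are assumptions.

```python
# Illustrative sketch: averaging N rapid still captures of a static scene
# suppresses independent sensor noise; signal-to-noise improves roughly with
# the square root of N, so the composite shows more usable detail.
import random
from statistics import pstdev

random.seed(7)
TRUE_PIXEL = 100.0
captures = [[TRUE_PIXEL + random.gauss(0, 10) for _ in range(256)]
            for _ in range(16)]                       # 16 noisy stills

# Per-pixel average across the 16 captures (frames assumed pre-aligned).
composite = [sum(col) / len(captures) for col in zip(*captures)]

noise_single = pstdev(captures[0])
noise_composite = pstdev(composite)

assert noise_composite < noise_single / 2    # 16 frames: roughly 4x reduction
```

In the disclosed system the captures would first be registered using reticle-derived motion information; simple averaging without alignment would blur a moving scene rather than sharpen it.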
A highly useful function of the sighting system is the recording of the video image within the optic path of the sight system; the system is not limited, however, to recording or projecting in the sight system's visible spectrum alone. The system can be provided with the capability to overlay additional information onto the video image and onto the output to the eye or sensor (see 30
An infrared sensor array can view light in the IR band, which correlates to the thermal output of the imaged area. Adding thermal information to the sight system allows the intelligent systems to distinguish true live targets from non-targets and adds resolution to the information presented to the user (see
The thermal data can be lightly overlaid upon the image the user sees, adding real-time situational awareness, increasing efficiency and accuracy, and reducing inadvertent targeting.
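A "light" overlay of this kind is commonly realized as a low-alpha blend of the thermal channel onto the visible image, so hot regions remain apparent without obscuring the underlying scene. The function name `overlay_thermal` and the default `alpha` value are illustrative assumptions:

```python
# Hedged sketch: alpha-blend a thermal intensity channel onto the visible
# image, per pixel: result = (1 - alpha) * visible + alpha * thermal.

def overlay_thermal(visible, thermal, alpha=0.25):
    """visible, thermal: 2-D intensity grids of identical shape.
    A small alpha keeps the overlay 'light' so the scene stays legible."""
    return [
        [(1 - alpha) * v + alpha * t for v, t in zip(vrow, trow)]
        for vrow, trow in zip(visible, thermal)
    ]
```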
The augmented sight and sensing system's modular and extensible architecture provides for a plurality of sensed information to be managed in optical forms, digital forms, both forms separately or both as composite forms, a flexibility that enables a plurality of uses and application-specific designs. The modular and extensible architecture enables embodiment designs that include: 1) the sensed detection of a significant acoustic event; 2) the determination of the source location of the event; and 3) augmented information and decision support for the user or subsequent sensing device to react to the event, including recording of the event, its location, or a combination thereof. An illustrative embodiment for armed conflict use cases, such as military or law enforcement use, (see
The Shot Detection and Targeting capability (SDT) of the augmented sensing and sighting system's sensor and processing functions (
When the sensing and processing functions (328, 327, 321, 322, 323) calculate an origin vector (in this embodiment the source location of a shot) they incorporate the current origin vector and system relative compensated vector information (326, 325) into the data stream for display (313). The system relative compensated vector information (325, 326) is continuously sensed and recalculated as described above (328, 327, 321, 322, 323) to adjust for subsequent sensed information (325, 326) such that the current pointing vector, current response origin vector, and a current vector alignment indication are provided on display (313).
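The vector alignment indication described above can be sketched as an angular comparison between the current pointing vector and the response origin vector, with alignment reported when the angle falls within a tolerance. The names `angle_between`, `alignment_indication`, and the tolerance `tol_deg` are assumptions for illustration, not terms from the specification:

```python
import math

# Illustrative sketch: compare the current pointing vector against the
# response origin vector and report an alignment indication.

def angle_between(a, b):
    """Angle in degrees between two 3-D vectors (need not be unit length)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp guards against floating-point drift outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def alignment_indication(pointing, origin, tol_deg=2.0):
    """True when the pointing vector is within tol_deg of the origin vector."""
    return angle_between(pointing, origin) <= tol_deg
```

In the embodiment, the vectors themselves would be continuously recomputed from the compensated sensor data (325, 326) before each comparison.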
The augmented sighting and sensing system architecture, as applied to armed conflicts in this embodiment, allows more rapid acquisition of the response origin vector, continued alignment to the response origin vector, faster and more accurate response decisions, and improved situational awareness and effectiveness of response, which individually and collectively improve the safety of the user.
In this specification, the invention has been described with reference to specific embodiments. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification is, accordingly, to be regarded in an illustrative rather than a restrictive sense. Therefore, the scope of the invention should be limited only by the appended claims.
This application claims the benefit of and priority from U.S. Provisional Patent Application 61/660,720, filed Jun. 16, 2012, the contents and disclosure of which are incorporated herein by reference for all purposes.
Number | Date | Country
---|---|---
61/660,720 | Jun. 2012 | US