The present disclosure relates to night vision display devices, and in particular, to user-wearable night vision display devices such as night vision goggles.
Technology is becoming increasingly integrated into the equipment used by modern soldiers. For example, global positioning system (“GPS”) receivers, night vision devices, and gunfire detectors are becoming standard equipment for today's soldier.
GPS receivers allow users, such as soldiers, to navigate in unfamiliar territory, track potential targets, and provide guidance to “smart” bombs and missiles.
A night vision device, such as a pair of night vision goggles, provides enhanced images of low light environments. Night vision is made possible by a combination of two approaches: increasing spectral range and increasing intensity range. For example, some night vision devices operate by collecting tiny amounts of visible light, converting the photons of light into electrons, amplifying the number of electrons in a microchannel plate, and converting the amplified electrons back to a visible image. The enhanced images allow soldiers to operate effectively while remaining safe under the cover of darkness.
A gunfire detector is a system that detects and conveys the location of gunfire or other weapon fire using acoustic and/or optical sensors or arrays of sensors. The detectors are used by law enforcement, security, military, and business users to identify the source and, in some cases, the direction of gunfire and/or the type of weapon fired. Most systems possess three main components: one or more microphones or sensors, a processing unit, and a user interface that displays gunfire alerts.
Overview
Sensor data indicative of a user's environment is received from a sensor. A video signal is generated which comprises a visual representation of the sensor data. The video signal is combined with a night vision view of the user's environment to overlay the visual representation of the sensor data over the night vision view of the user's environment. The overlaid night vision view of the user's environment is displayed to the user.
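By way of illustration only, the following Python sketch shows one possible decomposition of these four steps. All names, the data fields, and the text-based "frame" are assumptions made for this example; they are not specified by the present disclosure.

```python
# Illustrative sketch only: one hypothetical decomposition of the overview steps.
from dataclasses import dataclass

@dataclass
class SensorData:
    label: str          # e.g. "GUNSHOT"
    bearing_deg: float  # direction to the item of interest, relative to the user
    range_m: float      # distance to the item of interest

def generate_video_signal(data: SensorData) -> str:
    """Render a minimal alphanumeric representation of the sensor data."""
    return f"{data.label}: {data.bearing_deg:.0f} deg / {data.range_m:.0f} m"

def overlay(night_vision_frame: list[str], video_signal: str) -> list[str]:
    """Combine the video signal with the night vision view (here, text lines)."""
    return [video_signal] + night_vision_frame

def display(frame: list[str]) -> None:
    for line in frame:
        print(line)

# Usage: a detected gunshot 120 m away at a bearing of 45 degrees.
signal = generate_video_signal(SensorData("GUNSHOT", 45.0, 120.0))
display(overlay(["<night vision view>"], signal))
```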
Depicted in FIG. 1 is an example device for displaying sensor data through a night vision device. The device includes a sensor 110, a video generator 120, and a night vision device 130, such as a night vision goggle. Sensor 110 provides sensor data 140 to video generator 120, which in turn provides a video signal 150 for display through night vision device 130.
Unlike the device of FIG. 1, many traditional sensors communicate their data to a user through a separate, dedicated display, requiring the user to divert attention from the surrounding environment in order to read the sensor data.
Other sensors rely on audio cues to communicate their data to a user. For example, some gunshot detectors include an earpiece through which a computer-generated voice provides an auditory indication of the location of a detected gunshot. Unfortunately, these auditory indications may be heard by unintended parties, for example, the shooter of the detected gunshot. Furthermore, individuals may find auditory indications of sensor data to be less convenient, descriptive, and accurate than visual indications.
In order to provide a different sensor/display solution, the device of FIG. 1 overlays a visual representation of sensor data 140 directly onto the night vision view provided to the user by night vision device 130. According to the example of FIG. 1, sensor 110 is embodied in a gunshot detector 110.
Gunshot detector 110, through the detection of the sound of a gunshot, and/or through the light emitted by a muzzle flash, is able to provide sensor data about the location of the gunshot. For example, if gunshot detector 110 is worn by the user, gunshot detector 110 may determine the location of the gunshot relative to the user, and provide sensor data, such as a range and direction to the location of the gunshot. Similarly, if gunshot detector 110 is located remotely from the user, the sensor data 140 provided by gunshot detector 110 can be combined with location information for the user to determine the location of the gunshot relative to the user.
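For illustration, the following sketch shows one way a fix from a remotely located detector might be combined with location information to obtain a range and bearing relative to the user. The flat-earth approximation and all names are assumptions made for this example.

```python
# Illustrative sketch only (hypothetical names): converting a gunshot fix from a
# remotely located detector into a range and bearing relative to the user.
import math

def local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Flat-earth approximation: (east, north) offsets in metres from a reference."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(ref_lat_deg))
    return ((lon_deg - ref_lon_deg) * m_per_deg_lon,
            (lat_deg - ref_lat_deg) * m_per_deg_lat)

def gunshot_relative_to_user(detector_fix, detector_pos, user_pos):
    """detector_fix: (range_m, bearing_deg) of the shot as seen by the detector.
    detector_pos / user_pos: (lat, lon). Returns (range_m, bearing_deg) from user."""
    rng, brg = detector_fix
    # Position of the detector, in metres east/north of the user.
    dx, dy = local_xy(*detector_pos, *user_pos)
    # Absolute position of the shot, in metres east/north of the user.
    shot_x = dx + rng * math.sin(math.radians(brg))
    shot_y = dy + rng * math.cos(math.radians(brg))
    return (math.hypot(shot_x, shot_y),
            math.degrees(math.atan2(shot_x, shot_y)) % 360.0)

# Usage: detector ~200 m north of the user reports a shot 100 m due east of itself.
print(gunshot_relative_to_user((100.0, 90.0), (51.0018, 0.0), (51.0, 0.0)))
```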
According to other examples, sensor 110 may also include a global positioning system (“GPS”) receiver. Accordingly, sensor 110 may receive global positioning data from a global positioning satellite. For example, the sensor 110 may receive global positioning data for the user, other individuals in the area, or the location of other items of interest, such as the location of a gunshot. The global positioning data may be provided to video generator 120 through sensor data 140. In other examples, sensor 110 may be embodied in a vehicle diagnostic sensor configured to provide diagnostic data for a vehicle, such as the vehicle in which the user is travelling. The diagnostic data may be provided to video generator 120 through sensor data 140, and displayed to the user through night vision device 130. In other examples, sensor 110 may be embodied in a light detection and ranging (“LIDAR”) device.
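Because sensor 110 may take several forms, a uniform interface between sensor 110 and video generator 120 may be convenient. The sketch below shows one hypothetical such interface; the class names and stubbed readings are assumptions, not part of the disclosure.

```python
# Illustrative sketch only: sensor 110 may be a gunshot detector, GPS receiver,
# vehicle diagnostic sensor, or LIDAR; a common (hypothetical) interface lets
# video generator 120 consume sensor data 140 uniformly.
from abc import ABC, abstractmethod

class Sensor(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return one sensor-data record destined for the video generator."""

class GpsSensor(Sensor):
    def read(self) -> dict:
        return {"kind": "gps", "lat": 51.0, "lon": 0.0}  # stubbed fix

class VehicleDiagnosticSensor(Sensor):
    def read(self) -> dict:
        return {"kind": "diag", "fuel_pct": 62, "engine_temp_c": 88}  # stubbed

# Each record, whatever its source, can be rendered into a video signal.
for sensor in (GpsSensor(), VehicleDiagnosticSensor()):
    print(sensor.read())
```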
Video generator 120 may be included in a multipurpose computing device, such as a laptop, a tablet computer, a smart phone, or other multipurpose computing device. Accordingly, video generator 120 may be embodied in a microcontroller or microprocessor in order to generate video signal 150. In other examples, video generator 120 may be a purpose-built processor, such as an application specific integrated circuit (“ASIC”) or a field programmable gate array (“FPGA”). Whether video generator 120 utilizes a multipurpose processor or a purpose-built processor, it may be incorporated into night vision goggle 130, or be arranged external to night vision goggle 130 and configured to communicate video signal 150 through a wired or wireless connection. Similarly, video generator 120 may receive the sensor data through a wired or wireless connection.
With reference now made to FIG. 2, depicted therein is a flowchart illustrating an example method of displaying sensor data through a night vision device. In step 210, sensor data indicative of a user's environment is received from a sensor, such as gunshot detector 110 of FIG. 1.
In step 220, a video signal comprising a visual representation of the sensor data is generated. For example, if the sensor data comprises gunshot location data as well as GPS coordinates for the user, generating the video signal may comprise generating an alphanumeric representation of the gunshot's location. According to other examples, the location of the user in the environment and the orientation of the user may be known. Therefore, the generated video signal may be a visual representation, such as an arrow or crosshairs, indicating where in the user's night vision view of the environment a gunshot originated. Similarly, if the sensor data includes GPS coordinates for a user's desired destination, the video signal may include a visual representation of the direction the user needs to travel to reach the destination.
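By way of illustration, the following sketch shows one way the position of a crosshair within the user's view might be computed from the user's orientation and the bearing to a gunshot. The linear angle-to-pixel mapping, the field of view, and the frame width are assumptions made for this example.

```python
# Illustrative sketch only (hypothetical names and parameters): placing a
# crosshair in the night vision view given the user's heading and the
# bearing to a detected gunshot.
def crosshair_column(user_heading_deg: float, shot_bearing_deg: float,
                     fov_deg: float = 40.0, frame_width_px: int = 1280):
    """Map a world bearing to a pixel column; None if outside the field of view."""
    # Signed angle from the centre of the user's view, in (-180, 180].
    off = (shot_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2:
        return None  # off-screen; an edge arrow could be shown instead
    return round((off / fov_deg + 0.5) * (frame_width_px - 1))

# Usage: user faces 30 deg; a shot at 40 deg appears right of centre (~column 959).
print(crosshair_column(30.0, 40.0))
```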
In step 230, the video signal is combined with a night vision view of the user's environment, thereby overlaying the visual representation of the sensor data over the night vision view of the user's environment. Specific examples of overlaying the video signal with the night vision view of the user's environment are described in greater detail in reference to FIGS. 3 and 5.
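For illustration only, the following sketch shows one simple way step 230 might be realized, treating the night vision view and the video signal as greyscale frames and keying on the overlay's nonzero pixels. The keying rule and names are assumptions, not a combining method specified by the disclosure.

```python
# Illustrative sketch only: combining a video signal with a night vision frame.
import numpy as np

def combine(night_vision: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """Where the overlay has content, replace the night vision pixel."""
    assert night_vision.shape == overlay.shape
    return np.where(overlay > 0, overlay, night_vision)

# Usage: a dim 4x8 night vision frame with one bright crosshair pixel overlaid.
nv = np.full((4, 8), 40, dtype=np.uint8)
ov = np.zeros_like(nv)
ov[2, 3] = 255
print(combine(nv, ov))
```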
Finally, in step 240, the night vision view of the user's environment overlaid with the video signal is displayed to the user.
With reference now made to FIG. 3, depicted therein is an example of an overlaid image 330, in which a visual representation of sensor data 140 is overlaid on a night vision view of a user's environment.
In order to provide overlaid image 330, video generator 120 receives sensor data 140 from sensor 110. The sensor data 140 may be received in the form of a serial stream, such as a serial stream of binary characters. While the serial stream may be encoded with coordinate information for a user's location, the location of a user's destination, the location of a gunshot, a user's desired direction of travel, or the location of another item of interest for the user, the sensor data itself is non-visual data. Accordingly, video generator 120 converts this serial stream into a video signal 150 to provide a visual representation of the sensor data 140 that can be overlaid on a night vision image. As depicted in FIG. 3, the video signal 150 is combined with the night vision view to form overlaid image 330.
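By way of illustration, the sketch below shows one hypothetical encoding of such a serial stream and its decoding back into coordinate records. The record layout (a tag byte followed by two little-endian floats) is an assumption, as the disclosure does not specify a wire format.

```python
# Illustrative sketch only: sensor data 140 arriving as a serial byte stream.
import struct

RECORD = struct.Struct("<Bff")  # tag, bearing_deg, range_m (assumed layout)

def encode(tag: int, bearing_deg: float, range_m: float) -> bytes:
    return RECORD.pack(tag, bearing_deg, range_m)

def decode(stream: bytes):
    """Yield (tag, bearing_deg, range_m) records from the raw stream."""
    for off in range(0, len(stream) - RECORD.size + 1, RECORD.size):
        yield RECORD.unpack_from(stream, off)

# Usage: two records on the wire, decoded back into the non-visual coordinate
# data that video generator 120 can then render into video signal 150.
wire = encode(1, 45.0, 120.0) + encode(1, 310.0, 75.0)
for rec in decode(wire):
    print(rec)
```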
In order to overlay the video signal with the night vision view, a first display and a second display may be used. Turning briefly to FIG. 1, night vision goggle 130 may provide the night vision view of the environment on a first display, while video signal 150 is provided on a second display arranged within the user's line of sight. When the two displays are viewed together, the visual representation of the sensor data appears superimposed on the night vision view.
Returning to FIG. 3, overlaid image 330 allows the user to view the visual representation of sensor data 140 without diverting attention from the night vision view of the environment.
With reference now made to FIG. 4, depicted therein is an example in which video generator 120 generates video signal 150 from both sensor data 140 received from sensor 110 and GPS data 460 received from a GPS receiver 470.
As depicted in FIG. 4, the visual representation of the sensor data may take the form of an arrow indicating the direction from the user to an item of interest, such as the location of a detected gunshot.
If sensor 110 is not located on the user's person, the relative positions of sensor 110 and the user must be known to accurately orient the arrow in video signal 150. Accordingly, GPS data 460 may provide the locations of the user and sensor 110. Additional data may provide the orientation of the user and/or the sensor. For example, a magnetic or gyroscopic sensor may be included in sensor 110 and GPS receiver 470, thereby allowing orientation data to be included in sensor data 140 and GPS data 460, respectively. Similarly, a motion vector may be calculated for the user from GPS data 460, which can also be used to determine the orientation of the user. Of course, while GPS receiver 470 and sensor 110 are depicted as two separate devices, the functions of GPS receiver 470 and sensor 110 may be embodied in more or fewer devices. For example, sensor 110 may be embodied as a GPS receiver, in which case the functions of GPS receiver 470 would be provided by sensor 110. Similarly, the locations of the user and sensor 110 may be provided by separate GPS receivers. On the other hand, if sensor 110 is embodied in a gunshot detector worn on the person of the user, only the orientation data and the gunshot detector data may be necessary to accurately orient the arrow in video signal 150, and the GPS data may be omitted.
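For illustration, the following sketch shows one way the user's orientation might be estimated from a motion vector computed from two successive GPS fixes, as an alternative to a magnetic or gyroscopic sensor. The flat-earth conversion and all names are assumptions made for this example.

```python
# Illustrative sketch only (hypothetical names): estimating user orientation
# from a motion vector derived from two successive GPS fixes.
import math

def heading_from_fixes(prev, curr):
    """prev, curr: (lat_deg, lon_deg). Returns heading in degrees, 0 = north."""
    d_north = (curr[0] - prev[0]) * 111_320.0
    d_east = (curr[1] - prev[1]) * 111_320.0 * math.cos(math.radians(curr[0]))
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

# Usage: a user moving roughly north-east (~44 deg). The arrow in video signal
# 150 could then be rotated by (shot_bearing - heading) to point at the gunshot.
print(heading_from_fixes((51.0000, 0.0000), (51.0009, 0.0014)))
```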
Turning now to FIG. 5, depicted therein is an example in which night vision device 570 includes two optical channels: a left optical channel that produces left enhanced image 520a, and a right optical channel that produces right enhanced image 520b. Left overlay 540a is combined with left enhanced image 520a to form left overlaid image 530a, while right overlay 540b is combined with right enhanced image 520b to form right overlaid image 530b. When viewed together through night vision device 570, overlaid images 530a and 530b are perceived by the user as a single stereoscopic image 550.
Because overlaid images 530a and 530b are combined to form stereoscopic image 550, the user may be provided with a 3-dimensional image. Yet, the use of two optical channels may add additional complexity to the overlaying of sensor data onto night vision images. For example, left enhanced image 520a will be slightly different from right enhanced image 520b. Accordingly, left overlay 540a may need to be positioned in a different location in left enhanced image 520a than where right overlay 540b is positioned in right enhanced image 520b. The difference in positioning of left overlay 540a and right overlay 540b is depicted in left overlaid image 530a and right overlaid image 530b, though the difference has been exaggerated to better illustrate the point.
In order to accurately position overlays 540a and 540b, video generator 120 is provided with night vision device data 560 from night vision device 570. Specifically, night vision device 570 may provide data indicative of, for example, where and how night vision device 570 is arranged and focused. Furthermore, the night vision device data 560 may include information indicating the relative positions of the left optical channel used to create left enhanced image 520a and the right optical channel used to create right enhanced image 520b. By considering the night vision device data 560, video generator 120 may generate two separate video signals: left video signal 150a and right video signal 150b. Left video signal 150a is overlaid on left enhanced image 520a to generate left overlaid image 530a. Similarly, right video signal 150b is overlaid on right enhanced image 520b to generate right overlaid image 530b. Because video generator 120 has taken the night vision device data 560 into consideration when generating left video signal 150a and right video signal 150b, when the user views stereoscopic overlay 580 in stereoscopic image 550, stereoscopic overlay 580 may accurately indicate the location of the item of interest to the user.
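By way of illustration, the sketch below shows one way a horizontal disparity for the left and right overlays might be derived from the range to the item of interest, using a pinhole stereo model. The baseline, focal length, and all names are assumptions made for this example, not parameters of the disclosure.

```python
# Illustrative sketch only: positioning left overlay 540a and right overlay 540b
# with a horizontal disparity derived from the range to the item of interest.
def overlay_columns(centre_col: int, range_m: float,
                    baseline_m: float = 0.065,   # assumed channel separation
                    focal_px: float = 1400.0):   # assumed focal length in pixels
    """Return (left_col, right_col) so the fused overlay appears at range_m."""
    disparity_px = focal_px * baseline_m / range_m  # pinhole stereo: d = f*B/Z
    half = disparity_px / 2.0
    return round(centre_col + half), round(centre_col - half)

# Usage: a gunshot 120 m away needs under 1 px of disparity, while an item 5 m
# away needs ~18 px; near items therefore require clearly different positions
# in the left and right enhanced images.
print(overlay_columns(640, 120.0), overlay_columns(640, 5.0))
```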
According to other examples, night vision device 570 may receive a single video signal, similar to the video signal 150 of FIG. 3. According to such examples, night vision device 570 itself may position the single video signal within each of left enhanced image 520a and right enhanced image 520b to form overlaid images 530a and 530b.
The above description is intended by way of example only.