SIMULATED FIREARM WITH TARGET ACCURACY DETECTION, AND RELATED METHODS AND SYSTEMS

Information

  • Patent Application
  • 20170321987
  • Publication Number
    20170321987
  • Date Filed
    May 03, 2017
  • Date Published
    November 09, 2017
  • Inventors
    • Apkarian; Jacob
  • Original Assignees
    • Coriolis Games Corporation
Abstract
A simulated firearm includes a body, a camera, a memory, and a processor. The body has a handle, a trigger, and a barrel portion. The barrel portion defines a shooting axis. The camera is coupled to the body and aligned with the shooting axis. The camera is configured to capture an image in response to actuation of the trigger. The memory is coupled to the body and stores marker data corresponding to at least one target marker. The processor is coupled to the body and configured to determine whether a portion of the image captured by the camera matches any one or more of the at least one target marker.
Description
FIELD

This disclosure relates to simulated firearms with integrated target accuracy detection capabilities, and to related methods and systems.


INTRODUCTION

Firearms, such as pistols, rifles, and shotguns, have a variety of uses including recreation, protection, hunting, and combat. However, firearms are also dangerous tools that can cause severe injury or harm if used without proper training. Even among experienced users, firearm training can present significant risk of harm to the user, and to nearby people and property. Furthermore, firearms are not suitable for use as toys.


SUMMARY

In one aspect, a simulated firearm is provided. The simulated firearm may include a body, a camera, a memory, and a processor. The body may have a handle, a trigger, and a barrel portion. The barrel portion may define a shooting axis. The camera may be coupled to the body and aligned with the shooting axis. The camera may be configured to capture an image in response to actuation of the trigger. The memory may be coupled to the body and may store marker data corresponding to at least one target marker. The processor may be coupled to the body and configured to determine whether a portion of the image captured by the camera matches any one or more of the at least one target marker.





DRAWINGS


FIG. 1 is a schematic illustration of a simulated firearm and a plurality of target markers, in accordance with an embodiment;



FIG. 2 is a perspective view of the simulated firearm of FIG. 1 aimed at a target including a target marker;



FIG. 3 shows a captured image containing a target marker, in accordance with an embodiment;



FIG. 4 shows a captured image containing a target marker, in accordance with another embodiment;



FIG. 5 shows a captured image containing a target marker, in accordance with another embodiment;



FIG. 6 shows a schematic illustration of a simulated firearm system, in accordance with an embodiment;



FIG. 7 shows a perspective view of the simulated firearm of FIG. 1 aimed at two targets, each including a target marker;



FIG. 8 shows a captured image of a first target marker of a first target of FIG. 7;



FIG. 9 shows a captured image of a second target marker of a second target of FIG. 7;



FIG. 10 is a schematic illustration of a simulated firearm mounted to a remotely controlled, movable stand; and



FIG. 11 is a schematic illustration of a simulated firearm in accordance with another embodiment.





DESCRIPTION OF VARIOUS EMBODIMENTS

Numerous embodiments are described in this application, and are presented for illustrative purposes only. The described embodiments are not intended to be limiting in any sense. The invention is widely applicable to numerous embodiments, as is readily apparent from the disclosure herein. Those skilled in the art will recognize that the present invention may be practiced with modification and alteration without departing from the teachings disclosed herein. Although particular features of the present invention may be described with reference to one or more particular embodiments or figures, it should be understood that such features are not limited to usage in the one or more particular embodiments or figures with reference to which they are described.


The terms “an embodiment,” “embodiment,” “embodiments,” “the embodiment,” “the embodiments,” “one or more embodiments,” “some embodiments,” and “one embodiment” mean “one or more (but not all) embodiments of the present invention(s),” unless expressly specified otherwise.


The terms “including”, “comprising”, and variations thereof mean “including but not limited to,” unless expressly specified otherwise. A listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a,” “an” and “the” mean “one or more,” unless expressly specified otherwise.


As used herein and in the claims, two or more parts are said to be “coupled”, “connected”, “attached”, or “fastened” where the parts are joined or operate together either directly or indirectly (i.e., through one or more intermediate parts), so long as a link occurs. As used herein and in the claims, two or more parts are said to be “directly coupled”, “directly connected”, “directly attached”, or “directly fastened” where the parts are connected in physical contact with each other. As used herein, two or more parts are said to be “rigidly coupled”, “rigidly connected”, “rigidly attached”, or “rigidly fastened” where the parts are coupled so as to move as one while maintaining a constant orientation relative to each other. None of the terms “coupled”, “connected”, “attached”, and “fastened” distinguish the manner in which two or more parts are joined together.


As used herein and in the claims, a first element is said to be “received” in a second element where at least a portion of the first element is received in the second element, unless specifically stated otherwise.


As used herein and in the claims, two components are said to be “communicatively coupled” where at least one of the components is capable of communicating signals (e.g. electrical signals) to the other component, such as across a wired connection (e.g. conductive wire cable), or a wireless connection (e.g. radio frequency).



FIG. 1 shows a schematic illustration of a simulated firearm 100, in accordance with an embodiment. FIG. 1 illustrates various exemplary components of simulated firearm 100. Explicit reference is hereby made to FIG. 1 for the remainder of the description wherever components of simulated firearm 100 are mentioned.


Simulated firearm 100 can simulate a firearm experience, without the risks associated with firing live ammunition. For example, simulated firearm 100 may be employed for training exercises such as target practice, as well as a recreational toy for playing games. Simulated firearm 100 includes a body 104 that simulates a firearm. As shown, body 104 includes a handle 108, a trigger 112, and a barrel portion 116.


Body 104 can be shaped to resemble any firearm. In the illustrated example, body 104 is shaped as a pistol. In other examples, body 104 may be shaped as a rifle (FIG. 11) or a shotgun. Barrel portion 116 resembles a barrel portion or barrel containing portion of a genuine firearm. In a genuine firearm, the barrel provides an exit passage for an ammunition (e.g. bullet) and thereby defines a shooting axis along which the ammunition initially travels as it exits the barrel and moves away from the genuine firearm. Similarly, barrel portion 116 defines a shooting axis 120 with which various components of simulated firearm 100 are aligned in parallel or collinearly in furtherance of simulating the genuine firearm experience. It will be appreciated that in some genuine firearms, the barrel is concealed behind a casing. Accordingly, in some embodiments, barrel portion 116 may not include a barrel or barrel shaped portion. For example, barrel portion 116 may resemble a barrel casing.


Simulated firearm 100 is described below as having various components coupled to body 104. It will be understood that any of these components may be entirely or partially housed in body 104, or positioned outside of and physically attached to body 104. In the illustrated example, a camera 124, a memory 128, and a processor 132 are coupled to body 104.


In brief, a user may take a “shot” with simulated firearm 100 by actuating trigger 112, which causes camera 124 to capture an image. The processor 132 determines whether a portion of the captured image matches a target marker 136 for which corresponding marker data is stored in memory 128. The processor 132 may then score the shot based on a distance between a center of the image and the position of the image portion matching the target marker 136.
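Purely by way of illustration, and not as a description of any particular claimed embodiment, this capture-match-score sequence could be sketched in Python roughly as follows. The camera, memory, and find_marker objects are hypothetical placeholders introduced only for this sketch.

```python
import math

def execute_shot_sequence(camera, memory, find_marker, threshold_fraction=0.1):
    """Hypothetical sketch of the shot sequence: capture an image, look for a
    stored target marker, and score the shot by its offset from image center.
    camera, memory, and find_marker are placeholder objects, not real APIs."""
    image = camera.capture()                        # captured on trigger actuation
    found = find_marker(image, memory.marker_data)  # (marker_x, marker_y) or None
    if found is None:
        return {"hit": False}
    center_x, center_y = image.width / 2.0, image.height / 2.0
    miss_px = math.hypot(found[0] - center_x, found[1] - center_y)
    # Threshold expressed as a fraction of the smaller image dimension
    threshold_px = threshold_fraction * min(image.width, image.height)
    return {"hit": miss_px <= threshold_px, "miss_px": miss_px}
```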


As shown, camera 124 is coupled to body 104 and oriented to align with shooting axis 120. This allows camera 124 to capture images centered on the shooting axis 120, which may simulate a flight path of ammunition from a genuine firearm. Camera 124 can be any optical imaging device that can capture a target marker 136 located at a shooting distance from a firing end 140 of simulated firearm 100. For example, camera 124 may be able to capture a target marker 136 located within a range of distances which includes at least a portion of the range 10 cm to 1.5 km. For example, camera 124 may include a CCD-sensor, a CMOS-sensor, or another optical sensor that can detect optical radiation in any wavelengths that can emanate (e.g. emit or reflect) from a target marker 136. In some examples, camera 124 can detect optical radiation having wavelength(s) which reside in one or more (or all) of the visible spectrum (e.g. 390 nm to 700 nm), the infrared spectrum (e.g. 700 nm to 1000 nm), and the ultraviolet spectrum (e.g. 10 nm to 390 nm).


Trigger 112 is manually (i.e. by hand) user operable to cause firearm 100 to execute a shot sequence. In some cases, the shot sequence includes camera 124 capturing an image. In other cases (e.g. where the simulated ammunition count is zero), the shot sequence may not include camera 124 capturing an image (e.g. nothing may happen, or the user may be audibly or visually prompted to reload). Trigger 112 can be any device that allows the user to direct firearm 100 to fire a shot (e.g. to direct processor 132 to execute a shot sequence). In the illustrated example, trigger 112 is formed as a finger-trigger of the kind commonly found in firearms. For example, a user may actuate trigger 112 by grasping simulated firearm 100 by handle 108 and pulling trigger 112 towards handle 108 (e.g. away from firearm firing end 140) using one or more fingers. In other embodiments, trigger 112 may be formed by another user-input control, such as a depressible button, a lever, a switch, or a touch sensitive surface (e.g. based on resistive or capacitive technology).


Memory 128 provides electronic storage for marker data corresponding to one or more target markers 136. Target markers 136 can be any visible indicia that can be applied to, embedded into, or incorporated into a real-world (i.e. physical and tangible) shooting target (e.g. a can, sign, building, person, or vehicle), or that comprise the appearance of such a target. The physical embodiment of the target marker 136 emits or reflects optical radiation for detection by a camera, such as camera 124. In some examples, a target marker 136 can be a symbol, logo, or photograph that is painted, printed, molded, or sculpted into or onto a shooting target, or onto a substrate (e.g. adhesive sticker) which is applied to the shooting target. In other examples, a target marker 136 includes the appearance of a shooting target (e.g. a person's face). It will be appreciated that a shooting target can bear multiple of the same and/or different target markers 136, and that the same target marker 136 can be borne by multiple different shooting targets.


Marker data in memory 128 can be any information that identifies a target marker 136 for the purpose of matching a portion of an image to that target marker. In some examples, the marker data corresponding to a target marker 136 can include an image of the target marker 136, or an encoded or decoded representation of the target marker 136. For example, for a target marker 136 that is a 2D barcode, the marker data may include an image of the 2D barcode or the barcode information (e.g. ASCII text) encoded in the 2D barcode. In the latter case, the processor 132 may determine a match by encoding the barcode information into a 2D barcode prior to the comparison with a captured image, or the processor 132 may decode barcode information from a 2D barcode located in the captured image for comparison with the barcode information stored in memory 128.
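As a non-limiting illustration, and assuming the target marker 136 is a QR-style 2D barcode and that an OpenCV build with QRCodeDetector is available (an assumption made only for this sketch), the decode-and-compare approach described above might look like this:

```python
import cv2  # OpenCV is assumed here; any 2D barcode decoder could be substituted

def barcode_matches(captured_image, stored_barcode_text):
    """Decode a 2D barcode found in the captured image and compare its payload
    with the barcode information stored in memory. Returns (matched, corners)."""
    detector = cv2.QRCodeDetector()
    decoded_text, corners, _ = detector.detectAndDecode(captured_image)
    if not decoded_text:
        return False, None  # no barcode located in the captured image
    # corners holds the barcode's corner coordinates, usable later for scoring
    return decoded_text == stored_barcode_text, corners
```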


Processor 132 is configured to determine whether a portion of an image captured by camera 124 matches a target marker 136 for which corresponding marker data is stored in memory 128. For this purpose, processor 132 is communicatively coupled to camera 124 and memory 128. Processor 132 may also be communicatively coupled to other components of simulated firearm 100 for directing the operation of simulated firearm 100. As used herein and in the claims, a “processor” is not limited to a single processing chip. Simulated firearm 100 may include a plurality of logic components within and between various components of simulated firearm 100 (e.g. camera 124, memory 128, trigger 112, etc.), and collectively these logic components may be included in the meaning of processor 132.


Processor 132 can determine whether a portion of a captured image matches a target marker 136 according to any known image recognition process or algorithm. It will be appreciated that such processes or algorithms may be able to determine a positive match where the captured image includes only a portion of a target marker (e.g. a partially obstructed target marker), or includes a distorted target marker (e.g. target marker is at an angle to shooting axis 120). Accordingly, determining a match may not be limited to identifying a perfect 1-to-1 representation of the target marker 136 in the captured image. For example, a target marker 136 may comprise the appearance of a shooting target (e.g. a drone) and processor 132 may determine a match where a captured image includes an image of that shooting target from any angle (e.g. front, back, top, bottom, left, or right side of the drone). The marker data in memory 128 may include a plurality of images of the shooting target taken from different angles (e.g. front, back, top, bottom, left, or right side of the drone) for use by processor 132 in determining a match with a captured image.
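One conventional way to perform such appearance-based matching is local-feature matching. The following sketch uses ORB features from OpenCV purely as an example and is not a statement of the particular image recognition process used by processor 132; the reference views and the match cutoff are illustrative assumptions.

```python
import cv2

def appearance_matches(captured_gray, reference_views, min_good_matches=20):
    """Compare a captured grayscale image against several stored reference views
    of a target (e.g. front, back, side) using ORB features; report a match if
    any view yields enough good feature correspondences. Illustrative only."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_c, des_c = orb.detectAndCompute(captured_gray, None)
    if des_c is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    for view in reference_views:
        kp_v, des_v = orb.detectAndCompute(view, None)
        if des_v is None:
            continue
        matches = matcher.match(des_c, des_v)
        good = [m for m in matches if m.distance < 40]  # arbitrary example cutoff
        if len(good) >= min_good_matches:
            return True
    return False
```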


In some embodiments, processor 132 scores a shot based at least in part on the captured image. For example, processor 132 may score a shot with a binary result (e.g. hit or miss, success or failure). Alternatively or in addition, processor 132 may score a shot quantitatively (e.g. on a scale of 1 to 100 based on accuracy). Alternatively or in addition, a shot may be scored qualitatively based on other characteristics of the shot taken (e.g. shot low or high, shot left or right, shot fast or slow, etc.).



FIG. 2 shows an example of simulated firearm 100 taking a shot (e.g. from manual activation of trigger 112) at a target 146 bearing a target marker 136, whereby camera 124 captures an image. In this example, the field of view 144 of camera 124 includes at least a portion of target 146 (in this example in the form of a signpost) which bears target marker 136. FIG. 3 shows an example of the image 152a captured by camera 124 in FIG. 2. In this example, processor 132 matches image portion 156a with marker data corresponding to target marker 136 stored in memory 128 of simulated firearm 100.


Still referring to FIG. 3, in some embodiments, processor 132 scores a shot as a success if the captured image 152a includes an image portion 156a located anywhere within the captured image 152a that matches a target marker 136 having corresponding marker data stored in memory 128. In this case, processor 132 would score the captured image 152a as a successful shot (e.g. a “hit”).


In some embodiments, processor 132 scores a shot as a success if a center 164 of the captured image 152a overlaps with the image portion 156a or the target marker 136 therein. In this case, because center 164 does not overlap image portion 156a in FIG. 3, processor 132 would score captured image 152a as a failure (e.g. a “miss”).


In some embodiments, processor 132 scores a shot (e.g. as a success or failure, or quantitatively) based on a distance between image portion 156a or the target marker 136 therein and a center of the captured image 152a. For example, processor 132 may score a shot as a success if a distance 160a between a center of the image portion 156a, or the target marker 136 therein, and a center 164 of image 152a is less than a threshold distance. Alternatively, processor 132 may score a shot as a success if a closest distance 168a between the image portion 156a or the target marker 136 therein and a center 164 of image 152a is less than a threshold distance.


Processor 132 may express distances, such as the threshold distances mentioned above, as a function of any one or more of: the image dimension(s) (e.g. vertical or horizontal dimensions of image 152a), the image portion dimension(s) (e.g. vertical or horizontal dimension of image portion 156a), or the target marker dimension(s) (e.g. vertical or horizontal dimension of target marker 136 within image portion 156a).
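For illustration, the two distance measures described above (center-to-center distance 160a and closest-edge distance 168a), together with a threshold expressed as a fraction of the marker's dimension, could be computed along the following lines; the bounding-box representation of the matched image portion is an assumption made only for this sketch.

```python
import math

def shot_distances(image_w, image_h, marker_bbox, threshold_fraction=0.5):
    """Compute center-to-center and closest-point distances from the image
    center to a matched marker's bounding box (x, y, w, h), plus a threshold
    expressed as a fraction of the marker's horizontal dimension."""
    cx, cy = image_w / 2.0, image_h / 2.0                              # image center (164)
    x, y, w, h = marker_bbox
    center_dist = math.hypot((x + w / 2.0) - cx, (y + h / 2.0) - cy)   # like 160a
    nearest_x = min(max(cx, x), x + w)                                 # closest point on the box
    nearest_y = min(max(cy, y), y + h)
    closest_dist = math.hypot(nearest_x - cx, nearest_y - cy)          # like 168a
    threshold = threshold_fraction * w                                 # e.g. half the marker width
    return center_dist, closest_dist, threshold
```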


Referring to FIGS. 2 and 3, in some embodiments, processor 132 may express distances as virtual distances (e.g. pixels) or real distances (e.g. millimeters or inches). For example, processor 132 may estimate real distances between points on a captured image 152a based on the camera field of view 144 (FIG. 2) and a camera subject distance 172 (FIG. 2). In some embodiments, subject distance 172 may be estimated based on camera focus information associated with the captured image 152a. For example, camera 124 may include autofocus functionality that makes fine adjustments to camera components (e.g. one or more lenses) in order to focus the camera prior to capturing an image. The subject distance 172 can be estimated based on the camera focus information using known techniques. Alternatively or in addition, simulated firearm 100 may include a range finder 176 that senses the subject distance 172, and processor 132 may receive subject distance 172 from range finder 176. As shown in FIG. 1, the range finder 176 may be coupled to the body 104 and oriented to take distance measurements parallel to the shooting axis 120. Range finder 176 may be any device that can sense distance at least within a range of normal shooting distances. For example, range finder 176 may sense distances within a range that includes at least a portion of the range 10 cm to 1.5 km.
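As an illustrative calculation only, and assuming a simple pinhole-camera model, a pixel offset in the captured image can be converted into an approximate real distance at the target plane from the horizontal field of view 144 and the subject distance 172:

```python
import math

def pixels_to_metres(pixel_offset, image_width_px, fov_horizontal_deg, subject_distance_m):
    """Approximate real-world distance at the target plane corresponding to a
    pixel offset, under a pinhole-camera assumption."""
    scene_width_m = 2.0 * subject_distance_m * math.tan(math.radians(fov_horizontal_deg) / 2.0)
    return pixel_offset * scene_width_m / image_width_px

# Example: a 100-pixel miss in a 1920-pixel-wide image with a 60 degree field of
# view and a 10 m subject distance corresponds to roughly 0.6 m at the target.
```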



FIG. 4 shows another example of a captured image 152b. In some embodiments, processor 132 may score a shot based on contextual information associated with the captured image 152b, such as subject distance 172 (FIG. 2), gravity, and/or weather data. In the example of FIG. 4, processor 132 scores the shot based at least in part on gravity, which is assessed using subject distance 172 (FIG. 2). Subject distance 172 (FIG. 2) may be determined by any of the methods discussed above. Processor 132 may determine the distance 180 by which ammunition from the simulated firearm would fall, based on an estimated travel time for that ammunition to reach a target located at subject distance 172 (FIG. 2). Memory 128 may store ammunition information including one or both of ammunition velocity and aerodynamic characteristics such as air drag, which the processor 132 may use to estimate the travel time of the ammunition. Accordingly, processor 132 may determine gravity distance 180 based on subject distance 172 (FIG. 2) and the ammunition information in memory 128.
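As a simplified worked example only (neglecting air drag, which the ammunition information in memory 128 may also account for), the gravity drop over the subject distance follows from the travel time:

```python
GRAVITY_MPS2 = 9.81

def gravity_drop_m(subject_distance_m, muzzle_velocity_mps):
    """Drop of a simulated projectile over the subject distance, neglecting
    drag: travel time t = distance / velocity, drop = 0.5 * g * t**2."""
    travel_time_s = subject_distance_m / muzzle_velocity_mps
    return 0.5 * GRAVITY_MPS2 * travel_time_s ** 2

# Example: a 300 m subject distance at 900 m/s gives t = 0.33 s and a drop of
# roughly 0.55 m, which would be converted to pixels to obtain distance 180.
```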


Continuing the example of FIG. 4, processor 132 may score the shot based on the relative positions of image portion 156b or target marker 136 therein, and a context-adjusted image center 184b. In this example, context-adjusted image center 184b is image center 164 offset vertically by gravity distance 180 to account for gravity. Using context-adjusted image center 184b, the shot may be scored in any manner described above, such as by determining whether a distance 160b is less than a threshold distance.


Gravity acts in a vertical direction. However, in some cases it may be possible to fire simulated firearm 100 while firearm 100 is tilted (e.g. rotated about a horizontal axis). In this circumstance, camera 124 may capture an image having a “Y” axis that does not align with gravity. In this case, it would be inaccurate to offset image center 164 along the Y-axis of the captured image 152b to account for gravity in determining context-adjusted image center. In some embodiments, simulated firearm 100 includes an orientation sensor 188 which is communicatively coupled to processor 132. Orientation sensor 188 is any device that can sense the rotational orientation of simulated firearm 100 about at least one axis. For example, orientation sensor 188 may include one or more (or all of) a magnetometer (e.g. for sensing compass direction), an accelerometer (e.g. for sensing the direction of gravity), and a gyroscopic sensor (e.g. for sensing the direction of gravity). Processor 132 may receive orientation data from orientation sensor 188 to determine the direction of gravity relative to captured image 152b. The processor 132 can then offset image center 164 by a distance 180 in the determined direction of gravity to determine context-adjusted image center 184.



FIG. 5 shows another example of a captured image 152c. In the example of FIG. 5, processor 132 scores the shot based at least in part on contextual information which includes gravity (as discussed above) and weather data. The weather data may include, for example, wind speed and direction. In some genuine firearms with certain types of ammunition, a strong crosswind may cause the ammunition to deviate laterally (e.g. horizontally left or right) from the shooting axis. Processor 132 may determine the magnitude and direction of this wind-induced offset based on, for example, the weather data (e.g. wind speed and direction), ammunition data (e.g. ammunition velocity and aerodynamic characteristics), and subject distance 172 (FIG. 2). In the illustrated example, context-adjusted image center 184c is image center 164 offset by horizontal distance 192c due to weather (e.g. forces from a rightward crosswind), and by vertical distance 180c due to the force of gravity.
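Purely for illustration, and using a crude linear drift model that is only one of many possibilities, a context-adjusted image center could be obtained by offsetting the image center downward for gravity and laterally for the crosswind, with both offsets converted to pixels:

```python
def context_adjusted_center(image_w, image_h, drop_m, crosswind_mps, travel_time_s,
                            metres_per_pixel):
    """Offset the geometric image center for gravity and a crosswind (simplified
    model, +x to the right and +y downward in image coordinates)."""
    cx, cy = image_w / 2.0, image_h / 2.0
    lateral_drift_m = crosswind_mps * travel_time_s   # crude crosswind drift estimate
    dx_px = lateral_drift_m / metres_per_pixel        # horizontal offset (cf. 192c)
    dy_px = drop_m / metres_per_pixel                 # vertical offset (cf. 180c)
    return cx + dx_px, cy + dy_px
```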


Referring to FIG. 1, in some embodiments, processor 132 generates weather data (e.g. wind speed and direction). For example, processor 132 may generate weather data randomly within prescribed limits or according to a weather generation algorithm. Alternatively or in addition, processor 132 may receive weather data based on user input received by user-input controls 204. For example, processor 132 may receive a wind speed and wind direction entered using user-input controls 204. Alternatively or in addition, processor 132 may receive weather data from a weather sensor 208. Weather sensor 208 may be coupled to the body 104, communicatively coupled to processor 132, and configured to detect weather conditions such as wind speed and direction, for example. Referring to FIG. 6, processor 132 may, alternatively or in addition, receive weather data from one or more devices external to and disconnected from the simulated firearm (“external devices”). For example, simulated firearm 100 may include a wireless communication device 210 communicatively coupled to processor 132 and operable to wirelessly receive weather data from one or more (or all) of a portable weather station 212, a server device 216, or another simulated firearm 100b.


In some embodiments, processor 132 may direct weather data to be communicated to the user audibly by speaker 196, visually by display 200, or both. This allows simulated firearm 100 to inform the user of the weather data, such as when processor 132 generates the weather data, so that the user can take corrective action in taking their shot.


Simulated firearm 100 may include a display 200 coupled to the body 104 and communicatively coupled to processor 132. Display 200 can be any device capable of presenting visual information to the user. For example, display 200 may include a light (e.g. LED, incandescent, halogen, etc.), an LCD screen, an LED screen, an e-ink display, an OLED display, or combinations thereof. In some embodiments, processor 132 may direct display 200 to visually communicate a successful or failed shot indication, a successful shot counter, or a quantitative score (e.g. 1 to 100 or A to F) based on shot scoring performed by processor 132. Alternatively or in addition, processor 132 may direct display 200 to visually communicate firearm configuration information such as stored target marker(s) 136, ammunition information, and weather information. Alternatively or in addition, processor 132 may direct display 200 to visually communicate status information such as a count of remaining ammunition (e.g. bullets remaining).


In some embodiments, processor 132 may direct display 200 to visually communicate the target marker 136 matched in a portion of a captured image. For example, processor 132 may direct display 200 to visually communicate the target marker 136 for which the processor scored a successful hit. This can inform the user of the target marker 136 they successfully hit in a previous shot.


In some embodiments, processor 132 may direct display 200 to visually communicate the target marker(s) 136 not yet successfully hit from among the target marker(s) stored in memory 128 or from among the target marker(s) 136 belonging to an ongoing shooting program (e.g. training exercise or game). A shooting program may include a plurality of target markers 136 from among the plurality of target markers 136 stored in memory 128, and the shooting program may be completed when a successful hit (or a threshold quantitative hit score) is obtained for each of the target markers 136. A shooting program may be generated by processor 132, user selected with user-input controls 204 from among shooting programs stored in memory 128, user generated with user-input controls 204, or combinations thereof.


In some embodiments, a shooting program may include a plurality of target markers 136 ordered according to a sequence. In this example, processor 132 may direct display 200 to visually communicate the target marker(s) 136 next in sequence, and once these target marker(s) 136 are hit processor 132 may direct display 200 to visually communicate the subsequent target marker(s) 136 in sequence, and so on.


In some embodiments, processor 132 may direct display 200 to visually communicate the target marker(s) 136 that are not to be shot (e.g. “friendlies”). Processor 132 may penalize the user in response to scoring a successful hit of a target marker 136 that is not to be hit. For example, the penalty may include processor 132 terminating a shooting program, deducting from the user's score, audibly or visually cautioning the user, or combinations thereof.


In some embodiments, processor 132 may determine shot commentary (e.g. corrective advice or shot error), and direct the shot commentary to be communicated visually with display 200 and/or audibly with speaker 196. For example, in FIG. 5, the processor 132 may determine the direction of the distance 190 between context-adjusted image center 184c and image portion 156c or target marker 136 within the image portion 156c, and then communicate this direction to the user audibly or visually (e.g. “SHOT HIGH AND LEFT”) or communicate corrective advice to the user (e.g. “SHOOT LOWER AND MORE RIGHT”).


Reference is now made to FIG. 7, which illustrates an example of firearm 100 directed at two target markers 136d and 136e of equal size at subject distances 172d and 172e, respectively. As shown, subject distance 172e is greater than subject distance 172d. Given the same field of view 144 of camera 124 (e.g. constant optical zoom), the first target marker 136d would be captured by camera 124 with an image boundary 220d, and the second target marker 136e would be captured by camera 124 with an image boundary 220e.



FIG. 8 illustrates an example image 152d containing first target marker 136d and having image boundary 220d. FIG. 9 illustrates an example image 152e containing second target marker 136e and having image boundary 220e. As shown, first target marker 136d occupies a greater proportion of image 152d (approximately 11%), than does second target marker 136e of image 152e (approximately 3%). This can mean that first target marker 136d is captured with more detail than second target marker 136e. For example, if both images 152d and 152e were captured at the same image resolution (i.e. the same pixel count and dimensions), then target marker 136d would be defined by a greater number of pixels than 136e (about 3.6 times more pixels). A target marker 136 having greater detail (e.g. more pixels) may be more accurately and/or more quickly matched by the processor 132 to marker data corresponding to that target marker 136 in memory 128.


In some embodiments, camera 124 includes a zoom lens 224 that is operable to provide optical zoom to camera 124. Zoom lens 224 may include a plurality of zoom positions corresponding to different levels of zoom. By increasing the zoom position of zoom lens 224, camera 124 is provided with a smaller field of view, whereby a target marker 136 within the field of view can occupy a greater portion of the captured image 152 than if the field of view was larger. This can result in a target marker 136 with greater detail in the captured image 152 for more accurate and/or faster matching by processor 132 to marker data corresponding to that target marker 136 in memory 128. FIG. 9 shows an exemplary zoomed image boundary 228e2 of an image 152e2 taken at a higher zoom level. As shown, due to the reduced field of view, the target marker 136e occupies a greater portion of the image 152e2, whereby the target marker 136e may have greater detail.


The zoom position of zoom lens 224 may be manually adjustable (e.g. by user-input controls 204). Alternatively or in addition, processor 132 may direct camera 124 to change the zoom position of zoom lens 224 based on the determined subject distance 172. For example, processor 132 may determine subject distance 172 by reference to camera focus information, or by another method such as the methods described above. In some embodiments, processor 132 may direct camera 124 to increase the zoom position of zoom lens 224 to compensate for a long subject distance 172, and direct camera 124 to decrease the zoom position of zoom lens 224 to compensate for a short subject distance 172. This can provide camera 124 with a reduced field of view for distant shots where a target marker 136 is expected to occupy a smaller portion of the captured image 152, and can provide camera 124 with a greater field of view for close shots where target marker 136 is expected to occupy a greater portion of the captured image 152 (e.g. to ensure the target marker 136 can fit within the captured image 152).
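As one hypothetical policy (the zoom table and desired frame fraction below are assumptions, not disclosed values), the processor could select the lowest zoom position at which a marker of known physical size is expected to fill a useful fraction of the frame:

```python
import math

ZOOM_FOV_DEG = {0: 60.0, 1: 40.0, 2: 25.0, 3: 15.0}  # hypothetical zoom table

def choose_zoom_position(subject_distance_m, marker_width_m, desired_fraction=0.15):
    """Pick the lowest zoom position whose field of view makes the marker span
    at least the desired fraction of the frame width (example policy only)."""
    for position in sorted(ZOOM_FOV_DEG):
        fov = ZOOM_FOV_DEG[position]
        scene_width_m = 2.0 * subject_distance_m * math.tan(math.radians(fov) / 2.0)
        if marker_width_m / scene_width_m >= desired_fraction:
            return position
    return max(ZOOM_FOV_DEG)  # even maximum zoom may fall short of the target fraction
```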


In some cases, the image recognition process employed by processor 132 is slower or requires greater processing power to match a target marker 136 with an image having very high resolution (e.g. high pixel count). For example, at a certain image resolution, the speed and accuracy benefits of providing greater detail to target marker 136 may be outweighed or reduced by the burden of processing a larger image. The characteristics of some target markers 136 may be such that recognition accuracy improves little, if at all, when the target marker 136 is provided with detail above a threshold level of detail (e.g. a threshold number of pixels). Alternatively or in addition to zoom lens 224 (and to manual or automatic zoom positioning), processor 132 may change the image capture resolution of camera 124. For example, based on the camera subject distance 172 and field of view, processor 132 may reduce (or increase) the image quality (e.g. pixel density) setting of camera 124 prior to image capture, or processor 132 may crop the field of view of camera 124 prior to image capture (e.g. increase “digital zoom”) to optimize the detail of a captured image 152 and the target marker 136 within. This can reduce the time required for camera 124 to capture the image. In some cases, this can also reduce the time or processing power required for processor 132 to match a captured target marker 136 without significantly affecting matching accuracy. For example, processor 132 may determine an expected detail (e.g. size, such as pixel dimensions) of a target marker 136 at a given subject distance 172 and field of view, and then change (e.g. reduce or increase) the image capture resolution of camera 124 to optimize (e.g. reduce or increase) the detail of the captured image 152 and the target marker 136 within (e.g. optimal for matching speed, matching accuracy, or both).
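In the same illustrative spirit, and again assuming a pinhole model and a set of selectable capture widths (assumptions made only for this sketch), the capture resolution could be chosen from the expected pixel size of the marker:

```python
import math

def expected_marker_px(marker_width_m, subject_distance_m, fov_deg, image_width_px):
    """Expected width of the marker in pixels at a given subject distance,
    field of view, and capture width (pinhole approximation)."""
    scene_width_m = 2.0 * subject_distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return image_width_px * marker_width_m / scene_width_m

def choose_capture_width(marker_width_m, subject_distance_m, fov_deg,
                         target_px=120, available_widths=(640, 1280, 1920, 3840)):
    """Pick the smallest capture width that should give the marker at least
    target_px pixels of detail (illustrative policy only)."""
    for width in available_widths:
        if expected_marker_px(marker_width_m, subject_distance_m, fov_deg, width) >= target_px:
            return width
    return available_widths[-1]
```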


In some embodiments, the optimal detail of a target marker 136 is predetermined and included in marker data corresponding to that target marker 136 stored in memory 128. The predetermined optimal detail may be parametrically defined based on contextual shot information, such as lighting, subject distance, and camera movement for example. For example, the optimal detail of a target marker 136 may be greater in low lighting conditions than in bright lighting conditions; greater at farther distances than at near distances (e.g. due to air clarity and focus accuracy); and greater when camera 124 is moving than when camera 124 is still (e.g. due to motion blur).


Referring to FIG. 1, in some embodiments marker data corresponding to additional target markers 136 can be selectively added to memory 128 (e.g. to be matched by processor 132 against subsequently captured images). For example, a user can direct processor 132 to initiate a new target marker setup process using user-input controls 204. The user may then direct camera 124 towards a target marker 136 to be added and actuate trigger 112 to cause camera 124 to capture a setup image including the new target marker 136. Processor 132 may store marker data corresponding to the new target marker 136 in memory 128.


In some embodiments, marker data corresponding to an additional target marker 136 may be downloaded to memory 128. For example, memory 128 may include removable memory (e.g. a flash memory card) for downloading marker data onto memory 128 (e.g. using an external computing device) and then reconnecting memory 128 to simulated firearm 100. Alternatively or in addition, memory 128 may store marker data received from an external computing device communicatively coupled to memory 128. For example, referring to FIG. 6, memory 128 of simulated firearm 100 may store marker data received from any one or more of another simulated firearm 100b, a shooting target 146, or a server device 216, any of which may be communicatively coupled to memory 128 by wire (e.g. USB cable) or wirelessly, indirectly across network 240 or directly.


In some embodiments, marker data corresponding to target markers 136 can be selectively removed from memory 128 (e.g. to be excluded from matching against subsequently captured images). For example, a user may select marker data in memory 128 using user-input controls 204, and thereby direct processor 132 to remove that marker data from memory 128. Alternatively or in addition, a user can direct processor 132 to initiate a target marker removal process using user-input controls 204. The user may then direct camera 124 towards a target marker 136 to be removed and actuate trigger 112 to cause camera 124 to capture a setup image including that target marker 136. The processor 132 may attempt to match the captured target marker 136 to marker data stored in memory 128, and, if a match is found, delete that marker data from memory 128.
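The add and remove flows described in the two preceding paragraphs could be sketched as simple bookkeeping over a marker-data dictionary; the extract_marker_data and match_marker helpers below are hypothetical placeholders rather than disclosed functions.

```python
def add_marker(marker_store, marker_id, setup_image, extract_marker_data):
    """Derive marker data from a setup image and store it under marker_id."""
    marker_store[marker_id] = extract_marker_data(setup_image)

def remove_marker(marker_store, setup_image, match_marker):
    """If the setup image matches stored marker data, delete that entry and
    return its identifier; otherwise return None."""
    marker_id = match_marker(setup_image, marker_store)
    if marker_id is not None:
        del marker_store[marker_id]
    return marker_id
```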


Wireless communication device 210 can be any device that allows data signals to be sent or received wirelessly between simulated firearm 100 and a device external to and physically disconnected from simulated firearm 100 (an “external device”). For example, wireless communication device 210 may be a device that wirelessly communicates according to a wireless communication technology, such as infrared (e.g. IrDA), Wi-Fi (e.g. IEEE 802.11x), cellular (e.g. 2G, 3G, or LTE), or Bluetooth.



FIG. 6 shows a simulated firearm system 232. As shown, simulated firearm system 232 may include a simulated firearm 100 that is wirelessly communicatively coupled (e.g. through wireless communication device 210) to one or more external devices, which may be other simulated firearms 100b, a server device 216, or a target 146, for example. Further, simulated firearm 100 may communicate directly with one or more external devices, and/or simulated firearm 100 may communicate with one or more external devices across a network 240 (e.g. the internet and/or a LAN). Simulated firearm 100 may be configured to send and/or receive shot data to/from one or more of the external devices in system 232. The shot data may identify the target marker 136 matched to a portion of an image captured by the simulated firearm 100.


In some cases, processor 132 only directs wireless communication device 210 to transmit shot data where the shot was scored successfully or where the shot satisfies another criterion (e.g. a quantitative shot score above a threshold, or where a designated target marker 136 is matched in the captured image). For example, in a group shooting program (e.g. team and/or competitive game or training exercise) including a plurality of designated target markers 136 to be shot, simulated firearm 100 may identify a successfully shot target marker 136 to the other simulated firearm(s) 100b in system 232. This can allow the other simulated firearms 100b to remove the identified target marker 136 from a list of target markers 136 remaining in the shooting program for example.


Alternatively or in addition, simulated firearm 100 may transmit shot data to a target 146. In response to receiving the shot data, the target 146 may produce a visually or audibly perceptible reaction. The target 146 may be any object that embodies or carries a target marker 136, such as a signpost or a vehicle (e.g. land vehicle or aerial drone). For example, the target 146 may determine that the shot data identifies a target marker 136 carried by that target 146, and in response activate, modulate (e.g. change color, intensity, or pattern), or terminate a light emission, or activate, modulate (e.g. move faster, slower, or in a different pattern), or terminate a movement. In some embodiments, a target 146 is a remotely user-controlled device (e.g. a remote-controlled car or aerial drone).


Still referring to FIG. 1, simulated firearm 100 can include a speaker 196, which may be any device that can audibly communicate information to the user. In some embodiments, processor 132 directs speaker 196 to communicate information such as the next target marker(s) 136 to shoot, an assessment of a previous shot (e.g. success, failure, a quantitative score, a qualitative description, or corrective advice), and/or a time remaining in a current shooting program.


In some embodiments, simulated firearm 100 includes a light source 244. Light source 244 can be any device that can emit optical radiation (e.g. in the infrared, visible, and/or ultraviolet light spectrums). The optical radiation (“light”) from light source 244 may be substantially collimated (e.g. as in a laser) or uncollimated (e.g. divergent or convergent). As shown, light source 244 is coupled to body 104 and oriented to emit light in parallel with shooting axis 120. This can allow light source 244 to illuminate target markers 136, which can improve the clarity of target markers 136 in captured images. Alternatively or in addition, light source 244 can visually communicate information to the user. For example, processor 132 may direct light source 244 to emit flashing light or to change the color of the emitted light in response to matching stored marker data to a portion of a captured image, and/or in response to scoring of a shot (e.g. green for successful and red for unsuccessful). This can allow the user to receive visual shot feedback without having to look at, for example, display 200 of simulated firearm 100. Alternatively or in addition, light source 244 may include an aiming aid, such as a laser pointer for marking the intersection of the shooting axis 120 on the external environment.


In some embodiments, a target marker 136 may emit or reflect optical radiation outside of the visible spectrum, such as infrared or ultraviolet radiation. In this case, camera 124 may be configured to detect at least part of the non-visible radiation spectrum that the target marker 136 emits or reflects. This can allow usage of target markers 136 that are optically detectable by camera 124 for matching by processor 132, but which are not visible by the user. This can allow shooting programs (e.g. games or target practice) to better simulate real-world situations in which targets are not normally labelled with target markers 136. Where target markers 136 are reflective to non-visible radiation, simulated firearm 100 may include a light source 244 that emits non-visible optical radiation for the target marker 136 to reflect.


Simulated firearm 100 can include any user-input controls 204 that allow for manual (i.e. by hand) user selections receivable by processor 132. For example, user-input controls 204 may include one or more (or all) of buttons, switches, dials, slides, knobs, touch sensitive surfaces (e.g. capacitive or resistive based), wheels, or levers.


Simulated firearm 100 may include a battery 248 coupled to body 104. Battery 248 may be any one or more energy storage devices suitable for providing electrical power to one or more (or all) energy consuming components of simulated firearm 100, such as one or more (or all) of processor 132, camera 124, memory 128, range finder 176, orientation sensor 188, speaker 196, display 200, user-input controls 204, and light source 244. For example, battery 248 may include an alkaline, Ni-CAD, NiMH, or Li-ion battery that may be rechargeable or single-use disposable.


Memory 128 can be any one or more devices that can store electronic information, such as marker data corresponding to target markers 136. Memory 128 may include one or both of volatile memory (e.g. RAM), and non-volatile memory (e.g. ROM, flash memory, hard disk drive, solid state drive, diskette). In some cases, all of or a portion of memory 128 may be removable.



FIG. 10 is a schematic illustration of a simulated firearm 100 mounted to a remotely controlled, movable stand 252. Movable stand 252 may allow simulated firearm 100 to be moved around or along one or more axes. In the illustrated example, movable stand 252 includes a base 256, a first arm 260 mounted to base 256 and rotatable relative to base 256 about first axis 264, and a second arm 268 mounted to first arm 260 and rotatable relative to first arm 260 about second axis 272. As shown, movable stand 252 may further include a first actuator 276 (e.g. motor) operable to rotate first arm 260 relative to base 256 about first axis 264, and a second actuator 280 (e.g. motor) operable to rotate second arm 268 relative to first arm 260 about second axis 272. In some embodiments, movable stand 252 may be remotely controlled by user-input controls 282 communicatively coupled to movable stand 252. In some embodiments, processor 132 is responsive to user-input controls 282 to initiate a shot sequence (e.g. just as if trigger 112 was user actuated).



FIG. 11 is a schematic illustration of a simulated firearm 1100. Part numbers from the previous figures have been incremented by 1000 to refer to analogous parts from the previous figures. As shown, simulated firearm 1100 includes a body 1104 shaped to simulate an automatic rifle. In the illustrated example, barrel portion 1116 defines a shooting axis 1120. Camera 1124 is coupled to body 1104 proximate firing end 1140 and aligned with shooting axis 1120 to capture images centered on shooting axis 1120. Light source 1244 is coupled to body 1104. As shown, light source 1244 may be exterior to and connected to body 1104. Simulated firearm 1100 further includes a processor 1132 and a trigger 1112. Simulated firearm 1100 may provide any of the functionality described above with respect to firearm 100.


While the above description provides examples of the embodiments, it will be appreciated that some features and/or functions of the described embodiments are susceptible to modification without departing from the spirit and principles of operation of the described embodiments. Accordingly, what has been described above has been intended to be illustrative of the invention and non-limiting and it will be understood by persons skilled in the art that other variants and modifications may be made without departing from the scope of the invention as defined in the claims appended hereto. The scope of the claims should not be limited by the preferred embodiments and examples, but should be given the broadest interpretation consistent with the description as a whole.


Items

Item 1: A simulated firearm comprising:

    • a body having a handle, a trigger, and a barrel portion, the barrel portion defining a shooting axis;
    • a camera coupled to the body and aligned with the shooting axis, wherein the camera is configured to capture an image in response to actuation of the trigger;
    • a memory coupled to the body, the memory storing marker data corresponding to at least one target marker; and
    • a processor coupled to the body, the processor configured to determine whether a portion of the image captured by the camera matches any one or more of the at least one target marker.


      Item 2: The simulated firearm of item 1, wherein the processor is configured to:
    • score a shot taken by actuating the trigger based at least in part on a position of the portion of the image within the image.


      Item 3: The simulated firearm of item 2, wherein the processor is configured to:
    • one or more of receive and determine contextual information relating to the image; and
    • determine a successful shot based at least in part on the position of the portion of the image within the image, and the contextual information.


      Item 4: The simulated firearm of item 3, wherein:
    • the contextual information comprises a camera subject distance associated with the image.


      Item 5: The simulated firearm of item 4, wherein the processor is configured to:
    • determine the camera subject distance based at least in part on camera focus information associated with the image.


      Item 6: The simulated firearm of item 4, further comprising:
    • a range finder coupled to the body and directed parallel to the shooting axis.


      Item 7: The simulated firearm of item 6, wherein the processor is configured to:
    • receive the camera subject distance from the range finder.


      Item 8: The simulated firearm of item 3, further comprising:
    • an orientation sensor coupled to the body and configured to measure an orientation of the body,
    • wherein the contextual information comprises the orientation of the body.


      Item 9: The simulated firearm of item 3, wherein:
    • the contextual information comprises weather data.


      Item 10: The simulated firearm of item 9, wherein:
    • the memory stores weather data.


      Item 11: The simulated firearm of item 9, further comprising:
    • a weather sensor.


      Item 12: The simulated firearm of item 9, wherein:
    • the weather data comprises wind speed and direction.


      Item 13: The simulated firearm of item 1, wherein:
    • the camera comprises a zoom lens having a zoom position, and
    • the processor is configured to direct the camera to change the zoom position of the zoom lens based on a subject distance of the camera.


      Item 14: The simulated firearm of item 1, further comprising:
    • a wireless communication device coupled to the body,
    • wherein the processor is configured to transmit shot data identifying the target marker matched to the portion of the image, using the wireless communication device, to a device external to the simulated firearm.


      Item 15: The simulated firearm of item 14, wherein:
    • the external device is one of: a second simulated firearm, a server device, and a target.


      Item 16: The simulated firearm of item 14, wherein:
    • the camera detects at least infrared light,
    • the image includes at least infrared information, and
    • the processor is configured to determine whether the portion of the image captured by the camera includes infrared information that matches any one or more of the at least one target marker.


      Item 17: The simulated firearm of item 16, further comprising:
    • a light source coupled to the body and directed parallel to the shooting axis, wherein the light source emits at least infrared light.


      Item 18: The simulated firearm of item 1, wherein:
    • the camera is configured to capture a setup image comprising an additional target marker, and
    • the processor is configured to store marker data corresponding to the additional target marker in the memory.


      Item 19: The simulated firearm of item 1, further comprising:
    • a user interface device that is manually user operable to select at least one target marker having corresponding marker data stored in the memory, and
    • the processor is configured to delete the marker data corresponding to the selected at least one target marker from the memory.

Claims
  • 1. A simulated firearm comprising: a body having a handle, a trigger, and a barrel portion, the barrel portion defining a shooting axis; a camera coupled to the body and aligned with the shooting axis, wherein the camera is configured to capture an image in response to actuation of the trigger; a memory coupled to the body, the memory storing marker data corresponding to at least one target marker; and a processor coupled to the body, the processor configured to determine whether a portion of the image captured by the camera matches any one or more of the at least one target marker.
  • 2. The simulated firearm of claim 1, wherein the processor is configured to: score a shot taken by actuating the trigger based at least in part on a position of the portion of the image within the image.
  • 3. The simulated firearm of claim 2, wherein the processor is configured to: one or more of receive and determine contextual information relating to the image; and determine a successful shot based at least in part on the position of the portion of the image within the image, and the contextual information.
  • 4. The simulated firearm of claim 3, wherein: the contextual information comprises a camera subject distance associated with the image.
  • 5. The simulated firearm of claim 4, wherein the processor is configured to: determine the camera subject distance based at least in part on camera focus information associated with the image.
  • 6. The simulated firearm of claim 4, further comprising: a range finder coupled to the body and directed parallel to the shooting axis.
  • 7. The simulated firearm of claim 6, wherein the processor is configured to: receive the camera subject distance from the range finder.
  • 8. The simulated firearm of claim 3, further comprising: an orientation sensor coupled to the body and configured to measure an orientation of the body, wherein the contextual information comprises the orientation of the body.
  • 9. The simulated firearm of claim 3, wherein: the contextual information comprises weather data.
  • 10. The simulated firearm of claim 9, wherein: the memory stores weather data.
  • 11. The simulated firearm of claim 9, further comprising: a weather sensor.
  • 12. The simulated firearm of claim 9, wherein: the weather data comprises wind speed and direction.
  • 13. The simulated firearm of claim 1, wherein: the camera comprises a zoom lens having a zoom position, and the processor is configured to direct the camera to change the zoom position of the zoom lens based on a subject distance of the camera.
  • 14. The simulated firearm of claim 1, further comprising: a wireless communication device coupled to the body, wherein the processor is configured to transmit shot data identifying the target marker matched to the portion of the image, using the wireless communication device, to a device external to the simulated firearm.
  • 15. The simulated firearm of claim 14, wherein: the external device is one of: a second simulated firearm, a server device, and a target.
  • 16. The simulated firearm of claim 14, wherein: the camera detects at least infrared light, the image includes at least infrared information, and the processor is configured to determine whether the portion of the image captured by the camera includes infrared information that matches any one or more of the at least one target marker.
  • 17. The simulated firearm of claim 16, further comprising: a light source coupled to the body and directed parallel to the shooting axis, wherein the light source emits at least infrared light.
  • 18. The simulated firearm of claim 1, wherein: the camera is configured to capture a setup image comprising an additional target marker, and the processor is configured to store marker data corresponding to the additional target marker in the memory.
  • 19. The simulated firearm of claim 1, further comprising: a user interface device that is manually user operable to select at least one target marker having corresponding marker data stored in the memory, and the processor is configured to delete the marker data corresponding to the selected at least one target marker from the memory.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/332,276 filed on May 5, 2016, which is incorporated by reference herein in its entirety.

Provisional Applications (1)
Number Date Country
62332276 May 2016 US