Trail cameras have been used for decades to capture still images of wildlife. Early trail cameras were tree-mounted cameras that used trip wires or rudimentary technology to take a single 35 mm picture of a target area. Today, trail cameras reflect the progression of camera technology and digital imagery. Modern trail cameras offer the ability to capture full-color, high-resolution (8-10+ megapixel (Mps)) images and, in limited instances, short videos. For the most part, such imagery is stored on removable storage media (e.g., memory cards), which are viewed hours or days later when a user visits the trail camera, removes the storage medium, and views the captured images on a separate viewing device (e.g., a computer) or, alternatively, on an integrated viewing screen of the camera.
In very limited instances, modern trail cameras have been adapted to transmit captured and stored imagery wirelessly. For such wireless transmissions, storage-transmission schemes are used to accommodate the movement of high-resolution imagery through a conventional wide-area communication network. These schemes include degrading captured image quality to produce smaller or compressed file sizes, transmitting only single images, and/or transmitting short videos that are recorded, stored and transmitted at appointed times (i.e., batch transmissions). These schemes, while pragmatic, provide sub-standard image quality and/or time-shifted (i.e., non-real-time) information to remotely located users. These trail cameras and their image handling schemes prevent their use for real-time monitoring of target sites and preclude immediate action based on transmitted image data from such target sites.
To manage power consumption, trail cameras commonly “sleep” between image capture events. It is common practice to stay in such a sleep mode until activity within a field-of-view (FOV) awakens the trail camera. Accordingly, trail cameras include motion detectors capable of detecting animals within such field-of-view (i.e., a motion field-of-view, MFOV). For modern trail cameras, the MFOV tends to be broader than an imaging FOV (IFOV) associated with the camera's image sensor. The MFOV is dimensionally either (a) short (i.e., near range, 20-40 ft.) and wide (i.e., 45-70°) or (b) long (i.e., greater than 50 ft.) and skinny (i.e., <45°). Practically, the IFOV tends to focus on a target point or feature (e.g., an animal feeder) 20-40 ft. from the camera. It is due to this operational application that manufacturers focus, or narrow, the angle of the IFOV.
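The relationship between detection angle and coverage described above can be checked with simple trigonometry. The following sketch is illustrative only; the function name and example values are assumptions for illustration, not taken from any particular camera, and the calculation assumes the FOV is symmetric about the camera's optical axis.

```python
import math

def fov_width(distance_ft: float, fov_deg: float) -> float:
    """Width covered by an angular field-of-view at a given distance,
    assuming the FOV is centered on the camera's optical axis."""
    return 2 * distance_ft * math.tan(math.radians(fov_deg) / 2)

# A "short and wide" MFOV (60 degrees at 30 ft) sweeps a swath of roughly 35 ft,
# while a "long and skinny" MFOV (40 degrees at 60 ft) sweeps roughly 44 ft.
wide_swath = fov_width(30, 60)
narrow_swath = fov_width(60, 40)
```

The comparison illustrates why the wide, near-range MFOV geometry suits detection around a nearby target while the narrow geometry trades width for reach.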
Lastly, it is notable that the image sensors used in today's trail cameras are well-suited for operation within the realm of visible light. In low-light/no-light environments, which are often the prevailing conditions for the observation or capture of nocturnal animals or other similar targets, such image sensors do not perform optimally. Consequently, using conventional image sensors, today's trail cameras provide poor performance in low-light/no-light environments and diminished ranges and distances of operation relative to their theoretical maximums (as referenced above). Further yet, in those instances where image quality is reduced (e.g., compressed) through the use of an algorithm (or other mechanism) for wireless transmission, the overall quality of such images becomes further compromised or degraded.
In recent years, camera mechanisms have been combined with trapping systems to enable remote monitoring of such systems. In limited instances, when such trail cameras are paired with separate (and external) actuation devices, a user has the ability to dial a number or take other action to actuate a gate of a distantly located corral trap. Examples of such systems to assist in trapping feral hogs include camera mechanisms as shown in U.S. patent application 2011/0167709, “An animal trap requiring a periphery fence,” and U.S. patent application 2007/0248219, “System and method for wirelessly actuating a movable structure,” wherein this latter example may be directed to a remotely controlled gate/trap system. Combining the shortcomings discussed above (i.e., time-shifted imagery and poor image quality) with the intellect, numbers and mannerisms of potential targets being trapped (e.g., deer, bears, feral hogs), the operational outcomes are commonly non-optimal and incapable of responding to the challenges of real-time monitoring and trap actuation. Consequently, a need exists.
Overpopulation of wild animals, such as feral hogs (or wild pigs), can be problematic in a number of ways. Feral hogs may damage trees, vegetation, agricultural interests, and other property—including in recent years, cemeteries and golf courses. According to popular press articles and experts in this field, the extent of property damage associated with feral hogs is estimated to be as high as $1.5 billion annually in the United States alone, with approximately $800 million attributed to agricultural losses. It is widely accepted that feral hog damage is expanding, wherein destructive feral hog activity has been regularly reported in more than forty states. In addition to direct damage to real property, feral hogs may prey on domestic animals such as pets and livestock, and may injure other animal populations by feeding on them, destroying their habitat and spreading disease. Nor are feral hogs limited to the United States.
The size and number of feral hogs in the United States contribute to their ability to bring about such destruction. Mature feral hogs may be as tall as 36 inches and weigh from 100 to 400 lbs. Feral hog populations are difficult to ascertain but are staggering in size. In Texas alone, feral hog populations are estimated to range from 1.5-2.4 million. The animals' rapid population growth is attributed to the limited number of natural predators and their high reproductive potential. Sows can produce up to ten piglets per litter and may produce two litters per year. Further, piglets reach sexual maturity at six months of age, underscoring the animals' ability to quickly reach a state of overpopulation.
Feral hogs travel in groups, or sounders, of 8-20 hogs each. Feral hogs are relatively intelligent animals that have keen senses of hearing and smell and quickly become suspicious of traps and trap systems. Further, hogs that escape a trapping event become “educated” about failed attempts, trap mechanisms and processes. Research shows that such education is shared amongst hogs within a sounder and across sounders, which can heighten animal-shyness and render traps less effective (i.e., requiring extended animal re-training, which reduces the efficiency of such trapping operations).
Because of their destructive habits, disease potential and exploding numbers, it is desirable to artificially control their populations by hunting and trapping them. To control or reduce feral hog populations, it is required that approximately 70+% of hogs be captured/harvested annually. Hunting provides limited population control. Further, animal-actuated traps, which are only capable of capturing one or two animals per trapping event, are not effective. Accordingly, to effectively control feral hog populations within a geography, it is critical to regularly and consistently capture all hogs within each sounder.
To achieve this goal, a trap system is required that can (a) physically accommodate a feral hog sounder(s); (b) allow a remote user to clearly monitor and observe, in real-time, the on-going and erratic animal movements into and out of a trap area in both day and night conditions; and (c) control actuation of a trapping mechanism to effect animal capture. More specifically, a need exists for an improved, advanced trail camera that can function in the traditional role of a trail camera to offer enhanced functionality in low-light/no-light environments and/or serve as a central control component of the above-described trap system.
To provide users of trail cameras with the ability to better view animals within a natural environment, particularly in low-light (and even no-light) conditions, the principles of the present invention provide for a trail camera that offers better light sensing in low-light conditions than existing trail cameras. In providing the better low-light sensing, the trail camera includes an image sensor that has an operational range that includes visible light (day operations) and near infrared (NIR) (low-light/night operations), which aligns with a light source integrated into the trail camera. The trail camera may use a monochromatic image sensor that is responsive to ambient light conditions and provides a high-contrast, high-performance image output. Further yet, such monochromatic image sensor provides high-quality imagery at a lower resolution (approximately 1 Mps v. 8+ Mps), which further enables increased storage of such imagery and/or the transmission of real-time video of a monitored target area via a local or wide-area communication network to a remote user. An infrared light source (operatively aligned with a wavelength sensitivity of the image sensor), such as an array of light emitting diodes (LEDs), may be used to selectively illuminate a monitored target area in low-light or no-light conditions.
One embodiment of a trail camera to transmit image data to a communications network to enable a remote user to monitor a scene in real-time may include a lens configured to capture the scene, an infrared (IR) illumination device configured to illuminate the scene at an IR wavelength, and an image sensor being configured to sense the scene being captured by the lens and to produce image signals representing the scene. The image sensor further may have a wavelength sensitivity at the IR wavelength. The trail camera may further include a processing unit in communication with the image sensor. The processing unit may be configured to receive and process the produced image signals. The trail camera may further include an antenna, configured to communicate with the communications network, and an input/output (I/O) unit, configured to communicate with both the processing unit and the antenna. The I/O unit further is configured to communicate image signals from the processing unit to the communications network proximate to production of such image signals. A housing may be adapted to house the lens, IR illumination device, image sensor, processing unit, and I/O unit.
Another embodiment of a trail camera configured to communicate with a communication network to enable a user to monitor a horizontal, first field-of-view encompassing a target area and receive data from an external device located proximate to such target area may include a housing, a lens configured to capture the first field-of-view, an infrared (IR) illumination device configured to selectively illuminate the first field-of-view at an IR wavelength, and an image sensor having a wavelength sensitivity at least at the IR wavelength. The image sensor further may sense the first field-of-view and produce image signals representing the sensed first field-of-view. The trail camera further may include a processor, which receives and processes image signals from the image sensor, and an antenna configured to communicate with the communication network. To facilitate communications, the trail camera further may include an input/output (I/O) unit and a transceiver. The I/O unit may communicate with the processor and the antenna to communicate image signals from the processor to the communication network proximate to production of such image signals. The transceiver may communicate with the I/O unit as well as the external device and is configured to receive data from the external device. The housing is adapted to house the lens, IR illumination device, image sensor, processor, and I/O unit.
One embodiment of an animal trapping system may be viewable and controllable by a remote user using an electronic device. The system may include a trap enclosure configured to deploy to confine animals within a trap area and a controller configured to deploy the trap enclosure in response to a user-issued command. The system may further include a head unit that includes both a camera unit and multiple communications modules. The head unit is configured to produce video signals representative of at least the trap area, communicate with the electronic device via a wide-area communications network, and communicate with said controller via a local wireless network. The head unit further is configured to transmit produced video signals to the electronic device for user-viewing proximate to production of the video signals, receive a user-issued command from the electronic device, and transmit the received user-issued command to the controller to deploy the trap enclosure to confine animals within the viewed trap area as viewed via the electronic device.
Illustrative embodiments of the present invention are described in detail below with reference to the attached drawing figures, which are incorporated by reference herein and wherein:
As specifically shown in
With regard to
The rear housing 102b may provide mounting strap pass-thrus 116 that are configured to accommodate a mounting strap (
The rear housing 102b may incorporate a T-post mounting system 120 inclusive of a recessed element 120a to accommodate a studded T-post (
To better ensure the physical security of the camera unit 100 once it is placed at a monitoring site, the housing 102 may further include a security cable pass-thru 122 to accommodate a steel cable (or other cable-like element) with a locking mechanism (not shown). Operatively, once the camera unit 100 is positioned and secured to a supporting feature, a cable is passed through the security cable pass-thru 122 to (a) encompass the supporting feature or (b) secure the camera unit 100 to a proximate, immobile object (not shown). Additionally, the housing 102 may further include an integrated lock point 124, which is formed when the front housing 102a and the rear housing 102b are brought together, to create a common pass-thru. Through such pass-thru, a standard lock (e.g., combination, keyed) or other securing element (e.g., carabiner, clip) (not shown) may be inserted and secured to ensure that the housing 102 is not readily opened and better ensure that an unintended person does not access the interior of the camera unit 100.
In a closed position, as shown, the front housing 102a and the rear housing 102b are brought together, pivoting around hinge 129 (
With regard to
The control panel 130 may include a graphical display 136, such as a liquid crystal display (LCD), to enable a user to set up various functions of the camera unit 100, receive status information regarding the camera unit 100 (e.g., battery strength, wireless signal strength (if applicable), self-diagnostics), and in an alternative embodiment, view images and/or video captured by the camera unit 100. As further shown, a battery compartment 138 is provided to receive internal batteries to provide power to the camera unit 100. In this case, the illustrated power source includes eight AA batteries, but alternative battery numbers or sizes may be utilized with this or another configuration of the camera unit 100.
While not illustrated in
As shown in this embodiment, the control panel 130 does not fully span the height of the interior of the front housing 102a. As a consequence, an internal cavity 140 is created that can provide access to a variety of internalized connection points and ports, as described further below. By including the connection points and ports interior to the camera unit 100, the connection points and ports are further protected and do not require their own (susceptible) individual weatherproofing/waterproofing. Consequently, there exists less opportunity for system failure due to weather or environmental interference. Access to the exterior of the housing 102, for example, for cabling is provided through weatherproof or waterproof pass-thrus 142.
For a communications-enabled embodiment, an Ethernet (or other similar connector, for example, USB) port 144 may be available to enable the camera unit 100 to communicate via an external communications network, either wired or wireless. Additionally, the port 144 may be used to connect accessories, for example, an external antenna (not shown) to the camera unit 100 to enhance connectivity (e.g., range and/or quality) of the camera unit 100 in remote or rural locations. Additionally, the port 144 could accept control and command devices, as described above, such as an external keyboard or video/image replay device. The camera unit 100 may accommodate a removable memory card 146, such as a secure digital (SD) card, so that captured data, including image data, may be collected and stored. The camera unit 100 may further include an auxiliary power input port 148 to enable connection of the camera unit 100 to an external power source (not shown), such as a battery, solar panel, wind generator, or other similar external power source. While in the illustrated configuration an auxiliary power source (via input port 148) complements the internal power source (provided through batteries within the battery compartment 138), the camera unit 100 alternatively may not include any internal power source and may rely solely on an external power source.
As illustrated in
With regard to
In further reference to
As shown, the image sensor/lens 152 incorporates a lens, lens holder and image sensor; provided, however, that these elements may be separate and distinct rather than integrated as shown. The image sensor/lens 152 preferably includes a monochromatic, light-sensitive sensor capable of dynamic operation in day/night operations. In an embodiment, the image sensor/lens 152 has low-light (e.g., 0 lux) sensing capabilities and is calibrated for enhanced near infrared (NIR) detection, i.e., night vision capability with NIR (e.g., 850 nm wavelength) to detect non-visible light. For NIR applications, the image sensor/lens 152 provides increased sensitivity to reduce the need for applied light (e.g., LED lighting requirements). An image sensor/lens 152 having the above-described characteristics facilitates a lower image resolution, for example, approximately one megapixel. These imaging characteristics provide additional capabilities and user flexibility for a communication-enabled embodiment of the camera unit 100, including transmission capabilities that allow real-time streaming of captured video or transmission of still images via a wide-area communication network (e.g., cellular network).
In an embodiment, the image sensor of the image sensor/lens 152 may have a pixel size of 3.75 μm × 3.75 μm. The frame rates of the image sensor of the image sensor/lens 152 may include a range of operation, including 1.2 megapixel or VGA (full IFOV) at approximately 45 fps, or 720p HD or VGA (reduced IFOV) at approximately 60 fps. For representative performance, the image sensor of the image sensor/lens 152 may have a responsivity of 5.5 V/lux-sec at 550 nm, a dynamic range at or about 83.5 dB, and a quantum efficiency of 26.8%. The integrated lens of the image sensor/lens 152 may have a focal length of 4.5 mm, a relative aperture of F2.3, and a wavelength bandwidth that extends from visible through NIR. This integrated lens, at least for this embodiment, is adapted and tuned to a 1.2 megapixel sensor (or the resolution of the underlying sensor of the image sensor/lens 152).
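The bandwidth advantage of the lower-resolution monochromatic sensor over a traditional high-resolution color sensor can be illustrated with back-of-the-envelope arithmetic. In the following sketch, the frame dimensions and bit depths are assumptions chosen for illustration (a ~1.2 Mps monochrome frame at 8 bits/pixel versus a ~8 Mps color frame at 24 bits/pixel), not values from the described embodiment:

```python
def frame_bits(width: int, height: int, bits_per_pixel: int) -> int:
    """Raw (uncompressed) size of a single frame, in bits."""
    return width * height * bits_per_pixel

# Assumed illustrative formats: 1280x960 monochrome (8 bpp)
# versus 3264x2448 full-color (24 bpp).
mono_frame = frame_bits(1280, 960, 8)
color_frame = frame_bits(3264, 2448, 24)
ratio = color_frame / mono_frame  # each color still carries roughly 20x the raw data
```

Under these assumptions, each high-resolution color frame carries roughly twenty times the raw data of a monochrome frame, which suggests why the lower-resolution sensor makes real-time streaming over a cellular uplink more tractable.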
In one embodiment, the image sensor/lens 152 may be an Aptina image sensor (model number AR0130CS) with an optical format of one-third of an inch. It should be understood that alternative image sensors having similar characteristics and performance of sensing images in low-light and NIR conditions may be used in accordance with an embodiment.
One skilled in the art shall recognize that the image sensor/lens 152 could be a color, high resolution (e.g., 3-10+ megapixel) image sensor/lens combination—consistent with more traditional trail cameras—to provide full-color, high resolution images of animals or other targets. As cellular and other wireless networks enhance their speed and transmission capabilities (and as network rates become more affordable), the transmission of such imagery could become more practical and expected. Alternatively, for a non-communication-enabled camera unit 100, low-resolution or high-resolution images may be stored on removable memory card 146, as an alternative to wireless transmission, or a scheme may be used that combines storage of image data with after-the-fact (i.e., time-shifted) wireless transmission, consistent with more traditional approaches.
An image sensor cover 104, fabricated of optical-grade plastic or glass, may be positioned within an aperture of the front housing 102a and positioned forward of the image sensor/lens 152. The image sensor cover 104 may provide a weatherproofing or waterproofing seal. In one embodiment, the image sensor cover 104 does not have an optical filter; however, an optical filter(s) to transmit light of predetermined wavelengths may be provided, whether incorporated into the image sensor cover 104 or added to the optical path through the use of dye and/or coatings. In one embodiment, as further illustrated in
In one embodiment, surrounding the image sensor/lens 152, an IR LED PCB 164 is provided that includes at least one LED. The illustrated IR LED PCB 164 includes thirty infrared LEDs (e.g., arranged in six strings of five LEDs) configured in a circular arrangement to evenly distribute light about the image sensor/lens 152. It is recognized that this IR LED PCB 164 could take any number of physical arrangements, number of LEDs (e.g., 1 to 50+), and placement, e.g., located to one side of the image sensor/lens 152, partially about the image sensor/lens 152, or encompassing the image sensor/lens 152 (as shown). In accordance with an aspect, selection and arrangement of the LEDs complement the image sensor/lens 152, particularly in low-light environments. In one embodiment, the LEDs have a wavelength of 850 nm with a half-brightness angle of approximately 60° and radiant intensity of 55 mW/sr.
The LEDs are positioned so that IR light generated by the LEDs is transmitted through the illumination source lens 108, or more specifically, an IR LED ring lens 108. The IR LED ring lens 108 is fabricated of optical-grade plastic or glass. Operationally, the IR LED ring lens 108 guides and focuses the illumination of the LED PCB 164 to define an area to be illuminated, where the area of illumination should at least cover a portion of the prescribed horizontal MFOV of the PIR sensor 156. While the image sensor cover 104 and the illumination source lens 108 may be separate components, the cover 104 and lens 108 may also be integrated into a single component as shown in
In an embodiment, the horizontal MFOV operatively aligns with the horizontal IFOV of image sensor/lens 152 (
Operatively,
Operatively, a user may properly mount and orient the camera unit 100 so as to establish a desired MFOV/IFOV to encompass the target (T), which may include a path, an animal feeder, water source, a trap or trapping system, or other desired target to be monitored. In one embodiment, in low-light/no-light conditions, an operative linkage between the IR LED PCB 164, the ambient light sensor 106 and the PIR sensor 156 may be configurable to enable the IR LED PCB 164 to illuminate—when needed due to ambient light conditions—upon detecting motion at or about the target (T) by the PIR sensor 156.
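The operative linkage described above amounts to a simple conjunction: the IR LED PCB 164 illuminates only when motion is detected and ambient light is below some configured level. The following sketch is a minimal illustration of that logic; the function name, lux units, and the particular threshold value are hypothetical assumptions, not part of the described embodiment:

```python
def should_illuminate(motion_detected: bool, ambient_lux: float,
                      threshold_lux: float = 10.0) -> bool:
    """Fire the IR LEDs only when motion is detected AND ambient light
    is below the configured low-light threshold."""
    return motion_detected and ambient_lux < threshold_lux

# Daylight motion needs no illumination; nighttime motion does.
daytime = should_illuminate(True, ambient_lux=500.0)   # bright daylight
nighttime = should_illuminate(True, ambient_lux=0.0)   # no-light conditions
```

In practice the threshold would be a configurable parameter of the camera unit, allowing the user to tune when the IR illumination engages.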
While vertical observation may or may not be needed (some targets (T), for example, feral hogs, are exclusively located on the ground (G); provided, however, that if trapping game birds, bear or other like animals, vertical observation may be of value), the camera unit 100 also provides a greater-than-typical vertical IFOV relative to traditional trail cameras. Specifically, for the illustrated example of (D) being approximately 35 ft., a viewable height (H) of approximately 17 ft. at (T) is achievable with the camera unit 100 being located approximately 4 ft. above the ground (G).
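The stated viewable height can be sanity-checked against the vertical angle it implies. This sketch assumes, for simplicity, a vertical IFOV symmetric about the optical axis; the function name is a hypothetical illustration:

```python
import math

def implied_vfov_deg(viewable_height_ft: float, distance_ft: float) -> float:
    """Vertical angle implied by a viewable height at a given distance,
    assuming the IFOV is symmetric about the optical axis."""
    return math.degrees(2 * math.atan((viewable_height_ft / 2) / distance_ft))

# ~17 ft of viewable height at 35 ft implies a vertical IFOV of roughly 27 degrees.
vfov = implied_vfov_deg(17, 35)
```

A vertical angle on the order of 27° is indeed broader than the narrow fields typical of traditional trail cameras, consistent with the greater-than-typical vertical IFOV described above.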
It should be recognized that the combination of the image sensor/lens 152, the image sensor cover 104 and their proximate position can provide camera unit 100 a more traditional, narrow horizontal IFOV. Traditional trail cameras are developed to focus on a target (T) (e.g., an animal feeder) at a prescribed distance (D), which limits the ability to view proximate areas. While not as practical, the camera unit 100 may be so configured to provide users a more commonplace IFOV.
With regard to
With regard to
The I/O unit 308 may include a variety of features depending on the embodiment of the camera unit 100. Specifically, the I/O unit 308 may include a wireless communications element 308a, which permits communication with an external wireless network (e.g., local communication network, cellular network). Element 308a enables instructions and/or commands to be received from remote users and status information, instructions and/or data, including still and video imagery, to be transmitted to such users.
The I/O unit 308 may further include a wireless communications element 308b, which may permit communications with one or more external devices (
The processing unit 302 may further be in communication with a user interface 310, such as a keypad (not shown) and/or LCD 136, which may be a touch-screen. The processing unit 302 may further be in communication with and control sensors 312, including at least PIR sensor 156 and image sensor/lens 152. The processing unit 302 may further be in communication with and control an illumination source 314, which could take the form of the IR LED PCB 164 or could take the form of a flash or other controllable (switchable) visible light.
With regard to
A motion sensor module 404 may be configured to sense motion of animals or other targets (e.g., people) via a PIR sensor 156. The motion sensor module 404 may be configured to generate a motion detect signal upon the PIR sensor 156 receiving reflected light from an animal or such other target within a MFOV of the PIR sensor 156. A motion detect signal may be used to notify or initiate other module(s), for example, a data communications module 406 (for communications-enabled embodiments) to communicate an alert to a user and/or to initiate recording and/or communication of image data/information.
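The notify/initiate behavior of the motion sensor module 404 can be sketched as a simple publish/subscribe pattern. The class and method names below are hypothetical illustrations of the pattern, not the module's actual interface:

```python
class MotionSensorModule:
    """Generates a motion-detect signal and notifies interested modules."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register another module (e.g., a data communications module)
        to be notified when motion is detected."""
        self._subscribers.append(callback)

    def on_pir_trigger(self):
        """Invoked when the PIR sensor receives reflected light from a
        target within its MFOV; fans the signal out to subscribers."""
        for notify in self._subscribers:
            notify("motion_detected")

# Usage: a communications module subscribes so an alert goes out on motion.
events = []
sensor = MotionSensorModule()
sensor.subscribe(events.append)
sensor.on_pir_trigger()
```

This decoupling lets the same motion-detect signal drive alerts, image capture, and wake-from-standby without the sensor module knowing about each consumer.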
The data communications module 406 may be configured to communicate information, data, instructions and/or commands to a user and/or an external device(s). This module effects the delivery of information (e.g., status information, sensed information or data) received from external devices to remote users and, in other embodiments, transmits status information, instructions and/or data from remote users to such external devices to, for example, control such external devices. Depending on the target of such communication (e.g., user, camera unit 100, external device), a communication network—wide-area or local-area—is selected and used. Information and/or data may include, among other types of data (outlined below), image data, whether stills or real-time streaming video, captured from the image sensor/lens 152. In the context of the external device(s) and their potential interaction with a camera unit 100, the data communications module 406 may serve as a central point for a command-and-control hub system as controlled by a remote user. In such an embodiment, module 406 communicates with a local communication network, e.g., a wireless network using an IEEE 802.15 standard, as but one example, a ZigBee® communications protocol. For any such embodiment, the camera unit 100 serves as a “master” device that communicates with, and in certain scenarios, controls external device(s) as “slave” devices (e.g., controllers, feeders, illumination devices, irrigation and water systems, gates). It should be understood that other local, wireless standards and devices may be used.
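The master/slave hub role described above can be sketched as a registry that relays user-issued commands to the appropriate local device. This is a pattern illustration only; the class name, device identifiers, and command strings are hypothetical:

```python
class CommandHub:
    """Camera unit acting as local-network 'master', relaying user-issued
    commands to registered 'slave' devices (feeders, gates, controllers, ...)."""

    def __init__(self):
        self._devices = {}  # device_id -> command handler

    def register(self, device_id: str, handler) -> None:
        """A slave device registers a handler when it joins the local network."""
        self._devices[device_id] = handler

    def dispatch(self, device_id: str, command: str) -> str:
        """Relay a command received over the wide-area network to a slave."""
        if device_id not in self._devices:
            return f"error: unknown device {device_id}"
        return self._devices[device_id](command)

# Usage: a feeder registers with the hub; a remote user's command is relayed.
hub = CommandHub()
hub.register("feeder-1", lambda cmd: f"feeder-1 executed {cmd}")
result = hub.dispatch("feeder-1", "dispense_small")
```

The registry also makes it straightforward to reject commands addressed to devices that never joined the local network.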
The process commands module 408 may be configured to receive and process commands for the camera unit 100, for example, “enter a low-power mode” (e.g., when there is no detected motion), “initiate image capture,” and “stop image capture.” The process commands module 408 may modify a sensitivity characteristic of the motion sensing functionality (i.e., PIR sensor 156), activate an illumination source 314 (upon detected motion) when ambient light is below a threshold level, and/or increase an intensity characteristic or focal point of the camera unit 100 illumination source 314. In a complementary embodiment, this module may be subject to user-issued commands communicated through a wide-area communication network. Specifically, the process commands module 408, in combination with other modules, may effect the command, control, and management of external devices (e.g., controllers, feeders, illumination devices, irrigation and water systems, gates). Also, internal processes of the camera unit 100 may be modified by user-issued commands. As but one example, if the camera unit 100 was equipped with a zoom lens (not shown), the process commands module 408 may control, internally (based on detected motion within the MFOV) or externally (based on user-issued commands), the magnification of such zoom lens.
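Internally, command processing of this kind reduces to a dispatch table mapping command names to state transitions. The sketch below is a hypothetical illustration of that idea; the command strings echo the examples above, but the state keys and handler structure are assumptions:

```python
def process_command(command: str, state: dict) -> dict:
    """Apply a recognized command to the camera state; unrecognized
    commands leave the state untouched."""
    handlers = {
        "enter_low_power": lambda s: {**s, "mode": "standby"},
        "initiate_image_capture": lambda s: {**s, "mode": "active", "capturing": True},
        "stop_image_capture": lambda s: {**s, "capturing": False},
    }
    if command not in handlers:
        return state
    return handlers[command](state)

# Usage: a user-issued command flips the camera into capture mode.
state = {"mode": "active", "capturing": False}
state = process_command("initiate_image_capture", state)
```

Returning a new state rather than mutating in place keeps each transition easy to log and audit, which is useful when commands may arrive both internally and from remote users.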
A data bridge module 410 may be configured to cause the camera unit 100 to operate as a “bridge” by transmitting status information, instructions, and/or data to and/or receiving status information, instructions, and/or data from nearby external device(s) and communicating such information/data via a wide-area communication network. Other examples of the bridge functionality may include receiving information/data from a tag, band, implant or other device on or in wild, feral or domesticated animals (e.g., ear tags, bands, collars, implants or consumables), equipment (e.g., tractors, sprinklers, irrigation systems, gates), and/or sensors (e.g., temperature, wind velocity, soil moisture, water level, air quality, including pollen or pollutant content and/or levels, ambient light levels, humidity, soil composition, animal weight, animal health and/or condition) via a personal/local communication network.
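The bridge role can be sketched as wrapping a reading received over the personal/local network in an envelope suitable for relay over the wide-area network. The envelope fields, device identifiers, and function name below are hypothetical assumptions for illustration:

```python
import json
import time

def bridge_reading(device_id: str, reading: dict,
                   relay_id: str = "camera-unit") -> str:
    """Wrap a local-network sensor reading in a wide-area message envelope,
    recording which unit relayed it and when."""
    return json.dumps({
        "source": device_id,
        "relayed_by": relay_id,
        "timestamp": int(time.time()),
        "data": reading,
    })

# Usage: a soil-moisture sensor's reading is relayed upstream by the camera unit.
message = bridge_reading("soil-sensor-3", {"moisture_pct": 22.5})
```

Tagging each relayed message with its source lets a remote user distinguish readings from ear tags, equipment, and environmental sensors that all funnel through the same bridge.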
For certain embodiments, an alerts module 412 may be configured to generate alerts or messages that may be communicated by the data communications module 406 to a user. The alerts module 412 may be configured with threshold parameters that, in response to exceeding such threshold parameters, the module issues a signal that results in a user-directed alert and/or message to be generated and delivered.
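The threshold-parameter behavior of the alerts module 412 can be sketched as a comparison of current readings against configured limits. The reading names, units, and message format below are hypothetical illustrations:

```python
def generate_alerts(readings: dict, thresholds: dict) -> list:
    """Produce a user-directed alert message for each reading that
    exceeds its configured threshold parameter."""
    return [
        f"ALERT: {name} = {value} exceeds threshold {thresholds[name]}"
        for name, value in readings.items()
        if name in thresholds and value > thresholds[name]
    ]

# Usage: two readings exceed their thresholds, so two alerts are generated.
alerts = generate_alerts(
    {"temperature_f": 104, "water_level_in": 3},
    {"temperature_f": 100, "water_level_in": 2},
)
```

Generated alerts would then be handed to the data communications module for delivery to the user over the wide-area network.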
A standby module 414 may be configured to cause the camera unit 100 to operate in a “rest” state between periods of activity (e.g., capturing images, transmitting information and data), where many of the electronic components, excluding the PIR sensor 156, are turned off or maintained at low- to very low-power during such rest states. Upon detection of motion within the MFOV, as described above, the standby module 414 is deactivated, and the camera unit 100 and the remaining modules, individually or in some combination, are initiated or become active.
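The rest/wake cycle described above is a two-state machine: the unit rests until motion wakes it, then returns to rest after activity ends. A minimal sketch, with hypothetical class and method names:

```python
class StandbyController:
    """Two-state power controller: in 'standby' only the PIR sensor stays
    powered; motion wakes the unit; an idle timeout returns it to rest."""

    def __init__(self):
        self.state = "standby"

    def on_motion_detected(self):
        """PIR hit within the MFOV: wake the remaining modules."""
        self.state = "active"

    def on_idle_timeout(self):
        """No activity for the configured period: return to rest."""
        self.state = "standby"

# Usage: motion wakes the unit; an idle timeout puts it back to rest.
controller = StandbyController()
controller.on_motion_detected()
woke = controller.state
controller.on_idle_timeout()
rested = controller.state
```

Keeping only the PIR sensor powered in the rest state is what makes long battery-powered deployments practical.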
Additional and/or different modules may be used to perform a variety of additional, specific functions. As but one example, a small/large feed dispense module (not shown) may be provided (rather than inclusion within the process commands module 408) to cause a feeder 1160 (
As discussed above, in one embodiment, camera unit 100 may serve as a traditional, standalone trail camera, which is placed at a site, activated, and directed toward a target area. The camera unit 100 may operate, for example, in a standby state to detect motion within or about such target area, whether in day or night settings; initiate operation of the camera unit 100 upon detection of motion; and capture images (whether still or video) for storage on a memory card 146. In this operational scenario, a user would visit the camera unit 100 to retrieve the memory card 146 to view earlier captured images.
In another embodiment, as schematically illustrated in
In this illustrated example, the monitoring system 1100 includes three primary components: a user device(s) 1120, an on-site system 1130, and an interposed communication network 1140 (e.g., a wide-area communication network). Camera unit 100 is placed at a site, activated, and directed toward the target area. The camera unit 100 would operate, for example, in a standby state (ready to detect motion within or about such target area, whether in day or night settings); initiate operation of the camera unit 100 upon detection of such motion; and capture images (whether still or video) for transmission to a remote user via the communication network 1140. The communication network 1140 may include a conventional server 1142 to store and/or manage data transferred during the control and operation of the camera unit 100, and an IP network 1144 (or like components as are well known in the art). The user device 1120 receives information from the on-site system 1130, but also may transmit control commands (e.g., terminate transmission of images, initiate transmission of images, activate illumination source) through the communication network 1140.
The user device(s) 1120 may be a computer 1120a, a cellular device 1120b (e.g., a smartphone), a pager (not shown), or other similar electronic communications device. At the user device 1120, data is managed and presented through an appropriate user interface, for example, a desktop application (for computer 1120a) or smartphone application (for cellular device 1120b). The on-site system 1130, for this illustrated system, may include the camera unit 100.
An extension of the prior embodiment and schematically illustrated in
The user-controlled animal trapping system 1200 includes three primary components: a user device(s) 1120, an on-site system 1130, and an interposed communication network 1140 (e.g., a wide-area communication network). Camera unit 100 is placed at a site, activated, and directed toward the target area. The camera unit 100 would operate, for example, in a standby state (ready to detect motion within or about such target area, whether in day or night settings); initiate operation of the camera unit 100 upon detection of such motion; and capture images (whether still or video) for transmission to a remote user via the communication network 1140 (consistent with that described above). The user device 1120 receives information from the on-site system 1130, but also may transmit control commands (e.g., terminate transmission of images, initiate transmission of images, activate illumination source, and/or actuate the enclosure or enclosure component) through the communication network 1140.
Similar to above, the user device(s) 1120 may be a computer 1120a, a cellular device 1120b (e.g., a smartphone), a pager (not shown), or other similar electronic communications device. At the user device 1120, data is managed and presented through an appropriate user interface, for example, a desktop application (for computer 1120a) or smartphone application (for cellular device 1120b). The on-site system 1130, for this illustrated system, may include the camera unit 100 and controller 1132. The camera unit 100 may communicate with the controller 1132, whether wirelessly (preferably, through a local communication network), wired, or as an integrated unit. The user-controlled animal trapping system 1200 includes a controllable enclosure mechanism 1150, which may include a suspendable enclosure (movable from a raised position to a lowered position) 1152, a drop net (not shown), a corral structure with a closable gate or door (not shown), a box structure with a closable gate or door (not shown), or similar structure.
Expanding on the abbreviated description above for this embodiment, the camera unit 100, positioned at a trap area, operates to detect motion within a MFOV. Upon detecting such motion, the camera unit 100 exits its standby state, which may include activating its illumination source, if warranted (i.e., low- or no-light conditions); taking a still image of the IFOV; and transmitting such still image (in the form of an alert) to a user via the communications network 1140, which is delivered to the user through a user device(s) 1120. Because animal motion, whether of a single animal or a group, may trigger multiple such alerts, a user may set a rule at the camera unit 100, the server 1142, and/or the software application of the user device 1120 to not notify the user unless a certain amount of motion is sensed and/or until a lapse of time, measured from the last motion detection, has occurred.
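The alert-suppression rule described above can be sketched as a simple predicate. The function name, the event-count condition, and the specific parameter values are illustrative assumptions; the specification leaves the rule's exact form to user configuration.

```python
def should_notify(event_times, now, min_events=3, quiet_seconds=60):
    """Hypothetical alert-suppression rule: notify only when at least
    `min_events` motions have been sensed AND `quiet_seconds` have
    elapsed since the last detection (both parameters illustrative).

    event_times: ascending list of motion timestamps (seconds);
    now: current timestamp (seconds)."""
    if len(event_times) < min_events:
        return False  # not enough motion sensed yet
    return (now - event_times[-1]) >= quiet_seconds
```

Such a rule could run at the camera unit, the server 1142, or within the user device's application, since each sees the same stream of motion events.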
Upon receiving an alert or upon the user's own initiative, the user may send a command to the camera unit 100 to initiate real-time streaming video, which is delivered to the remote user via the communications network 1140. Upon receiving such user command, the camera unit 100 activates its illumination source, if warranted (i.e., low- or no-light conditions), activates the image sensor/lens 152, and begins transmission of real-time live video, which the user receives and can view via the user device 1120.
Using real-time streaming video, the user can watch both a trap area and an area surrounding such trap area to gain an understanding of animal movement in and out of the trap area. When an optimum number of animals are within the trap area, the user sends a command (using a user device(s) 1120) to the camera unit 100 to deploy the enclosure mechanism 1150. Upon receiving such user command, the camera unit 100 transmits a related instruction to the controller 1132 to effect such deployment. Through such deployment and thereafter, the user may watch real-time streaming video of the trap area, which includes, for example, the enclosure 1152 and any and all captured animals.
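The command path just described — a user command arriving at the camera unit, which either acts on it directly (streaming) or relays a deployment instruction to the on-site controller — can be sketched as below. Class names, command strings, and method signatures are illustrative assumptions, not elements of the specification.

```python
class Controller:
    """Stand-in for the release mechanism/controller 1132."""

    def __init__(self):
        self.enclosure_deployed = False

    def deploy(self):
        # Actuate the enclosure mechanism (e.g., drop the enclosure).
        self.enclosure_deployed = True


class CameraUnit:
    """Sketch of the relay step: the camera unit receives a user's
    command and either handles it or forwards an instruction to the
    on-site controller (names are illustrative)."""

    def __init__(self, controller):
        self.controller = controller
        self.streaming = False

    def handle_command(self, command):
        if command == "start_stream":
            self.streaming = True        # begin real-time video transmission
        elif command == "deploy_enclosure":
            self.controller.deploy()     # relay deployment to the controller
```

In this sketch the camera unit remains the single point of contact for the remote user, consistent with the controller being reachable only through the on-site local link.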
Referring to
The user places bait (e.g., corn for feral hogs) within the trap area (beneath and within the to-be-perimeter of the enclosure 1152) to prepare the trap area. To ready the enclosure 1152, the user raises the movable enclosure 1152 to a suspended position and releasably couples the enclosure 1152 to a release mechanism/controller 1132. The release mechanism/controller 1132 communicates with the camera unit 100. The release mechanism/controller 1132 further releasably holds the enclosure 1152 in the suspended position until the user issues an actuation signal to drop the enclosure 1152 to the lowered position.
In operation, as more fully described above, the user assesses the number of animals in and about the trap area through viewing the trap and surrounding areas through a user device 1120 in real-time. When all animals are determined to be within the trap area, the user transmits a drop signal via the user device 1120 (
It should be understood that many of the features and functions of the camera unit, server, user device, controller and/or other devices located at the trap structure may be executed by more than one of the components of the illustrated systems. For example, functionality to cause the enclosure 1152 to drop may be incorporated into the camera unit 100, controller/release mechanism 1132, server 1142 and/or user device 1120. That is, logic for performing various functions may be executed on a variety of different computing systems, and various embodiments contemplate such configurations and variations.
A variation of the above embodiment further is illustrated in
In another embodiment, as illustrated in
As illustrated in
Although particular embodiments of the present invention have been explained in detail, it should be understood that various changes, substitutions, and alterations can be made to such embodiments without departing from the scope of the present invention as defined by the following claims.