Systems and methods for remote multi-directional bark deterrence

Information

  • Patent Grant
  • Patent Number
    11,238,889
  • Date Filed
    Thursday, July 25, 2019
  • Date Issued
    Tuesday, February 1, 2022
Abstract
An apparatus is described that comprises a microphone array and a plurality of transducers. The microphone array and the plurality of transducers are communicatively coupled with at least one processor. The apparatus includes the microphone array for receiving at least one signal. Each transducer of the plurality of transducers is configured to deliver a correction signal along a transducer beam spread axis, wherein the plurality of transducers is positioned on the apparatus for providing a combined transducer beam spread coverage in the horizontal plane. One or more applications running on the at least one processor use information of the at least one signal to detect a sound event. Detection of the sound event includes selecting transducers from the plurality of transducers and instructing the selected transducers to deliver a correction signal.
Description
RELATED APPLICATIONS

Not applicable


TECHNICAL FIELD

The disclosure herein involves monitoring and deterring noise disturbances generated by an animal.


BACKGROUND

A dog may from time to time generate noise disturbances including barking episodes. Bark deterrence systems often use delivery of sound stimulus to deter this unwanted behavior.


INCORPORATION BY REFERENCE

Each patent, patent application, and/or publication mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual patent, patent application, and/or publication was specifically and individually indicated to be incorporated by reference.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a beam spread of an ultrasonic transducer, under an embodiment.



FIG. 2 shows delivery of an ultrasonic signal to an animal, under an embodiment.



FIG. 3 shows an animal outside the coverage of an ultrasonic signal, under an embodiment.



FIG. 4 shows a detection and correction unit, under an embodiment.



FIG. 5 shows beam spread regions of transducers located on a detection and correction unit, under an embodiment.



FIG. 6 shows a microphone array, under an embodiment.



FIG. 7 shows a detection and correction unit, under an embodiment.



FIG. 8A shows a detection and correction unit, under an embodiment.



FIG. 8B shows a bearing of an animal relative to a detection and correction unit, under an embodiment.



FIG. 9 shows a detection and correction unit, under an embodiment.



FIG. 10 shows a detection and correction unit, under an embodiment.



FIG. 11 shows determination of bearing, under an embodiment.



FIG. 12 shows determination of bearing, under an embodiment.





DETAILED DESCRIPTION

Remote bark deterrent technology may incorporate a bark detect component responsible for detecting the occurrence of a dog bark and a correction component responsible for providing a stimulus to the dog when the bark detect component indicates an occurrence of a bark event. The stimulus discourages present and potentially future bark episodes. The bark detect and correction components may be combined into an integral bark detection and correction unit as further described herein.


The bark detect component may comprise a microphone (or microphone array) for analyzing one or more of audio frequency spectrum, sound pressure level (audio magnitude), and audio signature. These concepts are further described below.


An audio frequency or audible frequency is a periodic vibration whose frequency is in the band audible to the average human. The SI unit of audio frequency is the hertz (Hz). It is the property of sound that most determines pitch. The generally accepted standard range of audible frequencies for humans is 20 to 20,000 Hz, although the range of frequencies individuals hear is greatly influenced by environmental factors. In air at atmospheric pressure, these represent sound waves with wavelengths of 17 meters (56 ft) to 1.7 centimeters (0.67 in). Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. High frequencies are the first to be affected by hearing loss due to age or prolonged exposure to very loud noises.


Sound pressure or acoustic pressure is the local pressure deviation from the ambient (average or equilibrium) atmospheric pressure, caused by a sound wave. In air, sound pressure can be measured using a microphone. Sound pressure, like other kinds of pressure, is commonly measured in units of Pascals (Pa). The quietest sound that most people can hear has a sound pressure of 2×10−5 Pa, so this pressure is called the threshold of human hearing.


Sound pressure level (SPL) uses a logarithmic scale to represent the sound pressure of a sound relative to a reference pressure. The reference sound pressure is typically the threshold of human hearing, 2×10−5 Pa. Sound pressure level is measured in units of decibels (dB) and is calculated using the following equation, where p is the sound pressure of the sound wave and p0 is the reference sound pressure:








Lp = 20 log10(p/p0) dB
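As a worked example, the equation above can be evaluated directly. The sketch below is illustrative; the function name and sample pressures are ours, not the patent's:

```python
import math

REF_PRESSURE_PA = 2e-5  # p0, the threshold of human hearing

def sound_pressure_level(pressure_pa: float) -> float:
    """Lp = 20 * log10(p / p0), in dB."""
    return 20.0 * math.log10(pressure_pa / REF_PRESSURE_PA)

# The reference pressure itself is 0 dB; 1 Pa is roughly 94 dB.
print(sound_pressure_level(2e-5))  # 0.0
print(round(sound_pressure_level(1.0)))  # 94
```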










An audio signature comprises characteristics of an audio signal that may be interpreted as a bark event. Bark detection may be as simple as detection of sound pressure level (dB) in certain frequency ranges, for example, the detection of energy within the frequency range of 140 Hz to 2000 Hz with an SPL above 60 dB. An embodiment may add a time component to the combination of frequency and sound pressure level. Under this embodiment, a signature comprises a combination of frequency and sound pressure level analysis over a period of time, for example, the detection of energy within the frequency range of 140 Hz to 2000 Hz, with an SPL above 60 dB, within at least three 16 millisecond periods within any ten contiguous 16 millisecond periods.
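The timeslot-counting signature above can be sketched as follows. The thresholds come from the example in the text; the function names, and the assumption that upstream analysis supplies a per-slot peak frequency and SPL, are ours:

```python
from collections import deque

# Thresholds from the example signature in the text.
FREQ_LOW_HZ, FREQ_HIGH_HZ = 140, 2000
SPL_THRESHOLD_DB = 60
SLOTS_REQUIRED, WINDOW_SLOTS = 3, 10

def slot_qualifies(peak_freq_hz: float, spl_db: float) -> bool:
    """One 16 ms timeslot qualifies if in-band energy exceeds the SPL threshold."""
    return FREQ_LOW_HZ <= peak_freq_hz <= FREQ_HIGH_HZ and spl_db > SPL_THRESHOLD_DB

def is_bark_signature(slots) -> bool:
    """slots: iterable of (peak_freq_hz, spl_db), one per 16 ms timeslot.
    True if any ten contiguous slots contain at least three qualifying slots."""
    window = deque(maxlen=WINDOW_SLOTS)  # sliding ten-slot window
    for freq, spl in slots:
        window.append(slot_qualifies(freq, spl))
        if sum(window) >= SLOTS_REQUIRED:
            return True
    return False
```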


As indicated above, remote bark deterrent technology may incorporate a correction component responsible for providing a stimulus to the dog when the bark detect component indicates an occurrence of a bark event. The stimulus of course discourages present and potentially future bark episodes. The correction component typically involves a sound generation device, under an embodiment. The sound may be generated in the audible region via audio circuitry connected to a speaker or in the ultrasonic region via ultrasonic drive circuitry connected to an ultrasonic transducer.


Ultrasonic transducers convert alternating current (AC) into ultrasound, as well as the reverse. Ultrasonics typically refers to piezoelectric transducers or capacitive transducers. Piezoelectric crystals change size and shape when a voltage is applied: AC voltage makes them oscillate at the same frequency and produce ultrasonic sound. Capacitive transducers use electrostatic fields between a conductive diaphragm and a backing plate.


As indicated above, one type of ultrasonic transducer comprises the piezoelectric crystal, which converts an oscillating electric field applied to the crystal into a mechanical vibration. Piezoelectric crystals may include quartz, Rochelle salt, certain types of ceramic, etc. Piezoelectric transducers may be employed over the entire frequency range and at all output levels.


The sound generated by a correction component may be in the audible region or ultrasonic region. Some audible sounds are effective at distracting a barking dog, such as white noise (a sound that contains every frequency within the range of human hearing, generally 20 hertz to 20 kHz, in equal amounts), the owner's voice, and/or the sound of coins shaking in a can. To obtain a response from a dog while not disturbing people, the ultrasonic region may be chosen, under an embodiment. Ultrasonic frequencies are heard by dogs but not typically heard by humans. The frequencies chosen are, under one embodiment, in the 20 kHz to 27 kHz range. Frequencies up to 50 kHz may also be heard by dogs and may be utilized. Sound pressure level increases as amplitude of the ultrasonic signal increases. Under one embodiment, acoustic power is maximized for efficacy but only to the point where the maximum sound pressure level at the dog's ears is less than or equal to a value determined to be safe for the dog and any human within the ultrasonic field. Ultrasonic frequencies and power levels are chosen under one embodiment based on the frequencies/power levels determined to be effective for a specific dog or event type. They may be determined based on a logging/learning methodology and/or based on repeat occurrences of a behavior following an ultrasonic correction. At the ear of the animal, 115 dBA may be tolerated for 15 minutes, under an embodiment; the scale may slide all the way to 160 dBA for 1 second. Typically, correction tones are 2 seconds or less. Source sound pressure level at the face of the transducer under an embodiment is typically no more than 130 dBA. As long as the ear of a target animal is 1 meter away, the animal would experience levels significantly below 115 dBA. This also ensures safety of humans within the field of the ultrasonic signals.


The higher the frequency of the correction sound, the narrower the beam spread. (Note that beam spread is a measure of the whole angle from side to side of the main sound beam in the far field. Beam divergence is a measure of the angle from one side of the sound beam to the central axis of the beam in the far field). Therefore, the higher the frequency, the more aligned the dog must be with the correction speaker or transducer to receive a sufficient sound pressure level to distract a dog from a barking episode.


The ultrasonic frequencies especially suffer from this phenomenon. The maximum sound pressure level for the ultrasonic frequencies is found along the centerline (i.e., axis) of the transducer signal and then diminishes significantly as the angle from centerline increases.


The bark itself, at audible frequencies, has a wide beam spread, and thus the bark is detected by the bark detection circuitry from a wide angle in the horizontal plane. This detection angle is, under one embodiment, wider than the beam spread of the correction sound, especially if the correction sound is in the ultrasonic range.


In order to effectively interrupt an unwanted behavior exhibited by the dog, the sound pressure level of the correction signal must be high enough when it reaches the dog's ears. The dog may not necessarily be aligned with the front of the speaker or transducer when the barking is detected. In other words, the bark detection component may detect a bark event while the animal is outside the beam spread of the correction signal. In that case, the signal reaching the dog's ears may be significantly attenuated, possibly even below perception levels.


Carrying this concept to the next level, if a single bark deterrent system covers a wide region, a single fixed speaker or transducer likely cannot generate enough sound pressure at every spot in the yard or indoor room to deter the barking behavior.


As indicated above, a bark detection and correction unit may comprise a bark detect component responsible for detecting the occurrence of a dog bark and a correction component for transmitting a bark correction signal. The bark correction (or deterrent) component may cover a wider angular region of the horizontal plane in order to optimize delivery of sound pressure levels to animals upon detection of a bark event. In other words, a wider angular correction region increases the chances of delivering sufficient sound energy, i.e. sound pressure level, to a dog's ears. The bark detection component may detect a direction of an incoming bark event and provide information to the bark correction component for activating a plurality of transducers that emit signals covering wide angular regions, for selecting a specific transducer that transmits a bark deterrent signal in the direction of the animal, and/or for rotating one or more transducers to direct their beam spread regions in a direction of the animal.



FIG. 1 shows a bark detection and correction unit 110. The unit comprises an ultrasonic transducer 185 that emits an ultrasonic signal. FIG. 1 shows the ultrasonic beam spread. The beam spread comprises regions 130, 140, 150, 160, 170. Note that sound pressure level is greatest along the central axis 180 of a beam spread and dissipates as the angle 190 from central axis 180 increases. Therefore the beam spread comprises the following regions: strongest perception region 150; weaker perception regions 140, 160; and lowest perception regions 130, 170. The correction signal is more likely to reach and deter the barking behavior of an animal in region 150 versus region 170.



FIG. 2 shows a bark detection and correction unit 200. The unit comprises an ultrasonic transducer 210 that emits an ultrasonic signal. The unit also comprises a bark detection sensor 212 (which may be located at any position on the unit for detecting audible sound in the broadest possible range around the unit). Under one embodiment, the transducer and sensor may comprise a common component.


The sensor may comprise a microphone which is itself a transducer that converts sound into an electrical signal. Several different types of microphone may be used for implementing bark sensing methods described herein. The different types of microphone employ different methods for converting the air pressure variations of a sound wave to an electrical signal.


Under one embodiment, a detection sensor microphone comprises a MEMS (MicroElectrical-Mechanical System) microphone (also called a microphone chip or silicon microphone). A pressure-sensitive diaphragm is etched directly into a silicon wafer by MEMS processing techniques and is usually accompanied by an integrated preamplifier. Most MEMS microphones are variants of the condenser microphone design. Under an embodiment, digital MEMS microphones have built-in analog-to-digital converter (ADC) circuits on the same CMOS chip, making the chip a digital microphone that is more readily integrated with modern digital products.


Under one embodiment, a detection sensor microphone comprises a piezoelectric ultrasonic transducer for detecting the vibration of the bark. A piezoelectric sensor is a device that uses the piezoelectric effect to measure changes in pressure by converting them to an electrical charge.


With continued reference to FIG. 2, the bark detection sensor detects sound waves 230 of a bark event. In response, the transducer emits an ultrasonic signal with beam spread 240. As the dog 250 is located directly in the path of the beam spread, the sound pressure level of the ultrasonic correction sound is likely sufficient to command attention of the animal and deter barking.



FIG. 3 shows a bark detection and correction unit 300. The unit comprises an ultrasonic transducer 310. The unit also comprises a bark detection sensor 312 (which may be located at any position on the unit for detecting audible sound in the broadest possible range around the unit). Under one embodiment, the transducer and sensor may comprise a common component. The bark detection sensor detects sound waves of a bark event. In response, the transducer emits an ultrasonic signal with beam spread 340. As the dog 350 is located outside the path of the beam spread 340, the sound pressure reaching the animal is very low and very unlikely to influence animal behavior.



FIG. 4 shows a bark detection and correction unit 400. The unit comprises ultrasonic transducers 410, 412, 420. The unit also comprises bark detection sensors 416, 418, 414. The transducer/sensor pairs (as seen in FIG. 4) may comprise combined transducer/sensor components or discrete components paired at common locations of the bark detection and correction unit 400. (Under an embodiment, the detection sensors may be located at any position on the unit for detecting audible sound in the broadest possible range around the unit). One or more of the bark detection sensors detects sound waves 430 of a bark event. In response, the transducers 410, 412, 420 emit an ultrasonic signal with corresponding beam spread regions 440, 450, 460. As the dog 470 is located directly in the path of the beam spread 460, the sound pressure level of the ultrasonic correction sound is likely sufficient to command attention of the animal and deter barking.


The embodiment of FIG. 4 effectively monitors approximately a 180 degree range comprising beam spread regions 440, 450, 460. Irrespective of the animal's location in this 180 degree region, one or more of the bark sensors detect a bark event. For example, the animal 470 may be located in region 440 in close proximity to region 450. As the animal approaches region 450 (but while still located in region 440), the animal may begin barking. As long as the bark event is detectable by at least one of the sensors 416, 418, 414, detection then triggers operation of all transducers 410, 412, 420. Assume that the bark event is only detected by microphone 416. Nevertheless, all transducers operate simultaneously to produce overlapping beam spread regions that span approximately 180 degrees. Even if the animal accelerates into region 450 and then region 460, an effective ultrasonic signal reaches the animal.



FIG. 5 shows a bark detection and correction unit 500. The unit 500 comprises eight correction units 512, 514, 516, 518, 520, 522, 524, 526. Each correction unit comprises an ultrasonic transducer. The correction units may include detection sensors for detecting bark events. Alternatively, one or more detection sensors may be placed at any other location on bark detection and correction unit 500 for detecting audible sound in the broadest possible range around the unit. As just one example, a sensor 540 may be located at a highest point (or other location) enabling detection of bark events from any direction. The combined beam spread of the transducers covers a 360 degree range around the unit 500. One or more sensors may detect a bark event anywhere in the detection region and trigger operation of all transducers. The resulting signal is then directed toward the animal irrespective of its position. Of course, fewer than eight transducers may be used to cover the 360 degree range correction region.



FIG. 6 shows four microphone sensors 610, 612, 614, 616 configured as a microphone array 600 for use in detecting bark events. Assuming a Cartesian coordinate system imposed on the microphone array as seen in FIG. 6, microphone 612 is located in region I, microphone 610 is located in region II, microphone 614 is located in region III, and microphone 616 is located in region IV. The microphone array may be placed on a bark detection and correction unit as seen in FIG. 7. The microphones illustrated in FIG. 6 detect sound waves 630 of animal 620. The microphone array is coupled to an analog to digital converter 670 which is further coupled to at least one processor 680. The microphone array, analog to digital converter, and processor are also communicatively coupled with transducers located on a bark detection and correction unit under an embodiment. The microphones receive information of the audible bark event including arrival time, phase, and magnitude at the respective microphone. This information is passed through the analog to digital converter 670 to processor 680. (As indicated above, one or more microphones may include analog-to-digital converter (ADC) circuits). Based on arrival time, phase difference, and/or signal magnitude, the processor determines the bearing to the dog relative to the position of the microphone array and therefore relative to a bark detection and correction unit using the array for detection purposes. Methods for determining the bearing from a microphone array to a sound source are further described below. The processor also uses information of the audible bark event to confirm that the audible event is in fact a bark. Methods for detecting bark events are also further described below.
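One simple far-field bearing estimate consistent with the description above uses the arrival-time differences at two orthogonal microphone pairs. The geometry and mic labels below are illustrative assumptions, not the patent's firmware:

```python
import math

def bearing_deg(t_px, t_mx, t_py, t_my):
    """Bearing (degrees, positive rotation from the zero degree reference
    line) from arrival times at microphones placed at (+d, 0), (-d, 0),
    (0, +d), (0, -d). Far-field sketch: the wavefront reaches the mic
    nearest the source first, so each pairwise arrival-time difference
    is proportional to a direction cosine of the source bearing."""
    cos_term = t_mx - t_px  # proportional to cos(bearing)
    sin_term = t_my - t_py  # proportional to sin(bearing)
    return math.degrees(math.atan2(sin_term, cos_term)) % 360.0
```

Real firmware would obtain the arrival times by cross-correlating the sampled microphone signals; magnitude and phase differences may refine the estimate, as the text notes.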


Once the bark sound is confirmed to be valid, and bearing from the unit to the pet is known, the processor 680 can easily determine, via lookup table, the proper single or multiple transducers to utilize in directing the maximum SPL to the bearing angle. The detected bearing angle may comprise a positive rotation from a zero degree reference line (as further described below). The lookup table may include bearing information for each transducer, i.e. bearing information for each respective beam axis relative to such zero degree reference line. Upon detecting a bearing to a barking animal, the lookup table may be used to select one or more transducer(s) most aligned (plus or minus a threshold degree range) with the detected bearing.
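The lookup-table selection described above might be sketched as follows; the eight-transducer axis table and the 30 degree alignment threshold are assumed values for illustration:

```python
# Hypothetical beam-axis lookup table for an eight-transducer unit
# (FIG. 5 style): transducer id -> beam axis bearing in degrees
# measured from the zero degree reference line.
BEAM_AXES = {i: i * 45.0 for i in range(8)}
ALIGNMENT_THRESHOLD_DEG = 30.0  # assumed "plus or minus" window

def angular_distance(a: float, b: float) -> float:
    """Smallest angle between two bearings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def select_transducers(bearing: float):
    """Transducer ids whose beam axis lies within the threshold of the
    detected bearing -- the lookup-table selection described above."""
    return [tid for tid, axis in BEAM_AXES.items()
            if angular_distance(axis, bearing) <= ALIGNMENT_THRESHOLD_DEG]
```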



FIG. 7 shows a bark detection and correction unit 700, under an embodiment. The unit includes a microphone array 600 (as seen in FIG. 6) on a top surface of the unit. The microphone array is also coupled to an analog to digital converter 670 which is also coupled to a processor 680 (also shown in FIG. 6). The unit 700 includes a plurality of ultrasonic transducers 710, 712, 714, 716, 718, 728 and corresponding beam spread regions. FIG. 7 illustrates additional beam spread regions 720, 722, 724, 726 emanating from additional ultrasonic transducers (not shown) surrounding the bark detection and correction unit 700. An animal within a monitored region corresponding to the combined span of the beam spread regions seen in FIG. 7 may bark producing sound waves that arrive at the microphone array. Under an embodiment, the microphone array may detect signals corresponding to the bark event and determine a location of the animal relative to the microphone array and relative to the ultrasonic transducers positioned around the upper periphery of the bark detection and correction unit 700.


As indicated, the microphone array 600 and processor 680 determine location of the animal relative to the unit using one or more location methods (further described below). The processor then uses information of the location and a lookup table to select the transducer or transducers (and corresponding beam spread coverage) most aligned with the bearing of the animal, i.e. to select the transducer(s) capable of delivering maximum sound pressure level to the source. Under one embodiment, the unit 700 may then identify an additional transducer or transducers on either side of the previously selected transducer(s). The bark detection and correction unit 700 then transmits a correction signal using the selected transducer and any additional transducers. Under an embodiment, bark detection may continue during the correction phase. If a new bearing of the animal is determined, the processor may identify transducers for delivery of a correction signal in the same manner as described above. If the newly identified transducers are the same as the previously identified transducers, then the detection and correction unit may simply continue the correction signal. If the newly identified set of transducers is entirely different from the previously identified set, then the detection and correction unit may use the new set to deliver a correction signal. If the newly identified set of transducers partially overlaps with the previously identified set, then the detection and correction unit may simply initiate use of the additionally identified transducers (in addition to those already operating) in delivering a correction signal.
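The three update cases above (same set, disjoint set, partial overlap) reduce to a small set operation; this is a sketch, not the patent's implementation:

```python
def update_active_transducers(active: set, new: set) -> set:
    """Apply the update rule described above: continue if unchanged,
    switch entirely if disjoint, and add the newly identified transducers
    cumulatively if the sets partially overlap."""
    if new == active:
        return active        # same set: continue the correction signal
    if active.isdisjoint(new):
        return new           # entirely different: switch to the new set
    return active | new      # partial overlap: add the new transducers
```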



FIG. 8A provides the same bark detection and correction unit as displayed in FIG. 7 and already described above. An animal in proximity to the bark detection and correction unit may begin barking, producing a sound wave which approaches microphone array 600. The microphone array 600 and processor 680 determine location of the animal relative to the unit using one or more of the location methods further described below. The processor may determine a bearing of 225 degrees. In other words, the animal is positioned along a line corresponding to a 225 degree positive rotation from a zero degree reference line. (FIG. 8B shows the bearing according to the geometry of the microphone array described in FIG. 6). The processor then uses information of the location and a lookup table to select the transducer or transducers (and corresponding beam spread coverage) most aligned with the bearing of the animal, i.e. to select the transducer(s) capable of delivering maximum sound pressure level to the source. In the example of FIG. 8A, the processor selects transducers 710, 712 for delivering a correction signal.



FIG. 9 shows a bark detection and correction unit 900, under an embodiment. The embodiment of FIG. 9 features a rectangular prism. The unit includes a microphone array 600 (as seen in FIG. 6) on a top surface of the unit. The microphone array is also coupled to an analog to digital converter 670 which is also coupled to a processor 680 (also shown in FIG. 6). A front of the rectangular prism features an ultrasonic linear phased 1×5 array. In the event of a nearby bark, the microphone array and processor determine a bearing of the animal. Under the embodiment of FIG. 9, the bearing is determined to be 225 degrees (using location methods further described below). Utilizing beamforming from the multiple transducers, the correction sound is directed along the 225 degree bearing.


Under one embodiment, a linear phased array as shown in FIG. 9 may deliver a correction signal. The linear phased array comprises a number of transducer elements positioned in a physical linear array. The elements are fired sequentially from one side to the other, all within a single cycle time of the driven correction signal frequency, allowing constructive interference to focus the beam along a specified bearing. Note that waves generally do not reflect when they strike other waves; instead, they combine. If the amplitudes of two waves have the same sign (either both positive or both negative), they add together to form a wave with a larger amplitude. This is called constructive interference. The distance between transducers and the phase relationship of the signals driven by each transducer determine the direction of the sound beam. Further, note that the linear phased array may comprise more or fewer than the five transducers shown in FIG. 9.
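For a linear phased array, the sequential firing delays that steer the beam follow the standard d·sin(θ)/c relation. The sketch below uses textbook beam-steering geometry with an assumed element pitch; it is not taken from the patent:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 C

def element_delays(n_elements: int, pitch_m: float, steer_deg: float):
    """Per-element firing delays (seconds) for a linear phased array.
    The steering angle is measured from the array's broadside axis;
    element 0 is one end of the array. Successive elements fire
    pitch*sin(angle)/c apart so their wavefronts interfere
    constructively along the steered bearing."""
    tau = pitch_m * math.sin(math.radians(steer_deg)) / SPEED_OF_SOUND_M_S
    delays = [n * tau for n in range(n_elements)]
    # Shift so the earliest-fired element has zero delay.
    base = min(delays)
    return [d - base for d in delays]
```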



FIG. 10 shows a bark detection and correction unit 1000, under an embodiment. The embodiment of FIG. 10 features a unit in the shape of a conical tower. The unit includes a microphone array 600 (as seen in FIG. 6) on a top surface of the unit. The microphone array is also coupled to an analog to digital converter 670 which is also coupled to a processor 680 (also shown in FIG. 6). A top portion 1010 of the detection and correction unit is rotatably coupled to a lower portion 1020. A rotation of the top portion also rotates a position of an ultrasonic transducer 1040 around the periphery of the unit. The transducer may be rotated to any position along a 360 degree path around the exterior of the unit. Under one embodiment, the bearing of a barking animal is determined to be 225 degrees (using location methods described below). The transducer is then rotated to deliver an ultrasonic signal with a beam spread axis aligned along the 225 degree bearing. Note that the embodiment of FIG. 10 may comprise multiple transducers. As one example, multiple transducers may be equally spaced around a periphery of the unit. The processor 680 may use information of the bearing and information of transducer location (available via lookup table) to rotate the transducers in a direction (clockwise or counterclockwise) that positions a transducer (or transducers) for operation with the least amount of rotation.
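Choosing the rotation direction that positions a transducer with the least movement can be sketched as a signed angular difference; the function names and sign convention are ours:

```python
def rotation_to_bearing(current_axis_deg: float, target_deg: float) -> float:
    """Signed rotation (degrees) that brings a transducer's beam axis onto
    the target bearing with the least movement. Positive means rotate in
    the direction of increasing bearing; negative means the other way."""
    return (target_deg - current_axis_deg + 180.0) % 360.0 - 180.0

def best_transducer(axes_deg, target_deg):
    """For several transducers spaced around the unit, return
    (index, signed_rotation) for the one needing the least rotation."""
    rotations = [rotation_to_bearing(a, target_deg) for a in axes_deg]
    idx = min(range(len(rotations)), key=lambda i: abs(rotations[i]))
    return idx, rotations[idx]
```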


One or more applications may run on the processor described above for (i) determining effectiveness of a driven correction sound based on time between barking episodes; (ii) changing the correction sound when a driven correction sound is determined to be ineffective, i.e. the dog continues to bark; and/or (iii) improving detection of bark sounds and rejection of non-bark sounds.


The processor monitors and analyzes the bark events, under an embodiment. Based on the bark (or even bark type as further described below) and repeat episode timing or sequence, the correction sound can be found to be effective or ineffective.


An example is as follows:


A. Detect bark.


B. Drive a frequency range with a specific on/off pattern as a deterrent for a period of time. Under an embodiment ultrasonic products drive fixed-amplitude chirps or signals from about 22 kHz to 27 kHz at varying time intervals including time intervals ranging from 600 milliseconds to 4 seconds. These fixed amplitude chirps/signals may under an embodiment start at 22 kHz and rise to 27 kHz within the correction period. Under an alternative embodiment, these fixed amplitude chirps/signals may start at 37 kHz and rise to 42 kHz within the correction period.


C. Monitor for a repeat bark event for a period of time.


D. If the period of time shows a continuation of the same barking episode (i.e. repeats within 5 seconds), the chosen frequency range and specific on/off pattern can be repeated.


E. If A-D occurs more than a pre-determined number of times a new frequency range and specific on/off pattern can be driven.


Steps A through E then repeat.
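The escalation in step E can be sketched as a small policy for choosing the drive pattern. The pattern frequencies follow step B; MAX_REPEATS is an assumed value for the "pre-determined number of times", and driving the chirp itself (hardware) is not shown:

```python
# Chirp patterns from step B: a first range and an alternative range.
PATTERNS = [
    {"start_khz": 22, "end_khz": 27},
    {"start_khz": 37, "end_khz": 42},
]
MAX_REPEATS = 3  # assumed "pre-determined number of times" (step E)

def choose_pattern(repeat_count: int) -> dict:
    """Step E: after too many repeats of one frequency range and on/off
    pattern within the same barking episode, move to the next pattern."""
    index = min(repeat_count // MAX_REPEATS, len(PATTERNS) - 1)
    return PATTERNS[index]
```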


Systems and methods for detecting bark events are set forth in U.S. patent application Ser. No. 15/871,846, filed Jan. 15, 2018, which application is incorporated herein by reference as if set forth herein in its entirety. Bark detection methods of a bark detection and correction unit are further described below.


Note that the microphone array of a bark detection and correction unit may be coupled to a front end filter controlled by firmware running on one or more processors as further described below. Further, the bark detection methods described below incorporate power saving functionality by scheduling detection tests to terminate a detection process early when the sound event clearly fails to qualify as a bark.


The electrical signals provided by a microphone array's detection of a nearby sound event are processed through a series of three analog operational amplifier stages. Stage one provides buffering and a ~160 Hz high pass filter. The filter eliminates low frequency content of the signals that is primarily created from (non-bark) activity. Stage two provides unity gain with variable attenuation that is updated and controlled by the bark detection algorithm firmware. If the signal is found to be very large in magnitude (using most of the dynamic range of the ADC (Analog to Digital Converter)), the signal is attenuated in the second stage in an attempt to prevent clipping of the signal (a condition in which the input signal exceeds the dynamic range of the ADC). The third stage is a fixed gain stage to increase the overall signal to optimally conform to the electrical limits of the ADC with no attenuation from stage 2. The overall set point for the gain is a function of both the mechanical and electrical characteristics of the entire system. Finally, the conditioned signals from the three stage operational amplifier AFE (analog front end) are converted to a digital equivalent by an ADC, which output is used by a bark detection algorithm for bark event detection, under one embodiment. The bark detection algorithm runs on a processor as shown in FIG. 6 above.
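Stage one's ~160 Hz high pass can be approximated in software by a first-order filter, sketched below for illustration only (the actual stage is analog, and the sample rate here is an assumption):

```python
import math

def high_pass_160hz(samples, sample_rate_hz=16000, cutoff_hz=160.0):
    """First-order digital high-pass filter, a software stand-in for the
    analog stage-one filter: it passes sudden changes and decays constant
    (low frequency) content toward zero."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out
```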


The algorithm for bark detection analyzes the whole waveform of a sound event to accept or reject the sound event as a bark. This improves both bark detection and false (non-bark) rejection.


The bark detection algorithm samples the signals from the ADC. A number of criteria are included as part of this method of discrimination. The algorithm positively detects a bark event on the basis that the event is not rejected by any of the following criteria:


Too Few Timeslots—This indicates a short duration event such as a strike, a bump, or a quick scratch or scrape. The bark detection algorithm identifies these events and aborts as quickly as possible in order to conserve battery. On the other hand, if an early abort does not occur due to a short duration event, then a single trigger event can cause an increment of more than one (possibly several) of the abort causes listed below. That is, a possible bark can fail the bark validation for more than one reason.


Abort Unsaturated Timeslots—This means that the signal amplitude was too high. Too many timeslots were saturated, meaning there were too few unsaturated timeslots to properly analyze the signal for a bark. When used in conjunction with the hardware attenuation (stage two) automatic gain control, very few of these aborts should occur, since each occurrence results in a decrease in the overall gain.


Abort Amplitude—This means that too few timeslots met the minimum amplitude requirement.


Abort Minimum Ratio and Abort Median Ratio—These indicate that the signal had features that made it look more like a scratch or a scrape than a real bark. These tests look at the ratio of the signal's absolute average voltage to its peak-to-peak voltage in each timeslot. Under one embodiment, a single finding of a low ratio in one timeslot is enough to categorize the signal as a non-bark event. Real barks tend to have a higher ratio than scrapes and scratches.


Abort Low Pass Minimum Ratio and Abort Low Pass Median Ratio—These are very similar to the Abort Minimum Ratio and Abort Median Ratio, except that the signal is first low pass filtered in firmware before the ratio of the absolute average voltage to the peak-to-peak voltage is calculated.


Abort Spike Ratio—This indicates that the peak-to-peak amplitude of the signal in one or more timeslots exceeded the average peak-to-peak amplitude of its two neighboring timeslots by a factor of >1.75. That means that there was at least one spike in the data, and spikes of that high an amplitude are uncharacteristic of a real bark. (Real barks usually score below 1.30).


Abort Absolute Spike Ratio—This indicates that the absolute average amplitude of the signal in one or more timeslots exceeded the absolute average amplitude of its two neighboring timeslots by a factor of >1.75. That means that there was at least one spike in the data, and spikes of that high an amplitude are uncharacteristic of a real bark. (Real barks usually score below 1.30).
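The two spike-ratio criteria above can be illustrated with a short sketch. The function below is a hypothetical rendering, not the disclosed firmware; only the 1.75 abort threshold and the neighbor-averaging rule come from the text.

```python
# Sketch of the spike-ratio check using thresholds from the text: abort
# when a timeslot exceeds the average of its two neighbors by a factor
# greater than 1.75 (real barks usually score below 1.30). The same shape
# applies whether slot amplitudes are peak-to-peak or absolute averages.

def spike_ratio_abort(slot_amplitudes, limit=1.75):
    """Return True if any interior timeslot exceeds the average of its
    two neighboring timeslots by more than `limit`, i.e. the event
    contains a spike uncharacteristic of a real bark."""
    for i in range(1, len(slot_amplitudes) - 1):
        neighbor_avg = (slot_amplitudes[i - 1] + slot_amplitudes[i + 1]) / 2
        if neighbor_avg > 0 and slot_amplitudes[i] / neighbor_avg > limit:
            return True    # spike found: reject as non-bark
    return False           # no spike: this test does not reject the event
```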


Historically, bark detection has used positive logic. That is, software (implementing audio signal processing through analog or digital signal processing methods) would test an arbitrary waveform, and if the criteria by which the waveform was evaluated were met (true), a correction would be invoked, with the implication that the event is a bark.


The new approach for detection described herein uses negative logic. Each waveform is evaluated by a series of "tests" to prove that the waveform is not a bark. Furthermore, multiple tests have been created to allow greater discrimination of an arbitrary waveform. If the first test is true (i.e., if first test criteria are met, indicating that the waveform is not a bark) then the negative logic process may abort. However, if the first test criteria are not met, then negative logic suggests that the waveform may be a bark. Then the second test is applied. If the second test is true (i.e., if second test criteria are met, indicating that the waveform is not a bark) then the negative logic process may abort. The waveform is then reevaluated by the next negative logic test. If the waveform passes through all of the tests (i.e., failing to meet each test's criteria), the implication is that the event was a bark. These additional tests allow greater discrimination of an event than previous designs and improve confidence in the waveform identification, since each additional test reinforces the negative logic and thus further supports the conclusion that the waveform is a bark once all test criteria have been applied.
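The negative-logic sequencing can be sketched as a simple rejection chain. This is a minimal illustration under stated assumptions: the function name, the placeholder predicates, and the thresholds are hypothetical; only the first-rejection-aborts control flow and the cheapest-test-first ordering come from the text.

```python
# Sketch of negative-logic sequencing: each test tries to prove the
# waveform is NOT a bark; the first test that fires aborts immediately so
# the unit can return to a low-power state.

def classify(waveform, tests):
    """Run negative-logic tests in order; return the name of the first
    abort cause, or 'bark' if no test rejects the waveform."""
    for name, rejects in tests:
        if rejects(waveform):
            return name            # early abort: proven non-bark
    return "bark"                  # survived every rejection test

# Ordered so the most frequent non-bark events (short strikes/bumps) are
# rejected first, per the power-saving rationale. Predicates are placeholders.
TESTS = [
    ("too_few_timeslots", lambda w: len(w) < 4),
    ("abort_amplitude",   lambda w: sum(abs(s) for s in w) / len(w) < 0.1),
]
```

With this structure, adding a discrimination test is just appending a `(name, predicate)` pair; the early abort keeps the common case cheap.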


In addition to the improved confidence in the detection algorithm, negative logic minimizes power. The order of the negative logic test criteria was chosen such that the most frequently encountered non-bark waveforms (those created by bumps, strikes, impacts, etc.) are detected first (aborting any further checks for that waveform), so the unit quickly returns to a low power state.


The bark detection unit may include user selectable sensitivity ranges. The different sensitivity settings change the levels of specific negative logic tests, which in turn result in either a broader or a narrower range of waveforms for which a correction will be triggered.


The bark detection unit can also detect and correct other dog vocalizations (whining, whimpering, moaning) in addition to barks.


The ratios and tests set forth above are applied to a waveform (potential bark event) using negative logic. The application of this method allows multiple testing criteria to be conducted and sequenced in such a way as to maximize battery life.


The use of ratios and tests as set forth above allows fixed limits to be set while using a variable gain system. Without the use of ratios, the limits would need to be modified every time the gain of the AFE (analog front end) is changed.


As for improving detection of bark sounds and rejection of non-bark sounds, an embodiment of the bark detection and correction unit may implement machine learning. Under an embodiment, detected audio samples are compared to stored library samples. This approach allows library growth as additional audio samples (corresponding to bark events) are encountered. The library may encompass the many bark types of many different breeds. The library may include samples corresponding to "types" of bark. Using such a library, bark detection may discount some bark types (e.g., a bark indicating a stranger on premises) as a non-event while detecting and discouraging other bark types (e.g., a bark indicating boredom). The library may include sufficient samples from a single animal to detect whether a bark event corresponds to that animal. The library may also include samples corresponding to common sounds occurring within detection range of a bark detection and correction unit such that common sounds may be filtered.


Methods for determining the bearing from a microphone array to a sound source are referenced above. The most basic approach to determine the bearing from a microphone array to a sound source is to determine the time difference of arrival of the sound as it arrives at each pair of microphones in a fixed array. FIG. 11 shows microphone array 1100 comprising microphone i and microphone j. Of course, the array may include a greater number of microphones. FIG. 11 also illustrates arriving sound waves 1110. Once the time difference of arrival (T) is known for a microphone pair, the angle (q) to the sound source from each set of microphones can be calculated as follows:






T = arrival time difference
c = speed of sound
D = distance between microphones
q = cos⁻¹(cT/D)






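The basic time-difference-of-arrival calculation, q = cos⁻¹(cT/D), can be sketched as below. The function name and the clamping against measurement noise are illustrative additions; the formula itself is the one given above.

```python
import math

# Sketch of the basic time-difference-of-arrival bearing estimate,
# q = arccos(c*T / D), for one microphone pair.

def bearing_from_tdoa(t_delta, mic_spacing, c=343.0):
    """Angle q (radians) from the microphone baseline to the sound source.

    t_delta:     arrival-time difference T between the microphones (s)
    mic_spacing: distance D between the microphones (m)
    c:           speed of sound (m/s), ~343 m/s in air at 20 C
    """
    ratio = c * t_delta / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))   # clamp against measurement noise
    return math.acos(ratio)
```

A zero time difference yields q = 90 degrees (broadside), while a difference of D/c seconds yields q = 0 (end-fire), matching the geometry of FIG. 11.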
The time difference of arrival method described above can be implemented more accurately using an XCORR cross correlation method if the form factor of a detection and correction unit allows sufficient spacing between microphones to achieve the required resolution. FIG. 12 shows microphone array 1200 comprising microphone i and microphone j. Of course, the array may include a greater number of microphones. FIG. 12 also illustrates arriving sound waves 1210. To determine the time delay between microphone sets, the XCORR cross correlation method determines the time delay utilizing the cross correlation between the received signals at the microphones. The signals from each microphone are first sampled at a sampling frequency (Fs) sufficient to yield the resolution required of the system. The sampled signals, i.e. the resulting digital signals Si(n) and Sj(n), are then fed to a cross correlator 1220. The cross correlator output is typically then fed to a peak detector 1240. The delay time in reaching the correlation peak (signal match) determines the time difference of arrival of the sound arriving at the two microphones. This time difference is then used to determine the bearing angle (q) as follows:






T = number of samples to reach correlation peak
c = speed of sound
D = distance between microphones
Fs = sampling frequency
q = cos⁻¹((cT/Fs)/D)





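The cross-correlator and peak-detector stages can be sketched as follows. This is an illustrative pure-Python rendering, not the disclosed implementation; a production unit would use an optimized (e.g., FFT-based) correlation, and the function names are assumptions.

```python
import math

# Sketch of the XCORR approach: find the lag at which the cross-correlation
# of the two sampled signals peaks (the "peak detector"), then convert
# samples to an angle via q = arccos((c*T/Fs)/D).

def xcorr_peak_lag(si, sj, max_lag):
    """Lag (in samples) at which the cross-correlation of si and sj peaks;
    positive means sj arrives later than si."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, -lag)
        hi = min(len(si), len(sj) - lag)
        val = sum(si[n] * sj[n + lag] for n in range(lo, hi))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

def bearing_from_lag(lag, fs, mic_spacing, c=343.0):
    """Bearing angle q from a lag in samples, sampling rate fs (Hz), and
    microphone spacing D (m)."""
    ratio = max(-1.0, min(1.0, c * lag / (fs * mic_spacing)))
    return math.acos(ratio)
```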

The GCC-PHAT algorithm may under an embodiment be used to determine bearing from a microphone array to a sound source. It can be implemented in either the time or frequency domain. The approach also allows for microphone arrays at all spacing intervals.


The generalized cross correlation (GCC) can estimate the time delay between two microphone inputs by obtaining the inverse Fast Fourier Transform (I-FFT) of the cross-spectrum of the signals.


The phase amplitude transform (PHAT) attempts to improve the correlation by using a weighting function to obtain a unity gain for all frequency components while not affecting the phase information. The phase information yields the sound delay information. The result is a cross correlation with reduced noise.
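The GCC-PHAT computation described in the two paragraphs above can be sketched as below. This is a toy frequency-domain rendering for clarity, with a naive DFT so the example is self-contained; the function names are assumptions, and a real unit would use an FFT.

```python
import cmath

# Toy DFT/IDFT so the sketch has no dependencies; O(N^2), illustration only.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def gcc_phat_delay(si, sj):
    """Estimated delay (samples, possibly negative) of sj relative to si,
    via the PHAT-weighted cross-spectrum: unity magnitude at every
    frequency bin, phase (i.e. delay) information preserved."""
    Xi, Xj = dft(si), dft(sj)
    G = [b * a.conjugate() for a, b in zip(Xi, Xj)]     # cross-spectrum
    G = [g / abs(g) if abs(g) > 1e-12 else 0j for g in G]  # PHAT weighting
    r = [c.real for c in idft(G)]                       # sharpened correlation
    peak = max(range(len(r)), key=lambda k: r[k])
    N = len(r)
    return peak if peak <= N // 2 else peak - N         # wrap negative lags
```

Because PHAT normalizes every frequency bin to unity gain, the inverse transform concentrates the correlation into a sharp peak at the true delay, which is the reduced-noise behavior described above.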


Many other techniques may be employed to determine the bearing to an animal based on the bark sound arriving at a microphone array.


An apparatus is described herein comprising a microphone array and a plurality of transducers, the microphone array and the plurality of transducers communicatively coupled with at least one processor, the at least one processor communicatively coupled with a memory. The microphone array is configured to receive at least one signal. Each transducer of the plurality of transducers is configured to deliver a correction signal along a transducer beam spread axis, wherein the plurality of transducers is positioned on the apparatus for providing a combined transducer beam spread coverage in the horizontal plane. One or more applications running on the at least one processor use information of the at least one signal to detect a sound event, the detecting the sound event including selecting transducers from the plurality of transducers, the detecting the sound event including instructing the selected transducers to deliver a correction signal.


The detecting the sound event includes determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event, under an embodiment.


The determining the sound event bearing comprises using time difference of arrival of the sound event at two or more microphones of the microphone array, under an embodiment.


The determining the sound event bearing comprises cross correlating signals corresponding to the sound event received by at least two microphones of the microphone array, under an embodiment.


The determining the sound event bearing comprises using a generalized cross correlation and phase amplitude transform applied to signals corresponding to the sound event received by at least two microphones of the microphone array, under an embodiment.


The selecting the transducers from the plurality of transducers includes comparing the sound event bearing with transducer bearings of the plurality of transducers, wherein a transducer bearing comprises an angle from a respective transducer beam spread axis to the reference line, wherein the memory comprises the transducer bearings, under an embodiment.


The selecting the transducers from the plurality of transducers includes selecting the transducers when an absolute difference between the sound event bearing and respective transducer bearing is below a threshold value, under an embodiment.
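The bearing-comparison selection described in the two paragraphs above can be sketched as follows. The function name, the 30-degree default threshold, and the use of degrees are illustrative assumptions; only the absolute-difference-below-threshold rule comes from the text.

```python
# Sketch of transducer selection: drive every transducer whose stored
# bearing lies within a threshold of the estimated sound event bearing.
# Bearings are degrees from the unit's reference line.

def select_transducers(event_bearing, transducer_bearings, threshold=30.0):
    """Return indices of transducers whose bearing is within `threshold`
    degrees of the sound event bearing (wrapping at 360)."""
    selected = []
    for idx, tb in enumerate(transducer_bearings):
        # Wrap the difference into [-180, 180) before taking |.|, so a
        # 355-degree event still selects the 0-degree transducer.
        diff = abs((event_bearing - tb + 180.0) % 360.0 - 180.0)
        if diff < threshold:
            selected.append(idx)
    return selected
```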


The combined transducer beam spread coverage of an embodiment spans ninety (90) degrees or less.


The combined transducer beam spread coverage of an embodiment is greater than ninety (90) degrees.


The selecting the transducers from the plurality of transducers includes selecting one or more additional transducers, under an embodiment.


The one or more additional transducers of an embodiment comprise a respective beam spread axis adjacent a beam spread axis of the selected one or more transducers in the horizontal plane, wherein the selected one or more transducers comprise the one or more additional transducers.


The sound event of an embodiment comprises barking of a dog.


The plurality of transducers comprise an ultrasonic transducer, wherein the correction signal comprises an ultrasonic signal, under an embodiment.


The ultrasonic signal of an embodiment comprises a frequency range between 20 kHz and 50 kHz.


The ultrasonic signal of an embodiment comprises a sound pressure level between 70 dBA and 130 dBA measured 0.3 meters from the face of the transducer.


The reference line of an embodiment forms a fixed angle with a line connecting two microphones of the microphone array.


The reference line of an embodiment comprises magnetic north.


An apparatus is described herein comprising a microphone array and at least one transducer, the microphone array and the at least one transducer communicatively coupled with at least one processor, the at least one processor communicatively coupled with a memory. The apparatus includes the microphone array for receiving at least one signal. Each transducer of the at least one transducer is configured to deliver a correction signal along a transducer beam spread axis. Each transducer of the at least one transducer is configured to rotate. One or more applications running on the at least one processor use information of the at least one signal to detect a sound event, the detecting the sound event including determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event, the detecting the sound event including using the sound event bearing and transducer bearing information of the at least one transducer to rotate the at least one transducer, the rotating including adjusting a transducer beam spread axis of the at least one transducer to align with the sound event bearing, the detecting the sound event including instructing the at least one transducer to deliver a correction signal.


The rotating comprises rotating around a longitudinal axis of the apparatus, wherein the longitudinal axis is orthogonal to a horizontal plane, under an embodiment.


A method is described that includes under an embodiment receiving at least one signal through a microphone array. The method includes using information of the at least one signal to detect a sound event, the detecting the sound event including determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event. The method includes selecting one or more transducers using the sound event bearing and transducer bearing information of the one or more transducers, wherein each transducer of the one or more transducers is configured to deliver a correction signal along an axis. The method includes instructing the selected one or more transducers to deliver a correction signal.


The axis of an embodiment comprises a transducer beam spread axis.


The memory of an embodiment comprises the transducer bearing information, wherein the transducer bearing information includes a transducer bearing for each transducer of the one or more transducers, wherein a transducer bearing comprises an angle from a transducer beam spread axis to the reference line.


The selecting one or more transducers includes using the transducer bearing information to compare the sound event bearing with transducer bearings of the one or more transducers, under an embodiment.


The selecting one or more transducers includes selecting the one or more transducers when an absolute difference between the sound event bearing and respective transducer bearing is below a threshold value, under an embodiment.


An apparatus is described herein that comprises a microphone array and a plurality of transducers, the microphone array and the plurality of transducers communicatively coupled with at least one processor. The apparatus includes the microphone array for receiving at least one signal. The plurality of transducers are configured to deliver a correction signal. One or more applications running on the at least one processor use information of the at least one signal to detect a sound event, the detecting the sound event including determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event, the detecting the sound event including instructing the plurality of transducers to direct a correction signal in a direction of the sound event bearing.


The plurality of transducers comprise a linear phased array of transducers, wherein the instructing the plurality of transducers to deliver the correction signal comprises selecting a correction signal phase difference between the transducers of the linear phased array to direct a majority of the correction signal's sound energy along the sound event bearing, under an embodiment.
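For a uniform linear array, the phase difference that steers the main lobe is a standard result: delta_phi = 2*pi*f*d*sin(theta)/c, where theta is measured from broadside. The sketch below illustrates that relation; the function name and parameter choices are assumptions, not values from the disclosure.

```python
import math

# Sketch of the per-element phase increment that steers a uniform linear
# phased array's main lobe to angle `steer_rad` from broadside:
#   delta_phi = 2 * pi * f * d * sin(steer) / c

def element_phase_step(freq_hz, spacing_m, steer_rad, c=343.0):
    """Phase increment (radians) applied between adjacent transducers.

    freq_hz:   correction signal frequency (e.g. an ultrasonic tone)
    spacing_m: distance d between adjacent array elements
    steer_rad: desired beam angle from broadside
    """
    return 2 * math.pi * freq_hz * spacing_m * math.sin(steer_rad) / c
```

Driving element k with phase k * delta_phi delays each wavefront just enough that the contributions add constructively along the sound event bearing, which is the steering behavior the embodiment describes.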


Computer networks suitable for use with the embodiments described herein include local area networks (LAN), wide area networks (WAN), Internet, or other connection services and network variations such as the world wide web, the public internet, a private internet, a private computer network, a public network, a mobile network, a cellular network, a value-added network, and the like. Computing devices coupled or connected to the network may be any microprocessor controlled device that permits access to the network, including terminal devices, such as personal computers, workstations, servers, mini computers, main-frame computers, laptop computers, mobile computers, palm top computers, hand held computers, mobile phones, TV set-top boxes, or combinations thereof. The computer network may include one or more LANs, WANs, Internets, and computers. The computers may serve as servers, clients, or a combination thereof.


The systems and methods for remote multi-directional bark deterrence can be a component of a single system, multiple systems, and/or geographically separate systems. The systems and methods for remote multi-directional bark deterrence can also be a subcomponent or subsystem of a single system, multiple systems, and/or geographically separate systems. The components of systems and methods for remote multi-directional bark deterrence can be coupled to one or more other components (not shown) of a host system or a system coupled to the host system.


One or more components of the systems and methods for remote multi-directional bark deterrence and/or a corresponding interface, system or application to which the systems and methods for remote multi-directional bark deterrence is coupled or connected includes and/or runs under and/or in association with a processing system. The processing system includes any collection of processor-based devices or computing devices operating together, or components of processing systems or devices, as is known in the art. For example, the processing system can include one or more of a portable computer, portable communication device operating in a communication network, and/or a network server. The portable computer can be any of a number and/or combination of devices selected from among personal computers, personal digital assistants, portable computing devices, and portable communication devices, but is not so limited. The processing system can include components within a larger computer system.


The processing system of an embodiment includes at least one processor and at least one memory device or subsystem. The processing system can also include or be coupled to at least one database. The term “processor” as generally used herein refers to any logic processing unit, such as one or more central processing units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASIC), etc. The processor and memory can be monolithically integrated onto a single chip, distributed among a number of chips or components, and/or provided by some combination of algorithms. The methods described herein can be implemented in one or more of software algorithm(s), programs, firmware, hardware, components, circuitry, in any combination.


The components of any system that includes systems and methods for remote multi-directional bark deterrence can be located together or in separate locations. Communication paths couple the components and include any medium for communicating or transferring files among the components. The communication paths include wireless connections, wired connections, and hybrid wireless/wired connections. The communication paths also include couplings or connections to networks including local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), proprietary networks, interoffice or backend networks, and the Internet. Furthermore, the communication paths include removable fixed media like floppy disks, hard disk drives, and CD-ROM disks, as well as flash RAM, Universal Serial Bus (USB) connections, RS-232 connections, telephone lines, buses, and electronic mail messages.


Aspects of the systems and methods for remote multi-directional bark deterrence and corresponding systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). Some other possibilities for implementing aspects of the systems and methods for remote multi-directional bark deterrence and corresponding systems and methods include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the systems and methods for remote multi-directional bark deterrence and corresponding systems and methods may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.


It should be noted that any system, method, and/or other components disclosed herein may be described using computer aided design tools and expressed (or represented), as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above described components may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs.


Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.


The above description of embodiments of the systems and methods for remote multi-directional bark deterrence is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific embodiments of, and examples for, the systems and methods for remote multi-directional bark deterrence and corresponding systems and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. The teachings of the systems and methods for remote multi-directional bark deterrence and corresponding systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above.


The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the systems and methods for remote multi-directional bark deterrence and corresponding systems and methods in light of the above detailed description.

Claims
  • 1. An apparatus comprising, a microphone array and a plurality of transducers, the microphone array and the plurality of transducers communicatively coupled with at least one processor, the at least one processor communicatively coupled with a memory;the microphone array for receiving at least one signal;each transducer of the plurality of transducers configured to deliver a correction signal along a transducer beam spread axis, wherein the plurality of transducers is positioned on the apparatus for providing a combined transducer beam spread coverage in the horizontal plane;one or more applications running on the at least one processor for using information of the at least one signal to detect a sound event, the detecting the sound event including selecting one or more transducers from the plurality of transducers, the detecting the sound event including instructing the selected one or more transducers to deliver a correction signal, the detecting the sound event including determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event, the selecting the one or more transducers from the plurality of transducers including comparing the sound event bearing with transducer bearings of the plurality of transducers, wherein a transducer bearing comprises an angle from a respective transducer beam spread axis to the reference line, wherein the memory comprises the transducer bearings.
  • 2. The apparatus of claim 1, the determining the sound event bearing comprising using time difference of arrival of the sound event at two or more microphones of the microphone array.
  • 3. The apparatus of claim 1, the determining the sound event bearing comprising cross correlating signals corresponding to the sound event received by at least two microphones of the microphone array.
  • 4. The apparatus of claim 1, the determining the sound event bearing comprising using a generalized cross correlation and phase amplitude transform applied to signals corresponding to the sound event received by at least two microphones of the microphone array.
  • 5. The apparatus of claim 1, the selecting the one or more transducers from the plurality of transducers including selecting the one or more transducers when an absolute difference between the sound event bearing and respective transducer bearing is below a threshold value.
  • 6. The apparatus of claim 1, wherein the combined transducer beam spread coverage spans ninety (90) degrees or less.
  • 7. The apparatus of claim 1, wherein the combined transducer beam spread coverage is greater than ninety (90) degrees.
  • 8. The apparatus of claim 1, the selecting the one or more transducers from the plurality of transducers including selecting one or more additional transducers.
  • 9. The apparatus of claim 8, wherein the one or more additional transducers comprise a respective beam spread axis adjacent a beam spread axis of the selected one or more transducers in the horizontal plane, wherein the selected one or more transducers comprise the one or more additional transducers.
  • 10. The apparatus of claim 1, wherein the sound event comprises barking of a dog.
  • 11. The apparatus of claim 1, wherein the plurality of transducers comprise an ultrasonic transducer, wherein the correction signal comprises an ultrasonic signal.
  • 12. The apparatus of claim 11, wherein the ultrasonic signal comprises a frequency range between 20 kHz and 50 kHz.
  • 13. The apparatus of claim 11, wherein the ultrasonic signal comprises a sound pressure level between 70 dBA and 130 dBA measured 0.3 meters from the face of the transducer.
  • 14. The apparatus of claim 1, wherein the reference line forms a fixed angle with a line connecting two microphones of the microphone array.
  • 15. The apparatus of claim 1, wherein the reference line comprises magnetic north.
  • 16. A method comprising, one or more applications running on at least one processor for providing,receiving at least one signal through a microphone array;using information of the at least one signal to detect a sound event, the detecting the sound event including determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event;selecting one or more transducers using the sound event bearing and transducer bearing information of the one or more transducers, wherein each transducer of the one or more transducers is configured to deliver a correction signal along an axis, wherein the axis comprises a transducer beam spread axis, wherein the transducer bearing information includes a transducer bearing for each transducer of the one or more transducers, wherein a transducer bearing comprises an angle from a transducer beam spread axis to the reference line;instructing the selected one or more transducers to deliver a correction signal.
  • 17. The method of claim 16, the selecting one or more transducers including using the transducer bearing information to compare the sound event bearing with transducer bearings of the one or more transducers.
  • 18. The method of claim 17, the selecting one or more transducers including selecting the one or more transducers when an absolute difference between the sound event bearing and respective transducer bearing is below a threshold value.
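The selection rule recited in claims 16–18 (compare the sound event bearing against each transducer's bearing and select those whose absolute difference falls below a threshold) can be illustrated with a short sketch. This is not code from the patent; the function names, the degree units, and the wraparound handling at 360° are illustrative assumptions:

```python
import math

def angular_difference(a, b):
    """Smallest absolute difference between two bearings in degrees,
    accounting for wraparound at 360 (e.g. 350 and 10 differ by 20)."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def select_transducers(event_bearing, transducer_bearings, threshold):
    """Return indices of transducers whose beam-spread-axis bearing is
    within `threshold` degrees of the sound event bearing, per the
    below-threshold selection rule of claim 18."""
    return [i for i, tb in enumerate(transducer_bearings)
            if angular_difference(event_bearing, tb) < threshold]

# Example: four transducers facing the cardinal directions, a bark
# detected at bearing 350 degrees, and a 45-degree selection threshold.
bearings = [0.0, 90.0, 180.0, 270.0]
print(select_transducers(350.0, bearings, 45.0))  # -> [0]
```

Only the north-facing transducer (index 0) is selected, since 350° is 10° from its axis and more than 45° from every other axis.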
  • 19. An apparatus comprising, a microphone array and a plurality of transducers, the microphone array and the plurality of transducers communicatively coupled with at least one processor; the microphone array for receiving at least one signal; the plurality of transducers configured to deliver a correction signal; one or more applications running on the at least one processor for using information of the at least one signal to detect a sound event, the detecting the sound event including determining a sound event bearing, wherein the sound event bearing comprises an angle from a reference line to a direction of the sound event, the detecting the sound event including instructing the plurality of transducers to direct a correction signal in a direction of the sound event bearing, wherein the plurality of transducers comprises a linear phased array of transducers, wherein the instructing the plurality of transducers to deliver the correction signal comprises selecting a correction signal phase difference between the transducers of the linear phased array to direct a majority of the correction signal's sound energy along the sound event bearing.
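Claim 19 recites selecting a phase difference between elements of a linear phased array so that most of the correction signal's energy is directed along the sound event bearing, but does not specify the steering math. The standard delay-and-sum relation for a uniform linear array, Δφ = 2πfd·sin(θ)/c, is one way to realize this; the sketch below assumes that relation, a nominal speed of sound in air, and illustrative element spacing:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, nominal in air at 20 C (assumption)

def steering_phase_shifts(n_elements, spacing_m, freq_hz, steer_deg):
    """Per-element phase shifts (radians) for a uniform linear array so
    the main lobe points steer_deg off the array's broadside axis.
    Uses the standard delay-and-sum relation dphi = 2*pi*f*d*sin(theta)/c."""
    wavelength = SPEED_OF_SOUND / freq_hz
    dphi = (2.0 * math.pi * spacing_m
            * math.sin(math.radians(steer_deg)) / wavelength)
    return [i * dphi for i in range(n_elements)]

# Example: a 4-element array with half-wavelength spacing at 25 kHz
# (within the 20-50 kHz range of claim 12), steered 30 degrees off
# broadside; successive elements are offset by pi/2 radians.
shifts = steering_phase_shifts(4, 343.0 / 25000.0 / 2.0, 25000.0, 30.0)
```

With half-wavelength spacing, a 30° steer yields a π/2 inter-element phase step, so the array's main lobe (and hence the majority of the sound energy) follows the chosen bearing.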
US Referenced Citations (399)
Number Name Date Kind
2364994 Moore Dec 1944 A
2741224 Putnam Apr 1956 A
3182211 Maratuech et al. May 1965 A
3184730 Irish May 1965 A
3500373 Arthur Mar 1970 A
3735757 MacFarland May 1973 A
4180013 Smith Dec 1979 A
4426884 Polchaninoff Jan 1984 A
4783646 Matsuzaki Nov 1988 A
4794402 Gonda et al. Dec 1988 A
4802482 Gonda et al. Feb 1989 A
4947795 Farkas Aug 1990 A
4969418 Jones Nov 1990 A
5054428 Farkas Oct 1991 A
5159580 Andersen et al. Oct 1992 A
5161485 McDade Nov 1992 A
5182032 Dickie et al. Jan 1993 A
5207178 McDade et al. May 1993 A
5207179 Arthur et al. May 1993 A
5471954 Gonda et al. Dec 1995 A
5526006 Akahane et al. Jun 1996 A
5559498 Westrick et al. Sep 1996 A
5576972 Harrison Nov 1996 A
5586521 Kelley Dec 1996 A
5601054 So Feb 1997 A
5642690 Calabrese et al. Jul 1997 A
5749324 Moore May 1998 A
5794569 Titus et al. Aug 1998 A
5810747 Brudny et al. Sep 1998 A
5815077 Christiansen Sep 1998 A
5844489 Yarnall, Jr. et al. Dec 1998 A
5857433 Files Jan 1999 A
5870029 Otto et al. Feb 1999 A
5872516 Bonge, Jr. Feb 1999 A
5886669 Kita Mar 1999 A
5913284 Van Curen et al. Jun 1999 A
5923254 Brune Jul 1999 A
5927233 Mainini et al. Jul 1999 A
5933079 Frink Aug 1999 A
5934225 Williams Aug 1999 A
5949350 Girard et al. Sep 1999 A
5957983 Tominaga Sep 1999 A
5982291 Williams et al. Nov 1999 A
6016100 Boyd et al. Jan 2000 A
6019066 Taylor Feb 2000 A
6028531 Wanderlich Feb 2000 A
6047664 Lyerly Apr 2000 A
6067018 Skelton et al. May 2000 A
6075443 Schepps et al. Jun 2000 A
6166643 Janning et al. Dec 2000 A
6170439 Duncan et al. Jan 2001 B1
6184790 Gerig Feb 2001 B1
6196990 Zicherman Mar 2001 B1
6204762 Dering et al. Mar 2001 B1
6215314 Frankewich, Jr. Apr 2001 B1
6230031 Barber May 2001 B1
6230661 Yarnall, Jr. et al. May 2001 B1
6232880 Anderson et al. May 2001 B1
6271757 Touchton et al. Aug 2001 B1
6297766 Koeller Oct 2001 B1
6327999 Gerig Dec 2001 B1
6353390 Beri et al. Mar 2002 B1
6360697 Williams Mar 2002 B1
6360698 Stapelfeld et al. Mar 2002 B1
6404338 Koslar Jun 2002 B1
6415742 Lee et al. Jul 2002 B1
6426464 Spellman et al. Jul 2002 B1
6427079 Schneider et al. Jul 2002 B1
6431121 Mainini et al. Aug 2002 B1
6431122 Westrick et al. Aug 2002 B1
6441778 Durst et al. Aug 2002 B1
6459378 Gerig Oct 2002 B2
6487992 Hollis Dec 2002 B1
6561137 Oakman May 2003 B2
6581546 Dalland et al. Jun 2003 B1
6588376 Groh Jul 2003 B1
6598563 Kim et al. Jul 2003 B2
6600422 Barry et al. Jul 2003 B2
6637376 Lee et al. Oct 2003 B2
6657544 Barry et al. Dec 2003 B2
6668760 Groh et al. Dec 2003 B2
6700492 Touchton et al. Mar 2004 B2
6747555 Fellenstein et al. Jun 2004 B2
6798887 Andre Sep 2004 B1
6799537 Liao Oct 2004 B1
6807720 Brune et al. Oct 2004 B2
6820025 Bachmann et al. Nov 2004 B2
6825768 Stapelfeld et al. Nov 2004 B2
6830012 Swan Dec 2004 B1
6833790 Mejia et al. Dec 2004 B2
6874447 Kobett Apr 2005 B1
6888502 Beigel et al. May 2005 B2
6901883 Gillis et al. Jun 2005 B2
6903682 Maddox Jun 2005 B1
6907844 Crist et al. Jun 2005 B1
6907883 Lin Jun 2005 B2
6921089 Groh et al. Jul 2005 B2
6923146 Korbitz et al. Aug 2005 B2
6928958 Crist et al. Aug 2005 B2
6937647 Boyd et al. Aug 2005 B1
6956483 Schmitt et al. Oct 2005 B2
6970090 Sciarra Nov 2005 B1
7061385 Fong et al. Jun 2006 B2
7079024 Alarcon et al. Jul 2006 B2
7114466 Mayer Oct 2006 B1
7142167 Rochelle et al. Nov 2006 B2
7164354 Panzer Jan 2007 B1
7173535 Bach et al. Feb 2007 B2
7198009 Crist et al. Apr 2007 B2
7222589 Lee et al. May 2007 B2
7249572 Goetzl et al. Jul 2007 B2
7252051 Napolez et al. Aug 2007 B2
7259718 Patterson et al. Aug 2007 B2
7267081 Steinbacher Sep 2007 B2
7275502 Boyd et al. Oct 2007 B2
7296540 Boyd Nov 2007 B2
7319397 Chung et al. Jan 2008 B2
7328671 Lawrence Feb 2008 B2
7339474 Easley et al. Mar 2008 B2
7382328 Lee, IV et al. Jun 2008 B2
7394390 Gerig Jul 2008 B2
7395966 Braiman Jul 2008 B2
7403744 Bridgelall Jul 2008 B2
7404379 Nottingham et al. Jul 2008 B2
7411492 Greenberg et al. Aug 2008 B2
7426906 Nottingham et al. Sep 2008 B2
7434541 Kates Oct 2008 B2
7443298 Cole et al. Oct 2008 B2
7477155 Bach et al. Jan 2009 B2
7503285 Mainini et al. Mar 2009 B2
7518275 Suzuki et al. Apr 2009 B2
7518522 So et al. Apr 2009 B2
7538679 Shanks May 2009 B2
7546817 Moore Jun 2009 B2
7552699 Moore Jun 2009 B2
7559291 Reinhart Jul 2009 B2
7562640 Lalor Jul 2009 B2
7565885 Moore Jul 2009 B2
7574979 Nottingham et al. Aug 2009 B2
7583931 Eu et al. Sep 2009 B2
7602302 Hokuf et al. Oct 2009 B2
7612668 Harvey Nov 2009 B2
7616124 Paessel et al. Nov 2009 B2
7656291 Rochelle et al. Feb 2010 B2
7658166 Rheinschmidt, Jr. et al. Feb 2010 B1
7667599 Mainini et al. Feb 2010 B2
7667607 Gerig et al. Feb 2010 B2
7680645 Li et al. Mar 2010 B2
7705736 Kedziora Apr 2010 B1
7710263 Boyd May 2010 B2
7760137 Martucci et al. Jul 2010 B2
7779788 Moore Aug 2010 B2
7786876 Troxler et al. Aug 2010 B2
7804724 Way Sep 2010 B2
7814865 Tracy et al. Oct 2010 B2
7828221 Kwon Nov 2010 B2
7830257 Hassell Nov 2010 B2
7834769 Hinkle et al. Nov 2010 B2
7841301 Mainini et al. Nov 2010 B2
7856947 Giunta Dec 2010 B2
7864057 Milnes et al. Jan 2011 B2
7868912 Venetianer et al. Jan 2011 B2
7900585 Lee et al. Mar 2011 B2
7918190 Belcher et al. Apr 2011 B2
7944359 Fong et al. May 2011 B2
7946252 Lee, IV et al. May 2011 B2
7978078 Copeland et al. Jul 2011 B2
7996983 Lee et al. Aug 2011 B2
8011327 Mainini et al. Sep 2011 B2
8047161 Moore et al. Nov 2011 B2
8049630 Chao et al. Nov 2011 B2
8065978 Duncan et al. Nov 2011 B2
8069823 Mainini et al. Dec 2011 B2
8098164 Gerig et al. Jan 2012 B2
8159355 Gerig et al. Apr 2012 B2
8161915 Kim Apr 2012 B2
8185345 Mainini May 2012 B2
8232909 Kroeger et al. Jul 2012 B2
8240085 Hill Aug 2012 B2
8269504 Gerig Sep 2012 B2
8274396 Gurley et al. Sep 2012 B2
8297233 Rich et al. Oct 2012 B2
8342134 Lee et al. Jan 2013 B2
8342135 Peinetti et al. Jan 2013 B2
8430064 Groh et al. Apr 2013 B2
8436735 Mainini et al. May 2013 B2
8447510 Fitzpatrick et al. May 2013 B2
8451130 Mainini May 2013 B2
8456296 Piltonen et al. Jun 2013 B2
8483262 Mainini et al. Jul 2013 B2
8714113 Lee, IV et al. May 2014 B2
8715824 Rawlings et al. May 2014 B2
8736499 Goetzl et al. May 2014 B2
8779925 Rich et al. Jul 2014 B2
8803692 Goetzl et al. Aug 2014 B2
8807089 Brown et al. Aug 2014 B2
8823513 Jameson et al. Sep 2014 B2
8854215 Ellis et al. Oct 2014 B1
8866605 Gibson Oct 2014 B2
8908034 Bordonaro Dec 2014 B2
8917172 Charych Dec 2014 B2
8947240 Mainini Feb 2015 B2
8967085 Gillis et al. Mar 2015 B2
9035773 Petersen et al. May 2015 B2
9125380 Deutsch Sep 2015 B2
9131660 Womble Sep 2015 B2
9186091 Mainini et al. Nov 2015 B2
9204251 Mendelson et al. Dec 2015 B1
9307745 Mainini Apr 2016 B2
9516863 Gerig et al. Dec 2016 B2
9861076 Rochelle et al. Jan 2018 B2
10514439 Seltzer Dec 2019 B2
10757672 Knas et al. Aug 2020 B1
20020010390 Guice et al. Jan 2002 A1
20020015094 Kuwano et al. Feb 2002 A1
20020036569 Martin Mar 2002 A1
20020092481 Spooner Jul 2002 A1
20020196151 Troxler Dec 2002 A1
20030034887 Crabtree et al. Feb 2003 A1
20030035051 Cho et al. Feb 2003 A1
20030116099 Kim et al. Jun 2003 A1
20030154928 Lee et al. Aug 2003 A1
20030218539 Hight Nov 2003 A1
20040108939 Giunta Jun 2004 A1
20040162875 Brown Aug 2004 A1
20040263322 Onaru et al. Dec 2004 A1
20050000469 Giunta et al. Jan 2005 A1
20050007251 Crabtree et al. Jan 2005 A1
20050020279 Markhovsky et al. Jan 2005 A1
20050035865 Brennan et al. Feb 2005 A1
20050059909 Burgess Mar 2005 A1
20050081797 Laitinen et al. Apr 2005 A1
20050139169 So et al. Jun 2005 A1
20050145196 Crist et al. Jul 2005 A1
20050145199 Napolez et al. Jul 2005 A1
20050148346 Maloney et al. Jul 2005 A1
20050172912 Crist et al. Aug 2005 A1
20050217606 Lee et al. Oct 2005 A1
20050231353 DiPoala et al. Oct 2005 A1
20050254663 Raptopoulos et al. Nov 2005 A1
20050258715 Schlabach et al. Nov 2005 A1
20050280544 Mishelevich Dec 2005 A1
20050280546 Ganley et al. Dec 2005 A1
20050288007 Benco et al. Dec 2005 A1
20060000015 Duncan Jan 2006 A1
20060011145 Kates et al. Jan 2006 A1
20060037559 Belcher Feb 2006 A1
20060061469 Jaeger et al. Mar 2006 A1
20060092676 Liptak et al. May 2006 A1
20060102100 Becker et al. May 2006 A1
20060102101 Kim May 2006 A1
20060112901 Gomez Jun 2006 A1
20060196445 Kates Sep 2006 A1
20060197672 Talamas, Jr. et al. Sep 2006 A1
20070011339 Brown Jan 2007 A1
20070197878 Shklarski Aug 2007 A1
20070204803 Ramsay Sep 2007 A1
20070204804 Swanson et al. Sep 2007 A1
20070209604 Groh et al. Sep 2007 A1
20070249470 Niva et al. Oct 2007 A1
20070266959 Brooks et al. Nov 2007 A1
20080004539 Ross Jan 2008 A1
20080055155 Hensley et al. Mar 2008 A1
20080058670 Mainini et al. Mar 2008 A1
20080061978 Huang Mar 2008 A1
20080119757 Winter May 2008 A1
20080129457 Ritter et al. Jun 2008 A1
20080141949 Taylor Jun 2008 A1
20080143516 Mock et al. Jun 2008 A1
20080156277 Mainini et al. Jul 2008 A1
20080163827 Goetzl Jul 2008 A1
20080186167 Ramachandra Aug 2008 A1
20080204322 Oswald et al. Aug 2008 A1
20080216766 Martin et al. Sep 2008 A1
20080236514 Johnson et al. Oct 2008 A1
20080252527 Garcia Oct 2008 A1
20090000566 Kim Jan 2009 A1
20090002188 Greenberg Jan 2009 A1
20090012355 Lin Jan 2009 A1
20090020002 Williams et al. Jan 2009 A1
20090025651 Lalor Jan 2009 A1
20090031966 Kates Feb 2009 A1
20090061772 Moon et al. Mar 2009 A1
20090082830 Folkerts et al. Mar 2009 A1
20090102668 Thompson et al. Apr 2009 A1
20090112284 Smith et al. Apr 2009 A1
20090129338 Horn et al. May 2009 A1
20090224909 Derrick et al. Sep 2009 A1
20090239586 Boeve et al. Sep 2009 A1
20090289785 Leonard Nov 2009 A1
20090289844 Palsgrove et al. Nov 2009 A1
20090299742 Toman et al. Dec 2009 A1
20100008011 Ogram Jan 2010 A1
20100019903 Sawaya Jan 2010 A1
20100047119 Cressy Feb 2010 A1
20100049364 Landry et al. Feb 2010 A1
20100107985 O'Hare May 2010 A1
20100139576 Kim et al. Jun 2010 A1
20100201525 Bahat et al. Aug 2010 A1
20100231391 Dror et al. Sep 2010 A1
20100238022 Au et al. Sep 2010 A1
20100315241 Jow Dec 2010 A1
20100321180 Dempsey et al. Dec 2010 A1
20110140967 Lopez et al. Jun 2011 A1
20110182438 Koike et al. Jul 2011 A1
20110203529 Mainini et al. Aug 2011 A1
20120000431 Khoshkish et al. Jan 2012 A1
20120006282 Kates Jan 2012 A1
20120032855 Reede et al. Feb 2012 A1
20120037088 Altenhofen Feb 2012 A1
20120078139 Aldridge et al. Mar 2012 A1
20120087204 Urbano et al. Apr 2012 A1
20120132151 Touchton et al. May 2012 A1
20120165012 Fischer et al. Jun 2012 A1
20120236688 Spencer et al. Sep 2012 A1
20120312250 Jesurum Dec 2012 A1
20130099920 Song et al. Apr 2013 A1
20130099922 Lohbihler Apr 2013 A1
20130157564 Curtis et al. Jun 2013 A1
20130169441 Wilson Jul 2013 A1
20130203027 De Villers-Sidani et al. Aug 2013 A1
20130321159 Schofield et al. Dec 2013 A1
20130340305 Mobley Dec 2013 A1
20140020635 Sayers et al. Jan 2014 A1
20140053788 Riddell Feb 2014 A1
20140057232 Wetmore et al. Feb 2014 A1
20140062695 Rosen et al. Mar 2014 A1
20140069350 Riddell Mar 2014 A1
20140073939 Rodriguez-Llorente et al. Mar 2014 A1
20140120943 Shima May 2014 A1
20140123912 Menkes et al. May 2014 A1
20140132608 Mund et al. May 2014 A1
20140174376 Touchton et al. Jun 2014 A1
20140176305 Aljadeff Jun 2014 A1
20140228649 Rayner et al. Aug 2014 A1
20140228927 Ahmad et al. Aug 2014 A1
20140253368 Holder Sep 2014 A1
20140253389 Beauregard Sep 2014 A1
20140261235 Rich et al. Sep 2014 A1
20140265568 Crafts et al. Sep 2014 A1
20140267299 Couse Sep 2014 A1
20140275824 Couse et al. Sep 2014 A1
20140276278 Smith et al. Sep 2014 A1
20140307888 Alderson et al. Oct 2014 A1
20140343599 Smith et al. Nov 2014 A1
20140358592 Wedig et al. Dec 2014 A1
20150040840 Muetzel et al. Feb 2015 A1
20150043744 Lagodzinski et al. Feb 2015 A1
20150053144 Bianchi et al. Feb 2015 A1
20150075446 Hu Mar 2015 A1
20150080013 Venkatraman et al. Mar 2015 A1
20150107531 Golden Apr 2015 A1
20150149111 Kelly et al. May 2015 A1
20150163412 Holley et al. Jun 2015 A1
20150172872 Alsehly et al. Jun 2015 A1
20150199490 Iancu et al. Jul 2015 A1
20150223013 Park et al. Aug 2015 A1
20150289111 Ozkan et al. Oct 2015 A1
20150350848 Eramian Dec 2015 A1
20150358768 Luna et al. Dec 2015 A1
20160015005 Brown, Jr. et al. Jan 2016 A1
20160021506 Bonge, Jr. Jan 2016 A1
20160021850 Stapelfeld et al. Jan 2016 A1
20160029466 DeMao et al. Jan 2016 A1
20160044444 Rattner et al. Feb 2016 A1
20160084801 Robinson et al. Mar 2016 A1
20160094419 Peacock et al. Mar 2016 A1
20160102879 Guest et al. Apr 2016 A1
20160125867 Jarvinen et al. May 2016 A1
20160150362 Shapiro et al. May 2016 A1
20160174099 Goldfain Jun 2016 A1
20160178392 Goldfain Jun 2016 A1
20160187454 Orman et al. Jun 2016 A1
20160234649 Finnerty et al. Aug 2016 A1
20160253987 Chattell Sep 2016 A1
20160292635 Todasco Oct 2016 A1
20160335917 Lydecker et al. Nov 2016 A1
20160363664 Mindell et al. Dec 2016 A1
20160366813 Haneda et al. Dec 2016 A1
20170026798 Prevatt Jan 2017 A1
20170212205 Bialer et al. Jul 2017 A1
20170323630 Stickney et al. Nov 2017 A1
20180027772 Gordon et al. Feb 2018 A1
20180077509 Jones et al. Mar 2018 A1
20180078735 Dalgleish et al. Mar 2018 A1
20180094451 Peter et al. Apr 2018 A1
20180188351 Jones et al. Jul 2018 A1
20180190311 Kato Jul 2018 A1
20180210704 Jones et al. Jul 2018 A1
20180234134 Tang et al. Aug 2018 A1
20180235182 Bocknek Aug 2018 A1
20180249680 Van Curen et al. Sep 2018 A1
20180303066 Weimin et al. Oct 2018 A1
20180315262 Love et al. Nov 2018 A1
20190013003 Baughman et al. Jan 2019 A1
20190066651 Yang Feb 2019 A1
20190110430 Badiou Apr 2019 A1
20190165832 Khanduri et al. May 2019 A1
20200367810 Shouldice et al. Nov 2020 A1
Foreign Referenced Citations (19)
Number Date Country
101937015 Jan 2011 CN
101112181 Nov 2012 CN
102793568 Dec 2014 CN
H0974774 Mar 1997 JP
20130128704 Nov 2013 KR
20180029636 Mar 2018 KR
20180059684 Jun 2018 KR
WO-02060240 Feb 2003 WO
WO-2006000015 Jan 2006 WO
WO-2008085812 Jul 2008 WO
WO-2008140992 Nov 2008 WO
WO-2009105243 Aug 2009 WO
WO-2009106896 Sep 2009 WO
WO-2011055004 May 2011 WO
WO-2011136816 Nov 2011 WO
WO-2011160698 Dec 2011 WO
WO-2012122607 Sep 2012 WO
WO-2015015047 Feb 2015 WO
WO-2016204799 Dec 2016 WO
Non-Patent Literature Citations (50)
Entry
Canine 10RC-HP10 Instructions, Jul. 8, 2017, Info Unlimited https://www.amazing1.com/content/download/CANINE10_Instructions.pdf (Year: 2017).
Ayele E., et al., “Highly Energy Efficient Animal Mobility Driven BLE Beacon Advertising Control for Wildlife Monitoring,” IEEE Xplore, 2020, 8 pages.
Crutchfield., “What You Need to Know About Sound Masking,” pp. 1-7.
Integrated Building System., “The What, Where, Why and How of Sound Masking,” 2016, 6 pages.
Nam M., et al., “Combined Scheduling of Sensing and Communication for Real-Time Indoor Tracking in Assisted Living,” IEEE Real-Time Systems Symposium, 2007, 10 pages.
Radoi I.E., et al., “Tracking and Monitoring Horses in the Wild Using Wireless Sensor Networks,” IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, 2015, 8 pages.
Supplementary European Search Report for European Application No. 18887374.9, dated Aug. 10, 2021, 7 pages.
Supplementary European Search Report for European Application No. 18888089.2, dated Aug. 6, 2021, 6 pages.
Supplementary European Search Report for European Application No. 18889799.5, dated Aug. 13, 2021, 10 pages.
Wood D.J., “Speech Privacy & Sound Masking In Modern Architecture,” 2008, 51 pages.
Baba A. I., et al., “Calibrating Time of Flight in Two Way Ranging,” IEEE Xplore Digital Library, Dec. 2011, pp. 393-397.
Eileen A., “How to Protect Your Dog From Loud and Scary Sounds,” 2013, pp. 1-8.
Extended European Search Report for Application No. EP17180645, dated May 9, 2018, 7 pages.
Extended European Search Report for European Application No. 18738816, dated Sep. 29, 2020, 10 pages.
Extended European Search Report for European Application No. 11784149.4, dated Nov. 17, 2017, 7 pages.
Extended European Search Report for European Application No. 15735439.0, dated Oct. 18, 2017, 9 pages.
Extended European Search Report for European Application No. 15895839.7, dated Oct. 9, 2018, 5 pages.
Extended European Search Report for European Application No. 17162289.7, dated Aug. 31, 2017, 7 pages.
Extended European Search Report for European Application No. 18738547.1, dated Sep. 24, 2020, 9 pages.
Extended European Search Report for European Application No. 18738743.6, dated Sep. 24, 2020, 11 pages.
High Tech Products, Inc., “Human Contain Model X-10 Rechargeable Multi-function Electronic Dog Fence Ultra-system,” Internet citation, Retrieved from the Internet: URL: http://web.archive.org/web/20120112221915/http://hightechpet.com/user_Manuals/HC%20X-10_Press.pdf [retrieved on Apr. 10, 2017], Apr. 28, 2012, pp. 1-32, XP008184171.
Info Unlimited “Canine 10RC-HP10 Instructions”; Instruction manual V.0420; Publication 12 & 15 [online]. Jun. 14, 2013 [retrieved Sep. 29, 2020]. Retrieved from the Internet: URL: https://www.amazing1.com/content/download/CANINE10_Instructions.pdf.
International Search Report and Written Opinion for Application No. PCT/US2020/042601, dated Nov. 9, 2020, 11 pages.
International Preliminary Report for Patentability Chapter II for International Application No. PCT/US2014/024875, dated Mar. 12, 2015, 17 pages.
International Preliminary Report on Patentability for Application No. PCT/US2015/043653, dated Dec. 19, 2017, 14 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/013737, dated Mar. 7, 2018, 8 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/013738, dated Mar. 20, 2018, 6 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/013740, dated Mar. 20, 2018, 6 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/019887, dated May 8, 2018, 10 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/065122, dated Mar. 25, 2019, 7 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/065154, dated Apr. 23, 2019, 7 pages.
International Search Report and Written Opinion for Application No. PCT/US2018/065938, dated Mar. 8, 2019, 9 pages.
International Search Report and Written Opinion for Application No. PCT/US2020/016617, dated Apr. 27, 2020, 13 pages.
International Search Report and Written Opinion for International Application No. PCT/US2014/024875, dated Jun. 27, 2014, 12 pages.
International Search Report for International Application No. PCT/US2014/020344, dated Jun. 5, 2014, 2 pages.
International Search Report for International Application No. PCT/US2014/066650, dated Feb. 19, 2015, 3 pages (Outgoing).
International Search Report for International Application No. PCT/US2015/010864, Form PCT/ISA/210 dated Apr. 13, 2015, 2 pages.
International Search Report for International Application No. PCT/US2015/043653, Form PCT/ISA/210 dated Oct. 23, 2015, 2 pages.
Jondhale S. R., et al., “Issues and Challenges in RSSI Based Target Localization and Tracking in Wireless Sensor Networks,” International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), 2016, pp. 594-598.
Joshi A., et al., “GPS-less Animal Tracking System,” Fourth International Conference on Wireless Communication and Sensor Networks, 2008, pp. 120-125.
Kuo S., et al., “Detecting Movement of Beacons in Location-Tracking Wireless Sensor Networks,” IEEE 66th Vehicular Technology Conference, 2007, pp. 362-366.
Kuo S., et al., “The Beacon Movement Detection Problem in Wireless Sensor Networks for Localization Applications,” IEEE Transactions on Mobile Computing, Oct. 2009, vol. 8(10), pp. 1326-1338.
Millner H., et al., “Wireless 3D Localization of Animals for Trait and Behavior Analysis in Indoor & Outdoor Areas,” IEEE MTT-S International Microwave Workshop on Wireless Sensing, Local Positioning, and RFID, 2009, pp. 1-4.
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority for International Application No. PCT/US2015/043653, Form PCT/ISA/220 dated Oct. 23, 2015, 1 page.
Notification of Transmittal of the International Search Report and Written Opinion for the International Application No. PCT/US2014/066650, dated Feb. 19, 2015, 1 page.
Panicker G. J., et al., “A LoRa Wireless Mesh Network for Wide-Area Animal Tracking,” IEEE International Conference on Electrical, Computer and Communication Technologies, 2019, pp. 1-5.
Welch G., et al., “An Introduction to the Kalman Filter,” Department of Computer Science, Jul. 24, 2006, pp. 1-16.
Written Opinion for International Application No. PCT/US2014/066650, dated Feb. 19, 2015, 15 pages(outgoing).
Written Opinion for International Application No. PCT/US2015/043653, Form PCT/ISA/237 dated Oct. 23, 2015, 13 pages.
Written Opinion of the International Application No. PCT/US2015/010864, Form PCT/ISA/237 dated Apr. 13, 2015, 6 pages.
Related Publications (1)
Number Date Country
20210027801 A1 Jan 2021 US