The present invention relates to a voice connection system between humans and animals.
The Italian patent application no. TO2000A001154, filed by the Applicant, discloses a voice connection system between humans and domestic animals, comprising storage means for voice messages and a microprocessor designed to receive signals coming from field sensors located on the animal's head and to activate a loudspeaker, so as to issue, each time, a voice message selected from the aforesaid storage means on the basis of the received signals.
The system according to the aforesaid prior document also comprises speech recognition means for sending to the microprocessor signals representing the content of voice messages uttered by a person, and pulse-generating means associated with the animal's head, which receive from the microprocessor the aforesaid signals representing the content of voice messages and send corresponding pulses to the animal's brain.
The present invention aims at providing a particularly advantageous structure of a voice connection system between humans and animals of the type previously referred to.
Said aim is achieved according to the invention by means of a voice connection system between humans and animals having the characteristics as in the appended claims, which are intended as an integral and substantial part of the present description.
Further aims, characteristics and advantages of the invention shall be evident from the following description, made with reference to the accompanying drawings, provided as mere non-limiting examples, in which:
With reference to
The system 1 comprises a series of sensors 2 for detecting and sending to a microprocessor-based unit 3 signals 4 representing various thoughts-desires of the dog. In the preferred embodiment of the invention, the system 1 is integrated into a collar, referred to with C in
Thus, the first and second sensor matrices 2A mentioned above lie close to the occipital-temporal region (positions O1 and O2: even numbers denoting the right side of the skull and odd numbers the left side) and are particularly suitable for detecting cerebral waves; conversely, the sensors 2B are mainly intended for detecting signals coming from muscular and nervous bundles, in an annular area lying relatively close to the region where cerebral signals originate, i.e. the animal's neck (or in any case its cervical-cephalic region), where the propagation of nervous pulses and muscle contractions are relevant for the aim proposed here.
According to the invention, the detections made with the sensors 2A and 2B are electroencephalographic (EEG) and electromyographic (EMG), respectively.
As is generally known, electroencephalography uses electrodes placed on the head of a patient for detecting and measuring patterns of brain electrical activity, produced by millions of neurons mainly located in the cerebral cortex. Electromyography, on the other hand, is a similar technique intended for the detection of electrical activity related to muscular contraction and for the analysis of qualitative and quantitative variations of the action potentials of the motor unit. EEG and EMG analyses can give useful, objective information on specific transitory stimuli-events-actions-behaviors of a patient under examination.
In this light it should be pointed out that the sensors 2 are therefore not intended for detecting conventional cerebral waves only, in terms of spontaneous electric activity of the cerebral cortex, but rather a general spectrum of signals that are the consequence of specific transitory stimuli-events-actions-feelings-behaviors, including those shown by muscle motion.
Going back to
The system 1 further comprises speech recognition means 7, which receive a voice message uttered by a person 8 and issue output signals 9 received by the unit 3. According to an important feature of the invention the unit 3, depending on the type of signal 9 received, activates a generator of radioelectric waves 10; the signals issued by the generator 10, corresponding to the voice message uttered by the person 8, are sent directly to the animal's brain. The technique used for sending signals from the generator 10 directly to the animal's brain, without any artificial guide, can be of any known type (radiofrequency, microwaves, ultrasounds, voice-FM). For instance, in a possible embodiment the generator 10 works with radio frequency, so as to modulate a steady-state frequency of about 15 kHz with signals varying from 300 Hz to 4 kHz; the output result is an approximately steady-state tone incorporating a non-audible signal, which can however be perceived directly by the animal's brain. Techniques like the one mentioned above are used, for instance, for inserting subliminal messages into audio communications or in the field of radio-hypnosis.
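The modulation step described above can be sketched in a few lines; the following is a minimal illustration assuming plain amplitude modulation of the 15 kHz carrier (the patent does not specify the modulation scheme, and all names and parameter values here are illustrative, not taken from the document):

```python
import math

def modulate(signal, carrier_hz=15000.0, sample_rate=48000.0, depth=0.5):
    """Amplitude-modulate a baseband signal (nominally the 300 Hz-4 kHz
    band) onto a steady carrier: one plausible reading of the patent's
    'approximately steady-state tone incorporating a non-audible signal'.
    Assumes normalized input samples in [-1, 1]."""
    out = []
    for n, s in enumerate(signal):
        t = n / sample_rate
        carrier = math.sin(2.0 * math.pi * carrier_hz * t)
        out.append((1.0 + depth * s) * carrier)  # classic AM: (1 + m*s) * carrier
    return out

# Example: a 1 kHz tone standing in for the voice-band component
voice = [math.sin(2.0 * math.pi * 1000.0 * n / 48000.0) for n in range(480)]
tone = modulate(voice)
```

With a modulation depth of 0.5 and normalized input, the envelope of the output stays within ±1.5, i.e. the carrier never vanishes and remains "approximately steady-state" as the text describes.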
In other words, therefore, depending on a signal 9 representing a voice message of the person 8 received and decoded by the means 7, the unit 3 controls the generator 10 so that the latter issues a suitable radioelectric signal that directly reaches the animal's cerebral area, in order to stimulate the execution of given actions or to have given feelings. Said operating mode of the system according to the invention is schematically shown in
The speech recognition means 7, comprising an electronic circuit integrated into a microphone or a microphone matrix, convert in per se known ways a PCM (Pulse Code Modulation) digital audio signal into a corresponding graph of frequency component amplitudes. The speech recognition means 7 are also associated with a second database (for instance encoded into a convenient area of the storage means 5) containing several thousand sample graphs, which identify the different types of sounds the human voice can produce; in practice, therefore, the input sound entering the system is identified by relating it to the pre-stored sound type closest to the one under examination.
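The matching of an input frequency graph against the database of sample graphs amounts to a nearest-neighbour search; the following is a minimal sketch assuming squared Euclidean distance over fixed-length amplitude vectors (the distance measure, labels and data are illustrative assumptions, not specified in the patent):

```python
def classify_sound(input_spectrum, sample_graphs):
    """Return the label of the pre-stored sample graph (frequency-amplitude
    profile) closest to the input spectrum, by squared Euclidean distance.
    'sample_graphs' stands in for the database of several thousand sample
    graphs encoded in the storage means."""
    best_label, best_dist = None, float("inf")
    for label, graph in sample_graphs.items():
        dist = sum((a - b) ** 2 for a, b in zip(input_spectrum, graph))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two toy "sample graphs" over four frequency bins
samples = {
    "sit":  [0.9, 0.1, 0.0, 0.2],
    "come": [0.1, 0.8, 0.3, 0.0],
}
print(classify_sound([0.8, 0.2, 0.1, 0.1], samples))  # → sit
```

A real database would hold thousands of such profiles, but the selection logic — pick the stored graph minimizing the distance to the input — is the same.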
In practice, therefore, when the microphone mentioned above detects sound waves, these are processed by the speech recognition means 7, which select and encode useful sounds; the corresponding codes are sent to the generator 10, which converts said codes into radioelectric signals directly stimulating the animal's cerebral fibers. The animal thus hears the human voice and the radiofrequency signal almost simultaneously, thereby associating the two stimuli (as in a sort of Pavlovian conditioning) and coming to understand human language.
Said part of the operation of the system according to the invention is schematically shown by blocks 1-5 of
According to an important feature of the invention, in order to help the system adapt to the voice and speaking style of the user 8, the unit 3 integrates a neural network structure. As is known, neural networks are mathematical systems developed in the course of research on artificial intelligence, characterized by a high level of adaptability, meant as the ability to learn and store information and to use it whenever necessary, and above all as the ability to approximate an unknown function between input and output.
The present invention likewise provides for a system "teaching" period, in order to achieve a correct configuration of the neural network, which is necessary for accurate operation of the speech recognition system. Said teaching period is also necessary for correctly relating the signals 4 to the corresponding stimuli-events-actions-feelings-behaviors of the animal, in order to issue an audio message by means of the loudspeaker 6, and for correctly linking the radioelectric waves produced by the generator 10 to the corresponding voice message uttered by the person 8.
With reference to the first aspect, the system shall be taught by recording the signals 4 of the animal "at work". An example of this activity can consist in finding a relation between an indication, made by the person 8 (here acting as supervisor or instructor) of a series of selected substances, and the corresponding signals 4 recorded by the unit 3, which reflect the animal's global reactions in terms of effects-behaviors-feelings towards a given smell-substance.
With reference to the second aspect, a series of basic words or sentences are recorded into the storage means 5 of the unit 3 through the speech recognition means 7. The vocalization of these words/sentences is associated with specific actions which the dog has to perform, and their utterance is controlled by the person 8, who here again acts as supervisor or instructor, through the neural network implemented in the system control logic. The algorithms of the neural network shall determine the best relation between the voice input provided by the person 8 and the output of the generator 10.
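The teaching phase described above — learning the best relation between the voice input of the person 8 and the output of the generator 10 — can be illustrated with a toy supervised training loop. The single-layer linear network and delta-rule update below are assumptions for illustration only, since the patent specifies no particular architecture or learning algorithm:

```python
import random

def train_association(pairs, n_in, n_out, epochs=500, lr=0.1, seed=0):
    """Delta-rule training of a single-layer linear network mapping
    speech-recognition feature vectors to generator control outputs.
    'pairs' is a list of (input_vector, target_vector) examples."""
    rng = random.Random(seed)
    w = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, target in pairs:
            # forward pass: y = W x
            y = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
            # delta rule: move each weight toward reducing the error
            for i in range(n_out):
                err = target[i] - y[i]
                for j in range(n_in):
                    w[i][j] += lr * err * x[j]
    return w

def predict(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

# Teach two word codes to trigger two distinct generator channels
pairs = [([1.0, 0.0], [1.0, 0.0]),   # word A -> channel 1
         ([0.0, 1.0], [0.0, 1.0])]   # word B -> channel 2
w = train_association(pairs, n_in=2, n_out=2)
```

After training, `predict(w, [1.0, 0.0])` is close to `[1.0, 0.0]`: the supervisor's examples have shaped the input-output mapping, which is the essence of the teaching period described above.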
Note that during a first stage, the matrices of electrodes 2 and the microphones shall be placed on the animal's head, so as to achieve a better processing of the signals used for setting up the system 1; they shall then be located in the collar C, so as to convert into electric signals both the pulses detected on the animal's body, indicating a status of the animal due to stimuli, feelings, events, actions or behaviors, and the vocalizations of the same animal or of another being, such as a human.
Thanks to the characteristics mentioned above, the system 1 according to the invention can simulate an exchange of human voice messages between the user and the animal, in which the voice messages "provoked" by the animal are the result of a language generated iteratively on the basis of pre-recorded messages selected by the unit 3 so as to actually correspond to the animal's feelings-behaviors-thoughts-desires.
According to an important feature of the invention, the presence of the neural network control system and of the speech recognition means 7 enables the unit 3 to implement a self-learning logic, in which the animal gradually develops its own language through an evolutionary process, via the interactive loop brain-sensors 2-loudspeaker 6-microphone-generator 10-brain, i.e. by listening through the means 7 and the loudspeaker 6 to the vocalizations it issues in association with its reactions to the environment; the instructor 8 can correct or acknowledge with his/her own voice messages the correctness of the voice messages issued by the loudspeaker 6 on the basis of the signals 4 (as shown by blocks 6-7 of
From the description made above it is evident that the system 1 carries out an actual human-animal adaptation interface system, which can support a bi-directional communication controlled by the neural network logic unit implemented into the microprocessor-based unit 3, where
the input for human-animal communication comprises vocal instructions from the person 8, detected through the recognition means 7, and its output comprises signals that can be perceived by the animal's brain, produced by the generator 10, as schematically shown by blocks 1-6 of
the input for animal-human communication comprises data detected through the sensors 2, and its output comprises the indication of the “status” the animal is in at that moment, issued through the loudspeaker 6, as schematically shown by blocks 7 and 8 of
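The bi-directional interface summarized above can be sketched as two lookup-style mappings, one per direction of communication; the dictionaries and names below are purely illustrative placeholders for the associations established during the teaching period:

```python
def human_to_animal(voice_code, codebook):
    """Map a decoded voice command (from the recognition means 7)
    to a generator signal specification (for the generator 10)."""
    return codebook.get(voice_code)

def animal_to_human(sensor_pattern, message_bank):
    """Map a detected sensor pattern (from the sensors 2) to a
    pre-recorded voice message (for the loudspeaker 6)."""
    return message_bank.get(sensor_pattern, "status not recognized")

# Illustrative associations established during teaching
codebook = {"sit": ("rf", 15000, "sit-pattern")}
message_bank = {"ears-back": "I am anxious"}

print(animal_to_human("ears-back", message_bank))  # → I am anxious
```

In the actual system these mappings would be produced by the neural network logic rather than fixed tables, but the two directions of the interface — voice in, radioelectric signal out; sensor data in, voice message out — have this overall shape.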
All components referred to above for the system according to the invention, as well as the necessary electric supply means, can be implemented with modern technologies in miniaturized form and can therefore easily be positioned on the animal's body, preferably in a single collar C.
Obviously, though the basic idea of the invention remains the same, construction details and embodiments can widely vary with respect to what has been described and shown by mere way of example, however without leaving the framework of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
TO2002A0933 | Oct 2002 | IT | national |
Number | Name | Date | Kind |
---|---|---|---|
5392735 | Xitco et al. | Feb 1995 | A |
5749324 | Moore | May 1998 | A |
5790033 | Yamamoto | Aug 1998 | A |
6017302 | Loos | Jan 2000 | A |
6178923 | Plotkin | Jan 2001 | B1 |
6254536 | DeVito | Jul 2001 | B1 |
6496115 | Arakawa | Dec 2002 | B2 |
6547746 | Marino | Apr 2003 | B1 |
6556868 | Naritoku et al. | Apr 2003 | B2 |
6712025 | Peterson et al. | Mar 2004 | B2 |
6761131 | Suzuki | Jul 2004 | B2 |
7282028 | Kim et al. | Oct 2007 | B2 |
20020152970 | Takeda | Oct 2002 | A1 |
Number | Date | Country |
---|---|---|
0730261 | Sep 1996 | EP |
2350263 | Nov 2000 | GB |
WO 03079775 | Oct 2003 | WO |
Number | Date | Country
---|---|---
20040088164 A1 | May 2004 | US