The present invention relates to methods and systems for configuring a networking and matching communication platform for an individual by evaluating manifestations of physiological change in the human voice. More specifically, embodiments of the present invention relate to methods and systems for configuring a networking and matching communication platform for an individual by evaluating emotional attitudes based on ongoing activity analysis of different vocal categories.
Recent technologies have enabled the indication of emotional attitudes of an individual, either human or animal, by linking them to one's voice intonation. For example, U.S. Pat. No. 8,078,470 discloses means and a method for indicating emotional attitudes of an individual, either human or animal, according to voice intonation. The invention also discloses a system for indicating emotional attitudes of an individual comprising a glossary of intonations relating intonations to emotional attitudes. Furthermore, U.S. Pat. No. 7,917,366 discloses a computerized voice-analysis device for determining an SHG profile (as described therein, such an SHG profile relates to the strengths (e.g., relative strengths) of three human instinctive drives). Of note, the invention may be used for one or more of the following: analyzing a previously recorded voice sample; real-time analysis of voice as it is being spoken; and combination voice analysis, that is, a combination of (a) previously recorded and/or real-time voice and (b) answers to a questionnaire.
A review of existing Internet social networking sites reveals a need for a platform that utilizes said technologies by providing an easy-to-use, automated matching feedback mechanism by which each social networking participant can be matched to other users. Such evaluations would be useful not only to the users themselves, but also to other people who might be interested in a match with one of the users.
In light of the above, there is a long-felt unmet need to provide such a social networking and matching communication platform implementing analysis of voice intonations and providing such an automated matching feedback mechanism to match between users.
It is hence one object of this invention to disclose a social networking and matching communication platform capable of implementing analysis of voice intonations and providing such an automated matching feedback mechanism to match between users. Briefly, a matching user can be evaluated by manifestations of physiological change in the human voice based on four vocal categories: vocal emotions (personal feelings and emotional well-being in the form of an offensive/defensive/neutral/indecisive profile, with the ability to zoom down on said profiles) of users; vocal personalities (the set of a user's moods based on the SHG profile) of users; vocal attitudes (personal emotional expressions towards a user's point/subject of interest and the mutual ground of interests between two or more users) of users; and vocal imitation of two or more users. Moreover, a matching user can be evaluated based on manifestations of physiological change in the human voice and the user's vocal reaction to his/her point/subject of interest over a predetermined period of time. The Internet matching system in accordance with the present invention processes the evaluation, determines a matching rating, and sends the rating to the other participant in the match by, for example, email or short message service (SMS). The evaluations and ratings may also be stored in an emotionbase for later review by the participants and/or other interested people. Advantageously, the system may also prompt the participants to take further action based on that rating. For example, if a user rates a match positively, the system may prompt that participant to send a gift to the other participant, send a message to the other participant, or suggest another match to that participant. A user receiving a positive rating may likewise be prompted by the system.
It is another object of the present invention to disclose a system for configuring a social networking and matching communication platform by implementing analysis of voice intonations of a first user, said system comprising (1) an input module, said input module adapted to receive voice input and an orientation reference selected from a group consisting of matching, time, location, and any combination thereof; (2) a personal collective emotionbase, said emotionbase comprising benchmark tones and benchmark emotional attitudes (BEA), each of said benchmark tones corresponding to a specific BEA; and (3) at least one processor in communication with a computer readable medium (CRM), said processor executing a set of operations received from said CRM, said set of operations comprising the steps of (a) obtaining a signal representing sound volume as a function of frequency from said voice input; (b) processing said signal so as to obtain voice characteristics of said individual, said processing including determining a Function A, said Function A being defined as the average or maximum sound volume as a function of sound frequency, from within a range of frequencies measured in said voice input, said processing further including determining a Function B, said Function B being defined as the averaging or maximizing of said Function A over said range of frequencies and dyadic multiples thereof; (c) comparing said voice characteristics to said benchmark tones; (d) allocating to said voice characteristics at least one of said BEAs corresponding to said benchmark tones; and (e) assigning said orientation reference to said allocated at least one of said BEAs. It is in the core of the invention wherein said set of operations additionally comprises a step of evaluating, determining and presenting a matching rating of said user and sending said rating to another user for matching.
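Steps (a)-(d) above can be illustrated as a minimal sketch in Python. The benchmark table, band boundaries, and BEA labels below are hypothetical placeholders, not values taken from the specification; the sketch assumes Function A is realized as the maximum volume within a tone's frequency band and Function B as the maximum of Function A over that band and its dyadic multiples.

```python
import numpy as np

# Hypothetical benchmark table: tone name -> (frequency band in Hz, benchmark
# emotional attitude). Real bands and BEAs would come from the emotionbase.
BENCHMARKS = {
    "DO": ((120.0, 137.1), "survival"),
    "FA": ((171.4, 188.6), "love"),
    "SOL": ((188.6, 205.7), "confidence"),
}

def function_a(spectrum_db, freqs, band):
    """Function A: maximum sound volume within a frequency band."""
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum_db[mask].max() if mask.any() else -np.inf

def function_b(spectrum_db, freqs, band, n_dyadic=3):
    """Function B: maximize Function A over the band and its dyadic multiples
    (e.g. 171-189 Hz, 343-377 Hz, 686-754 Hz)."""
    return max(function_a(spectrum_db, freqs, (band[0] * 2**k, band[1] * 2**k))
               for k in range(n_dyadic))

def allocate_bea(spectrum_db, freqs):
    """Steps (c)-(d): compare the voice characteristics to the benchmark
    tones and allocate the BEA of the most energetic tone."""
    scores = {name: function_b(spectrum_db, freqs, band)
              for name, (band, _) in BENCHMARKS.items()}
    dominant = max(scores, key=scores.get)
    return BENCHMARKS[dominant][1]
```

In use, `spectrum_db` would be the magnitude spectrum (in dB) of a recorded utterance and `freqs` the corresponding frequency axis, e.g. as produced by an FFT of the voice input.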
Said matching, for example, can be analyzed and established through a combination of the user's vocal expression and opinion, after presenting to him/her a series of pictures for a predetermined period of time.
In another aspect of the present invention, the system enables a participant to authorize members of the Internet website system to view his or her matching evaluation. In that way, other members may consider that evaluation in deciding whether to arrange a matching with the reviewed participant.
In yet another aspect of the present invention, the system may be linked to an established Internet matching website to provide that website with the features described herein. Alternatively, the system may be linked to blogs (weblogs) or social networking sites such as Facebook, Twitter, Xanga, Tumblr, TagWorld, Friendster, and LinkedIn.
In yet another aspect of the present invention, a widget is provided as a user-interface.
In yet another aspect of the present invention, a physical feedback (smell, touch, vision, taste) of matching intensity between two or more users is provided as a notification via mobile and/or computer platform.
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.
The term “word” refers in the present invention to a unit of speech. Words selected for use according to the present invention usually carry a well-defined emotional meaning. For example, “anger” is an English language word that may be used according to the present invention, while the word “regna” is not; the latter carries no meaning, emotional or otherwise, to most English speakers.
The term “tone” refers in the present invention to a sound characterized by certain dominant frequencies. Several tones are defined by frequency in Table 1 of US 2008/0270123, where it is shown that principal emotional values can be assigned to each and every tone. Table 1 divides the range of frequencies between 120 Hz and 240 Hz into seven tones. These tones have corresponding harmonics in higher frequency ranges: 240 to 480 Hz, 480 to 960 Hz, etc. For each tone, the table gives a name and a frequency range, and relates its accepted emotional significance.
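The seven-way division of the 120-240 Hz range, with harmonics folding down from higher octaves, can be sketched as follows. The equal-width bands are an assumption for illustration; the actual band boundaries are those of Table 1 of US 2008/0270123.

```python
def base_tone_frequency(freq_hz):
    """Fold a frequency into the 120-240 Hz base range by halving, since
    tones repeat in dyadic multiples (240-480 Hz, 480-960 Hz, ...)."""
    while freq_hz >= 240.0:
        freq_hz /= 2.0
    return freq_hz

def classify_tone(freq_hz, n_tones=7, lo=120.0, hi=240.0):
    """Return the 0-based index of the tone band containing freq_hz,
    assuming (for illustration) seven equal-width bands."""
    f = base_tone_frequency(freq_hz)
    if not (lo <= f < hi):
        raise ValueError("frequency below the tonal range")
    width = (hi - lo) / n_tones  # ~17.14 Hz per tone
    return int((f - lo) // width)
```

For example, 490 Hz folds down to 122.5 Hz and therefore lands in the same tone band as a 122.5 Hz fundamental.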
The term “intonation” refers in the present invention to a tone or a set of tones, produced by the vocal cords of a human speaker or an animal. For example, the word “love” may be pronounced by a human speaker with such an intonation that the tones FA and SOL are dominant.
The term “dominant tones” refers in the present invention to tones produced by the speaker with more energy and intensity than other tones. The magnitude or intensity of intonation can be expressed as a table, or graph, relating relative magnitude (measured, for example, in units of dB) to frequency (measured, for example, in units of Hz).
The term “reference intonation”, as used in the present invention, relates to an intonation that is commonly used by many speakers while pronouncing a certain word, or it relates to an intonation that is considered the normal intonation for pronouncing a certain word. For example, the intonation FA-SOL may be used as a reference intonation for the word “love” because many speakers will use the FA-SOL intonation when pronouncing the word “love”.
The term “emotional attitude”, as used in the present invention, refers to an emotion felt by the speaker, and possibly affecting the behavior of the speaker, or predisposing a speaker to act in a certain manner. It may also refer to an instinct driving an animal. For example “anger” is an emotion that may be felt by a speaker and “angry” is an emotional attitude typical of a speaker feeling this emotion.
The term “emotionbase”, as used in the present invention, refers to an organized collection of human emotions. The emotions are typically organized to model aspects of reality in a way that supports processes requiring this information; for example, modeling archived assigned referenced emotional attitudes against predefined situations in a way that supports monitoring and managing one's physical, mental and emotional well-being, and subsequently improving it significantly.
The term “configure”, as used in the present invention, refers to designing, establishing, modifying, or adapting emotional attitudes to form a specific configuration or for some specific purpose, for example in a form of collective emotional architecture.
The term “user” refers to a person configuring or using a social networking and matching communication platform capable of implementing analysis of voice intonations and providing an automated matching feedback mechanism to match participants based on ongoing activity analysis of three neurotransmitter loops, i.e., the SHG profile.
The term “SHG” refers to a model for instinctive decision-making that uses a three-dimensional personality profile. The three dimensions are the result of three drives: (1) Survival (S)—the willingness of an individual to fight for his or her own survival and his or her readiness to look out for existential threats; (2) Homeostasis (H) [or “Relaxation”]—the extent to which an individual would prefer to maintain his or her ‘status quo’ in all areas of life (from unwavering opinions to physical surroundings) and to maintain his or her way of life and activity; and (3) Growth (G)—the extent to which a person strives for personal growth in all areas (e.g., spiritual, financial, health, etc.). It is believed that these three drives have a biochemical basis in the brain, driven by the activity of three neurotransmitter loops: (1) Survival could be driven by the secretion of adrenaline and noradrenalin; (2) Homeostasis could be driven by the secretion of acetylcholine and serotonin; and (3) Growth could be driven by the secretion of dopamine. While all human beings share these three instinctive drives (S, H, G), people differ in the relative strengths of the individual drives. For example, a person with a very strong (S) drive will demonstrate aggressiveness, possessiveness and a tendency to engage in high-risk behavior when he or she is unlikely to be caught. On the other hand, an individual with a weak (S) drive will tend to be indecisive and will avoid making decisions. A person with a strong (H) drive will tend to be stubborn and resistant to changing opinions and/or habits. In contrast, an individual with a weak (H) drive will frequently change his or her opinions and/or habits. Or, for example, an individual with a strong (G) drive will strive to learn new subjects and will strive for personal enrichment (intellectual and otherwise). A weak (G) drive, on the other hand, may lead a person to seek isolation and may even result in mental depression.
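The three-dimensional SHG profile described above can be represented as a simple data structure. The 0-1 scale for relative drive strengths and the example values are illustrative assumptions, not part of the SHG model as disclosed.

```python
from dataclasses import dataclass

@dataclass
class SHGProfile:
    """Three-dimensional SHG personality profile. Field values are
    relative drive strengths on an assumed 0-1 scale (illustrative)."""
    survival: float     # (S) readiness to fight for survival
    homeostasis: float  # (H) preference for maintaining the status quo
    growth: float       # (G) striving for personal growth

    def dominant_drive(self):
        """Return the label of the relatively strongest drive."""
        drives = {"S": self.survival, "H": self.homeostasis, "G": self.growth}
        return max(drives, key=drives.get)

# A person with a very strong Survival drive (hypothetical values):
aggressive = SHGProfile(survival=0.9, homeostasis=0.3, growth=0.4)
```

Because people differ only in the relative strengths of the shared drives, comparisons between profiles operate on these three numbers rather than on categorical types.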
The term “matching intensity level” refers to a level of two or more users' vocal compatibility with each other based on four vocal categories: vocal emotions (personal feelings and emotional well-being in the form of an offensive/defensive/neutral/indecisive profile, with the ability to zoom down on said profiles) of users; vocal personalities (the set of a user's moods based on the SHG profile) of users; vocal attitudes (personal emotional expressions towards a user's point/subject of interest and the mutual ground of interests between two or more users) of users; and vocal imitation of two or more users.
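One way the four vocal categories could be combined into a single matching intensity level is as a weighted mean of per-category similarity scores. The equal weights and the [0, 1] score scale below are hypothetical; the specification does not fix a particular aggregation rule.

```python
# Hypothetical weights over the four vocal categories (must sum to 1).
CATEGORY_WEIGHTS = {
    "vocal_emotions": 0.25,
    "vocal_personalities": 0.25,
    "vocal_attitudes": 0.25,
    "vocal_imitation": 0.25,
}

def matching_intensity(scores):
    """Combine per-category similarity scores (each assumed in [0, 1])
    into a single matching intensity level."""
    missing = set(CATEGORY_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing category scores: {sorted(missing)}")
    return sum(CATEGORY_WEIGHTS[c] * scores[c] for c in CATEGORY_WEIGHTS)
```

A level computed this way could then drive the notification of matching intensity between users described elsewhere in the specification.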
The principles, systems and methods for determining the emotional subtext of a spoken utterance used in this invention are those disclosed by Levanon et al. in PCT Application WO 2007/072485; a detailed description of their method of intonation analysis may be found in that source.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IL15/50876 | 8/31/2015 | WO | 00

Number | Date | Country
---|---|---
62044345 | Sep 2014 | US