The present invention relates to the field of feedback of audio conversations and, in particular, relates to feedback of an audio conversation by utilizing interactive wearable devices.
In today's fast-paced life, the success of any endeavor hinges on the ability to communicate effectively. The communication skills of an individual are tested everywhere, for example, during an interview, during a presentation to business clients, a project leader, or a board of directors, or while writing a report and the like. An engineer giving his/her thesis presentation requires different skills from a business leader who has to present the roadmap of a company to a board of directors. Effective communication skills depend on a number of factors. The factors include but may not be limited to the usage of words, the speed of delivery of words, pitch modulation, body language, tone, accent and one or more external factors.
Using the right tools to communicate the right messages at the right time can salvage crises and motivate people to work hard towards success. There are many institutions that offer courses teaching effective communication for leaders. For example, a few of the top-notch instructors deliver communication lessons either online or via CD/DVDs. Further, there are several books/e-books on advanced communication skills. In addition, there are communication courses available at universities and institutions. Although these systems/sources of understanding the art of effective communication provide some feedback mechanism, the feedback lasts only for the duration of the course. However, it is a known fact that communication only gets better with continuous feedback and continuous monitoring. For busy people, it is often very time-consuming to record their conversations, have them heard by an expert for advice, and iterate on their mistakes. None of the existing alternatives provides a real-time feedback mechanism to the users for improving their communication on a regular basis. Also, even if the instructor provides feedback on the recorded speech of an individual at a later stage, the feedback may not be accurate, as the instructor may not be able to accurately imagine the circumstances/environment in which the speech is delivered.
In light of the above stated discussion, there is a need for a method and system that overcome the above stated disadvantages. Further, such a method and system should provide real-time feedback for improving communication skills.
In an aspect of the present disclosure, a method for providing a feedback of analysis of a plurality of pre-defined attributes of an audio conversation to a user is provided. The user wears an interactive wearable device. The method includes enabling selection of a pre-defined profile from a plurality of pre-defined profiles of the interactive wearable device, extracting values of the plurality of pre-defined attributes of the audio conversation corresponding to the selected pre-defined profile, transmitting the values of the plurality of pre-defined attributes of the audio conversation and receiving the feedback corresponding to the audio conversation. The received feedback is based on processing of the values of the plurality of pre-defined attributes of the audio conversation with respect to a pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile of the interactive wearable device.
In an embodiment of the present disclosure, the method includes activating the interactive wearable device worn by the user. In an embodiment of the present disclosure, the processing is based on matching of the values of the plurality of pre-defined attributes of the audio conversation with the pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile. The received feedback includes at least one of alerting vibrations and a pre-determined set of reports.
In an embodiment of the present disclosure, the alerting vibrations are produced when the corresponding value for each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile exceeds a corresponding threshold mark. In another embodiment of the present disclosure, the pre-determined set of reports is generated by utilizing a pre-defined set of scoring steps. The pre-defined set of scoring steps is measured against a baseline using one or more pre-configured profiles.
In an embodiment of the present disclosure, the plurality of pre-defined attributes includes a first set of pre-defined attributes based on technical attributes associated with the user, a second set of pre-defined attributes based on a plurality of bio-markers associated with the user, a third set of pre-defined attributes based on responses to interaction of the audio conversation with other one or more users and a fourth set of pre-defined attributes based on physical attributes associated with the user. The first set of pre-defined attributes includes at least one of tone, accent, pitch level, language, vocal energy and grammar. The second set of pre-defined attributes includes at least one of stress, body temperature, deep breaths, and heart beat rate. The fourth set of pre-defined attributes includes at least one of hand gestures and facial expressions.
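The grouping of attributes into four pre-defined sets described above can be sketched as a simple lookup table. The following illustrative sketch is not part of the disclosure; the set names and attribute keys are hypothetical identifiers chosen to mirror the examples given in the text:

```python
# Illustrative mapping of extracted attributes to the four pre-defined sets.
# All names are hypothetical labels, not terms fixed by the disclosure.
ATTRIBUTE_SETS = {
    "technical": ["tone", "accent", "pitch_level", "language",
                  "vocal_energy", "grammar"],
    "bio_markers": ["stress", "body_temperature", "deep_breaths",
                    "heart_beat_rate"],
    "interaction": ["listener_responses"],
    "physical": ["hand_gestures", "facial_expressions"],
}

def classify_attribute(name):
    """Return the pre-defined set an extracted attribute belongs to."""
    for set_name, members in ATTRIBUTE_SETS.items():
        if name in members:
            return set_name
    raise KeyError(f"unknown attribute: {name}")
```

A classifier of this shape would let a server route, say, `stress` readings to bio-marker processing and `tone` readings to technical-attribute processing.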
In another aspect of the present disclosure, a method for providing a feedback of analysis of a plurality of pre-defined attributes of an activity performed by a user is provided. The user wears an interactive wearable device. The method includes receiving a selected pre-defined profile from a plurality of pre-defined profiles of the interactive wearable device, collecting the corresponding values for the plurality of pre-defined attributes of the activity corresponding to the selected pre-defined profile, processing the corresponding values for the plurality of pre-defined attributes of the activity with respect to a corresponding pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile and transmitting the feedback corresponding to the activity based on the processing.
In an embodiment of the present disclosure, the method includes storing the corresponding values for the plurality of pre-defined attributes of the activity of the user, the pre-defined profile corresponding to the user and the plurality of pre-defined profiles.
In an embodiment of the present disclosure, the collected plurality of pre-defined attributes includes a first set of pre-defined attributes based on technical attributes associated with the user, a second set of pre-defined attributes based on a plurality of bio-markers associated with the user, a third set of pre-defined attributes based on responses to interaction of the activity with the other one or more users and a fourth set of pre-defined attributes based on physical attributes associated with the user. The first set of pre-defined attributes includes at least one of tone, accent, pitch level, language, vocal energy, and grammar. The second set of pre-defined attributes includes at least one of stress, body temperature, deep breaths, and heart beat rate. The fourth set of pre-defined attributes includes at least one of hand gestures and facial expressions.
In an embodiment of the present disclosure, the processing is based on matching of the corresponding values for the plurality of pre-defined attributes of the activity with the pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile.
In an embodiment of the present disclosure, the feedback includes at least one of alerting vibrations and a pre-determined set of reports. The alerting vibrations are produced when the corresponding value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile exceeds a corresponding threshold mark.
In an embodiment of the present disclosure, the pre-determined set of reports is generated utilizing a pre-defined set of scoring steps. The pre-defined set of scoring steps is measured against a baseline using one or more pre-configured profiles. In an embodiment of the present disclosure, the activity includes at least one of an audio conversation, hand movements of the user and facial gestures of the user.
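The disclosure does not specify the scoring steps used to build the report, so the sketch below assumes one plausible rule: each measured attribute is scored by its percentage deviation from a baseline profile value. Both the function name and the scoring rule are hypothetical:

```python
def score_against_baseline(values, baseline):
    """Score measured attribute values against a baseline profile.

    `values` and `baseline` map attribute names to numbers. The score is
    100 minus the percentage deviation from the baseline, floored at 0.
    This is an illustrative scoring rule, not one from the disclosure.
    """
    report = {}
    for attr, measured in values.items():
        base = baseline[attr]
        deviation = abs(measured - base) / base
        report[attr] = max(0.0, 100.0 * (1.0 - deviation))
    return report
```

Under this rule, a pitch level 10% above the baseline would score 90, while a value matching the baseline exactly would score 100.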
In yet another aspect of the present disclosure, a system for providing a feedback of analysis of a plurality of pre-defined attributes of an audio conversation to a user is provided. The system includes an interactive wearable device worn by the user and an application server. The interactive wearable device includes a microphone configured to fetch corresponding values of the plurality of pre-defined attributes of the audio conversation of the user, a plurality of sensors configured to fetch a second set of pre-defined attributes and a fourth set of pre-defined attributes from the plurality of pre-defined attributes associated with the user and a data transmission chip configured to transmit the corresponding values of the plurality of pre-defined attributes of the audio conversation. The plurality of pre-defined attributes is associated with a selected profile of a plurality of pre-defined profiles. The second set of pre-defined attributes is based on a plurality of bio-markers including at least one of stress, body temperature, deep breaths, and heart beat rate. The fourth set of pre-defined attributes is based on physical attributes associated with the user including at least one of hand gestures and facial expressions. The application server includes a processing module to process the corresponding values of the plurality of pre-defined attributes of the audio conversation corresponding to the pre-defined profile of the user and a feedback module configured to transmit a real-time feedback to the user.
In an embodiment of the present disclosure, the application server includes a selection module to select the pre-defined profile from the plurality of pre-defined profiles, a receiving module configured to receive the selected pre-defined profile from the plurality of pre-defined profiles by the interactive wearable device and the corresponding values of the plurality of pre-defined attributes of the audio conversation corresponding to the selected pre-defined profile, and a database configured to store the corresponding values for the plurality of pre-defined attributes of the audio conversation of the user, the pre-defined profile corresponding to the user and the plurality of pre-defined profiles.
In an embodiment of the present disclosure, the application server includes an activation module to activate the interactive wearable device worn by the user. In an embodiment of the present disclosure, the processing is based on matching of the corresponding values of the plurality of pre-defined attributes of the audio conversation with the pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile.
In an embodiment of the present disclosure, the feedback includes alerting vibrations and a pre-determined set of reports. The alerting vibrations are produced when the corresponding value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile exceeds a threshold mark.
In yet another embodiment of the present disclosure, the pre-determined set of reports is generated utilizing a pre-defined set of scoring steps. The pre-defined set of scoring steps is measured against a baseline using one or more pre-configured profiles.
Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
It should be noted that the terms “first”, “second”, and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Further, the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
In an embodiment of the present disclosure, the user 102 sets a schedule to activate the interactive wearable device 104. In addition, the user 102 selects a pre-defined profile from a plurality of pre-defined profiles. The plurality of pre-defined profiles includes but may not be limited to a business meeting, a hallway conversation, a public speech and a classroom presentation. For example, a user X selects the business meeting profile and activates his/her interactive wearable device Y before the meeting starts to monitor his/her communication skills.
The interactive wearable device 104 extracts values of the plurality of pre-defined attributes of the activity corresponding to the selected pre-defined profile. The activity includes at least one of an audio conversation, hand movements of the user 102 and facial gestures of the user 102. Examples of the plurality of pre-defined attributes include tone, accent, pitch level, stress, body temperature and the like. The interactive wearable device 104 transmits the values of the plurality of pre-defined attributes of the activity to an application server 108. The application server 108 processes the plurality of pre-defined attributes and provides a feedback to the user 102. In an embodiment of the present disclosure, the interactive wearable device 104 transmits the plurality of pre-defined attributes to the communication device 106.
The communication device 106 transmits the plurality of pre-defined attributes to the application server 108. Continuing with the above example, when the user X goes for the business meeting, the interactive wearable device Y extracts the attributes including his/her tone, stress and accent during the meeting and transmits these attributes to the application server 108. The application server 108 processes these attributes to provide feedback to the user X.
It may be noted that in
The flowchart 200 initiates at step 202. At step 204, the interactive wearable device 104 selects the pre-defined profile from the plurality of pre-defined profiles of the interactive wearable device 104. The plurality of pre-defined profiles includes the business meeting, the hallway conversation, the public speech, the classroom presentation and the like. For example, the user X selects the business meeting profile and activates his/her interactive wearable device Y before the meeting starts to monitor his/her communication skills. In an embodiment of the present disclosure, the user 102 may activate the interactive wearable device 104 and configure the pre-defined profile in the early morning.
Following step 204, at step 206, the interactive wearable device 104 extracts the values of the plurality of pre-defined attributes of the activity corresponding to the selected pre-defined profile. The activity includes at least one of the audio conversation, hand movements of the user 102 and facial gestures of the user 102. The plurality of pre-defined attributes includes a first set of pre-defined attributes based on technical attributes associated with the user 102, a second set of pre-defined attributes based on a plurality of bio-markers associated with the user 102, a third set of pre-defined attributes based on responses to interaction of the activity with other one or more users and a fourth set of pre-defined attributes based on physical attributes associated with the user 102. The first set of pre-defined attributes includes tone, accent, pitch level, language, vocal energy, grammar and the like. The second set of pre-defined attributes includes stress, body temperature, deep breaths, heart beat rate, breath rate and the like. The fourth set of pre-defined attributes includes hand gestures, facial expressions and the like. For example, when the user X goes for the business meeting, the interactive wearable device Y extracts the attributes including his/her tone, stress, facial expressions and accent during the meeting. In addition, the interactive wearable device Y extracts the tone and accent of the one or more other users present in the meeting.
At step 208, the interactive wearable device 104 transmits the plurality of pre-defined attributes of the activity to the application server 108. The application server 108 processes the plurality of pre-defined attributes. In an embodiment of the present disclosure, the interactive wearable device 104 transmits the plurality of pre-defined attributes of the activity to the communication device 106. The communication device 106 transmits the plurality of pre-defined attributes of the activity to the application server 108. The processing is based on matching of the values of the plurality of pre-defined attributes of the activity with a pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile. In an embodiment, an administrator decides the pre-determined values for each of the attributes. In an embodiment of the present disclosure, the value of a corresponding attribute can be a range of values set by the administrator or collected by the application server 108.
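Since the disclosure allows the pre-determined value of an attribute to be a range set by an administrator, the matching step can be sketched as a range check. The function below is an illustrative sketch under that assumption; the names are hypothetical:

```python
def within_profile_range(values, profile_ranges):
    """Match measured attribute values against the (low, high) ranges
    configured for the selected profile.

    `values` maps attribute names to measured numbers; `profile_ranges`
    maps the same names to (low, high) tuples. Returns the list of
    attributes whose measured value falls outside its configured range.
    """
    out_of_range = []
    for attr, measured in values.items():
        low, high = profile_ranges[attr]
        if not (low <= measured <= high):
            out_of_range.append(attr)
    return out_of_range
```

The returned list of out-of-range attributes is exactly what the feedback step would act on, e.g. by triggering an alerting vibration.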
At step 210, the interactive wearable device 104 receives the feedback corresponding to the activity performed by the user 102. The received feedback is based on the processing of the values of the plurality of pre-defined attributes of the activity with respect to the pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile. The received feedback includes at least one of alerting vibrations and a pre-determined set of reports. The alerting vibrations are produced when the corresponding value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile exceeds a corresponding threshold mark. The pre-determined set of reports is generated by utilizing a pre-defined set of scoring steps. The pre-defined set of scoring steps is measured against a baseline using one or more pre-configured profiles. In an embodiment of the present disclosure, the feedback can be provided in real time, by an online expert/agent, or through artificial intelligence to the user 102 by utilizing the recorded values of the pre-defined attributes corresponding to pre-defined profiles, past records, consolidated records, his/her profile information including age, gender, jurisdiction, and the like, and techniques for improvement of the communication skills. The online expert/agent has marketplace ratings, pricing and schedules available for review.
Extending the above example, the interactive wearable device Y transmits the attributes including the tone, stress, facial expressions and accent of the user X during the meeting, and the tone and accent of the one or more other users present in the meeting, to the application server 108. The application server 108 matches the tone, stress, facial expressions and accent of the user X during the meeting with the stored values of the tone, stress, facial expressions and accent for the business meeting profile and generates the feedback. If the attributes (say tone and stress) of the user X in the meeting do not lie in the appropriate range, then the user X receives the feedback (vibrations) along with a set of reports showing the inappropriate results.
In an embodiment of the present disclosure, the user 102 configures the alerting messages. For example, the user X can set the vibration to trigger whenever his/her words per minute exceed normal perception levels, or when his/her pitch does not carry over the intended radius, thus providing intelligible cues to the user X for better communication in real time.
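The user-configurable alerts described above amount to a table of per-attribute thresholds checked against each new measurement. The sketch below is illustrative only; the class name, attribute keys and the 160 words-per-minute default are assumptions, not values from the disclosure:

```python
class AlertConfig:
    """User-configurable alert thresholds (hypothetical schema)."""

    def __init__(self):
        self.thresholds = {}

    def set_threshold(self, attribute, limit):
        """Register an upper limit for one attribute, e.g. words per minute."""
        self.thresholds[attribute] = limit

    def alerts_for(self, measurements):
        """Return the attributes whose measured value exceeds its configured
        threshold, i.e. the ones that should trigger an alerting vibration."""
        return [attr for attr, value in measurements.items()
                if attr in self.thresholds and value > self.thresholds[attr]]
```

In use, the wearable would evaluate `alerts_for` on each batch of readings and vibrate once per returned attribute, giving the real-time cue the embodiment describes.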
In an embodiment of the present disclosure, the system 100 includes activating the interactive wearable device 104 worn by the user 102. The user 102 activates the interactive wearable device 104 just before the meeting to monitor the communication skills. In another embodiment of the present disclosure, the user 102 may activate the interactive wearable device 104 in the early morning to monitor the communication skills for the entire day. The flowchart 200 terminates at step 210.
In an embodiment of the present disclosure, the user 102 may check his/her historical performance of communication with pre-defined metrics in his/her personalized dashboard.
It may be noted that the flowchart 200 is explained to have the above stated process steps; however, those skilled in the art would appreciate that the flowchart 200 may have more or fewer process steps which may enable all the above stated embodiments of the present disclosure.
As mentioned above, the plurality of pre-defined attributes is associated with the selected profile of the plurality of profiles. The plurality of pre-defined attributes includes the first set of pre-defined attributes based on the technical attributes associated with the user 102, the second set of pre-defined attributes based on the plurality of bio-markers associated with the user 102, the third set of pre-defined attributes based on the responses to the interaction of the activity with the other one or more users and the fourth set of pre-defined attributes based on the physical attributes associated with the user 102. The first set of pre-defined attributes includes tone, accent, pitch level, language, vocal energy, grammar and the like. The second set of pre-defined attributes includes stress, body temperature, deep breaths, heart beat rate, breath rate and the like. The fourth set of pre-defined attributes includes hand gestures, facial expressions and the like. The plurality of sensors 304 fetches the second set of pre-defined attributes and the fourth set of pre-defined attributes from the plurality of pre-defined attributes associated with the user 102. The data transmission chip 306 may include but may not be limited to a Bluetooth chip or any other chip capable of transmitting data. The data transmission chip 306 transmits the corresponding values of the plurality of pre-defined attributes of the activity. Continuing with the above example, the microphone 302 fetches the values of attributes like the tone, accent and pitch level of the activity of the user X during his/her business meeting, the sensors fetch attributes like the stress, the body temperature, the deep breaths, the heart beat rate and the breath rate, and the Bluetooth chip of the interactive wearable device Y transmits these attributes to the application server 108.
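The transmission step above implies serializing the extracted readings together with the selected profile before sending them (e.g. over Bluetooth) to the application server. The sketch below assumes a simple JSON payload; the field names are hypothetical, as the disclosure does not define a wire format:

```python
import json

def build_payload(profile, readings):
    """Serialize extracted attribute values for transmission to the
    application server. `profile` is the selected pre-defined profile;
    `readings` maps attribute names to measured values. The JSON field
    names here are illustrative assumptions, not a disclosed format."""
    return json.dumps({"profile": profile, "readings": readings},
                      sort_keys=True)
```

A communication device relaying the data (as in one embodiment) could forward this payload unchanged, keeping the device-to-server contract in one place.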
The application server 108 processes the corresponding values of the plurality of pre-defined attributes of the activity corresponding to the pre-defined profile of the user 102.
The application server 108 includes an activation module 308, a selection module 310, a receiving module 312, a processing module 314, a feedback module 316 and a database 318. The activation module 308 activates the interactive wearable device 104 worn by the user 102. The selection module 310 selects the pre-defined profile from the plurality of pre-defined profiles. The plurality of pre-defined attributes includes the first set of pre-defined attributes based on the technical attributes associated with the user 102, the second set of pre-defined attributes based on the plurality of bio-markers associated with the user 102, the third set of pre-defined attributes based on the responses to the interaction of the activity with the other one or more users and the fourth set of pre-defined attributes based on the physical attributes associated with the user 102. The first set of pre-defined attributes includes tone, accent, pitch level, language, vocal energy, grammar and the like. The second set of pre-defined attributes includes stress, body temperature, deep breaths, heart beat rate, breath rate and the like. The fourth set of pre-defined attributes includes hand gestures, facial expressions and the like.
The receiving module 312 receives the selected pre-defined profile from the plurality of pre-defined profiles by the interactive wearable device 104 and the corresponding values of the plurality of pre-defined attributes of the activity corresponding to the selected pre-defined profile. The processing module 314 processes the corresponding values of the plurality of pre-defined attributes of the activity corresponding to the pre-defined profile of the user 102. The processing is based on matching the corresponding values of the plurality of pre-defined attributes of the activity with the pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile.
Going further, the feedback module 316 transmits a feedback to the user 102. The feedback includes at least one of the alerting vibrations and the pre-determined set of reports. The alerting vibrations are produced when the corresponding value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile exceeds the corresponding threshold mark. The pre-determined set of reports is generated utilizing the pre-defined set of scoring steps. The pre-defined set of scoring steps is measured against a baseline using the one or more pre-configured profiles. The feedback can be provided in real time, by the online expert/agent, or through artificial intelligence to the user 102 by utilizing the recorded values of the pre-defined attributes corresponding to pre-defined profiles, past records, consolidated records, his/her profile information including age, gender, jurisdiction, and the like, and techniques for improvement of the communication skills (as illustrated in detailed description of
In an embodiment of the present disclosure, the feedback module 316 generates the feedback corresponding to the measured heart rate and the breath rate of the user 102. The pre-determined set of reports includes level of anxiousness of the user 102 and the techniques for controlling emotions and the anxiousness of the user 102.
In another embodiment of the present disclosure, the interactive wearable device 104 may include a wrist band, a smart watch or any other wearable device worn by the user 102 on the wrist. The plurality of sensors 304 of the interactive wearable device 104 (the wrist band or the smart watch) captures the hand gestures of the user 102. The pre-determined set of reports provides the feedback for the body language of the user 102 corresponding to the selected profile and associated techniques for delivery of non-verbal representation of speech.
In yet another embodiment of the present disclosure, the interactive wearable device 104 may include a contact lens. The plurality of sensors 304 of the interactive wearable device 104 (the contact lens) tracks eye contact of the user 102. The pre-determined set of reports provides the associated techniques for effective face to face conversations.
In yet another embodiment of the present disclosure, an in-built camera of the one or more other interactive wearable devices may detect the facial expressions of the user 102. For example, if the user X wearing a digital glass selects the public speaking profile, then the in-built camera of the digital glass detects the facial expressions of the speaker (the user X) to correlate them with his/her voice modulation and provides the necessary feedback.
In yet another embodiment of the present disclosure, the interactive wearable device 104 integrates with the one or more other interactive wearable devices measuring the plurality of pre-defined attributes associated with the user 102.
The database 318 stores the corresponding values of the plurality of pre-defined attributes of the activity of the user 102, the pre-defined profile corresponding to the user 102 and the plurality of pre-defined profiles.
In an embodiment of the present disclosure, the application server 108 maintains a profile of the interactive wearable device 104 of the user 102. The profile may include the values of attributes from the plurality of pre-defined attributes corresponding to the selected profile of activity, past records, inputs of the user 102, and the like. The inputs of the user 102 can be a specific area of improvement, an area which he/she wants to ignore, and the like. This profile may be utilized for providing feedback to the user 102.
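The server-side profile just described can be sketched as a small record type holding the stored attribute values, past records, and the user's explicit inputs. The schema below is a hypothetical illustration; none of the field names are fixed by the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Illustrative server-side profile record: selected profile, a
    history of attribute readings, and user inputs such as areas to
    focus on or to ignore when generating feedback."""
    user_id: str
    selected_profile: str
    attribute_history: list = field(default_factory=list)
    focus_areas: list = field(default_factory=list)
    ignored_areas: list = field(default_factory=list)

    def record(self, readings):
        """Append one batch of attribute readings to the history."""
        self.attribute_history.append(readings)
```

Keeping the history on the profile is what would let the feedback module draw on past and consolidated records, as the embodiment describes.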
It may be noted that in
At step 408, the application server 108 processes the corresponding values of the plurality of pre-defined attributes of the activity with respect to the corresponding pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile. The processing is based on matching of the corresponding values of the plurality of pre-defined attributes of the activity with the pre-determined value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile.
At step 410, the application server 108 transmits the feedback corresponding to the activity based on the processing. The feedback includes at least one of alerting vibrations and the pre-determined set of reports. The alerting vibrations are produced when the corresponding value of each of the corresponding plurality of pre-defined attributes for the selected pre-defined profile exceeds the corresponding threshold mark. The pre-determined set of reports is generated utilizing the pre-defined set of scoring steps. The pre-defined set of scoring steps is measured against a baseline using the one or more pre-configured profiles.
In an embodiment of the present disclosure, the application server 108 stores the corresponding values of the plurality of pre-defined attributes of the activity of the user 102, the pre-defined profile corresponding to the user 102 and the plurality of pre-defined profiles.
In another embodiment of the present disclosure, the activity includes at least one of the audio conversation, hand movements of the user 102 and facial gestures of the user 102. The flowchart 400 terminates at step 412. It may be noted that the flowchart 400 is explained to have the above stated process steps; however, those skilled in the art would appreciate that the flowchart 400 may have more or fewer process steps which may enable all the above stated embodiments of the present disclosure.
The above stated methods and system have many advantages. The above stated methods and system provide real-time feedback for improving the communication skills of one or more users. In addition, the above stated methods and system provide continuous monitoring with real-time feedback, along with creating a new line of employment opportunities for communication experts to help users using technology as a medium.
While the disclosure has been presented with respect to certain specific embodiments, it will be appreciated that many modifications and changes may be made by those skilled in the art without departing from the spirit and scope of the disclosure. It is intended, therefore, by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the disclosure.